1. Yang S, Xiao D, Geng H, Ai D, Fan J, Fu T, Song H, Duan F, Yang J. Real-Time 3D Instrument Tip Tracking Using 2D X-Ray Fluoroscopy With Vessel Deformation Correction Under Free Breathing. IEEE Trans Biomed Eng 2025; 72:1422-1436. PMID: 40117137. DOI: 10.1109/tbme.2024.3508840.
Abstract
OBJECTIVE: Accurate localization of the instrument tip within the hepatic vein is crucial for the success of transjugular intrahepatic portosystemic shunt (TIPS) procedures. Real-time tracking of the instrument tip in X-ray images is strongly affected by vessel deformation due to the patient's pose variation, respiratory motion, and puncture manipulation, frequently resulting in failed punctures. METHOD: We propose a novel framework, deformable instrument tip tracking (DITT), to obtain real-time tip positioning within the 3D deformable vasculature. First, we introduce a pose alignment module to improve the rigid matching between the preoperative vessel centerline and the intraoperative instrument centerline, in which accurate matching of 3D/2D centerline features is achieved with an adaptive point sampling strategy. Second, a respiration compensation module using monoplane X-ray image sequences provides a motion prior to predict intraoperative liver movement. Third, a deformation correction module rectifies vessel deformation during the procedure, in which a manifold regularization and a maximum likelihood-based acceleration are introduced to obtain accurate and fast deformation learning. RESULTS: Experimental results on simulated and clinical datasets show average tracking errors of 1.59 ± 0.57 mm and 1.67 ± 0.54 mm, respectively. CONCLUSION: Our framework can track the tip within the 3D vasculature and dynamically overlay the branch roadmap onto X-ray images to provide real-time guidance. SIGNIFICANCE: Accurate and fast (43 ms per frame) tip tracking with the proposed framework shows good potential for improving the outcomes of TIPS treatment and minimizing the use of contrast agent.
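To make the respiration-compensation idea above concrete, the sketch below fits a simple periodic motion model to diaphragm displacements tracked in a monoplane X-ray sequence and uses it to predict the liver's cranio-caudal offset for the next frame. The sinusoidal model, the function names, and the synthetic data are illustrative assumptions, not the DITT implementation.

```python
# Hedged sketch of a respiration-compensation prior, loosely inspired by the
# monoplane motion module described above. The sinusoidal model is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def respiratory_model(t, amplitude, period, phase, baseline):
    """1D cranio-caudal displacement modelled as a sinusoid (assumption)."""
    return amplitude * np.sin(2.0 * np.pi * t / period + phase) + baseline

def fit_motion_prior(timestamps, displacements_mm):
    """Fit the model to diaphragm displacements tracked in a monoplane X-ray sequence."""
    p0 = [np.ptp(displacements_mm) / 2.0, 4.0, 0.0, np.mean(displacements_mm)]
    params, _ = curve_fit(respiratory_model, timestamps, displacements_mm, p0=p0, maxfev=10000)
    return params

def predict_offset(params, t_next):
    """Predict the liver's cranio-caudal offset for the next fluoroscopy frame."""
    return respiratory_model(t_next, *params)

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 120)                                   # 12 fps over 10 s
    observed = 8.0 * np.sin(2.0 * np.pi * t / 4.0) + np.random.normal(0, 0.3, t.size)
    params = fit_motion_prior(t, observed)
    print("predicted offset at t = 10.5 s:", predict_offset(params, 10.5), "mm")
```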
2. Tatum M, Kern A, Goetz JE, Thomas G, Anderson DD. A Novel System for Markerless Intra-Operative Bone and Bone Fragment Tracking. Comput Methods Biomech Biomed Eng Imaging Vis 2025; 13:2463327. PMID: 39991594. PMCID: PMC11845215. DOI: 10.1080/21681163.2025.2463327.
Abstract
Fluoroscopic guidance is an integral tool in modern orthopedic surgery, often used to track bones and/or bone fragments during a surgical procedure. However, relying upon this intra-operative 2D projective imaging modality for this purpose can challenge a surgeon's ability to interpret the 3D position and orientation of any but the simplest bony anatomy. A number of object-tracking technologies have been developed to aid surgeons, but they have not generalized to a wider array of procedures, have required an unrealistic amount of time and effort to implement, or have unacceptably disrupted the flow of the surgery. This work describes a novel, general-purpose system for markerless, intra-operative bone tracking that integrates seamlessly into the surgical setting. The system uses a unique calibration object placed next to the patient, which provides a common reference for aligning multiple fluoroscopic images. This approach enables robust and expedient 3D object registration from only two semi-orthogonal 2D fluoroscopic images.
Affiliation(s)
- Marcus Tatum
- Department of Orthopedics and Rehabilitation, The University of Iowa
- Department of Industrial and Systems Engineering, The University of Iowa
- Andrew Kern
- Department of Orthopedics and Rehabilitation, The University of Iowa
- Department of Biomedical Engineering, The University of Iowa
- Jessica E. Goetz
- Department of Orthopedics and Rehabilitation, The University of Iowa
- Department of Biomedical Engineering, The University of Iowa
- Geb Thomas
- Department of Orthopedics and Rehabilitation, The University of Iowa
- Department of Industrial and Systems Engineering, The University of Iowa
- Donald D. Anderson
- Department of Orthopedics and Rehabilitation, The University of Iowa
- Department of Industrial and Systems Engineering, The University of Iowa
- Department of Biomedical Engineering, The University of Iowa
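As a concrete illustration of how two calibrated, semi-orthogonal fluoroscopic views constrain a 3D position in the system described above, the sketch below triangulates a single bone landmark from two views using linear (DLT) triangulation. The intrinsics, camera poses, and landmark are synthetic assumptions; in the described system the projection geometry would come from the calibration object placed next to the patient.

```python
# Hedged sketch: triangulating one bone landmark from two calibrated, roughly
# orthogonal fluoroscopic views. All numbers below are illustrative assumptions.
import numpy as np

def projection_matrix(K, R, t):
    """Assemble a 3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two 2D observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                                  # dehomogenize

if __name__ == "__main__":
    K = np.array([[1000., 0., 256.], [0., 1000., 256.], [0., 0., 1.]])
    R1, t1 = np.eye(3), np.array([0., 0., 800.])
    theta = np.deg2rad(90.0)                             # second view ~orthogonal to the first
    R2 = np.array([[np.cos(theta), 0., np.sin(theta)],
                   [0., 1., 0.],
                   [-np.sin(theta), 0., np.cos(theta)]])
    t2 = np.array([0., 0., 800.])
    P1, P2 = projection_matrix(K, R1, t1), projection_matrix(K, R2, t2)

    X_true = np.array([12.0, -5.0, 30.0, 1.0])           # landmark in patient coordinates (mm)
    x1 = P1 @ X_true; x1 = x1[:2] / x1[2]                # its pixel position in each view
    x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))                   # ~ [12, -5, 30]
```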
3. Oya T, Kadomatsu Y, Chen-Yoshikawa TF, Nakao M. 2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation. Comput Med Imaging Graph 2024; 116:102418. PMID: 39079410. DOI: 10.1016/j.compmedimag.2024.102418.
Abstract
Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be a key to realizing image-guided surgery, and a variety of machine learning methods have been considered for this task. Because the amount of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, estimation errors caused by the difference between synthetic images and real scenes remain a problem. In this study, we propose a self-supervised offline learning framework for model-based registration using image features that can be obtained from both synthetic images and real camera images. Because of the limited number of endoscopic images available for training, we use synthetic images generated from a nonlinear deformation model that represents possible intraoperative pneumothorax deformations. To address the difficulty of estimating deformed shapes and viewpoints from the image features shared by synthetic and real images, we reduce the registration error by adding shading and distance information that is available as prior knowledge in the synthetic images. Shape registration with real camera images is performed by learning the task of predicting the differential model parameters between two synthetic images. The developed framework achieved registration accuracy with a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy compared with conventional methods.
Affiliation(s)
- Tomoki Oya
- Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo, Kyoto, 606-8501, Japan
- Yuka Kadomatsu
- Nagoya University Hospital, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Megumi Nakao
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-cho, Sakyo, Kyoto, 606-8507, Japan
4. Geng H, Xiao D, Yang S, Fan J, Fu T, Lin Y, Bai Y, Ai D, Song H, Wang Y, Duan F, Yang J. CT2X-IRA: CT to x-ray image registration agent using domain-cross multi-scale-stride deep reinforcement learning. Phys Med Biol 2023; 68:175024. PMID: 37549676. DOI: 10.1088/1361-6560/acede5.
Abstract
Objective. In computer-assisted minimally invasive surgery, the intraoperative x-ray image is enhanced by overlaying it with a preoperative CT volume to improve visualization of vital anatomical structures. Accurate and robust 3D/2D registration of the CT volume and the x-ray image is therefore highly desired in clinical practice. However, previous registration methods were sensitive to initial misalignments and prone to local minima, leading to low accuracy and poor robustness. Approach. To improve registration performance, we propose a novel CT/x-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework, which contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and a flexible action step size, establishing fast and globally optimal convergence of the registration task; (2) a domain adaptation module reduces the domain gap between the x-ray image and the digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement; (3) a weighted reward function facilitates CT2X-IRA in searching for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignments. Main results. We evaluate the proposed CT2X-IRA on both public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with computation times of 1.5 s and 1.1 s, respectively, demonstrating an accurate and fast workflow for CT/x-ray rigid registration. Significance. The proposed CT2X-IRA achieves accurate and robust 3D/2D registration of CT and x-ray images, suggesting its potential significance in clinical applications.
Affiliation(s)
- Haixiao Geng
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Deqiang Xiao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Shuo Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Tianyu Fu
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yucong Lin
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yanhua Bai
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song
- School of Computer Science, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yongtian Wang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Feng Duan
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
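A minimal sketch of the multi-scale-stride idea from the CT2X-IRA abstract above, assuming a greedy agent that repeatedly applies ± parameter-step "actions" with a coarse-to-fine step size. The cost function is a toy stand-in for the DRR/X-ray similarity, and nothing here reproduces the paper's learned policy, domain adaptation module, or reward design.

```python
# Hedged sketch of a multi-scale-stride, greedy registration "agent".
# TARGET, the cost function, and the stride schedule are assumptions.
import numpy as np

TARGET = np.array([5.0, -3.0, 10.0, 2.0, -1.5, 0.5])    # unknown rigid pose (tx, ty, tz, rx, ry, rz)

def cost(params):
    """Stand-in for an image-similarity cost between DRR and X-ray; lower is better."""
    return np.linalg.norm(params - TARGET)

def register(initial, strides=(4.0, 1.0, 0.25), steps_per_scale=200):
    params = np.asarray(initial, dtype=float)
    for stride in strides:                                # coarse-to-fine action step size
        for _ in range(steps_per_scale):
            best_params, best_cost = params, cost(params)
            for i in range(params.size):                  # candidate actions: +/- stride on one parameter
                for sign in (+1.0, -1.0):
                    candidate = params.copy()
                    candidate[i] += sign * stride
                    c = cost(candidate)
                    if c < best_cost:
                        best_params, best_cost = candidate, c
            if np.array_equal(best_params, params):
                break                                     # no improving action at this scale
            params = best_params
    return params

if __name__ == "__main__":
    print(register(np.zeros(6)))                          # converges close to TARGET
```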
5. Nakao M, Nakamura M, Matsuda T. Image-to-Graph Convolutional Network for 2D/3D Deformable Model Registration of Low-Contrast Organs. IEEE Trans Med Imaging 2022; 41:3747-3761. PMID: 35901001. DOI: 10.1109/tmi.2022.3194517.
Abstract
Organ shape reconstruction based on a single-projection image during treatment has wide clinical scope, e.g., in image-guided radiotherapy and surgical guidance. We propose an image-to-graph convolutional network that achieves deformable registration of a three-dimensional (3D) organ mesh to a low-contrast two-dimensional (2D) projection image. The framework enables simultaneous training of two types of transformation: from the 2D projection image to a displacement map, and from the sampled per-vertex feature to a 3D displacement that satisfies the geometric constraints of the mesh structure. Assuming application to radiation therapy, the 2D/3D deformable registration performance is verified for multiple abdominal organs that have not been targeted to date, i.e., the liver, stomach, duodenum, and kidney, and for pancreatic cancer. The experimental results show that shape prediction considering the relationships among multiple organs can be used to predict respiratory motion and deformation from digitally reconstructed radiographs with clinically acceptable accuracy.
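The sketch below illustrates the two coupled mappings described above: a small CNN turns the 2D projection into a feature map, per-vertex features are sampled at the projected vertex positions, and an adjacency-normalised graph convolution regresses per-vertex 3D displacements. Layer sizes, the toy mesh, and the single graph-convolution step are assumptions for illustration, not the published architecture.

```python
# Hedged sketch of an image-to-graph pipeline; all layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageToGraphNet(nn.Module):
    def __init__(self, feat_dim=16):
        super().__init__()
        self.cnn = nn.Sequential(                          # 2D projection image -> feature map
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, padding=1),
        )
        self.gcn_weight = nn.Linear(feat_dim, 3)           # per-vertex feature -> 3D displacement

    def forward(self, image, vertex_uv, adjacency):
        """image: (1,1,H,W); vertex_uv: (V,2) projected positions in [-1,1]; adjacency: (V,V)."""
        fmap = self.cnn(image)                                          # (1,C,H,W)
        grid = vertex_uv.view(1, -1, 1, 2)                              # (1,V,1,2)
        feats = F.grid_sample(fmap, grid, align_corners=True)           # (1,C,V,1)
        feats = feats.squeeze(-1).squeeze(0).transpose(0, 1)            # (V,C)
        norm_adj = adjacency / adjacency.sum(dim=1, keepdim=True)       # neighbourhood averaging
        smoothed = norm_adj @ feats                                     # simple graph-convolution step
        return self.gcn_weight(smoothed)                                # (V,3) displacements

if __name__ == "__main__":
    V = 5
    image = torch.randn(1, 1, 64, 64)
    vertex_uv = torch.rand(V, 2) * 2 - 1
    adjacency = torch.eye(V) + torch.diag(torch.ones(V - 1), 1) + torch.diag(torch.ones(V - 1), -1)
    net = ImageToGraphNet()
    print(net(image, vertex_uv, adjacency).shape)          # torch.Size([5, 3])
```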
6. Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery. Micromachines 2021; 12:mi12070844. PMID: 34357254. PMCID: PMC8303962. DOI: 10.3390/mi12070844.
Abstract
Intra-operative target pose estimation is fundamental to guiding surgical robots in minimally invasive surgery (MIS). This task can be fulfilled by 2-D/3-D rigid registration, which aligns the anatomical structures between intra-operative 2-D fluoroscopy and pre-operative 3-D computed tomography (CT) with annotated target information. Although this technique has been researched for decades, it remains challenging to achieve accuracy, robustness, and efficiency simultaneously. In this paper, a novel orthogonal-view 2-D/3-D rigid registration framework is proposed which combines deep-learning-based dense reconstruction with GPU-accelerated 3-D/3-D rigid registration. First, we employ X2CT-GAN to reconstruct a target CT from two orthogonal fluoroscopy images. The generated target CT and the pre-operative CT are then input to the 3-D/3-D rigid registration stage, which typically needs only a few iterations to converge toward the global optimum. For further efficiency, we parallelize the 3-D/3-D registration algorithm and accelerate it on a GPU. For evaluation, a novel tool is employed to preprocess the public head CT dataset CQ500, and a CT-DRR dataset is presented as the benchmark. The proposed method achieves 1.65 ± 1.41 mm mean target registration error (mTRE), a 20% gross failure rate (GFR), and a running time of 1.8 s, outperforming state-of-the-art methods in most test cases. The proposed method is promising for localization and nanomanipulation with micro surgical robots in highly precise MIS.
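To illustrate the second stage of the pipeline above, the sketch below performs a toy 3-D/3-D alignment between a "reconstructed" volume and a pre-operative volume, reduced to a translation-only search over normalized cross-correlation. The X2CT-GAN reconstruction and GPU acceleration are out of scope, and the Powell optimizer and synthetic volumes are assumptions.

```python
# Hedged sketch: translation-only 3-D/3-D alignment with normalized cross-correlation.
import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def register_translation(fixed, moving):
    """Find the 3D shift that best aligns `moving` to `fixed` (assumed translation-only)."""
    def loss(shift):
        moved = ndimage.shift(moving, shift, order=1, mode="nearest")
        return -ncc(fixed, moved)
    res = optimize.minimize(loss, x0=np.zeros(3), method="Powell")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fixed = ndimage.gaussian_filter(rng.random((32, 32, 32)), 2.0)    # stand-in for pre-operative CT
    moving = ndimage.shift(fixed, (2.0, -3.0, 1.0), order=1)          # stand-in for reconstructed CT
    print(register_translation(fixed, moving))                        # approx. (-2, 3, -1)
```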
7. Yang K, Luo Y, Zhao Y, Su S, Qu D, Zhao X, Song G. A novel 2D/3D hierarchical registration framework via principal-directional Fourier transform operator. Phys Med Biol 2021; 66:065030. PMID: 33631735. DOI: 10.1088/1361-6560/abe9f5.
Abstract
An effective registration framework between preoperative 3D computed tomography and intraoperative 2D x-ray images is crucial in image-guided therapy. In this paper, a novel 2D/3D hierarchical registration framework via a principal-directional Fourier transform operator (HRF-PDFTO) is proposed. First, the PDFTO is established to obtain in-plane translation and rotation invariance. Then, an initialization-free template-matching approach based on the PDFTO is used to avoid initial value assignment and expand the capture range of the registration. Finally, the hierarchical registration framework HRF-PDFTO is constructed to reduce the dimensionality of the registration search space from n^6 to n^2. The experimental results demonstrate that the proposed HRF-PDFTO performs well, with an accuracy of 0.72 mm and a single registration time of 16 s, improving registration efficiency tenfold. Consequently, HRF-PDFTO can meet the accuracy and efficiency requirements of 2D/3D registration in related clinical applications.
Affiliation(s)
- Keke Yang
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Yang Luo
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Yiwen Zhao
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Shun Su
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Danyang Qu
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Xingang Zhao
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Guoli Song
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- The Liaoning Medical Surgery and Rehabilitation Robot Engineering Research Center, Shenyang 110134, People's Republic of China
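The sketch below shows the textbook property that underlies Fourier-based template matching as described in the abstract above: the magnitude spectrum of an image is unchanged by in-plane (circular) shifts, so templates can be compared before the translation is known. It is a generic illustration of the principle, not the paper's PDFTO operator or its principal-direction handling of rotation.

```python
# Hedged sketch of Fourier-magnitude translation invariance (generic illustration).
import numpy as np

def magnitude_spectrum(image):
    return np.abs(np.fft.fft2(image))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = rng.random((64, 64))
    shifted = np.roll(image, shift=(7, -11), axis=(0, 1))    # circular in-plane translation
    m1, m2 = magnitude_spectrum(image), magnitude_spectrum(shifted)
    print(np.allclose(m1, m2))                               # True: spectra match despite the shift
```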
8. Schaffert R, Wang J, Fischer P, Borsdorf A, Maier A. Learning an Attention Model for Robust 2-D/3-D Registration Using Point-To-Plane Correspondences. IEEE Trans Med Imaging 2020; 39:3159-3174. PMID: 32305908. DOI: 10.1109/tmi.2020.2988410.
Abstract
Minimally invasive procedures rely on image guidance for navigation at the operation site to avoid large surgical incisions. X-ray images are often used for guidance, but important structures may not be well visible. These structures can be overlaid from pre-operative 3-D images, and accurate alignment can be established using 2-D/3-D registration. Registration based on the point-to-plane correspondence model was recently proposed and shown to achieve state-of-the-art performance. However, registration may still fail in challenging cases due to a large portion of outliers. In this paper, we describe a learning-based correspondence weighting scheme to improve registration performance. By learning an attention model, inlier correspondences receive higher attention in the motion estimation while outlier correspondences are suppressed. Instead of using per-correspondence labels, our objective function allows the model to be trained directly by minimizing the registration error. We demonstrate highly increased robustness, e.g., increasing the success rate from 84.9% to 97.0% for spine registration. In contrast to previously proposed learning-based methods, we also achieve a high accuracy of around 0.5 mm mean re-projection distance. In addition, our method requires a relatively small amount of training data, is able to learn from simulated data, and generalizes to images with additional structures that are not present during training. Furthermore, a single model can be trained for both different views and different anatomical structures.
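The sketch below spells out the point-to-plane motion estimation that such correspondence weights feed into: each correspondence contributes one weighted linear constraint on a small rigid motion, and the motion is recovered by weighted least squares. The uniform weights, small-angle linearization, and synthetic correspondences are assumptions; in the paper the weights would come from the learned attention model.

```python
# Hedged sketch: weighted point-to-plane motion estimation (small-angle linearization).
import numpy as np

def estimate_motion(points, plane_points, normals, weights):
    """Solve for a small rigid motion (rotation vector r, translation t) that moves
    `points` onto the planes (plane_points, normals), weighted per correspondence."""
    A = np.hstack([np.cross(points, normals), normals])           # (N,6): rows [p x n, n]
    b = np.einsum("ij,ij->i", normals, plane_points - points)     # (N,)
    w = np.sqrt(weights)[:, None]
    x, *_ = np.linalg.lstsq(A * w, b * w.ravel(), rcond=None)
    return x[:3], x[3:]                                           # rotation vector, translation

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.random((200, 3))
    t_true = np.array([0.02, -0.01, 0.03])
    moved = pts + t_true                                          # pure small translation
    normals = rng.normal(size=(200, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    weights = np.ones(200)                                        # learned attention scores would go here
    r, t = estimate_motion(pts, moved, normals, weights)
    print(np.round(r, 4), np.round(t, 4))                         # r ~ 0, t ~ t_true
```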
9. Singh SP, Wang L, Gupta S, Goli H, Padmanabhan P, Gulyás B. 3D Deep Learning on Medical Images: A Review. Sensors (Basel) 2020; 20:E5097. PMID: 32906819. PMCID: PMC7570704. DOI: 10.3390/s20185097.
Abstract
The rapid advancements in machine learning and graphics processing technologies, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by the rapid advancements in convolutional neural network (CNN)-based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN developed from its machine-learning roots, provide a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
Affiliation(s)
- Satya P. Singh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Lipo Wang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Sukrit Gupta
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Haveesh Goli
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Parasuraman Padmanabhan
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Balázs Gulyás
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Department of Clinical Neuroscience, Karolinska Institute, 17176 Stockholm, Sweden
10. Lange A, Heldmann S. Multilevel 2D-3D Intensity-Based Image Registration. Biomedical Image Registration 2020. PMCID: PMC7279926. DOI: 10.1007/978-3-030-50120-4_6.
Abstract
2D-3D image registration is an important task for computer-aided minimally invasive vascular therapies. A crucial component of practical image registration is the use of multilevel strategies to avoid local optima and to speed up runtime. However, due to the different dimensionalities of the 2D fixed and 3D moving images, the setup of multilevel strategies is not straightforward. In this work, we propose an intensity-driven 2D-3D multiresolution registration approach using the normalized gradient fields (NGF) distance measure. We discuss and empirically analyze the impact of the choice of 2D and 3D image resolutions. Furthermore, we show that our approach produces results that are comparable or superior to other state-of-the-art methods.
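A minimal sketch of the normalized gradient fields (NGF) distance mentioned above, computed here for 2D toy images: it rewards locally aligned gradient directions and is insensitive to intensity rescaling, which is one reason it suits multimodal registration. The epsilon edge parameter and the toy images are assumptions; the paper applies NGF in its 2D-3D multilevel setting.

```python
# Hedged sketch of the NGF distance on 2D toy images; epsilon is an assumed edge parameter.
import numpy as np

def ngf_distance(fixed, moving, eps=1e-2):
    """NGF distance: penalizes misaligned gradient directions, ignoring intensity scale."""
    gf = np.stack(np.gradient(fixed), axis=-1)                     # (H,W,2)
    gm = np.stack(np.gradient(moving), axis=-1)
    nf = gf / np.sqrt((gf ** 2).sum(-1, keepdims=True) + eps ** 2)
    nm = gm / np.sqrt((gm ** 2).sum(-1, keepdims=True) + eps ** 2)
    cos2 = (nf * nm).sum(-1) ** 2
    return float(np.mean(1.0 - cos2))

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    fixed = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 100.0)
    aligned = 2.0 * fixed + 5.0                                    # same structure, different intensity
    shifted = np.roll(fixed, 8, axis=1)
    print(ngf_distance(fixed, aligned) < ngf_distance(fixed, shifted))   # True
```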
11. Schaffert R, Wang J, Fischer P, Maier A, Borsdorf A. Robust Multi-View 2-D/3-D Registration Using Point-To-Plane Correspondence Model. IEEE Trans Med Imaging 2020; 39:161-174. PMID: 31199258. DOI: 10.1109/tmi.2019.2922931.
Abstract
In minimally invasive procedures, the clinician relies on image guidance to observe and navigate the operation site. In order to show structures which are not visible in the live X-ray images, such as vessels or planning annotations, X-ray images can be augmented with pre-operatively acquired images. Accurate image alignment is needed and can be provided by 2-D/3-D registration. In this paper, a multi-view registration method based on the point-to-plane correspondence model is proposed. The correspondence model is extended to be independent of the camera coordinates used, and different multi-view registration schemes are introduced and compared. Evaluation is performed for a wide range of clinically relevant registration scenarios. We show for different applications that registration using correspondences from both views simultaneously provides accurate and robust registration, while the performance of the other schemes varies considerably. Our method also outperforms the state-of-the-art method for cerebral angiography registration, achieving a capture range of 18 mm and an accuracy of 0.22 ± 0.07 mm. Furthermore, we investigate the minimum angle between the views needed to provide accurate and robust registration while minimizing disruption to the clinical workflow, and show that small angles of around 30° are sufficient to provide reliable registration results.