1. FMB: Dual-view fusion and registration of 2D DSA images and 3D MRA images for neurointerventional-based procedures. Comput Biol Med 2024; 171:107987. PMID: 38350395. DOI: 10.1016/j.compbiomed.2024.107987.
Abstract
OBJECTIVE Alignment between preoperative images (high-resolution magnetic resonance imaging, magnetic resonance angiography) and intraoperative medical images (digital subtraction angiography) is currently required in neurointerventional surgery. Treatment of a lesion is usually guided by 2D DSA silhouette images. Because DSA silhouette images lack anatomical information, they increase procedure time and radiation exposure; information from MRA images can compensate for this and improve procedural efficiency. In this paper, we abstract this task as the problem of recovering the relative pose and correspondence between a 3D point set and its 2D projection. Multimodal images contain large amounts of noise and anomalies that are difficult to handle with conventional methods, and, to our knowledge, few multimodal fusion methods cover the full procedure. APPROACH This paper therefore presents a registration pipeline for multimodal images with fused dual views. Deep learning methods are introduced to automate feature extraction from the multimodal images. In addition, the paper proposes a registration method based on the Factor of Maximum Bounds (FMB). The key insights are to relax the constraints on the lower bound, tighten the constraints on the upper bound, and mine additional local consensus information in the point set using a second view to generate accurate pose estimates. MAIN RESULTS Compared to existing 2D/3D point set registration methods, this method uses a different problem formulation, searches the rotation and translation spaces more efficiently, and improves registration speed. SIGNIFICANCE Experiments with synthetic and real data show that the proposed method achieves good accuracy, robustness, and time efficiency.
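The underlying formulation in this abstract, recovering pose from a 3D point set and its 2D projection, can be illustrated with a minimal pinhole-projection sketch in NumPy. This is a generic illustration only, not the FMB algorithm; the intrinsic matrix and point coordinates below are invented for the example.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D points (N, 3) to 2D pixels with intrinsics K and pose (R, t)."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

def reprojection_error(points_3d, points_2d, K, R, t):
    """Mean Euclidean distance between observed and projected 2D points."""
    diff = project(points_3d, K, R, t) - points_2d
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Toy example: identity pose, invented intrinsics and points.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
pts3d = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.5]])
pts2d = project(pts3d, K, np.eye(3), np.zeros(3))
err = reprojection_error(pts3d, pts2d, K, np.eye(3), np.zeros(3))
```

A registration method of the kind the abstract describes would search the rotation and translation space to minimize exactly this kind of reprojection error.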

2. A spatial registration method based on 2D-3D registration for an augmented reality spinal surgery navigation system. Int J Med Robot 2023:e2612. PMID: 38113328. DOI: 10.1002/rcs.2612.
Abstract
BACKGROUND In order to provide accurate and reliable image guidance for augmented reality (AR) spinal surgery navigation, a spatial registration method has been proposed. METHODS In the AR spinal surgery navigation system, grayscale-based 2D/3D registration technology has been used to register preoperative computed tomography images with intraoperative X-ray images to complete the spatial registration, and then the fusion of virtual image and real spine has been realised. RESULTS In the image registration experiment, the success rate of spine model registration was 90%. In the spinal model verification experiment, the surface registration error of the spinal model ranged from 0.361 to 0.612 mm, and the total average surface registration error was 0.501 mm. CONCLUSION The spatial registration method based on 2D/3D registration technology can be used in AR spinal surgery navigation systems and is highly accurate and minimally invasive.

3. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. PMID: 37349631. DOI: 10.1007/s13246-023-01290-z.
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiographic (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The synthetic FPD images were compared with the corresponding ground-truth FPD images using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD images was also compared with that of the DRR images to characterize the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD image (0.12 ± 0.02) was improved over that of the input DRR image (0.35 ± 0.08). The synthetic FPD images showed higher PSNRs (16.81 ± 1.54 dB) than the DRR images (8.74 ± 1.56 dB), while SSIMs for both (0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases were improved (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, SSIM 0.80 ± 0.04) compared with those for the DRR images (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique could increase throughput when images from two different modalities are compared by visual inspection.
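The image-quality metrics reported here (MAE, PSNR, SSIM) can be sketched with NumPy as follows. The SSIM shown is a simplified global (single-window) variant; library implementations such as scikit-image compute it over local sliding windows.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((a - b) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))

def ssim_global(a, b, data_range=1.0):
    """Global (single-window) SSIM; standard implementations use local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

For images normalized to [0, 1], a uniform error of 0.1 gives an MAE of 0.1 and a PSNR of 20 dB, which matches the intuition that PSNR falls as the mean squared error grows.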

4. Visualization in 2D/3D registration matters for assuring technology-assisted image-guided surgery. Int J Comput Assist Radiol Surg 2023; 18:1017-1024. PMID: 37079247. PMCID: PMC10986429. DOI: 10.1007/s11548-023-02888-0.
Abstract
PURPOSE Image-guided navigation and surgical robotics are the next frontiers of minimally invasive surgery. Assuring safety in high-stakes clinical environments is critical for their deployment. 2D/3D registration is an essential, enabling algorithm for most of these systems, as it provides spatial alignment of preoperative data with intraoperative images. While these algorithms have been studied widely, there is a need for verification methods that enable human stakeholders to assess and either approve or reject registration results to ensure safe operation. METHODS To address the verification problem from the perspective of human perception, we develop novel visualization paradigms and use a sampling method based on an approximate posterior distribution to simulate registration offsets. We then conduct a user study with 22 participants to investigate how different visualization paradigms (Neutral, Attention-Guiding, Correspondence-Suggesting) affect human performance in evaluating simulated 2D/3D registration results using 12 pelvic fluoroscopy images. RESULTS All three visualization paradigms allow users to perform better than random guessing at differentiating between offsets of varying magnitude. The novel paradigms show better performance than the neutral paradigm when using an absolute threshold to differentiate acceptable and unacceptable registrations (highest accuracy: Correspondence-Suggesting (65.1%); highest F1 score: Attention-Guiding (65.7%)), as well as when using a paradigm-specific threshold for the same discrimination (highest accuracy: Attention-Guiding (70.4%); highest F1 score: Correspondence-Suggesting (65.0%)). CONCLUSION This study demonstrates that visualization paradigms do affect the human-based assessment of 2D/3D registration errors. However, further exploration is needed to understand this effect better and to develop more effective methods of assuring accuracy. This research serves as a crucial step toward enhanced surgical autonomy and safety assurance in technology-assisted image-guided surgery.

5. A Surgical Robotic System for Osteoporotic Hip Augmentation: System Development and Experimental Evaluation. IEEE Trans Med Robot Bionics 2023; 5:18-29. PMID: 37213937. PMCID: PMC10195101. DOI: 10.1109/tmrb.2023.3241589.
Abstract
Minimally invasive Osteoporotic Hip Augmentation (OHA) by injecting bone cement is a potential treatment option to reduce the risk of hip fracture. This treatment can benefit significantly from a computer-assisted planning and execution system to optimize the pattern of cement injection. We present a novel robotic system for the execution of OHA that consists of a 6-DOF robotic arm and an integrated drilling and injection component. The minimally invasive procedure is performed by registering the robot and preoperative images to the surgical scene using multiview image-based 2D/3D registration with no external fiducial attached to the body. The performance of the system is evaluated through experimental sawbone studies as well as cadaveric experiments with intact soft tissues. In the cadaver experiments, distance errors of 3.28 mm and 2.64 mm for entry and target points and an orientation error of 2.30° were measured. Moreover, a mean surface distance error of 2.13 mm with a translational error of 4.47 mm was found between injected and planned cement profiles. The experimental results demonstrate the first application of the proposed Robot-Assisted combined Drilling and Injection System (RADIS), incorporating biomechanical planning and intraoperative fiducial-less 2D/3D registration, on human cadavers with intact soft tissues.

6. 2D/3D Non-Rigid Image Registration via Two Orthogonal X-ray Projection Images for Lung Tumor Tracking. Bioengineering (Basel) 2023; 10:144. PMID: 36829638. PMCID: PMC9951849. DOI: 10.3390/bioengineering10020144.
Abstract
Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal angle projections is proposed. The application can quickly achieve alignment using only two orthogonal angle projections. We tested the method with lungs (with and without tumors) and phantom data. The results show that the Dice and normalized cross-correlations are greater than 0.97 and 0.92, respectively, and the registration time is less than 1.2 seconds. In addition, the proposed model showed the ability to track lung tumors, highlighting the clinical potential of the proposed method.
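The two evaluation metrics reported here, the Dice coefficient and normalized cross-correlation, can be computed as follows (a minimal sketch, not the authors' code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def ncc(a, b):
    """Normalized cross-correlation of two images (1.0 = identical up to
    affine intensity change, -1.0 = inverted)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

Values above 0.97 (Dice) and 0.92 (NCC), as reported in the abstract, indicate near-complete overlap of the warped and target structures.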

7. Towards 2D/3D Registration of the Preoperative MRI to Intraoperative Fluoroscopic Images for Visualization of Bone Defects. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1096-1105. PMID: 37555198. PMCID: PMC10406464. DOI: 10.1080/21681163.2022.2152375.
Abstract
Magnetic Resonance Imaging (MRI) is a medical imaging modality that allows for the evaluation of soft-tissue diseases and the assessment of bone quality. Preoperative MRI volumes are used by surgeons to identify bone defects, segment lesions, and generate surgical plans before surgery. Nevertheless, conventional intraoperative imaging modalities such as fluoroscopy are less sensitive in detecting potential lesions. In this work, we propose a 2D/3D registration pipeline that aims to register preoperative MRI with intraoperative 2D fluoroscopic images. To showcase the feasibility of our approach, we use the core decompression procedure as a surgical example to perform 2D/3D femur registration. The proposed registration pipeline is evaluated using digitally reconstructed radiographs (DRRs) to simulate the intraoperative fluoroscopic images. The resulting transformation from the registration is then used to create overlays of preoperative MRI annotations and planning data to provide intraoperative visual guidance to surgeons. Our results suggest that the proposed registration pipeline is capable of achieving a reasonable transformation between MRI and digitally reconstructed fluoroscopic images for intraoperative visualization applications.

8. Comparison of in vivo kinematics of total knee arthroplasty between cruciate retaining and cruciate substituting insert. Asia Pac J Sports Med Arthrosc Rehabil Technol 2021; 26:47-52. PMID: 34722162. PMCID: PMC8521180. DOI: 10.1016/j.asmart.2021.10.002.
Abstract
Background The decision to use a cruciate retaining (CR) or cruciate substituting (CS) insert during total knee arthroplasty (TKA) remains controversial. We hypothesized that knee kinematics differ between CR and CS inserts and that a raised anterior lip design would offer a potential minimization of paradoxical movement and provide joint stability. The objective of this study was to evaluate and compare the kinematics of CR and CS TKA of the same single-radius design. Methods We investigated the in vivo knee kinematics of 20 knees with TKA (10 knees with the CR insert and 10 knees with the CS insert). Patients were examined during deep knee flexion using fluoroscopy, and femorotibial motion was determined using a 2D/3D registration technique that used computer-assisted design models to reproduce the spatial positions of the femoral and tibial components. We evaluated the knee range of motion (ROM), femoral axial rotation relative to the tibial component, anteroposterior translation, and the kinematic pathway of the nearest point of the medial and lateral femoral condyles on the tibial tray. Results The average ROM was 121.0 ± 17.3° in CR and 110.8 ± 12.4° in CS. The amount of femoral axial rotation was 7.2 ± 3.9° in CR and 7.4 ± 2.7° in CS. No significant difference was observed in the amount of anterior translation between CR and CS. The CR and CS inserts had a similar kinematic pattern up to 100° flexion: a central pivot up to 70° flexion and then paradoxical anterior femoral movement until 100° flexion. Conclusion The present study demonstrated no significant difference between the inserts in knee kinematics. These kinematic results suggest that the raised anterior lip could not control anterior movement in the CS insert.

9. Instrument localisation for endovascular aneurysm repair: Comparison of two methods based on tracking systems or using imaging. Int J Med Robot 2021; 17:e2327. PMID: 34480406. DOI: 10.1002/rcs.2327.
Abstract
BACKGROUND In endovascular aneurysm repair (EVAR) procedures, medical instruments are currently navigated with two-dimensional image-based guidance requiring X-rays and contrast agent. METHODS Novel approaches for obtaining three-dimensional instrument positions are introduced. First, a method based on fibre-optic shape sensing, one electromagnetic sensor, and a preoperative computed tomography (CT) scan is described. Second, an approach based on image processing using one 2D fluoroscopic image and a preoperative CT scan is introduced. RESULTS For the tracking-based method, average errors from 1.81 to 3.13 mm and maximum errors from 3.21 to 5.46 mm were measured. For the image-based approach, average errors from 3.07 to 6.02 mm and maximum errors from 8.05 to 15.75 mm were measured. CONCLUSION The tracking-based method is promising for use in EVAR procedures. The image-based approach is more suitable for applications in smaller vessels, since its errors increase with vessel diameter.

10. Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration. Int J Comput Assist Radiol Surg 2020; 15:759-769. PMID: 32333361. PMCID: PMC7263976. DOI: 10.1007/s11548-020-02162-7.
Abstract
PURPOSE Fluoroscopy is the standard imaging modality used to guide hip surgery and is therefore a natural sensor for computer-assisted navigation. In order to efficiently solve the complex registration problems presented during navigation, human-assisted annotations of the intraoperative image are typically required. This manual initialization interferes with the surgical workflow and diminishes any advantages gained from navigation. In this paper, we propose a method for fully automatic registration using anatomical annotations produced by a neural network. METHODS Neural networks are trained to simultaneously segment anatomy and identify landmarks in fluoroscopy. Training data are obtained using a computationally intensive, intraoperatively incompatible, 2D/3D registration of the pelvis and each femur. Ground truth 2D segmentation labels and anatomical landmark locations are established using projected 3D annotations. Intraoperative registration couples a traditional intensity-based strategy with annotations inferred by the network and requires no human assistance. RESULTS Ground truth segmentation labels and anatomical landmarks were obtained in 366 fluoroscopic images across 6 cadaveric specimens. In a leave-one-subject-out experiment, networks trained on these data obtained mean Dice coefficients for the left and right hemipelves and left and right femurs of 0.86, 0.87, 0.90, and 0.84, respectively. The mean 2D landmark localization error was 5.0 mm. The pelvis was registered within [Formula: see text] for 86% of the images when using the proposed intraoperative approach, with an average runtime of 7 s. In comparison, an intensity-only approach without manual initialization registered the pelvis to [Formula: see text] in 18% of images. CONCLUSIONS We have created the first accurately annotated, non-synthetic dataset of hip fluoroscopy. By using these annotations as training data for neural networks, state-of-the-art performance in fluoroscopic segmentation and landmark localization was achieved. Integrating these annotations allows for a robust, fully automatic, and efficient intraoperative registration during fluoroscopic navigation of the hip.

11. Evaluation of an intensity-based algorithm for 2D/3D registration of natural knee videofluoroscopy data. Med Eng Phys 2020; 77:107-113. PMID: 31980316. DOI: 10.1016/j.medengphy.2020.01.002.
Abstract
The accurate quantification of in-vivo tibio-femoral kinematics is essential for understanding joint functionality, but determination of the 3D pose of bones from 2D single-plane fluoroscopic images remains challenging. We aimed to evaluate the accuracy, reliability and repeatability of an intensity-based 2D/3D registration algorithm. The accuracy was evaluated using fluoroscopic images of 2 radiopaque bones in 18 different poses, compared against a gold-standard fiducial calibration device. In addition, 3 natural femora and 3 natural tibiae were used to examine registration reliability and repeatability. Both manual fitting and intensity-based registration exhibited a mean absolute error of <1 mm in-plane. Overall, intensity-based registration of the femoral bone model revealed significantly higher translational and rotational errors than manual fitting, while no statistical differences (except for y-axis translation) were found for the tibial bone model. The repeatability of 108 intensity-based registrations showed mean in-plane standard deviations of 0.23-0.56 mm, but out-of-plane position repeatability was lower (mean SD: femur 7.98 mm, tibia 6.96 mm). SDs for rotations averaged 0.77-2.52°. While the algorithm registered some images extremely well, other images clearly required manual intervention. When the algorithm registered the bones repeatably, it was also accurate, suggesting an approach that includes manual intervention could become practical for efficient and accurate registration.

12. Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views. Int J Comput Assist Radiol Surg 2019; 14:1463-1473. PMID: 31006106. DOI: 10.1007/s11548-019-01975-5.
Abstract
PURPOSE Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. METHODS In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of [Formula: see text]. RESULTS On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. CONCLUSION We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.
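The 11-degree-of-freedom projective mapping mentioned in the conclusion can be estimated analytically from 2D-3D landmark correspondences via the standard direct linear transform (DLT). The sketch below is a generic illustration with invented camera and point data, not the authors' implementation:

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Estimate a 3x4 projection matrix (11 DoF, defined up to scale) from
    six or more 2D-3D correspondences: stack two linear equations per
    correspondence and take the SVD null-space vector."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

# Hypothetical ground-truth camera: simple intrinsics, identity rotation.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [-0.2], [0.0]])])
pts3d = np.array([[0.0, 0, 4], [1, 0, 5], [0, 1, 6], [1, 1, 4],
                  [-1, 0, 5], [0, -1, 6], [2, 1, 7]], dtype=float)
homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])
proj = homog @ P_true.T
pts2d = proj[:, :2] / proj[:, 2:3]
P_est = dlt_projection(pts3d, pts2d)
```

With noise-free correspondences in general position, the recovered matrix reprojects the 3D landmarks onto the observed 2D locations; noisy detections (such as the 5.6 mm errors reported above) would make this a least-squares fit instead.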

13. Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration. Int J Comput Assist Radiol Surg 2017; 12:1221-1230. PMID: 28527025. DOI: 10.1007/s11548-017-1611-8.
Abstract
PURPOSE In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance. The commonly used intensity-based 2D/3D registration has a limited capture range and is sensitive to initialization. We propose to utilize an opto/X-ray C-arm system which allows to maintain the registration during intervention by automating the re-initialization for the 2D/3D image registration. Consequently, the surgical workflow is not disrupted and the interaction time for manual initialization is eliminated. METHODS We utilize two distinct vision-based tracking techniques to estimate the relative poses between different C-arm arrangements: (1) global tracking using fused depth information and (2) RGBD SLAM system for surgical scene tracking. A highly accurate multi-view calibration between RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target. RESULTS Several in vitro studies are conducted on pelvic-femur phantom that is encased in gelatin and covered with drapes to simulate a clinically realistic scenario. The mean target registration errors (mTRE) for re-initialization using depth-only and RGB [Formula: see text] depth are 13.23 mm and 11.81 mm, respectively. 2D/3D registration yielded 75% success rate using this automatic re-initialization, compared to a random initialization which yielded only 23% successful registration. CONCLUSION The pose-aware C-arm contributes to the 2D/3D registration process by globally re-initializing the relationship of C-arm image and pre-interventional CT data. This system performs inside-out tracking, is self-contained, and does not require any external tracking devices.

14. Real-time pose estimation of devices from x-ray images: Application to x-ray/echo registration for cardiac interventions. Med Image Anal 2016; 34:101-108. PMID: 27179366. DOI: 10.1016/j.media.2016.04.008.
Abstract
In recent years, registration between x-ray fluoroscopy (XRF) and transesophageal echocardiography (TEE) has been rapidly developed, validated, and translated to the clinic as a tool for advanced image guidance of structural heart interventions. This technology relies on accurate pose estimation of the TEE probe via standard 2D/3D registration methods. It has been shown that latencies caused by slow registrations can result in errors during untracked frames, and a real-time (>15 Hz) tracking algorithm is needed to minimize these errors. This paper presents two novel similarity metrics designed for accurate, robust, and extremely fast pose estimation of devices from XRF images: Direct Splat Correlation (DSC) and Patch Gradient Correlation (PGC). Both metrics were implemented in CUDA C and validated on simulated and clinical datasets against prior methods presented in the literature. It was shown that by combining DSC and PGC in a hybrid method (HYB), target registration errors comparable to previously reported methods were achieved, but at much higher speeds and lower failure rates. In simulated datasets, the proposed HYB method achieved a median projected target registration error (pTRE) of 0.33 mm and a mean registration frame rate of 12.1 Hz, while previously published methods produced median pTREs greater than 1.5 mm and mean registration frame rates less than 4 Hz. In clinical datasets, the HYB method achieved a median pTRE of 1.1 mm and a mean registration frame rate of 20.5 Hz, while previously published methods produced median pTREs greater than 1.3 mm and mean registration frame rates less than 12 Hz. The proposed hybrid method also had much lower failure rates than previously published methods.

15. Real-time 6DoF pose recovery from X-ray images using library-based DRR and hybrid optimization. Int J Comput Assist Radiol Surg 2016; 11:1211-1220. PMID: 27038967. DOI: 10.1007/s11548-016-1387-2.
Abstract
PURPOSE Real-time 6 degrees of freedom (6DoF) pose recovery and tracking from X-ray images is a key enabling technology for many interventional imaging applications. However, real-time 2D/3D registration is a very challenging problem because of the heavy computation involved in iterative digitally reconstructed radiograph (DRR) generation. In this paper, we propose a real-time 2D/3D registration framework using library-based DRRs to achieve high computational efficiency. METHOD The proposed method pre-computes a library of canonical DRRs and reconstructs library-based DRRs (libDRRs) during registration without online rendering. The transformation parameters are decoupled into 2 geometry-relevant and 4 geometry-irrelevant parameters so that the canonical DRRs only need to cover the variation of the 2 geometry-relevant parameters, making the library practical to pre-compute and store. The 2D/3D registration using libDRRs is then solved as a hybrid optimization problem, i.e., continuous in the geometry-irrelevant parameters and discrete in the geometry-relevant parameters. RESULTS On 5 fluoroscopic sequences with 246 frames acquired during animal studies with a transesophageal echocardiography (TEE) probe in the field of view, 6DoF tracking of the TEE probe using the proposed method achieved a mean target registration error in the projection direction (mTREproj) of 0.81 mm, a success rate of 100% (defined as mTREproj [Formula: see text]2.5 mm), and a registration frame rate of 23.1 fps on a pure CPU-based implementation executed in a single thread. CONCLUSION Using libDRRs with hybrid optimization can significantly improve computational efficiency (up to tenfold) for 6DoF pose recovery and tracking, with little degradation in robustness and accuracy, compared to conventional intensity-based 2D/3D registration using ray-casting DRRs with continuous optimization.
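The hybrid discrete/continuous optimization strategy can be illustrated with a toy sketch: exhaustive search over a discrete parameter (standing in for the pre-computed DRR library index) combined with a simple continuous hill-climb over the remaining parameters. The objective function, parameter names, and values below are invented for illustration and are not the paper's method:

```python
import numpy as np

def hybrid_register(score, discrete_vals, x0, step=0.5, iters=40):
    """Toy hybrid search: for each discrete candidate (e.g. a library
    index), run a coordinate hill-climb over the continuous parameters,
    shrinking the step when no move improves the score."""
    best = (-np.inf, None, None)
    for d in discrete_vals:
        x, s = np.array(x0, dtype=float), step
        for _ in range(iters):
            improved = False
            for i in range(len(x)):
                for delta in (+s, -s):
                    cand = x.copy()
                    cand[i] += delta
                    if score(d, cand) > score(d, x):
                        x, improved = cand, True
            if not improved:
                s *= 0.5  # refine with a smaller step
        if score(d, x) > best[0]:
            best = (score(d, x), d, x)
    return best  # (best score, discrete choice, continuous parameters)

# Toy objective with its optimum at d=2, x=(1.0, -0.5).
f = lambda d, x: -((d - 2) ** 2 + (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2)
score_, d_, x_ = hybrid_register(f, [0, 1, 2, 3], [0.0, 0.0])
```

In the real system the score would be an image similarity between the fluoroscopic frame and a libDRR, which is far more expensive than this toy objective; the structural point is only that the discrete axis is enumerated while the remaining axes are refined continuously.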

16. Dynamic tracking of prosthetic valve motion and deformation from bi-plane x-ray views: feasibility study. Proc SPIE Int Soc Opt Eng 2016; 9786:978604. PMID: 28008211. PMCID: PMC5166601. DOI: 10.1117/12.2216588.
Abstract
Transcatheter aortic valve replacement (TAVR) requires navigation and deployment of a prosthetic valve within the aortic annulus under fluoroscopic guidance. To support improved device visualization in this procedure, this study investigates the feasibility of frame-by-frame 3D reconstruction of a moving and expanding prosthetic valve structure from simultaneous bi-plane x-ray views. In the proposed method, a dynamic 3D model of the valve is used in a 2D/3D registration framework to obtain a reconstruction of the valve. For each frame, valve model parameters describing position, orientation, expansion state, and deformation are iteratively adjusted until forward projections of the model match both bi-plane views. Simulated bi-plane imaging of a valve at different signal-difference-to-noise ratio (SDNR) levels was performed to test the approach. 20 image sequences with 50 frames of valve deployment were simulated at each SDNR. The simulation achieved a target registration error (TRE) of the estimated valve model of 0.93 ± 2.6 mm (mean ± S.D.) for the lowest SDNR of 2. For higher SDNRs (5 to 50) a TRE of 0.04 mm ± 0.23 mm was achieved. A tabletop phantom study was then conducted using a TAVR valve. The dynamic 3D model was constructed from high resolution CT scans and a simple expansion model. TRE was 1.22 ± 0.35 mm for expansion states varying from undeployed to fully deployed, and for moderate amounts of inter-frame motion. Results indicate that it is feasible to use bi-plane imaging to recover the 3D structure of deformable catheter devices.
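The target registration error (TRE) used to evaluate the reconstruction is simply the mean (and spread) of the distances between corresponding estimated and ground-truth points; a minimal sketch, with invented point data:

```python
import numpy as np

def target_registration_error(points_true, points_est):
    """Mean and standard deviation of Euclidean distances between
    corresponding ground-truth and estimated target points (TRE)."""
    d = np.linalg.norm(np.asarray(points_true, dtype=float) -
                       np.asarray(points_est, dtype=float), axis=1)
    return float(d.mean()), float(d.std())
```

Reporting TRE as mean ± SD, as this study does, captures both the typical offset of the recovered valve model and how consistent that offset is across frames.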
|
17
|
Development of fast patient position verification software using 2D-3D image registration and its clinical experience. JOURNAL OF RADIATION RESEARCH 2015; 56:818-29. [PMID: 26081313 PMCID: PMC4577001 DOI: 10.1093/jrr/rrv032] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Revised: 03/23/2015] [Accepted: 05/08/2015] [Indexed: 05/20/2023]
Abstract
To improve treatment workflow, we developed a graphic processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration was computed using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy.
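Of the similarity metrics named in this abstract, ZNCC is the simplest to state. As a hedged illustration (not the authors' GPU implementation), the following computes zero-mean normalized cross-correlation between two equal-size images given as flat intensity lists; the metric equals 1 for images related by a positive linear intensity change, which is what makes it useful for matching a DRR against an FPD image with different gain and offset.

```python
import math

def zncc(a, b):
    # Zero-mean normalized cross-correlation of two equal-length
    # intensity lists; result lies in [-1, 1].
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

img = [10.0, 20.0, 30.0, 40.0]
assert abs(zncc(img, img) - 1.0) < 1e-9                          # identical images
assert abs(zncc(img, [2 * v + 5 for v in img]) - 1.0) < 1e-9     # invariant to gain/offset
```

GD and NMI behave differently (edge-sensitive and statistical, respectively), which is consistent with the abstract's finding that accuracy depended on the metric and on their combination.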
|
18
|
Intra-operative gaps affect outcome and postoperative kinematics in vivo following cruciate-retaining total knee arthroplasty. INTERNATIONAL ORTHOPAEDICS 2015; 40:41-9. [PMID: 26133289 DOI: 10.1007/s00264-015-2847-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2015] [Accepted: 04/30/2015] [Indexed: 12/30/2022]
Abstract
PURPOSE The following investigation evaluates the effect of intra-operative gaps after posterior cruciate ligament-retaining total knee arthroplasty using two-dimensional/three-dimensional registration and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). METHODS Patients were divided into two groups according to their 90°-0° component gap changes using a device designed by our laboratory. The wide gap group was defined as more than 3 mm (4.3 ± 0.7 mm), and the narrow gap group was defined as less than 3 mm (1.3 ± 1.3 mm). RESULTS Under non-weight-bearing (non-WB) conditions, the wide flexion gap group (N = 10) showed a significant anterior displacement of the medial femoral condyle as compared with the narrow flexion gap group (N = 20). Despite no significant differences observed under WB conditions, both femoral condyle positions during flexion were significantly more posterior than during extension. The WOMAC of the narrow gap group showed worse scores for two functional items demanding knee flexion (bending to floor and getting on/off toilet). CONCLUSION The large flexion gap could influence the late rollback under non-WB conditions and the better WOMAC functional scores in the flexion items. Three to four millimetre laxity at 90°-0° component gaps may be adequate and might be necessary to carry out daily life activities.
|
19
|
Automatic fusion of lateral cephalograms and digital volume tomography data-perspective for combining two modalities in the future. Dentomaxillofac Radiol 2015; 44:20150073. [PMID: 26119213 DOI: 10.1259/dmfr.20150073] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
OBJECTIVES This article investigates the combination of three-dimensional (3D) digital volume tomography data with two-dimensional (2D) cephalograms in dentomaxillofacial imaging. METHODS An automatic hierarchical method to adjust the geometrical relations of these two modalities is presented. The approach is tested on phantom and patient case data, where the feasibility, usability and potential of the presented method are highlighted. Digitally reconstructed radiographs are computed by casting rays through the 3D volume to obtain a 2D projection and thereby produce realistic simulated cephalograms. Different similarity measures are considered, based on variations of statistical and deterministic optimization procedures. The stability, precision and accuracy of the method are investigated. RESULTS The presented algorithm provides a reasonable solution of the corresponding 2D/3D registration problem. Exemplary results from phantom and patient case data are presented. In contrast to the 2D lateral cephalogram, tooth movement could be determined separately for each side in all three spatial directions. CONCLUSIONS The results are highlighted from a clinical point of view and demonstrate the clinical benefit in daily practice.
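The ray-casting idea behind digitally reconstructed radiographs can be reduced to a toy example. The sketch below is a simplifying assumption, not the paper's method: it uses a parallel-beam geometry, so each pixel of the simulated radiograph is just the sum of voxel attenuations along one axis (a real DRR would trace diverging rays from an X-ray focal spot and interpolate between voxels).

```python
def drr_parallel(volume):
    # volume[z][y][x] -> 2D image [y][x], integrating attenuation over z.
    # Each output pixel is the line integral (here a plain sum) of the
    # voxels the corresponding parallel ray passes through.
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    image = [[0.0] * nx for _ in range(ny)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                image[y][x] += volume[z][y][x]
    return image

# A 2x2x2 volume with attenuation 1.0 everywhere projects to a 2x2 image of 2.0:
# each ray crosses two unit voxels.
vol = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
assert drr_parallel(vol) == [[2.0, 2.0], [2.0, 2.0]]
```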
|
20
|
In vivo kinematic analysis of posterior-stabilized total knee arthroplasty for the valgus knee operated by the gap-balancing technique. Knee 2014; 21:1124-8. [PMID: 25153613 DOI: 10.1016/j.knee.2014.07.011] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/22/2014] [Revised: 07/05/2014] [Accepted: 07/21/2014] [Indexed: 02/02/2023]
Abstract
BACKGROUND Most in vivo kinematic studies of total knee arthroplasty (TKA) report on the varus knee. The objective of the present study was to evaluate the in vivo kinematics of a posterior-stabilized fixed-bearing TKA performed on a valgus knee during knee bending under weight-bearing (WB) and non-weight-bearing (NWB) conditions. METHODS A total of 16 valgus knees in 12 cases that underwent TKA with the Scorpio NRG PS knee prosthesis, operated on using the gap-balancing technique, were evaluated. We evaluated the in vivo kinematics of the knee using fluoroscopy and femorotibial translation relative to the tibial tray using a 2-dimensional to 3-dimensional registration technique. RESULTS The average flexion angle was 111.3° ± 7.5° in WB and 114.9° ± 8.4° in NWB. The femoral component demonstrated a mean external rotation of 5.9° ± 5.8° in WB and 7.4° ± 5.2° in NWB. In both WB and NWB, the femoral component showed a medial pivot pattern from 0° to midflexion and a bicondylar rollback pattern from midflexion to full flexion. The medial condyle moved similarly in the WB and NWB conditions. The lateral condyle moved posteriorly at a slightly earlier flexion angle in WB than in NWB. CONCLUSIONS We conclude that kinematics similar to those of a normal knee can be obtained with the gap-balancing technique for preoperative valgus deformity, even though the magnitude of external rotation was small. LEVEL OF EVIDENCE IV.
|
21
|
Significant effect of the posterior tibial slope on the weight-bearing, midflexion in vivo kinematics after cruciate-retaining total knee arthroplasty. J Arthroplasty 2014; 29:2324-30. [PMID: 24269068 DOI: 10.1016/j.arth.2013.10.018] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/01/2013] [Revised: 10/10/2013] [Accepted: 10/20/2013] [Indexed: 02/01/2023] Open
Abstract
The purpose of the present study was to compare weight-bearing (WB) and non-WB conditions, and to evaluate the effect of the posterior tibial slope (PTS) on the in vivo kinematics of 21 knees after posterior cruciate ligament-retaining total knee arthroplasty during midflexion using 2-dimensional/3-dimensional registration. During WB, medial pivot and bicondylar rollback were observed. During non-WB, both the medial and lateral condyles moved significantly more anteriorly as compared to the WB state. These patients were divided into 2 groups according to their PTS. The large PTS group showed a significant posterior displacement of the medial femoral condyle as compared with the small PTS group, but no significant difference was observed at the lateral femoral condyle during both WB and non-WB. The PTS influenced knee kinematics through gravity.
|
22
|
Augmented depth perception visualization in 2D/3D image fusion. Comput Med Imaging Graph 2014; 38:744-52. [PMID: 25066009 DOI: 10.1016/j.compmedimag.2014.06.015] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2013] [Revised: 06/24/2014] [Accepted: 06/27/2014] [Indexed: 11/27/2022]
Abstract
2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software usually concern the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts for improving the image fusion visualization currently found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire that included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, integrating an RGB or RB color-depth encoding in the image fusion improves both perception and intuitiveness.
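The RB color-depth encoding mentioned in the evaluation can be illustrated with a minimal sketch. This is a hypothetical mapping, not the paper's exact scheme: depth is normalized to [0, 1] and interpolated linearly from red (near) to blue (far), so two overlapping vessels at different depths receive visibly different hues.

```python
def depth_to_rb(depth, d_min, d_max):
    # Normalize depth into [0, 1], clamping out-of-range values.
    t = (depth - d_min) / (d_max - d_min)
    t = min(max(t, 0.0), 1.0)
    # Near structures render red, far structures blue; green stays unused.
    return (round(255 * (1.0 - t)), 0, round(255 * t))

assert depth_to_rb(0.0, 0.0, 100.0) == (255, 0, 0)    # nearest: pure red
assert depth_to_rb(100.0, 0.0, 100.0) == (0, 0, 255)  # farthest: pure blue
```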
|