51
Mitrović U, Pernuš F, Likar B, Špiclin Ž. Simultaneous 3D-2D image registration and C-arm calibration: Application to endovascular image-guided interventions. Med Phys 2015; 42:6433-47. [PMID: 26520733 DOI: 10.1118/1.4932626]
Abstract
PURPOSE Three-dimensional to two-dimensional (3D-2D) image registration is key to the fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D-2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D-2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of the initial pose of the 3D image from which registration starts, (3) uncertainty of the C-arm's geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D-2D registration method against a highly accurate reference or "gold standard" registration, performed on clinical image datasets acquired in the context of the intervention. METHODS The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage uses either machine-based registration or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model to the 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D-2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. RESULTS Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and template matching and final registration involving C-arm calibration were 36%, 73%, and 93%, respectively, while the best registration accuracy, 0.59 mm, was obtained after the final registration stage. By compensating for in-plane translation errors through initial template matching, the success rates achieved after the final stage improved consistently for all methods, especially if C-arm calibration was performed simultaneously with the 3D-2D image registration. CONCLUSIONS Because the tested methods perform simultaneous C-arm calibration and 3D-2D registration based solely on anatomical information, they have a high potential for automation and thus for immediate integration into the current interventional workflow. One of the authors' main contributions is also a comprehensive and representative validation performed under realistic conditions as encountered during cerebral EIGI.
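As an illustration of the validation metric used throughout this study, the sketch below computes a target registration error against a gold-standard rigid transform. It is a minimal helper with assumed inputs, not the authors' code.

```python
# Minimal sketch: target registration error (TRE) against a gold-standard
# rigid transform; inputs and the example offset are illustrative only.
import numpy as np

def target_registration_error(T_est, T_gold, targets):
    """Mean distance between target points mapped by the estimated and
    gold-standard 4x4 homogeneous rigid transforms."""
    pts = np.c_[targets, np.ones(len(targets))]            # N x 4 homogeneous
    diff = (pts @ T_est.T)[:, :3] - (pts @ T_gold.T)[:, :3]
    return np.linalg.norm(diff, axis=1).mean()

# Hypothetical example: a 1 mm translation offset along x yields TRE ~ 1 mm.
T_gold = np.eye(4)
T_est = np.eye(4); T_est[0, 3] = 1.0
targets = np.random.rand(10, 3) * 50.0                     # target points in mm
print(target_registration_error(T_est, T_gold, targets))
```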
Affiliation(s)
- Uroš Mitrović
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia and Cosylab, Control System Laboratory, Teslova ulica 30, Ljubljana 1000, Slovenia
- Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia
- Boštjan Likar
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia and Sensum, Computer Vision Systems, Tehnološki Park 21, Ljubljana 1000, Slovenia
- Žiga Špiclin
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia and Sensum, Computer Vision Systems, Tehnološki Park 21, Ljubljana 1000, Slovenia
52
53
Otake Y, Wang AS, Uneri A, Kleinszig G, Vogt S, Aygun N, Lo SFL, Wolinsky JP, Gokaslan ZL, Siewerdsen JH. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation. Phys Med Biol 2015; 60:2075-90. [PMID: 25674851 DOI: 10.1088/0031-9155/60/5/2075]
Abstract
An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely 'LevelCheck') to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product) in a manner consistent with natural surgical workflow.
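For readers unfamiliar with the similarity metric named above, the sketch below shows one common way to compute a gradient correlation between a DRR and a radiograph. It is an assumption-laden sketch, not the LevelCheck implementation.

```python
# Sketch of a gradient-correlation similarity between a DRR and a radiograph:
# the average normalized cross-correlation of the two images' gradients.
import numpy as np
from scipy.ndimage import sobel

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def gradient_correlation(drr, radiograph):
    """Higher values indicate a better match between the two images."""
    gc = 0.0
    for axis in (0, 1):                      # vertical and horizontal gradients
        gc += ncc(sobel(drr, axis=axis), sobel(radiograph, axis=axis))
    return gc / 2.0
```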
Affiliation(s)
- Yoshito Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
54
Zhang J, Wang J, Wang X, Gao X, Feng D. Physical constraint finite element model for medical image registration. PLoS One 2015; 10:e0140567. [PMID: 26495841 PMCID: PMC4619665 DOI: 10.1371/journal.pone.0140567]
Abstract
Because they are derived from a linear assumption, most elastic-body-based non-rigid image registration algorithms face challenges for soft tissues with complex nonlinear behavior and with large deformations. To take into account the geometric nonlinearity of soft tissues, we propose a registration algorithm on the basis of a Newtonian differential equation. The material behavior of soft tissues is modeled as St. Venant-Kirchhoff elasticity, and the nonlinearity of the continuum is represented by the quadratic term of the deformation gradient in the Green-St. Venant strain. In our algorithm, the elastic force is formulated as the derivative of the deformation energy with respect to the nodal displacement vectors of the finite element; the external force is determined by the registration similarity gradient flow, which drives the deformation of the floating image toward the equilibrium condition. We compared our approach to three other models: 1) the conventional linear elastic finite element model (FEM); 2) the dynamic elastic FEM; 3) the robust block matching (RBM) method. The registration accuracy was measured using three similarities: MSD (Mean Square Difference), NC (Normalized Correlation) and NMI (Normalized Mutual Information), and was also measured using the mean and max distance between the ground-truth seed points and the corresponding points after registration. We validated our method on 60 image pairs including 30 medical image pairs with artificial deformation and 30 clinical image pairs, covering both chest images acquired at different stages of chemotherapy treatment and brain MRI normalization. Our method achieved a distance error of 0.320±0.138 mm in the x direction and 0.326±0.111 mm in the y direction, MSD of 41.96±13.74, NC of 0.9958±0.0019, NMI of 1.2962±0.0114 for images with large artificial deformations; and average NC of 0.9622±0.008 and NMI of 1.2764±0.0089 for the real clinical cases. Student's t-test demonstrated that our model statistically outperformed the other methods in comparison (p-values < 0.05).
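For reference, the St. Venant-Kirchhoff model mentioned above is conventionally written in terms of the Green-St. Venant strain E, the deformation gradient F, and the Lamé parameters λ and μ (standard notation, not reproduced from the paper's own equations):

```latex
E = \tfrac{1}{2}\!\left(F^{\mathsf T}F - I\right), \qquad
W(E) = \frac{\lambda}{2}\,[\operatorname{tr}(E)]^{2} + \mu\,\operatorname{tr}\!\left(E^{2}\right), \qquad
S = \frac{\partial W}{\partial E} = \lambda\,\operatorname{tr}(E)\,I + 2\mu E .
```

Here W is the strain-energy density whose derivative with respect to the nodal displacements yields the elastic force used in the algorithm, and S is the second Piola-Kirchhoff stress.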
Affiliation(s)
- Jingya Zhang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, P.R. China
- Department of Physics, Changshu Institute of Technology, Changshu 215500, P.R. China
- Jiajun Wang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, P.R. China
- Xiuying Wang
- Institute of Biomedical Engineering and Technology and School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia
- Xin Gao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215006, P.R. China
- Dagan Feng
- Institute of Biomedical Engineering and Technology and School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia
- Med-X Research Institute, Shanghai Jiao Tong University, P.R. China
55
Fully automated 2D-3D registration and verification. Med Image Anal 2015; 26:108-19. [PMID: 26387052 DOI: 10.1016/j.media.2015.08.005]
Abstract
Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra-based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image, and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a Gradient Difference Similarity Measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value and the other based on the pose agreement between multiple vertebra-based registrations. The proposed methods are employed here for CT to fluoroscopy registration and are trained and tested with data from 31 clinical procedures with 417 low-dose (i.e., low-quality, high-noise) interventional fluoroscopy images. When similarity-value-based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a 'no registration' result is produced for the remaining 4.27% of cases (i.e. incorrect registration rate is 0%). The system also automatically detects input images outside its operating range.
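The gradient difference similarity measure referred to above is commonly computed from the gradients of the difference between the fluoroscopy image and a scaled template or DRR; the sketch below uses illustrative normalising constants and is not the paper's code.

```python
# Sketch of one common form of a gradient-difference similarity measure:
# robust terms over the gradient of the difference image; larger is better.
import numpy as np
from scipy.ndimage import sobel

def gradient_difference(fluoro, drr, scale=1.0):
    diff = fluoro - scale * drr
    score = 0.0
    for axis in (0, 1):
        g_diff = sobel(diff, axis=axis)
        a = sobel(fluoro, axis=axis).var() + 1e-12   # illustrative constant
        score += (a / (a + g_diff ** 2)).sum()
    return score
```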
56
Multimodal image registration with joint structure tensor and local entropy. Int J Comput Assist Radiol Surg 2015; 10:1765-75. [PMID: 26018848 DOI: 10.1007/s11548-015-1219-9]
Abstract
PURPOSE Nonrigid registration of multimodal medical images remains a challenge in image-guided interventions. A common approach is to use mutual information (MI), which is robust to the intensity variations across modalities. However, because it is based primarily on the intensity distribution, MI does not take into account the underlying spatial and structural information of the images, which might cause the optimization to be trapped in local optima. To address such a challenge, this paper proposes a two-stage multimodal nonrigid registration scheme with joint structural information and local entropy. METHODS In our two-stage multimodal nonrigid registration scheme, both the reference image and the floating image are first converted to a common space. A unified representation in the common space for the images is constructed by fusing the structure tensor (ST) trace with the local entropy (LE). Through this representation, which reflects image geometry uniformly across modalities, the complicated deformation field is estimated using the L1 or L2 distance. RESULTS We compared our approach to four other methods: (1) the method using LE, (2) the method using ST, (3) the method using spatially weighted LE and (4) the conventional MI-based method. Quantitative evaluations on 80 multimodal image pairs of different organs including 50 pairs of MR images with artificial deformations, 20 pairs of medical brain MR images and 10 pairs of breast images showed that our proposed method outperformed the comparison methods. Student's t test demonstrated that our method achieved a statistically significant improvement in registration accuracy. CONCLUSION The two-stage registration with joint ST and LE outperformed the conventional MI-based method for multimodal images. Both the ST and the LE contributed to the improved registration accuracy.
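A minimal sketch of the kind of modality-independent representation described above, fusing a structure-tensor trace with local entropy, is shown below; window sizes, bin counts and the weighting are illustrative assumptions, not the paper's settings.

```python
# Sketch: per-pixel structure-tensor trace fused with local entropy.
import numpy as np
from scipy.ndimage import sobel, gaussian_filter, generic_filter

def structure_tensor_trace(img, sigma=1.5):
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    # Trace of the Gaussian-smoothed structure tensor = smoothed gx^2 + gy^2.
    return gaussian_filter(gx * gx, sigma) + gaussian_filter(gy * gy, sigma)

def local_entropy(img, size=9, bins=32):
    def entropy(window):
        hist, _ = np.histogram(window, bins=bins)
        p = hist[hist > 0] / window.size
        return -(p * np.log(p)).sum()
    return generic_filter(img, entropy, size=size)

def fused_representation(img, alpha=0.5):
    norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-12)
    return alpha * norm(structure_tensor_trace(img)) + \
           (1 - alpha) * norm(local_entropy(img))
```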
57
Minimally invasive registration for computer-assisted orthopedic surgery: combining tracked ultrasound and bone surface points via the P-IMLOP algorithm. Int J Comput Assist Radiol Surg 2015; 10:761-71. [PMID: 25895079 DOI: 10.1007/s11548-015-1188-z]
Abstract
PURPOSE We present a registration method for computer-assisted total hip replacement (THR) surgery, which we demonstrate to improve the state of the art by both reducing the invasiveness of current methods and increasing registration accuracy. A critical element of computer-guided procedures is the determination of the spatial correspondence between the patient and a computational model of patient anatomy. The current method for establishing this correspondence in robot-assisted THR is to register points intraoperatively sampled by a tracked pointer from the exposed proximal femur and, via auxiliary incisions, from the distal femur. METHODS In this paper, we demonstrate a noninvasive technique for sampling points on the distal femur using tracked B-mode ultrasound imaging and present a new algorithm for registering these data called Projected Iterative Most-Likely Oriented Point (P-IMLOP). Points and normal orientations of the distal bone surface are segmented from ultrasound images and registered to the patient model along with points sampled from the exposed proximal femur via a tracked pointer. RESULTS The proposed approach is evaluated using a bone- and tissue-mimicking leg phantom constructed to enable accurate assessment of experimental registration accuracy with respect to a CT-image-based model of the phantom. These experiments demonstrate that localization of the femur shaft is greatly improved by tracked ultrasound. The experiments further demonstrate that, for ultrasound-based data, the P-IMLOP algorithm significantly improves registration accuracy compared to the standard ICP algorithm. CONCLUSION Registration via tracked ultrasound and the P-IMLOP algorithm has high potential to reduce the invasiveness and improve the registration accuracy of computer-assisted orthopedic procedures.
58
Otake Y, Leonard S, Reiter A, Rajan P, Siewerdsen JH, Gallia GL, Ishii M, Taylor RH, Hager GD. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery. Proc SPIE Int Soc Opt Eng 2015; 9415. [PMID: 25991876 DOI: 10.1117/12.2081732]
Abstract
We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically, collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against moderate anatomical deformation.
Affiliation(s)
- Y Otake
- Department of Computer Science, Johns Hopkins University, Baltimore MD, USA; Graduate School of Information Science, Nara Institute of Science and Technology, Nara, Japan
- S Leonard
- Department of Computer Science, Johns Hopkins University, Baltimore MD, USA
- A Reiter
- Department of Computer Science, Johns Hopkins University, Baltimore MD, USA
- P Rajan
- Department of Computer Science, Johns Hopkins University, Baltimore MD, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA
- G L Gallia
- Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University, Baltimore MD, USA
- M Ishii
- Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University, Baltimore MD, USA
- R H Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore MD, USA
- G D Hager
- Department of Computer Science, Johns Hopkins University, Baltimore MD, USA
59
Liu WP, Otake Y, Azizian M, Wagner OJ, Sorger JM, Armand M, Taylor RH. 2D-3D radiograph to cone-beam computed tomography (CBCT) registration for C-arm image-guided robotic surgery. Int J Comput Assist Radiol Surg 2015; 10:1239-52. [PMID: 25503592 DOI: 10.1007/s11548-014-1132-7]
Abstract
PURPOSE C-arm radiographs are commonly used for intraoperative image guidance in surgical interventions. Fluoroscopy is a cost-effective real-time modality, although image quality can vary greatly depending on the target anatomy. Cone-beam computed tomography (CBCT) scans are sometimes available, so 2D-3D registration is needed for intra-procedural guidance. C-arm radiographs were registered to CBCT scans and used for 3D localization of peritumor fiducials during a minimally invasive thoracic intervention with a da Vinci Si robot. METHODS Intensity-based 2D-3D registration of intraoperative radiographs to CBCT was performed. The feasible range of X-ray projections achievable by a C-arm positioned around a da Vinci Si surgical robot, configured for robotic wedge resection, was determined using phantom models. Experiments were conducted on synthetic phantoms and animals imaged with an OEC 9600 and a Siemens Artis zeego, representing the spectrum of different C-arm systems currently available for clinical use. RESULTS The image guidance workflow was feasible using either an optically tracked OEC 9600 or a Siemens Artis zeego C-arm, resulting in an angular difference of Δθ ≈ 30°. The two C-arm systems provided mean TRE ≤ 2.5 mm and mean TRE ≤ 2.0 mm, respectively (i.e., comparable to standard clinical intraoperative navigation systems). CONCLUSIONS C-arm 3D localization from dual 2D-3D registered radiographs was feasible and applicable for intraoperative image guidance during da Vinci robotic thoracic interventions using the proposed workflow. Tissue deformation and in vivo experiments are required before clinical evaluation of this system.
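Once two radiographs have been registered to the CBCT, a fiducial can be localized in 3D by linear triangulation from the two recovered projection matrices; the sketch below is a generic DLT triangulation helper, not the authors' implementation.

```python
# Sketch: linear (DLT) triangulation of a fiducial from two registered views.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates of
    the same fiducial in the two radiographs. Returns the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # convert from homogeneous coordinates
```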
Affiliation(s)
- Wen Pei Liu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
60
Basafa E, Murphy RJ, Otake Y, Kutzer MD, Belkoff SM, Mears SC, Armand M. Subject-specific planning of femoroplasty: an experimental verification study. J Biomech 2015; 48:59-64. [PMID: 25468663 DOI: 10.1016/j.jbiomech.2014.11.002]
Abstract
The risk of osteoporotic hip fractures may be reduced by augmenting susceptible femora with acrylic polymethylmethacrylate (PMMA) bone cement. Grossly filling the proximal femur with PMMA has shown promise, but the augmented bones can suffer from thermal necrosis or cement leakage, among other side effects. We hypothesized that, using subject-specific planning and computer-assisted augmentation, we can minimize cement volume while increasing bone strength and reducing the risk of fracture. We mechanically tested eight pairs of osteoporotic femora, after augmenting one from each pair following patient-specific planning reported earlier, which optimized cement distribution and strength increase. An average of 9.5 (±1.7) ml of cement was injected in the augmented set. Augmentation significantly (P < 0.05) increased the yield load by 33%, maximum load by 30%, yield energy by 118%, and maximum energy by 94% relative to the non-augmented controls. Also, predicted yield loads correlated well (R² = 0.74) with the experiments and, for augmented specimens, cement profiles were predicted with an average surface error of <2 mm, further validating our simulation techniques. Results of the current study suggest that subject-specific planning of femoroplasty reduces the risk of hip fracture while minimizing the amount of cement required.
Affiliation(s)
- Ehsan Basafa
- Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA.
- Ryan J Murphy
- Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA; Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Yoshito Otake
- Graduate School of Information Science, Nara Institute of Science and Technology, Ikoma, Nara, Japan
- Stephen M Belkoff
- International Center for Orthopaedic Advancement, Bayview Medical Center, Johns Hopkins University, Baltimore, MD, USA
- Simon C Mears
- Total Joint Replacement Center, Baylor Regional Medical Center, Plano, TX, USA
- Mehran Armand
- Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA; Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
61
Bertelsen A, Garin-Muga A, Echeverría M, Gómez E, Borro D. Distortion correction and calibration of intra-operative spine X-ray images using a constrained DLT algorithm. Comput Med Imaging Graph 2014; 38:558-68. [PMID: 24993596 DOI: 10.1016/j.compmedimag.2014.06.004]
62
Gong RH, Güler Ö, Kürklüoglu M, Lovejoy J, Yaniv Z. Interactive initialization of 2D/3D rigid registration. Med Phys 2013; 40:121911. [PMID: 24320522 DOI: 10.1118/1.4830428]
Abstract
PURPOSE Registration is one of the key technical components in an image-guided navigation system. A large number of 2D/3D registration algorithms have been previously proposed, but have not been able to transition into clinical practice. The authors identify the primary reason for the lack of adoption as the prerequisite of a sufficiently accurate initial transformation, with a mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting. METHODS The authors have developed two interactive registration methods based on visual alignment of a preoperative image (MR or CT) to intraoperative x-rays. In the first approach, the operator uses a gesture-based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation. RESULTS In the authors' experiments, the authors show that for x-ray/MR registration, the gesture-based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method had mTREs of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture-based method resulted in an mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method had mTREs of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s. CONCLUSIONS Based on the authors' evaluation, the authors conclude that the registration approaches are sufficiently accurate for initializing 2D/3D registration in the OR setting, both when a tracking system is not in use (gesture-based approach), and when a tracking system is already in use (AR-based approach).
Affiliation(s)
- Ren Hui Gong
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC 20010
63
Establishing cephalometric landmarks for the translational study of Le Fort-based facial transplantation in swine: enhanced applications using computer-assisted surgery and custom cutting guides. Plast Reconstr Surg 2014; 133:1138-51. [PMID: 24445879 DOI: 10.1097/prs.0000000000000110]
Abstract
BACKGROUND Le Fort-based, maxillofacial allotransplantation is a reconstructive alternative gaining clinical acceptance. However, the vast majority of single-jaw transplant recipients demonstrate less-than-ideal skeletal and dental relationships, with suboptimal aesthetic harmony. The purpose of this study was to investigate reproducible cephalometric landmarks in a large-animal model, where refinement of computer-assisted planning, intraoperative navigational guidance, translational bone osteotomies, and comparative surgical techniques could be performed. METHODS Cephalometric landmarks that could be translated into the human craniomaxillofacial skeleton, and that would remain reliable following maxillofacial osteotomies with midfacial alloflap inset, were sought on six miniature swine. Le Fort I- and Le Fort III-based alloflaps were harvested in swine with osteotomies, and all alloflaps were either autoreplanted or transplanted. Cephalometric analyses were performed on lateral cephalograms preoperatively and postoperatively. Critical cephalometric data sets were identified with the assistance of surgical planning and virtual prediction software and evaluated for reliability and translational predictability. RESULTS Several pertinent landmarks and human analogues were identified, including pronasale, zygion, parietale, gonion, gnathion, lower incisor base, and alveolare. Parietale-pronasale-alveolare and parietale-pronasale-lower incisor base were found to be reliable correlates of sellion-nasion-A point angle and sellion-nasion-B point angle measurements in humans, respectively. CONCLUSIONS There is a set of reliable cephalometric landmarks and measurement angles pertinent for use within a translational large-animal model. These craniomaxillofacial landmarks will enable development of novel navigational software technology, improve cutting guide designs, and facilitate exploration of new avenues for investigation and collaboration.
64
Akter M, Lambert AJ, Pickering MR, Scarvell JM, Smith PN. Robust initialisation for single-plane 3D CT to 2D fluoroscopy image registration. Comput Methods Biomech Biomed Eng Imaging Vis 2014. [DOI: 10.1080/21681163.2014.897649]
65
Otake Y, Wang AS, Webster Stayman J, Uneri A, Kleinszig G, Vogt S, Khanna AJ, Gokaslan ZL, Siewerdsen JH. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation. Phys Med Biol 2013; 58:8535-53. [PMID: 24246386 DOI: 10.1088/0031-9155/58/23/8535]
Abstract
We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen, two CT datasets (supine and prone), and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE < 5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (NVIDIA GeForce GTX 690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and as an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery.
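The multi-start strategy described above can be sketched with the open-source cma package; the cost function below stands in for the negated similarity between the fluoroscopy image and a DRR rendered at a candidate pose, and it, along with the start-perturbation settings, is a placeholder rather than the authors' implementation.

```python
# Sketch: multi-start CMA-ES over a 6-DOF pose, keeping the best restart.
import numpy as np
import cma   # open-source CMA-ES package (pip install cma)

def multistart_registration(cost, pose0, n_starts=10, sigma0=10.0, seed=0):
    """cost: callable mapping a pose vector to a scalar to minimize
    (e.g., negated similarity). pose0: nominal starting pose."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = pose0 + rng.normal(scale=sigma0, size=len(pose0))  # perturbed start
        es = cma.CMAEvolutionStrategy(x0, sigma0, {'verbose': -9})
        while not es.stop():
            xs = es.ask()
            es.tell(xs, [cost(x) for x in xs])
        if es.result.fbest < best_f:
            best_x, best_f = es.result.xbest, es.result.fbest
    return best_x, best_f
```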
Affiliation(s)
- Yoshito Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore MD, USA
66
Lin CC, Lu TW, Shih TF, Tsai TY, Wang TM, Hsu SJ. Intervertebral anticollision constraints improve out-of-plane translation accuracy of a single-plane fluoroscopy-to-CT registration method for measuring spinal motion. Med Phys 2013; 40:031912. [PMID: 23464327 DOI: 10.1118/1.4792309]
Abstract
PURPOSE The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine; and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. METHODS The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopy images. Three variations of the anticollision constraints, namely, T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and the WEMAC methods. RESULTS The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors for the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) more or less unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF method and the R-DOF method, respectively. CONCLUSIONS The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF more or less unaltered. With the submillimeter and subdegree accuracy, the WEMAC method was considered accurate for measuring 3D intervertebral kinematics during various functional activities for research and clinical applications.
Affiliation(s)
- Cheng-Chung Lin
- Institute of Biomedical Engineering, National Taiwan University, Taiwan 10051, Republic of China
67
Lin CC, Lu TW, Wang TM, Hsu CY, Shih TF. Comparisons of surface vs. volumetric model-based registration methods using single-plane vs. bi-plane fluoroscopy in measuring spinal kinematics. Med Eng Phys 2014; 36:267-74. [PMID: 24011956 DOI: 10.1016/j.medengphy.2013.08.011]
Abstract
Several 2D-to-3D image registration methods are available for measuring 3D vertebral motion, but their performance has not been evaluated under the same experimental protocol. In this study, four major types of fluoroscopy-to-CT registration methods, with different use of surface vs. volumetric models, and single-plane vs. bi-plane fluoroscopy, were evaluated: STS (surface, single-plane), VTS (volumetric, single-plane), STB (surface, bi-plane) and VTB (volumetric, bi-plane). Two similarity measures were used: 'Contour Difference' for STS and STB and 'Weighted Edge-Matching Score' for VTS and VTB. Two cadaveric porcine cervical spines positioned in a box filled with paraffin and embedded with four radiopaque markers were CT scanned to obtain vertebral models and marker coordinates, and imaged at ten static positions using bi-plane fluoroscopy for subsequent registrations using different methods. The registered vertebral poses were compared to the gold standard poses defined by the marker positions determined using CT and Roentgen stereophotogrammetry analysis. The VTB was found to have the highest precision (translation: 0.4 mm; rotation: 0.3°), comparable with the VTS in rotations (0.3°) and with the STB in translations (0.6 mm). The STS had the lowest precision (translation: 4.1 mm; rotation: 2.1°).
Affiliation(s)
- Cheng-Chung Lin
- Institute of Biomedical Engineering, National Taiwan University, Taiwan, ROC
- Tung-Wu Lu
- Institute of Biomedical Engineering, National Taiwan University, Taiwan, ROC; Department of Orthopaedic Surgery, School of Medicine, National Taiwan University, Taiwan, ROC
- Ting-Ming Wang
- Department of Orthopaedic Surgery, National Taiwan University Hospital, Taiwan, ROC
- Chao-Yu Hsu
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University Hospital Hsin-Chu Branch, Taiwan, ROC; Department of Radiology, College of Medicine, National Taiwan University, Taiwan, ROC
- Ting-Fang Shih
- Department of Radiology, College of Medicine, National Taiwan University, Taiwan, ROC; Department of Medical Imaging, National Taiwan University Hospital, Taiwan, ROC
68
Rodriguez y Baena F, Hawke T, Jakopec M. A bounded iterative closest point method for minimally invasive registration of the femur. Proc Inst Mech Eng H 2013; 227:1135-44. [PMID: 23959859 DOI: 10.1177/0954411913500948]
Abstract
This article describes a novel method for image-based, minimally invasive registration of the femur, for application to computer-assisted unicompartmental knee arthroplasty. The method is adapted from the well-known iterative closest point algorithm. By utilising an estimate of the hip centre on both the preoperative model and intraoperative patient anatomy, the proposed 'bounded' iterative closest point algorithm robustly produces accurate varus-valgus and anterior-posterior femoral alignment with minimal distal access requirements. Similar to the original iterative closest point implementation, the bounded iterative closest point algorithm converges monotonically to the closest minimum, and the presented case includes a common method for global minimum identification. The bounded iterative closest point method has been shown to have exceptional resistance to noise during feature acquisition through simulations and in vitro plastic bone trials, where its performance was compared to a standard form of the iterative closest point algorithm.
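The standard iterative-closest-point building block that the 'bounded' variant extends is sketched below; treating the hip centre as a persistently paired correspondence is an illustrative reading of the idea, not the authors' exact formulation.

```python
# Sketch: ICP with closest-point matching and an SVD-based rigid fit,
# plus a hip-centre pair that always participates in the fit.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_with_hip_centre(points, model, hip_src, hip_dst, iters=50):
    """points: sampled surface points; model: preoperative model points;
    hip_src/hip_dst: hip-centre estimates in the two frames."""
    tree = cKDTree(model)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = points @ R.T + t
        matches = model[tree.query(moved)[1]]
        R, t = best_rigid_transform(np.vstack([points, hip_src]),
                                    np.vstack([matches, hip_dst]))
    return R, t
```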
69
Armand M, Otake Y, Cheung PYS, Taylor RH. Robustness and accuracy of feature-based single image 2-D-3-D registration without correspondences for image-guided intervention. IEEE Trans Biomed Eng 2014; 61:149-61. [PMID: 23955696 DOI: 10.1109/tbme.2013.2278619]
Abstract
2-D-to-3-D registration is critical and fundamental in image-guided interventions. It can be achieved from a single image using paired point correspondences between the object and the image. The common assumption that such correspondences can readily be established does not necessarily hold for image-guided interventions. Intraoperative image clutter and an imperfect feature extraction method may introduce false detection and, due to the physics of X-ray imaging, the 2-D image point features may be indistinguishable from each other and/or obscured by anatomy, causing false detection of the point features. These factors create difficulties in establishing correspondences between image features and 3-D data points. In this paper, we propose an accurate, robust, and fast method to accomplish 2-D-3-D registration using a single image without the need for establishing paired correspondences in the presence of false detection. We formulate 2-D-3-D registration as a maximum likelihood estimation problem, which is then solved by coupling expectation maximization with particle swarm optimization. The proposed method was evaluated in a phantom and a cadaver study. In the phantom study, it achieved subdegree rotation errors and submillimeter in-plane (X-Y plane) translation errors. In both studies, it outperformed the state-of-the-art methods that do not use paired correspondences and achieved the same accuracy as a state-of-the-art globally optimal method that uses correct paired correspondences.
70
Armiger RS, Otake Y, Iwaskiw AS, Wickwire AC, Ott KA, Voo LM, Armand M, Merkle AC. Biomechanical response of blast loading to the head using 2D-3D cineradiographic registration. Springer; 2013. [DOI: 10.1007/978-3-319-00777-9_18]
71
A particle model for prediction of cement infiltration of cancellous bone in osteoporotic bone augmentation. PLoS One 2013; 8:e67958. [PMID: 23840794 PMCID: PMC3693961 DOI: 10.1371/journal.pone.0067958]
Abstract
Femoroplasty is a potential preventive treatment for osteoporotic hip fractures. It involves augmenting the mechanical properties of the femur by injecting polymethylmethacrylate (PMMA) bone cement. To reduce the risks involved and maximize the outcome, however, the procedure needs to be carefully planned and executed. An important part of the planning system is predicting infiltration of cement into the porous medium of cancellous bone. We used the method of Smoothed Particle Hydrodynamics (SPH) to model the flow of PMMA inside porous media. We modified the standard formulation of SPH to incorporate the extreme viscosities associated with bone cement. Darcy creeping flow of fluids through isotropic porous media was simulated and the results were compared with those reported in the literature. Further validation involved injecting PMMA cement inside porous foam blocks (osteoporotic cancellous bone surrogates) and simulating the injections using our proposed SPH model. Millimeter accuracy was obtained in comparing the simulated and actual cement shapes. Also, strong correlations were found between the simulated and the experimental data of spreading distance (R² = 0.86) and normalized pressure (R² = 0.90). Results suggest that the proposed model is suitable for use in an osteoporotic femoral augmentation planning framework.
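For context, the creeping-flow regime simulated here is governed by Darcy's law, and SPH approximates a field A at particle i by a kernel-weighted sum over neighbours (standard formulas, not taken from the paper):

```latex
\mathbf{q} = -\frac{k}{\mu}\,\nabla p ,
\qquad
A(\mathbf{r}_i) \;\approx\; \sum_{j}\frac{m_j}{\rho_j}\,A_j\,
W\!\left(\lVert \mathbf{r}_i-\mathbf{r}_j\rVert ,\, h\right),
```

where q is the Darcy flux, k the permeability, μ the dynamic viscosity, p the pressure, m_j and ρ_j the neighbour masses and densities, and W a smoothing kernel with support radius h.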
72
Varnavas A, Carrell T, Penney G. Increasing the automation of a 2D-3D registration system. IEEE Trans Med Imaging 2013; 32:387-99. [PMID: 23362246 DOI: 10.1109/tmi.2012.2227337]
Abstract
Routine clinical use of 2D-3D registration algorithms for Image Guided Surgery remains limited. A key aspect for routine clinical use of this technology is its degree of automation, i.e., the amount of necessary knowledgeable interaction between the clinicians and the registration system. Current image-based registration approaches usually require knowledgeable manual interaction during two stages: for initial pose estimation and for verification of produced results. We propose four novel techniques, particularly suited to vertebra-based registration systems, which can significantly automate both of the above stages. Two of these techniques are based upon the intraoperative "insertion" of a virtual fiducial marker into the preoperative data. The remaining two techniques use the final registration similarity value between multiple CT vertebrae and a single fluoroscopy vertebra. The proposed methods were evaluated with data from 31 operations (31 CT scans, 419 fluoroscopy images). Results show these methods can remove the need for manual vertebra identification during initial pose estimation, and were also very effective for result verification, producing a combined true positive rate of 100% and false positive rate equal to zero. This large decrease in required knowledgeable interaction is an important contribution aiming to enable more widespread use of 2D-3D registration technology.
Affiliation(s)
- Andreas Varnavas
- Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King’s College London, King’s Health Partners, St. Thomas’ Hospital, London, UK.
73
Otake Y, Schafer S, Stayman JW, Zbijewski W, Kleinszig G, Graumann R, Khanna AJ, Siewerdsen JH. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery. Phys Med Biol 2012; 57:5485-508. [PMID: 22864366 DOI: 10.1088/0031-9155/57/17/5485]
Abstract
Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck), for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
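The accuracy metric reported above, projection distance error, amounts to projecting a labeled 3D point with the registered projection matrix and measuring its 2D offset from the true location; a minimal sketch with assumed inputs (not the LevelCheck code) follows.

```python
# Sketch: projection distance error for a labeled 3D point.
import numpy as np

def projection_distance_error(P, X, x_true, pixel_spacing=1.0):
    """P: 3x4 projection matrix from CT to the fluoroscopy frame;
    X: 3D point (e.g., a vertebral centroid); x_true: true 2D point (pixels)."""
    u = P @ np.append(X, 1.0)              # homogeneous projection
    x_proj = u[:2] / u[2]
    return np.linalg.norm(x_proj - np.asarray(x_true)) * pixel_spacing
```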
Affiliation(s)
- Y Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
74
Fisher M, Dorgham O, Laycock SD. Fast reconstructed radiographs from octree-compressed volumetric data. Int J Comput Assist Radiol Surg 2013; 8:313-22. [PMID: 22821505 DOI: 10.1007/s11548-012-0783-5]
Abstract
PURPOSE Simulated 2D X-ray images called digitally reconstructed radiographs (DRRs) have important applications within medical image registration frameworks, where they are compared with reference X-rays or used in implementations of digital tomosynthesis (DTS). However, rendering DRRs from a CT volume is computationally demanding and relatively slow using the conventional ray-casting algorithm. Image-guided radiation therapy systems using DTS to verify target location require a large number of DRRs to be precomputed since there is insufficient time within the automatic image registration procedure to generate DRRs and search for an optimal pose. METHOD DRRs were rendered from octree-compressed CT data. Previous work showed that octree-compressed volumes rendered by conventional ray casting deliver a registration with acceptable clinical accuracy, but efficiently rendering the irregular grid of an octree data structure is a challenge for conventional ray casting. We address this by using vertex and fragment shaders of modern graphics processing units (GPUs) to directly project internal spaces of the octree, represented by textured particle sprites, onto the view plane. The texture is procedurally generated and depends on the CT pose. RESULTS The performance of this new algorithm was found to be 4 times faster than that of a ray-casting algorithm implemented using NVIDIA™ Compute Unified Device Architecture (CUDA™) on an equivalent GPU (~95% octree compression). Rendering artifacts are apparent (consistent with other splatting algorithms), but image quality tends to improve with compression and fewer particles are needed. A peak signal-to-noise ratio analysis confirmed that the images rendered from compressed volumes were of marginally better quality than those rendered using Gaussian footprints. CONCLUSIONS Using octree-encoded DRRs within a 2D/3D registration framework indicated that the approach may be useful in accelerating automatic image registration.
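For contrast with the splatting approach described above, a toy line-integral DRR ray caster over a dense CT volume can be written as follows; the perspective geometry and uniform sampling here are illustrative assumptions and do not reproduce the paper's octree renderer.

```python
# Toy DRR ray caster: sum attenuation along source-to-pixel rays.
import numpy as np
from scipy.ndimage import map_coordinates

def drr(volume, src, det_origin, det_u, det_v, nu, nv, n_samples=256):
    """volume: 3D array indexed (z, y, x) in voxel units; src: source position;
    det_origin/det_u/det_v: detector corner and in-plane step vectors, all in
    the same voxel-index coordinates as the volume axes."""
    iu, iv = np.meshgrid(np.arange(nu), np.arange(nv), indexing='xy')
    pix = det_origin + iu[..., None] * det_u + iv[..., None] * det_v  # (nv,nu,3)
    t = np.linspace(0.0, 1.0, n_samples)                              # ray param
    pts = src + t[:, None, None, None] * (pix - src)   # (n_samples, nv, nu, 3)
    coords = np.moveaxis(pts, -1, 0)                    # (3, n_samples, nv, nu)
    samples = map_coordinates(volume, coords, order=1, mode='constant')
    step = np.linalg.norm(pix - src, axis=-1) / n_samples             # per ray
    return samples.sum(axis=0) * step                                 # (nv, nu)
```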
Affiliation(s)
- Mark Fisher
- School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK.
75
Navab N, Taylor R, Yang GZ. Guest editorial: special issue on interventional imaging. IEEE Trans Med Imaging 2012; 31:857-9. [PMID: 22582415 DOI: 10.1109/tmi.2012.2189153]