51.

52. An augmented reality C-arm for intraoperative assessment of the mechanical axis: a preclinical study. Int J Comput Assist Radiol Surg 2016;11:2111-2117. DOI: 10.1007/s11548-016-1426-z.
53. Wang J, Suenaga H, Yang L, Kobayashi E, Sakuma I. Video see-through augmented reality for oral and maxillofacial surgery. Int J Med Robot 2016;13. PMID: 27283505. DOI: 10.1002/rcs.1754.
Abstract
BACKGROUND Oral and maxillofacial surgery has not benefited from image guidance techniques owing to limitations in image registration. METHODS A real-time markerless image registration method is proposed by integrating a shape matching method into a 2D tracking framework. Registration is performed by matching the patient's teeth model with the intraoperative video to obtain its pose. The resulting pose is used to overlay relevant models from the same CT space on the camera video for augmented reality. RESULTS The proposed system was evaluated on mandible/maxilla phantoms, a volunteer and clinical data. Experimental results show that the target overlay error is about 1 mm, and the registration update runs at 3-5 frames per second with a 4K camera. CONCLUSIONS The significance of this work lies in its simplicity in the clinical setting and its seamless integration into the current medical procedure with satisfactory response time and overlay accuracy.
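The overlay step reduces to projecting CT-space model points into the video frame once the pose is estimated. A minimal pinhole-projection sketch (the intrinsics K, pose (R, t), and point cloud below are hypothetical stand-ins, not values from the paper):

```python
import numpy as np

def project_points(model_pts, R, t, K):
    """Project Nx3 CT-space points to pixels via rigid pose (R, t) and intrinsics K."""
    cam_pts = model_pts @ R.T + t        # CT space -> camera space
    uvw = cam_pts @ K.T                  # camera space -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide

# Hypothetical values for illustration only.
K = np.array([[1800.0, 0.0, 1920.0],
              [0.0, 1800.0, 1080.0],
              [0.0, 0.0, 1.0]])                  # rough 4K-camera intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 150.0])    # model 150 mm in front of camera
teeth_pts = np.random.rand(200, 3) * 20.0        # stand-in for a CT teeth model (mm)
print(project_points(teeth_pts, R, t, K)[:3])
```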
Affiliation(s)
- Junchen Wang: School of Mechanical Engineering and Automation, Beihang University, Beijing, China; Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Hideyuki Suenaga: Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Liangjing Yang: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
54. Loy Rodas N, Barrera F, Padoy N. See It With Your Own Eyes: Markerless Mobile Augmented Reality for Radiation Awareness in the Hybrid Room. IEEE Trans Biomed Eng 2016;64:429-440. PMID: 27164565. DOI: 10.1109/TBME.2016.2560761.
Abstract
GOAL We present an approach to provide awareness of the harmful ionizing radiation generated during X-ray-guided minimally invasive procedures. METHODS A hand-held screen is used to display, directly in the user's view, information related to radiation safety in a mobile augmented reality (AR) manner. Instead of using markers, we propose a method to track the observer's viewpoint, which relies on the use of multiple RGB-D sensors and combines equipment detection for tracking initialization with a KinectFusion-like approach for frame-to-frame tracking. Two of the sensors are ceiling-mounted and a third one is attached to the hand-held screen. The ceiling cameras keep an updated model of the room's layout, which is used to exploit context information and improve the relocalization procedure. RESULTS The system is evaluated on a multicamera dataset generated inside an operating room (OR) and containing ground-truth poses of the AR display. This dataset includes a wide variety of sequences with different scene configurations, occlusions, motion in the scene, and abrupt viewpoint changes. Qualitative results illustrating the different AR visualization modes for radiation awareness provided by the system are also presented. CONCLUSION Our approach allows the user to benefit from a large AR visualization area and permits recovery from tracking failure caused by large motion or changes in the scene simply by looking at a piece of equipment. SIGNIFICANCE The system enables the user to see the 3-D propagation of radiation, the medical staff's exposure, and/or the doses deposited on the patient's surface as seen through their own eyes.
55. Lee SC, Fuerst B, Fotouhi J, Fischer M, Osgood G, Navab N. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization. Int J Comput Assist Radiol Surg 2016;11:967-975. DOI: 10.1007/s11548-016-1396-1.

56. Preclinical usability study of multiple augmented reality concepts for K-wire placement. Int J Comput Assist Radiol Surg 2016;11:1007-1014. PMID: 26995603. DOI: 10.1007/s11548-016-1363-x.
Abstract
PURPOSE In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which increases dramatically in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which results in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon's view and assist accurate placement of tools. METHOD We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placement. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization. RESULTS The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and surgical task load, observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that mixed reality visualization leads to significantly improved efficiency. CONCLUSION The 3D visualization of patient, tool, and DRR shows clear advantages over conventional X-ray imaging and provides intuitive feedback for placing medical tools correctly and efficiently.
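The statistical comparison described in the RESULTS could be sketched, for example, with a non-parametric test across the three systems (the abstract does not name the test used; the durations below are synthetic, hypothetical values, not the paper's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical task durations (s) per visualization system.
xray_only = rng.normal(420, 60, 7)
video_2d  = rng.normal(360, 50, 7)
mixed_3d  = rng.normal(300, 45, 7)

h, p = stats.kruskal(xray_only, video_2d, mixed_3d)
print(f"H = {h:.2f}, p = {p:.4f}")  # small p -> at least one system differs
```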
57. Pourmorteza A, Dang H, Siewerdsen JH, Stayman JW. Reconstruction of difference in sequential CT studies using penalized likelihood estimation. Phys Med Biol 2016;61:1986-2002. PMID: 26894795. PMCID: PMC4948746. DOI: 10.1088/0031-9155/61/5/1986.
Abstract
Characterization of anatomical change and other differences is important in sequential computed tomography (CT) imaging, where a high-fidelity patient-specific prior image is typically present, but is not used, in the reconstruction of subsequent anatomical states. Here, we introduce a penalized likelihood (PL) method called reconstruction of difference (RoD) to directly reconstruct a difference image volume using both the current projection data and the (unregistered) prior image integrated into the forward model for the measurement data. The algorithm uses an alternating minimization to find both the registration and reconstruction estimates. This formulation allows direct control over the image properties of the difference image, permitting regularization strategies that inhibit noise and structural differences due to inconsistencies between the prior image and the current data. Additionally, if the change is known to be local, RoD allows local acquisition and reconstruction, as opposed to traditional model-based approaches that require a full support field of view (or other modifications). We compared the performance of RoD to a standard PL algorithm in simulation studies and using test-bench cone-beam CT data. The performances of the local and global RoD approaches were similar, with local RoD providing a significant computational speedup. In comparison across a range of data with differing fidelity, the local RoD approach consistently showed lower error (with respect to a truth image) than PL in both noisy-data and sparsely sampled projection scenarios. In a study of the prior-image registration performance of RoD, a clinically reasonable capture range was demonstrated, and the reconstruction error for CT data was 35% and 20% less than filtered back-projection for RoD and PL, respectively. RoD has potential for delivering high-quality difference images in a range of sequential clinical scenarios, including image-guided surgeries and treatments where accurate and quantitative assessments of anatomical change are desired.
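Schematically (a generic sketch, not the authors' exact notation), RoD folds the registered prior image into the forward model and regularizes the difference image directly:

\hat{\mu}_{\Delta} = \arg\max_{\mu_{\Delta},\,\lambda} \; L\big(\mu_{\Delta} + W(\lambda)\,\mu_p;\; y\big) \;-\; \beta\, R(\mu_{\Delta})

where L is the log-likelihood of the projection data y given the forward projection of the registered prior W(\lambda)\,\mu_p plus the difference \mu_{\Delta}, R is a regularizer acting on the difference image alone, and the alternating minimization updates \lambda (registration) and \mu_{\Delta} (reconstruction) in turn.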
Affiliation(s)
- A Pourmorteza: Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20814, USA
- H Dang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- J W Stayman: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
58. Application of a New Wearable Augmented Reality Video See-Through Display to Aid Percutaneous Procedures in Spine Surgery. Lecture Notes in Computer Science 2016. DOI: 10.1007/978-3-319-40651-0_4.
59. Dang H, Siewerdsen JH, Stayman JW. Prospective regularization design in prior-image-based reconstruction. Phys Med Biol 2015;60:9515-9536. PMID: 26606653. PMCID: PMC4833649. DOI: 10.1088/0031-9155/60/24/9515.
Abstract
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method relies on an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric based on this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to a traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, the optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
Affiliation(s)
- Hao Dang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
60.
61. Kim DN, Chae YS, Kim MY. X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery. Int J Comput Assist Radiol Surg 2015;11:529-541. DOI: 10.1007/s11548-015-1290-2.
62. Fallavollita P, Wang L, Weidert S, Navab N. Augmented Reality in Orthopaedic Interventions and Education. 2015. DOI: 10.1007/978-3-319-23482-3_13.
63. Chen X, Xu L, Wang Y, Wang H, Wang F, Zeng X, Wang Q, Egger J. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J Biomed Inform 2015;55:124-131. PMID: 25882923. DOI: 10.1016/j.jbi.2015.04.003.
Abstract
Surgical navigation systems have experienced tremendous development over the past decades, minimizing the risks and improving the precision of surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In an AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of surgery. With this system, comprising the calibration of instruments, registration, and the calibration of the HMD, the 3D virtual critical anatomical structures in the head-mounted display are aligned with the actual structures of the patient in the real-world scenario during the intra-operative motion tracking process. The accuracy verification experiment demonstrated that the mean distance and angular errors were respectively 0.809 ± 0.05 mm and 1.038° ± 0.05°, which was sufficient to meet the clinical requirements.
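The registration step (aligning tracked patient-space fiducials with their CT-space counterparts) is classically solved with an SVD-based least-squares rigid fit; a self-contained sketch, with hypothetical fiducial coordinates (the paper does not publish its point sets):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch/Horn)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

# Hypothetical fiducials: CT-space points and their tracked patient-space mates.
ct = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
patient = ct @ R_true.T + np.array([10.0, -5.0, 30.0])
R, t = rigid_register(ct, patient)
print(np.allclose(ct @ R.T + t, patient))  # True
```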
Affiliation(s)
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Lu Xu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiping Wang: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Huixiang Wang: Shanghai First People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fang Wang: Shanghai First People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiangsen Zeng: Shanghai First People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qiugen Wang: Shanghai First People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jan Egger: Faculty of Computer Science and Biomedical Engineering, Institute for Computer Graphics and Vision, Graz University of Technology, Graz, Austria
64. Londei R, Esposito M, Diotte B, Weidert S, Euler E, Thaller P, Navab N, Fallavollita P. Intra-operative augmented reality in distal locking. Int J Comput Assist Radiol Surg 2015;10:1395-1403. PMID: 25814098. DOI: 10.1007/s11548-015-1169-2.
Abstract
PURPOSE To design an augmented reality solution that assists surgeons during the distal locking of intramedullary nailing procedures. METHOD Traditionally, the procedure is performed under X-ray guidance and requires a significant amount of time and radiation exposure. To avoid these complications, we propose video guidance that allows surgeons to achieve both the down-the-beam position of the intramedullary nail and its subsequent locking. For the down-the-beam position, the IM nail pose in X-ray is calculated using a 2D/3D registration scheme and later related to the patient leg pose, which is calculated using video-tracked AR markers. For distal locking, surgeons use an augmented radiolucent drill whose tip position is detected and tracked in real time under video guidance. VALIDATION To evaluate the feasibility of our solution, we performed a preclinical study on dry bone phantoms with the participation of four clinicians. RESULTS Participants achieved a 100% success rate in the down-the-beam positioning and a 93% success rate in distal locking, using only two X-ray images in 100 s. CONCLUSIONS We confirmed that intra-operative navigation using augmented reality provides an alternative way to perform distal locking in a safe and timely manner.
Affiliation(s)
- Roberto Londei: Chair for Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
65. Diotte B, Fallavollita P, Wang L, Weidert S, Euler E, Thaller P, Navab N. Multi-modal intra-operative navigation during distal locking of intramedullary nails. IEEE Trans Med Imaging 2015;34:487-495. PMID: 25296403. DOI: 10.1109/TMI.2014.2361155.
Abstract
The interlocking of intramedullary nails is a technically demanding procedure that involves a considerable number of X-ray acquisitions; one study lists as many as 48 to successfully complete the procedure and fix screws into the 4-6 mm distal holes of the nail. We propose an augmented radiolucent drill to assist surgeons in completing the distal locking procedure without any additional X-ray acquisitions. Using an augmented reality fluoroscope that co-registers optical and X-ray images, we exploit solely the optical images to detect the augmented radiolucent drill and estimate its tip position in real time. Consequently, surgeons are able to maintain the down-the-beam positioning required to drill the screws into the nail holes successfully. To evaluate the accuracy of the proposed augmented drill, we performed a preclinical study involving six surgeons who were asked to perform distal locking on dry bone phantoms. Surgeons completed distal locking 98.3% of the time using only a single X-ray image, with an average navigation time of 1.4 ± 0.9 min per hole.
66. Dang H, Wang AS, Sussman MS, Siewerdsen JH, Stayman JW. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images. Phys Med Biol 2014;59:4799-4826. PMID: 25097144. PMCID: PMC4142353. DOI: 10.1088/0031-9155/59/17/4799.
Abstract
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form deformation model in the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and the reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
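In outline (a sketch consistent with the abstract, not the paper's exact notation), the joint objective couples the reconstruction with the B-spline deformation parameters:

(\hat{\mu}, \hat{\lambda}) = \arg\max_{\mu,\,\lambda} \; L(\mu;\; y) \;-\; \beta_R\, R(\mu) \;-\; \beta_P\, P\big(\mu - T(\lambda)\,\mu_p\big)

where T(\lambda) applies the free-form deformation to the prior image \mu_p, R and P are regularization penalties on the image and on its disagreement with the deformed prior, and alternating updates of \lambda (registration) and \mu (reconstruction) climb the joint objective.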
Affiliation(s)
- H Dang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
67. Chen X, Naik H, Wang L, Navab N, Fallavollita P. Video-guided calibration of an augmented reality mobile C-arm. Int J Comput Assist Radiol Surg 2014;9:987-996. PMID: 24664269. DOI: 10.1007/s11548-014-0995-y.
Abstract
PURPOSE The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray-opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. This "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time-consuming and radiation intensive. An improved process was developed and tested for C-arm calibration. METHODS Video guidance was used to drive the calibration procedure to minimize both the X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users with varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) participated in the evaluation. RESULTS The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. CONCLUSION A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.
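The homography at the core of the method maps points on the ring plane between the X-ray and video images; a sketch of estimating and applying it with OpenCV (the point correspondences below are hypothetical pixel coordinates, not the paper's data):

```python
import numpy as np
import cv2

# Hypothetical ring centres seen in the X-ray image and in the video image.
xray_pts  = np.array([[120, 80], [540, 95], [530, 420], [110, 400]], np.float32)
video_pts = np.array([[135, 70], [560, 90], [548, 440], [125, 415]], np.float32)

H, _ = cv2.findHomography(xray_pts, video_pts)  # valid only on the ring plane

def xray_to_video(pt, H):
    """Map an X-ray pixel into the video image via the plane homography."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

print(xray_to_video((300, 250), H))
```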
Affiliation(s)
- Xin Chen: Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Munich, Germany
- Hemal Naik: Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Munich, Germany
- Lejing Wang: Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Munich, Germany
- Nassir Navab: Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Munich, Germany
- Pascal Fallavollita: Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Munich, Germany
68. Deng W, Li F, Wang M, Song Z. Easy-to-use augmented reality neuronavigation using a wireless tablet PC. Stereotact Funct Neurosurg 2013;92:17-24. PMID: 24216673. DOI: 10.1159/000354816.
Abstract
BACKGROUND/AIMS Augmented reality (AR) technology solves the problem of view switching in traditional image-guided neurosurgery systems by integrating computer-generated objects into the actual scene. However, the state-of-the-art AR solution using head-mounted displays has not been widely accepted in clinical applications because it causes some inconvenience for the surgeon during surgery. METHODS In this paper, we present a Tablet-AR system that transmits navigation information to a movable tablet PC via a wireless local area network and overlays this information on the tablet screen, which simultaneously displays the actual scene captured by its back-facing camera. With this system, the surgeon can directly observe the intracranial anatomical structure of the patient with the overlaid virtual projection images to guide the surgery. RESULTS The alignment errors in the skull specimen study and clinical experiment were 4.6 pixels (approx. 1.6 mm) and 6 pixels (approx. 2.1 mm), respectively. The system was also used for navigation in 2 actual clinical cases of neurosurgery, which demonstrated its feasibility in a clinical application. CONCLUSIONS The easy-to-use Tablet-AR system presented in this study is accurate and feasible in clinical applications and has the potential to become a routine device in AR neuronavigation.
Affiliation(s)
- Weiwei Deng: Digital Medical Research Center, Shanghai Medical School, Fudan University, Shanghai, PR China
69. Abe Y, Sato S, Kato K, Hyakumachi T, Yanagibashi Y, Ito M, Abumi K. A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note. J Neurosurg Spine 2013;19:492-501. PMID: 23952323. DOI: 10.3171/2013.7.SPINE12917.
Abstract
Augmented reality (AR) is an imaging technology by which virtual objects are overlaid onto images of real objects captured in real time by a tracking camera. This study aimed to introduce a novel AR guidance system called virtual protractor with augmented reality (VIPAR) to visualize a needle trajectory in 3D space during percutaneous vertebroplasty (PVP). The AR system used for this study comprised a head-mounted display (HMD) with a tracking camera and a marker sheet. An augmented scene was created by overlaying the preoperatively generated needle trajectory path onto a marker detected on the patient using AR software, thereby providing the surgeon with augmented views in real time through the HMD. The accuracy of the system was evaluated using a computer-generated simulation model in a spine phantom and was also evaluated clinically in 5 patients. In the 40 spine phantom trials, the error of the insertion angle (EIA), defined as the difference between the attempted angle and the insertion angle, was evaluated using 3D CT scanning. Computed tomography analysis of the 40 spine phantom trials showed that the EIA in the axial plane significantly improved when VIPAR was used compared with when it was not used (0.96° ± 0.61° vs 4.34° ± 2.36°, respectively). The same held true for the EIA in the sagittal plane (0.61° ± 0.70° vs 2.55° ± 1.93°, respectively). In the clinical evaluation of the AR system, 5 patients with osteoporotic vertebral fractures underwent VIPAR-guided PVP from October 2011 to May 2012. The postoperative EIA was evaluated using CT. The clinical results of the 5 patients showed that the EIA in all 10 needle insertions was 2.09° ± 1.3° in the axial plane and 1.98° ± 1.8° in the sagittal plane. There was no pedicle breach or leakage of polymethylmethacrylate. VIPAR was successfully used to assist in needle insertion during PVP by providing the surgeon with an ideal insertion point and needle trajectory through the HMD. The findings indicate that AR guidance technology can become a useful assistive device during spine surgeries requiring percutaneous procedures.
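The error of the insertion angle reduces to the angle between the planned and achieved trajectory directions; a minimal sketch with hypothetical direction vectors (the paper reports EIA separately in the axial and sagittal planes; this shows only the basic 3D computation):

```python
import numpy as np

def insertion_angle_error(planned, achieved):
    """Angle (degrees) between planned and achieved needle directions."""
    p = planned / np.linalg.norm(planned)
    a = achieved / np.linalg.norm(achieved)
    return np.degrees(np.arccos(np.clip(np.dot(p, a), -1.0, 1.0)))

# Hypothetical direction vectors, illustration only.
planned  = np.array([0.00, 0.34, -0.94])
achieved = np.array([0.02, 0.30, -0.95])
print(f"EIA = {insertion_angle_error(planned, achieved):.2f} deg")
```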
Affiliation(s)
- Yuichiro Abe: Department of Orthopedic Surgery, Eniwa Hospital, Eniwa, Hokkaido
70. Wang L, Fallavollita P, Brand A, Erat O, Weidert S, Thaller PH, Euler E, Navab N. Intra-op measurement of the mechanical axis deviation: an evaluation study on 19 human cadaver legs. Med Image Comput Comput Assist Interv 2012;15:609-616. PMID: 23286099. DOI: 10.1007/978-3-642-33418-4_75.
Abstract
The alignment of the lower limb in high tibial osteotomy (HTO) or total knee arthroplasty (TKA) must be determined intraoperatively. One way to do so is to determine the mechanical axis deviation (MAD), for which a tolerance of 10 mm is widely accepted. Many techniques are used in clinical practice, such as visual inspection, the cable method, a grid with lead-impregnated reference lines, or, more recently, navigation systems. Each has its disadvantages, including limited reliability of the MAD measurement, excess radiation, prolonged operation time, complicated setup, and high cost. To alleviate such shortcomings, we propose a novel clinical protocol that allows quick and accurate intraoperative calculation of the MAD. This is achieved by an X-ray stitching method requiring only three X-ray images placed into a panoramic image frame during the entire procedure. The method has been systematically analyzed in a simulation framework in order to investigate its accuracy and robustness. Furthermore, we validated our protocol via a preclinical study comprising 19 human cadaver legs. Four surgeons determined MAD measurements using our X-ray panorama and compared these values to a gold-standard CT-based technique. The maximum average MAD error was 3.5 mm, which shows great potential for the technique.
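Once the hip, knee, and ankle centres are digitized on the stitched panorama, the MAD is the perpendicular distance from the knee centre to the hip-ankle line; a small geometric sketch with hypothetical landmark coordinates:

```python
import numpy as np

def mechanical_axis_deviation(hip, knee, ankle):
    """Perpendicular distance (mm) from the knee centre to the hip-ankle line."""
    hip, knee, ankle = (np.asarray(p, float) for p in (hip, knee, ankle))
    axis, rel = ankle - hip, knee - hip
    cross = axis[0] * rel[1] - axis[1] * rel[0]  # 2D cross product (z-component)
    return abs(cross) / np.linalg.norm(axis)

# Hypothetical frontal-plane coordinates (mm) from a stitched panorama.
print(mechanical_axis_deviation(hip=(0.0, 0.0), knee=(12.0, 430.0),
                                ankle=(8.0, 860.0)))  # ~8 mm
```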
Affiliation(s)
- Lejing Wang: Chair for Computer Aided Medical Procedures (CAMP), TU Munich, Germany
71. Vitiello V, Lee SL, Cundy TP, Yang GZ. Emerging robotic platforms for minimally invasive surgery. IEEE Rev Biomed Eng 2012;6:111-126. PMID: 23288354. DOI: 10.1109/RBME.2012.2236311.
Abstract
Recent technological advances in surgery have resulted in the development of a range of new techniques that have reduced patient trauma, shortened hospitalization, and improved diagnostic accuracy and therapeutic outcome. Despite the many appreciated benefits of minimally invasive surgery (MIS) compared to traditional approaches, there are still significant drawbacks associated with conventional MIS including poor instrument control and ergonomics caused by rigid instrumentation and its associated fulcrum effect. The use of robot assistance has helped to realize the full potential of MIS with improved consistency, safety and accuracy. The development of articulated, precision tools to enhance the surgeon's dexterity has evolved in parallel with advances in imaging and human-robot interaction. This has improved hand-eye coordination and manual precision down to micron scales, with the capability of navigating through complex anatomical pathways. In this review paper, clinical requirements and technical challenges related to the design of robotic platforms for flexible access surgery are discussed. Allied technical approaches and engineering challenges related to instrument design, intraoperative guidance, and intelligent human-robot interaction are reviewed. We also highlight emerging designs and research opportunities in the field by assessing the current limitations and open technical challenges for the wider clinical uptake of robotic platforms in MIS.
72. Dang H, Otake Y, Schafer S, Stayman JW, Kleinszig G, Siewerdsen JH. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance. Med Phys 2012;39:6484-6498. PMID: 23039683. PMCID: PMC3477200. DOI: 10.1118/1.4754589.
Abstract
PURPOSE Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. METHODS Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers both in x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between the forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, then backprojects them to 3D image coordinates based on the C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. RESULTS The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of the C-arm isocenter. Marker localization in projection data was robust across all anatomical sites, including challenging scenarios involving the presence of interventional tools. The reprojection error of marker localization was independent of the distance of the ARM from isocenter, and the overall TRE was dominated by the configuration of individual fiducials and the distance from the target, as predicted by theory. The median TRE increased with greater ARM-to-isocenter distance (e.g., for the Free-Form method, TRE increased from 0.78 mm to 2.04 mm at distances of ∼75 mm and 370 mm, respectively). The median TRE within ∼200 mm distance was consistently lower than that of the manual method (TRE = 0.82 mm). Registration performance was independent of anatomical site (head, thorax, and abdomen). The Free-Form method demonstrated a statistically significant improvement (p = 0.0044) in reproducibility compared to manual registration (0.22 mm versus 0.30 mm, respectively). CONCLUSIONS Automatic image-to-world registration methods demonstrate the potential for improved accuracy, reproducibility, and workflow in CBCT-guided procedures. A Free-Form method was shown to exhibit robustness against anatomical site, with comparable or improved TRE compared to manual registration. It was also comparable or superior in performance to a Known-Model method in which the ARM configuration is specified as a predefined tool, thereby allowing configuration of fiducials on the fly or attachment to the patient.
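Marker localization in the projections can be illustrated with a generic circular Hough transform (the paper uses a robust extension of the Hough approach; this sketch, on a synthetic image, is only the textbook version):

```python
import cv2
import numpy as np

# Synthetic projection image with bright circular markers, illustration only.
img = np.zeros((480, 640), np.uint8)
for cx, cy in [(100, 120), (320, 240), (500, 380)]:
    cv2.circle(img, (cx, cy), 9, 255, -1)
img = cv2.GaussianBlur(img, (5, 5), 0)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=10, minRadius=5, maxRadius=15)
if circles is not None:
    print(np.round(circles[0], 1))  # (x, y, r) per detected marker
```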
Affiliation(s)
- H Dang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21202, USA
73. Chen X, Wang L, Fallavollita P, Navab N. Precise X-ray and video overlay for augmented reality fluoroscopy. Int J Comput Assist Radiol Surg 2012;8:29-38. PMID: 22592259. DOI: 10.1007/s11548-012-0746-x.
Abstract
PURPOSE The camera-augmented mobile C-arm (CamC) augments any mobile C-arm by a video camera and mirror construction and provides a co-registration of X-ray and video images. The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity. METHODS A planar phantom is constructed and placed at different distances to the image intensifier in order to obtain the optimal homography that co-registers X-ray and video with minimum error. To alleviate distortion, both X-ray calibration based on an equidistant grid model and Zhang's camera calibration method are implemented for distortion correction. Lastly, the virtual detector plane (VDP) method is adapted and integrated to reduce errors due to the mechanical sagging of the C-arm gantry. RESULTS The overlay errors are 0.38 ± 0.06 mm when not correcting for distortion, 0.27 ± 0.06 mm when applying Zhang's camera calibration, and 0.27 ± 0.05 mm when applying X-ray calibration. Lastly, when taking into account all angular and orbital rotations of the C-arm, as well as correcting for distortion, the overlay errors are 0.53 ± 0.24 mm using VDP and 1.67 ± 1.25 mm excluding VDP. CONCLUSION The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP.
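Distortion removal on the video side can be sketched with OpenCV once intrinsics and distortion coefficients are available from Zhang's method (the matrix and coefficients below are hypothetical placeholders, not the paper's calibration):

```python
import cv2
import numpy as np

K = np.array([[900.0, 0.0, 320.0],
              [0.0, 900.0, 240.0],
              [0.0, 0.0, 1.0]])                 # hypothetical intrinsics
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # hypothetical k1, k2, p1, p2, k3

frame = np.zeros((480, 640, 3), np.uint8)       # stand-in video frame
undistorted = cv2.undistort(frame, K, dist)
print(undistorted.shape)
```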
Affiliation(s)
- Xin Chen: Fakultät für Informatik, Technische Universität München, Munich, Germany
74. Wang L, Fallavollita P, Zou R, Chen X, Weidert S, Navab N. Closed-form inverse kinematics for interventional C-arm X-ray imaging with six degrees of freedom: modeling and application. IEEE Trans Med Imaging 2012;31:1086-1099. PMID: 22293978. DOI: 10.1109/TMI.2012.2185708.
Abstract
For trauma and orthopedic surgery, maneuvering a mobile C-arm fluoroscope into a desired position to acquire an X-ray is a routine surgical task. The precision and ease of use of the C-arm become even more important for advanced interventional imaging techniques such as parallax-free X-ray image stitching. Today's standard mobile C-arms are modeled with only five degrees of freedom (DOF), which restricts their motion in 3-D Cartesian space. In this paper, we present a method to model both the mobile C-arm and the patient's table as an integrated kinematic chain having six DOF without constraining the table position. Closed-form solutions to the inverse kinematics problem are derived in order to obtain the required values for all C-arm joint and table movements to position the fluoroscope at a desired pose. The modeling method and the closed-form solutions can be applied to general isocentric or non-isocentric mobile C-arms. By achieving this, we develop an efficient and intuitive inverse-kinematics-based method for parallax-free panoramic X-ray imaging. In addition, we implement a 6-DOF C-arm system from a low-cost mobile fluoroscope to optimally acquire X-ray images based solely on the computation of the required movement for each joint by solving the inverse kinematics on a continuous basis. Through simulation experiments, we demonstrate that the 6-DOF C-arm model has a larger working space than the 5-DOF model. C-arm repositioning experiments show the practicality and accuracy of our 6-DOF C-arm system. We also evaluate the novel parallax-free X-ray stitching method on phantom and dry bones. Over five trials, results show that parallax-free panoramas generated by our method are of high visual quality and within clinical tolerances for accurate evaluation of long bone geometry (i.e., image and metric measurement errors are less than 1% compared to ground truth).
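The forward side of such a kinematic chain is a product of homogeneous transforms; a stand-in 6-DOF sketch (the joint layout and link offsets below are hypothetical, not the paper's C-arm model, whose closed-form inverse is derived in the text):

```python
import numpy as np

def rot_z(q):
    """Homogeneous rotation about z by angle q (rad)."""
    c, s = np.cos(q), np.sin(q)
    T = np.eye(4); T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(x, y, z):
    """Homogeneous translation (mm)."""
    T = np.eye(4); T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(q):
    # three translations (table/base) then three rotations with link offsets
    return (trans(q[0], q[1], q[2])
            @ rot_z(q[3]) @ trans(0, 0, 500)
            @ rot_z(q[4]) @ trans(0, 300, 0)
            @ rot_z(q[5]))

q = [100.0, -50.0, 20.0, 0.3, -0.6, 0.1]
print(np.round(forward_kinematics(q), 2))  # X-ray source pose as a 4x4 transform
```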
Affiliation(s)
- Lejing Wang: Technical University of Munich, 85748 Munich, Germany
75. Reaungamornrat S, Otake Y, Uneri A, Schafer S, Mirota DJ, Nithiananthan S, Stayman JW, Kleinszig G, Khanna AJ, Taylor RH, Siewerdsen JH. An on-board surgical tracking and video augmentation system for C-arm image guidance. Int J Comput Assist Radiol Surg 2012;7:647-665. PMID: 22539008. DOI: 10.1007/s11548-012-0682-9.
Abstract
PURPOSE Conventional tracker configurations for surgical navigation carry a variety of limitations, including limited geometric accuracy, line-of-sight obstruction, and mismatch of the view angle with the surgeon's-eye view. This paper presents the development and characterization of a novel tracker configuration (referred to as "Tracker-on-C") intended to address such limitations by incorporating the tracker directly on the gantry of a mobile C-arm for fluoroscopy and cone-beam CT (CBCT). METHODS A video-based tracker (MicronTracker, Claron Technology Inc., Toronto, ON, Canada) was mounted on the gantry of a prototype mobile isocentric C-arm next to the flat-panel detector. To maintain registration within a dynamically moving reference frame (due to rotation of the C-arm), a reference marker consisting of 6 faces (referred to as a "hex-face marker") was developed to give visibility across the full range of C-arm rotation. Three primary functionalities were investigated: surgical tracking, generation of digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool or the current C-arm angle, and augmentation of the tracker video scene with image, DRR, and planning data. Target registration error (TRE) was measured in comparison with the same tracker implemented in a conventional in-room configuration. Graphics processing unit (GPU)-accelerated DRRs were generated in real time as an assistant to C-arm positioning (i.e., positioning the C-arm such that target anatomy is in the field of view (FOV)), radiographic search (i.e., a virtual X-ray projection preview of target anatomy without X-ray exposure), and localization (i.e., visualizing the location of the surgical target or planning data). Video augmentation included superimposing tracker data, the X-ray FOV, DRRs, planning data, preoperative images, and/or intraoperative CBCT onto the video scene. Geometric accuracy was quantitatively evaluated in each case, and qualitative assessment of clinical feasibility was provided by an experienced and fellowship-trained orthopedic spine surgeon within a clinically realistic surgical setup of the Tracker-on-C. RESULTS The Tracker-on-C configuration demonstrated improved TRE (0.87 ± 0.25 mm) in comparison with a conventional in-room tracker setup (1.92 ± 0.71 mm; p < 0.0001), attributed primarily to improved depth resolution of the stereoscopic camera placed closer to the surgical field. The hex-face reference marker maintained registration across the 180° C-arm orbit (TRE = 0.70 ± 0.32 mm). DRRs generated from the perspective of the C-arm X-ray detector demonstrated sub-mm accuracy (0.37 ± 0.20 mm) in correspondence with the real X-ray image. Planning data and DRRs overlaid on the video scene exhibited accuracy of 0.59 ± 0.38 pixels and 0.66 ± 0.36 pixels, respectively. Preclinical assessment suggested potential utility of the Tracker-on-C in a spectrum of interventions, including improved line of sight, assistance with C-arm positioning, and faster target localization, while reducing X-ray exposure time. CONCLUSIONS The proposed tracker configuration demonstrated sub-mm TRE from the dynamic reference frame of a rotational C-arm through the use of the multi-face reference marker. Real-time DRRs and video augmentation from a natural perspective over the operating table assisted C-arm setup, simplified radiographic search and localization, and reduced fluoroscopy time. Incorporation of the proposed tracker configuration with C-arm CBCT guidance has the potential to simplify intraoperative registration, improve geometric accuracy, enhance visualization, and reduce radiation exposure.
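DRR generation, in its simplest form, is a line integral through an attenuation volume followed by Beer-Lambert attenuation; a toy parallel-beam sketch (real systems, including this one, cast perspective rays through CT/CBCT volumes, typically GPU-accelerated):

```python
import numpy as np

vol = np.zeros((64, 64, 64))          # synthetic attenuation volume
vol[20:44, 20:44, 20:44] = 0.02       # "soft tissue" cube
vol[28:36, 28:36, 28:36] = 0.08       # denser "bone" core

line_integrals = vol.sum(axis=0)      # integrate along the beam axis
drr = np.exp(-line_integrals)         # Beer-Lambert: darker where denser
print(drr.min(), drr.max())
```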
Affiliation(s)
- S Reaungamornrat: Department of Biomedical Engineering, Johns Hopkins University, Traylor Building, Room #726, 720 Rutland Avenue, Baltimore, MD 21205-2109, USA
76. Diotte B, Fallavollita P, Wang L, Weidert S, Thaller PH, Euler E, Navab N. Radiation-Free Drill Guidance in Interlocking of Intramedullary Nails. Med Image Comput Comput Assist Interv 2012;15:18-25. DOI: 10.1007/978-3-642-33415-3_3.
77. Cleary K, Peters TM. Image-guided interventions: technology review and clinical applications. Annu Rev Biomed Eng 2010;12:119-142. PMID: 20415592. DOI: 10.1146/annurev-bioeng-070909-105249.
Abstract
Image-guided interventions are medical procedures that use computer-based systems to provide virtual image overlays to help the physician precisely visualize and target the surgical site. This field has been greatly expanded by the advances in medical imaging and computing power over the past 20 years. This review begins with a historical overview and then describes the component technologies of tracking, registration, visualization, and software. Clinical applications in neurosurgery, orthopedics, and the cardiac and thoracoabdominal areas are discussed, together with a description of an evolving technology named Natural Orifice Transluminal Endoscopic Surgery (NOTES). As the trend toward minimally invasive procedures continues, image-guided interventions will play an important role in enabling new procedures, while improving the accuracy and success of existing approaches. Despite this promise, the role of image-guided systems must be validated by clinical trials facilitated by partnerships between scientists and physicians if this field is to reach its full potential.
Affiliation(s)
- Kevin Cleary: Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC 20007, USA