1. Li Y, Jiang S, Yang Z, Yang S, Zhou Z. Microscopic augmented reality calibration with contactless line-structured light registration for surgical navigation. Med Biol Eng Comput 2025. PMID: 39806119. DOI: 10.1007/s11517-025-03288-z.
Abstract
The use of AR technology in image-guided neurosurgery enables visualization of lesions concealed deep within the brain. Accurate AR registration is required to precisely match virtual lesions with the anatomical structures displayed under a microscope. The purpose of this work was to develop a real-time augmented surgical navigation system using contactless line-structured light registration, microscope calibration, and visible optical tracking. A contactless, discrete, sparse line-structured light point cloud is used to construct the patient-image registration. Microscope calibration is optimized with a dimension-invariant calibrator to enable real-time tracking of the microscope. Visible optical tracking integrates a 3D medical model with the surgical microscope video in real time, generating an augmented microscope stream. The proposed patient-image registration algorithm yielded an average root mean square error (RMSE) of 0.78 ± 0.14 mm. The pixel match ratio error (PMRE) of the microscope calibration was 0.646%. The RMSE and PMRE of the system experiments were 0.79 ± 0.10 mm and 3.30 ± 1.08%, respectively. Experimental evaluations confirmed the feasibility and efficiency of microscope AR surgical navigation (MASN) registration. Through this registration technology, MASN overlays virtual lesions onto the microscopic view of the real lesions in real time, helping surgeons localize lesions hidden deep in tissue.
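The registration accuracy quoted above is a root mean square error over corresponding point pairs. As a minimal illustration (not the authors' implementation; the point coordinates below are invented), the metric can be computed as follows in Python:

```python
import numpy as np

def registration_rmse(registered_pts, reference_pts):
    """Root mean square error between corresponding 3D point sets (N x 3)."""
    d2 = np.sum((registered_pts - reference_pts) ** 2, axis=1)  # squared distances
    return float(np.sqrt(d2.mean()))

# Hypothetical example: structured-light points after registration vs. image-space targets
reg = np.array([[10.2, 4.1, 33.0], [11.9, 5.2, 31.8], [9.7, 6.0, 32.5]])
ref = np.array([[10.0, 4.0, 33.1], [12.0, 5.0, 32.0], [10.0, 6.1, 32.4]])
print(f"RMSE = {registration_rmse(reg, ref):.2f} mm")
```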
Affiliation(s)
- Yuhua Li, Shan Jiang, Zhiyong Yang, Shuo Yang, Zeyang Zhou
- Mechanical Engineering Department, Tianjin University, No. 135, Yaguan Road, Haihe Education Park, Jinnan District, Tianjin City, 300350, China
2. Kazemzadeh K, Akhlaghdoust M, Zali A. Advances in artificial intelligence, robotics, augmented and virtual reality in neurosurgery. Front Surg 2023;10:1241923. PMID: 37693641. PMCID: PMC10483402. DOI: 10.3389/fsurg.2023.1241923.
Abstract
Neurosurgical practitioners undergo extensive and prolonged training to acquire diverse technical proficiencies, and neurosurgical procedures demand substantial pre-, intra-, and postoperative clinical data acquisition, decision-making, attention, and patient convalescence. The past decade witnessed an appreciable escalation in the significance of artificial intelligence (AI) in neurosurgery. AI holds significant potential in neurosurgery: it supplements the abilities of neurosurgeons to offer optimal interventional and non-interventional care by improving prognostic and diagnostic outcomes in clinical therapy and by assisting decision-making during surgical interventions to enhance patient outcomes. Other technologies, including augmented reality, robotics, and virtual reality, can assist and promote neurosurgical methods as well. Moreover, they play a significant role in generating, processing, and storing experimental and clinical data. The use of these technologies in neurosurgery can also curtail the costs linked with surgical care and extend high-quality health care to a wider populace. This narrative review aims to integrate the results of articles that elucidate the role of the aforementioned technologies in neurosurgery.
Affiliation(s)
- Kimia Kazemzadeh
- Students’ Scientific Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Meisam Akhlaghdoust
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Alireza Zali
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
3. Mishra R, Narayanan MK, Umana GE, Montemurro N, Chaurasia B, Deora H. Virtual Reality in Neurosurgery: Beyond Neurosurgical Planning. Int J Environ Res Public Health 2022;19:1719. PMID: 35162742. PMCID: PMC8835688. DOI: 10.3390/ijerph19031719.
Abstract
BACKGROUND While several publications have focused on the intuitive role of augmented reality (AR) and virtual reality (VR) in neurosurgical planning, the aim of this review was to explore other avenues where these technologies have significant utility and applicability. METHODS This review was conducted by searching PubMed, PubMed Central, Google Scholar, the Scopus database, the Web of Science Core Collection database, and the SciELO citation index, from 1989-2021. An example of a search strategy used in PubMed Central is: "Virtual reality" [All Fields] AND ("neurosurgical procedures" [MeSH Terms] OR ("neurosurgical" [All Fields] AND "procedures" [All Fields]) OR "neurosurgical procedures" [All Fields] OR "neurosurgery" [All Fields] OR "neurosurgery" [MeSH Terms]). Using this search strategy, we identified 487 citations in PubMed, 1097 in PubMed Central, and 275 in the Web of Science Core Collection. RESULTS The reviewed articles showed numerous applications of VR/AR in neurosurgery, including their utility as a supplement and augment for neuronavigation in diagnosis for complex vascular interventions, spine deformity correction, resident training, procedural practice, pain management, and rehabilitation of neurosurgical patients. These technologies have also shown promise in other areas of neurosurgery, such as consent taking, training of ancillary personnel, and improving patient comfort during procedures, as well as in training neurosurgeons in other advancements in the field, such as robotic neurosurgery. CONCLUSIONS We present the first review of the possibilities of VR in neurosurgery beyond merely planning for surgical procedures. The importance of VR and AR, especially for "social distancing" in neurosurgery training, for economically disadvantaged groups, for prevention of medicolegal claims, and in pain management and rehabilitation, is promising and warrants further research.
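For readers who want to reproduce a search like the one above programmatically, NCBI's E-utilities expose the same PubMed query syntax. A minimal sketch follows; it assumes the third-party `requests` package, and the `retmax` value and error handling are illustrative choices:

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = ('"Virtual reality"[All Fields] AND ("neurosurgical procedures"[MeSH Terms] '
         'OR ("neurosurgical"[All Fields] AND "procedures"[All Fields]) '
         'OR "neurosurgical procedures"[All Fields] OR "neurosurgery"[All Fields] '
         'OR "neurosurgery"[MeSH Terms])')

resp = requests.get(ESEARCH, params={"db": "pubmed", "term": query,
                                     "retmax": 20, "retmode": "json"}, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print("Total hits:", result["count"])    # number of matching PubMed records
print("First PMIDs:", result["idlist"])  # up to retmax identifiers
```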
Affiliation(s)
- Rakesh Mishra
- Department of Neurosurgery, Institute of Medical Sciences, Banaras Hindu University, Varanasi 221005, India
- Giuseppe E. Umana
- Trauma and Gamma-Knife Center, Department of Neurosurgery, Cannizzaro Hospital, 95100 Catania, Italy
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), University of Pisa, 56100 Pisa, Italy
- Bipin Chaurasia
- Department of Neurosurgery, Bhawani Hospital, Birgunj 44300, Nepal
- Harsh Deora
- Department of Neurosurgery, National Institute of Mental Health and Neurosciences, Bengaluru 560029, India
4. Tanzi L, Piazzolla P, Porpiglia F, Vezzetti E. Real-time deep learning semantic segmentation during intra-operative surgery for 3D augmented reality assistance. Int J Comput Assist Radiol Surg 2021;16:1435-1445. PMID: 34165672. PMCID: PMC8354939. DOI: 10.1007/s11548-021-02432-y.
Abstract
Purpose The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for in-vivo robot-assisted radical prostatectomy (RARP), improving on the precision of a published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure. Methods The approach used a Convolutional Neural Network (CNN) based structure for semantic segmentation, followed by an elaboration of the obtained output that produced the parameters needed for anchoring the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team's specialists. We then evaluated the best-performing pair of segmentation architecture and backbone network, and tested the overlay performance. Results U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet processed almost twice as many operations per second. This segmentation technique outperformed the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications also improved the 3D overlay performance: the Euclidean distance between the predicted and actual model's anchor point decreased from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and the geodesic distance between the predicted and actual model's rotations decreased from 0.266 (σ = 0.131) to 0.169 (σ = 0.073). Conclusion This work is a further step toward the adoption of DL and AR in the surgery domain. In future works, we will overcome the limits of this approach and further improve every step of the surgical procedure.
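Intersection over Union, the segmentation metric reported above, has a compact definition on binary masks. A generic sketch (not the authors' evaluation code; the toy masks are invented):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for two binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # define IoU = 1 when both masks are empty

# Toy 4x4 example: predicted catheter mask vs. annotated ground truth
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0]])
print(f"IoU = {iou(pred, gt):.3f}")
```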
Affiliation(s)
- Leonardo Tanzi, Pietro Piazzolla, Enrico Vezzetti
- Department of Management, Production and Design Engineering, Polytechnic University of Turin, Turin, Italy
- Francesco Porpiglia
- Division of Urology, Department of Oncology, School of Medicine, University of Turin, Turin, Italy
5. Ma L, Fei B. Comprehensive review of surgical microscopes: technology development and medical applications. J Biomed Opt 2021;26(1):010901. PMID: 33398948. PMCID: PMC7780882. DOI: 10.1117/1.jbo.26.1.010901.
Abstract
SIGNIFICANCE Surgical microscopes provide adjustable magnification, bright illumination, and clear visualization of the surgical field, and have been increasingly used in operating rooms. State-of-the-art surgical microscopes are integrated with various imaging modalities, such as optical coherence tomography (OCT), fluorescence imaging, and augmented reality (AR) for image-guided surgery. AIM This comprehensive review is based on the literature of over 500 papers covering the technology development and applications of surgical microscopy over the past century. The aim of this review is threefold: (i) to provide a comprehensive technical overview of surgical microscopes, (ii) to provide critical references for microscope selection and system development, and (iii) to provide an overview of various medical applications. APPROACH More than 500 references were collected and reviewed. A timeline of important milestones in the evolution of the surgical microscope is provided, along with an in-depth technical overview of the optical system, mechanical system, illumination, visualization, and integration with advanced imaging modalities. Various medical applications of surgical microscopes in neurosurgery and spine surgery, ophthalmic surgery, ear-nose-throat (ENT) surgery, endodontics, and plastic and reconstructive surgery are described. RESULTS Surgical microscopy has advanced significantly in high-end optics, bright and shadow-free illumination, stable and flexible mechanical design, and versatile visualization. New imaging modalities, such as hyperspectral imaging, OCT, fluorescence imaging, photoacoustic microscopy, and laser speckle contrast imaging, are being integrated with surgical microscopes. Advanced visualization and AR are being added as new features that are changing clinical practice in the operating room. CONCLUSIONS The combination of new imaging technologies and surgical microscopy will enable surgeons to perform challenging procedures and improve surgical outcomes. With advanced visualization and improved ergonomics, the surgical microscope has become a powerful tool in neurosurgery and in spinal, ENT, ophthalmic, and plastic and reconstructive surgeries.
Affiliation(s)
- Ling Ma
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
6. Lavé A, Meling TR, Schaller K, Corniola MV. Augmented reality in intracranial meningioma surgery: report of a case and systematic review. J Neurosurg Sci 2020;64:369-376. DOI: 10.23736/s0390-5616.20.04945-0.
7. Gribaudo M, Piazzolla P, Porpiglia F, Vezzetti E, Violante MG. 3D augmentation of the surgical video stream: Toward a modular approach. Comput Methods Programs Biomed 2020;191:105505. PMID: 32387863. DOI: 10.1016/j.cmpb.2020.105505.
Abstract
BACKGROUND AND OBJECTIVE We present an original approach to the development of augmented reality (AR) real-time solutions for robotic surgery navigation. The surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operative scene. To improve the surgeon's spatial perception during robot-assisted minimally invasive procedures, we provide a robust automatic software system that positions, rotates, and scales in real time the 3D virtual model of a patient's organ, aligned over its image captured by the endoscope. METHODS We observed that the surgeon may benefit differently from the 3D augmentation during each stage of the surgical procedure; moreover, each stage may present different visual elements that provide specific challenges and opportunities for implementing organ detection strategies. Hence we integrate different solutions, each dedicated to a specific stage of the surgical procedure, into a single software system. RESULTS We present a formal model that generalizes our approach, describing a system composed of integrated solutions for AR in robot-assisted surgery. Following the proposed framework, an application has been developed that is currently used during in-vivo surgery, for extensive testing, by the Urology unit of San Luigi Hospital in Orbassano (TO), Italy. CONCLUSIONS The main contribution of this paper is a modular approach to the tracking problem during in-vivo robotic surgery, whose efficacy from a medical point of view has been assessed in cited works. Segmenting the whole procedure into a set of stages allows the best tracking strategy to be associated with each of them, and allows implemented software mechanisms to be reused in stages with similar features.
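The paper's central idea, dedicating a tracking strategy to each surgical stage within one system, maps naturally onto a registry/dispatch pattern. The skeleton below is a hypothetical sketch of that design (stage names and strategy classes are invented for illustration, not the published system):

```python
from typing import Dict

class TrackingStrategy:
    """Estimate the 3D model's overlay pose (position, rotation, scale) from a frame."""
    def track(self, frame) -> dict:
        raise NotImplementedError

class VesselContourStrategy(TrackingStrategy):   # hypothetical stage-specific tracker
    def track(self, frame) -> dict:
        return {"pose": "estimated from vessel contours"}

class CatheterAnchorStrategy(TrackingStrategy):  # hypothetical stage-specific tracker
    def track(self, frame) -> dict:
        return {"pose": "estimated from catheter landmark"}

# Registry: one strategy per surgical stage; a strategy instance can be reused
# across stages that present similar visual features.
STAGES: Dict[str, TrackingStrategy] = {
    "dissection":  VesselContourStrategy(),
    "anastomosis": CatheterAnchorStrategy(),
}

def augment(frame, stage: str) -> dict:
    """Dispatch the frame to the tracker registered for the current stage."""
    return STAGES[stage].track(frame)

print(augment(frame=None, stage="dissection"))
```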
Affiliation(s)
- Marco Gribaudo
- Dept. of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Pietro Piazzolla, Enrico Vezzetti, Maria Grazia Violante
- Dept. of Management and Production Engineering, Politecnico di Torino, Torino, Italy
- Francesco Porpiglia
- Division of Urology, Department of Oncology, School of Medicine, University of Turin, Italy
8. Singh G, Ellis SR, Swan JE. The Effect of Focal Distance, Age, and Brightness on Near-Field Augmented Reality Depth Matching. IEEE Trans Vis Comput Graph 2020;26:1385-1398. PMID: 30222576. DOI: 10.1109/tvcg.2018.2869729.
Abstract
Many augmented reality (AR) applications operate within near-field reaching distances and require matching the depth of a virtual object with a real object. The accuracy of this matching was measured in three experiments, which examined the effect of focal distance, age, and brightness, within distances of 33.3 to 50 cm, using a custom-built AR haploscope. Experiment I examined the effect of focal demand at three levels: collimated (infinite focal distance), consistent with the other depth cues, and set at the midpoint of the reaching distances. Observers were too young to exhibit age-related reductions in accommodative ability. The depth matches of collimated targets were increasingly overestimated with increasing distance, consistent targets were slightly underestimated, and midpoint targets were accurately estimated. Experiment II replicated Experiment I with older observers; results were similar to Experiment I. Experiment III replicated Experiment I with dimmer targets, using young observers. Results were again consistent with Experiment I, except that both consistent and midpoint targets were accurately estimated. In all cases, the collimated results were explained by a model in which the collimation biases the eyes' vergence angle outward by a constant amount. Focal demand and brightness affect near-field AR depth matching, while age-related reductions in accommodative ability have no effect.
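The model invoked above treats collimation as a constant outward rotation of the eyes' vergence angle. The sketch below works that geometry through under assumed values; the 6.5 cm interpupillary distance and 0.5° bias are illustrative, not the paper's fitted parameters:

```python
import numpy as np

def vergence_angle(distance_cm, ipd_cm=6.5):
    """Vergence angle (radians) for a fixation target at the given distance."""
    return 2.0 * np.arctan(ipd_cm / (2.0 * distance_cm))

def perceived_distance(distance_cm, outward_bias_deg=0.5, ipd_cm=6.5):
    """Distance implied by a vergence angle rotated outward by a constant bias."""
    theta = vergence_angle(distance_cm, ipd_cm) - np.radians(outward_bias_deg)
    return ipd_cm / (2.0 * np.tan(theta / 2.0))

for d in (33.3, 40.0, 50.0):  # the near-field range used in the experiments
    print(f"true {d:.1f} cm -> perceived {perceived_distance(d):.1f} cm")
```

Consistent with the reported pattern, the overestimation produced by a constant outward bias grows with target distance.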
9. Bárdosi Z, Plattner C, Özbek Y, Hofmann T, Milosavljevic S, Schartinger V, Freysinger W. CIGuide: in situ augmented reality laser guidance. Int J Comput Assist Radiol Surg 2019;15:49-57. PMID: 31506882. PMCID: PMC6949325. DOI: 10.1007/s11548-019-02066-1.
Abstract
PURPOSE: A robotic intraoperative laser guidance system with hybrid optic-magnetic tracking for skull base surgery is presented. It provides in situ augmented reality guidance for microscopic interventions at the lateral skull base, with minimal mental and workload overhead on surgeons, who work without a monitor or dedicated pointing tools. METHODS: Three components were developed: a registration tool (Rhinospider), a hybrid magneto-optic-tracked robotic feedback control scheme, and a modified robotic end-effector. Rhinospider optimizes registration of patient and preoperative CT data by excluding user errors in fiducial localization with magnetic tracking. The hybrid controller uses an integrated microscope HD camera for robotic control, with a guidance beam shining on a dual-plate setup to avoid magnetic field distortions. A robotic needle insertion platform (iSYS Medizintechnik GmbH, Austria) was modified to position a laser beam with high precision in a surgical scene compatible with microscopic surgery. RESULTS: System accuracy was evaluated quantitatively at various target positions on a phantom and found to be 1.2 mm ± 0.5 mm, with errors primarily due to magnetic tracking. This application accuracy seems suitable for most surgical procedures in the lateral skull base. The system was also evaluated during a mastoidectomy of an anatomic head specimen and was judged useful by the surgeon. CONCLUSION: A hybrid robotic laser guidance system with direct visual feedback is proposed for navigated drilling and intraoperative structure localization. The system provides visual cues directly on/in the patient anatomy, reducing standard limitations of AR visualizations such as depth perception. The custom-built end-effector for the iSYS robot is transparent to the use of surgical microscopes and compatible with magnetic tracking. The cadaver experiment showed that guidance was accurate and that the end-effector is unobtrusive. This laser guidance has potential to aid the surgeon in finding the optimal mastoidectomy trajectory in more difficult interventions.
Affiliation(s)
- Yusuf Özbek
- Medical University Innsbruck, Innsbruck, Austria
10.
Abstract
BACKGROUND One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can in the future be used to blend patient and planning information into the surgeon's view, which can improve the efficiency and safety of interventions. OBJECTIVE In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. MATERIAL AND METHODS Based on an extensive literature review, the existing approaches for integrating augmented reality displays into medical procedures are divided into five categories, and the most important research results for each approach are presented. RESULTS A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. CONCLUSION In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with requirements for accuracy, fidelity, ergonomics, and seamless integration into the surgical workflow.
Affiliation(s)
- Ulrich Eck, Alexander Winkler
- Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
11. Drouin S, DiGiovanni DA, Kersten-Oertel M, Collins DL. Interaction driven enhancement of depth perception in angiographic volumes. IEEE Trans Vis Comput Graph 2018;26:2247-2257. PMID: 30530366. DOI: 10.1109/tvcg.2018.2884940.
Abstract
User interaction has the potential to greatly facilitate the exploration and understanding of 3D medical images for diagnosis and treatment. However, in certain specialized environments such as in an operating room (OR), technical and physical constraints such as the need to enforce strict sterility rules, make interaction challenging. In this paper, we propose to facilitate the intraoperative exploration of angiographic volumes by leveraging the motion of a tracked surgical pointer, a tool that is already manipulated by the surgeon when using a navigation system in the OR. We designed and implemented three interactive rendering techniques based on this principle. The benefit of each of these techniques is compared to its non-interactive counterpart in a psychophysics experiment where 20 medical imaging experts were asked to perform a reaching/targeting task while visualizing a 3D volume of angiographic data. The study showed a significant improvement of the appreciation of local vascular structure when using dynamic techniques, while not having a negative impact on the appreciation of the global structure and only a marginal impact on the execution speed. A qualitative evaluation of the different techniques showed a preference for dynamic chroma-depth in accordance with the objective metrics but a discrepancy between objective and subjective measures for dynamic aerial perspective and shading.
12. Grubert J, Itoh Y, Moser K, Swan JE. A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays. IEEE Trans Vis Comput Graph 2018;24:2649-2662. PMID: 28961115. DOI: 10.1109/tvcg.2017.2754257.
Abstract
Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality and have seen significant growth in popularity and usage among the general public due to the growing release of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-Degree-of-Freedom tracking system. However, in order to properly render virtual objects that are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this needed eye position, but to date there has not been a comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it identifies opportunities for future research.
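Many of the surveyed manual methods (e.g., SPAAM-style procedures) reduce to estimating a 3x4 projection matrix from user-aligned 3D-2D correspondences via the Direct Linear Transform. The sketch below shows that core least-squares step in generic form; it is not tied to any particular HMD vendor's API:

```python
import numpy as np

def dlt_projection(world_pts, image_pts):
    """Estimate a 3x4 projection P from n >= 6 3D-2D correspondences (DLT).

    world_pts: (n, 3) points in tracker coordinates; image_pts: (n, 2) pixels.
    Solves the homogeneous system A p = 0 in least squares via SVD."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        rows.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
        rows.append([0, 0, 0, 0] + Xh + [-v * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)  # right singular vector of the smallest singular value

def project(P, pt3):
    """Apply the estimated projection to a 3D point, returning pixel coordinates."""
    x = P @ np.append(pt3, 1.0)
    return x[:2] / x[2]
```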
13. Léger É, Drouin S, Collins DL, Popa T, Kersten-Oertel M. Quantifying attention shifts in augmented reality image-guided neurosurgery. Healthc Technol Lett 2017;4:188-192. PMID: 29184663. PMCID: PMC5683248. DOI: 10.1049/htl.2017.0062.
Abstract
Image-guided surgery (IGS) has allowed for more minimally invasive procedures, leading to better patient outcomes, reduced risk of infection, less pain, shorter hospital stays, and faster recoveries. One drawback that has emerged with IGS is that the surgeon must shift attention from the patient to the monitor for guidance, and both cognitive and motor tasks are negatively affected by attention shifts. Augmented reality (AR), which merges the real-world surgical scene with preoperative virtual patient images and plans, has been proposed as a solution to this drawback. In this work, we studied the impact of two different types of AR IGS set-ups (mobile AR and desktop AR) and traditional navigation on attention shifts for the specific task of craniotomy planning. We found a significant difference in the time taken to perform the task and in attention shifts between traditional navigation and AR, but no significant difference between the two AR set-ups. With mobile AR, however, users felt that the system was easier to use and that their performance was better. These results suggest that regardless of where the AR visualisation is shown to the surgeon, AR may reduce attention shifts, leading to more streamlined and focused procedures.
Affiliation(s)
- Étienne Léger, Tiberiu Popa, Marta Kersten-Oertel
- Department of Computer Science and Software Engineering & Perform Centre, Concordia University, Montreal, Canada
- Simon Drouin, D. Louis Collins
- McConnell Brain Imaging Centre, Montreal Neuro, McGill University, Montréal, Canada
14. Perruisseau-Carrier A, Bahlouli N, Bierry G, Vernet P, Facca S, Liverneaux P. Comparison between isotropic linear-elastic law and isotropic hyperelastic law in the finite element modeling of the brachial plexus. Ann Chir Plast Esthet 2017;62:664-668. PMID: 28385568. DOI: 10.1016/j.anplas.2017.03.002.
Abstract
Augmented reality could help in the identification of nerve structures in brachial plexus surgery. The goal of this study was to determine which law of mechanical behavior is more suitable, by comparing the results of Hooke's isotropic linear-elastic law to those of Ogden's isotropic hyperelastic law applied to a biomechanical model of the brachial plexus. A finite element model was created with ABAQUS® from a 3D model of the brachial plexus acquired by segmentation and meshing of MRI images at 0°, 45°, and 135° of shoulder abduction of a healthy subject. The offset between the reconstructed model and the deformed model was evaluated quantitatively by the Hausdorff distance and qualitatively by the identification of 3 anatomical landmarks. In every case the Hausdorff distance was shorter with Ogden's law than with Hooke's law. Qualitatively, the model deformed by Ogden's law followed the concavity of the reconstructed model, whereas the model deformed by Hooke's law remained convex. In conclusion, the results of this study demonstrate that Ogden's isotropic hyperelastic model is better suited to modeling the deformations of the brachial plexus.
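The Hausdorff distance used above to quantify the offset between models is available off the shelf in SciPy; the symmetric distance is the maximum of the two directed distances. A sketch with hypothetical surface point clouds:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
reconstructed = rng.normal(size=(200, 3))                        # hypothetical mesh vertices
deformed = reconstructed + rng.normal(scale=0.05, size=(200, 3))  # perturbed counterpart

d_ab = directed_hausdorff(reconstructed, deformed)[0]  # returns (distance, idx_u, idx_v)
d_ba = directed_hausdorff(deformed, reconstructed)[0]
print(f"symmetric Hausdorff distance = {max(d_ab, d_ba):.3f}")
```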
Affiliation(s)
- A Perruisseau-Carrier, P Vernet, S Facca, P Liverneaux
- Department of hand surgery, SOS main, CCOM, University of Strasbourg, Icube CNRS 7357, University Hospital of Strasbourg, FMTS, 10, avenue Baumann, 67403 Illkirch cedex, France
- N Bahlouli
- Department of mechanics, University of Strasbourg/CNRS, ICUBE, 2, rue Boussingault, 67000 Strasbourg, France
- G Bierry
- Radiology department, University of Strasbourg, FMTS, 1, place de l'Hôpital, 67000 Strasbourg, France
15. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66-90. DOI: 10.1016/j.media.2017.01.007.
16. Hawkes DJ. From clinical imaging and computational models to personalised medicine and image guided interventions. Med Image Anal 2016;33:50-55. PMID: 27407003. DOI: 10.1016/j.media.2016.06.022.
Abstract
This short paper describes the development of the UCL Centre for Medical Image Computing (CMIC) from 2006 to 2016, with reference to the historical developments of the Computational Imaging Sciences Group (CISG) at Guy's Hospital. Key early work in automated image registration led to developments in image-guided surgery and improved cancer diagnosis and therapy. The work is illustrated with examples from neurosurgery, laparoscopic liver and gastric surgery, diagnosis and treatment of prostate cancer and breast cancer, and image-guided radiotherapy for lung cancer.
Affiliation(s)
- David J Hawkes
- Centre for Medical Image Computing, UCL, London WC1E 6BT, United Kingdom
17. Augmented Endoscopic Images Overlaying Shape Changes in Bone Cutting Procedures. PLoS One 2016;11:e0161815. PMID: 27584732. PMCID: PMC5008631. DOI: 10.1371/journal.pone.0161815.
Abstract
In microendoscopic discectomy for spinal disorders, bone cutting procedures are performed in tight spaces while observing only a small portion of the target structures. Although optical tracking systems can measure the tip of the surgical tool during surgery, the poor shape information available intraoperatively makes accurate cutting difficult, even when preoperative computed tomography and magnetic resonance images are used for reference. Shape estimation and visualization of the target structures are essential for accurate cutting, yet time-varying shape changes during cutting procedures remain a challenging issue for intraoperative navigation. This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support bone cutting procedures. The framework records the history of the measured drill-tip locations as a volume label and visualizes the region remaining to be cut, overlaid on the endoscopic image in real time. A cutting experiment was performed with volunteers, and the feasibility of this concept was examined using a clinical navigation system. The efficacy of the cutting aid was evaluated with respect to shape similarity, the total distance moved by the cutting tool, and the required cutting time. The results showed that cutting performance was significantly improved by the proposed framework.
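The core bookkeeping described above, accumulating tracked drill-tip positions into a volume label and displaying what remains to be cut, can be expressed compactly on a voxel grid. The sketch below uses invented dimensions and positions and is schematic, not the clinical navigation code:

```python
import numpy as np

VOXEL_MM = 0.5
planned = np.zeros((64, 64, 64), dtype=bool)
planned[20:40, 20:40, 20:40] = True          # hypothetical planned resection volume
cut = np.zeros_like(planned)

def record_tip(position_mm, radius_vox=2):
    """Mark voxels around the tracked drill tip as removed."""
    i, j, k = (np.asarray(position_mm) / VOXEL_MM).astype(int)
    cut[i - radius_vox:i + radius_vox + 1,
        j - radius_vox:j + radius_vox + 1,
        k - radius_vox:k + radius_vox + 1] = True

for tip in [(12.0, 12.0, 12.0), (13.0, 12.5, 12.0)]:  # stream of tracked tip positions
    record_tip(tip)

remaining = planned & ~cut  # this label map is what gets overlaid on the endoscopic image
print("voxels remaining to cut:", int(remaining.sum()))
```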
18. Miller RS, Hashisaki GT, Kesser BW. Image-guided Localization of the Internal Auditory Canal via the Middle Cranial Fossa Approach. Otolaryngol Head Neck Surg 2006;134:778-82. PMID: 16647534. DOI: 10.1016/j.otohns.2005.12.015.
Abstract
OBJECTIVE: We sought to determine the accuracy of an electromagnetic image guidance surgical navigation system in localizing the midpoint of the internal auditory canal (IAC) and other structures of the temporal bone through the middle cranial fossa approach. MATERIALS AND METHODS: Seven fresh cadaveric whole heads were dissected via a middle cranial fossa approach. High-resolution CT scans were used with an InstaTrak 3500 Plus electromagnetic image guidance system (General Electric, Fairfield, CT). We evaluated the accuracy of identifying several middle cranial fossa landmarks including the midpoint of the IAC; the labyrinthine segment of the facial nerve; and the arcuate eminence, the carotid artery, and foramen spinosum. RESULTS: We were able to identify the middle of the IAC within 2.31 mm (range 0.65-7.52 mm, SD 2.39 mm). The arcuate eminence could be identified within 1.86 mm (range 1.49-2.37 mm, SD 0.36 mm). We noted some interference when the handpiece was within 6 to 8 cm of the microscope. CONCLUSION: Although computer-aided navigational tools are no substitute for thorough knowledge of temporal bone anatomy, we found the InstaTrak system reliable in identifying the midpoint of the IAC to within 2.4 mm through a middle fossa approach.
Affiliation(s)
- Robert Sean Miller
- Department of Otolaryngology-Head and Neck Surgery, University of Virginia, Charlottesville, VA 22908-0713, USA
19. Quantitative evaluation of robust skull stripping and tumor detection applied to axial MR images. Brain Inform 2016;3:53-61. PMID: 27747598. PMCID: PMC4883165. DOI: 10.1007/s40708-016-0033-7.
Abstract
Isolating the brain from non-brain tissues with a fully automatic method may be affected by the presence of radio-frequency non-homogeneity in MR images (MRI), regional anatomy, MR sequences, and the subjects of the study. To automate brain tumor (glioblastoma) detection, we propose a novel skull-stripping approach for axial slices derived from MRI; the brain tumor is then detected using multi-level threshold segmentation based on histogram analysis. Skull stripping is performed with an adaptive morphological operations approach that iteratively computes an empirical threshold from the area of brain tissue, and is applied to the registration of non-contrast T1-weighted (T1-WI) images and the corresponding fluid-attenuated inversion recovery sequence. We then used the multi-threshold segmentation (MTS) method proposed by Otsu. We calculated performance metrics based on similarity coefficients for patients (n = 120) with tumors. In preliminary results, the adaptive skull-stripping algorithm and the MTS of segmented tumors achieved Dice similarity coefficients of 92% and 80% and false negative rates of 0.3% and 25.8%, respectively. The adaptive algorithm provides robust skull-stripping results, and the tumor area for medical diagnosis was determined by MTS.
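Both building blocks named above, Otsu-style multi-level thresholding and the Dice similarity coefficient, have standard implementations. A sketch using scikit-image's `threshold_multiotsu` on a synthetic slice (illustrative only, not the paper's pipeline):

```python
import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(1)
# Synthetic "slice": dark background, brighter tissue block, brightest tumour-like core
slice_ = rng.normal(50, 5, (128, 128))
slice_[40:70, 40:70] += 60
slice_[50:60, 50:60] += 80

thresholds = threshold_multiotsu(slice_, classes=3)  # histogram-based class boundaries
labels = np.digitize(slice_, bins=thresholds)        # class map with values 0, 1, 2
tumour_pred = labels == 2                            # brightest class as tumour candidate

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

tumour_gt = np.zeros_like(tumour_pred)
tumour_gt[50:60, 50:60] = True
print(f"Dice = {dice(tumour_pred, tumour_gt):.2f}")
```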
20. Swan JE, Singh G, Ellis SR. Matching and reaching depth judgments with real and augmented reality targets. IEEE Trans Vis Comput Graph 2015;21:1289-1298. PMID: 26340777. DOI: 10.1109/tvcg.2015.2459895.
Abstract
Many compelling augmented reality (AR) applications require users to correctly perceive the location of virtual objects, some with accuracies as tight as 1 mm. However, measuring the perceived depth of AR objects at these accuracies has not yet been demonstrated. In this paper, we address this challenge by employing two different depth judgment methods, perceptual matching and blind reaching, in a series of three experiments, where observers judged the depth of real and AR target objects presented at reaching distances. Our experiments found that observers can accurately match the distance of a real target, but when viewing an AR target through collimating optics, their matches systematically overestimate the distance by 0.5 to 4.0 cm. However, these results can be explained by a model where the collimation causes the eyes' vergence angle to rotate outward by a constant angular amount. These findings give error bounds for using collimating AR displays at reaching distances, and suggest that for these applications, AR displays need to provide an adjustable focus. Our experiments further found that observers initially reach ∼4 cm too short, but reaching accuracy improves with both consistent proprioception and corrective visual feedback, and eventually becomes nearly as accurate as matching.
21. Yoshino M, Saito T, Kin T, Nakagawa D, Nakatomi H, Oyama H, Saito N. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics. Neurol Med Chir (Tokyo) 2015;55:674-9. PMID: 26226982. PMCID: PMC4628159. DOI: 10.2176/nmc.tn.2014-0278.
Abstract
Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread, because existing commercial navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracked navigation system that uses high-resolution 3D CG; this article presents its technical details. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker attached to the microscope monitors the position and attitude of the microscope in real time; point-pair registration is used to register the operating room coordinate system and the image coordinate system; and the image display system shows the 3D CG image in the field-of-view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model, and its accuracy was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. Target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operative position and the surrounding structures; systems like this may reduce intraoperative complications.
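Point-pair registration of the operating room and image coordinate systems is classically solved in closed form with an SVD (the Arun/Horn method); whether this exact solver matches the authors' implementation is an assumption. A generic sketch, including a target registration error check on a point left out of the fit:

```python
import numpy as np

def point_pair_registration(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (n, 3) corresponding fiducial coordinates in the two frames."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

# Hypothetical fiducials: image coordinates and their operating-room counterparts
rng = np.random.default_rng(2)
img = rng.uniform(0, 100, (4, 3))
angle = np.radians(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 30.0])
room = img @ R_true.T + t_true

R, t = point_pair_registration(img, room)
target = np.array([50.0, 50.0, 50.0])  # anatomical target not used in the fit
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(f"target registration error = {tre:.2e} mm")
```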
Affiliation(s)
- Masanori Yoshino
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo
22. Cabrilo I, Bijlenga P, Schaller K. Augmented reality in the surgery of cerebral aneurysms: a technical report. Neurosurgery 2015;10 Suppl 2:252-60; discussion 260-1. PMID: 24594927. DOI: 10.1227/neu.0000000000000328.
Abstract
BACKGROUND Augmented reality is the overlay of computer-generated images on real-world structures. It has previously been used for image guidance during surgical procedures, but it has never been used in the surgery of cerebral aneurysms. OBJECTIVE To report our experience of cerebral aneurysm surgery aided by augmented reality. METHODS Twenty-eight patients with 39 unruptured aneurysms were operated on in a prospective manner with augmented reality. Preoperative 3-dimensional image data sets (angio-magnetic resonance imaging, angio-computed tomography, and 3-dimensional digital subtraction angiography) were used to create virtual segmentations of patients' vessels, aneurysms, aneurysm necks, skulls, and heads. These images were injected intraoperatively into the eyepiece of the operating microscope. An example case of an unruptured posterior communicating artery aneurysm clipping is illustrated in a video. RESULTS The described operating procedure allowed continuous monitoring of the accuracy of patient registration with neuronavigation data and assisted in performing tailored surgical approaches and optimal clipping with minimized exposure. CONCLUSION Augmented reality may add to the performance of a minimally invasive approach, although further studies are needed to evaluate whether certain groups of aneurysms are more likely to benefit from it. Further technological development is required to improve its user-friendliness.
Affiliation(s)
- Ivan Cabrilo
- Neurosurgery Division, Department of Clinical Neurosciences, Faculty of Medicine, Geneva University Medical Center, Geneva, Switzerland
23. Abhari K, Baxter JSH, Chen ECS, Khan AR, Peters TM, de Ribaupierre S, Eagleson R. Training for planning tumour resection: augmented reality and human factors. IEEE Trans Biomed Eng 2014;62:1466-77. PMID: 25546854. DOI: 10.1109/tbme.2014.2385874.
Abstract
Planning surgical interventions is a complex task, demanding a high degree of perceptual, cognitive, and sensorimotor skill to reduce intra- and post-operative complications. This process requires spatial reasoning to coordinate between the preoperatively acquired medical images and the patient reference frames. In the case of neurosurgical interventions, traditional approaches to planning tend to focus on providing a means for visualizing medical images, but rarely support transformation between different spatial reference frames, so surgeons often rely on previous experience and intuition as their sole guide in performing mental transformations. For junior residents, this may lead to longer operation times or an increased chance of error under additional cognitive demands. In this paper, we introduce a mixed augmented-/virtual-reality system to facilitate training for planning a common neurosurgical procedure, brain tumour resection. The proposed system is designed and evaluated with human factors explicitly in mind, alleviating the difficulty of mental transformation. Our results indicate that, compared to conventional planning environments, the proposed system greatly improves non-clinicians' performance, independent of the sensorimotor tasks performed. Furthermore, use of the proposed system by clinicians resulted in a significant reduction in the time needed to perform clinically relevant tasks. These results demonstrate the role of mixed-reality systems in assisting residents to develop the spatial reasoning skills needed for planning brain tumour resection, improving patient outcomes.
24. Cabrilo I, Schaller K, Bijlenga P. Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg 2015;83:596-602. PMID: 25527874. DOI: 10.1016/j.wneu.2014.12.020.
Abstract
OBJECTIVE The overlay of virtual images on the surgical field, defined as augmented reality, has been used for image guidance during various neurosurgical procedures. Although this technology could conceivably address certain inherent problems of extracranial-to-intracranial bypass procedures, this potential has not been explored to date. We evaluate the usefulness of an augmented reality-based setup that could help in harvesting donor vessels through their precise localization in real time, in performing tailored craniotomies, and in identifying preoperatively selected recipient vessels for the purpose of anastomosis. METHODS Our method was applied to 3 patients with moyamoya disease who underwent superficial temporal artery-to-middle cerebral artery anastomoses and 1 patient who underwent an occipital artery-to-posteroinferior cerebellar artery bypass because of a dissecting aneurysm of the vertebral artery. Patients' heads, skulls, and extracranial and intracranial vessels were segmented preoperatively from 3-dimensional image data sets (3-dimensional digital subtraction angiography, angio-magnetic resonance imaging, angio-computed tomography) and injected intraoperatively into the operating microscope's eyepiece for image guidance. RESULTS In each case, the described setup helped in precisely localizing donor and recipient vessels and in tailoring craniotomies to the injected images. CONCLUSIONS The presented augmented reality system can optimize the workflow of extracranial-to-intracranial bypass procedures by providing essential anatomical information, entirely integrated into the surgical field, and can help to perform minimally invasive procedures.
Affiliation(s)
- Ivan Cabrilo, Karl Schaller, Philippe Bijlenga
- Neurosurgery Division, Department of Clinical Neurosciences, Faculty of Medicine, Geneva University Medical Center, Geneva, Switzerland
25. Cabrilo I, Sarrafzadeh A, Bijlenga P, Landis B, Schaller K. Augmented reality-assisted skull base surgery. Neurochirurgie 2014;60:304-6. DOI: 10.1016/j.neuchi.2014.07.001.
26. Efficient stereo image geometrical reconstruction at arbitrary camera settings from a single calibration. Med Image Comput Comput Assist Interv 2014;17:440-7. PMID: 25333148. DOI: 10.1007/978-3-319-10404-1_55.
Abstract
Camera calibration is central to obtaining a quantitative image-to-physical-space mapping from stereo images acquired in the operating room (OR). A practical challenge for cameras mounted to the operating microscope is maintenance of image calibration as the surgeon's field-of-view is repeatedly changed (in terms of zoom and focal settings) throughout a procedure. Here, we present an efficient method for sustaining a quantitative image-to-physical space relationship for arbitrary image acquisition settings (S) without the need for camera re-calibration. Essentially, we warp images acquired at S into the equivalent data acquired at a reference setting, S(0), using deformation fields obtained with optical flow by successively imaging a simple phantom. Closed-form expressions for the distortions were derived from which 3D surface reconstruction was performed based on the single calibration at S(0). The accuracy of the reconstructed surface was 1.05 mm and 0.59 mm along and perpendicular to the optical axis of the operating microscope on average, respectively, for six phantom image pairs, and was 1.26 mm and 0.71 mm for images acquired with a total of 47 arbitrary settings during three clinical cases. The technique is presented in the context of stereovision; however, it may also be applicable to other types of video image acquisitions (e.g., endoscope) because it does not rely on any a priori knowledge about the camera system itself, suggesting the method is likely of considerable significance.
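The key operation, warping an image acquired at an arbitrary setting S back into the geometry of the reference setting S(0) through an optical-flow deformation field, can be prototyped with OpenCV. The sketch below uses Farnebäck flow and invented file names, as an illustration of the idea rather than the authors' closed-form method:

```python
import cv2
import numpy as np

ref = cv2.imread("phantom_S0.png", cv2.IMREAD_GRAYSCALE)  # reference setting S(0)
cur = cv2.imread("phantom_S.png", cv2.IMREAD_GRAYSCALE)   # arbitrary setting S

# Dense optical flow from the reference image to the current image
flow = cv2.calcOpticalFlowFarneback(ref, cur, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Sampling the current image at (x + u, y + v) pulls it back into S(0) geometry
h, w = ref.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)
warped = cv2.remap(cur, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("phantom_S_warped_to_S0.png", warped)
```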
27. Cabrilo I, Bijlenga P, Schaller K. Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir (Wien) 2014;156:1769-74. PMID: 25037466. DOI: 10.1007/s00701-014-2183-9.
Abstract
BACKGROUND Augmented reality technology has been used for intraoperative image guidance through the overlay of virtual images, from preoperative imaging studies, onto the real-world surgical field. Although setups based on augmented reality have been used for various neurosurgical pathologies, very few cases have been reported for the surgery of arteriovenous malformations (AVM). We present our experience with AVM surgery using a system designed to inject virtual images into the operating microscope's eyepiece, and discuss why augmented reality may be less appealing in this form of surgery. METHODS Five patients underwent AVM resection assisted by augmented reality. Virtual three-dimensional models of patients' heads, skulls, AVM nidi, and feeder and drainage vessels were selectively segmented and injected into the microscope's eyepiece for intraoperative image guidance, and their usefulness was assessed in each case. RESULTS Although the setup helped in performing tailored craniotomies, guiding dissection, and localizing drainage veins, it did not provide the surgeon with useful information on feeder arteries, owing to the complexity of AVM angioarchitecture. CONCLUSION The difficulty of intraoperatively conveying useful information on feeder vessels may make augmented reality a less engaging tool in this form of surgery, and might explain its underrepresentation in the literature. Integrating an AVM's hemodynamic characteristics into the augmented rendering could make it better suited to AVM surgery.
Affiliation(s)
- Ivan Cabrilo
- Neurosurgery Division, Department of Clinical Neurosciences, Faculty of Medicine, Geneva University Medical Center, Rue Gabrielle-Perret-Gentil 4, 1211 Genève 14, Switzerland
28. Kumar AN, Miga MI, Pheiffer TS, Chambless LB, Thompson RC, Dawant BM. Persistent and automatic intraoperative 3D digitization of surfaces under dynamic magnifications of an operating microscope. Med Image Anal 2014;19:30-45. PMID: 25189364. DOI: 10.1016/j.media.2014.07.004.
Abstract
One of the major challenges impeding advancement in image-guided surgical (IGS) systems is soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient's preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for the tissue deformations include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to the surgical workflow. Several modalities such as textured laser scanners, conoscopic holography, and stereo-pair cameras have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (∼1 h) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 clinical videos of the entire brain tumor surgery. When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root mean square errors (surface-to-surface distance) in the 0.28-0.81 mm range on the phantom object and in the 0.54-1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient's preoperative images and facilitate active surgical guidance.
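As a rough sketch of rectified-stereo surface reconstruction under a changing zoom, assuming (more crudely than the paper's model) that magnification only rescales the effective focal length; all parameter values and names are illustrative:

```python
import cv2
import numpy as np

def reconstruct_surface(left, right, Q, magnification=1.0):
    """Dense 3D surface from a rectified stereo pair; Q comes from a one-time
    stereo calibration, magnification from an online estimator."""
    gl = cv2.cvtColor(left,  cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5,
                                 uniquenessRatio=10)
    disp = sgbm.compute(gl, gr).astype(np.float32) / 16.0  # SGBM is fixed-point
    Qm = Q.copy()
    Qm[2, 3] *= magnification  # zoom modeled as a focal-length rescaling
    return cv2.reprojectImageTo3D(disp, Qm)
```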
Affiliation(s)
- Ankur N Kumar: Vanderbilt University, Department of Electrical Engineering, Nashville, TN 37235, USA
- Michael I Miga: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37235, USA
- Thomas S Pheiffer: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37235, USA
- Lola B Chambless: Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, TN 37232, USA
- Reid C Thompson: Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, TN 37232, USA
- Benoit M Dawant: Vanderbilt University, Department of Electrical Engineering, Nashville, TN 37235, USA

29
Ji S, Fan X, Roberts DW, Hartov A, Paulsen KD. Cortical surface shift estimation using stereovision and optical flow motion tracking via projection image registration. Med Image Anal 2014; 18:1169-83. [PMID: 25077845 DOI: 10.1016/j.media.2014.07.001] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2012] [Revised: 07/03/2014] [Accepted: 07/03/2014] [Indexed: 10/25/2022]
Abstract
Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained following spatial mapping inversion to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases: 10 of which involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and 8 others, 4 of which involved stereo image acquisitions at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7-2.1 mm relative to those determined with the tracked stylus probe. The agreement in feature displacement tracking was also comparable to tracked probe data (difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3-24.4 mm) in all patient cases, with the displacement component along gravity being 5.2 ± 6.0 mm relative to the lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficient (typically ∼15 s) for applications in the OR.
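The distinctive geometric step is building the local frame whose z-axis is the average surface normal, so the textured 3D surface can be flattened into a 2D image for flow-based tracking. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def projection_frame(normals):
    """Orthonormal frame whose z-axis is the mean surface normal (rows are
    the local x, y, z axes expressed in world coordinates)."""
    z = normals.mean(axis=0)
    z /= np.linalg.norm(z)
    seed = np.array([1.0, 0.0, 0.0])
    if abs(seed @ z) > 0.9:            # avoid a seed parallel to z
        seed = np.array([0.0, 1.0, 0.0])
    x = seed - (seed @ z) * z          # Gram-Schmidt step
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

def project_to_plane(points, frame):
    """World points -> in-plane (u, v) coordinates plus height along z."""
    local = points @ frame.T
    return local[:, :2], local[:, 2]
```

Dense 2D optical flow between two such projection images then yields displacement vectors whose endpoints are mapped back onto the respective 3D surfaces, giving the full 3D cortical shift without explicit feature tracking.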
Affiliation(s)
- Songbai Ji: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA; Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA
- Xiaoyao Fan: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- David W Roberts: Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA; Dartmouth Hitchcock Medical Center, Lebanon, NH 03756, USA
- Alex Hartov: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Keith D Paulsen: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA; Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA

30
Pratt P, Bergeles C, Darzi A, Yang GZ. Practical intraoperative stereo camera calibration. Med Image Comput Comput Assist Interv 2014; 17:667-75. [PMID: 25485437 DOI: 10.1007/978-3-319-10470-6_83] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.
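The one-degree-of-freedom idea can be caricatured with a scalar focus parameter t: offline calibrations across the focus range pin down, say, focal length as a smooth function f(t), and a single intraoperative observation then determines t, and with it the full calibration. A toy sketch (the polynomial model and all names are our assumptions, not the paper's parameterization):

```python
import numpy as np

def fit_focus_model(ts, fs, order=2):
    """Offline: fit f(t) from calibrations at a sweep of focus settings."""
    return np.polyfit(ts, fs, order)

def solve_focus(coeffs, f_observed, t_range):
    """Online: recover the single focus parameter from one observation."""
    poly = np.poly1d(coeffs)
    roots = (poly - f_observed).roots
    real = roots[np.isreal(roots)].real
    valid = real[(real >= t_range[0]) & (real <= t_range[1])]
    return float(valid[0])  # assume one root within the calibrated range
```

Here f_observed would come from a single view of the reference geometry; with t recovered, every other intrinsic and extrinsic parameter follows from its own fitted curve.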
31
Rodriguez Palma S, Becker BC, Lobes LA, Riviere CN. Comparative evaluation of monocular augmented-reality display for surgical microscopes. Annu Int Conf IEEE Eng Med Biol Soc 2012:1409-12. [PMID: 23366164 DOI: 10.1109/embc.2012.6346203] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.
32
Linte CA, Davenport KP, Cleary K, Peters C, Vosburgh KG, Navab N, Edwards PE, Jannin P, Peters TM, Holmes DR, Robb RA. On mixed reality environments for minimally invasive therapy guidance: systems architecture, successes and challenges in their implementation from laboratory to clinic. Comput Med Imaging Graph 2013; 37:83-97. [PMID: 23632059 PMCID: PMC3796657 DOI: 10.1016/j.compmedimag.2012.12.002] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2012] [Revised: 11/16/2012] [Accepted: 12/24/2012] [Indexed: 11/21/2022]
Abstract
Mixed reality environments for medical applications have been explored and developed over the past three decades in an effort to enhance the clinician's view of anatomy and facilitate the performance of minimally invasive procedures. These environments must faithfully represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical instrument tracking, and display technology into a common framework centered around and registered to the patient. However, in spite of their reported benefits, few mixed reality environments have been successfully translated into clinical use. Several challenges that contribute to the difficulty in integrating such environments into clinical practice are presented here and discussed in terms of both technical and clinical limitations. This article should raise awareness among both developers and end-users toward facilitating a greater application of such environments in the surgical practice of the future.
33
Thompson S, Penney G, Billia M, Challacombe B, Hawkes D, Dasgupta P. Design and evaluation of an image-guidance system for robot-assisted radical prostatectomy. BJU Int 2013; 111:1081-90. [DOI: 10.1111/j.1464-410x.2012.11692.x] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Stephen Thompson: Centre for Medical Image Computing, University College London, London, UK
- Graeme Penney: Interdisciplinary Medical Imaging Group, King's College London, London, UK
- Michele Billia: MRC Centre for Transplantation, NIHR Biomedical Research Centre, King's Health Partners, Guy's Hospital, London, UK
- Ben Challacombe: MRC Centre for Transplantation, NIHR Biomedical Research Centre, King's Health Partners, Guy's Hospital, London, UK
- David Hawkes: Centre for Medical Image Computing, University College London, London, UK
- Prokar Dasgupta: MRC Centre for Transplantation, NIHR Biomedical Research Centre, King's Health Partners, Guy's Hospital, London, UK

34
Thompson S, Penney G, Dasgupta P, Hawkes D. Improved modelling of tool tracking errors by modelling dependent marker errors. IEEE Trans Med Imaging 2013; 32:165-177. [PMID: 22961298 DOI: 10.1109/tmi.2012.2216890] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Accurate understanding of equipment tracking error is essential for decision making in image guided surgery. For tools tracked using markers attached to a rigid body, existing error estimation methods use the assumption that the individual marker errors are independent random variables. This assumption is not valid for all tracking systems. This paper presents a method to estimate a more accurate tracking error function, consisting of a systematic and random component. The proposed method does not require detailed knowledge of the tracking system physics. Results from a pointer calibration are used to demonstrate that the proposed method provides a better match to observed results than the existing state of the art. A simulation of the pointer calibration process is then used to show that existing methods can underestimate the pointer calibration error by a factor of two. A further simulation of laparoscopic camera tracking is used to show that existing methods cannot model important variations in system performance due to the angular arrangement of the tracking markers. By arranging the markers such that the systematic errors are nearly identical for all markers, the rotational component of the tracking error can be reduced, resulting in a significant reduction in target tracking errors.
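The effect of a shared systematic component is easy to reproduce in simulation. In the sketch below (illustrative geometry and error magnitudes, with a standard Kabsch fit for the rigid pose), a bias common to all markers leaves the fitted rotation essentially untouched:

```python
import numpy as np

rng = np.random.default_rng(0)

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping Nx3 src onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

markers = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [25, 25, 40]], float)
bias = np.array([0.0, 0.0, 0.3])      # systematic error shared by all markers
measured = markers + bias + rng.normal(0.0, 0.1, markers.shape)
R, t = kabsch(markers, measured)
tip = np.array([0.0, 0.0, -150.0])    # tool tip far from the marker body
print("tip error [mm]:", np.linalg.norm(R @ tip + t - tip))
```

A shared bias shifts the fitted pose as a whole but barely perturbs the rotation, so the error at the tip stays near the bias magnitude; an independent-error model would instead let rotational error be amplified by the 150 mm lever arm, which is the mismatch the paper quantifies.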
Affiliation(s)
- Stephen Thompson: Centre for Medical Image Computing, University College London, London, UK

35
Kockro RA, Reisch R, Serra L, Goh LC, Lee E, Stadie AT. Image-Guided Neurosurgery With 3-Dimensional Multimodal Imaging Data on a Stereoscopic Monitor. Neurosurgery 2013; 72 Suppl 1:78-88. [DOI: 10.1227/neu.0b013e3182739aae] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
36
Sauer F. Image registration: enabling technology for image guided surgery and therapy. Annu Int Conf IEEE Eng Med Biol Soc 2005:7242-5. [PMID: 17281951 DOI: 10.1109/iembs.2005.1616182] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Imaging looks inside the patient's body, exposing the patient's anatomy beyond what is visible on the surface. Medical imaging has a very successful history in medical diagnosis. It also plays an increasingly important role as an enabling technology for minimally invasive procedures. Interventional procedures (e.g., catheter-based cardiac interventions) are traditionally supported by intra-procedure imaging (X-ray fluoroscopy, ultrasound). There is real-time feedback, but the images provide limited information. Surgical procedures are traditionally supported with preoperative images (CT, MR). The image quality can be very good; however, the link between images and patient has been lost. For both cases, image registration can play an essential role: augmenting intra-op images with pre-op images, and mapping pre-op images to the patient's body. We present examples of both approaches from an application-oriented perspective, covering electrophysiology, radiation therapy, and neurosurgery. Ultimately, as the boundaries between interventional radiology and surgery blur, the different methods for image guidance will also merge. Image guidance will draw upon a combination of pre-op and intra-op imaging together with magnetic or optical tracking systems, and enable precise minimally invasive procedures. The information is registered into a common coordinate system, and allows advanced methods for visualization such as augmented reality or advanced methods for therapy delivery such as robotics.
37
A Realistic Test and Development Environment for Mixed Reality in Neurosurgery. Augmented Environments for Computer-Assisted Interventions 2012. [DOI: 10.1007/978-3-642-32630-1_2] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
38
Abstract
Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. However, minimally invasive surgery increases the operative difficulty since depth perception is usually dramatically reduced, the field of view is limited, and the sense of touch is transmitted by an instrument. These drawbacks can currently be reduced by computer technology guiding the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), Augmented Reality (AR) can augment the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization on the real patient. 3D visualization can be performed directly from the medical image without the need for a pre-processing step thanks to volume rendering, but better results are obtained with surface rendering after organ and pathology delineation and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology. They also show the currently limited interactivity due to soft-organ movement and interaction between surgical instruments and organs. Although current automatic AR systems show the feasibility of the approach, they still rely on specific and expensive equipment that is not available in clinical routine. Moreover, they are not yet robust enough, owing to the high complexity of developing a real-time registration that takes organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that AR will become a standard requirement for future computer-assisted surgical oncology. In this article, we explain the concept of AR and its principles. Then, we review the existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we discuss the future evolutions and the issues that still have to be tackled so that this technology can be seamlessly integrated in the operating room.
Affiliation(s)
- Stéphane Nicolau: IRCAD/EITS, Hôpitaux Universitaires de Strasbourg, Digestive and Endocrine Surgery, 1 Place de l'Hôpital, 67091 Strasbourg Cedex, France

39
Salah Z, Preim B, Elolf E, Franke J, Rose G. Improved Navigated Spine Surgery Utilizing Augmented Reality Visualization. Bildverarbeitung für die Medizin 2011. [DOI: 10.1007/978-3-642-19335-4_66] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
40
Halic T, Kockara S, Bayrak C, Rowe R. Mixed reality simulation of rasping procedure in artificial cervical disc replacement (ACDR) surgery. BMC Bioinformatics 2010; 11 Suppl 6:S11. [PMID: 20946594 PMCID: PMC3026358 DOI: 10.1186/1471-2105-11-s6-s11] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Until quite recently, spinal disorder problems in the U.S. have been treated by fusing cervical vertebrae instead of replacing the cervical disc with an artificial disc. Cervical disc replacement is a recently approved procedure in the U.S. It is one of the most challenging surgical procedures in the medical field due to the deficiencies in available diagnostic tools and the insufficient number of surgical practices. For physicians and surgical instrument developers, it is critical to understand how to successfully deploy the new artificial disc replacement systems. Without proper understanding and practice of the deployment procedure, it is possible to injure the vertebral body. Mixed reality (MR) and virtual reality (VR) surgical simulators are becoming an indispensable part of physicians' training, since they offer a risk-free training environment. In this study, the MR simulation framework and the intricacies involved in the development of an MR simulator for the rasping procedure in artificial cervical disc replacement (ACDR) surgery are investigated. The major components that make up the MR surgical simulator with a motion tracking system are addressed. FINDINGS A mixed reality surgical simulator that targets the rasping procedure in artificial cervical disc replacement surgery with a VICON motion tracking system was developed. There were several challenges in the development of the MR surgical simulator. First, the assembly of different hardware components for surgical simulation development involves knowledge and application of interdisciplinary fields such as signal processing, computer vision and graphics, along with the design and placement of sensors. The second challenge was the creation of a physically correct model of the rasping procedure in order to attain critical forces. This challenge was handled with finite element modeling. The third challenge was minimization of error in mapping movements of an actor in the real domain to a virtual model, in a process called registration. This issue was overcome by a two-way (virtual object to real domain and real domain to virtual object) semi-automatic registration method. CONCLUSIONS The applicability of the VICON MR setting for the ACDR surgical simulator is demonstrated. The mainstream problems encountered in MR surgical simulator development are addressed. First, an effective environment for MR surgical development is constructed. Second, the strain and stress intensities and critical forces are simulated under various rasp instrument loadings, with impacts applied on the intervertebral surfaces of the anterior vertebrae throughout the rasping procedure. Third, two approaches are introduced to solve the registration problem in the MR setting. Results show that our system creates an effective environment for surgical simulation development and solves tedious and time-consuming registration problems caused by misalignments. Further, the MR ACDR surgery simulator was tested by 5 different physicians, who found that the MR simulator is effective enough to teach the anatomical details of cervical discs and to convey the basics of ACDR surgery and the rasping procedure.
Affiliation(s)
- Tansel Halic: Mechanical, Aerospace and Nuclear Engineering Department, Rensselaer Polytechnic Institute, Troy, New York, USA

41
Navab N, Heining SM, Traub J. Camera augmented mobile C-arm (CAMC): calibration, accuracy study, and clinical applications. IEEE Trans Med Imaging 2010; 29:1412-1423. [PMID: 20659830 DOI: 10.1109/tmi.2009.2021947] [Citation(s) in RCA: 77] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
The mobile C-arm is an essential tool in everyday trauma and orthopedics surgery. Minimally invasive solutions, based on X-ray imaging and coregistered external navigation, have created considerable interest within the surgical community and started to replace traditional open surgery for many procedures. These solutions usually increase accuracy and reduce trauma. In general, they introduce new hardware into the OR and add the line-of-sight constraints imposed by optical tracking systems. They thus impose radical changes to the surgical setup and overall procedure. We augment a commonly used mobile C-arm with a standard video camera and a double-mirror system allowing real-time fusion of optical and X-ray images. The video camera is mounted such that its optical center virtually coincides with the C-arm's X-ray source. After a one-time calibration routine, the acquired X-ray and optical images are coregistered. This paper describes the design of such a system, quantifies its technical accuracy, and provides a qualitative proof of its efficiency through cadaver studies conducted by trauma surgeons. In particular, it studies the relevance of this system for surgical navigation within pedicle screw placement, vertebroplasty, and intramedullary nail locking procedures. The image overlay provides an intuitive interface for surgical guidance with an accuracy of <1 mm, ideally with the use of only a single X-ray image. The new system is smoothly integrated into the clinical application with no additional hardware, especially for down-the-beam instrument guidance based on the anteroposterior oblique view, where the instrument axis is aligned with the X-ray source. Throughout all experiments, the camera augmented mobile C-arm system proved to be an intuitive and robust guidance solution for selected clinical routines.
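Because the video camera's optical center virtually coincides with the X-ray source, the two images are related by a fixed homography once calibrated. A minimal fusion sketch (the point pairs and blending weight are illustrative, not the system's calibration data):

```python
import cv2
import numpy as np

# Corresponding points seen in both modalities (values are illustrative);
# four or more pairs fix the homography.
xray_pts  = np.float32([[102, 88], [410, 95], [398, 300], [120, 310]])
video_pts = np.float32([[ 98, 91], [405, 99], [395, 297], [118, 305]])
H, _ = cv2.findHomography(xray_pts, video_pts, cv2.RANSAC)

def fuse(video_frame, xray_image, alpha=0.4):
    """Blend the warped X-ray into the live optical view."""
    warped = cv2.warpPerspective(xray_image, H,
                                 (video_frame.shape[1], video_frame.shape[0]))
    return cv2.addWeighted(video_frame, 1.0 - alpha, warped, alpha, 0.0)
```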
Affiliation(s)
- Nassir Navab: Chair for Computer Aided Medical Procedures, Technische Universität München, 80333 München, Germany

42
Xu Y, Higgins EC, Xiao M, Pomplun M. Mapping the Color Space of Saccadic Selectivity in Visual Search. Cogn Sci 2010; 31:877-87. [DOI: 10.1080/03640210701530789] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
43
Kockro RA, Tsai YT, Ng I, Hwang P, Zhu C, Agusanto K, Hong LX, Serra L. Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery 2010; 65:795-807; discussion 807-8. [PMID: 19834386 DOI: 10.1227/01.neu.0000349918.36700.1c] [Citation(s) in RCA: 58] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
OBJECTIVE We developed an augmented reality system that enables intraoperative image guidance by using 3-dimensional (3D) graphics overlaid on a video stream. We call this system DEX-Ray and report on its development and the initial intraoperative experience in 12 cases. METHODS DEX-Ray consists of a tracked handheld probe that integrates a lipstick-size video camera. The camera looks over the probe's tip into the surgical field. The camera's video stream is augmented with coregistered, multimodality 3D graphics and landmarks obtained during neurosurgical planning with 3D workstations. The handheld probe functions as a navigation device to view and point and as an interaction device to adjust the 3D graphics. We tested the system's accuracy in the laboratory and evaluated it intraoperatively with a series of tumor and vascular cases. RESULTS DEX-Ray provided accurate and real-time video-based augmented reality display. The system could be seamlessly integrated into the surgical workflow. The see-through effect revealing 3D information below the surgically exposed surface proved to be of significant value, especially during the macroscopic phase of an operation, providing easily understandable structural navigational information. Navigation in deep and narrow surgical corridors was limited by the camera resolution and light sensitivity. CONCLUSION The system was perceived as an improved navigational experience because the augmented see-through effect allowed direct understanding of the surgical anatomy beyond the visible surface and direct guidance toward surgical targets.
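Conceptually, the overlay reduces to projecting patient-registered 3D planning landmarks through the tracked camera's model at every frame. A minimal sketch (camera intrinsics and pose are placeholders that the tracking system would supply; this is not the DEX-Ray code):

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # placeholder intrinsics of the
              [0.0, 800.0, 240.0],    # probe's miniature camera
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)                    # assume negligible lens distortion

def overlay_landmarks(frame, landmarks_3d, rvec, tvec):
    """Draw patient-registered 3D landmarks into the live probe video.
    rvec/tvec: camera pose in patient space, supplied by the tracker."""
    pts, _ = cv2.projectPoints(np.float32(landmarks_3d), rvec, tvec, K, dist)
    for u, v in pts.reshape(-1, 2).astype(int):
        if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
            cv2.circle(frame, (int(u), int(v)), 4, (0, 255, 0), -1)
    return frame
```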
Affiliation(s)
- Ralf A Kockro: Department of Neurosurgery, University Hospital Zürich, Zürich, Switzerland

44
Fichtinger G, Deguet A, Fischer G, Iordachita I, Balogh E, Masamune K, Taylor RH, Fayad LM, de Oliveira M, Zinreich SJ. Image overlay for CT-guided needle insertions. Comput Aided Surg 2005; 10:241-55. [PMID: 16393793 DOI: 10.3109/10929080500230486] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE We present a 2D image overlay device to assist needle placement on computed tomography (CT) scanners. MATERIALS AND METHODS The system consists of a flat display and a semitransparent mirror mounted on the gantry. When the physician looks at the patient through the mirror, the CT image appears to be floating inside the body with correct size and position as if the physician had 2D 'X-ray vision'. The physician draws the optimal path on the CT image. The composite image is rendered on the display and thus reflected in the mirror. The reflected image is used to guide the physician in the procedure. In this article, we describe the design and various embodiments of the 2D image overlay system, followed by the results of phantom and cadaver experiments in multiple clinical applications. RESULTS Multiple skeletal targets were successfully accessed with one insertion attempt. Generally, successful access was recorded on liver targets when a clear path opened, but the number of attempts and accuracy showed variability because of occasional lack of access. Soft tissue deformation further reduced the accuracy and consistency in comparison to skeletal targets. CONCLUSION The system demonstrated strong potential for reducing faulty needle insertion attempts, thereby reducing X-ray dose and patient discomfort.
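Since the mirror reverses the displayed image, the monitor must render a mirrored, scaled slice so that its reflection coincides with the patient. A toy sketch of that display-side transform (the affine matrix stands in for the device's one-time calibration and is not taken from the paper):

```python
import cv2
import numpy as np

# 2x3 affine standing in for the device's one-time display calibration:
# horizontal mirroring plus scale and shift so the reflection lines up
# with the patient on the CT table.
A = np.float32([[-1.2, 0.0, 640.0],
                [ 0.0, 1.2,  20.0]])

def render_for_mirror(ct_slice, display_size=(640, 480)):
    """Mirrored, scaled slice to show on the gantry-mounted display."""
    return cv2.warpAffine(ct_slice, A, display_size)
```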
45

46
Liao H, Ishihara H, Tran HH, Masamune K, Sakuma I, Dohi T. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay. Comput Med Imaging Graph 2010; 34:46-54. [DOI: 10.1016/j.compmedimag.2009.07.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2009] [Revised: 05/21/2009] [Accepted: 07/16/2009] [Indexed: 10/20/2022]
47
Figl M, Rueckert D, Hawkes D, Casula R, Hu M, Pedro O, Zhang DP, Penney G, Bello F, Edwards P. Image guidance for robotic minimally invasive coronary artery bypass. Comput Med Imaging Graph 2010; 34:61-8. [DOI: 10.1016/j.compmedimag.2009.08.002] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2009] [Revised: 07/25/2009] [Accepted: 08/07/2009] [Indexed: 11/16/2022]
48
Liao H, Tsuzuki M, Mochizuki T, Kobayashi E, Chiba T, Sakuma I. Fast image mapping of endoscopic image mosaics with three-dimensional ultrasound image for intrauterine fetal surgery. Minim Invasive Ther Allied Technol 2009; 18:332-40. [DOI: 10.3109/13645700903201217] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
49
Deguchi D, Mori K, Feuerstein M, Kitasaka T, Maurer CR Jr, Suenaga Y, Takabatake H, Mori M, Natori H. Selective image similarity measure for bronchoscope tracking based on image registration. Med Image Anal 2009; 13:621-33. [DOI: 10.1016/j.media.2009.06.001] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2008] [Revised: 05/29/2009] [Accepted: 06/02/2009] [Indexed: 10/20/2022]
50
Abstract
The intraoperative need for exact orientation during interventions in the paranasal sinuses and the augmented need for navigational aids in lateral skull base surgery have led to the development of computer-aided tools during the last fifteen years. These tools, which provide the position of a tool or a pointer in the patient's preoperative radiologic imaging, have quickly gained wide acceptance for revision surgeries and the surgical treatment of complex pathologies in Ear, Nose and Throat (ENT) surgery. Currently, the use of such systems is spreading from academic centers to smaller hospitals and will become a standard tool in the near future. We review the present state of computer-aided surgery (CAS) systems, based on our long experience as clinical and research centers in the field, provide some technological background information and, based on selected cases, show the merits of this technology. The systems we have been working with cover a wide variety of intraoperative navigational systems in ENT surgery (Easy Guide, MedScan II, MKM, SNN, STN, SurgiGATE ORL, Treon, VectorVision, Viewing Wand [without claiming completeness]), and virtually the whole area of ENT surgeries: macroscopic, (video-)endoscopic and microscopic procedures. The 3D tracking technologies involved cover mechanical, optical (active and passive), magnetic and robotic principles. The visualization tools used are computer monitors, video monitors, head-up displays and the microscope's oculars, thus spanning the area from pointer systems to real navigators and a surgical telepresence demonstrator, implementing the majority of available patient-to-image referencing strategies. Clinically, the systems can be operated with an acceptable accuracy of around 1 mm, whereas in laboratory settings and in cadaver studies application accuracy may be pushed to its limits: the physical resolution of the radiologic imaging used for navigation.