1. Evans M, Kang S, Bajaber A, Gordon K, Martin C. Augmented Reality for Surgical Navigation: A Review of Advanced Needle Guidance Systems for Percutaneous Tumor Ablation. Radiol Imaging Cancer 2025;7:e230154. PMID: 39750112; PMCID: PMC11791678; DOI: 10.1148/rycan.230154.

Abstract
Percutaneous tumor ablation has become a widely accepted and used treatment option for both soft- and hard-tissue malignancies. The current standard-of-care techniques for these minimally invasive procedures require providers to navigate a needle to the intended target under two-dimensional (2D) US or CT guidance to obtain a complete local response. These traditional image-guidance systems require operators to mentally transpose what is visualized on a 2D screen into the inherently three-dimensional (3D) context of human anatomy. Advanced navigation systems designed specifically for percutaneous needle-based procedures often fuse multiple imaging modalities to provide greater situational awareness and planned needle trajectories for the avoidance of critical structures. However, many of these advanced systems still require mental transposition of anatomy from a 2D screen to the patient. Augmented reality (AR)-based systems have the potential to provide a 3D view of the patient's anatomy, eliminating the need for mental transposition by the operator. The purpose of this article is to review commercially available advanced percutaneous surgical navigation platforms and discuss the current state of AR-based navigation systems, including their potential benefits, challenges for adoption, and future developments. Keywords: Computer Applications-Virtual Imaging, Technology Assessment, Augmented Reality, Surgical Navigation, Percutaneous Ablation, Interventional Radiology. ©RSNA, 2025.

Affiliations
- From the Department of Clinical Affairs, MediView XR, Cleveland, Ohio (M.E.); College of Medicine, Alfaisal University, Riyadh, Saudi Arabia (A.B.); and Department of Diagnostic Radiology, Section of Interventional Radiology, Cleveland Clinic Foundation, 9500 Euclid Ave, Cleveland, OH 44195-5243 (S.K., K.G., C.M.)
2. Asadi Z, Asadi M, Kazemipour N, Léger É, Kersten-Oertel M. A decade of progress: bringing mixed reality image-guided surgery systems in the operating room. Comput Assist Surg (Abingdon) 2024;29:2355897. PMID: 38794834; DOI: 10.1080/24699322.2024.2355897.

Abstract
Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review on MR IGS systems. In addition to examining the surgical domains currently using MR systems, we explore trends in the types of MR hardware used, the types of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.

Affiliations
- Zahra Asadi, Mehrdad Asadi, Negar Kazemipour, Marta Kersten-Oertel: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Étienne Léger: Montréal Neurological Institute & Hospital (MNI/H) and McGill University, Montréal, Canada
3. Li B, Wei H, Yan J, Wang X. A novel portable augmented reality surgical navigation system for maxillofacial surgery: technique and accuracy study. Int J Oral Maxillofac Surg 2024;53:961-967. PMID: 38839534; DOI: 10.1016/j.ijom.2024.02.007.

Abstract
Surgical navigation, despite its potential benefits, faces challenges to widespread adoption in clinical practice. Possible reasons include high cost, increased surgery time, attention shifts during surgery, and the mental task of mapping from the monitor to the patient. To address these challenges, a portable, all-in-one surgical navigation system using augmented reality (AR) was developed, and its feasibility and accuracy were investigated. The system achieves AR visualization by capturing a live video stream of the actual surgical field with a visible-light camera and merging it with preoperative virtual images. A skull model with reference spheres was used to evaluate accuracy. After registration, virtual models were overlaid on the real skull model, and the discrepancies between the centres of the real spheres and the virtual model were measured to assess the AR visualization accuracy. The system demonstrated precise AR visualization, with an overall overlap error of 0.53 ± 0.21 mm. By seamlessly integrating the preoperative virtual plan with the intraoperative field of view in a single view, this novel AR navigation system could provide a feasible solution for using AR visualization to guide the surgeon in performing the operation as planned.

Affiliations
- B Li, H Wei, J Yan, X Wang: Department of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine; Shanghai Key Laboratory of Stomatology; National Clinical Research Center of Stomatology, Shanghai, China
4. Alizadeh M, Xiao Y, Kersten-Oertel M. Virtual and Augmented Reality in Ventriculostomy: A Systematic Review. World Neurosurg 2024;189:90-107. PMID: 38823448; DOI: 10.1016/j.wneu.2024.05.151.

Abstract
BACKGROUND: Ventriculostomy, one of the most common neurosurgical procedures, involves inserting a draining catheter into the brain's ventricular system to alleviate excessive cerebrospinal fluid accumulation. Traditionally, this procedure has relied on freehand techniques guided by anatomical landmarks, which have shown a high rate of misplacement. Recent advancements in virtual reality (VR) and augmented reality (AR) technologies have opened up new possibilities in the field. This comprehensive review aims to analyze the existing literature, examine the diverse applications of VR and AR in ventriculostomy procedures, address their limitations, and propose potential future directions.
METHODS: A systematic search was conducted in the Web of Science and PubMed databases to identify studies employing VR and AR technologies in ventriculostomy procedures. Review papers, non-English records, studies unrelated to VR/AR technologies in ventriculostomy, and supplementary documents were excluded. In total, 29 papers were included in the review.
RESULTS: The various VR and AR systems developed to enhance the ventriculostomy procedure are categorized according to the Data, Visualization and View taxonomy. The study investigates the data utilized by these systems, the visualizations employed, and the virtual or augmented environments created. Furthermore, the surgical scenarios and applications of each method, as well as the validation and evaluation metrics used, are discussed.
DISCUSSION: The review delves into the fundamental challenges encountered in implementing VR and AR systems in ventriculostomy. Additionally, potential future directions and areas for improvement are proposed, addressing the identified limitations and paving the way for further advancements in the field.

Affiliations
- Maryam Alizadeh, Yiming Xiao, Marta Kersten-Oertel: Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec, Canada
5. Doughty M, Ghugre NR, Wright GA. Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery. J Imaging 2022;8(7):203. PMID: 35877647; PMCID: PMC9318659; DOI: 10.3390/jimaging8070203.

Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR)-assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy describing the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial (n=8) surgeries. For preoperative input data, computed tomography (CT) (n=34) and surface-rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly superimposed directly on the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy on the order of 2–5 mm has been demonstrated in phantom models, several human factors and technical challenges—perception, ease of use, context, interaction, and occlusion—remain to be addressed prior to widespread adoption of OST-HMD-led surgical navigation.

Affiliations
- Mitchell Doughty (corresponding author): Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada; Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Nilesh R. Ghugre, Graham A. Wright: Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada; Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
6. Puladi B, Ooms M, Bellgardt M, Cesov M, Lipprandt M, Raith S, Peters F, Möhlhenrich SC, Prescher A, Hölzle F, Kuhlen TW, Modabber A. Augmented Reality-Based Surgery on the Human Cadaver Using a New Generation of Optical Head-Mounted Displays: Development and Feasibility Study. JMIR Serious Games 2022;10:e34781. PMID: 35468090; PMCID: PMC9086879; DOI: 10.2196/34781.

Abstract
Background: Although nearly one-third of the world's disease burden requires surgical care, only a small proportion of digital health applications are used directly in the surgical field. In the coming decades, the application of augmented reality (AR) with a new generation of optical see-through head-mounted displays (OST-HMDs), such as the HoloLens (Microsoft Corp), has the potential to bring digital health into the surgical field. However, before such an application can be used on a living person, proof of performance must first be provided because of regulatory requirements. In this regard, cadaver studies could provide initial evidence.
Objective: The goal of this research was to develop an open-source system for AR-based surgery on human cadavers using freely available technologies.
Methods: We tested our system in an easy-to-understand scenario in which fractured zygomatic arches of the face had to be repositioned, with visual and auditory feedback provided to the investigators through a HoloLens. Results were verified with postoperative imaging and assessed in a blinded fashion by 2 investigators. The developed system and scenario were qualitatively evaluated by consensus interview and individual questionnaires.
Results: The development and implementation of our system was feasible and could be realized in the course of a cadaver study. The investigators found the AR system helpful for spatial perception, in addition to the combination of visual and auditory feedback. The surgical end point could be determined metrically as well as by assessment.
Conclusions: The development and application of an AR-based surgical system using freely available technologies to perform OST-HMD-guided surgical procedures on cadavers is feasible. Cadaver studies are suitable for OST-HMD-guided interventions to measure a surgical end point and to provide an initial data foundation for future clinical trials. The availability of free systems for researchers could aid the translation from digital health to AR-based surgery using OST-HMDs in the operating theater via cadaver studies.

Affiliations
- Behrus Puladi: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Mark Ooms, Stefan Raith, Florian Peters, Frank Hölzle, Ali Modabber: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Martin Bellgardt: Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- Mark Cesov: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- Myriam Lipprandt: Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Stephan Christian Möhlhenrich: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Department of Orthodontics, Private University of Witten/Herdecke, Witten, Germany
- Andreas Prescher: Institute of Molecular and Cellular Anatomy, University Hospital RWTH Aachen, Aachen, Germany
7. Birlo M, Edwards PJE, Clarkson M, Stoyanov D. Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review. Med Image Anal 2022;77:102361. PMID: 35168103; PMCID: PMC10466024; DOI: 10.1016/j.media.2022.102361.

Abstract
This article presents a systematic review of optical see-through head mounted display (OST-HMD) usage in augmented reality (AR) surgery applications from 2013 to 2020. Articles were categorised by: OST-HMD device, surgical speciality, surgical application context, visualisation content, experimental design and evaluation, accuracy, and human factors of human-computer interaction. In total, 91 articles fulfilled all the inclusion criteria. Some clear trends emerge. The Microsoft HoloLens increasingly dominates the field, with orthopaedic surgery being the most popular application (28.6%). By far the most common surgical context is surgical guidance (n=58), and segmented preoperative models dominate visualisation (n=40). Experiments mainly involve phantoms (n=43) or system setup (n=21), with patient case studies ranking third (n=19), reflecting the comparative infancy of the field. Experiments cover issues from registration to perception, with very different accuracy results. Human factors emerge as significant to OST-HMD utility. Some factors are addressed by the systems proposed, such as attention shift away from the surgical site and mental mapping of 2D images to 3D patient anatomy. Other persistent human factors remain or are caused by OST-HMD solutions, including ease of use, comfort, and spatial perception issues. The significant upward trend in published articles is clear, but such devices are not yet established in the operating room, and clinical studies showing benefit are lacking. A focused effort addressing technical registration and perceptual factors in the lab, coupled with design that incorporates human factors considerations to solve clear clinical problems, should ensure that the significant current research efforts will succeed.

Affiliations
- Manuel Birlo, P J Eddie Edwards, Matthew Clarkson, Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
8. Scherl C, Stratemeier J, Karle C, Rotter N, Hesser J, Huber L, Dias A, Hoffmann O, Riffel P, Schoenberg SO, Schell A, Lammert A, Affolter A, Männle D. Augmented reality with HoloLens in parotid surgery: how to assess and to improve accuracy. Eur Arch Otorhinolaryngol 2020;278:2473-2483. PMID: 32910225; DOI: 10.1007/s00405-020-06351-7.

Abstract
PURPOSE: Augmented reality improves the planning and execution of surgical procedures. The aim of this study was to evaluate the feasibility of a 3D augmented reality hologram in live parotid surgery. A further goal was to develop an accuracy-measuring instrument and to determine the accuracy of the system.
METHODS: We created software to build and manually align 2D and 3D augmented reality models generated from MRI data onto the patient during surgery using the HoloLens 1 (Microsoft Corporation, Redmond, USA). To assess the accuracy of the system, we developed a specific measuring tool based on a standard electromagnetic navigation device (Fiagon GmbH, Hennigsdorf, Germany).
RESULTS: The accuracy of our system was measured during real surgical procedures. Training of the experimenters and the use of fiducial markers significantly reduced the positional error of the holographic system (p = 0.0166 and p = 0.0132). Precision of the developed measuring system was high, with a mean error of the basic system of 1.3 mm. In the feedback evaluation, 86% of participants agreed or strongly agreed that the HoloLens will play a role in surgical education, and 80% agreed or strongly agreed that the HoloLens is feasible to introduce into clinical routine and will play a role in surgery in the future.
CONCLUSION: The use of fiducial markers and repeated training reduces the positional error between the hologram and the real structures. The developed measuring device, used with the Fiagon navigation system, is suitable for measuring the accuracy of holographic augmented reality images from the HoloLens.

Affiliations
- Claudia Scherl, Nicole Rotter, Lena Huber, Andre Dias, Oliver Hoffmann, Angela Schell, Anne Lammert, Annette Affolter, David Männle: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Johanna Stratemeier, Celine Karle, Jürgen Hesser: Institute of Experimental Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Philipp Riffel, Stefan O Schoenberg: Department of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
9. Mapping the intellectual structure of research on surgery with mixed reality: Bibliometric network analysis (2000-2019). J Biomed Inform 2020;109:103516. PMID: 32736125; DOI: 10.1016/j.jbi.2020.103516.

Abstract
OBJECTIVE: The purpose of this study is to survey research trends in surgery with mixed reality and to present the field's intellectual structure using bibliometric network analysis for the period 2000-2019.
METHODS: The analysis was implemented in four steps: (1) literature dataset acquisition from article databases (Web of Science, Scopus, PubMed, and the IEEE digital library); (2) dataset pre-processing and refinement; (3) network construction and visualization; and (4) analysis and interpretation. Descriptive analysis, bibliometric network analysis, and in-depth qualitative analysis were conducted.
RESULTS: In total, 14,591 keywords from 5897 abstracts were ultimately used to ascertain the intellectual structure of research on surgery with mixed reality. The evolution of keywords in the structure across the four periods is summarized in four aspects: (a) maintaining a predominant use as a training tool, (b) widening the clinical application area, (c) reallocating the continuum of mixed reality, and (d) steering advanced imaging and simulation technology.
CONCLUSIONS: The results of this study can provide valuable insights into technology adoption and research trends for mixed reality in surgery. These findings can help clinicians gain an overview of prospective medical research on surgery using mixed reality. Hospitals can also gauge the maturity of mixed reality technology in surgery over time, and the findings therefore sketch an academic landscape for decisions on adopting new technologies in surgery.
10. Tuladhar S, AlSallami N, Alsadoon A, Prasad PWC, Alsadoon OH, Haddad S, Alrubaie A. A recent review and a taxonomy for hard and soft tissue visualization-based mixed reality. Int J Med Robot 2020;16:1-22. PMID: 32388923; DOI: 10.1002/rcs.2120.

Abstract
BACKGROUND: Mixed reality (MR) visualization is gaining popularity in image-guided surgery (IGS) systems, especially for hard and soft tissue surgeries. However, few MR systems are implemented in real time. Several factors limit MR technology and make it difficult to set up and evaluate MR systems in real environments, including: end users are not considered, there are limitations in the operating room, and medical images are not fully integrated into the operative interventions.
METHODOLOGY: The purpose of this article is to use the Data, Visualization processing, and View (DVV) taxonomy to evaluate current MR systems. DVV includes all the components that must be considered and validated for MR used in hard and soft tissue surgeries. This taxonomy helps developers and end users, such as researchers and surgeons, enhance MR systems for the surgical field.
RESULTS: We evaluated, validated, and verified the taxonomy based on system comparison, completeness, and acceptance criteria. Around 24 state-of-the-art solutions related to MR visualization were selected and used to demonstrate and validate this taxonomy. Most of the findings were evaluated and the others validated.
CONCLUSION: The DVV taxonomy acts as a valuable resource for MR visualization in IGS. State-of-the-art solutions are classified, evaluated, validated, and verified to elaborate the process of MR visualization during surgery. The DVV taxonomy benefits end users and supports future improvements in MR.

Affiliations
- Selina Tuladhar, P W C Prasad: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Nada AlSallami: Computer Science Department, Worcester State University, Worcester, Massachusetts, USA
- Abeer Alsadoon: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia; Department of Information Technology, Study Group Australia, Sydney, New South Wales, Australia
- Omar H Alsadoon: Department of Islamic Sciences, Al Iraqia University, Baghdad, Iraq
- Sami Haddad: Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, Sydney, New South Wales, Australia; Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
- Ahmad Alrubaie: Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
11
|
A Review on Mixed Reality: Current Trends, Challenges and Prospects. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10020636] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Currently, new technologies have enabled the design of smart applications that are used as decision-making tools in the problems of daily life. A key issue in designing such applications is the increasing level of user interaction. Mixed reality (MR) is an emerging technology that offers a higher degree of user interaction with the real world than other, similar technologies. Developing an MR application is complicated and depends on several components that have been addressed in previous literature. Beyond extracting such components, a comprehensive study is needed that presents a generic framework comprising all components required to develop MR applications. This review examines the research intensively to derive a comprehensive framework for MR applications. The suggested framework comprises five layers: the first layer considers system components; the second and third layers address architectural issues for component integration; the fourth layer is the application layer that executes the architecture; and the fifth layer is the user interface layer that enables user interaction. The merits of this study are that it can serve as a proper resource for MR basic concepts and that it introduces MR development steps and analytical models, a simulation toolkit, system types, and architecture types, in addition to practical issues for stakeholders, such as the consideration of MR's different domains.
12
Non-linear-Optimization Using SQP for 3D Deformable Prostate Model Pose Estimation in Minimally Invasive Surgery. Adv Intell Syst Comput 2020. [DOI: 10.1007/978-3-030-17795-9_35]
13
Azimi E, Winkler A, Tucker E, Qian L, Doswell J, Navab N, Kazanzides P. Can Mixed-Reality Improve the Training of Medical Procedures? Annu Int Conf IEEE Eng Med Biol Soc 2018:4065-4068. [PMID: 30441249] [DOI: 10.1109/embc.2018.8513387]
Abstract
One cause of preventable death is a lack of proper skills for providing critical care. The conventional course taught to non-medical individuals conveys advanced emergency procedures through a verbal block of instructions in a standardized presentation (for example, an instructional video). In the present study, we evaluate the benefits of using an optical see-through head-mounted display (OST-HMD) for training caregivers in an emergency medical environment. A rich user interface was implemented that provides 3D visual aids, including images, text, and tracked 3D overlays corresponding to each task to be performed. A user study with 20 participants was conducted involving the training of two tasks, where each subject performed one task with the HMD and the other with standard training. Two evaluations were performed, the first immediately after the training and the second three weeks later. Our results indicate that using a mixed reality HMD is more engaging, improves time-on-task, and increases users' confidence in providing emergency and critical care.
14
Meola A, Chang SD. Letter: Navigation-Linked Heads-Up Display in Intracranial Surgery: Early Experience. Oper Neurosurg (Hagerstown) 2019;14:E71-E72. [PMID: 29590481] [DOI: 10.1093/ons/opy048]
Affiliation(s)
- Antonio Meola: Department of Neurosurgery, Stanford University, Stanford, California
- Steven D Chang: Department of Neurosurgery, Stanford University, Stanford, California
15
Joeres F, Schindele D, Luz M, Blaschke S, Russwinkel N, Schostak M, Hansen C. How well do software assistants for minimally invasive partial nephrectomy meet surgeon information needs? A cognitive task analysis and literature review study. PLoS One 2019;14:e0219920. [PMID: 31318919] [PMCID: PMC6638947] [DOI: 10.1371/journal.pone.0219920]
Abstract
INTRODUCTION Intraoperative software assistance is gaining importance in laparoscopic and robot-assisted surgery. Within the user-centred development process of such systems, the first question to be asked is: what information does the surgeon need, and when does he or she need it? In this article, we present an approach to investigating these surgeon information needs for minimally invasive partial nephrectomy and compare these needs with the relevant surgical computer assistance literature. MATERIALS AND METHODS First, we conducted a literature-based hierarchical task analysis of the surgical procedure. This task analysis served as the basis for a qualitative in-depth interview study with nine experienced surgical urologists. The study employed a cognitive task analysis method to elicit surgeons' information needs during minimally invasive partial nephrectomy. Finally, a systematic literature search was conducted to review proposed software assistance solutions for minimally invasive partial nephrectomy. The review focused on what information the solutions present to the surgeon and what phase of the surgery they aim to support. RESULTS The task analysis yielded a workflow description for minimally invasive partial nephrectomy. During the subsequent interview study, we identified three challenging phases of the procedure that may particularly benefit from software assistance: I. hilar and vascular management, II. tumour excision, and III. repair of the renal defects. Across these phases, 25 individual challenges were found that define the surgeon information needs. The literature review identified 34 relevant publications, all of which aim to support the surgeon in hilar and vascular management (phase I) or tumour excision (phase II). CONCLUSION The work presented in this article identified unmet surgeon information needs in minimally invasive partial nephrectomy. Namely, our results suggest that future solutions should address the repair of renal defects (phase III) or put more focus on the renal collecting system as a critical anatomical structure.
Affiliation(s)
- Fabian Joeres: Department of Simulation and Graphics, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Daniel Schindele: Clinic of Urology and Paediatric Urology, University Hospital of Magdeburg, Magdeburg, Germany
- Maria Luz: Department of Simulation and Graphics, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Simon Blaschke: Clinic of Urology and Paediatric Urology, University Hospital of Magdeburg, Magdeburg, Germany
- Nele Russwinkel: Department of Cognitive Modelling in Dynamic Human-Machine Systems, Technische Universität Berlin, Berlin, Germany
- Martin Schostak: Clinic of Urology and Paediatric Urology, University Hospital of Magdeburg, Magdeburg, Germany
- Christian Hansen: Department of Simulation and Graphics, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
16
Lee C, Wong GKC. Virtual reality and augmented reality in the management of intracranial tumors: A review. J Clin Neurosci 2019;62:14-20. [PMID: 30642663] [DOI: 10.1016/j.jocn.2018.12.036]
Abstract
Neurosurgeons are faced with the challenge of planning, performing, and learning complex surgical procedures. With improvements in computational power and advances in visual and haptic display technologies, augmented and virtual surgical environments offer potential benefits for testing in a safe, simulated setting, as well as for improving the management of real-life procedures. This systematic literature review investigates the roles of such advanced computing technology in the neurosurgical subspecialty of intracranial tumor removal. The study focuses on an in-depth discussion of the role of virtual reality and augmented reality in the management of intracranial tumors: the current status, foreseeable challenges, and future developments.
Affiliation(s)
- Chester Lee: Division of Neurosurgery, Department of Surgery, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
- George Kwok Chu Wong: Division of Neurosurgery, Department of Surgery, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
17
Song T, Yang C, Dianat O, Azimi E. Endodontic guided treatment using augmented reality on a head-mounted display system. Healthc Technol Lett 2018. [DOI: 10.1049/htl.2018.5062]
Affiliation(s)
- Tianyu Song: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Chenglin Yang: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Omid Dianat: Division of Endodontics, School of Dentistry, University of Maryland, Baltimore, USA
- Ehsan Azimi: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
18
Abstract
Augmented reality technology offers virtual information in addition to that of the real environment and thus opens new possibilities in various fields. The medical applications of augmented reality are generally concentrated on surgery, including neurosurgery, laparoscopic surgery, and plastic surgery. Augmented reality technology is also widely used in medical education and training. In dentistry, oral and maxillofacial surgery is the primary area of use, where dental implant placement and orthognathic surgery are the most frequent applications. Recent technological advancements are enabling new applications in restorative dentistry, orthodontics, and endodontics. This review briefly summarizes the history, definitions, features, and components of augmented reality technology and discusses its applications and future perspectives in dentistry.
Affiliation(s)
- Ho-Beom Kwon: Department of Prosthodontics, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
- Young-Seok Park: Department of Oral Medicine and Oral Diagnosis, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
- Jung-Suk Han: Department of Prosthodontics, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
19
Wong K, Yee HM, Xavier BA, Grillone GA. Applications of Augmented Reality in Otolaryngology: A Systematic Review. Otolaryngol Head Neck Surg 2018;159:956-967. [PMID: 30126323] [DOI: 10.1177/0194599818796476]
Abstract
OBJECTIVE Augmented reality (AR) is a rapidly developing technology. The aim of this systematic review was to (1) identify and evaluate applications of AR in otolaryngology and (2) examine trends in publication over time. DATA SOURCES PubMed and EMBASE. REVIEW METHODS A systematic review was performed according to PRISMA guidelines without temporal limits. Studies were included if they reported otolaryngology-related applications of AR. Exclusion criteria included non-English articles, abstracts, letters/commentaries, and reviews. A linear regression model was used to compare publication trends over time. RESULTS Twenty-three articles representing 18 AR platforms were included. Publications increased between 1997 and 2018 (P < .05). Twelve studies were level 5 evidence; 9 studies, level 4; 1 study, level 2; and 1 study, level 1. There was no trend toward increased level of evidence over time. The most common subspecialties represented were rhinology (52.2%), head and neck (30.4%), and neurotology (26%). The most common purpose of AR was intraoperative guidance (54.5%), followed by surgical planning (24.2%) and procedural simulations (9.1%). The most common source of visual inputs was endoscopes (50%), followed by eyewear (22.2%) and microscopes (4.5%). Computed tomography was the most common virtual input (83.3%). Optical trackers and fiducial markers were the most common forms of tracking and registration, respectively (38.9% and 44.4%). Mean registration error was 2.48 mm. CONCLUSION AR holds promise in simulation, surgical planning, and perioperative navigation. Although level of evidence remains modest, the role of AR in otolaryngology has grown rapidly and continues to expand.
Affiliation(s)
- Kevin Wong: Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Halina M Yee: Department of Otolaryngology-Head and Neck Surgery, Boston Medical Center, Boston, Massachusetts, USA
- Brian A Xavier: Department of Radiology, University of Illinois College of Medicine at Chicago, Chicago, Illinois, USA
- Gregory A Grillone: Department of Otolaryngology-Head and Neck Surgery, Boston Medical Center, Boston, Massachusetts, USA
20
Fida B, Cutolo F, di Franco G, Ferrari M, Ferrari V. Augmented reality in open surgery. Updates Surg 2018;70:389-400. [PMID: 30006832] [DOI: 10.1007/s13304-018-0567-8]
Abstract
Augmented reality (AR) has successfully provided surgeons with extensive visual information about surgical anatomy to assist them throughout procedures. AR allows surgeons to view the surgical field through a superimposed 3D virtual model of anatomical details. However, open surgery presents new challenges. This study provides a comprehensive overview of the available literature regarding the use of AR in open surgery, in both clinical and simulated settings, with the aim of analyzing current trends and solutions to help developers and end users discuss and understand the benefits and shortcomings of these systems in open surgery. We performed a PubMed search of the available literature, updated to January 2018, using the terms (1) "augmented reality" AND "open surgery", (2) "augmented reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic", (3) "mixed reality" AND "open surgery", and (4) "mixed reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic". The aspects evaluated were the real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies. After removing duplicates and reading the abstracts, a total of 13 relevant studies were chosen. In 1 of the 13 studies, in vitro experiments were performed, while the remaining studies were carried out in clinical settings, including pancreatic, hepatobiliary, and urogenital surgeries. AR in open surgery appears to be a versatile and reliable tool in the operating room. However, some technological limitations need to be addressed before it can be implemented in routine practice.
Affiliation(s)
- Benish Fida: Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Fabrizio Cutolo: Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Gregorio di Franco: General Surgery Unit, Department of Surgery, Translational and New Technologies, University of Pisa, Pisa, Italy
- Mauro Ferrari: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy; Vascular Surgery Unit, Cisanello University Hospital AOUP, Pisa, Italy
- Vincenzo Ferrari: Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
21
Deib G, Johnson A, Unberath M, Yu K, Andress S, Qian L, Osgood G, Navab N, Hui F, Gailloud P. Image guided percutaneous spine procedures using an optical see-through head mounted display: proof of concept and rationale. J Neurointerv Surg 2018;10:1187-1191. [PMID: 29848559] [DOI: 10.1136/neurintsurg-2017-013649]
Abstract
BACKGROUND AND PURPOSE Optical see-through head-mounted displays (OST-HMDs) offer a mixed reality (MixR) experience with unhindered visualization of the procedural site during procedures using high-resolution radiographic imaging. This technical note describes our preliminary experience with percutaneous spine procedures utilizing an OST-HMD as an alternative to traditional angiography suite monitors. METHODS MixR visualization was achieved using the Microsoft HoloLens system. Various spine procedures (vertebroplasty, kyphoplasty, and percutaneous discectomy) were performed on a lumbar spine phantom with commercially available devices. The HMD created a real-time MixR environment by superimposing virtual posteroanterior and lateral views onto the interventionalist's field of view. The procedures were filmed from the operator's perspective, and the videos were reviewed to assess whether key anatomic landmarks and materials were reliably visualized. Dosimetry and procedural times were recorded, and the operator completed a questionnaire following each procedure detailing benefits, limitations, and visualization mode preferences. RESULTS Percutaneous vertebroplasty, kyphoplasty, and discectomy procedures were successfully performed using OST-HMD image guidance on a lumbar spine phantom. Dosimetry and procedural times compared favorably with those of typical procedures. Conventional and MixR visualization modes were equally effective in providing image guidance, with key anatomic landmarks and materials reliably visualized. CONCLUSION This preliminary study demonstrates the feasibility of utilizing OST-HMDs for image guidance in interventional spine procedures. This novel visualization approach may serve as a valuable adjunct tool during minimally invasive percutaneous spine treatment.
Affiliation(s)
- Gerard Deib: Division of Interventional Neuroradiology, The Johns Hopkins Hospital, Baltimore, Maryland, USA
- Alex Johnson: Department of Orthopedic Surgery, The Johns Hopkins Hospital, Baltimore, Maryland, USA
- Mathias Unberath: Computer Aided Medical Procedures, The Johns Hopkins University, Baltimore, Maryland, USA
- Kevin Yu: Computer Aided Medical Procedures, The Johns Hopkins University, Baltimore, Maryland, USA
- Sebastian Andress: Computer Aided Medical Procedures, The Johns Hopkins University, Baltimore, Maryland, USA
- Long Qian: Computer Aided Medical Procedures, The Johns Hopkins University, Baltimore, Maryland, USA
- Gregory Osgood: Department of Orthopedic Surgery, The Johns Hopkins Hospital, Baltimore, Maryland, USA
- Nassir Navab: Computer Aided Medical Procedures, The Johns Hopkins University, Baltimore, Maryland, USA
- Ferdinand Hui: Division of Interventional Neuroradiology, The Johns Hopkins Hospital, Baltimore, Maryland, USA
- Philippe Gailloud: Division of Interventional Neuroradiology, The Johns Hopkins Hospital, Baltimore, Maryland, USA
22
Gerard IJ, Kersten-Oertel M, Drouin S, Hall JA, Petrecca K, De Nigris D, Di Giovanni DA, Arbel T, Collins DL. Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases. J Med Imaging (Bellingham) 2018;5:021210. [PMID: 29392162] [DOI: 10.1117/1.jmi.5.2.021210]
Abstract
We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction with augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient's cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasibility of combining these two technologies and their complementary features. In each case, the AR visualization enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and AR were mitigated in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR to become a useful tool for neurosurgeons, improving intraoperative patient-specific planning through a better understanding of complex 3-D medical imaging data and prolonging the reliable use of IGNS.
Affiliation(s)
- Ian J Gerard: McGill University, Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, Montreal, Québec, Canada
- Marta Kersten-Oertel: Concordia University, PERFORM Centre, Department of Computer Science and Software Engineering, Montreal, Québec, Canada
- Simon Drouin: McGill University, Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, Montreal, Québec, Canada
- Jeffery A Hall: McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada
- Kevin Petrecca: McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada
- Dante De Nigris: McGill University, Centre for Intelligent Machines, Department of Electrical and Computer Engineering, Montreal, Québec, Canada
- Daniel A Di Giovanni: McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada
- Tal Arbel: McGill University, Centre for Intelligent Machines, Department of Electrical and Computer Engineering, Montreal, Québec, Canada
- D Louis Collins: McGill University, Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, Montreal, Québec, Canada; McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada; McGill University, Centre for Intelligent Machines, Department of Electrical and Computer Engineering, Montreal, Québec, Canada
23
Cutolo F, Meola A, Carbone M, Sinceri S, Cagnazzo F, Denaro E, Esposito N, Ferrari M, Ferrari V. A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom. Comput Assist Surg (Abingdon) 2017;22:39-53. [PMID: 28754068] [DOI: 10.1080/24699322.2017.1358400]
Affiliation(s)
- Fabrizio Cutolo: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy; Department of Information Engineering, University of Pisa, Pisa, Italy
- Antonio Meola: Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marina Carbone: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Sara Sinceri: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Ennio Denaro: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Nicola Esposito: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Mauro Ferrari: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy; Department of Vascular Surgery, Pisa University Medical School, Pisa, Italy
- Vincenzo Ferrari: Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy; Department of Information Engineering, University of Pisa, Pisa, Italy
24
Xiao Y, Fortin M, Unsgård G, Rivaz H, Reinertsen I. REtroSpective Evaluation of Cerebral Tumors (RESECT): A clinical database of pre-operative MRI and intra-operative ultrasound in low-grade glioma surgeries. Med Phys 2017;44:3875-3882. [PMID: 28391601] [DOI: 10.1002/mp.12268]
Abstract
PURPOSE The advancement of medical image processing techniques, such as image registration, can effectively help improve the accuracy and efficiency of brain tumor surgeries. However, it is often challenging to validate these techniques with real clinical data because publicly available repositories of such data are rare. ACQUISITION AND VALIDATION METHODS Pre-operative magnetic resonance images (MRI) and intra-operative ultrasound (US) scans were acquired from 23 patients with low-grade gliomas who underwent surgery at St. Olavs University Hospital between 2011 and 2016. Each patient was scanned with Gadolinium-enhanced T1w and T2-FLAIR MRI protocols to reveal the anatomy and pathology, and series of B-mode ultrasound images were obtained before, during, and after tumor resection to track the surgical progress and tissue deformation. Retrospectively, corresponding anatomical landmarks were identified across US images of different surgical stages, and between MRI and US; these can be used to validate image registration algorithms. The quality of landmark identification was assessed with intra- and inter-rater variability. DATA FORMAT AND ACCESS In addition to co-registered MRIs, each series of US scans is provided as a reconstructed 3D volume. All images are accessible in MINC2 and NIFTI formats, and the anatomical landmarks are annotated in MNI tag files. Both the imaging data and the corresponding landmarks are available online as the RESECT database at https://archive.norstore.no (search for "RESECT"). POTENTIAL IMPACT The proposed database provides real, high-quality, multi-modal clinical data to validate and compare image registration algorithms, which can potentially benefit the accuracy and efficiency of brain tumor resection. Furthermore, the database can also be used to test other image processing methods and neuro-navigation software platforms.
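Landmark pairs such as those in this database are commonly used to quantify registration accuracy as a mean target registration error (TRE). The sketch below is illustrative only and is not the RESECT evaluation code; the landmark coordinates are invented for the example:

```python
import numpy as np

def mean_tre(fixed_pts, moving_pts, transform):
    """Mean target registration error (mm) between corresponding landmarks.

    fixed_pts, moving_pts: (N, 3) arrays of corresponding landmark
    coordinates (e.g., MRI vs. reconstructed 3D ultrasound).
    transform: 4x4 homogeneous matrix mapping moving -> fixed space.
    """
    moving_h = np.c_[moving_pts, np.ones(len(moving_pts))]  # homogeneous coords
    mapped = (transform @ moving_h.T).T[:, :3]              # apply transform
    return float(np.mean(np.linalg.norm(mapped - fixed_pts, axis=1)))

# Hypothetical landmarks: the moving set is offset by -2 mm along x,
# and the transform translates +2 mm along x to undo it.
fixed = np.array([[10.0, 0.0, 0.0], [0.0, 20.0, 5.0], [3.0, 4.0, 12.0]])
T = np.eye(4)
T[0, 3] = 2.0
moving = fixed.copy()
moving[:, 0] -= 2.0
print(mean_tre(fixed, moving, T))  # → 0.0 for a perfect registration
```

With the identity transform instead of T, the same call returns the residual 2.0 mm offset, which is how a registration algorithm's output would be scored against the annotated landmarks.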
Affiliation(s)
- Yiming Xiao: PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada; Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Maryse Fortin: PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada; Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Geirmund Unsgård: Department of Neurosurgery, St. Olavs University Hospital, Trondheim, NO-7006, Norway; Department of Neuroscience, Norwegian University of Science and Technology, Trondheim, NO-7491, Norway; Norwegian National Advisory Unit for Ultrasound and Image Guided Therapy, St. Olavs University Hospital, Trondheim, NO-7006, Norway
- Hassan Rivaz: PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada; Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Ingerid Reinertsen: Department of Medical Technology, SINTEF, Trondheim, NO-7465, Norway; Norwegian National Advisory Unit for Ultrasound and Image Guided Therapy, St. Olavs University Hospital, Trondheim, NO-7006, Norway
25
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66-90. [DOI: 10.1016/j.media.2017.01.007]
26
Qian L, Barthel A, Johnson A, Osgood G, Kazanzides P, Navab N, Fuerst B. Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display. Int J Comput Assist Radiol Surg 2017;12:901-910. [PMID: 28343301] [DOI: 10.1007/s11548-017-1564-y]
Abstract
PURPOSE Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios, which benefit from using an object-anchored 2D-display visualizing medical information. METHODS Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representatives of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance. RESULTS Statistical analysis demonstrates that Microsoft HoloLens performs best among the three tested OST-HMDs, in terms of contrast perception, task load, and frame rate, while ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario. CONCLUSIONS With ever more OST-HMDs appearing on the market, the proposed criteria could be used in the evaluation of their suitability for mixed reality surgical intervention. Currently, Microsoft HoloLens may be more suitable than ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as object-anchored 2D-display during interventions.
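The study ranks OST-HMDs against criteria such as text readability, contrast perception, task load, frame rate, and system lag. The abstract does not give the scoring procedure, so the following is an illustration only: a min-max-normalized, weighted ranking over those criteria, with invented device scores and equal weights that are not the study's data:

```python
# Criterion -> weight; after normalization, a higher score is better.
CRITERIA = {
    "text_readability": 0.2,
    "contrast_perception": 0.2,
    "task_load": 0.2,     # inverted below: lower raw load is better
    "frame_rate": 0.2,
    "system_lag": 0.2,    # inverted below: lower raw lag (ms) is better
}
LOWER_IS_BETTER = {"task_load", "system_lag"}

def rank(devices):
    """devices: name -> {criterion: raw score}. Returns names, best first."""
    scores = {}
    for name, raw in devices.items():
        total = 0.0
        for crit, weight in CRITERIA.items():
            vals = [d[crit] for d in devices.values()]
            lo, hi = min(vals), max(vals)
            norm = 0.5 if hi == lo else (raw[crit] - lo) / (hi - lo)
            if crit in LOWER_IS_BETTER:
                norm = 1.0 - norm
            total += weight * norm
        scores[name] = total
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical devices and scores, for illustration only.
hmds = {
    "hmd_a": {"text_readability": 8, "contrast_perception": 9,
              "task_load": 3, "frame_rate": 60, "system_lag": 20},
    "hmd_b": {"text_readability": 8, "contrast_perception": 6,
              "task_load": 5, "frame_rate": 30, "system_lag": 90},
}
print(rank(hmds))  # → ['hmd_a', 'hmd_b']
```

Any real comparison would substitute measured values (e.g., lag in milliseconds, frame rate in Hz) and weights appropriate to the clinical scenario.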
27
Black D, Hettig J, Luz M, Hansen C, Kikinis R, Hahn H. Auditory feedback to support image-guided medical needle placement. Int J Comput Assist Radiol Surg 2017; 12:1655-1663. [PMID: 28213646] [DOI: 10.1007/s11548-017-1537-1]
Abstract
PURPOSE During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician's visual reliance on the screen, this work proposes an auditory feedback method as a stand-alone method or to support visual feedback for placing the navigated medical instrument, in this case a needle. METHODS An auditory synthesis model using pitch comparison and stereo panning parameter mapping was developed to augment or replace visual feedback for navigated needle placement. In contrast to existing approaches which augment but still require a visual display, this method allows view-free needle placement. An evaluation with 12 novice participants compared both auditory and combined audiovisual feedback against existing visual methods. RESULTS Using combined audiovisual display, participants show similar task completion times and report similar subjective workload and accuracy while viewing the screen less compared to using the conventional visual method. The auditory feedback leads to higher task completion times and subjective workload compared to both combined and visual feedback. CONCLUSION Audiovisual feedback shows promising results and establishes a basis for applying auditory feedback as a supplement to visual information to other navigated interventions, especially those for which viewing a patient is beneficial or necessary.
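The pitch-comparison and stereo-panning mapping described in this abstract can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation; the function name, parameter ranges, and frequency choices are all assumptions:

```python
def needle_audio_params(lateral_offset_mm, depth_to_target_mm,
                        max_offset_mm=20.0, max_depth_mm=100.0,
                        base_freq_hz=220.0, target_freq_hz=880.0):
    """Map needle-tracking errors to simple audio parameters.

    Returns (pan, frequency_hz): pan in [-1, 1] encodes left/right
    deviation from the planned path, and the tone frequency rises
    toward a fixed reference pitch as the tip approaches the target,
    so the operator places the needle by comparing the two pitches.
    """
    # Stereo panning encodes lateral deviation, clamped to [-1, 1].
    pan = max(-1.0, min(1.0, lateral_offset_mm / max_offset_mm))
    # Progress toward the target: 0 at max depth, 1 at the target.
    progress = 1.0 - max(0.0, min(1.0, depth_to_target_mm / max_depth_mm))
    # Interpolate frequency geometrically so equal insertion progress
    # corresponds to equal musical intervals.
    frequency_hz = base_freq_hz * (target_freq_hz / base_freq_hz) ** progress
    return pan, frequency_hz
```

Feeding these two parameters to any audio synthesis backend gives a view-free guidance signal: a centered pan with the moving pitch matching the reference pitch indicates the tip is on target.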
28
29
Augmented reality in neurosurgery: a systematic review. Neurosurg Rev 2016; 40:537-548. [PMID: 27154018] [DOI: 10.1007/s10143-016-0732-9]
Abstract
Neuronavigation has become an essential neurosurgical tool in pursuing minimal invasiveness and maximal safety, even though it has several technical limitations. Augmented reality (AR) neuronavigation is a significant advance, providing a real-time updated 3D virtual model of anatomical details, overlaid on the real surgical field. Currently, only a few AR systems have been tested in a clinical setting, and the aim of this work is to review such devices. We performed a PubMed search of reports restricted to human studies of in vivo applications of AR in any neurosurgical procedure using the search terms "Augmented reality" and "Neurosurgery." Eligibility assessment was performed independently by two reviewers in an unblinded standardized manner. The systems were qualitatively evaluated on the basis of the following: neurosurgical subspecialty of application, pathology of treated lesions and lesion locations, real data source, virtual data source, tracking modality, registration technique, visualization processing, display type, and perception location. Eighteen studies published between 1996 and September 30, 2015, were included. The AR systems were grouped by real data source: microscope (8), hand- or head-held cameras (4), direct patient view (2), endoscope (1), X-ray fluoroscopy (1), and head-mounted display (1). A total of 195 lesions were treated: 75 (38.46%) were neoplastic, 77 (39.48%) neurovascular, 1 (0.51%) hydrocephalus, and 42 (21.53%) undetermined. The current literature confirms that AR is a reliable and versatile tool for performing minimally invasive approaches in a wide range of neurosurgical diseases, although prospective randomized studies are not yet available and technical improvements are needed.
30
Precise 3D/2D calibration between a RGB-D sensor and a C-arm fluoroscope. Int J Comput Assist Radiol Surg 2016; 11:1385-1395. [PMID: 26811080] [DOI: 10.1007/s11548-015-1347-2]
Abstract
PURPOSE Calibration and registration are the first steps for augmented reality and mixed reality applications. In the medical field, the calibration between an RGB-D camera and a C-arm fluoroscope is a new topic which introduces challenges. METHOD A convenient and efficient calibration phantom is designed by combining the traditional calibration object of X-ray images with a checkerboard plane. After the localization of the 2D marker points in the X-ray images and the corresponding 3D points from the RGB-D images, we calculate the projection matrix from the RGB-D sensor coordinates to the X-ray image, instead of estimating the extrinsic and intrinsic parameters simultaneously. VALIDATION In order to evaluate the effect of every step of our calibration process, we performed five experiments combining the different steps leading to the calibration. We also compared our calibration method to Tsai's method to evaluate the advancement of our solution. Finally, we simulated the process of estimating the rotation movement of the RGB-D camera using MATLAB and demonstrate that calculating the projection matrix can reduce the angle error of the rotation. RESULTS An RMS reprojection error of 0.5 mm is achieved using our calibration method, which is promising for surgical applications. Our calibration method is more accurate than Tsai's method. Lastly, the simulation result shows that using a projection matrix yields a lower rotation-estimation error than using intrinsic and extrinsic parameters. CONCLUSIONS We designed and evaluated a 3D/2D calibration method for the combination of an RGB-D camera and a C-arm fluoroscope.
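Estimating a single projection matrix directly from paired 3D sensor points and 2D X-ray points, rather than factoring it into intrinsic and extrinsic parameters, is essentially a direct linear transform (DLT). A minimal sketch, assuming noise-free correspondences; the function names are hypothetical, not from the paper:

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Direct linear transform: fit a 3x4 matrix P such that
    [u, v, 1]^T ~ P @ [X, Y, Z, 1]^T from >= 6 non-degenerate
    3D-2D correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two linear constraints on P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The least-squares solution (up to scale) is the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Apply P to a 3D point and dehomogenize to 2D image coordinates."""
    h = P @ np.append(point_3d, 1.0)
    return h[:2] / h[2]
```

With exact correspondences the estimated matrix reproduces the projections to numerical precision; with noisy measurements, the same SVD solution minimizes the algebraic residual, which is consistent with the sub-millimetre reprojection error the paper reports.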
31
Application of a New Wearable Augmented Reality Video See-Through Display to Aid Percutaneous Procedures in Spine Surgery. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-40651-0_4]
32
Kersten-Oertel M, Gerard I, Drouin S, Mok K, Sirhan D, Sinclair DS, Collins DL. Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg 2015; 10:1823-1836. [DOI: 10.1007/s11548-015-1163-8]
33
Abhari K, Baxter JSH, Chen ECS, Khan AR, Peters TM, de Ribaupierre S, Eagleson R. Training for planning tumour resection: augmented reality and human factors. IEEE Trans Biomed Eng 2014; 62:1466-1477. [PMID: 25546854] [DOI: 10.1109/tbme.2014.2385874]
Abstract
Planning surgical interventions is a complex task, demanding a high degree of perceptual, cognitive, and sensorimotor skill to reduce intra- and post-operative complications. This process requires spatial reasoning to coordinate between the preoperatively acquired medical images and patient reference frames. In the case of neurosurgical interventions, traditional approaches to planning tend to focus on providing a means for visualizing medical images, but rarely support transformation between different spatial reference frames. Thus, surgeons often rely on their previous experience and intuition as their sole guide when performing mental transformations. In the case of junior residents, this may lead to longer operation times or an increased chance of error under additional cognitive demands. In this paper, we introduce a mixed augmented-/virtual-reality system to facilitate training for planning a common neurosurgical procedure, brain tumour resection. The proposed system is designed and evaluated with human factors explicitly in mind, alleviating the difficulty of mental transformation. Our results indicate that, compared to conventional planning environments, the proposed system greatly improves the nonclinicians' performance, independent of the sensorimotor tasks performed. Furthermore, the use of the proposed system by clinicians resulted in a significant reduction in the time needed to perform clinically relevant tasks. These results demonstrate the role of mixed-reality systems in assisting residents to develop the spatial reasoning skills needed for planning brain tumour resection, improving patient outcomes.
34
Kersten-Oertel M, Chen SJS, Collins DL. An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery. IEEE Trans Vis Comput Graph 2014; 20:391-403. [PMID: 24434220] [DOI: 10.1109/tvcg.2013.240]
Abstract
Cerebral vascular images obtained through angiography are used by neurosurgeons for diagnosis, surgical planning, and intraoperative guidance. The intricate branching of the vessels and furcations, however, make the task of understanding the spatial three-dimensional layout of these images challenging. In this paper, we present empirical studies on the effect of different perceptual cues (fog, pseudo-chromadepth, kinetic depth, and depicting edges) both individually and in combination on the depth perception of cerebral vascular volumes and compare these to the cue of stereopsis. Two experiments with novices and one experiment with experts were performed. The results with novices showed that the pseudo-chromadepth and fog cues were stronger cues than that of stereopsis. Furthermore, the addition of the stereopsis cue to the other cues did not improve relative depth perception in cerebral vascular volumes. In contrast to novices, the experts also performed well with the edge cue. In terms of both novice and expert subjects, pseudo-chromadepth and fog allow for the best relative depth perception. By using such cues to improve depth perception of cerebral vasculature, we may improve diagnosis, surgical planning, and intraoperative guidance.
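The pseudo-chromadepth cue that performed best in this study encodes depth as color, with nearer structures rendered red and farther ones blue. A hypothetical sketch of such a mapping, assuming a simple linear red-to-blue ramp (the study's actual transfer function may differ):

```python
def pseudo_chromadepth_color(depth, d_min, d_max):
    """Map a depth value to an (R, G, B) color in [0, 1], ramping
    linearly from red (nearest, d_min) to blue (farthest, d_max).
    Nearer-is-red mimics chromostereopsis, where red hues are
    perceived as closer than blue hues."""
    # Normalize depth to [0, 1] and clamp out-of-range values.
    t = (depth - d_min) / (d_max - d_min)
    t = max(0.0, min(1.0, t))
    return (1.0 - t, 0.0, t)
```

Applying this per vessel voxel or per vertex during volume rendering gives the relative-depth ordering cue that the study found stronger than stereopsis for cerebral vasculature.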
35
Design and validation of an augmented reality system for laparoscopic surgery in a real environment. Biomed Res Int 2013; 2013:758491. [PMID: 24236293] [PMCID: PMC3819885] [DOI: 10.1155/2013/758491]
Abstract
Purpose. This work presents the protocol carried out in the development and validation of an augmented reality system which was installed in an operating theatre to help surgeons with trocar placement during laparoscopic surgery. The purpose of this validation is to demonstrate the improvements that this system can provide to the field of medicine, particularly surgery. Method. Two experiments that were noninvasive for both the patient and the surgeon were designed. The augmented reality system was used in one experiment; the other served as the control, in which the system was not used. The type of operation selected for all cases was a cholecystectomy, owing to its low degree of complexity and complications before, during, and after the surgery. The technique used in the placement of trocars was the French technique, but the results can be extrapolated to any other technique and operation. Results and Conclusion. Four clinicians and ninety-six measurements obtained from twenty-four patients (randomly assigned to each experiment) were involved in these experiments. The final results show improvements in accuracy and variability of 33% and 63%, respectively, in comparison with traditional methods, demonstrating that the use of an augmented reality system offers advantages for trocar placement in laparoscopic surgery.
36
Simpson AL, Ma B, Vasarhelyi EM, Borschneck DP, Ellis RE, James Stewart A. Computation and visualization of uncertainty in surgical navigation. Int J Med Robot 2013; 10:332-343. [PMID: 24123606] [DOI: 10.1002/rcs.1541]
Abstract
BACKGROUND Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data is presented as though it were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. METHODS The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons has examined the effect of uncertainty visualization on surgical performance with pedicle screw insertion, a procedure highly sensitive to uncertain data. RESULTS It is shown that surgical performance (time to insert screw, degree of breach of pedicle, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. CONCLUSIONS Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display.
37
Kersten-Oertel M, Jannin P, Collins DL. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph 2013; 37:98-112. [PMID: 23490236] [DOI: 10.1016/j.compmedimag.2013.01.009]
Abstract
This paper presents a review of the state of the art of visualization in mixed reality image guided surgery (IGS). We used the DVV (data, visualization processing, view) taxonomy to classify a large unbiased selection of publications in the field. The goal of this work was not only to give an overview of current visualization methods and techniques in IGS but more importantly to analyze the current trends and solutions used in the domain. In surveying the current landscape of mixed reality IGS systems, we identified a strong need to assess which of the many possible data sets should be visualized at particular surgical steps, to focus on novel visualization processing techniques and interface solutions, and to evaluate new systems.
38
Thompson S, Penney G, Billia M, Challacombe B, Hawkes D, Dasgupta P. Design and evaluation of an image-guidance system for robot-assisted radical prostatectomy. BJU Int 2013; 111:1081-1090. [DOI: 10.1111/j.1464-410x.2012.11692.x]
39
Christopher LA, William A, Cohen-Gadol AA. Future Directions in 3-Dimensional Imaging and Neurosurgery. Neurosurgery 2013; 72 Suppl 1:131-138. [DOI: 10.1227/neu.0b013e318270d9c0]
40
Changing paradigms in radioguided surgery and intraoperative imaging: the GOSTT concept. Eur J Nucl Med Mol Imaging 2011; 39:1-3. [DOI: 10.1007/s00259-011-1951-5]