1. Begagić E, Bečulić H, Pugonja R, Memić Z, Balogun S, Džidić-Krivić A, Milanović E, Salković N, Nuhović A, Skomorac R, Sefo H, Pojskić M. Augmented Reality Integration in Skull Base Neurosurgery: A Systematic Review. Medicina (Kaunas) 2024; 60:335. [PMID: 38399622] [PMCID: PMC10889940] [DOI: 10.3390/medicina60020335]
Abstract
Background and Objectives: To investigate the role of augmented reality (AR) in skull base (SB) neurosurgery. Materials and Methods: Following PRISMA methodology, the PubMed and Scopus databases were searched for data on AR integration in SB surgery. Results: Most of the 19 included studies were conducted in the United States (42.1%) and published within the last five years (77.8%). Study settings included phantom skull models (n = 6; 31.6%), human cadavers (n = 3; 15.8%), and human patients (n = 10; 52.6%). Surgical modality was specified in 18 of the 19 studies, with microscopic surgery predominant (n = 10; 52.6%). Most studies used CT alone as the imaging data source (n = 9; 47.4%), and optical tracking was the most common tracking modality (n = 9; 47.4%). The target registration error (TRE) ranged from 0.55 to 10.62 mm. Conclusion: Despite variation in TRE values, the studies reported successful outcomes and minimal complications. Challenges such as device practicality and data security were acknowledged, but the use of low-cost AR devices suggests broader feasibility.
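
Target registration error (TRE), the accuracy measure reported across these studies, is conventionally computed as the distance between corresponding target points after the image-to-patient registration transform is applied. A minimal sketch in Python with hypothetical point data; no particular study's registration pipeline is implied:

```python
import numpy as np

def target_registration_error(targets_image, targets_patient, R, t):
    """Per-target and mean TRE (mm) for a rigid registration.

    targets_image:   (N, 3) target coordinates in image space (mm)
    targets_patient: (N, 3) corresponding ground-truth coordinates
                     in patient/phantom space (mm)
    R, t:            rotation (3x3) and translation (3,) mapping
                     image space into patient space
    """
    mapped = targets_image @ R.T + t          # apply the registration
    errors = np.linalg.norm(mapped - targets_patient, axis=1)
    return errors, errors.mean()

# Hypothetical example: identity registration with a 1 mm offset in z
targets = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
errors, mean_tre = target_registration_error(
    targets, targets + np.array([0.0, 0.0, 1.0]), np.eye(3), np.zeros(3))
print(errors, mean_tre)  # -> [1. 1.] 1.0
```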
Affiliation(s)
- Emir Begagić, Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Hakija Bečulić, Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina; Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Ragib Pugonja, Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Zlatan Memić, Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Simon Balogun, Division of Neurosurgery, Department of Surgery, Obafemi Awolowo University Teaching Hospitals Complex, Ilesa Road PMB 5538, Ile-Ife 220282, Nigeria
- Amina Džidić-Krivić, Department of Neurology, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Elma Milanović, Neurology Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Naida Salković, Department of General Medicine, School of Medicine, University of Tuzla, Univerzitetska 1, 75000 Tuzla, Bosnia and Herzegovina
- Adem Nuhović, Department of General Medicine, School of Medicine, University of Sarajevo, Univerzitetska 1, 71000 Sarajevo, Bosnia and Herzegovina
- Rasim Skomorac, Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina; Department of Surgery, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Haso Sefo, Neurosurgery Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Mirza Pojskić, Department of Neurosurgery, University Hospital Marburg, Baldingerstr., 35033 Marburg, Germany

2.
Abstract
BACKGROUND In recent years, numerous innovative yet challenging surgeries, such as minimally invasive procedures, have introduced an overwhelming number of new technologies, increasing surgeons' cognitive load and potentially diluting their attention. Cognitive support technologies (CSTs) are being developed to reduce surgeons' cognitive load and minimize errors. Despite strong demand, the field still lacks a systematic review. METHODS PubMed, Web of Science, and IEEE Xplore were searched for literature published up to May 21, 2021. Studies that aimed at reducing the cognitive load of surgeons were included. Studies containing an experimental trial with real patients and real surgeons were prioritized, although phantom and animal studies were also included. Major outcomes assessed included surgical error, anatomical localization accuracy, total procedural time, and patient outcome. RESULTS A total of 37 studies were included. Overall, surgical performance was better with CSTs than with traditional methods. Most studies reported decreased error rates and increased efficiency. In terms of accuracy, most CSTs identified anatomical markers with over 90% accuracy and an error margin below 5 mm. Most studies reported a decrease in surgical time, although some differences were not statistically significant. DISCUSSION CSTs have been shown to reduce the mental workload of surgeons. However, the limited ergonomic design of current CSTs has hindered their widespread use in the clinical setting. Overall, more clinical data from actual patients are needed to provide concrete evidence before CSTs can be implemented ubiquitously.
Affiliation(s)
- Zhong Shi Zhang, Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Yun Wu, Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Bin Zheng, Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada

3. Kos TM, Colombo E, Bartels LW, Robe PA, van Doormaal TPC. Evaluation Metrics for Augmented Reality in Neurosurgical Preoperative Planning, Surgical Navigation, and Surgical Treatment Guidance: A Systematic Review. Oper Neurosurg (Hagerstown) 2023:01787389-990000000-01007. [PMID: 38146941] [PMCID: PMC11008635] [DOI: 10.1227/ons.0000000000001009]
Abstract
BACKGROUND AND OBJECTIVE Recent years have seen advances in the development of augmented reality (AR) technologies for preoperative visualization, surgical navigation, and intraoperative guidance in neurosurgery. However, proving the added value of AR in clinical practice is challenging, partly because of a lack of standardized evaluation metrics. We performed a systematic review to provide an overview of the reported evaluation metrics for AR technologies in neurosurgical practice and to establish a foundation for the assessment and comparison of such technologies. METHODS PubMed, Embase, and Cochrane were searched systematically for publications on the assessment of AR for cranial neurosurgery on September 22, 2022. The findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. RESULTS The systematic search yielded 830 publications; 114 were screened in full text, and 80 were included for analysis. Among the included studies, 5% dealt with preoperative visualization using AR, with user perception as the most frequently reported metric. The majority (75%) researched AR technology for surgical navigation, with registration accuracy, clinical outcome, and time measurements as the most frequently reported metrics. The remaining 20% studied the use of AR for intraoperative guidance, with registration accuracy, task outcome, and user perception as the most frequently reported metrics. CONCLUSION For quality benchmarking of AR technologies in neurosurgery, evaluation metrics should be specific to the risk profile and clinical objectives of the technology. A key focus should be on using validated questionnaires to assess user perception; ensuring clear and unambiguous reporting of registration accuracy, precision, robustness, and system stability; and accurately measuring task performance in clinical studies. We provide an overview suggesting which evaluation metrics to use per AR application and innovation phase, aiming to improve the assessment of the added value of AR for neurosurgical practice and to facilitate its integration into the clinical workflow.
Affiliation(s)
- Tessa M Kos, Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Elisa Colombo, Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- L Wilbert Bartels, Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Pierre A Robe, Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P C van Doormaal, Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland; Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands

4. Qian L, Song T, Unberath M, Kazanzides P. AR-Loupe: Magnified Augmented Reality by Combining an Optical See-Through Head-Mounted Display and a Loupe. IEEE Trans Vis Comput Graph 2022; 28:2550-2562. [PMID: 33170780] [DOI: 10.1109/tvcg.2020.3037284]
Abstract
Head-mounted loupes can increase the user's visual acuity for observing the details of an object. Optical see-through head-mounted displays (OST-HMDs), on the other hand, can provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, which combines the advantages of loupes and OST-HMDs to offer augmented reality within the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, the Magic Leap One, and binocular Galilean magnifying loupes with customized 3D-printed attachments. We model the combination of the user's eye, the screen of the OST-HMD, and the optical loupe as a pinhole camera. Calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. Users were able to achieve sub-millimeter accuracy (0.82 mm) on average, significantly smaller than with normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With the increased size of real objects through optical magnification and the registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
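
The calibration described here models eye, HMD screen, and loupe jointly as a pinhole camera, with SPAAM-style point correspondences used to solve the projection. A minimal sketch of the underlying direct linear transform step, using synthetic correspondences; the actual Stereo-SPAAM procedure, per-eye handling, and interactive view segmentation are not reproduced:

```python
import numpy as np

def spaam_projection(points_3d, points_2d):
    """Estimate a 3x4 pinhole projection matrix by DLT.

    SPAAM-style calibration: each correspondence pairs a tracked 3D
    point (tracker/world space) with the 2D screen location the user
    aligned it to (pixels). Needs at least 6 correspondences.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # right singular vector, smallest sigma

def project(P, point_3d):
    """Project a 3D point through P, returning pixel coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]

# Synthetic check: points projected by a known camera are recovered
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 5.],
                   [0., 0., 1., 0.]])
pts3d = np.random.default_rng(0).uniform(-50, 50, (10, 3)) + [0, 0, 500]
pts2d = np.array([project(P_true, p) for p in pts3d])
P_est = spaam_projection(pts3d, pts2d)
print(np.allclose(project(P_est, pts3d[0]), pts2d[0], atol=1e-6))  # True
```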

5. Kalaiarasan K, Prathap L, Ayyadurai M, Subhashini P, Tamilselvi T, Avudaiappan T, Infant Raj I, Alemayehu Mamo S, Mezni A. Clinical Application of Augmented Reality in Computerized Skull Base Surgery. Evid Based Complement Alternat Med 2022; 2022:1335820. [PMID: 35600956] [PMCID: PMC9117015] [DOI: 10.1155/2022/1335820]
Abstract
Skull base surgery involves the manipulation of small and complex structures in the domains of otology, rhinology, neurosurgery, and maxillofacial surgery, with critical nerves and vessels in close proximity. Augmented reality is an emerging technology that may transform the skull base approach by supplying vital anatomical and navigational information brought together in a single display. However, awareness and acceptance of the potential of augmented reality systems in the skull base region remain low. This article examines the usefulness of augmented reality systems in skull base surgery and highlights the obstacles that current technology faces, along with prospective solutions. A technical perspective on the distinct strategies used in developing an augmented reality framework is also offered. Recent products reflect growing interest in augmented reality systems that may enable safer and more practical procedures; nevertheless, several concerns must be addressed before these systems can be broadly incorporated into routine practice.
Affiliation(s)
- K. Kalaiarasan, Department of Information Technology, M. Kumarasamy College of Engineering, Karur, India
- Lavanya Prathap, Department of Anatomy, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu 600077, India
- M. Ayyadurai, SG, Institute of ECE, Saveetha School of Engineering, SIMATS, Chennai, Tamil Nadu 600077, India
- P. Subhashini, Department of Computer Science and Engineering, J.N.N Institute of Engineering, Kannigaipair, Tamil Nadu 601102, India
- T. Tamilselvi, Department of Computer Science and Engineering, Panimalar Institute of Technology, Varadarajapuram, Tamil Nadu 600123, India
- T. Avudaiappan, Computer Science and Engineering, K. Ramakrishnan College of Technology, Trichy 621112, India
- I. Infant Raj, Department of Computer Science and Engineering, K. Ramakrishnan College of Engineering, Trichy, India
- Samson Alemayehu Mamo, Department of Electrical and Computer Engineering, Faculty of Electrical and Biomedical Engineering, Institute of Technology, Hawassa University, Awasa, Ethiopia
- Amine Mezni, Department of Chemistry, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

6.
Abstract
OBJECTIVES A mixed reality (MR) headset that enables three-dimensional (3D) visualization of interactive holograms anchored to specific points in physical space was developed for use with lateral skull base anatomy. The objectives of this study were to: 1) develop an augmented reality platform using the headset for visualization of temporal bone structures, and 2) measure the accuracy of the platform as an image guidance system. METHODS A combination of semiautomatic and manual segmentation was used to generate 3D reconstructions of soft tissue and bony anatomy of cadaver heads and temporal bones from 2D computed tomography images. A mixed reality platform was developed in C# to generate interactive 3D holograms displayed in the HoloLens headset. Accuracy of visual surface registration was determined by the target registration error between seven predefined points on a 3D holographic skull and a 3D printed model. RESULTS Interactive 3D holograms of soft tissue, bony anatomy, and internal ear structures of cadaveric models were generated and visualized in the MR headset. A software user interface was developed to allow user control of the virtual images through gaze, voice, and gesture commands. Visual surface point matching registration was used to align and anchor holograms to physical objects. The average target registration error of the system was 5.76 ± 0.54 mm. CONCLUSION In this article, we demonstrate that an MR headset can display interactive 3D anatomic structures of the temporal bone overlaid on physical models. This technology has the potential to be used as an image guidance tool during anatomic dissection and lateral skull base surgery.
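
The surface point matching registration described here, aligning predefined points on the holographic skull with the printed model, is commonly solved as a least-squares rigid (Kabsch) problem, with TRE then computed from the residuals as in the sketch under entry 1. A minimal sketch, assuming corresponding point arrays are available:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch): maps src onto dst.

    src, dst: (N, 3) corresponding points, e.g. landmarks picked on a
    holographic skull and on the 3D-printed model (N >= 3).
    Returns rotation R (3x3) and translation t (3,).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # guard against reflection
    t = dst_c - R @ src_c
    return R, t
```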

7. Rahman R, Wood ME, Qian L, Price CL, Johnson AA, Osgood GM. Head-Mounted Display Use in Surgery: A Systematic Review. Surg Innov 2019; 27:88-100. [DOI: 10.1177/1553350619871787]
Abstract
Purpose. We analyzed the literature to determine (1) the surgically relevant applications for which head-mounted display (HMD) use is reported; (2) the types of HMD most commonly reported; and (3) the surgical specialties in which HMD use is reported. Methods. The PubMed, Embase, Cochrane Library, and Web of Science databases were searched through August 27, 2017, for publications describing HMD use during surgically relevant applications. We identified 120 relevant English-language, non-opinion publications for inclusion. HMD types were categorized as “heads-up” (nontransparent HMD display and direct visualization of the real environment), “see-through” (visualization of the HMD display overlaid on the real environment), or “non–see-through” (visualization of only the nontransparent HMD display). Results. HMDs were used for image guidance and augmented reality (70 publications), data display (63 publications), communication (34 publications), and education/training (18 publications). See-through HMDs were described in 55 publications, heads-up HMDs in 41 publications, and non–see-through HMDs in 27 publications. Google Glass, a see-through HMD, was the most frequently used model, reported in 32 publications. The specialties with the highest frequency of published HMD use were urology (20 publications), neurosurgery (17 publications), and unspecified surgical specialty (20 publications). Conclusion. Image guidance and augmented reality were the most commonly reported applications for which HMDs were used. See-through HMDs were the most commonly reported type used in surgically relevant applications. Urology and neurosurgery were the specialties with greatest published HMD use.
Affiliation(s)
- Rafa Rahman, The Johns Hopkins University, Baltimore, MD, USA
- Long Qian, The Johns Hopkins University, Baltimore, MD, USA

8. Hussain R, Lalande A, Guigou C, Bozorg Grayeli A. Contribution of Augmented Reality to Minimally Invasive Computer-Assisted Cranial Base Surgery. IEEE J Biomed Health Inform 2019; 24:2093-2106. [DOI: 10.1109/jbhi.2019.2954003]

9. Yoon JW, Chen RE, Kim EJ, Akinduro OO, Kerezoudis P, Han PK, Si P, Freeman WD, Diaz RJ, Komotar RJ, Pirris SM, Brown BL, Bydon M, Wang MY, Wharen RE, Quinones-Hinojosa A. Augmented reality for the surgeon: Systematic review. Int J Med Robot 2018; 14:e1914. [DOI: 10.1002/rcs.1914]
Affiliation(s)
- Jang W. Yoon, Department of Neurological Surgery, Mayo Clinic, Jacksonville, Florida, USA
- Robert E. Chen, Emory University School of Medicine, Atlanta, Georgia, USA; Georgia Institute of Technology, Atlanta, Georgia, USA
- Phong Si, Georgia Institute of Technology, Atlanta, Georgia, USA
- Roberto J. Diaz, Department of Neurosurgery and Neurology, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Ricardo J. Komotar, Department of Neurological Surgery, University of Miami Miller School of Medicine, University of Miami Hospital, University of Miami Brain Tumor Initiative, Miami, Florida, USA
- Stephen M. Pirris, Department of Neurological Surgery, Mayo Clinic, Jacksonville, Florida, USA; St. Vincent's Spine and Brain Institute, Jacksonville, Florida, USA
- Benjamin L. Brown, Department of Neurological Surgery, Mayo Clinic, Jacksonville, Florida, USA
- Mohamad Bydon, Department of Neurological Surgery, Mayo Clinic, Rochester, Minnesota, USA
- Michael Y. Wang, Department of Neurological Surgery, University of Miami Miller School of Medicine, University of Miami Hospital, University of Miami Brain Tumor Initiative, Miami, Florida, USA
- Robert E. Wharen, Department of Neurological Surgery, Mayo Clinic, Jacksonville, Florida, USA

10. Wager M, Rigoard P, Bouyer C, Baudiffier V, Stal V, Bataille B, Gil R, Du Boisgueheneuc F. Operating environment for awake brain surgery – Choice of tests. Neurochirurgie 2017; 63:150-157. [DOI: 10.1016/j.neuchi.2016.10.002]

11. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]

12. Towards Augmented Reality Guided Craniotomy Planning in Tumour Resections. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-43775-0_15]

13. Azagury DE, Dua MM, Barrese JC, Henderson JM, Buchs NC, Ris F, Cloyd JM, Martinie JB, Razzaque S, Nicolau S, Soler L, Marescaux J, Visser BC. Image-guided surgery. Curr Probl Surg 2015; 52:476-520. [PMID: 26683419] [DOI: 10.1067/j.cpsurg.2015.10.001]
Affiliation(s)
- Dan E Azagury, Department of Surgery, Stanford University School of Medicine, Stanford, CA
- Monica M Dua, Department of Surgery, Stanford University School of Medicine, Stanford, CA
- James C Barrese, Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Jaimie M Henderson, Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Nicolas C Buchs, Department of Surgery, University Hospital of Geneva, Clinic for Visceral and Transplantation Surgery, Geneva, Switzerland
- Frederic Ris, Department of Surgery, University Hospital of Geneva, Clinic for Visceral and Transplantation Surgery, Geneva, Switzerland
- Jordan M Cloyd, Department of Surgery, Stanford University School of Medicine, Stanford, CA
- John B Martinie, Department of Surgery, Carolinas Healthcare System, Charlotte, NC
- Sharif Razzaque, Department of Surgery, Carolinas Healthcare System, Charlotte, NC
- Stéphane Nicolau, IRCAD (Research Institute Against Digestive Cancer), Strasbourg, France
- Luc Soler, IRCAD (Research Institute Against Digestive Cancer), Strasbourg, France
- Jacques Marescaux, IRCAD (Research Institute Against Digestive Cancer), Strasbourg, France
- Brendan C Visser, Department of Surgery, Stanford University School of Medicine, Stanford, CA

14. Kersten-Oertel M, Gerard I, Drouin S, Mok K, Sirhan D, Sinclair DS, Collins DL. Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg 2015; 10:1823-36. [DOI: 10.1007/s11548-015-1163-8]

15. Augmented Reality for Specific Neurovascular Surgical Tasks. Augmented Environments for Computer-Assisted Interventions 2015. [DOI: 10.1007/978-3-319-24601-7_10]

16. Stewart N, Lock G, Hopcraft A, Kanesarajah J, Coucher J. Stereoscopy in diagnostic radiology and procedure planning: Does stereoscopic assessment of volume-rendered CT angiograms lead to more accurate characterisation of cerebral aneurysms compared with traditional monoscopic viewing? J Med Imaging Radiat Oncol 2014; 58:172-82. [DOI: 10.1111/1754-9485.12146]
Affiliation(s)
- Nikolas Stewart, Princess Alexandra Hospital, Queensland Health, Brisbane, Queensland, Australia
- Gregory Lock, Princess Alexandra Hospital, Queensland Health, Brisbane, Queensland, Australia
- Anthony Hopcraft, Princess Alexandra Hospital, Queensland Health, Brisbane, Queensland, Australia; Centre for Military and Veterans' Health, The University of Queensland, Brisbane, Queensland, Australia
- Jeeva Kanesarajah, Centre for Military and Veterans' Health, The University of Queensland, Brisbane, Queensland, Australia
- John Coucher, Princess Alexandra Hospital, Queensland Health, Brisbane, Queensland, Australia

17. Kersten-Oertel M, Jannin P, Collins DL. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph 2013; 37:98-112. [PMID: 23490236] [DOI: 10.1016/j.compmedimag.2013.01.009]
Abstract
This paper presents a review of the state of the art of visualization in mixed reality image guided surgery (IGS). We used the DVV (data, visualization processing, view) taxonomy to classify a large unbiased selection of publications in the field. The goal of this work was not only to give an overview of current visualization methods and techniques in IGS but more importantly to analyze the current trends and solutions used in the domain. In surveying the current landscape of mixed reality IGS systems, we identified a strong need to assess which of the many possible data sets should be visualized at particular surgical steps, to focus on novel visualization processing techniques and interface solutions, and to evaluate new systems.
Affiliation(s)
- Marta Kersten-Oertel, Department of Biomedical Engineering, McGill University, McConnell Brain Imaging Center, Montreal Neurological Institute, Montréal, Canada

18. Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph 2012; 18:332-352. [PMID: 21383411] [DOI: 10.1109/tvcg.2011.50]
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
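
The DVV taxonomy's three axes lend themselves to a simple structured record when classifying systems, as in the paper's survey of 17 publications. A minimal sketch with hypothetical field values; the taxonomy's actual sub-categories are defined in the paper itself:

```python
from dataclasses import dataclass

@dataclass
class DVVClassification:
    """Classify a mixed reality IGS system along the DVV axes."""
    system: str
    data: list[str]            # analyzed imaging/tracking inputs
    visualization: list[str]   # processing, e.g. volume rendering
    view: str                  # display, e.g. OST-HMD, external screen
    validated: bool = False    # whether the components were validated

example = DVVClassification(
    system="hypothetical AR microscope",
    data=["preoperative MRA", "tracked microscope pose"],
    visualization=["surface rendering", "transparency overlay"],
    view="microscope ocular injection",
)
```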
Affiliation(s)
- Marta Kersten-Oertel, McConnell Brain Imaging Center at the Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada

19.
Abstract
Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. However, minimally invasive surgery increases the operative difficulty, since depth perception is usually dramatically reduced, the field of view is limited, and the sense of touch is transmitted through an instrument. These drawbacks can currently be reduced by computer technology guiding the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), augmented reality (AR) can increase the surgeon's intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. 3D visualization can be performed directly from the medical image, without any pre-processing step, thanks to volume rendering; better results are obtained with surface rendering after organ and pathology delineation and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology. They also show the currently limited interactivity due to soft organ movement and interaction between surgical instruments and organs. Current automatic AR systems show the feasibility of the approach but still rely on specific and expensive equipment that is not available in clinical routine. Moreover, they are not yet robust enough, owing to the high complexity of developing real-time registration that takes organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that AR will become a standard requirement for future computer-assisted surgical oncology. In this article, we explain the concept of AR and its principles. We then review existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we discuss future evolutions and the issues that must still be tackled so that this technology can be seamlessly integrated into the operating room.
Affiliation(s)
- Stéphane Nicolau, IRCAD/EITS, Hôpitaux Universitaires de Strasbourg, Digestive and Endocrine Surgery, 1 Place de l'Hôpital, 67091 Strasbourg Cedex, France

20. Wang A, Mirsattari SM, Parrent AG, Peters TM. Fusion and visualization of intraoperative cortical images with preoperative models for epilepsy surgical planning and guidance. Comput Aided Surg 2011; 16:149-60. [PMID: 21668293] [DOI: 10.3109/10929088.2011.585805]
Abstract
OBJECTIVE During epilepsy surgery it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. Augmented Reality (AR) provides a solution for combining the real environment with virtual models. However, AR usually requires the use of specialized displays, and its effectiveness in the surgery still needs to be evaluated. The objective of this research was to develop an alternative approach to provide enhanced visualization by fusing a direct (photographic) view of the surgical field with the 3D patient model during image guided epilepsy surgery. MATERIALS AND METHODS We correlated the preoperative plan with the intraoperative surgical scene, first by a manual landmark-based registration and then by an intensity-based perspective 3D-2D registration for camera pose estimation. The 2D photographic image was then texture-mapped onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ direct volume rendering to obtain a perspective view of the brain image using GPU-accelerated ray-casting. The algorithm was validated by a phantom study and also in the clinical environment with a neuronavigation system. RESULTS In the phantom experiment, the 3D Mean Registration Error (MRE) was 2.43 ± 0.32 mm with a success rate of 100%. In the clinical experiment, the 3D MRE was 5.15 ± 0.49 mm with 2D in-plane error of 3.30 ± 1.41 mm. A clinical application of our fusion method for enhanced and augmented visualization for integrated image and functional guidance during neurosurgery is also presented. CONCLUSIONS This paper presents an alternative approach to a sophisticated AR environment for assisting in epilepsy surgery, whereby a real intraoperative scene is mapped onto the surface model of the brain. In contrast to the AR approach, this method needs no specialized display equipment. Moreover, it requires minimal changes to existing systems and workflow, and is therefore well suited to the OR environment. In the phantom and in vivo clinical experiments, we demonstrate that the fusion method can achieve a level of accuracy sufficient for the requirements of epilepsy surgery.
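
The camera pose step described here can be approximated, for the landmark-based initialization, with a standard perspective-n-point solver. A minimal sketch using OpenCV with synthetic, hypothetical landmark data; the paper's intensity-based perspective 3D-2D refinement and GPU-accelerated ray-casting are not reproduced:

```python
import numpy as np
import cv2

# Hypothetical setup: six 3D cortical landmarks from the preoperative
# model (mm), an assumed pinhole camera, and a known ground-truth pose
# used here only to synthesize the 2D photo coordinates.
obj_pts = np.array([[10., 35., 5.], [42., 18., 12.], [25., 50., 8.],
                    [60., 40., 3.], [5., 10., 20.], [55., 5., 15.]])
K = np.array([[1000., 0., 320.], [0., 1000., 240.], [0., 0., 1.]])
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([5., -10., 400.])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)

# Landmark-based camera pose estimate (the paper follows this with an
# intensity-based 3D-2D registration refinement, not reproduced here).
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # rotation used when texture-mapping the photo
```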
Affiliation(s)
- A Wang, Imaging Research Laboratories, Robarts Research Institute, London, Ontario

21. Cleary K, Peters TM. Image-guided interventions: technology review and clinical applications. Annu Rev Biomed Eng 2010; 12:119-42. [PMID: 20415592] [DOI: 10.1146/annurev-bioeng-070909-105249]
Abstract
Image-guided interventions are medical procedures that use computer-based systems to provide virtual image overlays to help the physician precisely visualize and target the surgical site. This field has been greatly expanded by the advances in medical imaging and computing power over the past 20 years. This review begins with a historical overview and then describes the component technologies of tracking, registration, visualization, and software. Clinical applications in neurosurgery, orthopedics, and the cardiac and thoracoabdominal areas are discussed, together with a description of an evolving technology named Natural Orifice Transluminal Endoscopic Surgery (NOTES). As the trend toward minimally invasive procedures continues, image-guided interventions will play an important role in enabling new procedures, while improving the accuracy and success of existing approaches. Despite this promise, the role of image-guided systems must be validated by clinical trials facilitated by partnerships between scientists and physicians if this field is to reach its full potential.
Affiliation(s)
- Kevin Cleary, Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC 20007, USA

22. Kockro RA, Tsai YT, Ng I, Hwang P, Zhu C, Agusanto K, Hong LX, Serra L. DEX-Ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery 2010; 65:795-807; discussion 807-8. [PMID: 19834386] [DOI: 10.1227/01.neu.0000349918.36700.1c]
Abstract
OBJECTIVE We developed an augmented reality system that enables intraoperative image guidance by using 3-dimensional (3D) graphics overlaid on a video stream. We call this system DEX-Ray and report on its development and the initial intraoperative experience in 12 cases. METHODS DEX-Ray consists of a tracked handheld probe that integrates a lipstick-size video camera. The camera looks over the probe's tip into the surgical field. The camera's video stream is augmented with coregistered, multimodality 3D graphics and landmarks obtained during neurosurgical planning with 3D workstations. The handheld probe functions both as a navigation device, to view and point, and as an interaction device, to adjust the 3D graphics. We tested the system's accuracy in the laboratory and evaluated it intraoperatively in a series of tumor and vascular cases. RESULTS DEX-Ray provided an accurate, real-time video-based augmented reality display. The system could be seamlessly integrated into the surgical workflow. The see-through effect revealing 3D information below the surgically exposed surface proved to be of significant value, especially during the macroscopic phase of an operation, providing easily understandable structural navigational information. Navigation in deep and narrow surgical corridors was limited by the camera's resolution and light sensitivity. CONCLUSION The system was perceived as an improved navigational experience because the augmented see-through effect allowed direct understanding of the surgical anatomy beyond the visible surface and direct guidance toward surgical targets.
Affiliation(s)
- Ralf A Kockro, Department of Neurosurgery, University Hospital Zürich, Zürich, Switzerland

23. Linte CA, Moore J, Wiles AD, Wedlake C, Peters TM. Virtual reality-enhanced ultrasound guidance: A novel technique for intracardiac interventions. Comput Aided Surg 2008; 13:82-94. [DOI: 10.3109/10929080801951160]

25. Linte CA, White J, Eagleson R, Guiraudon GM, Peters TM. Virtual and Augmented Medical Imaging Environments: Enabling Technology for Minimally Invasive Cardiac Interventional Guidance. IEEE Rev Biomed Eng 2010; 3:25-47. [DOI: 10.1109/rbme.2010.2082522]

26. Figl M, Rueckert D, Hawkes D, Casula R, Hu M, Pedro O, Zhang DP, Penney G, Bello F, Edwards P. Image guidance for robotic minimally invasive coronary artery bypass. Comput Med Imaging Graph 2010; 34:61-8. [DOI: 10.1016/j.compmedimag.2009.08.002]

27. High resolution stereoscopic volume visualization of the mouse arginine vasopressin system. J Neurosci Methods 2009; 187:41-5. [PMID: 20036282] [DOI: 10.1016/j.jneumeth.2009.12.011]
Abstract
New imaging technologies have increased our capabilities to resolve three-dimensional structures from microscopic samples. Laser-scanning confocal microscopy is particularly amenable to this task because it allows the researcher to optically section biological samples, creating three-dimensional image volumes. However, a number of problems arise when studying neural tissue samples, including data set size, physical scanning restrictions, volume registration, and display. To deal with these issues, we undertook large-scale confocal scanning microscopy in order to visualize neural networks spanning multiple tissue sections. We demonstrate a technique to create and visualize a three-dimensional digital reconstruction of the hypothalamic arginine vasopressin neuroendocrine system in the male mouse. The generated three-dimensional data covered a volume of tissue measuring 4.35 mm × 2.6 mm × 1.4 mm with a voxel resolution of 1.2 µm. The dataset matrix comprised 3508 × 2072 × 700 pixels and was a composite of 19,600 optical sections. Once reconstructed into a single volume, the data are suitable for interactive stereoscopic projection. Stereoscopic imaging provides greater insight into, and understanding of, the spatial relationships in neural tissue's inherently three-dimensional structure. This technique provides a model approach for the development of data sets that can provide new and informative volume-rendered views of brain structures. This study affirms the value of stereoscopic volume-based visualization in neuroscience research and education, and the feasibility of creating large-scale, high resolution, interactive three-dimensional reconstructions of neural tissue from microscopic imagery.

28. Bichlmeier C, Heining SM, Feuerstein M, Navab N. The virtual mirror: a new interaction paradigm for augmented reality environments. IEEE Trans Med Imaging 2009; 28:1498-1510. [PMID: 19336291] [DOI: 10.1109/tmi.2009.2018622]
Abstract
Medical augmented reality (AR) has been widely discussed within the medical imaging and computer aided surgery communities, and different systems for exemplary medical applications have been proposed, some with promising results. One major issue still hindering AR technology from regular use in medical applications is the interaction between the physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance with keyboard and mouse, are not adequate for interacting with visualized medical 3-D imaging data in an AR environment. This paper introduces the concept of a tangible, controllable Virtual Mirror for medical AR applications. This concept intuitively augments the direct view of the surgeon with all desired views on volumetric medical imaging data registered with the operation site, without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potential of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer aided interventions.
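
Geometrically, a virtual mirror reflects the registered 3-D scene across the mirror plane, which is a Householder reflection. A minimal sketch in Python with a hypothetical mirror pose; the paper's tangible interaction, tracking, and rendering pipeline are not reproduced:

```python
import numpy as np

def reflect_about_plane(points, plane_point, plane_normal):
    """Mirror 3D points across a plane (the core of a virtual mirror).

    points:       (N, 3) scene points to reflect
    plane_point:  any point on the mirror plane
    plane_normal: normal of the mirror plane (need not be unit length)
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n        # signed distance to plane
    return points - 2.0 * d[:, None] * n  # Householder reflection

# Hypothetical mirror: the y-z plane through the origin
pts = np.array([[1.0, 2.0, 3.0]])
print(reflect_about_plane(pts, np.zeros(3), np.array([1.0, 0.0, 0.0])))
# -> [[-1.  2.  3.]]
```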
Affiliation(s)
- Christoph Bichlmeier, Department of Computer Science, Technische Universität München, München, Germany

29. Vikal S, U-Thainual P, Carrino JA, Iordachita I, Fischer GS, Fichtinger G. Perk Station – Percutaneous surgery training and performance measurement platform. Comput Med Imaging Graph 2009; 34:19-32. [PMID: 19539446] [DOI: 10.1016/j.compmedimag.2009.05.001]
Abstract
MOTIVATION Image-guided percutaneous (through the skin) needle-based surgery has become part of routine clinical practice in performing procedures such as biopsies, injections and therapeutic implants. A novice physician typically performs needle interventions under the supervision of a senior physician; a slow and inherently subjective training process that lacks objective, quantitative assessment of the surgical skill and performance. Shortening the learning curve and increasing procedural consistency are important factors in assuring high-quality medical care. METHODS This paper describes a laboratory validation system, called Perk Station, for standardized training and performance measurement under different assistance techniques for needle-based surgical guidance systems. The initial goal of the Perk Station is to assess and compare different techniques: 2D image overlay, biplane laser guide, laser protractor and conventional freehand. The main focus of this manuscript is the planning and guidance software system developed on the 3D Slicer platform, a free, open source software package designed for visualization and analysis of medical image data. RESULTS The prototype Perk Station has been successfully developed, the associated needle insertion phantoms were built, and the graphical user interface was fully implemented. The system was inaugurated in undergraduate teaching and a wide array of outreach activities. Initial results, experiences, ongoing activities and future plans are reported.

30. Multipurpose Navigation System-Based Concept for Surgical Template Production. J Oral Maxillofac Surg 2009; 67:1113-20. [DOI: 10.1016/j.joms.2008.12.028]

31. Inside the beating heart: an in vivo feasibility study on fusing pre- and intra-operative imaging for minimally invasive therapy. Int J Comput Assist Radiol Surg 2008; 4:113-23. [PMID: 20033609] [DOI: 10.1007/s11548-008-0278-6]
Abstract
OBJECTIVE An interventional system for minimally invasive cardiac surgery was developed for therapy delivery inside the beating heart, in the absence of direct vision. METHOD The system provides a virtual reality (VR) environment that integrates pre-operative imaging, real-time intra-operative guidance using 2D trans-esophageal ultrasound, and models of the surgical tools tracked using a magnetic tracking system. Detailed 3D dynamic cardiac models were synthesized from high-resolution pre-operative MR data and registered within the intra-operative imaging environment. A feature-based registration technique was employed to fuse pre- and intra-operative data during in vivo intracardiac procedures on porcine subjects. RESULTS The method was found to be suitable for in vivo applications, as it relies on easily identifiable landmarks and hence ensures satisfactory alignment of pre- and intra-operative anatomy in the region of interest (4.8 mm RMS alignment accuracy) within the VR environment. Our initial experience in translating this work to guide intracardiac interventions, such as mitral valve implantation and atrial septal defect repair, demonstrated the feasibility of the methods. CONCLUSION Surgical guidance was achieved in the absence of direct vision and with no exposure to ionizing radiation, so the virtual environment constitutes a feasible candidate for performing various off-pump intracardiac interventions.

32. Towards a Medical Virtual Reality Environment for Minimally Invasive Cardiac Surgery. Lecture Notes in Computer Science 2008. [DOI: 10.1007/978-3-540-79982-5_1]

33. Widmann G. Image-guided surgery and medical robotics in the cranial area. Biomed Imaging Interv J 2007; 3:e11. [PMID: 21614255] [PMCID: PMC3097655] [DOI: 10.2349/biij.3.1.e11]
Abstract
Surgery in the cranial area includes complex anatomic situations with high-risk structures and high demands for functional and aesthetic results. Conventional surgery requires that the surgeon transfers complex anatomic and surgical planning information, using spatial sense and experience. The surgical procedure depends entirely on the manual skills of the operator. The development of image-guided surgery provides new revolutionary opportunities by integrating presurgical 3D imaging and intraoperative manipulation. Augmented reality, mechatronic surgical tools, and medical robotics may continue to progress in surgical instrumentation, and ultimately, surgical care. The aim of this article is to review and discuss state-of-the-art surgical navigation and medical robotics, image-to-patient registration, aspects of accuracy, and clinical applications for surgery in the cranial area.
Affiliation(s)
- G Widmann, Department of Radiology, Innsbruck Medical University, Anichstr., Austria

34.
Abstract
Contemporary imaging modalities can now provide the surgeon with high quality three- and four-dimensional images depicting not only normal anatomy and pathology, but also vascularity and function. A key component of image-guided surgery (IGS) is the ability to register multi-modal pre-operative images to each other and to the patient. The other important component of IGS is the ability to track instruments in real time during the procedure and to display them as part of a realistic model of the operative volume. Stereoscopic, virtual- and augmented-reality techniques have been implemented to enhance the visualization and guidance process. For the most part, IGS relies on the assumption that the pre-operatively acquired images used to guide the surgery accurately represent the morphology of the tissue during the procedure. This assumption does not necessarily hold, and so intra-operative real-time imaging using interventional MRI, ultrasound, video and electrophysiological recordings is often employed to ameliorate this situation. Although IGS is now in extensive routine clinical use in neurosurgery and is gaining ground in other surgical disciplines, many drawbacks remain to be overcome before it can be employed in more general minimally-invasive procedures. This review traces the roots of IGS in neurosurgery, provides examples of its use outside the brain, discusses the infrastructure required for successful implementation of IGS approaches, and outlines the challenges that must be overcome for IGS to advance further.
Affiliation(s)
- Terry M Peters, Robarts Research Institute, University of Western Ontario, PO Box 5015, 100 Perth Drive, London, ON N6A 5K8, Canada

35. Paul P, Fleig O, Jannin P. Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging 2005; 24:1500-11. [PMID: 16279086] [DOI: 10.1109/tmi.2005.857029]
Abstract
Displaying anatomical and physiological information derived from preoperative medical images in the operating room is critical in image-guided neurosurgery. This paper presents a new approach, referred to as augmented virtuality (AV), for displaying intraoperative views of the operative field over three-dimensional (3-D) multimodal preoperative images on an external screen during surgery. A calibrated stereovision system was set up between the surgical microscope and the binocular tubes. Three-dimensional surface meshes of the operative field were then generated using stereopsis. These reconstructed 3-D surface meshes were directly displayed, without any additional geometrical transform, over preoperative images of the patient in physical space. Performance evaluation was carried out using a physical skull phantom. Accuracy of the reconstruction method itself was shown to be within 1 mm (median: 0.76 ± 0.27 mm), whereas accuracy of the overall approach was within 3 mm (median: 2.29 ± 0.59 mm), including the image-to-physical space registration error. We report the results of six surgical cases where AV was used in conjunction with augmented reality. AV not only enabled vision beyond the cortical surface but also gave an overview of the surgical area. This approach facilitated understanding of the spatial relationship between the operative field and the preoperative multimodal 3-D images of the patient.
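
The stereopsis step here recovers depth from a calibrated stereo pair; for rectified cameras the core relation is depth = f·B/d, for disparity d, focal length f, and baseline B. A minimal sketch with hypothetical values; the paper's microscope calibration and surface mesh generation are not reproduced:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Depth (mm) from disparity for a rectified stereo pair.

    disparity_px: horizontal pixel offset of a matched feature
    focal_px:     focal length in pixels (from calibration)
    baseline_mm:  distance between the two camera centers (mm)
    """
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: f = 2000 px, 6 mm baseline (stereo microscope scale)
depth = disparity_to_depth(disparity_px=40.0, focal_px=2000.0,
                           baseline_mm=6.0)
print(depth)  # -> 300.0 mm
```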
Affiliation(s)
- Perrine Paul, Laboratoire IDM, Faculté de Médecine, 35043 Rennes Cedex, France

36. Figl M, Ede C, Hummel J, Wanschitz F, Ewers R, Bergmann H, Birkfellner W. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus. IEEE Trans Med Imaging 2005; 24:1492-9. [PMID: 16279085] [DOI: 10.1109/tmi.2005.856746]
Abstract
Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) has been considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization have been implemented, including video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars, as in the case of the system presented in this paper, this procedure can become increasingly difficult, since precise camera calibration is required for every focus and zoom position. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR over the full range of available zoom and focus settings. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and dedicated calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig, we were also able to assess the dynamic latency when viewing augmentation graphics on a moving target: the spatial displacement due to latency was at most 1.1-2.8 mm, and the disparity between the true object and its computed overlay corresponded to a latency of 0.1 s. We conclude that the automatic calibration method presented here is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment.
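
The latency figures connect through the simple kinematic relation between overlay lag and target speed; taking the reported displacement range together with the 0.1 s latency gives the implied target speeds. This is an inference from the numbers in the abstract, not a figure stated in it:

$$ d = v\,\tau \quad\Rightarrow\quad v = d/\tau = (1.1\ \text{to}\ 2.8\ \text{mm})/(0.1\ \text{s}) = 11\ \text{to}\ 28\ \text{mm/s} $$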
Affiliation(s)
- Michael Figl, Center for Biomedical Engineering and Physics, Medical University of Vienna, A-1090 Vienna, Austria

37. Appendices. J Laparoendosc Adv Surg Tech A 2005. [DOI: 10.1089/lap.2005.15.563]

38. Hoffmann J, Westendorff C, Gomez-Roman G, Reinert S. Accuracy of navigation-guided socket drilling before implant installation compared to the conventional free-hand method in a synthetic edentulous lower jaw model. Clin Oral Implants Res 2005; 16:609-14. [PMID: 16164469] [DOI: 10.1111/j.1600-0501.2005.01153.x]
Abstract
In this study, the three-dimensional (3D) accuracy of navigation-guided (NG) socket drilling before implant installation was compared to the conventional free-hand (CF) method in a synthetic edentulous lower jaw model. The drillings were performed by two surgeons with different years of working experience, and the inter-individual outcome was assessed. NG drillings were performed using an optical computerized tomography (CT)-based navigation system; CF drillings were performed using a surgical template. The coordinates of the drilled sockets were determined on the basis of CT scans. A total of n = 224 drillings was evaluated. Inter-individual differences in terms of the surgeons' years of work experience were not statistically significant. The mean deviation of the CF drilled sockets (n = 112) in the vestibulo-oral and mesio-distal directions was 11.2 ± 5.6° (range: 4.1-25.3°). For the NG drilled sockets (n = 112), the mean deviation was 4.2 ± 1.8° (range: 2.3-11.5°). The mean distance to the mandibular canal was 1.1 ± 0.6 mm (range: 0.1-2.3 mm) for CF drilled sockets and 0.7 ± 0.5 mm (range: 0.1-1.8 mm) for NG drilled sockets. The differences between the two methods were highly significant (P < 0.01). A potential benefit of image-data-based navigation in implant surgery is discussed against the background of cost-effectiveness.
Affiliation(s)
- Jürgen Hoffmann, Department of Oral and Maxillofacial Surgery, Tübingen University Hospital, Tübingen, Germany