1. Allen D, Peters T, Chen ECS. Enhancing surgical navigation: a robust hand-eye calibration method for the Microsoft HoloLens 2. Int J Comput Assist Radiol Surg 2024. PMID: 39259481. DOI: 10.1007/s11548-024-03250-8.
Abstract
PURPOSE Optical see-through head-mounted displays can seamlessly integrate virtual content with the real world through a transparent lens and an optical combiner. Although their potential for use in surgical settings has been explored, their clinical translation remains sparse in the current literature, largely because of their limited tracking capabilities and the need to manually align virtual representations of objects with their real-world counterparts. METHODS We propose a simple and robust hand-eye calibration process for the depth camera of the Microsoft HoloLens 2, using a tracked surgical stylus fitted with infrared-reflective spheres as the calibration tool. RESULTS Using a Monte Carlo simulation and a paired-fiducial registration algorithm, we show that a calibration accuracy of 1.65 mm can be achieved with as few as six fiducial points. We also present heuristics for optimizing the accuracy of the calibration. The suitability of our calibration method for a clinical setting is validated through a user study, in which users achieved a mean calibration accuracy of 1.67 mm in an average time of 42 s. CONCLUSION This work enables real-time hand-eye calibration for the Microsoft HoloLens 2 without any manual alignment process. Using this framework, existing surgical navigation systems employing optical or electromagnetic tracking can easily be incorporated into an augmented reality environment with a high degree of accuracy.
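Paired-fiducial (paired-point) rigid registration of the kind cited in this abstract is conventionally solved in closed form with an SVD (Arun's method). The sketch below is illustrative only, not the authors' implementation: it assumes NumPy and uses synthetic fiducial points, and the computed mean fiducial error stands in for the calibration accuracy figure reported above.

```python
import numpy as np

def paired_point_registration(src, dst):
    """Closed-form rigid registration (SVD / Arun's method).

    src, dst: (N, 3) arrays of corresponding fiducial points.
    Returns rotation R and translation t such that dst ~= R @ src + t.
    """
    src_c = src - src.mean(axis=0)        # demean both point clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Illustrative check with six synthetic fiducials (as few as the study used)
rng = np.random.default_rng(0)
src = rng.random((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ R_true.T + t_true

R, t = paired_point_registration(src, dst)
# Mean fiducial registration error after alignment (zero for noise-free data)
fre = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
```

With noise-free correspondences the recovered transform is exact; in practice, tracking noise on the stylus tip is what drives the ~1.65 mm figures reported above.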
Affiliation(s)
- Daniel Allen
- School of Biomedical Engineering, Western University, London, Ontario, Canada
- Robarts Research Institute, Western University, London, Ontario, Canada
- Terry Peters
- School of Biomedical Engineering, Western University, London, Ontario, Canada
- Robarts Research Institute, Western University, London, Ontario, Canada
- Department of Medical Biophysics, Western University, London, Ontario, Canada
- Elvis C S Chen
- School of Biomedical Engineering, Western University, London, Ontario, Canada
- Robarts Research Institute, Western University, London, Ontario, Canada
- Department of Medical Biophysics, Western University, London, Ontario, Canada
- Department of Electrical and Computer Engineering, Western University, London, Ontario, Canada
2. Qi Z, Jin H, Wang Q, Gan Z, Xiong R, Zhang S, Liu M, Wang J, Ding X, Chen X, Zhang J, Nimsky C, Bopp MHA. The Feasibility and Accuracy of Holographic Navigation with Laser Crosshair Simulator Registration on a Mixed-Reality Display. Sensors (Basel) 2024; 24:896. PMID: 38339612. PMCID: PMC10857152. DOI: 10.3390/s24030896.
Abstract
Addressing the high costs and complexity of conventional neurosurgical navigation systems, this study explores the feasibility and accuracy of a simplified, cost-effective mixed reality navigation (MRN) system based on a laser crosshair simulator (LCS). A new automatic registration method was developed, featuring coplanar laser emitters and a recognizable target pattern, and the workflow was implemented on the Microsoft HoloLens 2 for practical application. The system's accuracy was assessed using life-sized 3D-printed head phantoms based on computed tomography (CT) or magnetic resonance imaging (MRI) data from 19 patients (female/male: 7/12, average age: 54.4 ± 18.5 years) with intracranial lesions. Six to seven CT/MRI-visible scalp markers served as reference points per case. The LCS-MRN's accuracy was evaluated through landmark-based and lesion-based analyses, using metrics such as target registration error (TRE) and Dice similarity coefficient (DSC). The system demonstrated immersive visualization of intracranial structures in all cases. Analysis of 124 landmarks showed a TRE of 3.0 ± 0.5 mm, consistent across various surgical positions. The DSC of 0.83 ± 0.12 correlated significantly with lesion volume (Spearman rho = 0.813, p < 0.001). The LCS-MRN system is therefore a viable tool for neurosurgical planning, notable for its low user dependency, cost-efficiency, and accuracy, with prospects for future clinical application enhancements.
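The two accuracy metrics this abstract reports, TRE and DSC, have simple standard definitions that are easy to state precisely. A minimal sketch, using NumPy and small illustrative arrays rather than the study's data:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary volumes.

    a, b: boolean arrays of equal shape (e.g., lesion segmentations).
    DSC = 2|A intersect B| / (|A| + |B|), in [0, 1].
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def target_registration_error(p_navigated, p_true):
    """Euclidean distance between a navigated point and its ground truth."""
    return float(np.linalg.norm(np.asarray(p_navigated) - np.asarray(p_true)))

# Illustrative example: two overlapping 'lesion' masks on a 4x4 grid
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # 8 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # 8 voxels, 4 shared
dsc = dice_coefficient(a, b)                        # 2*4 / (8+8) = 0.5
tre = target_registration_error([1.0, 2.0, 2.0], [1.0, 2.0, 5.0])  # 3.0 mm
```

In the study, TRE is averaged over the scalp landmarks per case, and DSC compares the holographically projected lesion against its image-space ground truth.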
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Haitao Jin
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- NCO School, Army Medical University, Shijiazhuang 050081, China
- Qun Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Zhichao Gan
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Ruochu Xiong
- Department of Neurosurgery, Division of Medicine, Graduate School of Medical Sciences, Kanazawa University, Takara-machi 13-1, Kanazawa 920-8641, Japan
- Shiyu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Minghang Liu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Jingyue Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Xinyu Ding
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Xiaolei Chen
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Jiashu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Miriam H. A. Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
3. Colombo E, Regli L, Esposito G, Germans MR, Fierstra J, Serra C, Sebök M, van Doormaal T. Mixed Reality for Cranial Neurosurgical Planning: A Single-Center Applicability Study With the First 107 Subsequent Holograms. Oper Neurosurg (Hagerstown) 2023; 26:01787389-990000000-01013. PMID: 38156882. PMCID: PMC11008664. DOI: 10.1227/ons.0000000000001033.
Abstract
BACKGROUND AND OBJECTIVES Mixed reality (MxR) benefits neurosurgery by improving anatomic visualization, surgical planning, and training. We aimed to validate the usability of a dedicated certified system for this purpose. METHODS All cases prepared with MxR in our center in 2022 were prospectively collected. Holographic rendering was achieved using a fully automatic algorithm incorporated in the MxR application, combined with contrast-based semiautomatic rendering and/or manual segmentation where necessary. Hologram segmentation times were documented. Visualization during surgical preparation (defined as the interval between finalized anesthesiological induction and sterile draping) used MxR glasses and direct streaming to a side screen. Surgical preparation times were compared with a matched historical cohort from 2021. Modifications of the surgical approach after three-dimensional (3D) visualization were noted. Usability was assessed with a Usefulness, Satisfaction, and Ease of use (USE) questionnaire administered to 7 neurosurgeons with more than 3 months of experience with the system. RESULTS One hundred seven neurosurgical cases prepared with a 3D hologram were collected. Surgical indications were oncologic (63/107, 59%), cerebrovascular (27/107, 25%), and carotid endarterectomy (17/107, 16%). Mean hologram segmentation time was 39.4 ± 20.4 minutes. Average surgical preparation time was 48.0 ± 17.3 minutes for MxR cases vs 52 ± 17 minutes in the matched 2021 cohort without MxR (mean difference 4 minutes, 95% CI 1.75-9.75). Based on the 3D hologram, the surgical approach was modified in 3 cases. Good usability was reported by 57% of the users. CONCLUSION The perioperative use of 3D holograms improved direct anatomic visualization without significantly increasing intraoperative surgical preparation time, and the usability of the system was adequate. Further technological development is necessary to improve the automatic algorithms and reduce preparation time by circumventing manual and semiautomatic segmentation. Future studies should quantify the potential benefits for teaching and training and the impact on surgical and functional outcomes.
Affiliation(s)
- Elisa Colombo
- Department of Neurosurgery, Clinical Neuroscience Center, Universität Zürich, Universitätsspital Zürich, Zurich, Switzerland
- Luca Regli
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Giuseppe Esposito
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Menno R. Germans
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Jorn Fierstra
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Carlo Serra
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Martina Sebök
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Tristan van Doormaal
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
4. Kos TM, Colombo E, Bartels LW, Robe PA, van Doormaal TPC. Evaluation Metrics for Augmented Reality in Neurosurgical Preoperative Planning, Surgical Navigation, and Surgical Treatment Guidance: A Systematic Review. Oper Neurosurg (Hagerstown) 2023; 26:01787389-990000000-01007. PMID: 38146941. PMCID: PMC11008635. DOI: 10.1227/ons.0000000000001009.
Abstract
BACKGROUND AND OBJECTIVE Recent years have seen rapid development of augmented reality (AR) technologies for preoperative visualization, surgical navigation, and intraoperative guidance in neurosurgery. However, proving added value for AR in clinical practice is challenging, partly because of a lack of standardized evaluation metrics. We performed a systematic review to provide an overview of the evaluation metrics reported for AR technologies in neurosurgical practice and to establish a foundation for assessing and comparing such technologies. METHODS PubMed, Embase, and Cochrane were systematically searched on September 22, 2022, for publications on the assessment of AR for cranial neurosurgery. The findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. RESULTS The systematic search yielded 830 publications; 114 were screened full text, and 80 were included for analysis. Among the included studies, 5% dealt with preoperative visualization using AR, with user perception as the most frequently reported metric. The majority (75%) researched AR technology for surgical navigation, with registration accuracy, clinical outcome, and time measurements as the most frequently reported metrics. The remaining 20% studied the use of AR for intraoperative guidance, with registration accuracy, task outcome, and user perception as the most frequently reported metrics. CONCLUSION For quality benchmarking of AR technologies in neurosurgery, evaluation metrics should be specific to the risk profile and clinical objectives of the technology. Key priorities are using validated questionnaires to assess user perception; reporting registration accuracy, precision, robustness, and system stability clearly and unambiguously; and accurately measuring task performance in clinical studies. We provide an overview suggesting which evaluation metrics to use per AR application and innovation phase, aiming to improve the assessment of AR's added value for neurosurgical practice and to facilitate its integration into the clinical workflow.
Affiliation(s)
- Tessa M. Kos
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Elisa Colombo
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- L. Wilbert Bartels
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Pierre A. Robe
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P. C. van Doormaal
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
5. Cui Y, Zhou Y, Zhang H, Yuan Y, Wang J, Zhang Z. Application of Glasses-Free Augmented Reality Localization in Neurosurgery. World Neurosurg 2023; 180:e296-e301. PMID: 37757949. DOI: 10.1016/j.wneu.2023.09.064.
Abstract
OBJECTIVE Accurate localization of intracranial lesions is critical in neurosurgery. Most surgeons localize neurosurgical sites using skull surface markers, combined with neuroimaging examination and marking lines. The primary purpose of this project was to develop a glasses-free augmented reality (AR) technique for surgical positioning that can be used with the naked eye. METHODS Brain models with predesigned intracranial lesions were imaged by computed tomography, and the Digital Imaging and Communications in Medicine data were segmented and modeled with 3D Slicer software. The processed data were imported into a smartphone 3D viewing application (Persp 3D) and used alongside a Remebot surgical robot. Localization of the intracranial lesions was performed, and the AR localization error was calculated relative to standard robot localization. RESULTS After mastering the AR localization registration method, surgeons achieved an average localization error of 1.39 ± 0.82 mm. CONCLUSIONS The error of AR positioning in surgical simulation tests based on brain models was at the millimeter level, supporting the feasibility of clinical application. More efficient registration remains a need to be addressed.
Affiliation(s)
- Yahui Cui
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Yupeng Zhou
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Haipeng Zhang
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Yuxiao Yuan
- Department of Radiology, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Juan Wang
- Operating Room, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Zuyong Zhang
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
6. Qi Z, Bopp MHA, Nimsky C, Chen X, Xu X, Wang Q, Gan Z, Zhang S, Wang J, Jin H, Zhang J. A Novel Registration Method for a Mixed Reality Navigation System Based on a Laser Crosshair Simulator: A Technical Note. Bioengineering (Basel) 2023; 10:1290. PMID: 38002414. PMCID: PMC10669875. DOI: 10.3390/bioengineering10111290.
Abstract
Mixed reality navigation (MRN) is pivotal in augmented reality-assisted neurosurgical interventions. However, existing MRN registration methods struggle to achieve low user dependency, high accuracy, and clinical applicability at the same time. This study proposes and evaluates a novel registration method based on a laser crosshair simulator, designed to replicate the scanner frame's position on the patient. The system autonomously calculates the transformation that maps coordinates from the tracking space to the reference image space. A mathematical model and registration workflow were designed, and a Universal Windows Platform (UWP) application was developed for the HoloLens 2. Finally, a head phantom was used to measure the system's target registration error (TRE). The proposed method was successfully implemented, obviating the need for user interaction with virtual objects during registration. Regarding accuracy, the average deviation was 3.7 ± 1.7 mm. The method shows encouraging efficiency and intuitiveness and marks a valuable advance toward low-cost, easy-to-use MRN systems. Its potential for improved accuracy and adaptability in interventional procedures makes this approach promising for improving surgical outcomes.
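Mapping coordinates from a tracking space into a reference image space, as this abstract describes, amounts to composing rigid homogeneous transforms. The frame names and numbers below are hypothetical stand-ins for the system's calibrated transforms (not the paper's actual values), assuming NumPy; the point is only the composition pattern.

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical chain: tracking space -> simulator (crosshair) frame -> image space.
# T_sim_track and T_image_sim stand in for the registered/calibrated transforms;
# composing them gives T_image_track, which maps tracked points into the image.
T_sim_track = homogeneous(np.eye(3), np.array([10.0, 0.0, 0.0]))
T_image_sim = homogeneous(np.eye(3), np.array([0.0, -5.0, 0.0]))
T_image_track = T_image_sim @ T_sim_track

p_track = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous point in tracking space
p_image = T_image_track @ p_track          # same point in image coordinates
```

The order matters: transforms apply right-to-left, so the tracked point is first carried into the simulator frame and then into the image space.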
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Miriam H. A. Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Xiaolei Chen
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Xinghua Xu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Qun Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Zhichao Gan
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Shiyu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Jingyue Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Haitao Jin
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- NCO School, Army Medical University, Shijiazhuang 050081, China
- Jiashu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
7. Kazemzadeh K, Akhlaghdoust M, Zali A. Advances in artificial intelligence, robotics, augmented and virtual reality in neurosurgery. Front Surg 2023; 10:1241923. PMID: 37693641. PMCID: PMC10483402. DOI: 10.3389/fsurg.2023.1241923.
Abstract
Neurosurgical practitioners undergo extensive, prolonged training to acquire diverse technical proficiencies, and neurosurgical procedures demand substantial pre-, intra-, and postoperative clinical data acquisition, decision making, attention, and patient convalescence. The past decade has witnessed an appreciable escalation in the significance of artificial intelligence (AI) in neurosurgery. AI holds significant potential in neurosurgery: it supplements the abilities of neurosurgeons to offer optimal interventional and non-interventional care by improving prognostic and diagnostic outcomes in clinical therapy and by assisting decision making during surgical interventions, thereby enhancing patient outcomes. Other technologies, including augmented reality, robotics, and virtual reality, can likewise assist and advance neurosurgical methods, and they play a significant role in generating, processing, and storing experimental and clinical data. Moreover, the use of these technologies in neurosurgery may reduce the costs of surgical care and extend high-quality health care to a wider populace. This narrative review integrates the results of articles that elucidate the role of these technologies in neurosurgery.
Affiliation(s)
- Kimia Kazemzadeh
- Students’ Scientific Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Meisam Akhlaghdoust
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Alireza Zali
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
8. Ragnhildstveit A, Li C, Zimmerman MH, Mamalakis M, Curry VN, Holle W, Baig N, Uğuralp AK, Alkhani L, Oğuz-Uğuralp Z, Romero-Garcia R, Suckling J. Intra-operative applications of augmented reality in glioma surgery: a systematic review. Front Surg 2023; 10:1245851. PMID: 37671031. PMCID: PMC10476869. DOI: 10.3389/fsurg.2023.1245851.
Abstract
Background Augmented reality (AR) is increasingly being explored in neurosurgical practice. By visualizing patient-specific, three-dimensional (3D) models in real time, surgeons can improve their spatial understanding of complex anatomy and pathology, thereby optimizing intra-operative navigation, localization, and resection. Here, we aimed to capture applications of AR in glioma surgery, their current status, and future potential. Methods A systematic review of the literature was conducted, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. PubMed, Embase, and Scopus electronic databases were queried from inception to October 10, 2022. Study eligibility was evaluated in the qualitative synthesis using the Population, Intervention, Comparison, Outcomes, and Study design (PICOS) framework. Data regarding AR workflow, surgical application, and associated outcomes were then extracted, and the quality of evidence was examined using hierarchical classes of evidence in neurosurgery. Results The search returned 77 articles. Forty were subject to title and abstract screening, 25 proceeded to full-text screening, and 22 met the eligibility criteria and were included in the final review. During abstraction, studies were classified as "development" or "intervention" based on their primary aims. Overall, AR was qualitatively advantageous owing to enhanced visualization of gliomas and critical structures, frequently aiding maximal safe resection. Non-rigid applications were also useful in disclosing and compensating for intra-operative brain shift. Nevertheless, registration methods and measurements varied widely, which considerably affected projection accuracy. Most studies were of low-level evidence, yielding heterogeneous results. Conclusions AR has increasing potential for glioma surgery, with capacity to positively influence the onco-functional balance. However, technical and design limitations are readily apparent. The field must consider consistency and replicability, as well as the level of evidence, to effectively converge on standard approaches that maximize patient benefit.
Affiliation(s)
- Anya Ragnhildstveit
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Chao Li
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, England
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England
- Michail Mamalakis
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Victoria N. Curry
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
- Willis Holle
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Physics and Astronomy, The University of Utah, Salt Lake City, UT, United States
- Noor Baig
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, United States
- Layth Alkhani
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Biology, Stanford University, Stanford, CA, United States
- Rafael Romero-Garcia
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Instituto de Biomedicina de Sevilla (IBiS) HUVR/CSIC/Universidad de Sevilla/CIBERSAM, ISCIII, Dpto. de Fisiología Médica y Biofísica
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge, England
9. Goto Y, Kawaguchi A, Inoue Y, Nakamura Y, Oyama Y, Tomioka A, Higuchi F, Uno T, Shojima M, Kin T, Shin M. Efficacy of a Novel Augmented Reality Navigation System Using 3D Computer Graphic Modeling in Endoscopic Transsphenoidal Surgery for Sellar and Parasellar Tumors. Cancers (Basel) 2023; 15:2148. PMID: 37046809. PMCID: PMC10093001. DOI: 10.3390/cancers15072148.
Abstract
In endoscopic transsphenoidal skull base surgery, the surgeon must simultaneously keep track of the tumor's location on imaging and the surrounding anatomic structures. However, it is often difficult to accurately reconstruct the endoscopic view of the surgical field from presurgical radiographic images, because the lesion markedly displaces the normal anatomic structures. We created a precise three-dimensional computer graphics model from preoperative radiographic data, superimposed it on the visual image of the actual surgical field, and displayed the result on a video monitor during endoscopic transsphenoidal surgery. We evaluated the efficacy of this augmented reality (AR) navigation system in 15 consecutive patients with sellar and parasellar tumors. The average overall score was 4.7 (95% confidence interval: 4.58-4.82), indicating that the AR navigation system was as useful as, or more useful than, conventional navigation in certain patients. In two patients, AR navigation was assessed as less useful than conventional navigation because perceiving the depth of the lesion was more difficult. The developed system was more useful than conventional navigation for facilitating an immediate three-dimensional understanding of the lesion and surrounding structures.
Affiliation(s)
- Yoshiaki Goto
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Ai Kawaguchi
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Yuki Inoue
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Yuki Nakamura
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Yuta Oyama
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Arisa Tomioka
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Fumi Higuchi
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Takeshi Uno
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Masaaki Shojima
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
- Taichi Kin
- Department of Neurosurgery, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 133-8655, Japan
- Masahiro Shin
- Department of Neurosurgery, University of Teikyo Hospital, 2-11-1 Kaga, Itabashi-ku, Tokyo 179-8606, Japan
10
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637] [DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main driver of the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through the end of 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature and pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
  - Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
  - Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
  - Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
  - Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
  - Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
  - Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
  - Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
11
Van Gestel F, Frantz T, Buyck F, Geens W, Neuville Q, Bruneau M, Jansen B, Scheerlinck T, Vandemeulebroucke J, Duerinck J. Neuro-oncological augmented reality planning for intracranial tumor resection. Front Neurol 2023; 14:1104571. [PMID: 36998774] [PMCID: PMC10043492] [DOI: 10.3389/fneur.2023.1104571]
Abstract
Background: Before starting surgery for the resection of an intracranial tumor, its outlines are typically marked on the skin of the patient. This allows for the planning of the optimal skin incision, craniotomy, and angle of approach. Conventionally, the surgeon determines tumor borders using neuronavigation with a tracked pointer. However, interpretation errors can lead to important deviations, especially for deep-seated tumors, potentially resulting in a suboptimal approach with incomplete exposure. Augmented reality (AR) allows display of the tumor and critical structures directly on the patient, which can simplify and improve surgical preparation. Methods: We developed an AR-based workflow for intracranial tumor resection planning deployed on the Microsoft HoloLens II, which exploits the built-in infrared camera for tracking the patient. We initially performed a phantom study to assess the accuracy of the registration and tracking. Following this, we evaluated the AR-based planning step in a prospective clinical study of patients undergoing resection of a brain tumor. This planning step was performed by 12 surgeons and trainees with varying degrees of experience. After patient registration, tumor outlines were marked on the patient's skin by different investigators, consecutively using a conventional neuronavigation system and an AR-based system. Their performance in both registration and delineation was measured in terms of accuracy and duration and compared. Results: During phantom testing, registration errors remained below 2.0 mm and 2.0° for both AR-based navigation and conventional neuronavigation, with no significant difference between the systems. In the prospective clinical trial, 20 patients underwent tumor resection planning. Registration accuracy was independent of user experience for both AR-based navigation and the commercial neuronavigation system. AR-guided tumor delineation was deemed superior in 65% of cases, equally good in 30% of cases, and inferior in 5% of cases when compared with the conventional navigation system. The overall planning time (AR = 119 ± 44 s, conventional = 187 ± 56 s) was significantly reduced through adoption of the AR workflow (p < 0.001), with an average time reduction of 39%. Conclusion: By providing a more intuitive visualization of relevant data to the surgeon, AR navigation offers an accurate method for tumor resection planning that is quicker and more intuitive than conventional neuronavigation. Further research should focus on intraoperative implementations.
Affiliation(s)
- Frederick Van Gestel
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - Research Group Center for Neurosciences (C4N-NEUR), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - Correspondence: Frederick Van Gestel
- Taylor Frantz
  - Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - IMEC, Leuven, Belgium
- Felix Buyck
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Wietse Geens
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Quentin Neuville
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - Research Group Center for Neurosciences (C4N-NEUR), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Michaël Bruneau
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Bart Jansen
  - Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - IMEC, Leuven, Belgium
- Thierry Scheerlinck
  - Department of Orthopedic Surgery and Traumatology, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - Research Group Beeldvorming en Fysische Wetenschappen (BEFY-ORTHO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Jef Vandemeulebroucke
  - Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - IMEC, Leuven, Belgium
  - Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Johnny Duerinck
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
  - Research Group Center for Neurosciences (C4N-NEUR), Vrije Universiteit Brussel (VUB), Brussels, Belgium
12
Boaro A, Moscolo F, Feletti A, Polizzi G, Nunes S, Siddi F, Broekman M, Sala F. Visualization, navigation, augmentation. The ever-changing perspective of the neurosurgeon. Brain Spine 2022; 2:100926. [PMID: 36248169] [PMCID: PMC9560703] [DOI: 10.1016/j.bas.2022.100926]
Abstract
Introduction: The evolution of neurosurgery coincides with the evolution of visualization and navigation. Augmented reality technologies, with their ability to bring digital information into the real environment, have the potential to provide a new, revolutionary perspective to the neurosurgeon. Research question: To provide an overview of the historical and technical aspects of visualization and navigation in neurosurgery, and to provide a systematic review of augmented reality (AR) applications in neurosurgery. Material and methods: We provide an overview of the main historical milestones and technical features of visualization and navigation tools in neurosurgery. We systematically searched the PubMed and Scopus databases for AR applications in neurosurgery and specifically discussed their relationship with current visualization and navigation systems, as well as their main limitations. Results: The evolution of visualization in neurosurgery is embodied by four magnification systems: surgical loupes, the endoscope, the surgical microscope and, more recently, the exoscope, each presenting independent features in terms of magnification capabilities, eye-hand coordination and the possibility to implement additional functions. With regard to navigation, two independent systems have been developed: frame-based and frame-less systems. The most frequent application setting for AR is brain surgery (71.6%), specifically neuro-oncology (36.2%) and microscope-based delivery (29.2%), even though in the majority of cases AR applications presented their own visualization supports (66%). Discussion and conclusions: The evolution of visualization and navigation in neurosurgery has allowed for the development of more precise instruments; the development and clinical validation of AR applications have the potential to be the next breakthrough, making surgeries safer, improving the surgical experience and reducing costs.
Affiliation(s)
- A. Boaro
  - Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Moscolo
  - Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- A. Feletti
  - Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- G.M.V. Polizzi
  - Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- S. Nunes
  - Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Siddi
  - Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- M.L.D. Broekman
  - Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
  - Department of Neurosurgery, Leiden University Medical Center, Leiden, Zuid-Holland, the Netherlands
- F. Sala
  - Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
13
Doughty M, Ghugre NR, Wright GA. Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery. J Imaging 2022; 8:203. [PMID: 35877647] [PMCID: PMC9318659] [DOI: 10.3390/jimaging8070203]
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy that described the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial surgeries (n=8). For preoperative input data, computed tomography (CT) (n=34) and surface-rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly superimposed directly on the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy on the order of 2–5 mm has been demonstrated in phantom models, several human factors and technical challenges (perception, ease of use, context, interaction, and occlusion) remain to be addressed prior to widespread adoption of OST-HMD-led surgical navigation.
Affiliation(s)
- Mitchell Doughty
  - Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
  - Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
  - Correspondence:
- Nilesh R. Ghugre
  - Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
  - Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
  - Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Graham A. Wright
  - Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
  - Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
  - Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
14
Robertson FC, Sha RM, Amich JM, Essayed WI, Lal A, Lee BH, Calvachi Prieto P, Tokuda J, Weaver JC, Kirollos RW, Chen MW, Gormley WB. Frameless neuronavigation with computer vision and real-time tracking for bedside external ventricular drain placement: a cadaveric study. J Neurosurg 2022; 136:1475-1484. [PMID: 34653985] [DOI: 10.3171/2021.5.jns211033]
Abstract
OBJECTIVE A major obstacle to improving bedside neurosurgical procedure safety and accuracy with image guidance technologies is the lack of a rapidly deployable, real-time registration and tracking system for a moving patient. This deficiency explains the persistence of freehand placement of external ventricular drains, which has an inherent risk of inaccurate positioning, multiple passes, tract hemorrhage, and injury to adjacent brain parenchyma. Here, the authors introduce and validate a novel image registration and real-time tracking system for frameless stereotactic neuronavigation and catheter placement in the nonimmobilized patient. METHODS Computer vision technology was used to develop an algorithm that performed near-continuous, automatic, and marker-less image registration. The program fuses a subject's preprocedure CT scans to live 3D camera images (Snap-Surface), and patient movement is incorporated by artificial intelligence-driven recalibration (Real-Track). The surface registration error (SRE) and target registration error (TRE) were calculated for 5 cadaveric heads that underwent serial movements (fast and slow velocity roll, pitch, and yaw motions) and several test conditions, such as surgical draping with limited anatomical exposure and differential subject lighting. Six catheters were placed in each cadaveric head (30 total placements) with a simulated sterile technique. Postprocedure CT scans allowed comparison of planned and actual catheter positions for user error calculation. RESULTS Registration was successful for all 5 cadaveric specimens, with an overall mean (± standard deviation) SRE of 0.429 ± 0.108 mm for the catheter placements. Accuracy of TRE was maintained under 1.2 mm throughout specimen movements of low and high velocities of roll, pitch, and yaw, with the slowest recalibration time of 0.23 seconds. There were no statistically significant differences in SRE when the specimens were draped or fully undraped (p = 0.336). 
Performing registration in a bright versus a dimly lit environment had no statistically significant effect on SRE (p = 0.742 and 0.859, respectively). For the catheter placements, mean TRE was 0.862 ± 0.322 mm and mean user error (difference between target and actual catheter tip) was 1.674 ± 1.195 mm. CONCLUSIONS This computer vision-based registration system provided real-time tracking of cadaveric heads with a recalibration time of less than one-quarter of a second with submillimetric accuracy and enabled catheter placements with millimetric accuracy. Using this approach to guide bedside ventriculostomy could reduce complications, improve safety, and be extrapolated to other frameless stereotactic applications in awake, nonimmobilized patients.
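The SRE and TRE figures reported above are residual distances after estimating a rigid transform between corresponding point sets. As a generic illustration only (not the authors' computer-vision pipeline), the least-squares rigid registration step is commonly solved with the SVD-based Arun/Kabsch method; the function names below are ours:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (R, t) mapping `moving` points onto
    `fixed` points; rows of each array are corresponding 3D points."""
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

def target_registration_error(R, t, target_moving, target_fixed):
    """TRE: distance between a mapped target point and its true location."""
    return float(np.linalg.norm(R @ target_moving + t - target_fixed))
```

With noiseless, non-degenerate correspondences the recovered transform is exact; with measurement noise, the residual at held-out target points is the TRE that navigation studies such as this one report.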
Affiliation(s)
- Faith C Robertson
  - Department of Neurosurgery, Massachusetts General Hospital, Boston
  - Computational Neuroscience Outcomes Center, Brigham and Women's Hospital, Boston
  - Harvard Medical School, Boston
- Raahil M Sha
  - Zeta Surgical Inc., Boston
  - Harvard Innovation Labs, Boston
- Jose M Amich
  - Zeta Surgical Inc., Boston
  - Harvard Innovation Labs, Boston
- Walid Ibn Essayed
  - Harvard Medical School, Boston
  - Department of Neurosurgery, Brigham and Women's Hospital, Boston
- Avinash Lal
  - Zeta Surgical Inc., Boston
  - Harvard Innovation Labs, Boston
- Benjamin H Lee
  - Zeta Surgical Inc., Boston
  - Harvard Innovation Labs, Boston
- Paola Calvachi Prieto
  - Computational Neuroscience Outcomes Center, Brigham and Women's Hospital, Boston
  - Harvard Medical School, Boston
- Junichi Tokuda
  - Department of Radiology, Brigham and Women's Hospital, Boston
- James C Weaver
  - Harvard John A. Paulson School of Engineering and Applied Sciences, Cambridge, Massachusetts
- Ramez W Kirollos
  - Department of Neurosurgery, National Neuroscience Institute, Singapore
  - Department of Neurosurgery, SingHealth Duke-NUS, National University of Singapore, Singapore
- Min Wei Chen
  - Department of Neurosurgery, National Neuroscience Institute, Singapore
- William B Gormley
  - Computational Neuroscience Outcomes Center, Brigham and Women's Hospital, Boston
  - Harvard Medical School, Boston
  - Department of Neurosurgery, Brigham and Women's Hospital, Boston
15
Mishra R, Narayanan MK, Umana GE, Montemurro N, Chaurasia B, Deora H. Virtual Reality in Neurosurgery: Beyond Neurosurgical Planning. Int J Environ Res Public Health 2022; 19:1719. [PMID: 35162742] [PMCID: PMC8835688] [DOI: 10.3390/ijerph19031719]
Abstract
Background: While several publications have focused on the intuitive role of augmented reality (AR) and virtual reality (VR) in neurosurgical planning, the aim of this review was to explore other avenues where these technologies have significant utility and applicability. Methods: This review was conducted by searching PubMed, PubMed Central, Google Scholar, the Scopus database, the Web of Science Core Collection database, and the SciELO citation index, from 1989–2021. An example of a search strategy used in PubMed Central is: “Virtual reality” [All Fields] AND (“neurosurgical procedures” [MeSH Terms] OR (“neurosurgical” [All Fields] AND “procedures” [All Fields]) OR “neurosurgical procedures” [All Fields] OR “neurosurgery” [All Fields] OR “neurosurgery” [MeSH Terms]). Using this search strategy, we identified 487 citations in PubMed, 1097 in PubMed Central, and 275 in the Web of Science Core Collection database. Results: The reviewed articles showed numerous applications of VR/AR in neurosurgery. These included use as a supplement and augment to neuronavigation in fields ranging from complex vascular interventions and spine deformity correction to resident training, procedural practice, pain management, and rehabilitation of neurosurgical patients. These technologies have also shown promise in other areas of neurosurgery, such as consent taking, training of ancillary personnel, and improving patient comfort during procedures, as well as in training neurosurgeons in other advancements in the field, such as robotic neurosurgery. Conclusions: We present the first review of the immense possibilities of VR in neurosurgery beyond merely planning for surgical procedures. The promise of VR and AR, especially for “social distancing” in neurosurgery training, for economically disadvantaged groups, for the prevention of medicolegal claims, and in pain management and rehabilitation, warrants further research.
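The search strategy quoted above can be reproduced programmatically against NCBI's E-utilities. The sketch below only assembles the search term and the esearch request URL; the helper names are ours, and no network request is made:

```python
from urllib.parse import urlencode

def tagged(term: str, tag: str = "All Fields") -> str:
    """Render one PubMed search clause, e.g. "neurosurgery" [MeSH Terms]."""
    return f'"{term}" [{tag}]'

def any_of(*clauses: str) -> str:
    """OR-group a set of clauses, parenthesized as PubMed expects."""
    return "(" + " OR ".join(clauses) + ")"

# The review's search strategy, reassembled programmatically.
query = tagged("Virtual reality") + " AND " + any_of(
    tagged("neurosurgical procedures", "MeSH Terms"),
    "(" + tagged("neurosurgical") + " AND " + tagged("procedures") + ")",
    tagged("neurosurgical procedures"),
    tagged("neurosurgery"),
    tagged("neurosurgery", "MeSH Terms"),
)

# E-utilities esearch request URL; submit with any HTTP client to get PMIDs.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
url = ESEARCH + "?" + urlencode({"db": "pubmed", "term": query, "retmax": 100})
```

Hit counts drift as databases grow, so re-running the query today would not reproduce the 487/1097/275 figures exactly.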
Affiliation(s)
- Rakesh Mishra
  - Department of Neurosurgery, Institute of Medical Sciences, Banaras Hindu University, Varanasi 221005, India
- Giuseppe E. Umana
  - Trauma and Gamma-Knife Center, Department of Neurosurgery, Cannizzaro Hospital, 95100 Catania, Italy
- Nicola Montemurro
  - Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), University of Pisa, 56100 Pisa, Italy
  - Correspondence:
- Bipin Chaurasia
  - Department of Neurosurgery, Bhawani Hospital, Birgunj 44300, Nepal
- Harsh Deora
  - Department of Neurosurgery, National Institute of Mental Health and Neurosciences, Bengaluru 560029, India
16
Fick T, van Doormaal JAM, Tosic L, van Zoest RJ, Meulstee JW, Hoving EW, van Doormaal TPC. Fully automatic brain tumor segmentation for 3D evaluation in augmented reality. Neurosurg Focus 2021; 51:E14. [PMID: 34333477] [DOI: 10.3171/2021.5.focus21200]
Abstract
OBJECTIVE For currently available augmented reality workflows, 3D models need to be created with manual or semiautomatic segmentation, which is a time-consuming process. The authors created an automatic segmentation algorithm that generates 3D models of skin, brain, ventricles, and contrast-enhancing tumor from a single T1-weighted MR sequence and embedded this model into an automatic workflow for 3D evaluation of anatomical structures with augmented reality in a cloud environment. In this study, the authors validate the accuracy and efficiency of this automatic segmentation algorithm for brain tumors and compared it with a manually segmented ground truth set. METHODS Fifty contrast-enhanced T1-weighted sequences of patients with contrast-enhancing lesions measuring at least 5 cm3 were included. All slices of the ground truth set were manually segmented. The same scans were subsequently run in the cloud environment for automatic segmentation. Segmentation times were recorded. The accuracy of the algorithm was compared with that of manual segmentation and evaluated in terms of Sørensen-Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile of Hausdorff distance (HD95). RESULTS The mean ± SD computation time of the automatic segmentation algorithm was 753 ± 128 seconds. The mean ± SD DSC was 0.868 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm. Meningioma (mean 0.89 and median 0.92) showed greater DSC than metastasis (mean 0.84 and median 0.85). Automatic segmentation had greater accuracy for measuring DSC (mean 0.86 and median 0.87) and HD95 (mean 3.62 mm and median 3.11 mm) of supratentorial metastasis than those of infratentorial metastasis (mean 0.82 and median 0.81 for DSC; mean 5.26 mm and median 4.72 mm for HD95). 
CONCLUSIONS The automatic cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in everyday clinical practice by providing 3D augmented reality visualization of contrast-enhancing intracranial lesions measuring at least 5 cm3. The next steps involve incorporation of other sequences and improving accuracy with 3D fine-tuning in order to expand the scope of augmented reality workflow.
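The validation metrics used here (DSC, ASSD, HD95) have standard definitions on binary segmentation masks. A minimal sketch of how they can be computed (ours, not the authors' pipeline), assuming NumPy/SciPy and isotropic voxel spacing by default:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def _surface(mask):
    """Boundary voxels: the mask minus its one-voxel erosion."""
    return mask & ~binary_erosion(mask)

def dice(a, b):
    """Sørensen-Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def _surface_distances(a, b, spacing):
    """Euclidean distances from each surface voxel of a to the surface of b."""
    dist_to_b_surface = distance_transform_edt(~_surface(b), sampling=spacing)
    return dist_to_b_surface[_surface(a)]

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (mm, given spacing in mm)."""
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return float(d.mean())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance."""
    return float(max(np.percentile(_surface_distances(a, b, spacing), 95),
                     np.percentile(_surface_distances(b, a, spacing), 95)))
```

Passing the scan's voxel spacing as `spacing` converts voxel distances into millimeters, which is how the ASSD and HD95 values in the abstract are expressed.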
Affiliation(s)
- Tim Fick
  - Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Jesse A M van Doormaal
  - Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Lazar Tosic
  - Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland
- Renate J van Zoest
  - Department of Neurology and Neurosurgery, Curaçao Medical Center, Willemstad, Curaçao
- Jene W Meulstee
  - Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Eelco W Hoving
  - Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
  - Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P C van Doormaal
  - Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
  - Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland
17
Van Gestel F, Frantz T, Vannerom C, Verhellen A, Gallagher AG, Elprama SA, Jacobs A, Buyl R, Bruneau M, Jansen B, Vandemeulebroucke J, Scheerlinck T, Duerinck J. The effect of augmented reality on the accuracy and learning curve of external ventricular drain placement. Neurosurg Focus 2021; 51:E8. [PMID: 34333479] [DOI: 10.3171/2021.5.focus21215]
Abstract
OBJECTIVE The traditional freehand technique for external ventricular drain (EVD) placement is most frequently used, but remains the primary risk factor for inaccurate drain placement. As this procedure could benefit from image guidance, the authors set forth to demonstrate the impact of augmented-reality (AR) assistance on the accuracy and learning curve of EVD placement compared with the freehand technique. METHODS Sixteen medical students performed a total of 128 EVD placements on a custom-made phantom head, both before and after receiving a standardized training session. They were guided by either the freehand technique or by AR, which provided an anatomical overlay and tailored guidance for EVD placement through inside-out infrared tracking. The outcome was quantified by the metric accuracy of EVD placement as well as by its clinical quality. RESULTS The mean target error was significantly impacted by either AR (p = 0.003) or training (p = 0.02) in a direct comparison with the untrained freehand performance. Both untrained (11.9 ± 4.5 mm) and trained (12.2 ± 4.7 mm) AR performances were significantly better than the untrained freehand performance (19.9 ± 4.2 mm), which improved after training (13.5 ± 4.7 mm). The quality of EVD placement as assessed by the modified Kakarla scale (mKS) was significantly impacted by AR guidance (p = 0.005) but not by training (p = 0.07). Both untrained and trained AR performances (59.4% mKS grade 1 for both) were significantly better than the untrained freehand performance (25.0% mKS grade 1). Spatial aptitude testing revealed a correlation between perceptual ability and untrained AR-guided performance (r = 0.63). CONCLUSIONS Compared with the freehand technique, AR guidance for EVD placement yielded a higher outcome accuracy and quality for procedure novices. 
With AR, untrained individuals performed as well as trained individuals, which indicates that AR guidance not only improved performance but also positively impacted the learning curve. Future efforts will focus on the translation and evaluation of AR for EVD placement in the clinical setting.
Affiliation(s)
- Frederick Van Gestel
  - Department of Neurosurgery, Vrije Universiteit Brussel, Universitair Ziekenhuis Brussel, Brussels
  - Research Group Center For Neurosciences (C4N-NEUR), Vrije Universiteit Brussel, Brussels
- Taylor Frantz
  - Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels
  - imec, Leuven
- Cédric Vannerom
  - Department of Neurosurgery, Vrije Universiteit Brussel, Universitair Ziekenhuis Brussel, Brussels
  - Research Group Center For Neurosciences (C4N-NEUR), Vrije Universiteit Brussel, Brussels
- Anouk Verhellen
  - Department of Studies on Media, Innovation & Technology (SMIT), Vrije Universiteit Brussel, Brussels
- Shirley A Elprama
  - Department of Studies on Media, Innovation & Technology (SMIT), Vrije Universiteit Brussel, Brussels
- An Jacobs
  - Department of Studies on Media, Innovation & Technology (SMIT), Vrije Universiteit Brussel, Brussels
- Ronald Buyl
  - Department of Public Health, Research Group Biostatistics and Medical Informatics (BISI), Vrije Universiteit Brussel, Brussels
- Michaël Bruneau
  - Department of Neurosurgery, Vrije Universiteit Brussel, Universitair Ziekenhuis Brussel, Brussels
- Bart Jansen
  - Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels
  - imec, Leuven
- Jef Vandemeulebroucke
  - Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels
  - imec, Leuven
- Thierry Scheerlinck
  - Department of Orthopedic Surgery and Traumatology, Vrije Universiteit Brussel, Universitair Ziekenhuis Brussel, Brussels
  - Research Group Beeldvorming en Fysische wetenschappen (BEFY-ORTHO), Vrije Universiteit Brussel, Brussels, Belgium
- Johnny Duerinck
  - Department of Neurosurgery, Vrije Universiteit Brussel, Universitair Ziekenhuis Brussel, Brussels
  - Research Group Center For Neurosciences (C4N-NEUR), Vrije Universiteit Brussel, Brussels
Collapse
|
18
|
Evaluation of a Wearable AR Platform for Guiding Complex Craniotomies in Neurosurgery. Ann Biomed Eng 2021; 49:2590-2605. [PMID: 34297263 DOI: 10.1007/s10439-021-02834-8] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 07/12/2021] [Indexed: 10/20/2022]
Abstract
Today, neuronavigation is widely used in daily clinical routine to perform safe and efficient surgery. Augmented reality (AR) interfaces can provide anatomical models and preoperative planning contextually blended with the real surgical scenario, overcoming the limitations of traditional neuronavigators. This study aims to demonstrate the reliability of a new-concept AR headset in navigating complex craniotomies. Moreover, we aim to prove the efficacy of a patient-specific template-based methodology for fast, non-invasive, and fully automatic planning-to-patient registration. The AR platform navigation performance was assessed with an in-vitro study whose goal was twofold: to measure the real-to-virtual 3D target visualization error (TVE), and to assess the navigation accuracy through a user study involving 10 subjects in tracing a complex craniotomy. The feasibility of the template-based registration was preliminarily tested on a volunteer. The TVE mean and standard deviation were 1.3 and 0.6 mm. The results of the user study, over 30 traced craniotomies, showed that 97% of the trajectory length was traced within an error margin of 1.5 mm, and 92% within a margin of 1 mm. The in-vivo test confirmed the feasibility and reliability of the patient-specific template for registration. The proposed AR headset allows ergonomic and intuitive use of the preoperative plan, and it can represent a valid option to support neurosurgical tasks.
Collapse
|