1. Nam SM, Byun YH, Dho YS, Park CK. Envisioning the Future of the Neurosurgical Operating Room with the Concept of the Medical Metaverse. J Korean Neurosurg Soc 2025;68:137-149. PMID: 39492606; PMCID: PMC11924637; DOI: 10.3340/jkns.2024.0160.
Abstract
The medical metaverse can be defined as a virtual spatiotemporal framework wherein higher-dimensional medical information is generated, exchanged, and utilized through communication among medical personnel or patients. This occurs through the integration of cutting-edge technologies such as augmented reality (AR), virtual reality (VR), artificial intelligence (AI), big data, cloud computing, and others. We can envision a future neurosurgical operating room built on this medical metaverse concept, featuring a shared extended reality (AR/VR) view of the surgical field, AI-powered intraoperative neurophysiological monitoring, and real-time intraoperative tissue diagnosis. The future neurosurgical operating room will evolve into a true medical metaverse in which the participants in surgery communicate across overlapping virtual layers of surgery, monitoring, and diagnosis.
Affiliation(s)
- Sun Mo Nam: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Yoon Hwan Byun: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea; Department of Neurosurgery, SMG-SNU Boramae Medical Center, Seoul, Korea
- Yun-Sik Dho: Neuro-Oncology Clinic, National Cancer Center, Goyang, Korea
- Chul-Kee Park: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea

2. Carbone M, Montemurro N, Cattari N, Autelitano M, Cutolo F, Ferrari V, Cigna E, Condino S. Targeting accuracy of neuronavigation: a comparative evaluation of an innovative wearable AR platform vs. traditional EM navigation. Front Digit Health 2025;6:1500677. PMID: 39877694; PMCID: PMC11772343; DOI: 10.3389/fdgth.2024.1500677.
Abstract
Wearable augmented reality in neurosurgery offers significant advantages by enabling the visualization of navigation information directly on the patient, seamlessly integrating virtual data with the real surgical field. This ergonomic approach can facilitate a more intuitive understanding of spatial relationships and guidance cues, potentially reducing cognitive load and enhancing the accuracy of surgical gestures by aligning critical information with the actual anatomy in real-time. This study evaluates the benefits of a novel AR platform, VOSTARS, by comparing its targeting accuracy to that of the gold-standard electromagnetic (EM) navigation system, Medtronic StealthStation® S7®. Both systems were evaluated in phantom and human studies. In the phantom study, participants targeted 13 predefined landmarks using identical pointers to isolate system performance. In the human study, three facial landmarks were targeted in nine volunteers post-brain tumor surgery. The performance of the VOSTARS system was superior to that of the standard neuronavigator in both the phantom and human studies. In the phantom study, users achieved a median accuracy of 1.4 mm (IQR: 1.2 mm) with VOSTARS compared to 2.9 mm (IQR: 1.4 mm) with the standard neuronavigator. In the human study, the median targeting accuracy with VOSTARS was significantly better for selected landmarks in the outer eyebrow (3.7 mm vs. 6.6 mm, p = 0.05) and forehead (4.5 mm vs. 6.3 mm, p = 0.021). Although the difference for the pronasal point was not statistically significant (2.7 mm vs. 3.5 mm, p = 0.123), the trend towards improved accuracy with VOSTARS is clear. These findings suggest that the proposed AR technology has the potential to significantly improve surgical outcomes in neurosurgery.
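The accuracy figures above boil down to per-landmark error distributions summarized by median and interquartile range, with a paired comparison between the two systems. Below is a minimal sketch of that style of analysis; the error values are placeholders and the Wilcoxon signed-rank test is an assumed choice of paired non-parametric test, not necessarily the one used by the authors.

```python
import numpy as np
from scipy import stats

# Hypothetical per-landmark targeting errors (mm) for the same 13 targets
# measured with an AR headset and a conventional EM neuronavigator.
ar_errors = np.array([1.2, 1.5, 1.3, 1.8, 1.1, 1.6, 1.4, 1.9, 1.2, 1.7, 1.3, 1.5, 1.4])
em_errors = np.array([2.8, 3.1, 2.5, 3.4, 2.6, 3.0, 2.9, 3.3, 2.7, 3.2, 2.8, 3.0, 2.9])

def summarize(err):
    """Return median and interquartile range of an error sample."""
    q1, med, q3 = np.percentile(err, [25, 50, 75])
    return med, q3 - q1

for name, err in [("AR", ar_errors), ("EM", em_errors)]:
    med, iqr = summarize(err)
    print(f"{name}: median {med:.1f} mm (IQR {iqr:.1f} mm)")

# Paired, non-parametric comparison of the two systems on the same targets.
stat, p = stats.wilcoxon(ar_errors, em_errors)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```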
Affiliation(s)
- Marina Carbone: Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Nicola Montemurro: EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy; Department of Neurosurgery, Azienda Ospedaliero Universitaria Pisana, Pisa, Italy
- Nadia Cattari: Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Martina Autelitano: EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy; Department of Neurosurgery, Azienda Ospedaliero Universitaria Pisana, Pisa, Italy
- Fabrizio Cutolo: Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Vincenzo Ferrari: Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Emanuele Cigna: EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Sara Condino: Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy

3. Frisk H, Jensdottir M, Coronado L, Conrad M, Hager S, Arvidsson L, Bartek J, Burström G, El-Hajj VG, Edström E, Elmi-Terander A, Persson O. Automatic Image Registration Provides Superior Accuracy Compared with Surface Matching in Cranial Navigation. Sensors (Basel) 2024;24:7341. PMID: 39599122; PMCID: PMC11597983; DOI: 10.3390/s24227341.
Abstract
OBJECTIVE The precision of neuronavigation systems relies on the correct registration of the patient's position in space and aligning it with radiological 3D imaging data. Registration is usually performed by the acquisition of anatomical landmarks or surface matching based on facial features. Another possibility is automatic image registration using intraoperative imaging. This could provide better accuracy, especially in rotated or prone positions where the other methods may be difficult to perform. The aim of this study was to validate automatic image registration (AIR) using intraoperative cone-beam computed tomography (CBCT) for cranial neurosurgical procedures and compare the registration accuracy to the traditional surface matching (SM) registration method based on preoperative MRI. The preservation of navigation accuracy throughout the surgery was also investigated. METHODS Adult patients undergoing intracranial tumor surgery were enrolled after consent. A standard SM registration was performed, and reference points were acquired. An AIR was then performed, and the same reference points were acquired again. Accuracy was calculated based on the referenced and acquired coordinates of the points for each registration method. The reference points were acquired before and after draping and at the end of the procedure to assess the persistency of accuracy. RESULTS In total, 22 patients were included. The mean accuracy was 6.6 ± 3.1 mm for SM registration and 1.0 ± 0.3 mm for AIR. The AIR was superior to the SM registration (p < 0.0001), with a mean improvement in accuracy of 5.58 mm (3.71-7.44 mm 99% CI). The mean accuracy for the AIR registration pre-drape was 1.0 ± 0.3 mm. The corresponding accuracies post-drape and post-resection were 2.9 ± 4.6 mm and 4.1 ± 4.9 mm, respectively. Although a loss of accuracy was identified between the preoperative and end-of-procedure measurements, there was no statistically significant decline during surgery. CONCLUSIONS AIR for cranial neuronavigation consistently delivered greater accuracy than SM and should be considered the new gold standard for patient registration in cranial neuronavigation. If intraoperative imaging is a limited resource, AIR should be prioritized in rotated or prone position procedures, where the benefits are the greatest.
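Registration accuracy in this design reduces to the Euclidean distance between each reference point and the corresponding point re-acquired after registration, summarized per method. A minimal sketch under that assumption follows; the coordinates are synthetic placeholders, not the study's data.

```python
import numpy as np

def registration_accuracy(reference_pts, acquired_pts):
    """Mean and SD of Euclidean error (mm) between reference and acquired 3D points."""
    reference_pts = np.asarray(reference_pts, dtype=float)
    acquired_pts = np.asarray(acquired_pts, dtype=float)
    errors = np.linalg.norm(acquired_pts - reference_pts, axis=1)
    return errors.mean(), errors.std()

# Placeholder reference points (mm) and points re-acquired after
# surface-matching (SM) and automatic image registration (AIR).
reference = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0], [70.0, 80.0, 90.0]])
sm_acquired = reference + np.random.normal(0.0, 4.0, reference.shape)
air_acquired = reference + np.random.normal(0.0, 0.6, reference.shape)

for name, acquired in [("SM", sm_acquired), ("AIR", air_acquired)]:
    mean, sd = registration_accuracy(reference, acquired)
    print(f"{name}: {mean:.1f} ± {sd:.1f} mm")
```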
Affiliation(s)
- Henrik Frisk: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Department of Neurosurgery, Karolinska University Hospital, SE 17176 Stockholm, Sweden
- Margret Jensdottir: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Department of Neurosurgery, Karolinska University Hospital, SE 17176 Stockholm, Sweden
- Luisa Coronado: Clinical Affairs, Brainlab AG, 81829 Munich, Germany
- Markus Conrad: Clinical Affairs, Brainlab AG, 81829 Munich, Germany
- Susanne Hager: Clinical Affairs, Brainlab AG, 81829 Munich, Germany
- Lisa Arvidsson: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Department of Neurosurgery, Karolinska University Hospital, SE 17176 Stockholm, Sweden
- Jiri Bartek: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Department of Neurosurgery, Karolinska University Hospital, SE 17176 Stockholm, Sweden
- Gustav Burström: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Department of Neurosurgery, Karolinska University Hospital, SE 17176 Stockholm, Sweden
- Victor Gabriel El-Hajj: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden
- Erik Edström: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Capio Spine Center Stockholm, Löwenströmska Hospital, SE 19489 Upplands-Väsby, Sweden
- Adrian Elmi-Terander: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Capio Spine Center Stockholm, Löwenströmska Hospital, SE 19489 Upplands-Väsby, Sweden; Department of Surgical Sciences, Uppsala University, SE 75236 Uppsala, Sweden
- Oscar Persson: Department of Clinical Neuroscience, Karolinska Institutet, SE 17177 Stockholm, Sweden; Department of Neurosurgery, Karolinska University Hospital, SE 17176 Stockholm, Sweden

4. Chou DW, Annadata V, Willson G, Gray M, Rosenberg J. Augmented and Virtual Reality Applications in Facial Plastic Surgery: A Scoping Review. Laryngoscope 2024;134:2568-2577. PMID: 37947302; DOI: 10.1002/lary.31178.
Abstract
OBJECTIVES Augmented reality (AR) and virtual reality (VR) are emerging technologies with wide potential applications in health care. We performed a scoping review of the current literature on the application of AR and VR in the field of facial plastic and reconstructive surgery (FPRS). DATA SOURCES PubMed and Web of Science. REVIEW METHODS According to PRISMA guidelines, PubMed and Web of Science were used to perform a scoping review of the literature regarding the utilization of AR and/or VR relevant to FPRS. RESULTS Fifty-eight articles spanning 1997-2023 met the criteria for review. Five overarching categories of AR and/or VR applications were identified across the articles: preoperative, intraoperative, training/education, feasibility, and technical. The following clinical areas were identified: burn, craniomaxillofacial (CMF) surgery, face transplant, face lift, facial analysis, facial palsy, free flaps, head and neck surgery, injectables, locoregional flaps, mandible reconstruction, mandibuloplasty, microtia, skin cancer, oculoplastic surgery, rhinology, rhinoplasty, and trauma. CONCLUSION AR and VR have broad applications in FPRS. AR for surgical navigation may have the most emerging potential in CMF surgery and free flap harvest. VR is useful as distraction analgesia for patients and as an immersive training tool for surgeons. More data on these technologies' direct impact on objective clinical outcomes are still needed. LEVEL OF EVIDENCE N/A.
Affiliation(s)
- David W Chou: Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
- Vivek Annadata: Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Gloria Willson: Education and Research Services, Levy Library, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Mingyang Gray: Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Joshua Rosenberg: Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA

5. Qi Z, Jin H, Xu X, Wang Q, Gan Z, Xiong R, Zhang S, Liu M, Wang J, Ding X, Chen X, Zhang J, Nimsky C, Bopp MHA. Head model dataset for mixed reality navigation in neurosurgical interventions for intracranial lesions. Sci Data 2024;11:538. PMID: 38796526; PMCID: PMC11127921; DOI: 10.1038/s41597-024-03385-y.
Abstract
Mixed reality navigation (MRN) technology is emerging as an increasingly significant and interesting topic in neurosurgery. MRN enables neurosurgeons to "see through" the head with an interactive, hybrid visualization environment that merges virtual- and physical-world elements. Offering immersive, intuitive, and reliable guidance for preoperative and intraoperative intervention of intracranial lesions, MRN showcases its potential as an economically efficient and user-friendly alternative to standard neuronavigation systems. However, the clinical research and development of MRN systems present challenges: recruiting a sufficient number of patients within a limited timeframe is difficult, and acquiring low-cost, commercially available, medically significant head phantoms is equally challenging. To accelerate the development of novel MRN systems and surmount these obstacles, the study presents a dataset designed for MRN system development and testing in neurosurgery. It includes CT and MRI data from 19 patients with intracranial lesions and derived 3D models of anatomical structures and validation references. The models are available in Wavefront object (OBJ) and Stereolithography (STL) formats, supporting the creation and assessment of neurosurgical MRN applications.
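A dataset like this is typically consumed by loading the OBJ/STL models into whatever engine renders the MRN overlay. The sketch below uses the trimesh Python library to load and sanity-check a mesh; the library choice and the file name are illustrative assumptions, not part of the published dataset.

```python
import trimesh

# Load a hypothetical anatomical model exported in STL format.
mesh = trimesh.load("patient_01_tumor.stl")

# Basic sanity checks before using the model as an AR overlay or
# as a validation reference.
print("watertight:", mesh.is_watertight)
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
print("bounding box extents (mm):", mesh.bounding_box.extents)
print("volume (mm^3):", mesh.volume if mesh.is_watertight else "n/a")

# Convert to OBJ if the target MRN application expects Wavefront files.
mesh.export("patient_01_tumor.obj")
```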
Affiliation(s)
- Ziyu Qi: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043, Marburg, Germany; Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Haitao Jin: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China; Medical School of Chinese PLA General Hospital, 100853, Beijing, China; NCO School, Army Medical University, 050081, Shijiazhuang, China
- Xinghua Xu: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Qun Wang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Zhichao Gan: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China; Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Ruochu Xiong: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China; Department of Neurosurgery, Division of Medicine, Graduate School of Medical Sciences, Kanazawa University, Takara-machi 13-1, 920-8641, Kanazawa, Ishikawa, Japan
- Shiyu Zhang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China; Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Minghang Liu: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China; Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Jingyue Wang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China; Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Xinyu Ding: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China; Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Xiaolei Chen: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Jiashu Zhang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Christopher Nimsky: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043, Marburg, Germany
- Miriam H A Bopp: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043, Marburg, Germany

6. Qi Z, Jin H, Wang Q, Gan Z, Xiong R, Zhang S, Liu M, Wang J, Ding X, Chen X, Zhang J, Nimsky C, Bopp MHA. The Feasibility and Accuracy of Holographic Navigation with Laser Crosshair Simulator Registration on a Mixed-Reality Display. Sensors (Basel) 2024;24:896. PMID: 38339612; PMCID: PMC10857152; DOI: 10.3390/s24030896.
Abstract
Addressing conventional neurosurgical navigation systems' high costs and complexity, this study explores the feasibility and accuracy of a simplified, cost-effective mixed reality navigation (MRN) system based on a laser crosshair simulator (LCS). A new automatic registration method was developed, featuring coplanar laser emitters and a recognizable target pattern. The workflow was integrated into Microsoft's HoloLens-2 for practical application. The study assessed the system's precision by utilizing life-sized 3D-printed head phantoms based on computed tomography (CT) or magnetic resonance imaging (MRI) data from 19 patients (female/male: 7/12, average age: 54.4 ± 18.5 years) with intracranial lesions. Six to seven CT/MRI-visible scalp markers were used as reference points per case. The LCS-MRN's accuracy was evaluated through landmark-based and lesion-based analyses, using metrics such as target registration error (TRE) and Dice similarity coefficient (DSC). The system demonstrated immersive capabilities for observing intracranial structures across all cases. Analysis of 124 landmarks showed a TRE of 3.0 ± 0.5 mm, consistent across various surgical positions. The DSC of 0.83 ± 0.12 correlated significantly with lesion volume (Spearman rho = 0.813, p < 0.001). Therefore, the LCS-MRN system is a viable tool for neurosurgical planning, highlighting its low user dependency, cost-efficiency, and accuracy, with prospects for future clinical application enhancements.
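The two accuracy metrics reported here, TRE and DSC, are simple to compute once the holographic and reference representations are expressed as point pairs or binary volumes. Below is a minimal sketch of the DSC calculation and the lesion-volume correlation; the arrays are randomly generated placeholders standing in for the real segmentations and measurements.

```python
import numpy as np
from scipy import stats

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary volumes."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(0)

# Placeholder per-case DSC values and lesion volumes (cm^3); in the study
# these come from hologram-vs-reference lesion overlap in 19 cases.
dsc_values = rng.uniform(0.6, 0.95, size=19)
lesion_volumes = rng.uniform(1.0, 60.0, size=19)

rho, p = stats.spearmanr(dsc_values, lesion_volumes)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Example DSC on two toy binary volumes.
a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a); b[10:22, 10:22, 10:22] = True
print(f"DSC = {dice(a, b):.2f}")
```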
Affiliation(s)
- Ziyu Qi: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Haitao Jin: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China; NCO School, Army Medical University, Shijiazhuang 050081, China
- Qun Wang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Zhichao Gan: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Ruochu Xiong: Department of Neurosurgery, Division of Medicine, Graduate School of Medical Sciences, Kanazawa University, Takara-machi 13-1, Kanazawa 920-8641, Japan
- Shiyu Zhang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Minghang Liu: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Jingyue Wang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Xinyu Ding: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Xiaolei Chen: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Jiashu Zhang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Christopher Nimsky: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Miriam H. A. Bopp: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany

7. van Doormaal JAM, van Doormaal TPC. Augmented Reality in Neurosurgery. Adv Exp Med Biol 2024;1462:351-374. PMID: 39523276; DOI: 10.1007/978-3-031-64892-2_21.
Abstract
Augmented Reality (AR) involves superimposing digital content onto the real environment. AR has evolved into a viable tool in neurosurgery, enhancing intraoperative navigation, medical education, and surgical training by integrating anatomical data with the real world. Neurosurgical AR relies on several key techniques: image segmentation, model rendering, AR projection, and image-to-patient registration. For each of these technical components, different solutions exist, each with its own advantages and limitations. Intraoperative AR applications cover diverse neurosurgical disciplines, including vascular, oncological, spinal, and functional surgery. Preliminary studies indicate that AR may improve the understanding of complex anatomical structures and offer sufficient accuracy for use as a navigational tool. Additionally, AR shows promise in enhancing surgical training and patient education through interactive 3D models, aiding in the comprehension of intricate anatomical details. Despite its potential, the widespread adoption of AR in clinical settings depends on overcoming technical limitations and validating its clinical efficacy.
Affiliation(s)
- Jesse A M van Doormaal: Department of Neurosurgery, University Medical Centre Utrecht, Utrecht, The Netherlands

8. Dho YS, Lee BC, Moon HC, Kim KM, Kang H, Lee EJ, Kim MS, Kim JW, Kim YH, Park SJ, Park CK. Validation of real-time inside-out tracking and depth realization technologies for augmented reality-based neuronavigation. Int J Comput Assist Radiol Surg 2024;19:15-25. PMID: 37442869; DOI: 10.1007/s11548-023-02993-0.
Abstract
PURPOSE Concomitant with the significant advances in computing technology, the utilization of augmented reality-based navigation in clinical applications is being actively researched. In this light, we developed novel object tracking and depth realization technologies to apply augmented reality-based neuronavigation to brain surgery. METHODS We developed real-time inside-out tracking based on visual inertial odometry and a visual inertial simultaneous localization and mapping algorithm. The cube quick response marker and depth data obtained from light detection and ranging sensors are used for continuous tracking. For depth realization, order-independent transparency, clipping, and annotation and measurement functions were developed. In this study, the augmented reality model of a brain tumor patient was applied to its life-size three-dimensional (3D) printed model. RESULTS Using real-time inside-out tracking, we confirmed that the augmented reality model remained consistent with the 3D printed patient model without flutter, regardless of the movement of the visualization device. The coordination accuracy during real-time inside-out tracking was also validated. The average movement error of the X and Y axes was 0.34 ± 0.21 and 0.04 ± 0.08 mm, respectively. Further, the application of order-independent transparency with multilayer alpha blending and filtered alpha compositing improved the perception of overlapping internal brain structures. Clipping, and annotation and measurement functions were also developed to aid depth perception and worked perfectly during real-time coordination. We named this system METAMEDIP navigation. CONCLUSIONS The results validate the efficacy of the real-time inside-out tracking and depth realization technology. With these novel technologies developed for continuous tracking and depth perception in augmented reality environments, we are able to overcome the critical obstacles in the development of clinically applicable augmented reality neuronavigation.
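The depth-realization step rests on compositing several semi-transparent anatomical layers so that overlapping structures stay distinguishable. The sketch below shows generic back-to-front "over" alpha compositing onto an opaque background, the arithmetic such blending builds on; it is not the METAMEDIP implementation.

```python
import numpy as np

def composite_over(layers):
    """Back-to-front 'over' compositing of RGBA layers (values in [0, 1]).

    layers: list of (H, W, 4) arrays ordered from farthest to nearest.
    Returns an (H, W, 3) RGB image composited onto an opaque black background.
    """
    out_rgb = np.zeros(layers[0].shape[:2] + (3,))
    for layer in layers:  # farthest first, nearest last
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out_rgb = rgb * alpha + out_rgb * (1.0 - alpha)
    return out_rgb

h, w = 64, 64
# Hypothetical layers: semi-transparent cortex surface and a tumor model.
cortex = np.zeros((h, w, 4)); cortex[..., 0] = 0.8; cortex[..., 3] = 0.35
tumor = np.zeros((h, w, 4)); tumor[16:48, 16:48, 1] = 0.9; tumor[16:48, 16:48, 3] = 0.6

image = composite_over([cortex, tumor])
print(image.shape, image.max())
```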
Affiliation(s)
- Yun-Sik Dho: Neuro-Oncology Clinic, National Cancer Center, Goyang, Republic of Korea
- Byeong Cheol Lee: Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd., Seoul, Republic of Korea
- Hyeong Cheol Moon: Department of Neurosurgery, Chungbuk National University Hospital, Cheongju, Republic of Korea
- Kyung Min Kim: Department of Neurosurgery, Inha University Hospital, Inha University College of Medicine, Incheon, Korea
- Ho Kang: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea
- Eun Jung Lee: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea
- Min-Sung Kim: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea
- Jin Wook Kim: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea
- Yong Hwy Kim: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea
- Sang Joon Park: Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd., Seoul, Republic of Korea; Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea
- Chul-Kee Park: Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea

9. Qi Z, Bopp MHA, Nimsky C, Chen X, Xu X, Wang Q, Gan Z, Zhang S, Wang J, Jin H, Zhang J. A Novel Registration Method for a Mixed Reality Navigation System Based on a Laser Crosshair Simulator: A Technical Note. Bioengineering (Basel) 2023;10:1290. PMID: 38002414; PMCID: PMC10669875; DOI: 10.3390/bioengineering10111290.
Abstract
Mixed Reality Navigation (MRN) is pivotal in augmented reality-assisted intelligent neurosurgical interventions. However, existing MRN registration methods face challenges in concurrently achieving low user dependency, high accuracy, and clinical applicability. This study proposes and evaluates a novel registration method based on a laser crosshair simulator (LCS) designed to replicate the scanner frame's position on the patient. The system autonomously calculates the transformation mapping coordinates from the tracking space to the reference image space. A mathematical model and workflow for registration were designed, and a Universal Windows Platform (UWP) application was developed on HoloLens-2. Finally, a head phantom was used to measure the system's target registration error (TRE). The proposed method was successfully implemented, obviating the need for user interactions with virtual objects during the registration process. Regarding accuracy, the average deviation was 3.7 ± 1.7 mm. This method shows encouraging results in efficiency and intuitiveness and marks a valuable advancement in low-cost, easy-to-use MRN systems. The potential for enhancing accuracy and adaptability in interventional procedures positions this approach as promising for improving surgical outcomes.
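At its core, a registration of this kind estimates a rigid transform that maps coordinates from the headset's tracking space into the reference image space; once corresponding point pairs are available, the closed-form Kabsch/SVD solution applies. The sketch below illustrates that generic point-based step with placeholder marker coordinates; it is not the LCS-specific workflow described in the paper.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Placeholder scalp-marker coordinates in tracking space (mm).
tracking = np.array([[0, 0, 0], [80, 0, 0], [0, 90, 0],
                     [40, 40, 60], [70, 80, 20], [10, 60, 50]], float)

# Simulated ground-truth pose (rotation about z plus translation) and noisy image-space markers.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 12.0])
image = tracking @ R_true.T + t_true + np.random.normal(0, 0.5, tracking.shape)

R, t = rigid_transform(tracking, image)
residual = np.linalg.norm(tracking @ R.T + t - image, axis=1)
print(f"Residual error at markers: {residual.mean():.2f} ± {residual.std():.2f} mm")
```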
Affiliation(s)
- Ziyu Qi: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Miriam H. A. Bopp: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Christopher Nimsky: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Xiaolei Chen: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Xinghua Xu: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Qun Wang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Zhichao Gan: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Shiyu Zhang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Jingyue Wang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China
- Haitao Jin: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Beijing 100853, China; NCO School, Army Medical University, Shijiazhuang 050081, China
- Jiashu Zhang: Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China

10. Ragnhildstveit A, Li C, Zimmerman MH, Mamalakis M, Curry VN, Holle W, Baig N, Uğuralp AK, Alkhani L, Oğuz-Uğuralp Z, Romero-Garcia R, Suckling J. Intra-operative applications of augmented reality in glioma surgery: a systematic review. Front Surg 2023;10:1245851. PMID: 37671031; PMCID: PMC10476869; DOI: 10.3389/fsurg.2023.1245851.
Abstract
Background Augmented reality (AR) is increasingly being explored in neurosurgical practice. By visualizing patient-specific, three-dimensional (3D) models in real time, surgeons can improve their spatial understanding of complex anatomy and pathology, thereby optimizing intra-operative navigation, localization, and resection. Here, we aimed to capture applications of AR in glioma surgery, their current status and future potential. Methods A systematic review of the literature was conducted. This adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. PubMed, Embase, and Scopus electronic databases were queried from inception to October 10, 2022. Leveraging the Population, Intervention, Comparison, Outcomes, and Study design (PICOS) framework, study eligibility was evaluated in the qualitative synthesis. Data regarding AR workflow, surgical application, and associated outcomes were then extracted. The quality of evidence was additionally examined, using hierarchical classes of evidence in neurosurgery. Results The search returned 77 articles. Forty were subject to title and abstract screening, while 25 proceeded to full text screening. Of these, 22 articles met eligibility criteria and were included in the final review. During abstraction, studies were classified as "development" or "intervention" based on primary aims. Overall, AR was qualitatively advantageous, due to enhanced visualization of gliomas and critical structures, frequently aiding in maximal safe resection. Non-rigid applications were also useful in disclosing and compensating for intra-operative brain shift. Irrespective, there was high variance in registration methods and measurements, which considerably impacted projection accuracy. Most studies were of low-level evidence, yielding heterogeneous results. Conclusions AR has increasing potential for glioma surgery, with capacity to positively influence the onco-functional balance. However, technical and design limitations are readily apparent. The field must consider the importance of consistency and replicability, as well as the level of evidence, to effectively converge on standard approaches that maximize patient benefit.
Affiliation(s)
- Anya Ragnhildstveit: Integrated Research Literacy Group, Draper, UT, United States; Department of Psychiatry, University of Cambridge, Cambridge, England
- Chao Li: Department of Clinical Neurosciences, University of Cambridge, Cambridge, England; Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England
- Michail Mamalakis: Department of Psychiatry, University of Cambridge, Cambridge, England
- Victoria N. Curry: Integrated Research Literacy Group, Draper, UT, United States; Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
- Willis Holle: Integrated Research Literacy Group, Draper, UT, United States; Department of Physics and Astronomy, The University of Utah, Salt Lake City, UT, United States
- Noor Baig: Integrated Research Literacy Group, Draper, UT, United States; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, United States
- Layth Alkhani: Integrated Research Literacy Group, Draper, UT, United States; Department of Biology, Stanford University, Stanford, CA, United States
- Rafael Romero-Garcia: Department of Psychiatry, University of Cambridge, Cambridge, England; Instituto de Biomedicina de Sevilla (IBiS) HUVR/CSIC/Universidad de Sevilla/CIBERSAM, ISCIII, Dpto. de Fisiología Médica y Biofísica
- John Suckling: Department of Psychiatry, University of Cambridge, Cambridge, England

11. Jain S, Tajsic T, Das T, Gao Y, Yuan NK, Yeo TT, Graves MJ, Helmy A. Assessment of Accuracy of Mixed Reality Device for Neuronavigation: Proposed Methodology and Results. Neurosurgery Practice 2023;4:e00031. PMID: 39958371; PMCID: PMC11809955; DOI: 10.1227/neuprac.0000000000000036.
Abstract
Intraoperative neuronavigation is currently an essential component of neurosurgical operations in several contexts. Recent progress in mixed reality (MR) technology has attempted to overcome the disadvantages of standard neuronavigation systems by allowing the surgeon to superimpose a 3D rendered image onto the patient's anatomy. We present the first study in the literature to assess the surface matching accuracy of an MR rendered image, with the aim of evaluating rendered holographic images from a mixed reality device as a means of intraoperative neuronavigation. For the purposes of this study, we used HoloLens 2 with Virtual Surgery Intelligence providing the software capability for image rendering. We used the Realistic Operative Workstation for Educating Neurosurgical Apprentices to represent a patient's skull with intracranial components, which underwent standardized computed tomography (CT) and MRI imaging. Eleven predefined points were used to assess the accuracy of the rendered image, compared with the intraoperative gold standard of neuronavigation. The mean HoloLens error values against the ground truth were significantly higher than those of Stealth when CT was used as the imaging modality. Using extracranial anatomic landmarks, the HoloLens error values remained significantly higher in magnitude than those of Stealth across CT and MRI. This study provides a relatively easy and feasible method to assess the accuracy of MR-based navigation without requiring any additions to established imaging protocols. We did not demonstrate equivalence of MR-based navigation with the current neuronavigation systems.
Affiliation(s)
- Swati Jain: Division of Neurosurgery, University Surgical Cluster, National University Health System, Singapore
- Tamara Tajsic: Division of Neurosurgery, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Tilak Das: Department of Radiology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Yujia Gao: Division of Hepatobiliary & Pancreatic Surgery, University Surgical Cluster, National University Health System (NUHS), Singapore
- Ngiam Kee Yuan: Division of General Surgery (Thyroid & Endocrine Surgery), University Surgical Cluster, National University Health System (NUHS), Singapore
- Tseng Tsai Yeo: Division of Neurosurgery, University Surgical Cluster, National University Health System, Singapore
- Martin J. Graves: Department of Radiology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Adel Helmy: Division of Neurosurgery, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK

12. Qi Z, Zhang S, Xu X, Chen X. Letter to the Editor. Augmented reality-assisted navigation for deep target acquisition: is it reliable? J Neurosurg 2023;138:1169-1170. PMID: 36681956; DOI: 10.3171/2022.10.jns222239.
Affiliation(s)
- Ziyu Qi: Chinese PLA General Hospital, Beijing, China

13. Use of Mixed Reality in Neuro-Oncology: A Single Centre Experience. Life (Basel) 2023;13:398. PMID: 36836755; PMCID: PMC9965132; DOI: 10.3390/life13020398.
Abstract
(1) Background: Intra-operative neuronavigation is currently an essential component of most neurosurgical operations. Recent progress in mixed reality (MR) technology has attempted to overcome the disadvantages of neuronavigation systems. We present our experience using the HoloLens 2 in neuro-oncology for both intra- and extra-axial tumours. (2) Results: We describe our experience with three patients who underwent tumour resection. We evaluated surgeon experience and the accuracy of the superimposed 3D image in tumour localisation against standard neuronavigation, both pre- and intra-operatively. Surgeon training on and usage of the HoloLens 2 were brief and simple. The process of image overlay was relatively straightforward for the three cases. Registration in the prone position with a conventional neuronavigation system is often difficult, a limitation easily overcome with the HoloLens 2. (3) Conclusion: Although certain limitations were identified, the authors feel that this system is a feasible alternative device for intra-operative visualization of neurosurgical pathology. Further studies are being planned to assess its accuracy and suitability across various surgical disciplines.
14. Satoh M, Nakajima T, Watanabe E, Kawai K. Augmented Reality in Stereotactic Neurosurgery: Current Status and Issues. Neurol Med Chir (Tokyo) 2023;63:137-140. PMID: 36682793; PMCID: PMC10166603; DOI: 10.2176/jns-nmc.2022-0278.
Abstract
Stereotactic neurosurgery is an established technique, but it has several limitations. In frame-based stereotaxy using a stereotactic frame, frame setting errors may decrease the accuracy of the procedure. Frameless stereotaxy using neuronavigation requires surgeons to shift their view from the surgical field to the navigation display and to advance the needle while assuming a physically uncomfortable position. To overcome these limitations, several researchers have applied augmented reality in stereotactic neurosurgery. Augmented reality enables surgeons to visualize the information regarding the target and preplanned trajectory superimposed over the actual surgical field. In frame-based stereotaxy, a researcher applies tablet computer-based augmented reality to check for the setting errors of the stereotactic frame, thereby improving the safety of the procedure. Several researchers have reported performing frameless stereotaxy guided by head-mounted-display-based augmented reality that enables surgeons to advance the needle at a more natural posture. These studies have shown that augmented reality can address the limitations of stereotactic neurosurgery. Conversely, they have also revealed the limited accuracy of current augmented reality systems for small targets, which indicates that further development of augmented reality systems is needed.
Affiliation(s)
- Makoto Satoh: Department of Neurosurgery, Jichi Medical University
- Eiju Watanabe: Department of Neurosurgery, Jichi Medical University
- Kensuke Kawai: Department of Neurosurgery, Jichi Medical University

15. Zary N, Eysenbach G, Van Doormaal TPC, Ruurda JP, Van der Kaaij NP, De Heer LM. Mixed Reality in Modern Surgical and Interventional Practice: Narrative Review of the Literature. JMIR Serious Games 2023;11:e41297. PMID: 36607711; PMCID: PMC9947976; DOI: 10.2196/41297.
Abstract
BACKGROUND Mixed reality (MR) and its potential applications have gained increasing interest within the medical community over the recent years. The ability to integrate virtual objects into a real-world environment within a single video-see-through display is a topic that sparks imagination. Given these characteristics, MR could facilitate preoperative and preinterventional planning, provide intraoperative and intrainterventional guidance, and aid in education and training, thereby improving the skills and merits of surgeons and residents alike. OBJECTIVE In this narrative review, we provide a broad overview of the different applications of MR within the entire spectrum of surgical and interventional practice and elucidate on potential future directions. METHODS A targeted literature search within the PubMed, Embase, and Cochrane databases was performed regarding the application of MR within surgical and interventional practice. Studies were included if they met the criteria for technological readiness level 5, and as such, had to be validated in a relevant environment. RESULTS A total of 57 studies were included and divided into studies regarding preoperative and interventional planning, intraoperative and interventional guidance, as well as training and education. CONCLUSIONS The overall experience with MR is positive. The main benefits of MR seem to be related to improved efficiency. Limitations primarily seem to be related to constraints associated with head-mounted display. Future directions should be aimed at improving head-mounted display technology as well as incorporation of MR within surgical microscopes, robots, and design of trials to prove superiority.
Affiliation(s)
- Tristan P C Van Doormaal: University Medical Center Utrecht, Utrecht, Netherlands; University Hospital Zurich, Zurich, Switzerland

16. Bopp MHA, Corr F, Saß B, Pojskic M, Kemmling A, Nimsky C. Augmented Reality to Compensate for Navigation Inaccuracies. Sensors (Basel) 2022;22:9591. PMID: 36559961; PMCID: PMC9787763; DOI: 10.3390/s22249591.
Abstract
This study aims to report on the capability of microscope-based augmented reality (AR) to evaluate registration and navigation accuracy with extracranial and intracranial landmarks and to elaborate on its opportunities and obstacles in compensating for navigation inaccuracies. In a consecutive single-surgeon series of 293 patients, automatic intraoperative computed tomography-based registration was performed, delivering a high initial registration accuracy with a mean target registration error of 0.84 ± 0.36 mm. Navigation accuracy was evaluated by overlaying a maximum intensity projection or pre-segmented object outlines within the recent focal plane onto the in situ patient anatomy and was compensated for by translational and/or rotational in-plane transformations. Using bony landmarks (85 cases), a mismatch was seen in two cases. Cortical vascular structures (242 cases) showed a mismatch in 43 cases, and cortex representations (40 cases) revealed two inaccurate cases. In all cases with detected misalignment, a successful spatial compensation was performed (mean correction: bone, 6.27 ± 7.31 mm; vascular, 3.00 ± 1.93 mm and 0.38° ± 1.06°; cortex, 5.31 ± 1.57 mm and 1.75° ± 2.47°), increasing navigation accuracy. AR support allows for intermediate and straightforward monitoring of accuracy, enables compensation of spatial misalignments, and thereby provides additional safety by increasing overall accuracy.
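The compensation described above amounts to applying a small in-plane rotation and translation to the projected outlines until they coincide with the visible anatomy. A minimal sketch of such a 2D rigid correction applied to outline points in the focal plane follows; the function name and the numeric values are hypothetical, not the navigation system's API.

```python
import numpy as np

def apply_inplane_correction(points_2d, angle_deg, shift_mm):
    """Rotate and translate 2D outline points within the microscope focal plane."""
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.asarray(points_2d, float) @ R.T + np.asarray(shift_mm, float)

# Hypothetical outline of a pre-segmented vessel projected into the focal plane (mm).
outline = np.array([[0.0, 0.0], [5.0, 1.0], [10.0, 3.0], [15.0, 6.0]])

# Correction estimated by visually aligning the overlay with the in situ anatomy.
corrected = apply_inplane_correction(outline, angle_deg=1.75, shift_mm=(3.0, -1.2))
print(corrected.round(2))
```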
Affiliation(s)
- Miriam H. A. Bopp: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Felix Corr: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; EDU Institute of Higher Education, Villa Bighi, Chaplain's House, KKR 1320 Kalkara, Malta
- Benjamin Saß: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Mirza Pojskic: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- André Kemmling: Department of Neuroradiology, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Christopher Nimsky: Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany

17. Han R, Jones CK, Lee J, Zhang X, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Joint synthesis and registration network for deformable MR-CBCT image registration for neurosurgical guidance. Phys Med Biol 2022;67. PMID: 35609586; PMCID: PMC9801422; DOI: 10.1088/1361-6560/ac72ef.
Abstract
Objective. The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. Approach. The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images to the CT domain and perform CT-domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods (symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods). Main results. JSR achieved a median Dice coefficient (DSC) of 0.69 in deep brain structures and a median target registration error (TRE) of 1.94 mm in the simulation dataset, with improvement over the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods, e.g., SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm), and provided a registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved median DSC = 0.72 and median TRE = 2.05 mm. Significance. The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.
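Evaluating a deformable registration of this kind typically means sampling the predicted displacement field at anatomical landmarks, warping them, and measuring how far they land from their targets. The sketch below illustrates that evaluation step with a synthetic field and landmarks; it is not the JSR network itself, and the voxel size is an assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(1)

# Synthetic dense displacement field (3, D, H, W): one offset (in voxels) per axis.
shape = (32, 32, 32)
field = rng.normal(0.0, 0.5, size=(3,) + shape)

# Landmarks in the moving image and their true positions in the fixed image (voxels).
moving_pts = rng.uniform(4, 27, size=(10, 3))
fixed_pts = moving_pts + rng.normal(0.0, 1.0, size=(10, 3))

# Sample the field at the moving landmarks (trilinear interpolation) and warp them.
coords = moving_pts.T  # shape (3, N) as expected by map_coordinates
sampled = np.stack([map_coordinates(field[a], coords, order=1) for a in range(3)], axis=1)
warped_pts = moving_pts + sampled

voxel_mm = 1.0  # isotropic voxel size in mm (an assumption for illustration)
tre = np.linalg.norm(warped_pts - fixed_pts, axis=1) * voxel_mm
print(f"median TRE = {np.median(tre):.2f} mm")
```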
Affiliation(s)
- R Han: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C K Jones: The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- J Lee: Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, United States of America
- X Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Wu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi: Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P A Helm: Medtronic Inc., Littleton, MA, United States of America
- M Luciano: Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- W S Anderson: Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America

18. Postcentral Gyrus High-Grade Glioma: Maximal Safe Anatomical Resection Guided by Augmented Reality with Fiber Tractography and Fluorescein. World Neurosurg 2021;159:108. DOI: 10.1016/j.wneu.2021.12.072.

19. Montemurro N, Condino S, Cattari N, D'Amato R, Ferrari V, Cutolo F. Augmented Reality-Assisted Craniotomy for Parasagittal and Convexity En Plaque Meningiomas and Custom-Made Cranio-Plasty: A Preliminary Laboratory Report. Int J Environ Res Public Health 2021;18:9955. PMID: 34639256; PMCID: PMC8507881; DOI: 10.3390/ijerph18199955.
Abstract
BACKGROUND This report discusses the utility of a wearable augmented reality platform in neurosurgery for parasagittal and convexity en plaque meningiomas with bone flap removal and custom-made cranioplasty. METHODS A real patient with an en plaque cranial vault meningioma with diffuse and extensive dural involvement, extracranial extension into the calvarium, and homogeneous contrast enhancement on gadolinium-enhanced T1-weighted MRI was selected for this case study. A patient-specific manikin was designed, starting from segmentation of the patient's preoperative MRI, to simulate the craniotomy procedure. Surgical planning was performed according to the segmented anatomy, and customized bone flaps were designed accordingly. During the surgical simulation stage, the VOSTARS head-mounted display was used to accurately display the planned craniotomy trajectory over the manikin skull. The precision of the craniotomy was assessed based on the fit of previously prepared custom-made bone flaps. RESULTS A bone flap with a radius 0.5 mm smaller than the radius of the ideal craniotomy fitted perfectly over the performed craniotomy, demonstrating an error of less than ±1 mm in task execution. The results of this laboratory-based experiment suggest that the proposed augmented reality platform helps in simulating convexity en plaque meningioma resection and custom-made cranioplasty, as carefully planned in the preoperative phase. CONCLUSIONS Augmented reality head-mounted displays have the potential to be a useful adjunct in tumor resection, cranial vault lesion craniotomy, and skull base surgery, but further studies with larger series are needed.
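As a rough illustration of the acceptance test described above (not the authors' actual workflow), the craniotomy execution error can be bounded from the radial clearance of the largest undersized trial flap that still seats without a gap. The function and numbers below are hypothetical.

```python
def craniotomy_error_bound(ideal_radius_mm: float,
                           largest_fitting_flap_radius_mm: float) -> float:
    """Upper bound (mm) on the deviation of the executed craniotomy margin
    from the plan, assuming a trial flap seats only if the executed opening
    is at least as large as the flap everywhere along its rim. A flap 0.5 mm
    under the ideal radius that fits snugly implies the cut deviates from the
    plan by less than roughly twice the radial clearance, i.e. within 1 mm."""
    clearance = ideal_radius_mm - largest_fitting_flap_radius_mm
    return 2.0 * clearance

# Hypothetical example mirroring the report's 0.5 mm undersized flap:
print(craniotomy_error_bound(ideal_radius_mm=25.0,
                             largest_fitting_flap_radius_mm=24.5))  # -> 1.0 mm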
Collapse
Affiliation(s)
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), University of Pisa, 56100 Pisa, Italy
| | - Sara Condino
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy
| | - Nadia Cattari
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy
- Department of Translational Research, University of Pisa, 56100 Pisa, Italy
| | - Renzo D’Amato
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy
| | - Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy
| | - Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy
| |
Collapse
|
20
|
Strickland BA, Ruzevick J, Zada G. Commentary: Early Experience With Virtual and Synchronized Augmented Reality Platform for Preoperative Planning and Intraoperative Navigation: A Case Series. Oper Neurosurg (Hagerstown) 2021; 21:E298-E299. [PMID: 34171913 DOI: 10.1093/ons/opab204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Accepted: 04/22/2021] [Indexed: 11/13/2022] Open
Affiliation(s)
- Ben A Strickland
- Department of Neurosurgery, University of Southern California, Los Angeles, Los Angeles, California, USA
| | - Jacob Ruzevick
- Department of Neurosurgery, University of Washington, Seattle, Washington, USA
| | - Gabriel Zada
- Department of Neurosurgery, University of Southern California, Los Angeles, Los Angeles, California, USA
| |
Collapse
|
21
|
Strickland BA, Zada G, Lee DJ. Commentary: Application of Augmented Reality in Percutaneous Procedures-Rhizotomy of the Gasserian Ganglion. Oper Neurosurg (Hagerstown) 2021; 21:E224-E225. [PMID: 34097742 DOI: 10.1093/ons/opab179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 04/04/2021] [Indexed: 11/12/2022] Open
Affiliation(s)
- Ben A Strickland
- Department of Neurosurgery, University of Southern California, Los Angeles, California, USA
| | - Gabriel Zada
- Department of Neurosurgery, University of Southern California, Los Angeles, California, USA
| | - Darrin J Lee
- Department of Neurosurgery, University of Southern California, Los Angeles, California, USA
| |
Collapse
|
22
|
Luzzi S, Giotta Lucifero A, Martinelli A, Maestro MD, Savioli G, Simoncelli A, Lafe E, Preda L, Galzio R. Supratentorial high-grade gliomas: maximal safe anatomical resection guided by augmented reality high-definition fiber tractography and fluorescein. Neurosurg Focus 2021; 51:E5. [PMID: 34333470 DOI: 10.3171/2021.5.focus21185] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Accepted: 05/13/2021] [Indexed: 11/06/2022]
Abstract
OBJECTIVE The theoretical advantages of augmented reality (AR) with diffusion tensor imaging (DTI)-based high-definition fiber tractography (HDFT) and sodium fluorescein (F) in high-grade glioma (HGG) surgery have not been investigated in detail. In this study, the authors aimed to evaluate the safety and efficacy profiles of HDFT-F microscope-based AR cytoreductive surgery for newly diagnosed supratentorial HGGs. METHODS Data from patients with newly diagnosed supratentorial HGGs who underwent surgery using the AR HDFT-F technique were reviewed and compared with those of a cohort of patients who underwent conventional white-light surgery assisted by infrared neuronavigation. Safety was assessed with postoperative Neurological Assessment in Neuro-Oncology (NANO) scores; efficacy was assessed with the extent of resection (EOR) and Kaplan-Meier survival analysis. The chi-square test was used for categorical variables. A p value < 0.05 was considered statistically significant. RESULTS A total of 54 patients underwent surgery using the AR HDFT-F technique, and 63 underwent conventional white-light surgery assisted by infrared neuronavigation. The mean postoperative NANO scores were 3.8 ± 2 and 5.2 ± 4 in the AR HDFT-F group and control group, respectively (p < 0.05). The EOR was higher in the AR HDFT-F group than in the control group (p < 0.05). With a mean follow-up of 12.2 months, progression-free survival (PFS) was longer in the study group than in the control group (log-rank test, p = 0.006). The complication rates were 9.2% and 9.5% in the study and control groups, respectively. CONCLUSIONS Overall, AR HDFT-F-assisted surgery is safe and effective in maximizing the EOR and prolonging PFS for patients with newly diagnosed supratentorial HGGs, and in optimizing patients' functional outcomes.
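The group comparison reported above rests on standard tests: chi-square for categorical variables and Kaplan-Meier estimation with a log-rank test for PFS. A minimal sketch of such an analysis, using entirely synthetic data and assuming the scipy and lifelines libraries are available, might look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Synthetic complication counts: [with complication, without] per group.
table = np.array([[5, 49],    # AR HDFT-F group (n = 54), illustrative only
                  [6, 57]])   # conventional group (n = 63), illustrative only
chi2, p_cat, _, _ = chi2_contingency(table)
print(f"chi-square p = {p_cat:.3f}")

# Synthetic PFS times (months) and event indicators (1 = progression).
pfs_ar, pfs_ctrl = rng.exponential(14, 54), rng.exponential(9, 63)
events_ar, events_ctrl = rng.integers(0, 2, 54), rng.integers(0, 2, 63)

# Kaplan-Meier estimate for one group and a log-rank comparison of the two.
km = KaplanMeierFitter()
km.fit(pfs_ar, event_observed=events_ar, label="AR HDFT-F")
print("median PFS (months):", km.median_survival_time_)

result = logrank_test(pfs_ar, pfs_ctrl,
                      event_observed_A=events_ar,
                      event_observed_B=events_ctrl)
print(f"log-rank p = {result.p_value:.3f}")
```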
Collapse
Affiliation(s)
- Sabino Luzzi
- Neurosurgery Unit, Department of Clinical-Surgical, Diagnostic and Pediatric Sciences, University of Pavia
- Neurosurgery Unit, Department of Surgical Sciences, Fondazione IRCCS Policlinico San Matteo, Pavia
| | - Alice Giotta Lucifero
- Neurosurgery Unit, Department of Clinical-Surgical, Diagnostic and Pediatric Sciences, University of Pavia
| | - Andrea Martinelli
- Department of Science and High Technology, University of Insubria, Como
| | - Mattia Del Maestro
- PhD School in Experimental Medicine, Department of Clinical-Surgical, Diagnostic and Pediatric Sciences, University of Pavia
| | - Gabriele Savioli
- PhD School in Experimental Medicine, Department of Clinical-Surgical, Diagnostic and Pediatric Sciences, University of Pavia
- Emergency Department, IRCCS Policlinico San Matteo, Pavia
| | - Anna Simoncelli
- Department of Diagnostic Radiology and Interventional Radiology and Neuroradiology, University of Pavia, IRCCS Policlinico San Matteo Foundation, Pavia
| | - Elvis Lafe
- Department of Diagnostic Radiology and Interventional Radiology and Neuroradiology, University of Pavia, IRCCS Policlinico San Matteo Foundation, Pavia
| | - Lorenzo Preda
- Department of Diagnostic Radiology and Interventional Radiology and Neuroradiology, University of Pavia, IRCCS Policlinico San Matteo Foundation, Pavia
| | - Renato Galzio
- Neurosurgery Unit, Maria Cecilia Hospital, Cotignola, Italy
| |
Collapse
|
23
|
Evaluation of a Wearable AR Platform for Guiding Complex Craniotomies in Neurosurgery. Ann Biomed Eng 2021; 49:2590-2605. [PMID: 34297263 DOI: 10.1007/s10439-021-02834-8] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 07/12/2021] [Indexed: 10/20/2022]
Abstract
Today, neuronavigation is widely used in the daily clinical routine to perform safe and efficient surgery. Augmented reality (AR) interfaces can provide anatomical models and preoperative planning contextually blended with the real surgical scenario, overcoming the limitations of traditional neuronavigators. This study aims to demonstrate the reliability of a new-concept AR headset in navigating complex craniotomies. Moreover, we aim to prove the efficacy of a patient-specific, template-based methodology for fast, non-invasive, and fully automatic planning-to-patient registration. The navigation performance of the AR platform was assessed with an in-vitro study whose goal was twofold: to measure the real-to-virtual 3D target visualization error (TVE), and to assess navigation accuracy through a user study in which 10 subjects traced a complex craniotomy. The feasibility of the template-based registration was preliminarily tested on a volunteer. The mean TVE was 1.3 mm (standard deviation 0.6 mm). The results of the user study, over 30 traced craniotomies, showed that 97% of the trajectory length was traced within an error margin of 1.5 mm, and 92% within a margin of 1 mm. The in-vivo test confirmed the feasibility and reliability of the patient-specific template for registration. The proposed AR headset allows ergonomic and intuitive use of the preoperative plan and can be a valid option to support neurosurgical tasks.
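The two benchmarks above (the real-to-virtual target visualization error and the fraction of a traced craniotomy lying within a given error margin) can be computed along the following lines. The sketch assumes the planned path and the user's trace are available as dense 3D point sets, which is an illustrative assumption rather than the paper's actual pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def target_visualization_error(virtual_targets: np.ndarray,
                               real_targets: np.ndarray) -> np.ndarray:
    """3D distance (mm) between each target as perceived through the headset
    and its physical counterpart; inputs are (N, 3) arrays."""
    return np.linalg.norm(virtual_targets - real_targets, axis=1)

def fraction_within_margin(traced_pts: np.ndarray,
                           planned_pts: np.ndarray,
                           margin_mm: float) -> float:
    """Fraction of the traced path lying within `margin_mm` of the planned
    path, using nearest-neighbour point-to-curve distances."""
    dists, _ = cKDTree(planned_pts).query(traced_pts)
    return float(np.mean(dists <= margin_mm))

# Illustrative synthetic data (not the study's measurements):
rng = np.random.default_rng(2)
planned = np.column_stack([np.linspace(0, 80, 400),
                           20 * np.sin(np.linspace(0, np.pi, 400)),
                           np.zeros(400)])
traced = planned + rng.normal(0.0, 0.5, planned.shape)
print("within 1.0 mm:", fraction_within_margin(traced, planned, 1.0))
print("within 1.5 mm:", fraction_within_margin(traced, planned, 1.5))
```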
Collapse
|
24
|
Pennacchietti V, Stoelzel K, Tietze A, Lankes E, Schaumann A, Uecker FC, Thomale UW. First experience with augmented reality neuronavigation in endoscopic assisted midline skull base pathologies in children. Childs Nerv Syst 2021; 37:1525-1534. [PMID: 33515059 PMCID: PMC8084784 DOI: 10.1007/s00381-021-05049-3] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 01/14/2021] [Indexed: 12/13/2022]
Abstract
INTRODUCTION Endoscopic skull base approaches are broadly used in modern neurosurgery. The support of neuronavigation can help to target the lesion effectively while avoiding complications. In children, endoscope-assisted skull base surgery combined with navigation systems becomes even more important because of the morphological variability and the rare diseases affecting the sellar and parasellar regions. This paper analyzes our first experience with augmented reality navigation in endoscopic skull base surgery in a pediatric case series. PATIENTS AND METHODS A retrospective review identified seventeen endoscope-assisted endonasal or transoral procedures performed in an interdisciplinary setting between October 2011 and May 2020. In all cases, the surgical target was a lesion in the sellar or parasellar region. Clinical conditions, MRI appearance, intraoperative conditions, postoperative MRI, complications, and outcomes were analyzed. RESULTS The mean age of our patients was 14.5 ± 2.4 years. Diagnoses varied, with craniopharyngiomas (31.2%) being the most common. AR navigation was found to be very helpful for effectively targeting the lesion and defining the intraoperative extent of the pathology. In 65% of the oncologic cases, radical removal was confirmed on postoperative MRI. The mean follow-up was 89 ± 79 months. There were no deaths in our series. No long-term complications were registered; two cerebrospinal fluid (CSF) fistulas and a secondary abscess required further surgery. CONCLUSION Implementing augmented reality in endoscope-assisted, neuronavigated procedures within the skull base was feasible, provided relevant information directly in the endoscopic field of view, and proved useful in pediatric cases, where anatomical variability and the rarity of the pathologies make surgery more challenging.
Collapse
Affiliation(s)
- Valentina Pennacchietti
- Pediatric Neurosurgery, Charité-Universitätsmedizin Berlin, Campus Virchow Klinikum, Augustenburger Platz 1, 13353 Berlin, Germany
| | - Katharina Stoelzel
- Department of Otorhinolaryngology, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | - Anna Tietze
- Institute of Neuroradiology, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | - Erwin Lankes
- Department for Pediatric Endocrinology and Diabetes, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | - Andreas Schaumann
- Pediatric Neurosurgery, Charité-Universitätsmedizin Berlin, Campus Virchow Klinikum, Augustenburger Platz 1, 13353 Berlin, Germany
| | | | - Ulrich Wilhelm Thomale
- Pediatric Neurosurgery, Charité-Universitätsmedizin Berlin, Campus Virchow Klinikum, Augustenburger Platz 1, 13353 Berlin, Germany
| |
Collapse
|