1. Hadida Barzilai D, Tejman-Yarden S, Yogev D, Vazhgovsky O, Nagar N, Sasson L, Sion-Sarid R, Parmet Y, Goldfarb A, Ilan O. Augmented Reality-Guided Mastoidectomy Simulation: A Randomized Controlled Trial Assessing Surgical Proficiency. Laryngoscope 2025;135:894-900. PMID: 39315469; PMCID: PMC11725687; DOI: 10.1002/lary.31791.
Abstract
OBJECTIVE Mastoidectomy surgical training is challenging due to the complex anatomy involved. Traditional training methods based on direct patient care and cadaveric temporal bone dissection have practical shortcomings. 3D-printed temporal bone models and augmented reality (AR) have emerged as promising alternatives for mastoidectomy surgery, which demands an understanding of intricate anatomical structures, but evidence is needed to explore the potential of AR technology in addressing these training challenges. METHODS Twenty-one medical students in their clinical clerkship were recruited for this prospective, randomized controlled trial assessing mastoidectomy skills. Participants were randomly assigned to the AR group, which received real-time guidance during drilling on 3D-printed temporal bone models, or to the control group, which received traditional training. Skills were assessed on a modified Welling scale and evaluated independently by two senior otologists. RESULTS The AR group outperformed the control group, with a mean overall drilling score of 19.5 out of 25, compared with the control group's score of 12 (p < 0.01). The AR group was significantly better at defining mastoidectomy margins (p < 0.01), exposing the antrum, preserving the lateral semicircular canal (p < 0.05), sharpening the sinodural angle (p < 0.01), exposing the tegmen and attic, preserving the ossicles (p < 0.01), and thinning and preserving the external auditory canal (p < 0.05). CONCLUSION AR simulation in mastoidectomy, even in a single session, improved the proficiency of novice surgeons compared with traditional methods. LEVEL OF EVIDENCE NA. Laryngoscope, 135:894-900, 2025.
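The abstract reports a between-group comparison of modified Welling-scale scores. As a minimal illustration only (the abstract does not name the statistical test used), a rank-based test such as Mann-Whitney U is one standard way to compare two small groups' ordinal scores; all score values below are hypothetical placeholders, not study data.

```python
# Hypothetical sketch: comparing two arms' modified Welling-scale drilling
# scores with a rank-based test. Values are invented, not the trial's data.
from scipy.stats import mannwhitneyu

ar_scores = [21, 19, 20, 18, 22, 17, 19, 20, 21, 18]        # hypothetical AR arm
control_scores = [12, 11, 14, 10, 13, 12, 11, 13, 12, 14]   # hypothetical control arm

stat, p = mannwhitneyu(ar_scores, control_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```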
Affiliation(s)
- Shai Tejman-Yarden
- The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- The Edmond J. Safra International Congenital Heart Center, Sheba Medical Center, Ramat Gan, Israel
- David Yogev
- The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- Department of Otolaryngology and Head and Neck Surgery, Sheba Medical Center, Tel Hashomer, Israel
- Oliana Vazhgovsky
- The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- The Edmond J. Safra International Congenital Heart Center, Sheba Medical Center, Ramat Gan, Israel
- Netanel Nagar
- The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- Lior Sasson
- Cardiothoracic Surgery, Wolfson Medical Center, Tel Aviv University, Holon, Israel
- Yisrael Parmet
- Department of Industrial Engineering and Management, Ben Gurion University, Beer Sheva, Israel
- Abraham Goldfarb
- Department of Otorhinolaryngology and Head and Neck Surgery, Edith Wolfson Medical Center, Holon, Israel
- Ophir Ilan
- Department of Otorhinolaryngology and Head and Neck Surgery, Edith Wolfson Medical Center, Holon, Israel
2. Carbone M, Montemurro N, Cattari N, Autelitano M, Cutolo F, Ferrari V, Cigna E, Condino S. Targeting accuracy of neuronavigation: a comparative evaluation of an innovative wearable AR platform vs. traditional EM navigation. Front Digit Health 2025;6:1500677. PMID: 39877694; PMCID: PMC11772343; DOI: 10.3389/fdgth.2024.1500677.
Abstract
Wearable augmented reality in neurosurgery offers significant advantages by enabling the visualization of navigation information directly on the patient, seamlessly integrating virtual data with the real surgical field. This ergonomic approach can facilitate a more intuitive understanding of spatial relationships and guidance cues, potentially reducing cognitive load and enhancing the accuracy of surgical gestures by aligning critical information with the actual anatomy in real-time. This study evaluates the benefits of a novel AR platform, VOSTARS, by comparing its targeting accuracy to that of the gold-standard electromagnetic (EM) navigation system, Medtronic StealthStation® S7®. Both systems were evaluated in phantom and human studies. In the phantom study, participants targeted 13 predefined landmarks using identical pointers to isolate system performance. In the human study, three facial landmarks were targeted in nine volunteers post-brain tumor surgery. The performance of the VOSTARS system was superior to that of the standard neuronavigator in both the phantom and human studies. In the phantom study, users achieved a median accuracy of 1.4 mm (IQR: 1.2 mm) with VOSTARS compared to 2.9 mm (IQR: 1.4 mm) with the standard neuronavigator. In the human study, the median targeting accuracy with VOSTARS was significantly better for selected landmarks in the outer eyebrow (3.7 mm vs. 6.6 mm, p = 0.05) and forehead (4.5 mm vs. 6.3 mm, p = 0.021). Although the difference for the pronasal point was not statistically significant (2.7 mm vs. 3.5 mm, p = 0.123), the trend towards improved accuracy with VOSTARS is clear. These findings suggest that the proposed AR technology has the potential to significantly improve surgical outcomes in neurosurgery.
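The accuracy figures above are medians with interquartile ranges. The sketch below shows how such summaries and a nonparametric two-sample comparison can be computed; the error values are hypothetical, and the paper's exact statistical procedure is not specified in the abstract.

```python
# Hedged sketch: median (IQR) summaries of targeting error plus a rank-based
# two-sample comparison. Error values (mm) are hypothetical placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

vostars_err = np.array([1.1, 1.4, 1.6, 1.2, 2.0, 1.3, 2.4, 1.5])  # hypothetical
em_nav_err = np.array([2.5, 2.9, 3.4, 2.7, 3.8, 2.6, 3.1, 2.9])   # hypothetical

for name, err in [("VOSTARS", vostars_err), ("EM navigation", em_nav_err)]:
    q1, med, q3 = np.percentile(err, [25, 50, 75])
    print(f"{name}: median {med:.1f} mm (IQR {q3 - q1:.1f} mm)")

stat, p = mannwhitneyu(vostars_err, em_nav_err, alternative="two-sided")
print(f"p = {p:.3f}")
```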
Affiliation(s)
- Marina Carbone
- Department of Information Engineering, University of Pisa, Pisa, Italy
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Nicola Montemurro
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Department of Neurosurgery, Azienda Ospedaliero Universitaria Pisana, Pisa, Italy
- Nadia Cattari
- Department of Information Engineering, University of Pisa, Pisa, Italy
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Martina Autelitano
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Department of Neurosurgery, Azienda Ospedaliero Universitaria Pisana, Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, Pisa, Italy
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, Pisa, Italy
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Emanuele Cigna
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Sara Condino
- Department of Information Engineering, University of Pisa, Pisa, Italy
- EndoCAS Interdipartimental Center, University of Pisa, Pisa, Italy
3. Gao L, Zhang H, Xu Y, Dong Y, Sheng L, Fan Y, Qin C, Gu W. Mixed reality-assisted versus landmark-guided spinal puncture in elderly patients: protocol for a stratified randomized controlled trial. Trials 2024;25:780. PMID: 39558217; PMCID: PMC11575154; DOI: 10.1186/s13063-024-08628-2.
Abstract
BACKGROUND Performing spinal anesthesia in elderly patients with spine degeneration is challenging for novice practitioners. This stratified randomized controlled trial aims to compare the effectiveness of mixed reality-assisted spinal puncture (MRasp) with that of landmark-guided spinal puncture (LGsp) performed by novice practitioners in elderly patients. METHODS This prospective, single-center, stratified, blocked, parallel randomized controlled trial will include 168 patients (aged ≥ 65 years) scheduled for elective surgery involving spinal anesthesia. All spinal punctures will be performed by anesthesiology interns and residents trained at Huadong Hospital. Patients will be randomly assigned to the MRasp group (n = 84) or the LGsp group (n = 84). Based on each intern/resident's experience in spinal puncture, participants will be stratified into three clusters: the primary, intermediate, and advanced groups. The primary outcome will be the rate of successful first-attempt needle insertion, compared between the MRasp and LGsp groups. Secondary outcomes will include the number of needle insertion attempts, the number of redirection attempts, the number of passes, the rate of successful first needle pass, the spinal puncture time, the total procedure time, and the incidence of perioperative complications. A stratified subgroup analysis will also be conducted for interns/residents at different experience levels. DISCUSSION The findings from this trial will help establish the effectiveness of MRasp performed by novice practitioners in elderly patients and may provide experimental evidence for an effective visualization technology to assist spinal puncture. TRIAL REGISTRATION Chinese Clinical Trials Registry ChiCTR2300075291. Registered on August 31, 2023. https://www.chictr.org.cn/bin/project/edit?pid=189622
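For readers unfamiliar with the allocation scheme named in the protocol, the sketch below shows one common way to generate a stratified, blocked 1:1 allocation list. Block size, per-stratum sample sizes, and seeds are assumptions for illustration; the trial's actual randomization procedure may differ.

```python
# Sketch of stratified, blocked 1:1 randomization (assumed block size 4).
# Stratum sizes below are illustrative, not taken from the protocol.
import random

def blocked_allocation(n_per_arm, block_size=4, seed=None):
    """Randomized sequence of 'MRasp'/'LGsp' labels in permuted blocks."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < 2 * n_per_arm:
        block = ["MRasp", "LGsp"] * (block_size // 2)
        rng.shuffle(block)          # permute each block independently
        sequence.extend(block)
    return sequence[: 2 * n_per_arm]

strata = {"primary": 28, "intermediate": 28, "advanced": 28}  # assumed sizes
for i, (stratum, n) in enumerate(strata.items()):
    print(stratum, blocked_allocation(n, seed=i)[:8], "...")
```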
Affiliation(s)
- Lei Gao
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Haichao Zhang
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Yidi Xu
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Yanjun Dong
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Lu Sheng
- Department of Urology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Yongqian Fan
- Department of Orthopedics, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Chunhui Qin
- Department of Pain Management, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai, 200437, China
- Weidong Gu
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
4. Isikay I, Cekic E, Baylarov B, Tunc O, Hanalioglu S. Narrative review of patient-specific 3D visualization and reality technologies in skull base neurosurgery: enhancements in surgical training, planning, and navigation. Front Surg 2024;11:1427844. PMID: 39081485; PMCID: PMC11287220; DOI: 10.3389/fsurg.2024.1427844.
Abstract
Recent advances in medical imaging, computer vision, 3-dimensional (3D) modeling, and artificial intelligence (AI)-integrated technologies have paved the way for generating patient-specific, realistic 3D visualizations of pathological anatomy in neurosurgical conditions. Immersive surgical simulation through augmented reality (AR), virtual reality (VR), mixed reality (MxR), extended reality (XR), and 3D printing applications has further increased their utilization in current surgical practice and training. This narrative review investigates state-of-the-art studies, the limitations of these technologies, and future directions for them in the field of skull base surgery. We begin with a methodology summary for creating accurate 3D models customized for each patient by combining several imaging modalities. Then, we explore how these models are employed in surgical planning simulations and real-time navigation systems for procedures involving the anterior, middle, and posterior cranial skull base, including endoscopic and open microsurgical operations. We also evaluate their influence on surgical decision-making, performance, and education. Accumulating evidence demonstrates that these technologies can enhance the visibility of the neuroanatomical structures situated at the cranial base and assist surgeons in preoperative planning and intraoperative navigation, thus showing great potential to improve surgical results and reduce complications. Maximum effectiveness can be achieved in approach selection, patient positioning, craniotomy placement, anti-target avoidance, and comprehension of the spatial interrelationships of neurovascular structures. Finally, we present the obstacles and possible future paths for the broader implementation of these groundbreaking methods in neurosurgery, highlighting the importance of ongoing technological advancements and interdisciplinary collaboration to improve the accuracy and usefulness of 3D visualization and reality technologies in skull base surgeries.
Affiliation(s)
- Ilkay Isikay
- Department of Neurosurgery, Faculty of Medicine, Hacettepe University, Ankara, Türkiye
- Efecan Cekic
- Neurosurgery Clinic, Polatli Duatepe State Hospital, Ankara, Türkiye
- Baylar Baylarov
- Department of Neurosurgery, Faculty of Medicine, Hacettepe University, Ankara, Türkiye
- Osman Tunc
- Btech Innovation, METU Technopark, Ankara, Türkiye
- Sahin Hanalioglu
- Department of Neurosurgery, Faculty of Medicine, Hacettepe University, Ankara, Türkiye
5. Aweeda M, Adegboye F, Yang SF, Topf MC. Enhancing Surgical Vision: Augmented Reality in Otolaryngology-Head and Neck Surgery. Journal of Medical Extended Reality 2024;1:124-136. PMID: 39091667; PMCID: PMC11290041; DOI: 10.1089/jmxr.2024.0010.
Abstract
Augmented reality (AR) technology has become widely established in otolaryngology-head and neck surgery. Over the past 20 years, numerous AR systems have been investigated and validated across the subspecialties, in both cadaveric and live surgical studies. AR displays, most commonly projected through head-mounted devices, microscopes, and endoscopes, have demonstrated utility in preoperative planning, intraoperative guidance, and improvement of surgical decision-making. Specifically, they have demonstrated feasibility in guiding tumor margin resections, identifying critical structures intraoperatively, and displaying patient-specific virtual models derived from preoperative imaging, with millimetric accuracy. This review summarizes both established and emerging AR technologies, detailing how their systems work, what features they offer, and their clinical impact across otolaryngology subspecialties. As AR technology continues to advance, its integration holds promise for enhancing surgical precision and simulation training and, ultimately, improving patient outcomes.
Affiliation(s)
- Marina Aweeda
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Feyisayo Adegboye
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Shiayin F. Yang
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Michael C. Topf
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- School of Engineering, Vanderbilt University, Nashville, Tennessee, USA
6. Qi Z, Jin H, Xu X, Wang Q, Gan Z, Xiong R, Zhang S, Liu M, Wang J, Ding X, Chen X, Zhang J, Nimsky C, Bopp MHA. Head model dataset for mixed reality navigation in neurosurgical interventions for intracranial lesions. Sci Data 2024;11:538. PMID: 38796526; PMCID: PMC11127921; DOI: 10.1038/s41597-024-03385-y.
Abstract
Mixed reality navigation (MRN) technology is emerging as an increasingly significant topic in neurosurgery. MRN enables neurosurgeons to "see through" the head with an interactive, hybrid visualization environment that merges virtual- and physical-world elements. Offering immersive, intuitive, and reliable guidance for preoperative and intraoperative interventions for intracranial lesions, MRN shows potential as an economically efficient and user-friendly alternative to standard neuronavigation systems. However, the clinical research and development of MRN systems present challenges: recruiting a sufficient number of patients within a limited timeframe is difficult, and acquiring low-cost, commercially available, medically significant head phantoms is equally challenging. To accelerate the development of novel MRN systems and surmount these obstacles, this study presents a dataset designed for MRN system development and testing in neurosurgery. It includes CT and MRI data from 19 patients with intracranial lesions, along with derived 3D models of anatomical structures and validation references. The models are available in Wavefront object (OBJ) and Stereolithography (STL) formats, supporting the creation and assessment of neurosurgical MRN applications.
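Since the dataset ships OBJ/STL surface models, a few lines suffice to load and sanity-check a case. The sketch below uses the third-party trimesh library (a choice of convenience, not a dependency named by the paper), and the file name is a hypothetical placeholder.

```python
# Hypothetical sketch: loading one of the dataset's STL/OBJ models and
# checking basic mesh properties. File name and library choice are assumptions.
import trimesh

mesh = trimesh.load("case01_lesion.stl")  # placeholder path, not a real file name
print("watertight:", mesh.is_watertight)
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
if mesh.is_watertight:
    # Volume is meaningful only for closed surfaces; units follow the source data.
    print("volume:", mesh.volume)
```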
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Haitao Jin
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- NCO School, Army Medical University, 050081, Shijiazhuang, China
- Xinghua Xu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Qun Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Zhichao Gan
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Ruochu Xiong
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Department of Neurosurgery, Division of Medicine, Graduate School of Medical Sciences, Kanazawa University, Takara-machi 13-1, 920-8641, Kanazawa, Ishikawa, Japan
- Shiyu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Minghang Liu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Jingyue Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Xinyu Ding
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Xiaolei Chen
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Jiashu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Miriam H A Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
7. Ben-Shlomo N, Rahimi A, Abunimer AM, Guenette JP, Juliano AF, Starr JR, Jayender J, Corrales CE. Inner Ear Breaches from Vestibular Schwannoma Surgery: Revisiting the Incidence of Otologic Injury from Retrosigmoid and Middle Cranial Fossa Approaches. Otol Neurotol 2024;45:311-318. PMID: 38238921; PMCID: PMC10922915; DOI: 10.1097/mao.0000000000004105.
Abstract
OBJECTIVE To assess the rate of iatrogenic injury to the inner ear in vestibular schwannoma resections. STUDY DESIGN Retrospective case review. SETTING Multiple academic tertiary care hospitals. PATIENTS Patients who underwent retrosigmoid or middle cranial fossa approaches for vestibular schwannoma resection between 1993 and 2015. INTERVENTION Diagnostic, with therapeutic implications. MAIN OUTCOME MEASURE Drilling breach of the inner ear, as confirmed by operative note or postoperative computed tomography (CT). RESULTS Overall, 21.5% of patients undergoing either retrosigmoid or middle fossa approaches to the internal auditory canal were identified as having a breach of the vestibulocochlear system. Because postoperative CT imaging was lacking in this cohort, this figure likely underestimates the true incidence of inner ear breaches. Of all postoperative CT scans reviewed, 51.8% showed an inner ear breach. As there may be bias in which patients undergo postoperative CT, an intermediate estimate based on sensitivity analyses places the incidence of inner ear breaches from lateral skull base surgery at 34.7%. CONCLUSIONS A high percentage of vestibular schwannoma surgeries via retrosigmoid and middle cranial fossa approaches result in drilling breaches of the inner ear. This study reinforces the value of preoperative image analysis for determining the risk of inner ear breaches during vestibular schwannoma surgery and the importance of acquiring postoperative CT studies to evaluate the integrity of the inner ear.
Affiliation(s)
- Nir Ben-Shlomo
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Carver College of Medicine, Iowa City, Iowa
- Abdullah M Abunimer
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Jeffrey P Guenette
- Division of Neuroradiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Amy F Juliano
- Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts
- Jacqueline R Starr
- Channing Division of Network Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Jagadeesan Jayender
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- C Eduardo Corrales
- Department of Otolaryngology-Head and Neck Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
8. Begagić E, Bečulić H, Pugonja R, Memić Z, Balogun S, Džidić-Krivić A, Milanović E, Salković N, Nuhović A, Skomorac R, Sefo H, Pojskić M. Augmented Reality Integration in Skull Base Neurosurgery: A Systematic Review. Medicina (Kaunas) 2024;60:335. PMID: 38399622; PMCID: PMC10889940; DOI: 10.3390/medicina60020335.
Abstract
Background and Objectives: To investigate the role of augmented reality (AR) in skull base (SB) neurosurgery. Materials and Methods: Following PRISMA methodology, the PubMed and Scopus databases were searched to extract data related to AR integration in SB surgery. Results: The majority of the 19 included studies (42.1%) were conducted in the United States, with a focus on the last five years (77.8%). Studies were categorized by model: phantom skull models (31.2%, n = 6), human cadavers (15.8%, n = 3), or human patients (52.6%, n = 10). Surgical modality was specified in 18 of the 19 studies, with microscopic surgery predominant (n = 10; 52.6%). Most studies used only CT as the data source (n = 9; 47.4%), and optical tracking was the prevalent tracking modality (n = 9; 47.3%). The target registration error (TRE) spanned from 0.55 to 10.62 mm. Conclusion: Despite variations in TRE values, the studies highlighted successful outcomes and minimal complications. Challenges such as device practicality and data security were acknowledged, but the application of low-cost AR devices suggests broader feasibility.
Affiliation(s)
- Emir Begagić
- Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Hakija Bečulić
- Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Ragib Pugonja
- Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Zlatan Memić
- Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Simon Balogun
- Division of Neurosurgery, Department of Surgery, Obafemi Awolowo University Teaching Hospitals Complex, Ilesa Road PMB 5538, Ile-Ife 220282, Nigeria
- Amina Džidić-Krivić
- Department of Neurology, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Elma Milanović
- Neurology Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Naida Salković
- Department of General Medicine, School of Medicine, University of Tuzla, Univerzitetska 1, 75000 Tuzla, Bosnia and Herzegovina
- Adem Nuhović
- Department of General Medicine, School of Medicine, University of Sarajevo, Univerzitetska 1, 71000 Sarajevo, Bosnia and Herzegovina
- Rasim Skomorac
- Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Department of Surgery, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Haso Sefo
- Neurosurgery Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Mirza Pojskić
- Department of Neurosurgery, University Hospital Marburg, Baldingerstr., 35033 Marburg, Germany
9. Morley CT, Arreola DM, Qian L, Lynn AL, Veigulis ZP, Osborne TF. Mixed Reality Surgical Navigation System; Positional Accuracy Based on Food and Drug Administration Standard. Surg Innov 2024;31:48-57. PMID: 38019844; PMCID: PMC10773158; DOI: 10.1177/15533506231217620.
Abstract
BACKGROUND Computer-assisted surgical navigation systems are designed to improve outcomes by providing clinicians with procedural guidance information. The use of new technologies, such as mixed reality, offers the potential for more intuitive, efficient, and accurate procedural guidance. The goal of this study is to assess the positional accuracy and consistency of a clinical mixed reality system that utilizes commercially available wireless head-mounted displays (HMDs), custom software, and localization instruments. METHODS Independent teams using second-generation Microsoft HoloLens hardware, Medivis SurgicalAR software, and localization instruments tested the accuracy of the combined system at different institutions, times, and locations. The ASTM F2554-18 consensus standard for computer-assisted surgical systems, as recognized by the U.S. FDA, was utilized to measure performance. A total of 288 tests were performed. RESULTS The system demonstrated consistent results, with an average accuracy better than one millimeter (0.75 ± 0.37 mm, SD). CONCLUSION Independently acquired positional tracking accuracies exceeded those of conventional in-market surgical navigation tracking systems and the FDA-recognized standard. Importantly, this performance was achieved at two different institutions, using an international testing standard, and with a system comprising a commercially available off-the-shelf wireless head-mounted display and software.
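At its core, the reported metric is the distance between positions measured by the navigation system and known reference positions, summarized as mean ± SD. The sketch below illustrates only that computation; it is not the full ASTM F2554-18 procedure, and all coordinates are synthetic.

```python
# Hedged sketch: positional error as Euclidean distance between measured and
# reference points, reported as mean ± SD. Coordinates are synthetic.
import numpy as np

reference = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
measured = reference + np.random.default_rng(0).normal(0.0, 0.4, reference.shape)

errors = np.linalg.norm(measured - reference, axis=1)   # per-point error (mm)
print(f"accuracy: {errors.mean():.2f} ± {errors.std(ddof=1):.2f} mm")
```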
Affiliation(s)
- David M. Arreola
- US Department of Veterans Affairs, Palo Alto Healthcare System, Palo Alto, CA, USA
- Zachary P. Veigulis
- US Department of Veterans Affairs, Palo Alto Healthcare System, Palo Alto, CA, USA
- Department of Business Analytics, Tippie College of Business, University of Iowa, Iowa City, IA, USA
- Thomas F. Osborne
- US Department of Veterans Affairs, Palo Alto Healthcare System, Palo Alto, CA, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
10. Qi Z, Jin H, Wang Q, Gan Z, Xiong R, Zhang S, Liu M, Wang J, Ding X, Chen X, Zhang J, Nimsky C, Bopp MHA. The Feasibility and Accuracy of Holographic Navigation with Laser Crosshair Simulator Registration on a Mixed-Reality Display. Sensors (Basel) 2024;24:896. PMID: 38339612; PMCID: PMC10857152; DOI: 10.3390/s24030896.
Abstract
Addressing conventional neurosurgical navigation systems' high costs and complexity, this study explores the feasibility and accuracy of a simplified, cost-effective mixed reality navigation (MRN) system based on a laser crosshair simulator (LCS). A new automatic registration method was developed, featuring coplanar laser emitters and a recognizable target pattern. The workflow was integrated into Microsoft's HoloLens-2 for practical application. The study assessed the system's precision by utilizing life-sized 3D-printed head phantoms based on computed tomography (CT) or magnetic resonance imaging (MRI) data from 19 patients (female/male: 7/12, average age: 54.4 ± 18.5 years) with intracranial lesions. Six to seven CT/MRI-visible scalp markers were used as reference points per case. The LCS-MRN's accuracy was evaluated through landmark-based and lesion-based analyses, using metrics such as target registration error (TRE) and Dice similarity coefficient (DSC). The system demonstrated immersive capabilities for observing intracranial structures across all cases. Analysis of 124 landmarks showed a TRE of 3.0 ± 0.5 mm, consistent across various surgical positions. The DSC of 0.83 ± 0.12 correlated significantly with lesion volume (Spearman rho = 0.813, p < 0.001). Therefore, the LCS-MRN system is a viable tool for neurosurgical planning, highlighting its low user dependency, cost-efficiency, and accuracy, with prospects for future clinical application enhancements.
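Two metrics anchor the evaluation above: target registration error at landmarks and the Dice similarity coefficient (DSC) between overlaid and reference lesion volumes, with DSC correlated against lesion volume. The sketch below shows the standard definition of DSC and a Spearman correlation on per-case values; all arrays and numbers are synthetic stand-ins, not study data.

```python
# Sketch of the reported metrics: Dice similarity coefficient between two
# binary masks, and Spearman correlation of DSC with lesion volume.
import numpy as np
from scipy.stats import spearmanr

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(1)
mask_ref = rng.random((32, 32, 32)) > 0.7                # synthetic reference mask
mask_holo = mask_ref.copy()
mask_holo[rng.random(mask_holo.shape) < 0.05] ^= True    # perturbed overlay mask
print(f"DSC = {dice(mask_ref, mask_holo):.3f}")

dscs = [0.65, 0.72, 0.80, 0.88, 0.91]                    # hypothetical per-case DSC
volumes_ml = [4.2, 7.9, 15.3, 30.1, 52.6]                # hypothetical lesion volumes
rho, p = spearmanr(dscs, volumes_ml)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```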
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Haitao Jin
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- NCO School, Army Medical University, Shijiazhuang 050081, China
- Qun Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Zhichao Gan
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Ruochu Xiong
- Department of Neurosurgery, Division of Medicine, Graduate School of Medical Sciences, Kanazawa University, Takara-machi 13-1, Kanazawa 920-8641, Japan
- Shiyu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Minghang Liu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Jingyue Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Xinyu Ding
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Xiaolei Chen
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Jiashu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Miriam H. A. Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
11. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023;23:9872. PMID: 38139718; PMCID: PMC10748263; DOI: 10.3390/s23249872.
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. By being further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could enhance their performance capabilities in IGS manyfold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with a new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
12. Deng Z, Xiang N, Pan J. State of the Art in Immersive Interactive Technologies for Surgery Simulation: A Review and Prospective. Bioengineering (Basel) 2023;10:1346. PMID: 38135937; PMCID: PMC10740891; DOI: 10.3390/bioengineering10121346.
Abstract
Immersive technologies have thrived on a strong foundation of software and hardware, injecting vitality into medical training. This surge has witnessed numerous endeavors incorporating immersive technologies into surgery simulation for surgical skills training, with a growing number of researchers delving into this domain. Relevant experiences and patterns need to be summarized urgently to enable researchers to establish a comprehensive understanding of this field, thus promoting its continuous growth. This study provides a forward-looking perspective by reviewing the latest development of immersive interactive technologies for surgery simulation. The investigation commences from a technological standpoint, delving into the core aspects of virtual reality (VR), augmented reality (AR) and mixed reality (MR) technologies, namely, haptic rendering and tracking. Subsequently, we summarize recent work based on the categorization of minimally invasive surgery (MIS) and open surgery simulations. Finally, the study showcases the impressive performance and expansive potential of immersive technologies in surgical simulation while also discussing the current limitations. We find that the design of interaction and the choice of immersive technology in virtual surgery development should be closely related to the corresponding interactive operations in the real surgical speciality. This alignment facilitates targeted technological adaptations in the direction of greater applicability and fidelity of simulation.
Affiliation(s)
- Zihan Deng
- Department of Computing, School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
- Nan Xiang
- Department of Computing, School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
13. Qi Z, Bopp MHA, Nimsky C, Chen X, Xu X, Wang Q, Gan Z, Zhang S, Wang J, Jin H, Zhang J. A Novel Registration Method for a Mixed Reality Navigation System Based on a Laser Crosshair Simulator: A Technical Note. Bioengineering (Basel) 2023;10:1290. PMID: 38002414; PMCID: PMC10669875; DOI: 10.3390/bioengineering10111290.
Abstract
Mixed Reality Navigation (MRN) is pivotal in augmented reality-assisted intelligent neurosurgical interventions. However, existing MRN registration methods face challenges in concurrently achieving low user dependency, high accuracy, and clinical applicability. This study proposes and evaluates a novel registration method based on a laser crosshair simulator, designed to replicate the scanner frame's position on the patient, and assesses its feasibility and accuracy. The system autonomously calculates the transformation mapping coordinates from the tracking space to the reference image space. A mathematical model and workflow for registration were designed, and a Universal Windows Platform (UWP) application was developed on HoloLens-2. Finally, a head phantom was used to measure the system's target registration error (TRE). The proposed method was successfully implemented, obviating the need for user interactions with virtual objects during the registration process. Regarding accuracy, the average deviation was 3.7 ± 1.7 mm. This method shows encouraging results in efficiency and intuitiveness and marks a valuable advancement in low-cost, easy-to-use MRN systems. The potential for enhancing accuracy and adaptability in intervention procedures positions this approach as promising for improving surgical outcomes.
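The paper's transform is derived from the simulated scanner frame rather than from digitized fiducials, but the underlying computation in marker-based MRN generally reduces to estimating a rigid transform between tracking space and image space and then checking it with a target registration error. The sketch below shows the standard least-squares (Kabsch) solution on synthetic, noise-free points; it illustrates the general principle, not the paper's specific algorithm.

```python
# Hedged sketch: rigid tracking-to-image registration (Kabsch) from paired
# fiducials, then TRE at a held-out target. All points are synthetic.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

# Synthetic points: 4 fiducials + 1 target, mapped by a known rotation/translation.
tracking = np.array([[0, 0, 0], [80, 0, 0], [0, 60, 0], [0, 0, 40], [40, 30, 20]], float)
theta = np.radians(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
image = tracking @ R_true.T + np.array([5.0, -3.0, 10.0])

R, t = rigid_transform(tracking[:4], image[:4])      # register on the fiducials
tre = np.linalg.norm((R @ tracking[4] + t) - image[4])
print(f"TRE at held-out target: {tre:.3f} mm")       # ~0 for noise-free input
```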
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Miriam H. A. Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Xiaolei Chen
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Xinghua Xu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Qun Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Zhichao Gan
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Shiyu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Jingyue Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Haitao Jin
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- NCO School, Army Medical University, Shijiazhuang 050081, China
- Jiashu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
14. Feinmesser G, Yogev D, Goldberg T, Parmet Y, Illouz S, Vazgovsky O, Eshet Y, Tejman-Yarden S, Alon E. Virtual reality-based training and pre-operative planning for head and neck sentinel lymph node biopsy. Am J Otolaryngol 2023;44:103976. PMID: 37480684; DOI: 10.1016/j.amjoto.2023.103976.
Abstract
OBJECTIVE Sentinel lymph node biopsy (SLNB) is crucial for managing head and neck skin cancer. However, variable lymphatic drainage can complicate SLN detection when using single-photon emission computed tomography (SPECT) or lymphoscintigraphy. Virtual reality (VR) can contribute to pre-operative planning by simulating a realistic 3D model, which improves orientation, and can facilitate real-patient training outside the operating room. This study explored using a VR platform for pre-operative planning in head and neck skin cancer patients undergoing SLNB and assessed its value for resident training. MATERIALS AND METHODS In this prospective technology pilot study, attending surgeons and residents who performed 21 SLNB operations on patients with head and neck skin cancers (81% male, mean age 69.2 ± 11.3 years) used a VR simulation model based on each patient's pre-operative SPECT scan to examine patient-specific anatomy. After surgery, they completed a questionnaire on the efficiency of the VR simulation as a pre-operative planning tool and as a training device for residents. RESULTS The attending surgeons rated the VR model's accuracy at 8.3 ± 1.6 out of 10. Three-quarters (76%) of residents reported increased confidence after using VR. The physicians rated the platform's contribution to resident training at 7.4 ± 2.1 to 8.9 ± 1.3 out of 10. CONCLUSION A VR SLNB simulation can accurately portray marked sentinel lymph nodes. It was rated highly as a surgical planning and teaching tool by attending surgeons and residents alike and may play a role in pre-operative planning and resident training. Further studies are needed to explore its applications in practice.
Affiliation(s)
- Gilad Feinmesser
- Department of Otolaryngology-Head and Neck Surgery, Sheba Medical Center, Ramat Gan, Israel
- David Yogev
- School of Medicine, Tel Aviv University, Tel Aviv, Israel; Sheba Arrow Project, Sheba Medical Center, Ramat Gan, Israel; Department of Otolaryngology-Head and Neck Surgery, Sheba Medical Center, Ramat Gan, Israel; The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- Tomer Goldberg
- School of Medicine, Tel Aviv University, Tel Aviv, Israel; The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- Yisrael Parmet
- Department of Industrial Engineering and Management, Ben Gurion University, Beer Sheva, Israel
- Shay Illouz
- School of Medicine, Tel Aviv University, Tel Aviv, Israel; The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- Oliana Vazgovsky
- The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- Yael Eshet
- School of Medicine, Tel Aviv University, Tel Aviv, Israel; Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan, Israel
- Shai Tejman-Yarden
- School of Medicine, Tel Aviv University, Tel Aviv, Israel; The Engineering Medical Research Lab, Sheba Medical Center, Ramat Gan, Israel
- Eran Alon
- School of Medicine, Tel Aviv University, Tel Aviv, Israel; Sheba Arrow Project, Sheba Medical Center, Ramat Gan, Israel; Department of Otolaryngology-Head and Neck Surgery, Sheba Medical Center, Ramat Gan, Israel
15. Ben-Shlomo N, Jayender J, Guenette JP, Corrales CE. Iatrogenic inner ear dehiscence associated with lateral skull base surgery: a systematic analysis of drilling injuries and their causal factors. Acta Neurochir (Wien) 2023;165:2969-2977. PMID: 37430067; PMCID: PMC10905369; DOI: 10.1007/s00701-023-05695-3.
Abstract
PURPOSE Drilling injuries of the inner ear are an underreported complication of lateral skull base (LSB) surgery. Inner ear breaches can cause hearing loss, vestibular dysfunction, and third window phenomenon. This study aims to elucidate the primary factors causing iatrogenic inner ear dehiscences (IED) in 9 patients who presented to a tertiary care center with postoperative symptoms of IED following LSB surgery for vestibular schwannoma, endolymphatic sac tumor, Meniere's disease, paraganglioma jugulare, and vagal schwannoma. METHODS Using 3D Slicer image processing software, geometric and volumetric analyses were applied to both preoperative and postoperative imaging to identify causal factors of iatrogenic inner ear breaches. Segmentation analyses, craniotomy analyses, and drilling trajectory analyses were performed. Cases of retrosigmoid approaches for vestibular schwannoma resection were compared to matched controls. RESULTS Excessive lateral drilling breaching a single inner ear structure occurred in 3 cases undergoing transjugular (n=2) and transmastoid (n=1) approaches. An inadequate drilling trajectory breaching ≥1 inner ear structure occurred in 6 cases undergoing retrosigmoid (n=4), transmastoid (n=1), and middle cranial fossa (n=1) approaches. In retrosigmoid approaches, the 2-cm visualization window and craniotomy limits did not provide drilling angles that reached the entire tumor without causing IED, in contrast to matched controls. CONCLUSIONS Inappropriate drill depth, errant lateral drilling, an inadequate drill trajectory, or a combination of these led to iatrogenic IED. Image-based segmentation, individualized 3D anatomical model generation, and geometric and volumetric analyses can optimize operative plans and possibly reduce inner ear breaches in lateral skull base surgery.
Affiliation(s)
- Nir Ben-Shlomo
- Department of Otolaryngology-Head and Neck Surgery, Carver College of Medicine, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jagadeesan Jayender
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jeffrey P Guenette
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Carleton Eduardo Corrales
- Department of Otolaryngology-Head and Neck Surgery, Brigham and Women's Hospital, Harvard Medical School, 45 Francis Street, Boston, MA, 02115, USA
16. Männle D, Pohlmann J, Monji-Azad S, Hesser J, Rotter N, Affolter A, Lammert A, Kramer B, Ludwig S, Huber L, Scherl C. Artificial intelligence directed development of a digital twin to measure soft tissue shift during head and neck surgery. PLoS One 2023;18:e0287081. PMID: 37556451; PMCID: PMC10411805; DOI: 10.1371/journal.pone.0287081.
Abstract
Digital twins derived from 3D scanning data were developed to measure soft tissue deformation in head and neck surgery using an artificial intelligence approach. This framework was applied, suggesting the feasibility of soft tissue shift detection, a hitherto unsolved problem. In a pig head cadaver model, 104 soft tissue resections were performed. The surface of the removed soft tissue (RTP) and the corresponding resection cavity (RC) were scanned (N = 416) to train an artificial intelligence (AI) using two different 3D scanners (HoloLens 2; Artec Eva). An artificial tissue shift (TS) was created by changing the tissue temperature from 7.91 ± 4.1°C to 36.37 ± 1.28°C. Digital twins of RTP and RC in cold and warm conditions were generated, and volumes were calculated based on the 3D surface meshes. Significant differences in the number of vertices created by the two 3D scanners (HoloLens 2: 51313 vs. Artec Eva: 21694, p < 0.0001) resulted in differences in volume measurement of the RC (p = 0.0015). A significant TS could be induced by changing the temperature of the tissue of the RC (p = 0.0027) and the RTP (p < 0.0001). The RC showed a stronger correlation in TS with heating than the RTP, with a volume increase of 3.1 μl or 9.09% (p = 0.449). Cadaver models are suitable for training a machine learning model for deformable registration through the creation of a digital twin. Despite different point cloud densities, the HoloLens and Artec Eva provide only slightly different estimates of volume, meaning that both devices can be used for the task. TS can be simulated and measured by temperature change, to which RC and RTP react differently. This corresponds to the clinical behaviour of the tumour and resection cavity during surgery, which could be used for frozen-section management and a range of other clinical applications.
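Volumes in such pipelines are typically computed directly from the closed surface mesh via the divergence theorem, as a sum of signed tetrahedron volumes, one per triangle. The sketch below demonstrates that computation on a toy watertight cube; it is a generic method, not code from the study.

```python
# Sketch: volume of a closed, consistently oriented triangle mesh via signed
# tetrahedra (divergence theorem). The unit cube stands in for a scanned twin.
import numpy as np

def mesh_volume(vertices, faces):
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2))  # 6x tetra volumes
    return abs(signed.sum()) / 6.0

V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
F = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7], [0, 1, 5], [0, 5, 4],
              [1, 2, 6], [1, 6, 5], [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]])
print(f"volume = {mesh_volume(V, F):.3f}")  # expect 1.000 for the unit cube
```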
Affiliation(s)
- David Männle
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jan Pohlmann
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Sara Monji-Azad
- Mannheim Institute for Intelligent Systems in Medicine (MIISM), Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jürgen Hesser
- Mannheim Institute for Intelligent Systems in Medicine (MIISM), Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany
- Central Institute for Computer Engineering (ZITI), Heidelberg University, Heidelberg, Germany
- CZS Heidelberg Center for Model-Based AI, Heidelberg University, Heidelberg, Germany
- AI Health Innovation Cluster, Heidelberg-Mannheim Health and Life Science Alliance, Heidelberg University, Heidelberg, Germany
- Nicole Rotter
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Annette Affolter
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Anne Lammert
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Benedikt Kramer
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Sonja Ludwig
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Lena Huber
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Claudia Scherl
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- AI Health Innovation Cluster, Heidelberg-Mannheim Health and Life Science Alliance, Heidelberg University, Heidelberg, Germany
17. Chen JX, Yu S, Ding AS, Lee DJ, Welling DB, Carey JP, Gray ST, Creighton FX. Augmented Reality in Otology/Neurotology: A Scoping Review with Implications for Practice and Education. Laryngoscope 2023;133:1786-1795. PMID: 36519414; PMCID: PMC10267287; DOI: 10.1002/lary.30515.
Abstract
OBJECTIVE To determine how augmented reality (AR) has been applied to the field of otology/neurotology, examine trends and gaps in research, and provide an assessment of the future potential of this technology within surgical practice and education. DATA SOURCES PubMed, EMBASE, and Cochrane Library were searched from their inceptions through October 2022. A manual bibliography search was also conducted. REVIEW METHODS A scoping review was conducted and reported according to PRISMA-ScR guidelines. Data from studies describing the application of AR to the field of otology/neurotology were evaluated according to a priori inclusion/exclusion criteria. Exclusion criteria included non-English language articles, abstracts, letters/commentaries, conference papers, and review articles. RESULTS Eighteen articles covering a diverse range of AR platforms were included. Publication dates spanned from 2007 to 2022, and the rate of publication increased over this time. Six of 18 studies were case series in human patients, while the remaining were proofs of concept in cadaveric, artificial, or animal models. The most common application of AR was surgical navigation (14 of 18 studies). Computed tomography was the most common source of input data. Few studies noted potential applications to surgical training. CONCLUSION Interest in the application of AR to otology/neurotology is growing, based on the number of recent publications that use a broad range of hardware, software, and AR platforms. Large gaps in research, such as the need for submillimeter registration error, must be addressed prior to adoption in the operating room and for educational purposes. LEVEL OF EVIDENCE N/A Laryngoscope, 133:1786-1795, 2023.
Collapse
Affiliation(s)
- Jenny X. Chen
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
| | | | - Andy S. Ding
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
| | - Daniel J. Lee
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, MA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, MA
| | - D. Brad Welling
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, MA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, MA
| | - John P. Carey
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
| | - Stacey T. Gray
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, MA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, MA
| | - Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
| |
Collapse
|
18
|
Wu J, Gao L, Shi Q, Qin C, Xu K, Jiang Z, Zhang X, Li M, Qiu J, Gu W. Accuracy Evaluation Trial of Mixed Reality-Guided Spinal Puncture Technology. Ther Clin Risk Manag 2023; 19:599-609. [PMID: 37484696 PMCID: PMC10361284 DOI: 10.2147/tcrm.s416918] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 07/03/2023] [Indexed: 07/25/2023] Open
Abstract
Purpose To evaluate the accuracy of mixed reality (MR)-guided visualization technology for spinal puncture (MRsp). Methods MRsp involved the following three steps: 1. Lumbar spine computed tomography (CT) data were obtained to reconstruct virtual 3D images, which were imported into a HoloLens (2nd gen). 2. The patented MR system quickly recognized the spatial orientation and superimposed the virtual image over the real spine in the HoloLens. 3. The operator performed the spinal puncture with structural information provided by the virtual image. A posture fixation cushion was used to keep the subjects' lateral decubitus position consistent. Twelve subjects were recruited to verify the setup error and the registration error. The setup error was calculated from the first two CT scans by measuring the displacement of the two location markers. The projection points of the upper edge of the L3 spinous process (L3↑), the lower edge of the L3 spinous process (L3↓), and the lower edge of the L4 spinous process (L4↓) in the virtual image were positioned and marked on the skin as the registration markers. A third CT scan was performed to determine the registration error by measuring the displacement between the three registration markers and the corresponding real spinous process edges. Results The setup errors in the position of the cranial location marker between CT scans along the left-right (LR), anterior-posterior (AP), and superior-inferior (SI) axes of the CT bed measured 0.09 ± 0.06 cm, 0.30 ± 0.28 cm, and 0.22 ± 0.12 cm, respectively, while those of the caudal location marker measured 0.08 ± 0.06 cm, 0.29 ± 0.18 cm, and 0.18 ± 0.10 cm, respectively. The registration errors between the three registration markers and the subject's real L3↑, L3↓, and L4↓ were 0.11 ± 0.09 cm, 0.15 ± 0.13 cm, and 0.13 ± 0.10 cm, respectively, in the SI direction. Conclusion This MR-guided visualization technology for spinal puncture can accurately and quickly superimpose reconstructed 3D CT images over a real human spine.
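The setup error reported here is simply the per-axis displacement of a skin marker between repeated CT scans, summarized as mean ± SD. A minimal sketch of that bookkeeping, with invented marker coordinates rather than the study's data:

```python
import numpy as np

# Invented marker positions (cm) in the CT-bed frame for four subjects;
# columns: left-right (LR), anterior-posterior (AP), superior-inferior (SI).
scan_1 = np.array([[0.12, 3.95, 12.41],
                   [0.08, 4.10, 12.63],
                   [0.15, 3.88, 12.55],
                   [0.10, 4.02, 12.47]])
scan_2 = np.array([[0.05, 4.21, 12.60],
                   [0.17, 3.82, 12.44],
                   [0.11, 4.33, 12.70],
                   [0.02, 3.79, 12.33]])

setup_error = np.abs(scan_2 - scan_1)  # per-subject displacement of one marker
for axis, err in zip(("LR", "AP", "SI"), setup_error.T):
    print(f"{axis}: {err.mean():.2f} ± {err.std(ddof=1):.2f} cm")
```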
Collapse
Affiliation(s)
- Jiajun Wu
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, 200040, People’s Republic of China
| | - Lei Gao
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, 200040, People’s Republic of China
| | - Qiao Shi
- Department of Anesthesiology, International Peace Maternity and Child Health Hospital of China, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200030, People’s Republic of China
| | - Chunhui Qin
- Department of Pain Management, Yueyang Integrated Traditional Chinese Medicine and Western Medicine Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai, 200437, People’s Republic of China
| | - Kai Xu
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, 200040, People’s Republic of China
| | - Zhaoshun Jiang
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, 200040, People’s Republic of China
| | - Xixue Zhang
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, 200040, People’s Republic of China
| | - Ming Li
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
| | - Jianjian Qiu
- Department of Radiation Oncology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
| | - Weidong Gu
- Department of Anesthesiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, People’s Republic of China
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, 200040, People’s Republic of China
| |
Collapse
|
19
|
Jain S, Tajsic T, Das T, Gao Y, Yuan NK, Yeo TT, Graves MJ, Helmy A. Assessment of Accuracy of Mixed Reality Device for Neuronavigation: Proposed Methodology and Results. NEUROSURGERY PRACTICE 2023; 4:e00031. [PMID: 39958371 PMCID: PMC11809955 DOI: 10.1227/neuprac.0000000000000036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 01/06/2023] [Indexed: 02/18/2025]
Abstract
Intraoperative neuronavigation is currently an essential component of neurosurgical operations in several contexts. Recent progress in mixed reality (MR) technology has attempted to overcome the disadvantages of standard neuronavigation systems by allowing the surgeon to superimpose a 3D rendered image onto the patient's anatomy. We present the first study in the literature to assess the surface-matching accuracy of an MR-rendered image, with the aim of evaluating holographic rendering from a mixed reality device as a means of intraoperative neuronavigation. For the purposes of this study, we used the HoloLens 2 with Virtual Surgery Intelligence providing the software capability for image rendering. We used the Realistic Operative Workstation for Educating Neurosurgical Apprentices to represent a patient's skull with intracranial components, which underwent standardized computed tomography (CT) and MRI imaging. Eleven predefined points were used to assess the accuracy of the rendered image against the intraoperative gold standard, conventional neuronavigation. The mean HoloLens error values against the ground truth were significantly higher than those of Stealth with CT as the imaging modality. Using extracranial anatomic landmarks, the HoloLens error values remained significantly higher in magnitude than those of Stealth across both CT and MRI. This study provides a relatively easy and feasible method to assess the accuracy of MR-based navigation without requiring any additions to established imaging protocols. We were unable to show equivalence of MR-based navigation with current neuronavigation systems.
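The accuracy comparison described here reduces to per-landmark Euclidean distances between each system's readout and the ground-truth points. A hedged sketch of that computation, with invented coordinates standing in for the eleven predefined points:

```python
import numpy as np

# Hypothetical 3D coordinates (mm) of three predefined landmarks:
# ground truth vs. the two navigation readouts being compared.
truth    = np.array([[10.0, 22.0, 31.0], [44.0, 18.0, 27.0], [30.0, 40.0, 55.0]])
hololens = np.array([[12.1, 24.5, 29.2], [47.3, 15.6, 29.9], [27.4, 43.1, 57.8]])
stealth  = np.array([[10.6, 22.9, 30.4], [44.8, 17.2, 27.9], [29.2, 40.8, 55.9]])

err_holo    = np.linalg.norm(hololens - truth, axis=1)  # per-landmark Euclidean error
err_stealth = np.linalg.norm(stealth - truth, axis=1)
print(f"HoloLens {err_holo.mean():.2f} mm vs. Stealth {err_stealth.mean():.2f} mm")
```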
Collapse
Affiliation(s)
- Swati Jain
- Division of Neurosurgery, University Surgical Cluster, National University Health System, Singapore
| | - Tamara Tajsic
- Division of Neurosurgery, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
| | - Tilak Das
- Department of Radiology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
| | - Yujia Gao
- Division of Hepatobiliary & Pancreatic Surgery, University Surgical Cluster, National University Health System (NUHS), Singapore
| | - Ngiam Kee Yuan
- Division of General Surgery (Thyroid & Endocrine Surgery), University Surgical Cluster, National University Health System (NUHS), Singapore
| | - Tseng Tsai Yeo
- Division of Neurosurgery, University Surgical Cluster, National University Health System, Singapore
| | - Martin J. Graves
- Department of Radiology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
| | - Adel Helmy
- Division of Neurosurgery, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
| |
Collapse
|
20
|
Liu S, Liao Y, He B, Dai B, Zhu Z, Shi J, Huang Y, Zou G, Du C, Shi B. Mandibular resection and defect reconstruction guided by a contour registration-based augmented reality system: A preclinical trial. J Craniomaxillofac Surg 2023:S1010-5182(23)00077-X. [PMID: 37355367 DOI: 10.1016/j.jcms.2023.05.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 02/22/2023] [Accepted: 05/21/2023] [Indexed: 06/26/2023] Open
Abstract
The aim of this study was to verify the feasibility and accuracy of a contour registration-based augmented reality (AR) system in jaw surgery. An AR system was developed to display the interaction between virtual planning and images of the surgical site in real time. Several trials were performed with the guidance of the AR system and the surgical guide. The postoperative cone beam CT (CBCT) data were matched with the preoperatively planned data to evaluate the accuracy of the system by comparing the deviations in distance and angle. All procedures were performed successfully. In nine model trials, distance and angular deviations for the mandible, reconstructed fibula, and fixation screws were 1.62 ± 0.38 mm, 1.86 ± 0.43 mm, 1.67 ± 0.70 mm, and 3.68 ± 0.71°, 5.48 ± 2.06°, 7.50 ± 1.39°, respectively. In twelve animal trials, results of the AR system were compared with the surgical guide. Distance deviations for the bilateral condylar outer poles were 0.93 ± 0.63 mm and 0.81 ± 0.30 mm, respectively (p = 0.68). Distance deviations for the bilateral mandibular posterior angles were 2.01 ± 2.49 mm and 2.89 ± 1.83 mm, respectively (p = 0.50). Distance and angular deviations for the mandible were 1.41 ± 0.61 mm, 1.21 ± 0.18 mm (p = 0.45), and 6.81 ± 2.21°, 6.11 ± 2.93° (p = 0.65), respectively. Distance and angular deviations for the reconstructed tibiofibular bones were 0.88 ± 0.22 mm, 0.84 ± 0.18 mm (p = 0.70), and 6.47 ± 3.03°, 6.90 ± 4.01° (p = 0.84), respectively. This study proposed a contour registration-based AR system to assist surgeons in intuitively observing the surgical plan intraoperatively. The trial results indicated that this system had similar accuracy to the surgical guide.
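Angular deviation between a planned and an achieved axis (for example, of a fixation screw) is the angle between two direction vectors. A minimal sketch, with illustrative vectors rather than the study's measurements:

```python
import numpy as np

def angular_deviation_deg(planned, achieved):
    """Angle in degrees between a planned and an achieved direction vector."""
    a, b = np.asarray(planned, float), np.asarray(achieved, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative screw axes taken from a planned vs. a postoperative CBCT model.
print(angular_deviation_deg([0, 0, 1], [0.12, 0.05, 0.99]))  # ~7.5 degrees
```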
Collapse
Affiliation(s)
- Shaofeng Liu
- Department of Oral and Maxillofacial Surgery, First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China; School and Hospital of Stomatology, Fujian Medical University, Fuzhou, 350004, China
| | - Yunyang Liao
- Department of Oral and Maxillofacial Surgery, First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China; Laboratory of Facial Plastic and Reconstruction, Fujian Medical University, Fuzhou, 350004, China
| | - Bingwei He
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China; Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou, 350108, China
| | - Bowen Dai
- Department of Oral and Maxillofacial Surgery, Second Xiangya Hospital of Central South University, Changsha, 410000, China
| | - Zhaoju Zhu
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China; Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou, 350108, China
| | - Jiafeng Shi
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China; Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou, 350108, China
| | - Yue Huang
- Department of Oral and Maxillofacial Surgery, First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China; Laboratory of Facial Plastic and Reconstruction, Fujian Medical University, Fuzhou, 350004, China
| | - Gengsen Zou
- Department of Oral and Maxillofacial Surgery, First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China; Laboratory of Facial Plastic and Reconstruction, Fujian Medical University, Fuzhou, 350004, China
| | - Chen Du
- Department of Oral and Maxillofacial Surgery, First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China; School and Hospital of Stomatology, Fujian Medical University, Fuzhou, 350004, China
| | - Bin Shi
- Department of Oral and Maxillofacial Surgery, First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China; Laboratory of Facial Plastic and Reconstruction, Fujian Medical University, Fuzhou, 350004, China.
| |
Collapse
|
21
|
Tzelnick S, Rampinelli V, Sahovaler A, Franz L, Chan HHL, Daly MJ, Irish JC. Skull-Base Surgery-A Narrative Review on Current Approaches and Future Developments in Surgical Navigation. J Clin Med 2023; 12:2706. [PMID: 37048788 PMCID: PMC10095207 DOI: 10.3390/jcm12072706] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 03/10/2023] [Accepted: 03/29/2023] [Indexed: 04/07/2023] Open
Abstract
Surgical navigation technology combines patient imaging studies with intraoperative real-time data to improve surgical precision and patient outcomes. The navigation workflow can also include preoperative planning, which can reliably simulate the intended resection and reconstruction. The advantage of this approach in skull-base surgery is that it guides access into a complex three-dimensional area and orients tumors intraoperatively with regard to critical structures, such as the orbit, carotid artery, and brain. This enhances a surgeon's ability to preserve normal anatomy while resecting tumors with adequate margins. The aim of this narrative review is to outline the state of the art and the future directions of surgical navigation in the skull base, focusing on the advantages and pitfalls of this technique. We also present our group's experience in this field within the frame of current research trends.
Collapse
Affiliation(s)
- Sharon Tzelnick
- Division of Head and Neck Surgery, Princess Margaret Cancer Center, University of Toronto, Toronto, ON M5G 2M9, Canada
- Guided Therapeutics (GTx) Program, TECHNA Institute, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Vittorio Rampinelli
- Unit of Otorhinolaryngology—Head and Neck Surgery, Department of Medical and Surgical Specialties, Radiologic Sciences and Public Health, University of Brescia, 25121 Brescia, Italy
- Technology for Health (PhD Program), Department of Information Engineering, University of Brescia, 25121 Brescia, Italy
| | - Axel Sahovaler
- Guided Therapeutics (GTx) Program, TECHNA Institute, University Health Network, Toronto, ON M5G 2C4, Canada
- Head & Neck Surgery Unit, University College London Hospitals, London NW1 2PG, UK
| | - Leonardo Franz
- Department of Neuroscience DNS, Otolaryngology Section, University of Padova, 35122 Padua, Italy
| | - Harley H. L. Chan
- Guided Therapeutics (GTx) Program, TECHNA Institute, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Michael J. Daly
- Guided Therapeutics (GTx) Program, TECHNA Institute, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Jonathan C. Irish
- Division of Head and Neck Surgery, Princess Margaret Cancer Center, University of Toronto, Toronto, ON M5G 2M9, Canada
- Guided Therapeutics (GTx) Program, TECHNA Institute, University Health Network, Toronto, ON M5G 2C4, Canada
| |
Collapse
|
22
|
Bounajem MT, Cameron B, Sorensen K, Parr R, Gibby W, Prashant G, Evans JJ, Karsy M. Improved Accuracy and Lowered Learning Curve of Ventricular Targeting Using Augmented Reality-Phantom and Cadaveric Model Testing. Neurosurgery 2023; 92:884-891. [PMID: 36562619 DOI: 10.1227/neu.0000000000002293] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 09/23/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND Augmented reality (AR) has demonstrated significant potential in neurosurgical cranial, spine, and teaching applications. External ventricular drain (EVD) placement remains a common procedure, but with error rates in targeting between 10% and 40%. OBJECTIVE To evaluate the Novarad VisAR guidance system for the placement of EVDs in phantom and cadaveric models. METHODS Two synthetic ventricular phantom models and a third cadaver model underwent computerized tomography imaging and registration with the VisAR system (Novarad). Root mean square (RMS) error, angular error (γ), and Euclidean distance were measured by multiple methods for various standard EVD placements. RESULTS Computerized tomography measurements on a phantom model (0.5-mm targets) showed a mean Euclidean distance error of 1.20 ± 0.98 mm and γ of 1.25° ± 1.02°. Eight participants placed EVDs in lateral and occipital burr holes using VisAR in a second phantom anatomic ventricular model (mean RMS: 3.9 ± 1.8 mm, γ: 3.95° ± 1.78°). There were no statistically significant differences in accuracy by postgraduate year level, prior AR experience, prior EVD experience, or experience with video games (P > .05). In comparing EVDs placed with anatomic landmarks vs VisAR navigation in a cadaver, VisAR demonstrated significantly better RMS and γ, 7.47 ± 0.94 mm and 7.12° ± 0.97°, respectively (P ≤ .05). CONCLUSION The novel VisAR AR system resulted in accurate placement of EVDs with a rapid learning curve, which may improve clinical treatment and patient safety. Future applications of VisAR can be expanded to other cranial procedures.
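The RMS figures here summarize placement errors over repeated attempts: each placement contributes a Euclidean tip error, and the root mean square is taken across placements. A short sketch with invented offsets (not the study's data):

```python
import numpy as np

# Invented catheter-tip offsets (mm) from the planned target for three placements.
tip_offsets = np.array([[1.2, -0.8, 3.1],
                        [2.7,  0.4, -1.9],
                        [-0.5, 1.6, 2.2]])

distances = np.linalg.norm(tip_offsets, axis=1)  # per-placement Euclidean error
rms = np.sqrt(np.mean(distances ** 2))           # root mean square over placements
print(f"RMS = {rms:.2f} mm")
```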
Collapse
Affiliation(s)
- Michael T Bounajem
- Department of Neurosurgery, Clinical Neurosciences Center, University of Utah, Salt Lake City, Utah, USA
| | | | | | | | - Wendell Gibby
- Novarad, Provo, Utah, USA
- Department of Radiology, University of California-San Diego, San Diego, California, USA
| | - Giyarpuram Prashant
- Department of Neurosurgery, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
| | - James J Evans
- Department of Neurosurgery, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
| | - Michael Karsy
- Department of Neurosurgery, Clinical Neurosciences Center, University of Utah, Salt Lake City, Utah, USA
| |
Collapse
|
23
|
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637 DOI: 10.1016/j.media.2023.102757] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 01/05/2023] [Accepted: 01/18/2023] [Indexed: 01/22/2023]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Collapse
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria.
| | - Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
| | - Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
| | - Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
| | - Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
| | - Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
| | - Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
| |
Collapse
|
24
|
Lin C, Zhang Y, Dong S, Wu J, Zhang C, Wan X, Zhang S. Application of mixed reality-based surgical navigation system in craniomaxillofacial trauma bone reconstruction. HUA XI KOU QIANG YI XUE ZA ZHI = HUAXI KOUQIANG YIXUE ZAZHI = WEST CHINA JOURNAL OF STOMATOLOGY 2022; 40:676-684. [PMID: 36416320 PMCID: PMC9763953 DOI: 10.7518/hxkq.2022.06.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Revised: 10/25/2022] [Indexed: 01/25/2023]
Abstract
OBJECTIVES This study aimed to build a surgical navigation system based on mixed reality (MR) and an optical positioning technique and to evaluate its clinical applicability in craniomaxillofacial trauma bone reconstruction. METHODS We first integrated the software and hardware platforms of the MR-based surgical navigation system and explored the system workflow. The systematic error, target registration error, and osteotomy application error of the system were then analyzed via a 3D-printed skull model experiment. The feasibility of the MR-based surgical navigation system in craniomaxillofacial trauma bone reconstruction was verified via a zygomatico-maxillary complex (ZMC) reduction experiment on the skull model and a preliminary clinical study. RESULTS The systematic error of this MR-based surgical navigation system was 1.23 mm ± 0.52 mm, the target registration error was 2.83 mm ± 1.18 mm, and the osteotomy application error was 3.13 mm ± 1.66 mm. Virtual surgical planning and the reduction of the ZMC model were successfully conducted. In addition, with the guidance of the MR-based navigation system, a frontal bone defect was successfully reconstructed, and the clinical outcome was satisfactory. CONCLUSIONS The MR-based surgical navigation system has advantages in its virtual-real fusion effect and dynamic navigation stability. It provides a new method for doctor-patient communication, education, preoperative planning, and intraoperative navigation in craniomaxillofacial surgery.
Collapse
Affiliation(s)
- Chengzhong Lin
- The 2nd Dental Center, Ninth People's Hospital, Shanghai JiaoTong University School of Medicine; College of Stomatology, Shanghai JiaoTong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology, Shanghai 200011, China
| | - Yong Zhang
- Dept. of Oral and Cranio-Maxillofacial Surgery, Ninth People's Hospital, Shanghai JiaoTong University School of Medicine; College of Stomatology, Shanghai JiaoTong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology, Shanghai 200011, China
| | - Shao Dong
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200011, China
| | - Jinyang Wu
- Dept. of Oral and Cranio-Maxillofacial Surgery, Ninth People's Hospital, Shanghai JiaoTong University School of Medicine; College of Stomatology, Shanghai JiaoTong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology, Shanghai 200011, China
| | - Chuxi Zhang
- Dept. of Oral and Cranio-Maxillofacial Surgery, Ninth People's Hospital, Shanghai JiaoTong University School of Medicine; College of Stomatology, Shanghai JiaoTong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology, Shanghai 200011, China
| | - Xinjun Wan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200011, China
| | - Shilei Zhang
- Dept. of Oral and Cranio-Maxillofacial Surgery, Ninth People's Hospital, Shanghai JiaoTong University School of Medicine; College of Stomatology, Shanghai JiaoTong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology, Shanghai 200011, China
| |
Collapse
|
25
|
Ho S, Liu P, Palombo DJ, Handy TC, Krebs C. The role of spatial ability in mixed reality learning with the HoloLens. ANATOMICAL SCIENCES EDUCATION 2022; 15:1074-1085. [PMID: 34694737 DOI: 10.1002/ase.2146] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 09/21/2021] [Accepted: 10/23/2021] [Indexed: 06/13/2023]
Abstract
The use of mixed reality in science education has been increasing and as such it has become more important to understand how information is learned in these virtual environments. Spatial ability is important in many learning contexts, but especially in neuroanatomy education where learning the locations and spatial relationships between brain regions is paramount. It is currently unclear what role spatial ability plays in mixed reality learning environments, and whether it is different compared to traditional physical environments. To test this, a learning experiment was conducted where students learned neuroanatomy using both mixed reality and a physical plastic model of a brain (N = 27). Spatial ability was assessed and analyzed to determine its effect on performance across the two learning modalities. The results showed that spatial ability facilitated learning in mixed reality (β = 0.21, P = 0.003), but not when using a plastic model (β = 0.08, P = 0.318). A non-significant difference was observed between the modalities in terms of knowledge test performance (d = 0.39, P = 0.052); however, mixed reality was more engaging (d = 0.59, P = 0.005) and learners were more confident in the information they learned compared to using a physical model (d = 0.56, P = 0.007). Overall, these findings suggest that spatial ability is more relevant in virtual learning environments, where the ability to manipulate and interact with an object is diminished or abstracted through a virtual user interface.
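The effect sizes reported here (d) are Cohen's d values. As a rough illustration, the independent-samples, pooled-SD form can be computed as below; the ratings are invented, and a within-subject design like this one would normally use the paired variant instead.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation (independent-samples form)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

# Invented engagement ratings for the two learning modalities.
mixed_reality = [6.1, 5.8, 6.5, 5.9, 6.2, 6.7, 5.5]
plastic_model = [5.2, 5.6, 5.1, 5.8, 5.0, 5.4, 5.3]
print(f"d = {cohens_d(mixed_reality, plastic_model):.2f}")
```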
Collapse
Affiliation(s)
- Simon Ho
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
| | - Pu Liu
- Department of Cellular and Physiological Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Daniela J Palombo
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
| | - Todd C Handy
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
| | - Claudia Krebs
- Department of Cellular and Physiological Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|
26
|
Ravindra VM, Tadlock MD, Gurney JM, Kraus KL, Dengler BA, Gordon J, Cooke J, Porensky P, Belverud S, Milton JO, Cardoso M, Carroll CP, Tomlin J, Champagne R, Bell RS, Viers AG, Ikeda DS. Attitudes Toward Neurosurgery Education for the Nonneurosurgeon: A Survey Study and Critical Analysis of U.S. Military Training Techniques and Future Prospects. World Neurosurg 2022; 167:e1335-e1344. [PMID: 36103986 DOI: 10.1016/j.wneu.2022.09.033] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 09/07/2022] [Indexed: 11/27/2022]
Abstract
BACKGROUND The U.S. military requires medical readiness to support forward-deployed combat operations. Because time and distance to neurosurgical capabilities vary within the deployed trauma system, nonneurosurgeons are required to perform emergent cranial procedures in select cases. It is unclear whether these surgeons have sufficient training in these procedures. METHODS This quality-improvement study involved a voluntary, anonymized specialty-specific survey of active-duty surgeons about their experience and attitudes toward U.S. military emergency neurosurgical training. RESULTS Survey responses were received from 104 general surgeons and 26 neurosurgeons. Among general surgeons, 81% have deployed and 53% received training in emergency neurosurgical procedures before deployment. Only 16% of general surgeons reported participating in craniotomy/craniectomy procedures in the last year. Nine general surgeons reported performing an emergency neurosurgical procedure while on deployment/humanitarian mission, and 87% of respondents expressed interest in further predeployment emergency neurosurgery training. Among neurosurgeons, 81% had participated in training nonneurosurgeons and 73% believe that more comprehensive training for nonneurosurgeons before deployment is needed. General surgeons proposed lower procedure minimums for competency for external ventricular drain placement and craniotomy/craniectomy than did neurosurgeons. Only 37% of general surgeons had used mixed/augmented reality in any capacity previously; for combat procedures, most (90%) would prefer using synchronous supervision via high-fidelity video teleconferencing over mixed reality. CONCLUSIONS These survey results show a gap in readiness for neurosurgical procedures for forward-deployed general surgeons. Capitalizing on capabilities such as mixed/augmented reality would be a force multiplier and a potential means of improving neurosurgical capabilities in the forward-deployed environments.
Collapse
Affiliation(s)
- Vijay M Ravindra
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA; Department of Neurosurgery, University of California San Diego, San Diego, California, USA; Department of Neurosurgery, University of Utah, Salt Lake City, Utah, USA
| | - Matthew D Tadlock
- Department of Surgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA; Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA; 1st Medical Battalion, 1st Marine Logistics Group, Camp Pendleton, California, USA
| | - Jennifer M Gurney
- U.S. Army Institute of Surgical Research, Joint Base San Antonio, San Antonio, Texas, USA
| | - Kristin L Kraus
- Department of Neurosurgery, University of Utah, Salt Lake City, Utah, USA
| | - Bradley A Dengler
- Department of Neurosurgery, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
| | - Jennifer Gordon
- Department of Surgery, U.S. Naval Hospital Okinawa, Okinawa, Japan
| | - Jonathon Cooke
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
| | - Paul Porensky
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
| | - Shawn Belverud
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
| | - Jason O Milton
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
| | - Mario Cardoso
- Department of Brain and Spine Surgery, Naval Medical Center, Portsmouth, Virginia, USA
| | - Christopher P Carroll
- Department of Brain and Spine Surgery, Naval Medical Center, Portsmouth, Virginia, USA
| | - Jeffrey Tomlin
- Department of Brain and Spine Surgery, Naval Medical Center, Portsmouth, Virginia, USA
| | - Roland Champagne
- Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
| | - Randy S Bell
- Department of Neurosurgery, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
| | - Angela G Viers
- Department of Surgery, U.S. Naval Hospital Okinawa, Okinawa, Japan
| | - Daniel S Ikeda
- Department of Neurosurgery, Walter Reed National Military Medical Center, Bethesda, Maryland, USA.
| |
Collapse
|
27
|
Puladi B, Ooms M, Bellgardt M, Cesov M, Lipprandt M, Raith S, Peters F, Möhlhenrich SC, Prescher A, Hölzle F, Kuhlen TW, Modabber A. Augmented Reality-Based Surgery on the Human Cadaver Using a New Generation of Optical Head-Mounted Displays: Development and Feasibility Study. JMIR Serious Games 2022; 10:e34781. [PMID: 35468090 PMCID: PMC9086879 DOI: 10.2196/34781] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Revised: 01/04/2022] [Accepted: 03/05/2022] [Indexed: 12/15/2022] Open
Abstract
Background Although nearly one-third of the world’s disease burden requires surgical care, only a small proportion of digital health applications are directly used in the surgical field. In the coming decades, the application of augmented reality (AR) with a new generation of optical-see-through head-mounted displays (OST-HMDs) like the HoloLens (Microsoft Corp) has the potential to bring digital health into the surgical field. However, for the application to be performed on a living person, proof of performance must first be provided due to regulatory requirements. In this regard, cadaver studies could provide initial evidence. Objective The goal of the research was to develop an open-source system for AR-based surgery on human cadavers using freely available technologies. Methods We tested our system using an easy-to-understand scenario in which fractured zygomatic arches of the face had to be repositioned with visual and auditory feedback to the investigators using a HoloLens. Results were verified with postoperative imaging and assessed in a blinded fashion by 2 investigators. The developed system and scenario were qualitatively evaluated by consensus interview and individual questionnaires. Results The development and implementation of our system was feasible and could be realized in the course of a cadaver study. The AR system was found helpful by the investigators for spatial perception in addition to the combination of visual as well as auditory feedback. The surgical end point could be determined metrically as well as by assessment. Conclusions The development and application of an AR-based surgical system using freely available technologies to perform OST-HMD–guided surgical procedures in cadavers is feasible. Cadaver studies are suitable for OST-HMD–guided interventions to measure a surgical end point and provide an initial data foundation for future clinical trials. The availability of free systems for researchers could be helpful for a possible translation process from digital health to AR-based surgery using OST-HMDs in the operating theater via cadaver studies.
Collapse
Affiliation(s)
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
| | - Mark Ooms
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | - Martin Bellgardt
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
| | - Mark Cesov
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
| | - Myriam Lipprandt
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
| | - Stefan Raith
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | - Florian Peters
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | - Stephan Christian Möhlhenrich
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Department of Orthodontics, Private University of Witten/Herdecke, Witten, Germany
| | - Andreas Prescher
- Institute of Molecular and Cellular Anatomy, University Hospital RWTH Aachen, Aachen, Germany
| | - Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | | | - Ali Modabber
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| |
Collapse
|
28
|
Su S, Lei P, Wang C, Gao F, Zhong D, Hu Y. Mixed Reality Technology in Total Knee Arthroplasty: An Updated Review With a Preliminary Case Report. Front Surg 2022; 9:804029. [PMID: 35495740 PMCID: PMC9053587 DOI: 10.3389/fsurg.2022.804029] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Accepted: 03/16/2022] [Indexed: 11/13/2022] Open
Abstract
Background Augmented reality and mixed reality have been used to help surgeons perform complex surgeries. With the development of technology, mixed reality (MR) technology has been used to improve the success rate of complex hip arthroplasty due to its unique advantages. At present, there are few reports on the application of MR technology in total knee arthroplasty. We present a case of total knee arthroplasty performed with the help of mixed reality technology. Case Presentation We present the case of a 71-year-old woman diagnosed with bilateral knee osteoarthritis with varus deformity, more severe on the right side. After admission, right total knee arthroplasty was performed with the assistance of MR technology. Before the operation, a three-dimensional virtual model of the patient's knee joint was reconstructed for condition analysis, operative planning, and operation simulation. During the operation, the three-dimensional virtual images of the femur and tibia were superimposed on the patient's body, showing the preoperatively designed osteotomy plane, which accurately guided the osteotomy and prosthesis implantation. Conclusions To our knowledge, this is the first report of total knee arthroplasty performed under the guidance of mixed reality technology.
Collapse
Affiliation(s)
- Shilong Su
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, China
- Department of Orthopedics, The First Hospital of Changsha, Changsha, China
| | - Pengfei Lei
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, China
- Department of Orthopedics, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
| | - Chenggong Wang
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, China
| | - Fawei Gao
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, China
| | - Da Zhong
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, China
- *Correspondence: Da Zhong
| | - Yihe Hu
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, China
- Department of Orthopedics, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
| |
Collapse
|
29
|
Peng C, Yang L, Yi W, Yidan L, Yanglingxi W, Qingtao Z, Xiaoyong T, Tang Y, Jia W, Xing Y, Zhiqin Z, Yongbing D. Application of Fused Reality Holographic Image and Navigation Technology in the Puncture Treatment of Hypertensive Intracerebral Hemorrhage. Front Neurosci 2022; 16:850179. [PMID: 35360174 PMCID: PMC8963409 DOI: 10.3389/fnins.2022.850179] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 02/08/2022] [Indexed: 11/13/2022] Open
Abstract
Objective Minimally invasive puncture and drainage (MIPD) of hematomas is the preferred option for appropriate patients with hypertensive intracerebral hemorrhage (HICH). The goal of our research was to introduce MIPD surgery using mixed reality holographic navigation technology (MRHNT). Method We describe the complete workflow for hematoma puncture using MRHNT, including three-dimensional model reconstruction from preoperative CT examination, puncture trajectory design, immersive presentation of the model in the real environment, and hematoma puncture using dual-plane navigation with special head-worn equipment. We collected clinical data on eight patients with HICH who underwent MIPD using MRHNT from March 2021 to August 2021, including the hematoma evacuation rate, operation time, deviation of the drainage tube from its target, postoperative complications, and 2-week postoperative GCS. Result The hematoma puncture workflow using MRHNT was performed in all eight cases; the average hematoma evacuation rate was 47.36 ± 9.16%, the average operation time was 82.14 ± 15.74 min, and the average deviation of the drainage tube from its target was 5.76 ± 0.80 mm. There was no delayed bleeding, acute ischemic stroke, intracranial infection, or epilepsy 2 weeks after surgery. The 2-week postoperative GCS was improved compared with the preoperative GCS. Conclusion We conclude that it is feasible to perform MIPD with MRHNT in patients with HICH. However, the risk of general anesthesia and the highly specialized holographic information processing restrict wider adoption of the technology; further technical innovation, accumulation of case experience, and verification of its superiority are needed.
Collapse
Affiliation(s)
- Chen Peng
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Liu Yang
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Wang Yi
- QINYING Technology Co., Ltd., Chongqing, China
| | - Liang Yidan
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Wang Yanglingxi
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Zhang Qingtao
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Tang Xiaoyong
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Yongbing Tang
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Wang Jia
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Yu Xing
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
| | - Zhu Zhiqin
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Deng Yongbing
- Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
- *Correspondence: Deng Yongbing
| |
Collapse
|
30
|
Uhl C, Hatzl J, Meisenbacher K, Zimmer L, Hartmann N, Böckler D. Mixed-Reality-Assisted Puncture of the Common Femoral Artery in a Phantom Model. J Imaging 2022; 8:jimaging8020047. [PMID: 35200749 PMCID: PMC8874567 DOI: 10.3390/jimaging8020047] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 02/12/2022] [Accepted: 02/14/2022] [Indexed: 12/15/2022] Open
Abstract
Percutaneous femoral arterial access is daily practice in a variety of medical specialties and enables physicians worldwide to perform endovascular interventions. The reported incidence of percutaneous femoral arterial access complications is 3–18%; these complications often result from a suboptimal puncture location due to insufficient visualization of the target vessel. The purpose of this proof-of-concept study was to evaluate the feasibility and the positional error of a mixed-reality (MR)-assisted puncture of the common femoral artery in a phantom model using a commercially available navigation system. In total, 15 MR-assisted punctures were performed. Cone-beam computed tomography angiography (CTA) was used after each puncture to quantify the positional error of needle placement in the axial and sagittal planes. Technical success was achieved in 14/15 cases (93.3%), with a median axial positional error of 1.0 mm (IQR 1.3) and a median sagittal positional error of 1.1 mm (IQR 1.6). The median duration of the registration process and needle insertion was 2 min (IQR 1.0). MR-assisted puncture of the common femoral artery is feasible with acceptable positional errors in a phantom model. Future studies should aim to measure and reduce the positional error resulting from MR registration.
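The positional-error summary here uses medians with interquartile ranges, which is robust to the skew typical of small error samples. A minimal sketch with invented measurements:

```python
import numpy as np

# Invented axial positional errors (mm) for 15 needle placements.
axial_error = np.array([0.4, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0, 1.1,
                        1.2, 1.3, 1.4, 1.6, 1.8, 2.0, 2.2])

q1, med, q3 = np.percentile(axial_error, [25, 50, 75])
print(f"median {med:.1f} mm (IQR {q3 - q1:.1f})")
```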
Collapse
|
31
|
Bori E, Pancani S, Vigliotta S, Innocenti B. Validation and accuracy evaluation of automatic segmentation for knee joint pre-planning. Knee 2021; 33:275-281. [PMID: 34739958 DOI: 10.1016/j.knee.2021.10.016] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Revised: 09/28/2021] [Accepted: 10/12/2021] [Indexed: 02/02/2023]
Abstract
BACKGROUND Proper use of three-dimensional (3D) models generated from medical imaging data in clinical preoperative planning, training, and consultation depends on the previously proven accuracy with which they replicate the patient's anatomy. Therefore, this study investigated the dimensional accuracy of 3D reconstructions of the knee joint generated from computed tomography scans via automatic segmentation by comparing them with 3D models generated through manual segmentation. METHODS Three unpaired, fresh-frozen right legs were investigated. Three-dimensional models of the femur and the tibia of each leg were manually segmented using commercial software and compared in terms of geometrical accuracy with the 3D models automatically segmented using proprietary software. Bony landmarks were identified and used to calculate clinically relevant distances: femoral epicondylar distance; posterior femoral epicondylar distance; femoral trochlear groove length; tibial knee center tubercle distance (TKCTD). Pearson's correlation coefficient and Bland–Altman plots were used to evaluate the level of agreement between measured distances. RESULTS Differences between parameters measured on manually and automatically segmented 3D models were below 1 mm (range: -0.06 to 0.72 mm), except for TKCTD (between 1.00 and 1.40 mm in two specimens). In addition, there was a strong, significant correlation between the measurements. CONCLUSIONS The results obtained are comparable to those reported in previous studies investigating the accuracy of bone 3D reconstruction. Automatic segmentation techniques can be used to quickly reconstruct reliable 3D models of bone anatomy, and these results may help this technology spread in preoperative and operative settings, where it has shown considerable potential.
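Bland–Altman agreement, as used here, reduces each measurement pair to a difference, then reports the mean difference (bias) and the 95% limits of agreement. A hedged sketch with invented paired distances:

```python
import numpy as np

# Invented paired landmark distances (mm): manual vs. automatic segmentation.
manual    = np.array([75.2, 61.8, 44.0, 80.5, 38.7, 55.1])
automatic = np.array([75.6, 61.2, 44.5, 81.1, 38.4, 55.9])

diff = automatic - manual
bias = diff.mean()                   # systematic offset between methods
half_loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement
print(f"bias {bias:.2f} mm, LoA {bias - half_loa:.2f} to {bias + half_loa:.2f} mm")
```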
Collapse
Affiliation(s)
- Edoardo Bori
- BEAMS Department, Université Libre de Bruxelles, Bruxelles, Belgium.
| | | | | | | |
Collapse
|
32
|
Neves CA, Leuze C, Gomez AM, Navab N, Blevins N, Vaisbuch Y, McNab JA. Augmented Reality for Retrosigmoid Craniotomy Planning. Skull Base Surg 2021; 83:e564-e573. [DOI: 10.1055/s-0041-1735509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 07/28/2021] [Indexed: 10/20/2022]
Abstract
While medical imaging data have traditionally been viewed on two-dimensional (2D) displays, augmented reality (AR) allows physicians to project the medical imaging data onto patients' bodies to locate important anatomy. We present a surgical AR application to plan the retrosigmoid craniotomy, a standard approach to access the posterior fossa and the internal auditory canal. As a simple and accurate alternative to surface landmarks and conventional surgical navigation systems, our AR application augments the surgeon's vision to guide the optimal location of cortical bone removal. In this work, two surgeons performed a retrosigmoid approach 14 times on eight cadaver heads. In each case, the surgeon manually aligned a computed tomography (CT)-derived virtual rendering of the sigmoid sinus on the real cadaveric head using a see-through AR display, allowing the surgeon to plan and perform the craniotomy accordingly. Postprocedure CT scans were acquired to assess the accuracy of the retrosigmoid craniotomies with respect to their intended location relative to the dural sinuses. The two surgeons had mean margins of davg = 0.6 ± 4.7 mm and davg = 3.7 ± 2.3 mm between the osteotomy border and the dural sinuses over all their cases, respectively, with positive margins in 12 of the 14 cases. The intended surgical approach to the internal auditory canal was successfully achieved in all cases using the proposed method, and the relatively small and consistent margins suggest that our system has the potential to be a valuable tool for planning a variety of similar skull-base procedures.
Collapse
Affiliation(s)
- Caio A. Neves
- Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
- Faculty of Medicine, University of Brasília, Brasília, Brazil
| | - Christoph Leuze
- Department of Radiology, Stanford School of Medicine, Stanford, United States
| | - Alejandro M. Gomez
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Germany
- Laboratory for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
| | - Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Germany
- Laboratory for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
| | - Nikolas Blevins
- Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
| | - Yona Vaisbuch
- Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
| | - Jennifer A. McNab
- Department of Radiology, Stanford School of Medicine, Stanford, United States
| |
Collapse
|
33
|
Fick T, van Doormaal JAM, Tosic L, van Zoest RJ, Meulstee JW, Hoving EW, van Doormaal TPC. Fully automatic brain tumor segmentation for 3D evaluation in augmented reality. Neurosurg Focus 2021; 51:E14. [PMID: 34333477 DOI: 10.3171/2021.5.focus21200] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 05/18/2021] [Indexed: 11/06/2022]
Abstract
OBJECTIVE For currently available augmented reality workflows, 3D models need to be created with manual or semiautomatic segmentation, which is a time-consuming process. The authors created an automatic segmentation algorithm that generates 3D models of skin, brain, ventricles, and contrast-enhancing tumor from a single T1-weighted MR sequence and embedded this model into an automatic workflow for 3D evaluation of anatomical structures with augmented reality in a cloud environment. In this study, the authors validate the accuracy and efficiency of this automatic segmentation algorithm for brain tumors and compared it with a manually segmented ground truth set. METHODS Fifty contrast-enhanced T1-weighted sequences of patients with contrast-enhancing lesions measuring at least 5 cm3 were included. All slices of the ground truth set were manually segmented. The same scans were subsequently run in the cloud environment for automatic segmentation. Segmentation times were recorded. The accuracy of the algorithm was compared with that of manual segmentation and evaluated in terms of Sørensen-Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile of Hausdorff distance (HD95). RESULTS The mean ± SD computation time of the automatic segmentation algorithm was 753 ± 128 seconds. The mean ± SD DSC was 0.868 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm. Meningioma (mean 0.89 and median 0.92) showed greater DSC than metastasis (mean 0.84 and median 0.85). Automatic segmentation had greater accuracy for measuring DSC (mean 0.86 and median 0.87) and HD95 (mean 3.62 mm and median 3.11 mm) of supratentorial metastasis than those of infratentorial metastasis (mean 0.82 and median 0.81 for DSC; mean 5.26 mm and median 4.72 mm for HD95). CONCLUSIONS The automatic cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in everyday clinical practice by providing 3D augmented reality visualization of contrast-enhancing intracranial lesions measuring at least 5 cm3. The next steps involve incorporation of other sequences and improving accuracy with 3D fine-tuning in order to expand the scope of augmented reality workflow.
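DSC, ASSD, and HD95 are standard overlap and surface-distance metrics for segmentation validation. The sketch below computes DSC and a surface-based HD95 on toy binary volumes using SciPy distance transforms; it is an illustration of the metrics, not the authors' evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    """Boundary voxels of a boolean mask."""
    return mask & ~binary_erosion(mask)

def dice(a, b):
    """Sørensen-Dice similarity coefficient of two boolean masks."""
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance (HD95), in spacing units."""
    sa, sb = surface(a), surface(b)
    # Distance of every voxel to the nearest surface voxel of the other mask.
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    return np.percentile(np.concatenate([d_to_b[sa], d_to_a[sb]]), 95)

# Toy volumes: two overlapping 10-voxel cubes in a 24^3 grid.
a = np.zeros((24, 24, 24), bool); a[4:14, 4:14, 4:14] = True
b = np.zeros((24, 24, 24), bool); b[6:16, 6:16, 6:16] = True
print(f"DSC={dice(a, b):.3f}  HD95={hd95(a, b):.1f}")
```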
Affiliation(s)
- Tim Fick: Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Jesse A M van Doormaal: Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Lazar Tosic: Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland
- Renate J van Zoest: Department of Neurology and Neurosurgery, Curaçao Medical Center, Willemstad, Curaçao
- Jene W Meulstee: Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Eelco W Hoving: Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P C van Doormaal: Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland

34
Qi Z, Li Y, Xu X, Zhang J, Li F, Gan Z, Xiong R, Wang Q, Zhang S, Chen X. Holographic mixed-reality neuronavigation with a head-mounted device: technical feasibility and clinical application. Neurosurg Focus 2021; 51:E22. [PMID: 34333462 DOI: 10.3171/2021.5.focus21175]
Abstract
OBJECTIVE The authors aimed to evaluate the technical feasibility of a mixed-reality neuronavigation (MRN) system with a wearable head-mounted device (HMD) and to determine its clinical application and accuracy. METHODS A semiautomatic registration MRN system on HoloLens smart glasses was developed and tested for accuracy and feasibility. Thirty-seven patients with intracranial lesions were prospectively identified. For each patient, multimodal imaging-based holograms of lesions, markers, and surrounding eloquent structures were created and then imported to the MRN HMD. After point-based registration, the holograms were projected onto the patient's head and observed through the HMD. The contour of the holograms was compared with standard neuronavigation (SN). The projection of the lesion boundaries perceived by the neurosurgeon on the patient's scalp was then marked with MRN and SN, and the distance between the two contours was measured to assess the accuracy of MRN. RESULTS MRN localization was achieved in all patients. The mean additional time required for MRN was 36.3 ± 6.3 minutes, of which the mean registration time was 2.6 ± 0.9 minutes. Preparation time trended shorter as neurosurgeons gained experience with the MRN system. The overall median deviation was 4.1 mm (IQR 3.0-4.7 mm), and 81.1% of the lesions localized by MRN were highly consistent with SN (deviation < 5.0 mm). There was a significant difference between the supine position and the prone position (3.7 ± 1.1 mm vs 5.4 ± 0.9 mm, p = 0.001). The magnitudes of the deviation vectors did not correlate with lesion volume (p = 0.126) or depth (p = 0.128). There was no significant difference between operators in additional operating time (37.4 ± 4.8 minutes vs 34.6 ± 4.8 minutes, p = 0.237) or localization deviation (3.7 ± 1.0 mm vs 4.6 ± 1.5 mm, p = 0.070). CONCLUSIONS This study presented a complete, clinically applicable workflow on an easy-to-use MRN system using a wearable HMD and demonstrated its technical feasibility and accuracy. Further development is required to improve the accuracy and clinical efficacy of this system.
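The point-based registration step at the heart of such systems is, in its simplest form, a least-squares rigid fit of paired landmarks (the Kabsch/Horn solution via SVD). A minimal sketch follows; the marker coordinates are invented, and the HoloLens-side implementation is not shown.

```python
# Least-squares rigid registration of paired 3D landmarks (Kabsch/Horn).
import numpy as np

def rigid_register(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Invented fiducials: image space vs. (rotated + translated) world space.
image_pts = np.array([[0, 0, 0], [80, 0, 0], [0, 60, 0], [0, 0, 50.0]])
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
world_pts = image_pts @ R_true.T + np.array([5.0, 2.0, -3.0])

R, t = rigid_register(image_pts, world_pts)
resid = np.linalg.norm(image_pts @ R.T + t - world_pts, axis=1)
print(f"RMS residual: {np.sqrt((resid**2).mean()):.2e} mm")
```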
Affiliation(s)
- Ziyu Qi: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China; School of Medicine, Nankai University, Tianjin, China
- Ye Li: Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xinghua Xu: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Jiashu Zhang: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Fangye Li: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Zhichao Gan: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China; School of Medicine, Nankai University, Tianjin, China
- Ruochu Xiong: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Qun Wang: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Shiyu Zhang: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Xiaolei Chen: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China

35
Evaluation of a Wearable AR Platform for Guiding Complex Craniotomies in Neurosurgery. Ann Biomed Eng 2021; 49:2590-2605. [PMID: 34297263 DOI: 10.1007/s10439-021-02834-8]
Abstract
Today, neuronavigation is widely used in daily clinical routine to perform safe and efficient surgery. Augmented reality (AR) interfaces can provide anatomical models and preoperative planning contextually blended with the real surgical scenario, overcoming the limitations of traditional neuronavigators. This study aims to demonstrate the reliability of a new-concept AR headset in navigating complex craniotomies and to prove the efficacy of a patient-specific, template-based methodology for fast, non-invasive, and fully automatic planning-to-patient registration. The AR platform's navigation performance was assessed with an in-vitro study whose goal was twofold: to measure the real-to-virtual 3D target visualization error (TVE), and to assess navigation accuracy through a user study involving 10 subjects tracing a complex craniotomy. The feasibility of the template-based registration was preliminarily tested on a volunteer. The TVE mean and standard deviation were 1.3 and 0.6 mm. The results of the user study, over 30 traced craniotomies, showed that 97% of the trajectory length was traced within an error margin of 1.5 mm, and 92% within a margin of 1 mm. The in-vivo test confirmed the feasibility and reliability of the patient-specific registration template. The proposed AR headset allows ergonomic and intuitive use of preoperative planning and can represent a valid option to support neurosurgical tasks.
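The length-weighted "percentage of trace within margin" figures quoted above can be reproduced as follows; the planned curve, the noisy trace, and the margins are synthetic stand-ins, not the study's data.

```python
# Fraction of a traced craniotomy path lying within a distance margin of
# the planned curve, weighted by trace segment length. Synthetic data.
import numpy as np
from scipy.spatial import cKDTree

t = np.linspace(0, 2 * np.pi, 400)
planned = np.c_[30 * np.cos(t), 20 * np.sin(t), np.zeros_like(t)]  # planned path (mm)
rng = np.random.default_rng(0)
traced = planned + rng.normal(0, 0.5, planned.shape)               # user's trace

d = cKDTree(planned).query(traced)[0]            # distance of each trace sample
seg = np.linalg.norm(np.diff(traced, axis=0), axis=1)
w = (np.r_[seg, 0.0] + np.r_[0.0, seg]) / 2      # length weight per sample
for margin in (1.0, 1.5):
    frac = w[d <= margin].sum() / w.sum()
    print(f"within {margin} mm: {100 * frac:.1f}% of trace length")
```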

36
The utility of augmented reality in lateral skull base surgery: A preliminary report. Am J Otolaryngol 2021; 42:102942. [PMID: 33556837 DOI: 10.1016/j.amjoto.2021.102942]
Abstract
OBJECTIVE To discuss the utility of augmented reality in lateral skull base surgery. PATIENTS Those undergoing lateral skull base surgery at our institution. INTERVENTION(S) Cerebellopontine angle tumor resection using an augmented reality interface. MAIN OUTCOME MEASURE(S) Ease of use, utility of, and future directions of augmented reality in lateral skull base surgery. RESULTS Anecdotally, we have found an augmented reality interface helpful in simulating cerebellopontine angle tumor resection as well as in planning the incision and craniotomy. CONCLUSIONS Augmented reality has the potential to be a useful adjunct in lateral skull base surgery, but further study with larger series is needed.

37
Li G, Cao Z, Wang J, Zhang X, Zhang L, Dong J, Lu G. Mixed reality models based on low-dose computed tomography technology in nephron-sparing surgery are better than models based on normal-dose computed tomography. Quant Imaging Med Surg 2021; 11:2658-2668. [PMID: 34079731 DOI: 10.21037/qims-20-956]
Abstract
Background Nephron-sparing surgery has been widely applied in the treatment of renal tumors, and previous studies have confirmed the advantages of mixed reality technology in surgery. This study aimed to explore the optimization of mixed reality technology and its application value in nephron-sparing surgery. Methods In this prospective study of 150 patients with complex renal tumors (RENAL nephrometry score ≥7) who underwent nephron-sparing surgery, patients were randomly divided into Group A (normal-dose mixed reality, n=50), Group B (low-dose mixed reality, n=50), and Group C (traditional computed tomography images, n=50). Groups A and C received the normal-dose computed tomography scan protocol (120 kVp, 400 mA, and 350 mgI/mL), while Group B received the low-dose protocol (80 kVp, automatic tube current modulation, and 320 mgI/mL). All computed tomography data were transmitted to a three-dimensional visualization workstation for modeling and mixed reality imaging. Two senior surgeons evaluated mixed reality quality. Objective and perioperative indexes were calculated and compared. Results Compared with Group A, the radiation effective dose in Group B was 39.6% lower. The subjective scores of mixed reality quality in Group B were significantly higher than those of Group A (Z=-4.186, P<0.001). Inter-observer agreement between the two senior surgeons on mixed reality quality was excellent (κ=0.840, P<0.001). The perioperative indexes of the mixed reality groups differed significantly from those of the computed tomography image group (all P<0.017), and more cases underwent nephron-sparing surgery in the mixed reality groups than in the computed tomography image group (P<0.0017). Conclusions Low-dose computed tomography technology can be effectively applied to mixed reality optimization, reducing the effective dose and improving mixed reality quality. Optimized mixed reality can significantly increase the number of successful nephron-sparing surgeries and improve perioperative indexes.
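The inter-observer agreement above (κ = 0.840) is a Cohen's kappa; the abstract does not say whether a simple or weighted variant was used, so this sketch with invented quality scores shows both.

```python
# Cohen's kappa between two raters' mixed-reality quality scores.
# The scores below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

surgeon_a = [4, 5, 3, 4, 4, 2, 5, 3, 4, 5]
surgeon_b = [4, 5, 3, 3, 4, 2, 5, 3, 4, 4]

print("unweighted kappa:", round(cohen_kappa_score(surgeon_a, surgeon_b), 3))
print("quadratic-weighted kappa:",
      round(cohen_kappa_score(surgeon_a, surgeon_b, weights="quadratic"), 3))
```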
Affiliation(s)
- Guan Li: Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, China
- Zhiqiang Cao: Department of Urology, General Hospital of Northern Theater Command, Shenyang, China
- Jinbao Wang: Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Xin Zhang: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Longjiang Zhang: Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, China
- Jie Dong: Department of Urology, Jinling Hospital, Medical School of Nanjing University, Nanjing, China
- Guangming Lu: Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, China

38
Patient-specific virtual and mixed reality for immersive, experiential anatomy education and for surgical planning in temporal bone surgery. Auris Nasus Larynx 2021; 48:1081-1091. [PMID: 34059399 DOI: 10.1016/j.anl.2021.03.009]
Abstract
OBJECTIVE The recent development of extended reality technology has attracted interest in medicine. We explored the use of patient-specific virtual reality (VR) and mixed reality (MR) temporal bone models in anatomical teaching, pre-operative surgical planning, and intra-operative surgical referencing. METHODS VR and MR temporal bone models were created and visualized on a head-mounted display (HMD) and an MR headset, respectively, via a novel webservice that allows users to convert computed tomography images to VR and MR models without specific programming knowledge. Eleven otorhinolaryngology trainees and specialists were asked to manipulate the healthy VR temporal bone model and to assess its validity by filling out a questionnaire. Additionally, VR and MR pathological models of a petrous apex cholesteatoma were used for surgical planning pre-operatively and for referring to the anatomy during surgery. RESULTS Most participants viewed the VR model favorably and considered the HMD superior to a flat computer screen; 91% agreed or somewhat agreed that VR through an HMD is cost effective. In addition, the VR pathological model was used for planning and sharing the surgical approach during a pre-operative surgical conference, and the MR headset was worn intra-operatively to clarify the relationship between the pathological lesion and vital anatomical structures. CONCLUSION Regardless of the participants' training level in otorhinolaryngology or VR experience, all agreed that the VR temporal bone model is useful for anatomical education. Furthermore, the creation of patient-specific VR and MR models using the webservice, and their pre- and intra-operative use, indicated their potential as an innovative adjunctive surgical instrument.
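The abstract does not detail how the webservice turns CT into headset-ready models; a common minimal pipeline is intensity thresholding plus marching cubes, sketched here with a placeholder volume and an assumed bone threshold of roughly 300 HU.

```python
# CT volume -> triangle mesh via marching cubes (scikit-image). The random
# volume, spacing, and 300 HU threshold are placeholders, not the paper's.
import numpy as np
from skimage import measure

ct = np.random.default_rng(1).normal(0, 200, (128, 128, 128))  # stand-in HU volume
spacing = (0.6, 0.4, 0.4)                                      # mm per voxel axis

verts, faces, normals, _ = measure.marching_cubes(ct, level=300.0, spacing=spacing)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
# verts/faces would then be exported (e.g., STL/glTF) for the VR/MR headset.
```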

39
Fick T, van Doormaal JAM, Hoving EW, Regli L, van Doormaal TPC. Holographic patient tracking after bed movement for augmented reality neuronavigation using a head-mounted display. Acta Neurochir (Wien) 2021; 163:879-884. [PMID: 33515122 PMCID: PMC7966201 DOI: 10.1007/s00701-021-04707-4]
Abstract
BACKGROUND Holographic neuronavigation has several potential advantages compared to conventional neuronavigation systems. We present the first report of a holographic neuronavigation system with patient-to-image registration and patient tracking with a reference array using an augmented reality head-mounted display (AR-HMD). METHODS Three patients undergoing an intracranial neurosurgical procedure were included in this pilot study. The relevant anatomy was first segmented in 3D and then uploaded as a holographic scene in our custom neuronavigation software. Registration was performed using point-based matching of anatomical landmarks. We measured the fiducial registration error (FRE) as the outcome measure for registration accuracy. A custom-made reference array with QR codes was integrated into the neurosurgical setup and used for patient tracking after bed movement. RESULTS Six registrations were performed with a mean FRE of 8.5 mm. Patient tracking was achieved with no visual difference between the registration before and after movement. CONCLUSIONS This first report shows a proof of principle of intraoperative patient tracking using a standalone holographic neuronavigation system. The navigation accuracy should be further optimized to be clinically applicable. However, it is likely that this technology will be incorporated into future neurosurgical workflows because the system improves the surgeon's spatial anatomical understanding.
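The reference-array idea reduces to rigid-transform algebra: if the array is fixed to the patient, the hologram's pose relative to the array is constant, so after bed movement the hologram can be re-anchored from the array's newly tracked pose alone. A sketch with invented 4x4 poses (not the authors' software) follows.

```python
# Re-anchoring a hologram from a tracked reference array after bed movement.
import numpy as np

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = (x, y, z)
    return T

def hologram_after_move(T_holo, T_array_before, T_array_after):
    """Hologram pose relative to the array is constant; re-apply it."""
    T_holo_in_array = np.linalg.inv(T_array_before) @ T_holo
    return T_array_after @ T_holo_in_array

T_holo = translation(0.10, 0.00, 0.30)          # hologram pose before movement (m)
T_array_before = translation(0.20, 0.05, 0.30)  # tracked QR array, before
T_array_after = translation(0.20, 0.25, 0.28)   # array re-detected after bed move
print(hologram_after_move(T_holo, T_array_before, T_array_after)[:3, 3])
```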
Affiliation(s)
- T Fick: Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, The Netherlands
- J A M van Doormaal: Department of Oral and Maxillofacial Surgery, University Medical Centre Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- E W Hoving: Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, The Netherlands; Department of Neurosurgery, University Medical Centre Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- L Regli: Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Rämistrasse 100, 8091 Zürich, Switzerland
- T P C van Doormaal: Department of Neurosurgery, University Medical Centre Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands; Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Rämistrasse 100, 8091 Zürich, Switzerland

40
Scherl C, Stratemeier J, Rotter N, Hesser J, Schönberg SO, Servais JJ, Männle D, Lammert A. Augmented Reality with HoloLens® in Parotid Tumor Surgery: A Prospective Feasibility Study. ORL J Otorhinolaryngol Relat Spec 2021; 83:439-448. [PMID: 33784686 DOI: 10.1159/000514640]
Abstract
INTRODUCTION Augmented reality can improve the planning and execution of surgical procedures. Head-mounted devices such as the HoloLens® (Microsoft, Redmond, WA, USA) are particularly suitable because they are controlled by hand gestures and enable contactless handling in a sterile environment. OBJECTIVES So far, these systems have not found their way into the operating room for surgery of the parotid gland. This study explored the feasibility and accuracy of augmented reality-assisted parotid surgery. METHODS 2D MRI holographic images were created, and 3D holograms were reconstructed from MRI DICOM files and made visible via the HoloLens. 2D MRI slices were scrolled through, 3D images were rotated, and 3D structures were shown and hidden using hand gestures alone. The 3D model and the patient were aligned manually. RESULTS The use of augmented reality with the HoloLens in parotid surgery was feasible, and gestures were recognized correctly. The mean accuracy of superimposition of the holographic model onto the patient's anatomy was 1.3 cm. Highly significant differences were seen in the registration position error between central and peripheral structures (p = 0.0059), with the smallest deviation centrally (10.9 mm) and the largest peripherally (19.6 mm). CONCLUSION This pilot study offers a first proof of concept of the clinical feasibility of the HoloLens for parotid tumor surgery. Workflow is not affected, and additional information is provided. Surgical performance could become safer through the navigation-like application of reality-fused 3D holograms, and ergonomics improve without compromising sterility. Superimposition of the 3D holograms onto the surgical field was possible, but further refinement is necessary to improve the accuracy.
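One plausible geometric reading of the central-versus-peripheral difference is lever-arm amplification: a rotational registration error of angle θ displaces a point at radius r by a chord of 2r·sin(θ/2), roughly r·θ, so deviation grows with distance from the registration center. The angle and radii below are purely illustrative, not taken from the study.

```python
# Displacement caused by a small rotational misregistration at increasing
# distance from the registration center. Illustrative numbers only.
import numpy as np

theta = np.deg2rad(3.0)                 # assumed angular registration error
for r_mm in (20, 60, 100):              # central ... peripheral structures
    d = 2 * r_mm * np.sin(theta / 2)    # chord length at radius r
    print(f"radius {r_mm:3d} mm -> deviation {d:.1f} mm")
```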
Affiliation(s)
- Claudia Scherl: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Johanna Stratemeier: Institute of Experimental Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Nicole Rotter: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jürgen Hesser: Institute of Experimental Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Stefan O Schönberg: Department of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jérôme J Servais: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- David Männle: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Anne Lammert: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany

41
Tang ZN, Soh HY, Hu LH, Yu Y, Zhang WB, Peng X. [Application of mixed reality technique for the surgery of oral and maxillofacial tumors]. Beijing Da Xue Xue Bao Yi Xue Ban 2020; 52:1124-1129. Chinese. [PMID: 33331325 PMCID: PMC7745289 DOI: 10.19723/j.issn.1671-167x.2020.06.023]
Abstract
OBJECTIVE To explore the application of the mixed reality technique in the surgery of oral and maxillofacial tumors. METHODS Patients with a diagnosis of an oral and maxillofacial tumor who were referred to the Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, from December 2018 to January 2020 were selected. The preoperative contrast-enhanced computed tomography data of the patients were imported into the StarAtlas Holographic Medical Imaging System (Visual 3D Corp., Beijing, China). Three-dimensional (3D) models of the tumor and key structures, such as bone and vessels, were reconstructed to present their spatial relationships, followed by delineation of the key structures and preoperative virtual surgical planning. Using the mixed reality technique, the real-time 3D model was displayed stereotactically in the surgical site. While remaining sterile during the operation, the surgeon could use simple gestures to adjust the 3D model and observe the location, extent, and size of the tumor and the key structures adjacent to it. The mixed reality technique was used to assist the operation: 3D model registration was performed for guidance before tumor excision, and intraoperative real-time verification was performed during tumor exposure and after excision of the tumor. A Likert scale was used to evaluate the application of the mixed reality technique after the operation. RESULTS Eight patients underwent mixed reality-assisted tumor resection, and all operations were completed successfully. The average time for 3D model registration was 12.0 minutes. In all cases, the surgeon could intuitively and three-dimensionally observe the 3D model of the tumor and the surrounding anatomical structures and could adjust the model during the operation. The Likert scale results showed that the mixed reality technique scored highly for perceptual accuracy, help in locating anatomical structures, model guidance during surgery, and potential for improving surgical safety (4.22, 4.19, 4.16, and 4.28 points, respectively). All eight patients healed well without perioperative complications. CONCLUSION By providing real-time stereotactic visualization of the surgical site anatomy and guiding the operation through the 3D model, the mixed reality technique can improve the accuracy and safety of the excision of oral and maxillofacial tumors.
Affiliation(s)
- Tang Zunan, Soh Hui Yuh, Hu Leihao, Yu Yao, Zhang Wenbo, Peng Xin: Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China

43
Andrews CM, Henry AB, Soriano IM, Southworth MK, Silva JR. Registration Techniques for Clinical Applications of Three-Dimensional Augmented Reality Devices. IEEE J Transl Eng Health Med 2020; 9:4900214. [PMID: 33489483 PMCID: PMC7819530 DOI: 10.1109/jtehm.2020.3045642]
Abstract
Many clinical procedures would benefit from direct and intuitive real-time visualization of anatomy, surgical plans, or other information crucial to the procedure. Three-dimensional augmented reality (3D-AR) is an emerging technology that has the potential to assist physicians with spatial reasoning during clinical interventions. The most intriguing applications of 3D-AR involve visualizations of anatomy or surgical plans that appear directly on the patient. However, commercially available 3D-AR devices have spatial localization errors that are too large for many clinical procedures. For this reason, a variety of approaches for improving 3D-AR registration accuracy have been explored. The focus of this review is on the methods, accuracy, and clinical applications of registering 3D-AR devices with the clinical environment. The works cited represent a variety of approaches for registering holograms to patients, including manual registration, computer vision-based registration, and registrations that incorporate external tracking systems. Evaluations of user accuracy when performing clinically relevant tasks suggest that accuracies of approximately 2 mm are feasible. 3D-AR device limitations due to the vergence-accommodation conflict or other factors attributable to the headset hardware add on the order of 1.5 mm of error compared to conventional guidance. Continued improvements to 3D-AR hardware will decrease these sources of error.
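Independent error sources are conventionally combined in quadrature. Treating the ~2 mm task accuracy and the ~1.5 mm headset-attributable error quoted above as independent is our simplifying assumption, not the authors' analysis, but it gives a quick budget.

```python
# Root-sum-of-squares combination of two independent error sources.
import math

registration_err = 2.0   # mm, user task accuracy reported as feasible
headset_err = 1.5        # mm, vergence-accommodation / display effects
print(f"combined error ~ {math.hypot(registration_err, headset_err):.1f} mm")
```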
Affiliation(s)
- Christopher M. Andrews: Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, MO 63130, USA; SentiAR, Inc., St. Louis, MO 63108, USA
- Jonathan R. Silva: Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, MO 63130, USA

44
Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. [PMID: 33283563 DOI: 10.1080/17434440.2021.1860750]
Abstract
Background: Research proves that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying virtual reality (VR), augmented reality (AR), and mixed reality (MR) in surgical simulators increases their fidelity, level of immersion, and overall experience. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR, and MR in distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels: Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen: Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria; The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China

45
Fick T, van Doormaal JAM, Hoving EW, Willems PWA, van Doormaal TPC. Current Accuracy of Augmented Reality Neuronavigation Systems: Systematic Review and Meta-Analysis. World Neurosurg 2020; 146:179-188. [PMID: 33197631 DOI: 10.1016/j.wneu.2020.11.029]
Abstract
BACKGROUND Augmented reality neuronavigation (ARN) systems can overlay three-dimensional anatomy and disease without the need for a two-dimensional external monitor. Accuracy is crucial for their clinical applicability. We performed a systematic review of the reported accuracy of ARN systems and compared them with the accuracy of conventional infrared neuronavigation (CIN). METHODS PubMed and Embase were searched for ARN and CIN systems. For ARN, the type of system, method of patient-to-image registration, accuracy method, and accuracy of the system were noted. For CIN, navigation accuracy, expressed as the target registration error (TRE), was noted. A meta-analysis was performed comparing the TRE of ARN and CIN systems. RESULTS Thirty-five studies were included, 12 for ARN and 23 for CIN. ARN systems could be divided into head-mounted displays and heads-up displays. In ARN, four methods of patient-to-image registration were encountered, of which point-pair matching was the most frequently used. Five methods for assessing accuracy were described. Ninety-four TRE measurements of ARN systems were compared with 9058 TRE measurements of CIN systems. The mean TRE was 2.5 mm (95% confidence interval, 0.7-4.4) for ARN systems and 2.6 mm (95% confidence interval, 2.1-3.1) for CIN systems. CONCLUSIONS In ARN, there seems to be a lack of agreement regarding the best method to assess accuracy. Nevertheless, ARN systems seem able to achieve an accuracy comparable to that of CIN systems. Future studies should be prospective and compare TREs, which should be measured in a standardized fashion.
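A pooled mean TRE with a 95% CI, as reported above, can be sketched with a fixed-effect, inverse-variance model; the review's exact meta-analytic method is not restated in the abstract, and the three studies below are fabricated for illustration.

```python
# Fixed-effect inverse-variance pooling of per-study mean TREs.
import numpy as np

means = np.array([2.1, 3.0, 2.4])   # mean TRE per study (mm), fabricated
sds = np.array([0.8, 1.2, 0.9])     # per-study SD
ns = np.array([30, 25, 40])         # measurements per study

se = sds / np.sqrt(ns)              # standard error of each study mean
w = 1.0 / se**2                     # inverse-variance weights
pooled = (w * means).sum() / w.sum()
pooled_se = np.sqrt(1.0 / w.sum())
print(f"pooled TRE {pooled:.2f} mm "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
```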
Affiliation(s)
- Tim Fick: Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Jesse A M van Doormaal: Department of Oral and Maxillofacial Surgery, University Medical Centre Utrecht, Utrecht, The Netherlands
- Eelco W Hoving: Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Peter W A Willems: Department of Neurosurgery, University Medical Centre Utrecht, Utrecht, The Netherlands
- Tristan P C van Doormaal: Department of Neurosurgery, University Medical Centre Utrecht, Utrecht, The Netherlands; Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland

46
van Doormaal TPC, van Doormaal JAM, Mensink T. Clinical Accuracy of Holographic Navigation Using Point-Based Registration on Augmented-Reality Glasses. Oper Neurosurg (Hagerstown) 2020; 17:588-593. [PMID: 31081883 PMCID: PMC6995446 DOI: 10.1093/ons/opz094]
Abstract
BACKGROUND As current augmented-reality (AR) smart glasses are self-contained, powerful computers that project 3-dimensional holograms that can maintain their position in physical space, they could theoretically be used as a low-cost, stand-alone neuronavigation system. OBJECTIVE To determine the feasibility and accuracy of holographic neuronavigation (HN) using AR smart glasses. METHODS We programmed a fully functioning neuronavigation system on commercially available smart glasses (HoloLens®, Microsoft, Redmond, Washington) and tested its accuracy and feasibility in the operating room. The fiducial registration error (FRE) was measured for both HN and conventional neuronavigation (CN) (Brainlab, Munich, Germany) using point-based registration on a plastic head model. Subsequently, we measured HN and CN FRE in 3 patients. RESULTS A stereoscopic view of the holograms was successfully achieved in all experiments. In the plastic head measurements, the mean HN FRE was 7.2 ± 1.8 mm compared with a mean CN FRE of 1.9 ± 0.45 mm (mean difference: -5.3 mm; 95% confidence interval [CI]: -6.7 to -3.9). In the 3 patients, the mean HN FRE was 4.4 ± 2.5 mm compared with a mean CN FRE of 3.6 ± 0.5 mm (mean difference: -0.8 mm; 95% CI: -3.0 to 4.6). CONCLUSION Owing to the potential benefits and promising results, we believe that HN could eventually find application in operating rooms. However, several improvements will have to be made before the device can be used in clinical practice.
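FRE as reported here is conventionally the root-mean-square (sometimes the mean) of the residual fiducial distances after registration; a minimal sketch with invented coordinates follows.

```python
# Fiducial registration error (FRE) as the RMS residual distance.
import numpy as np

def fre(registered, image):
    """RMS distance between registered and image-space fiducials (mm)."""
    d = np.linalg.norm(registered - image, axis=1)
    return float(np.sqrt((d**2).mean()))

reg = np.array([[0.5, 0.1, -0.2], [79.6, 0.3, 0.1], [0.2, 60.4, -0.3]])  # invented
img = np.array([[0.0, 0.0, 0.0], [80.0, 0.0, 0.0], [0.0, 60.0, 0.0]])
print(f"FRE = {fre(reg, img):.2f} mm")
```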
Affiliation(s)
- Tristan P C van Doormaal: Rudolf Magnus Institute of Neuroscience, Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Technology Institute, Utrecht, The Netherlands
- Jesse A M van Doormaal: Rudolf Magnus Institute of Neuroscience, Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Technology Institute, Utrecht, The Netherlands
- Tom Mensink: Brain Technology Institute, Utrecht, The Netherlands

47
Lin M, Fredrickson VL, Catapano JS, Attenello FJ. Commentary: Mini Fronto-Orbital Approach: "Window Opening" Towards the Superomedial Orbit - A Virtual Reality-Planned Anatomic Study. Oper Neurosurg (Hagerstown) 2020; 19:E285-E287. [PMID: 32412632 DOI: 10.1093/ons/opaa122]
Affiliation(s)
- Michelle Lin: Department of Neurological Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California
- Vance L Fredrickson: Department of Neurological Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California
- Joshua S Catapano: Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona
- Frank J Attenello: Department of Neurological Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California

48
Bogomolova K, van der Ham IJM, Dankbaar MEW, van den Broek WW, Hovius SER, van der Hage JA, Hierck BP. The Effect of Stereoscopic Augmented Reality Visualization on Learning Anatomy and the Modifying Effect of Visual-Spatial Abilities: A Double-Center Randomized Controlled Trial. Anat Sci Educ 2020; 13:558-567. [PMID: 31887792 DOI: 10.1002/ase.1941]
Abstract
Monoscopically projected three-dimensional (3D) visualization technology may have significant disadvantages for students with lower visual-spatial abilities despite its overall effectiveness in teaching anatomy. Previous research suggests that stereopsis may facilitate better comprehension of anatomical knowledge. This study evaluated the educational effectiveness of stereoscopic augmented reality (AR) visualization and the modifying effect of visual-spatial abilities on learning. In a double-center randomized controlled trial, first- and second-year (bio)medical undergraduates studied lower limb anatomy with a stereoscopic 3D AR model (n = 20), a monoscopic 3D desktop model (n = 20), or a two-dimensional (2D) anatomical atlas (n = 18). Visual-spatial abilities were tested with the Mental Rotation Test (MRT), the Paper Folding Test (PFT), and the Mechanical Reasoning (MR) Test. Anatomical knowledge was assessed with a validated 30-item paper posttest. The overall posttest scores in the stereoscopic 3D AR group (47.8%) were similar to those in the monoscopic 3D desktop group (38.5%; P = 0.240) and the 2D anatomical atlas group (50.9%; P = 1.00). When stratified by visual-spatial ability scores, students with lower MRT scores achieved higher posttest scores in the stereoscopic 3D AR group (49.2%) than in the monoscopic 3D desktop group (33.4%; P = 0.015), and similar scores to the 2D group (46.4%; P = 0.99). Participants with higher MRT scores performed equally well in all conditions. It is therefore instrumental to consider an aptitude-treatment interaction caused by visual-spatial abilities when designing research into 3D learning. Further research is needed to identify contributing features and the most effective way of introducing this technology into current educational programs.
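The aptitude-treatment interaction described above is, in modeling terms, a group-by-ability interaction; a minimal two-way ANOVA sketch with statsmodels on fabricated data (not the trial's dataset) follows.

```python
# Two-way ANOVA with a group x visual-spatial-ability interaction term.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "group": rng.choice(["AR3D", "desktop3D", "atlas2D"], n),
    "mrt": rng.choice(["low", "high"], n),
})
# Build in the interaction: low-MRT learners benefit from stereoscopic AR.
df["score"] = (50
               + 8 * ((df["group"] == "AR3D") & (df["mrt"] == "low"))
               + rng.normal(0, 5, n))

model = ols("score ~ C(group) * C(mrt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # the interaction row tests the ATI
```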
Affiliation(s)
- Katerina Bogomolova: Department of Surgery, Leiden University Medical Center, Leiden, The Netherlands; Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Mary E W Dankbaar: Institute for Medical Education Research Rotterdam, Erasmus University Medical Center, Rotterdam, The Netherlands
- Walter W van den Broek: Institute for Medical Education Research Rotterdam, Erasmus University Medical Center, Rotterdam, The Netherlands
- Steven E R Hovius: Department of Plastic and Reconstructive Surgery and Hand Surgery, Erasmus University Medical Center, Rotterdam, The Netherlands; Department of Plastic and Reconstructive Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Jos A van der Hage: Department of Surgery, Leiden University Medical Center, Leiden, The Netherlands; Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Beerend P Hierck: Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, The Netherlands; Department of Anatomy and Embryology, Leiden University Medical Center, Leiden, The Netherlands; Centre for Innovation, Leiden University, The Hague, The Netherlands; Leiden Teachers' Academy, Leiden University, Leiden, The Netherlands

49
Silva JNA, Privitera MB, Southworth MK, Silva JR. Development and Human Factors Considerations for Extended Reality Applications in Medicine: The Enhanced ELectrophysiology Visualization and Interaction System (ĒLVIS). In: Virtual, Augmented and Mixed Reality: Industrial and Everyday Life Applications. 12th International Conference, VAMR 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19-24, 2020, Proceedings. 2020; 12191:341-356. [PMID: 34327520 PMCID: PMC8317914 DOI: 10.1007/978-3-030-49698-2_23]
Abstract
With the rapid expansion of hardware options in the extended realities (XRs), there has been widespread development of applications throughout many fields, including engineering, entertainment, and medicine. Development of medical applications for the XRs has a unique set of considerations during development and human factors testing. Additionally, understanding the constraints of the user and the use case allows for iterative improvement. In this manuscript, the authors discuss the considerations when developing and performing human factors testing for XR applications, using the Enhanced ELectrophysiology Visualization and Interaction System (ĒLVIS) as an example. Additionally, usability and critical interpersonal interaction data from first-in-human testing of ĒLVIS are presented.
Affiliation(s)
- Jennifer N Avari Silva: Department of Pediatrics, Cardiology, School of Medicine, Washington University in St Louis, St Louis, MO; Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, MO; SentiAR, Inc., St Louis, MO
- Jonathan R Silva: Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, MO; SentiAR, Inc., St Louis, MO

50
Southworth MK, Silva JNA, Blume WM, Van Hare GF, Dalal AS, Silva JR. Performance Evaluation of Mixed Reality Display for Guidance During Transcatheter Cardiac Mapping and Ablation. IEEE J Transl Eng Health Med 2020; 8:1900810. [PMID: 32742821 PMCID: PMC7390021 DOI: 10.1109/jtehm.2020.3007031]
Abstract
Cardiac electrophysiology procedures present the physician with a wealth of 3D information, typically presented on fixed 2D monitors. New developments in wearable mixed reality displays offer the potential to simplify and enhance 3D visualization while providing hands-free, dynamic control of devices within the procedure room. OBJECTIVE This work aims to evaluate the performance and quality of a mixed reality system designed for intraprocedural use in cardiac electrophysiology. METHOD The Enhanced ELectrophysiology Visualization and Interaction System (ĒLVIS) performance criteria, including image quality, hardware performance, and usability, were evaluated using existing display validation procedures adapted to the electrophysiology-specific use case. Additional performance and user validation were performed through a 10-patient, in-human observational study, the Engineering ĒLVIS (E2) Study. RESULTS The ĒLVIS system achieved acceptable frame rate, latency, and battery runtime with acceptable dynamic range and depth distortion as well as minimal geometric distortion. Bench testing results corresponded with physician feedback in the observational study, and potential improvements in geometric understanding were noted. CONCLUSION The ĒLVIS system, based on current commercially available mixed reality hardware, is capable of meeting the hardware performance, image quality, and usability requirements of an electroanatomic mapping display for intraprocedural, real-time use in electrophysiology procedures. Verifying off-the-shelf mixed reality hardware for specific clinical use can accelerate the adoption of this transformative technology and provide novel visualization, understanding, and control of clinically relevant data in real time.
Affiliation(s)
- Jennifer N. Avari Silva: SentiAR, Inc., St. Louis, MO 63108, USA; Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- George F. Van Hare: Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Aarti S. Dalal: Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Jonathan R. Silva: SentiAR, Inc., St. Louis, MO 63108, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA