1
Moser CH, Kim C, Charles B, Tijones R, Sanchez E, Davila JG, Matta HR, Brenner MJ, Pandian V. Mixed Reality in Nursing Practice: A Mixed Methods Systematic Review. J Clin Nurs 2025. PMID: 40200558. DOI: 10.1111/jocn.17722.
Abstract
AIM(S): To review the current evidence on mixed reality (MR) applications in nursing practice, focusing on efficiency, ergonomics, satisfaction, competency, and team effectiveness.
DESIGN: Mixed methods systematic review of empirical studies evaluating MR interventions in nursing practice.
METHODS: The systematic review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was registered with PROSPERO. Studies were included if they assessed nursing outcomes related to MR interventions. Exclusion criteria encompassed reviews, studies focusing solely on virtual reality, and those involving only nursing students. The Cochrane ROBINS-I, RoB 2, and CASP tools assessed the risk of bias and methodological quality.
DATA SOURCES: A comprehensive search of 12 databases (MEDLINE, Embase, CINAHL, Cochrane Library, Web of Science, and others) covered literature published between January 2013 and January 2023.
RESULTS: Eight studies met the inclusion criteria, exploring diverse MR implementations, including smart glasses and mobile applications, across various nursing specialisations. MR demonstrated potential benefits in efficiency, such as faster task completion and improved accuracy. Satisfaction outcomes were limited but indicated promise. Ergonomic challenges were identified, including discomfort and technical issues. Studies on competency showed mixed results, with some evidence of improved skill acquisition. Team effectiveness and health equity outcomes were underexplored.
CONCLUSION: While MR shows potential in enhancing nursing practice, the evidence is heterogeneous and its clinical relevance remains unclear. Further rigorous comparative studies are necessary to establish its utility and address barriers to adoption.
IMPLICATIONS FOR THE PROFESSION AND/OR PATIENT CARE: MR technology may enhance nursing efficiency, competency and satisfaction. Addressing ergonomic and technical challenges could optimise adoption and benefit patient care.
REPORTING METHOD: This review adheres to PRISMA guidelines.
PATIENT OR PUBLIC CONTRIBUTION: No patient or public contribution.
TRIAL AND PROTOCOL REGISTRATION: PROSPERO registration #CRD42022324066.
Affiliation(s)
- Chandler H Moser: Nurse Scientist, Center for Nursing Science and Clinical Inquiry, Madigan Army Medical Center, Joint Base Lewis-McChord, Tacoma, WA, USA
- Changhwan Kim: Doctoral Student, Johns Hopkins School of Nursing, Baltimore, Maryland, USA
- Bindu Charles: Pathway to PhD Fellow, Johns Hopkins School of Nursing, Baltimore, Maryland, USA; and Doctoral Student, Founder-Chancellor Shri N.P.V Ramasamy Udayar Research Fellow, Chennai, India
- Renilda Tijones: Pathway to PhD Fellow, Johns Hopkins School of Nursing, Baltimore, Maryland, USA
- Elsa Sanchez: Pathway to PhD Fellow, Johns Hopkins School of Nursing, Baltimore, Maryland, USA
- Jedry G Davila: Pathway to PhD Fellow, Johns Hopkins School of Nursing, Baltimore, Maryland, USA
- Hemilla R Matta: Emergency Nurse, Massachusetts General Hospital, Boston, Massachusetts, USA
- Michael J Brenner: Associate Professor, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Vinciya Pandian: Associate Dean for Graduate Education and Professor of Nursing; Joint Appointment with Otolaryngology-Head and Neck Surgery, College of Medicine; Executive Director, Center for Immersive Learning and Digital Innovation, Ross and Carol Nese College of Nursing, Penn State University, University Park, Pennsylvania, USA
2
Wise PA, Studier-Fischer A, Hackert T, Nickel F. [Status Quo of Surgical Navigation]. Zentralbl Chir 2024; 149:522-528. PMID: 38056501. DOI: 10.1055/a-2211-4898.
Abstract
Surgical navigation, also referred to as computer-assisted or image-guided surgery, is a technique that employs a variety of methods, such as 3D imaging, tracking systems, specialised software, and robotics, to support surgeons during surgical interventions. These emerging technologies aim not only to enhance the accuracy and precision of surgical procedures, but also to enable less invasive approaches, with the objective of reducing complications and improving operative outcomes for patients. By harnessing the integration of emerging digital technologies, surgical navigation holds the promise of assisting complex procedures across various medical disciplines. In recent years, the field of surgical navigation has witnessed significant advances. Abdominal surgical navigation, particularly in endoscopic, laparoscopic, and robot-assisted surgery, is currently undergoing a phase of rapid evolution. Emphases include image-guided navigation, instrument tracking, and the potential integration of augmented and mixed reality (AR, MR). This article comprehensively reviews the latest developments in surgical navigation, from state-of-the-art intraoperative technologies such as hyperspectral and fluorescence imaging to the integration of preoperative radiological imaging within the intraoperative setting.
Affiliation(s)
- Philipp Anthony Wise: Department of General, Visceral and Transplantation Surgery, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Alexander Studier-Fischer: Department of General, Visceral and Transplantation Surgery, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Thilo Hackert: Department of General, Visceral and Thoracic Surgery, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
- Felix Nickel: Department of General, Visceral and Thoracic Surgery, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany; Department of General, Visceral and Transplantation Surgery, Universitätsklinikum Heidelberg, Heidelberg, Germany
3
Taleb A, Leclerc S, Hussein R, Lalande A, Bozorg-Grayeli A. Registration of preoperative temporal bone CT-scan to otoendoscopic video for augmented-reality based on convolutional neural networks. Eur Arch Otorhinolaryngol 2024; 281:2921-2930. PMID: 38200355. DOI: 10.1007/s00405-023-08403-0.
Abstract
PURPOSE: Patient-to-image registration is a preliminary step required in surgical navigation based on preoperative images. Human intervention and fiducial markers hamper this task, as they are time-consuming and introduce potential errors. We aimed to develop a fully automatic 2D registration system for augmented reality in ear surgery.
METHODS: CT-scans and corresponding otoendoscopic videos were collected from 41 patients (58 ears) undergoing ear examination (vestibular schwannoma before surgery, profound hearing loss requiring cochlear implant, suspicion of perilymphatic fistula, contralateral ears in cases of unilateral chronic otitis media). Two to four images were selected from each case. For the training phase, data from patients (75% of the dataset) and 11 cadaveric specimens were used. Tympanic membranes and malleus handles were contoured on both video images and CT-scans by expert surgeons. The algorithm used a U-Net network to detect the contours of the tympanic membrane and the malleus on both preoperative CT-scans and endoscopic video frames. The contours were then processed and registered through an iterative closest point algorithm. Validation was performed on 4 cases and testing on 6 cases. Registration error was measured by overlaying both images and measuring the average and Hausdorff distances.
RESULTS: The proposed registration method yielded a precision compatible with ear surgery, with a 2D mean overlay error of 0.65 ± 0.60 mm for the incus and 0.48 ± 0.32 mm for the round window. The average Hausdorff distances for these two targets were 0.98 ± 0.60 mm and 0.78 ± 0.34 mm, respectively. An outlier case with higher errors (2.3 mm and 1.5 mm average Hausdorff distance for the incus and round window, respectively) was observed, related to a high discrepancy between the projection angle of the reconstructed CT-scan and the video image. The maximum duration of the overall process was 18 s.
CONCLUSIONS: A fully automatic 2D registration method based on a convolutional neural network and applied to ear surgery was developed. The method relied on neither external fiducial markers nor human intervention for landmark recognition. The method was fast, and its precision was compatible with ear surgery.
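As a rough, self-contained illustration of the registration and evaluation steps named above (iterative closest point on detected contours, then average and Hausdorff distances), not the authors' implementation, a 2D sketch in Python with NumPy/SciPy might look like:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def icp_2d(source, target, iters=50):
    """Rigidly align source contour points (N, 2) to target points (M, 2)."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)              # closest-point correspondences
        matched = target[idx]
        # Kabsch step: best rotation + translation for current matches
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t
    return src

def overlay_metrics(a, b):
    """Mean closest-point distance and symmetric Hausdorff distance (mm)."""
    d_ab, _ = cKDTree(b).query(a)
    hausdorff = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    return d_ab.mean(), hausdorff
```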
Affiliation(s)
- Ali Taleb: ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche-Comté, 21000 Dijon, France
- Sarah Leclerc: ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche-Comté, 21000 Dijon, France
- Alain Lalande: ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche-Comté, 21000 Dijon, France; Medical Imaging Department, Dijon University Hospital, 21000 Dijon, France
- Alexis Bozorg-Grayeli: ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche-Comté, 21000 Dijon, France; ENT Department, Dijon University Hospital, 21000 Dijon, France
4
Mamone V, Ferrari V, D'Amato R, Condino S, Cattari N, Cutolo F. Head-Mounted Projector for Manual Precision Tasks: Performance Assessment. Sensors (Basel) 2023; 23:3494. PMID: 37050554. PMCID: PMC10098766. DOI: 10.3390/s23073494.
Abstract
The growing interest in augmented reality applications has led to an in-depth look at the performance of head-mounted displays and to their testing in numerous domains. Other devices for augmenting the real world with virtual information are presented less frequently, and reports usually focus on the description of the device rather than on an analysis of its performance. This is the case for projected augmented reality, which, compared to head-worn AR displays, offers the advantage of being simultaneously accessible by multiple users whilst preserving the user's awareness of the environment and feeling of immersion. This work provides a general evaluation of a custom-made head-mounted projector for aiding precision manual tasks, through an experimental protocol designed to investigate spatial and temporal registration and their combination. The results of the tests show that the accuracy (0.6 ± 0.1 mm spatial registration error) and motion-to-photon latency (113 ± 12 ms) make the proposed solution suitable for guiding precision tasks.
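For intuition, the two reported figures can be combined: during relative motion, the apparent overlay error is roughly the static registration error plus the motion-to-photon latency times the relative speed. A back-of-the-envelope sketch (the speeds are assumptions, not from the study):

```python
# Rough dynamic-error estimate: static error + latency * relative speed.
# The 0.6 mm and 113 ms values come from the abstract; speeds are assumed.
static_err_mm = 0.6
latency_s = 0.113
for speed_mm_s in (0, 5, 20):     # stationary, slow drift, moderate motion
    dynamic_err = static_err_mm + latency_s * speed_mm_s
    print(f"{speed_mm_s:>3} mm/s -> ~{dynamic_err:.2f} mm apparent error")
```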
Affiliation(s)
- Virginia Mamone: EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy; Azienda Ospedaliero Universitaria Pisana, 56126 Pisa, Italy
- Vincenzo Ferrari: EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy; Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Renzo D'Amato: EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy; Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Sara Condino: EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy; Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Nadia Cattari: EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy; Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo: EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy; Information Engineering Department, University of Pisa, 56126 Pisa, Italy
5
Margeta J, Hussain R, López Diez P, Morgenstern A, Demarcy T, Wang Z, Gnansia D, Martinez Manzanera O, Vandersteen C, Delingette H, Buechner A, Lenarz T, Patou F, Guevara N. A Web-Based Automated Image Processing Research Platform for Cochlear Implantation-Related Studies. J Clin Med 2022; 11:6640. PMID: 36431117. PMCID: PMC9699139. DOI: 10.3390/jcm11226640.
Abstract
The robust delineation of the cochlea and its inner structures, combined with the detection of the electrode of a cochlear implant within these structures, is essential for envisaging a safer, more individualized, routine image-guided cochlear implant therapy. We present Nautilus, a web-based research platform for automated pre- and post-implantation cochlear analysis. Nautilus delineates cochlear structures from pre-operative clinical CT images by combining deep learning and Bayesian inference approaches. It enables the extraction of electrode locations from a post-operative CT image using convolutional neural networks and geometrical inference. By fusing pre- and post-operative images, Nautilus is able to provide a set of personalized pre- and post-operative metrics that can serve the exploration of clinically relevant questions in cochlear implantation therapy. In addition, Nautilus embeds a self-assessment module providing a confidence rating on the outputs of its pipeline. We present detailed accuracy and robustness analyses of the tool on a carefully designed dataset. The results of these analyses provide legitimate grounds for envisaging the implementation of image-guided cochlear implant practices into routine clinical workflows.
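The abstract describes, but does not specify, the pre-/post-operative image fusion step; a generic rigid CT-to-CT registration of the kind implied, sketched with SimpleITK (file names and optimizer settings are placeholder assumptions, not Nautilus code):

```python
import SimpleITK as sitk

# Load pre- and post-implantation CT volumes (placeholder file names).
fixed = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("postop_ct.nii.gz", sitk.sitkFloat32)

# Initialize a rigid transform from the image geometries.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Mutual-information-driven rigid registration (assumed settings).
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(fixed, moving)

# Resample the post-operative CT into the pre-operative frame for fusion.
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```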
Affiliation(s)
- Jan Margeta: Research and Development, KardioMe, 01851 Nova Dubnica, Slovakia
- Raabid Hussain: Research and Technology Group, Oticon Medical, 2765 Smørum, Denmark
- Paula López Diez: Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kongens Lyngby, Denmark
- Anika Morgenstern: Department of Otolaryngology, Medical University of Hannover, 30625 Hannover, Germany
- Thomas Demarcy: Research and Technology Group, Oticon Medical, 2765 Smørum, Denmark
- Zihao Wang: Epione Team, Inria, Université Côte d'Azur, 06902 Sophia Antipolis, France
- Dan Gnansia: Research and Technology Group, Oticon Medical, 2765 Smørum, Denmark
- Clair Vandersteen: Institut Universitaire de la Face et du Cou, Centre Hospitalier Universitaire de Nice, Université Côte d'Azur, 06100 Nice, France
- Hervé Delingette: Epione Team, Inria, Université Côte d'Azur, 06902 Sophia Antipolis, France
- Andreas Buechner: Department of Otolaryngology, Medical University of Hannover, 30625 Hannover, Germany
- Thomas Lenarz: Department of Otolaryngology, Medical University of Hannover, 30625 Hannover, Germany
- François Patou: Research and Technology Group, Oticon Medical, 2765 Smørum, Denmark
- Nicolas Guevara: Institut Universitaire de la Face et du Cou, Centre Hospitalier Universitaire de Nice, Université Côte d'Azur, 06100 Nice, France
6
Interactive Scientific Visualization of Fluid Flow Simulation Data Using AR Technology - Open-Source Library OpenVisFlow. Multimodal Technologies and Interaction 2022. DOI: 10.3390/mti6090081.
Abstract
Computational fluid dynamics (CFD) is being used more and more in industry to understand and optimize processes such as fluid flows. At the same time, tools such as augmented reality (AR) are becoming increasingly important with the realization of Industry 5.0, making data and processes more tangible. Bringing the two together paves the way for a new method of active learning, as well as an interesting and engaging way of presenting industrial processes. It also enables students to reinforce their understanding of the fundamental concepts of fluid dynamics in an interactive way. However, this potential is not yet being fully utilized. For this reason, in this paper, we aim to combine these two powerful tools. Furthermore, we present the framework of a modular open-source library for the scientific visualization of fluid flow, "OpenVisFlow", which simplifies the creation of such applications and enables seamless visualization without other software by allowing users to integrate the visualization step into the simulation code. Using this framework and the open-source extension AR-Core, we show how a new markerless visualization tool can be implemented.
7
Nilius M, Nilius MH. How precise are oral splints for frameless stereotaxy in guided ear, nose, throat, and maxillofacial surgery: a cadaver study. Eur Radiol Exp 2021; 5:27. PMID: 34195878. PMCID: PMC8245614. DOI: 10.1186/s41747-021-00223-3.
Abstract
Background: Computer-assisted surgery optimises accuracy and serves to improve precise surgical procedures. We validated oral splints with fiducial markers by testing them against rigid bone markers.
Methods: We screwed twenty bone anchors as fiducial markers into different regions of a dried skull and measured the distances between them. After computed tomography (CT) scanning, accuracy was evaluated by determining the markers' positions using frameless stereotaxy on the dry skull and comparing them with the positions indicated on the CT scan. We compared the accuracy of chairside-fabricated oral splints to standard registration with bone markers, both immediately after fabrication and after ten uses. Accuracy was calculated as deviation (mean ± standard deviation). For statistical analysis, the t test, Kruskal-Wallis test, Tukey's test, and various linear regression models, as well as Pearson's product-moment correlation coefficient, were used.
Results: Oral splints showed an accuracy of 0.90 mm ± 0.27 for the viscerocranium, 1.10 mm ± 0.39 for the skull base, and 1.45 mm ± 0.59 for the neurocranium. We found an accuracy of less than 2 mm for both splints for distances of up to 152 mm. The accuracy persisted even after the splints had been removed and reattached ten times.
Conclusions: Oral splints offer a non-invasive means of improving the accuracy of image-guided surgery. The precision depends on the distance to the target: up to a distance of 150 mm, a precision of better than 2 mm is achievable. Dental splints provide accuracy comparable to bone markers and may allow higher precision when combined with other non-invasive registration methods.
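The analysis pattern reported here (deviation as mean ± standard deviation, plus a distance dependence captured by correlation/regression) is easy to reproduce; a minimal sketch with illustrative numbers, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-marker data: distance from the splint (mm) and
# localization error (mm); values are made up for illustration.
distance_mm = np.array([42.0, 65.0, 88.0, 110.0, 131.0, 152.0])
error_mm = np.array([0.55, 0.78, 0.95, 1.12, 1.30, 1.48])

print(f"accuracy: {error_mm.mean():.2f} mm ± {error_mm.std(ddof=1):.2f}")

# Pearson's product-moment correlation between target distance and error,
# mirroring the distance dependence the study reports.
r, p = stats.pearsonr(distance_mm, error_mm)
slope, intercept, *_ = stats.linregress(distance_mm, error_mm)
print(f"r = {r:.3f} (p = {p:.3g}); error ~ {intercept:.2f} + {slope:.4f} * distance")
```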
Affiliation(s)
- Manfred Nilius: NILIUSKLINIK Dortmund, Londoner Bogen 6, D-44269 Dortmund, Germany; Technische Universität Dresden, Dresden, Germany
8
Hu X, Baena FRY, Cutolo F. Head-Mounted Augmented Reality Platform for Markerless Orthopaedic Navigation. IEEE J Biomed Health Inform 2021; 26:910-921. PMID: 34115600. DOI: 10.1109/jbhi.2021.3088442.
Abstract
Visual augmented reality (AR) has the potential to improve the accuracy, efficiency and reproducibility of computer-assisted orthopaedic surgery (CAOS). AR head-mounted displays (HMDs) further allow non-eye-shift target observation and an egocentric view. Recently, a markerless tracking and registration (MTR) algorithm was proposed to avoid the artificial markers that are conventionally pinned into the target anatomy for tracking, as their use prolongs the surgical workflow, introduces human-induced errors, and necessitates additional surgical invasion of the patient. However, such an MTR-based method has neither been explored for surgical applications nor integrated into current AR HMDs, making ergonomic HMD-based markerless AR CAOS navigation hard to achieve. To these aims, we present a versatile, device-agnostic and accurate HMD-based AR platform. Our software platform, supporting both video see-through (VST) and optical see-through (OST) modes, integrates two proposed fast calibration procedures using a specially designed calibration tool. According to the camera-based evaluation, our AR platform achieves a display error of 6.31 ± 2.55 arcmin for VST and 7.72 ± 3.73 arcmin for OST. A proof-of-concept markerless surgical navigation system to assist in femoral bone drilling was then developed based on the platform and Microsoft HoloLens 1. According to the user study, both VST and OST markerless navigation systems are reliable, with the OST system providing the best usability. The measured navigation errors are 4.90 ± 1.04 mm and 5.96 ± 2.22° for the VST system and 4.36 ± 0.80 mm and 5.65 ± 1.42° for the OST system.
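Display error in arcminutes relates a linear overlay offset to viewing distance; a small helper makes the reported figures concrete (the 500 mm working distance is an assumption for illustration):

```python
import math

def overlay_error_arcmin(offset_mm: float, distance_mm: float) -> float:
    """Angular misalignment subtended by a linear overlay offset."""
    return math.degrees(math.atan2(offset_mm, distance_mm)) * 60.0

# At an assumed 500 mm working distance, the reported ~6.3 arcmin VST
# display error corresponds to roughly a 0.9 mm linear offset on target:
for offset in (0.5, 0.9, 1.5):
    print(f"{offset} mm at 500 mm -> {overlay_error_arcmin(offset, 500):.1f} arcmin")
```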
9
Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. PMID: 33283563. DOI: 10.1080/17434440.2021.1860750.
Abstract
Background: Research proves that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying virtual reality (VR), augmented reality (AR) and mixed reality (MR) in surgical simulators increases the fidelity, level of immersion and overall experience of these simulators.
Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR in distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed.
Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels: Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen: Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria; The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
10
Lefevre E, Terrier LM, Bekaert O, Simonneau A, Rogers A, Vignal-Clermont C, Boissonnet H, Robert G, Lot G, Chauvet D. Microsurgical Transcranial Approach of 112 Paraoptic Meningiomas: A Single-Center Case Series. Oper Neurosurg (Hagerstown) 2020; 19:651-658. PMID: 32649763. DOI: 10.1093/ons/opaa207.
Abstract
BACKGROUND: Predictors of visual outcomes after optic nerve decompression are controversial.
OBJECTIVE: To identify predictors of poor visual outcomes after surgery for meningiomas responsible for compressive optic neuropathy.
METHODS: We focused on paraoptic meningiomas (POMs), comprising tuberculum sellae meningiomas (TSMs) and anterior clinoid meningiomas (ACMs) responsible for visual impairment or threatening visual function, that underwent surgery at our institution between January 2009 and December 2015, and analyzed the clinical and radiological findings of our patients.
RESULTS: Among 112 patients who underwent surgery for a POM, a preoperative visual deficit was present in 108 patients (96.4%). Six months after surgery, 79 patients (70.5%) had improved vision, 15 patients (13.4%) had unchanged vision, and 18 patients (16.1%) had deteriorated vision. A preoperative visual deficit lasting 6 months or more was a strong predictor of poor visual outcome after surgery (P = .034). Poor visual outcome after surgery was not significantly related to the size of the tumor (P = .057), the age of the patient (P = .94), or tumor extension into the optic canal (P = .47).
CONCLUSION: The duration of the preoperative visual deficit was found to be a strong predictor of poor visual outcomes after surgery in POMs. Other predictors of poor visual outcomes are still needed and are currently under evaluation in a prospective study at our institution.
Affiliation(s)
- Etienne Lefevre: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
- Olivier Bekaert: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
- Adrien Simonneau: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
- Alister Rogers: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
- Hervé Boissonnet: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
- Gilles Robert: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
- Guillaume Lot: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
- Dorian Chauvet: Department of Neurosurgery, Rothschild Foundation Hospital, Paris, France
11
Cercenelli L, Carbone M, Condino S, Cutolo F, Marcelli E, Tarsitano A, Marchetti C, Ferrari V, Badiali G. The Wearable VOSTARS System for Augmented Reality-Guided Surgery: Preclinical Phantom Evaluation for High-Precision Maxillofacial Tasks. J Clin Med 2020; 9:3562. PMID: 33167432. PMCID: PMC7694536. DOI: 10.3390/jcm9113562.
Abstract
BACKGROUND: In the context of guided surgery, augmented reality (AR) represents a groundbreaking improvement. The Video and Optical See-Through Augmented Reality Surgical System (VOSTARS) is a new AR wearable head-mounted display (HMD), recently developed as an advanced navigation tool for maxillofacial and plastic surgery and other non-endoscopic surgeries. In this study, we report the results of phantom tests with VOSTARS aimed at evaluating its feasibility and accuracy in performing maxillofacial surgical tasks.
METHODS: An early prototype of VOSTARS was used. Le Fort 1 osteotomy was selected as the experimental task to be performed under VOSTARS guidance. A dedicated set-up was prepared, including the design of a maxillofacial phantom, an ad hoc tracker anchored to the occlusal splint, and cutting templates for accuracy assessment. Both qualitative and quantitative assessments were carried out.
RESULTS: VOSTARS, used in combination with the designed maxilla tracker, showed excellent tracking robustness under operating room lighting. Accuracy tests showed that 100% of Le Fort 1 trajectories were traced with an accuracy of ±1.0 mm, and on average, 88% of each trajectory's length was within ±0.5 mm accuracy.
CONCLUSIONS: Our preliminary results suggest that the VOSTARS system can be a feasible and accurate solution for guiding maxillofacial surgical tasks, paving the way for its validation in clinical trials and for a wide spectrum of maxillofacial applications.
Affiliation(s)
- Laura Cercenelli: eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Marina Carbone: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Sara Condino: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Emanuela Marcelli: eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Achille Tarsitano: Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy
- Claudio Marchetti: Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy
- Vincenzo Ferrari: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Giovanni Badiali: Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy
12
Quantitative Augmented Reality-Assisted Free-Hand Orthognathic Surgery Using Electromagnetic Tracking and Skin-Attached Dynamic Reference. J Craniofac Surg 2020; 31:2175-2181. PMID: 33136850. DOI: 10.1097/scs.0000000000006739.
Abstract
The purpose of this study was to develop a quantitative augmented reality (AR)-assisted free-hand orthognathic surgery method using electromagnetic (EM) tracking and a skin-attached dynamic reference. The authors propose a novel, simplified, and convenient workflow for AR-assisted orthognathic surgery based on optical marker-less tracking, a comfortable display, and a non-invasive, skin-attached dynamic reference frame. The two registrations between the physical (EM tracking) and CT image spaces and between the physical and AR camera spaces, essential processes in AR-assisted surgery, were performed pre-operatively using the registration body complex and a 3D depth camera. The intraoperative model of the maxillary bone segment (MBS) was superimposed on the real patient image, together with the simulated goal model, on a flat-panel display, and the MBS was freely handled for repositioning with respect to the skin-attached dynamic reference tool (SRT), with quantitative visualization of landmarks of interest using only EM tracking. To evaluate the accuracy of AR-assisted Le Fort I surgery, the MBS of a phantom was simulated and repositioned by six translational and three rotational movements. The mean absolute deviations (MADs) between the simulated and post-operative positions of MBS landmarks using the SRT were 0.20, 0.34, 0.29, and 0.55 mm in the x- (left lateral, right lateral), y- (setback, advance), and z- (impaction, elongation) directions and in RMS, respectively, while those using the BRT were 0.23, 0.37, 0.30, and 0.60 mm. There were no significant differences between the translation and rotation surgeries or among surgeries along the x-, y-, and z-axes for the SRT. The MADs in the x-, y-, and z-axes exhibited no significant differences between the SRT and BRT. The developed method showed high accuracy and reliability in free-hand orthognathic surgery using EM tracking and a skin-attached dynamic reference.
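The reported accuracy metrics, per-axis mean absolute deviation plus an overall RMS between planned and achieved landmark positions, are computed straightforwardly; a sketch with hypothetical coordinates (not the study's data):

```python
import numpy as np

# Hypothetical (N, 3) landmark coordinates in mm: planned vs. post-operative.
planned = np.array([[1.2, -3.0, 0.5], [0.8, 2.1, -1.0], [-2.0, 1.5, 2.2]])
achieved = np.array([[1.4, -3.3, 0.8], [0.6, 2.5, -1.2], [-2.2, 1.1, 2.6]])

diff = achieved - planned
mad_xyz = np.abs(diff).mean(axis=0)   # per-axis mean absolute deviation
rms = np.sqrt((np.linalg.norm(diff, axis=1) ** 2).mean())  # overall RMS error

print(f"MAD x/y/z (mm): {mad_xyz.round(2)}, RMS (mm): {rms:.2f}")
```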
13
Augmented reality for inner ear procedures: visualization of the cochlear central axis in microscopic videos. Int J Comput Assist Radiol Surg 2020; 15:1703-1711. PMID: 32737858. DOI: 10.1007/s11548-020-02240-w.
Abstract
PURPOSE: Direct visualization of the cochlea is impossible due to the delicate and intricate ear anatomy. Augmented reality may be used to perform auditory nerve implantation by a transmodiolar approach in patients with profound hearing loss.
METHODS: We present an augmented reality system for the visualization of the cochlear axis in surgical videos. The system starts with automatic anatomical landmark detection in preoperative computed tomography images based on deep reinforcement learning. These landmarks are used to register the preoperative geometry with the real-time microscopic video captured inside the auditory canal. The three-dimensional pose of the cochlear axis is determined using the registration projection matrices. In addition, movements of the patient and microscope are tracked using an image-feature-based tracking process.
RESULTS: The landmark detection stage yielded an average localization error of [Formula: see text] mm ([Formula: see text]). The target registration error was [Formula: see text] mm for the cochlear apex and [Formula: see text] for the cochlear axis.
CONCLUSION: We developed an augmented reality system to visualize the cochlear axis in intraoperative videos. The system yielded millimetric accuracy and remained stable despite camera movements throughout the procedure under experimental conditions.
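The visualization step reduces to projecting the estimated 3D axis into each video frame through the registration projection matrix; a minimal sketch of that projection (the matrix and point arguments are assumed inputs, not the authors' code):

```python
import numpy as np

def project_axis(P, apex_xyz, base_xyz):
    """Project the two 3D endpoints of the cochlear axis (mm) into the
    video frame using a 3x4 projection matrix from the registration step."""
    pts = np.vstack([apex_xyz, base_xyz])          # (2, 3) world points
    pts_h = np.hstack([pts, np.ones((2, 1))]).T    # homogeneous, (4, 2)
    uvw = P @ pts_h                                # camera projection, (3, 2)
    return (uvw[:2] / uvw[2]).T                    # pixel coordinates, (2, 2)
```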
14
Video-based augmented reality combining CT-scan and instrument position data to microscope view in middle ear surgery. Sci Rep 2020; 10:6767. PMID: 32317726. PMCID: PMC7174368. DOI: 10.1038/s41598-020-63839-2.
Abstract
The aim of the study was to develop and assess the performance of a video-based augmented reality system, combining preoperative computed tomography (CT) and real-time microscopic video, as the first crucial step toward keyhole middle ear procedures performed through a tympanic membrane puncture. Six different artificial human temporal bones were included in this prospective study. Six stainless steel fiducial markers were glued to the periphery of the eardrum, and a high-resolution CT-scan of the temporal bone was obtained. Virtual endoscopy of the middle ear based on this CT-scan was conducted with the Osirix software. The virtual endoscopy image was registered to the microscope-based video of the intact tympanic membrane using the fiducial markers, and a homography transformation was applied during microscope movements. These movements were tracked using the Speeded-Up Robust Features (SURF) method. Simultaneously, a micro-surgical instrument was identified and tracked using a Kalman filter. The 3D position of the instrument was extracted by solving a three-point perspective framework. For evaluation, the instrument was introduced through the tympanic membrane and ink droplets were injected onto three middle ear structures. An average initial registration accuracy of 0.21 ± 0.10 mm (n = 3) was achieved, with a slow propagation of error during tracking (0.04 ± 0.07 mm). The estimated surgical instrument tip position error was 0.33 ± 0.22 mm. The target structures' localization accuracy was 0.52 ± 0.15 mm. The submillimetric accuracy of our system, achieved without an external tracker, is compatible with ear surgery.
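The initial fiducial-based registration described above is a classic homography estimation; a minimal OpenCV sketch of that step with made-up pixel coordinates (not the study's implementation):

```python
import cv2
import numpy as np

# Pixel positions of the six eardrum fiducials in the CT-derived virtual
# endoscopy image and in the live microscope frame (hypothetical values).
virtual_pts = np.float32(
    [[120, 80], [300, 70], [420, 210], [380, 400], [180, 430], [70, 260]])
microscope_pts = np.float32(
    [[135, 95], [310, 88], [432, 225], [390, 412], [195, 440], [85, 272]])

# RANSAC-robust homography, then warp the virtual view onto the live frame.
H, inliers = cv2.findHomography(virtual_pts, microscope_pts, cv2.RANSAC, 3.0)
virtual_view = cv2.imread("virtual_endoscopy.png")   # placeholder asset
overlay = cv2.warpPerspective(virtual_view, H, (960, 540))
```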
15
Condino S, Fida B, Carbone M, Cercenelli L, Badiali G, Ferrari V, Cutolo F. Wearable Augmented Reality Platform for Aiding Complex 3D Trajectory Tracing. Sensors (Basel) 2020; 20:1612. PMID: 32183212. PMCID: PMC7146390. DOI: 10.3390/s20061612.
Abstract
Augmented reality (AR) head-mounted displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Despite that, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we show the results of a user study aimed at validating, qualitatively and quantitatively, a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform could be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin lower than 1 mm. The results confirm that the proposed AR platform will boost the profitable adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
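The headline metric, the share of a traced trajectory staying within a 1 mm error margin of the planned curve, reduces to nearest-neighbour distances once both curves are sampled as point sets; a sketch under that assumption (not the study's evaluation code):

```python
import numpy as np
from scipy.spatial import cKDTree

def fraction_within(traced, planned, margin_mm=1.0):
    """Share of traced-trajectory samples (N, 3) lying within margin_mm
    of the densely sampled planned trajectory (M, 3)."""
    d, _ = cKDTree(planned).query(traced)
    return float((d <= margin_mm).mean())

# e.g., fraction_within(trace_pts, plan_pts, 1.0) -> 0.94 would reproduce
# the "94% within 1 mm" style of result reported above.
```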
Affiliation(s)
- Sara Condino: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Benish Fida: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Marina Carbone: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli: Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali: Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
- Vincenzo Ferrari: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo: Information Engineering Department, University of Pisa, 56126 Pisa, Italy