76. Turkyilmaz I, Wilkins GN. Taking surgical training to another level with mixed reality advanced dental simulator. J Stomatol Oral Maxillofac Surg 2023; 124:101384. [PMID: 36642248 DOI: 10.1016/j.jormas.2023.101384]
77. Antonelli M, Lucignani M, Parrillo C, Grassi F, Figà Talamanca L, Rossi Espagnet MC, Gandolfo C, Secinaro A, Pasquini L, De Benedictis A, Placidi E, De Palma L, Marras CE, Marasi A, Napolitano A. Magnetic resonance imaging-based neurosurgical planning on HoloLens 2: a feasibility study in a paediatric hospital. Digit Health 2023; 9:20552076231214066. [PMID: 38025111 PMCID: PMC10656794 DOI: 10.1177/20552076231214066]
Abstract
Objective The goal of this work is to show how to implement a mixed reality application (app) for neurosurgery planning based on neuroimaging data, highlighting the strengths and weaknesses of its design. Methods Our workflow explains how to handle neuroimaging data, including how to load morphological, functional, and diffusion tensor imaging data into a mixed reality environment, thus creating a first guide of its kind. Brain magnetic resonance imaging data from a paediatric patient were acquired using a 3 T Siemens Magnetom Skyra scanner. The raw data underwent software pre-processing and were then converted for seamless integration with the mixed reality app. We then created three-dimensional models of brain structures and the mixed reality environment using the Unity™ engine and the Microsoft® HoloLens 2™ device. To evaluate the app, we administered a questionnaire to four neurosurgeons; session performance data were collected with the Unity Performance Profiler. Results The app's interactive features, such as rotating, scaling, and moving models and browsing through menus, scored highly in the questionnaire, although the collected performance data suggest their use can still be improved. Average questionnaire scores were high, so the overall experience of using our mixed reality app was positive. Conclusion We have successfully created a useful and easy-to-use neuroimaging mixed reality app, laying the foundation for future clinical uses, as additional models and data derived from various biomedical images can be imported.
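The conversion step this abstract describes, from raw MRI volumes to models an engine like Unity can load, hinges on the NIfTI affine that maps voxel indices to scanner coordinates. The sketch below is illustrative Python with a synthetic mask and a hypothetical affine, not the authors' pipeline: it extracts the boundary voxels of a binary segmentation and maps them to world space, a crude stand-in for the marching-cubes meshing used in practice.

```python
import numpy as np

def voxel_to_world(ijk, affine):
    """Map voxel indices to scanner/world coordinates (mm) via the
    image's 4x4 affine, as stored in a NIfTI header."""
    homogeneous = np.append(np.asarray(ijk, dtype=float), 1.0)
    return (affine @ homogeneous)[:3]

def mask_surface_points(mask, affine):
    """Return world-space coordinates of the boundary voxels of a
    binary segmentation mask -- a point-cloud stand-in for the triangle
    meshes (e.g. marching cubes -> OBJ/STL) that engines import."""
    mask = mask.astype(bool)
    interior = np.zeros_like(mask)
    # A voxel is interior if all six face-neighbours are inside the mask.
    interior[1:-1, 1:-1, 1:-1] = (
        mask[1:-1, 1:-1, 1:-1]
        & mask[:-2, 1:-1, 1:-1] & mask[2:, 1:-1, 1:-1]
        & mask[1:-1, :-2, 1:-1] & mask[1:-1, 2:, 1:-1]
        & mask[1:-1, 1:-1, :-2] & mask[1:-1, 1:-1, 2:]
    )
    boundary = mask & ~interior
    return np.array([voxel_to_world(p, affine) for p in np.argwhere(boundary)])

# Toy example: a 1 mm isotropic volume with a small cubic "structure".
affine = np.eye(4)
affine[:3, 3] = [-90.0, -126.0, -72.0]   # hypothetical origin offset
mask = np.zeros((10, 10, 10), dtype=bool)
mask[3:7, 3:7, 3:7] = True
points = mask_surface_points(mask, affine)
```

A real pipeline would read the mask and affine with a library such as nibabel and run a proper surface-extraction step before export.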
78. Baetzner AS, Wespi R, Hill Y, Gyllencreutz L, Sauter TC, Saveman BI, Mohr S, Regal G, Wrzus C, Frenkel MO. Preparing medical first responders for crises: a systematic literature review of disaster training programs and their effectiveness. Scand J Trauma Resusc Emerg Med 2022; 30:76. [PMID: 36566227 PMCID: PMC9789518 DOI: 10.1186/s13049-022-01056-8]
Abstract
BACKGROUND Adequate training and preparation of medical first responders (MFRs) are essential for optimal performance in highly demanding situations such as disasters (e.g., mass accidents, natural catastrophes). The training needs to be as effective as possible, because precise and effective behavior of MFRs under stress is central to ensuring patients' survival and recovery. This systematic review offers an overview of scientifically evaluated training methods used to prepare MFRs for disasters. It identifies different effectiveness indicators and additionally analyzes how, and to what extent, the innovative training technologies virtual reality (VR) and mixed reality (MR) are included in disaster training research. METHODS The systematic review was conducted according to the PRISMA guidelines and focused specifically on (quasi-)experimental studies published between January 2010 and September 2021. The literature search was conducted via Web of Science and PubMed and led to the inclusion of 55 articles. RESULTS The search identified several types of training, including traditional (e.g., lectures, real-life scenario training) and technology-based training (e.g., computer-based learning, educational videos). Most programs combined more than one method. Training effectiveness was mainly assessed through pre-post comparisons of knowledge tests or self-reported measures, although some studies also used behavioral performance measures (e.g., triage accuracy). While all methods demonstrated effectiveness, the literature indicates that technology-based methods often lead to similar or greater training outcomes than traditional training. To date, few studies have systematically evaluated immersive VR and MR training. CONCLUSION Determining the success of a training program requires proper, scientifically sound evaluation. Of the effectiveness indicators found, performance assessments in simulated scenarios come closest to the target behavior during real disasters. For valid yet inexpensive evaluations, objectively assessable performance measures, such as accuracy, time, and order of actions, could be used; however, such assessments have rarely been applied. Furthermore, technology-based training methods represent a promising approach for training many MFRs repeatedly and efficiently, and they offer great potential to supplement or partially replace traditional training. Further research is needed on the methods that have been underrepresented, especially serious gaming, immersive VR, and MR.
79. McStay A. Replika in the Metaverse: the moral problem with empathy in 'It from Bit'. AI Ethics 2022; 3:1-13. [PMID: 36573214 PMCID: PMC9773645 DOI: 10.1007/s43681-022-00252-7]
Abstract
This paper assesses claims of computational empathy in relation to existing open-ended social chatbots and the intention that such chatbots will feature in emergent mixed reality contexts, recently given prominence by interest in the Metaverse. Against the background of increasing loneliness in society and the use of chatbots as a potential remedy, the paper considers two leading social chatbots, Replika and Microsoft's Xiaoice: their technical underpinnings, their empathetic claims, and the properties that could scale into the Metaverse (if it coheres). While finding scope for human benefit from social chatbots, the paper highlights their problematic reliance on user self-disclosure to sustain their existence. It then situates Microsoft's empathetic computing framework in relation to philosophical ideas that inform Metaverse speculation and construction: Wheeler's 'It from Bit' thesis that all aspects of existence may be computed, Chalmers' argument that virtual realities are genuine realities, Bostrom's proposal and provocation that we might already be living in a simulation, and the longtermist belief that future complex simulations need to be protected from decisions made today. Given the claims made for current and nascent social chatbots, belief in bit-based possible and projected futures, and industrial buy-in to these philosophies, the paper asks whether computational empathy is real. It finds that, when diverse accounts of empathy are considered, something is irrevocably lost in an 'It from Bit' account of empathy, yet the missing component is not accuracy or even human commonality of experience, but the moral dimension of empathy.
80. Mofatteh M, Mashayekhi MS, Arfaie S, Chen Y, Mirza AB, Fares J, Bandyopadhyay S, Henich E, Liao X, Bernstein M. Augmented and virtual reality usage in awake craniotomy: a systematic review. Neurosurg Rev 2022; 46:19. [PMID: 36529827 PMCID: PMC9760592 DOI: 10.1007/s10143-022-01929-7]
Abstract
Augmented and virtual reality (AR, VR) are becoming promising tools in neurosurgery. AR and VR can reduce the challenges of conventional approaches by simulating and mimicking specific environments of the surgeon's choice. Awake craniotomy (AC) enables the resection of lesions from eloquent brain areas while monitoring higher cortical and subcortical functions. Evidence suggests that both surgeons and patients benefit from the various applications of AR and VR in AC. This paper investigates the application of AR and VR in AC and assesses their prospective utility in neurosurgery. A systematic review of the literature was performed using the PubMed, Scopus, and Web of Science databases in accordance with the PRISMA guidelines. The search yielded 220 articles; six articles comprising 118 patients were included in this review. VR was used in four papers and AR in the other two. Tumour was the most common pathology (108 patients), followed by vascular lesions (eight patients). VR was used for intraoperative mapping of language, vision, and social cognition, while AR was incorporated into preoperative training in white matter dissection and intraoperative visualisation and navigation. Overall, patients and surgeons were satisfied with the applications of AR and VR in their cases. AR and VR can be safely incorporated during AC to supplement, augment, or even replace conventional approaches in neurosurgery. Future investigations are required to assess their feasibility in the various phases of AC.
81. Nguyen TV, Raghunath S, Phung KA, Ongwere T, Tran MT. Revisiting natural user interaction in virtual world. J Ambient Intell Humaniz Comput 2022; 14:2443-2453. [PMID: 36530470 PMCID: PMC9734837 DOI: 10.1007/s12652-022-04496-3]
Abstract
Natural user interaction in virtual environments is a prominent factor in any mixed reality application. In this paper, we revisit the assessment of natural user interaction via a case study of a virtual aquarium. Viewers wearing headsets can interact with virtual objects via head orientation, gaze, gesture, and visual markers. The virtual environment runs on both Google Cardboard and HoloLens, two popular wireless head-mounted displays. Evaluation results reveal users' preferences among the different natural user interaction methods.
82. Chahine J, Mascarenhas L, George SA, Bartos J, Yannopoulos D, Raveendran G, Gurevich S. Effects of a mixed-reality headset on procedural outcomes in the cardiac catheterization laboratory. Cardiovasc Revasc Med 2022; 45:3-8. [PMID: 35995656 DOI: 10.1016/j.carrev.2022.08.009]
Abstract
BACKGROUND Mixed reality head-mounted displays (MR-HMD) are a novel and emerging tool in healthcare, but there is a paucity of data on their safety and efficacy in the cardiac catheterization laboratory (CCL). We sought to compare fluoroscopy time, procedure time, and complication rates for right heart catheterizations (RHCs) and coronary angiographies (CAs) performed with an MR-HMD versus standard LCD medical displays. METHODS This non-randomized trial included patients who underwent RHC and CA with an MR-HMD between August 2019 and January 2020; their outcomes were compared with a control group from the same period. The primary endpoints were procedure time, fluoroscopy time, and dose area product (DAP). The secondary endpoints were contrast volume and the rate of intra- and postprocedural complications. RESULTS Fifty patients were enrolled in the trial: 33 underwent RHC and 29 underwent diagnostic CA. They were compared with 232 patients in the control group. MR-HMD use was associated with a significantly shorter procedure time (20 min (IQR 14-30) vs. 25 min (IQR 18-36), p = 0.038). There were no significant differences in median fluoroscopy time (1.5 min (IQR 0.7-4.9) in the study group vs. 1.3 min (IQR 0.8-3.1), p = 0.84) or median DAP (165.4 mGy·cm² (IQR 13-15,583) in the study group vs. 913 mGy·cm² (IQR 24-6291), p = 0.17), and no significant increase in intra- or postprocedural complications. CONCLUSION MR-HMD use is safe and feasible and may decrease procedure time in the CCL.
83. Zhou Z, Yang Z, Jiang S, Zhuo J, Zhu T, Ma S. Surgical navigation system for hypertensive intracerebral hemorrhage based on mixed reality. J Digit Imaging 2022; 35:1530-1543. [PMID: 35819536 PMCID: PMC9712880 DOI: 10.1007/s10278-022-00676-x]
Abstract
Hypertensive intracerebral hemorrhage (HICH) is an intracerebral bleeding disease that affects 2.5 per 10,000 people worldwide each year. An effective treatment is puncture through the dura with a brain puncture drill and tube, and the accuracy of the insertion determines the quality of the surgery. In recent decades, surgical navigation systems have been widely used to improve surgical accuracy and minimize risk. Augmented reality- and mixed reality-based surgical navigation is a promising new technology for clinical use, aiming to improve the safety and accuracy of operations. In this study, we present a novel multimodal mixed reality navigation system for HICH surgery in which medical images and virtual anatomical structures are aligned intraoperatively with the patient's actual anatomy in a head-mounted device and adjusted in real time as the patient moves under local anesthesia, helping the surgeon perform intraoperative navigation intuitively. A novel registration method aligns the holographic space with an intraoperative optical tracker, and a calibration method for the HICH surgical tools allows them to be tracked in real time. Phantom experiments revealed a mean registration error of 1.03 mm and an average time consumption of 12.9 min; in clinical use, the registration error was 1.94 mm and the time consumption 14.2 min, showing that the system is sufficiently accurate and effective for clinical application.
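Registration errors like those quoted above are typically obtained by rigidly aligning corresponding fiducial points in the two coordinate frames and averaging the residual distances. The following is a minimal sketch using the standard Kabsch/SVD solution, with made-up marker positions; the paper's actual registration method is not reproduced here.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src points onto dst via the Kabsch/SVD method, as used when aligning
    a tracker's coordinate frame with holographic space."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def mean_registration_error(src, dst, R, t):
    """Mean Euclidean distance between transformed source fiducials and
    their targets (the 'registration error' reported in mm)."""
    mapped = (R @ np.asarray(src, float).T).T + t
    return float(np.linalg.norm(mapped - np.asarray(dst, float), axis=1).mean())

# Hypothetical fiducials: four markers, rotated 90 degrees about z and shifted.
src = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = (Rz @ src.T).T + np.array([5.0, 2.0, 1.0])
R, t = rigid_register(src, dst)
err = mean_registration_error(src, dst, R, t)
```

With noise-free synthetic markers the residual is numerically zero; real fiducial measurements carry localization noise, which is what the reported millimetre-scale errors reflect.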
84. Wodzinski M, Daniol M, Socha M, Hemmerling D, Stanuch M, Skalski A. Deep learning-based framework for automatic cranial defect reconstruction and implant modeling. Comput Methods Programs Biomed 2022; 226:107173. [PMID: 36257198 DOI: 10.1016/j.cmpb.2022.107173]
Abstract
BACKGROUND AND OBJECTIVE This article presents a robust, fast, and fully automatic method for personalized cranial defect reconstruction and implant modeling. METHODS We propose a two-step deep learning-based method using a modified U-Net architecture to perform the defect reconstruction, and a dedicated iterative procedure to improve the implant geometry, followed by automatic generation of models ready for 3-D printing. We propose a cross-case augmentation based on imperfect image registration that combines cases from different datasets. Additional ablation studies compare different augmentation strategies and other state-of-the-art methods. RESULTS We evaluate the method on the three datasets introduced during the AutoImplant 2021 challenge, organized jointly with the MICCAI conference. Quantitative evaluation uses the Dice coefficient, the boundary Dice coefficient, and the Hausdorff distance; averaged across all test sets, the Dice coefficient, boundary Dice coefficient, and 95th percentile of the Hausdorff distance are 0.91, 0.94, and 1.53 mm, respectively. An additional qualitative evaluation by 3-D printing and visualization in mixed reality confirms the implants' usefulness. CONCLUSION The article proposes a complete pipeline for creating a cranial implant model ready for 3-D printing. The described method is a greatly extended version of the method that took first place in all AutoImplant 2021 challenge tasks. We freely release the source code, which, together with the open datasets, makes the results fully reproducible. Automatic reconstruction of cranial defects may enable manufacturing personalized implants in significantly less time, possibly allowing the 3-D printing process to be performed directly during a given intervention. Moreover, we show the usability of the defect reconstruction in mixed reality, which may further reduce surgery time.
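For readers unfamiliar with the reported metrics, the sketch below computes the Dice coefficient and a brute-force 95th-percentile Hausdorff distance (HD95) on toy binary masks. This is illustrative Python only; real challenge evaluations run on full CT volumes and typically measure surface rather than voxel-set distances.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary volumes (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * (a & b).sum() / denom

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance between the voxel sets
    of two binary masks (brute force; fine for small toy volumes)."""
    pa = np.argwhere(a.astype(bool)) * spacing
    pb = np.argwhere(b.astype(bool)) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    forward, backward = d.min(axis=1), d.min(axis=0)  # nearest-point distances
    return float(np.percentile(np.concatenate([forward, backward]), 95))

# Toy "defect reconstruction": prediction shifted by one voxel along x.
gt = np.zeros((12, 12, 12), dtype=bool); gt[4:8, 4:8, 4:8] = True
pred = np.zeros_like(gt); pred[5:9, 4:8, 4:8] = True
```

A one-voxel shift of a 4x4x4 cube leaves 48 of 64 voxels overlapping, so the Dice score is 0.75 and the HD95 is one voxel.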
85. Mackenzie CF, Harris TE, Shipper AG, Elster E, Bowyer MW. Virtual reality and haptic interfaces for civilian and military open trauma surgery training: a systematic review. Injury 2022; 53:3575-3585. [PMID: 36123192 DOI: 10.1016/j.injury.2022.08.003]
Abstract
OBJECTIVE Virtual (VR), augmented (AR), and mixed reality (MR) and haptic interfaces make additional avenues available for surgeon assessment, guidance, and training. We evaluated applications for open trauma and emergency surgery to address the question: have new computer-supported interface developments occurred that could improve trauma training for civilian and military surgeons performing open, emergency, non-laparoscopic surgery? DESIGN Systematic literature review. SETTING AND PARTICIPANTS Faculty, University of Maryland School of Medicine, Baltimore, Maryland; Womack Army Medical Center, Fort Bragg, North Carolina; Temple University, Philadelphia, Pennsylvania; Uniformed Services University of the Health Sciences and Walter Reed National Military Medical Center, Bethesda, Maryland. METHODS Structured literature searches identified studies using terms for virtual, augmented, and mixed reality and haptics, as well as specific procedures in trauma training courses. Reporting bias was assessed. Study quality was evaluated with Kirkpatrick's level of evidence and the Machine Learning to Assess Surgical Expertise (MLASE) score. RESULTS Of 422 papers identified, 14 met inclusion criteria, together enrolling 282 subjects; 20% were surgeons, and the remainder were students, medics, and non-surgeon physicians. Study designs were weak and sample sizes low. Data analyses did not go beyond descriptive statistics, and the highest-level outcomes were procedural success and subjective self-reports, except in three studies that used validated metrics. Kirkpatrick's level of evidence was level 0 in five studies, level 1 in eight, and level 2 in one. Only one study had an MLASE score greater than 9/20. The risk of bias was high in six studies, uncertain in five, and low in three. CONCLUSIONS There was inadequate evidence that VR, AR, MR, or haptic interfaces can facilitate training for open trauma surgery or replace cadavers. Because of limited testing in surgeons, deficient study and technology design, and the risk of reporting bias, no well-designed studies of computer-supported technologies have yet shown benefit for open trauma or emergency surgery, nor has their use been shown to improve patient outcomes. Larger, more rigorously designed studies and evaluations by experienced surgeons are required across a greater variety of procedures and skills. COMPETENCIES Medical Knowledge, Practice-Based Learning and Improvement, Patient Care, Systems-Based Practice.
86. Balci D, Kirimker EO, Raptis DA, Gao Y, Kow AWC. Uses of a dedicated 3D reconstruction software with augmented and mixed reality in planning and performing advanced liver surgery and living donor liver transplantation (with videos). Hepatobiliary Pancreat Dis Int 2022; 21:455-461. [PMID: 36123242 DOI: 10.1016/j.hbpd.2022.09.001]
Abstract
The development of digital intelligent diagnostic and treatment technology has opened countless new opportunities for liver surgery, carrying it from the era of digital anatomy into a new era of digital diagnostics, virtual surgery simulation, and the use of the created scenarios in real-time surgery through mixed reality. In this article, we describe our experience developing a dedicated three-dimensional visualization and reconstruction software for surgeons, intended for advanced liver surgery and living donor liver transplantation. We also share recent developments in the field by explaining how the software has been extended from virtual reality to augmented reality and mixed reality.
87. Leong SC, Tang YM, Toh FM, Fong KNK. Examining the effectiveness of virtual, augmented, and mixed reality (VAMR) therapy for upper limb recovery and activities of daily living in stroke patients: a systematic review and meta-analysis. J Neuroeng Rehabil 2022; 19:93. [PMID: 36002898 PMCID: PMC9404551 DOI: 10.1186/s12984-022-01071-x]
Abstract
INTRODUCTION Virtual reality (VR), augmented reality (AR), and mixed reality (MR) are emerging technologies in stroke rehabilitation with the potential to overcome the limitations of conventional treatment. Enhancing upper limb (UL) function is critical because the upper limb is involved in the majority of activities of daily living (ADL). METHODS This study reviewed the use of virtual, augmented, and mixed reality (VAMR) methods for improving UL recovery and ADL and compared the effectiveness of VAMR treatment with conventional rehabilitation therapy. The ScienceDirect, PubMed, IEEE Xplore, and Web of Science databases were searched, and 50 randomized controlled trials comparing VAMR treatment with standard therapy were identified. Random-effects or fixed-effect models were applied depending on heterogeneity. RESULTS The most frequently used outcomes for UL recovery and ADL in stroke rehabilitation were the Fugl-Meyer Assessment for Upper Extremities (FMA-UE), followed by the Box and Block Test (BBT), the Wolf Motor Function Test (WMFT), and the Functional Independence Measure (FIM). According to the meta-analysis, VR, AR, and MR all have a significant positive effect on FMA-UE for UL impairment (36 studies, MD = 3.91, 95% CI 1.70 to 6.12, P = 0.0005) and on FIM for ADL (10 studies, MD = 4.25, 95% CI 1.47 to 7.03, P = 0.003), but not on BBT and WMFT for the UL function tests (16 studies, MD = 2.07, 95% CI -0.58 to 4.72, P = 0.13). CONCLUSIONS VAMR therapy was superior to conventional treatment for UL impairment and daily function outcomes, but not for UL function measures. Future studies might include further high-quality trials examining the effect of VR, AR, and MR on UL function measures, with an emphasis on subgroup meta-analysis by stroke type and recovery stage.
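The fixed- and random-effects pooling the methods mention can be sketched with inverse-variance weighting plus the DerSimonian-Laird estimate of between-study variance. This is illustrative Python; the mean differences and standard errors below are hypothetical, not the review's data.

```python
import numpy as np

def pool_mean_difference(md, se, model="random"):
    """Inverse-variance pooled mean difference with a 95% CI.
    'fixed' weights each study by 1/se^2; 'random' additionally adds the
    DerSimonian-Laird between-study variance tau^2 to each study's
    variance, the usual choice under heterogeneity."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2
    pooled_fixed = (w * md).sum() / w.sum()
    if model == "random":
        q = (w * (md - pooled_fixed) ** 2).sum()        # Cochran's Q
        c = w.sum() - (w**2).sum() / w.sum()
        tau2 = max(0.0, (q - (len(md) - 1)) / c)        # DerSimonian-Laird
        w = 1.0 / (se**2 + tau2)
    pooled = (w * md).sum() / w.sum()
    se_pooled = np.sqrt(1.0 / w.sum())
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical FMA-UE mean differences and standard errors from 3 trials.
est, ci = pool_mean_difference([3.0, 5.0, 2.0], [1.0, 1.5, 2.0])
```

When the studies are homogeneous (Q no larger than its degrees of freedom), tau² truncates to zero and the random-effects estimate coincides with the fixed-effect one.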
88. Are extended reality technologies (ERTs) more effective than traditional anatomy education methods? Surg Radiol Anat 2022; 44:1215-1218. [PMID: 35951086 DOI: 10.1007/s00276-022-02998-5]
Abstract
PURPOSE Reviews and meta-analyses concerning the effectiveness of extended reality technologies (ERTs) (namely virtual, augmented, and mixed reality: VR, AR, and MR) in anatomy education (AE) have reached conflicting conclusions. The current review examines the evidence that reviews of the AE literature provide on the effectiveness of ERTs compared with traditional (either cadaveric or two-dimensional) anatomy teaching modalities, and sheds light on the factors behind the conflicting outcomes. METHODS The PubMed, SCOPUS, ERIC, and Cochrane databases were searched for review articles that investigated the effectiveness of ERTs in AE. RESULTS Nine reviews were included (four systematic, with or without meta-analysis, and five non-systematic). These reviews provided little robust evidence, mainly because of considerable confusion in the definition of each ERT and in what authors meant by traditional AE (TAE) methods. CONCLUSIONS To clarify to what extent VR, AR, or MR can replace or supplement TAE methods, the first need is to settle the definition of each technology and to specify which TAE methods are used as comparators.
89. Tokunaga T, Sugimoto M, Saito Y, Kashihara H, Yoshikawa K, Nakao T, Nishi M, Takasu C, Wada Y, Yoshimoto T, Yamashita S, Iwakawa Y, Yokota N, Shimada M. Intraoperative holographic image-guided surgery in a transanal approach for rectal cancer. Langenbecks Arch Surg 2022; 407:2579-2584. [PMID: 35840706 DOI: 10.1007/s00423-022-02607-4]
Abstract
PURPOSE Urethral injury is one of the most important complications of transanal total mesorectal excision (TaTME) in male patients with rectal cancer. The purpose of this study was to investigate holographic image-guided surgery in TaTME. METHODS Polygon (stereolithography) files were created and exported from SYNAPSE VINCENT and uploaded into the Holoeyes MD system (Holoeyes Inc., Tokyo, Japan), which automatically converted the three-dimensional image into a case-specific hologram. The hologram was then loaded onto the HoloLens head-mounted display (Microsoft Corporation, Redmond, WA), which the surgeons and assistants wore while performing TaTME. RESULTS In a Wi-Fi-enabled operating room, each surgeon, wearing a HoloLens, shared the same hologram and adjusted it with simple hand gestures from their respective viewing angles. The hologram contributed to a better grasp of the positional relationships between the urethra and the surrounding pelvic organs during surgery, and all surgeons were able to determine the dissection line properly. CONCLUSIONS This first experience suggests that intraoperative holograms can reduce the risk of urethral injury and improve understanding of transanal anatomy. Intraoperative holograms have the potential to become a next-generation surgical support tool for spatial awareness and information sharing between surgeons.
90. Harel R, Anekstein Y, Raichel M, Molina CA, Ruiz-Cardozo MA, Orrú E, Khan M, Mirovsky Y, Smorgick Y. The XVS system during open spinal fixation procedures in patients requiring pedicle screw placement in the lumbosacral spine. World Neurosurg 2022; 164:e1226-e1232. [PMID: 35671991 DOI: 10.1016/j.wneu.2022.05.134]
Abstract
OBJECTIVE This pilot study evaluated the safety, performance, and usability of the Xvision-Spine (XVS) System (Augmedics, Arlington Heights, IL) during open spinal fixation procedures in patients requiring pedicle screw placement in the lumbosacral spine. METHODS The XVS System is an augmented reality head-mounted display (HMD)-based computer navigation system designed to assist surgeons in accurately placing pedicle screws. An HMD-mounted tracking camera provides optical tracking, and the surgeon sees a translucent, direct near-eye display of the navigated surgical instrument's location relative to the computed tomographic image. We report the preliminary results of a prospective series of all consecutive patients who underwent augmented reality-assisted pedicle screw placement in the lumbosacral vertebrae at three institutions. Clinical accuracy for each pedicle screw was graded with Gertzbein-Robbins scores by two independent, blinded neuroradiologists. RESULTS The 19 study participants included 8 men and 11 women with mean ages of 59.13 ± 12.09 and 59.91 ± 12.89 years, respectively. Seventeen procedures were successfully completed with the XVS System; two were not completed because of technical issues with the system's intraoperative scanner. A total of 86 screws were inserted. CONCLUSIONS The XVS System placed pedicle screws in the lumbosacral vertebrae with an overall accuracy of 97.7%, a preliminary result comparable to the accuracy reported in the literature for other manual computer-assisted navigation systems.
91. A dedicated tool for presurgical mapping of brain tumors and mixed-reality navigation during neurosurgery. J Digit Imaging 2022; 35:704-713. [PMID: 35230562 PMCID: PMC9156583 DOI: 10.1007/s10278-022-00609-8]
Abstract
Brain tumor surgery requires a delicate tradeoff between complete removal of neoplastic tissue and minimal loss of brain function. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) have emerged as valuable tools for non-invasive assessment of human brain function and are now used to determine brain regions that should be spared to prevent functional impairment after surgery. However, image analysis requires several software packages, mainly developed for research purposes and often difficult to use in a clinical setting, which has prevented the large-scale adoption of presurgical mapping. We developed specialized software that implements automatic analysis of multimodal MRI presurgical mapping in a single application and transfers the results to the neuronavigator. Moreover, the imaging results are integrated into a commercially available wearable device using an optimized mixed-reality approach that automatically anchors three-dimensional holograms obtained from MRI to the patient's physical head. This allows the surgeon to virtually explore deeper tissue layers, highlighting critical brain structures that must be preserved, while retaining natural oculo-manual coordination. The enhanced ergonomics of this procedure is expected to significantly improve the accuracy and safety of surgery, with large benefits for health care systems and related industrial investors.
|
92
|
Jean WC. Virtual and Augmented Reality in Neurosurgery: The Evolution of its Application and Study Designs. World Neurosurg 2022; 161:459-464. [PMID: 35505566 DOI: 10.1016/j.wneu.2021.08.150] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Revised: 08/30/2021] [Accepted: 08/31/2021] [Indexed: 10/18/2022]
Abstract
BACKGROUND As the art of neurosurgery evolves in the 21st century, more emphasis is placed on minimally invasive techniques, which require technical precision. Simultaneously, the reduction in training hours continues, and teachers of neurosurgery face "double jeopardy"-with harder skills to teach and less time to teach them. Mixed reality appears as the neurosurgical educators' natural ally: virtual reality facilitates the learning of spatial relationships and permits rehearsal of skills, while augmented reality can make procedures safer and more efficient. Little wonder, then, that the body of literature on mixed reality in neurosurgery has grown exponentially. METHODS Publications involving virtual and augmented reality in neurosurgery were examined. A total of 414 papers were included, categorized according to study design, and analyzed. RESULTS Half of the papers were published within the last 3 years alone. Whereas in the earlier half most of the publications involved experiments in virtual reality simulation and the efficacy of skills acquisition, many of the more recent publications are proof-of-concept studies. This attests to the evolution of mixed reality in neurosurgery. As the technology advances, neurosurgeons are finding more applications, both in training and in clinical practice. CONCLUSIONS With parallel advancement in Internet speed and artificial intelligence, the utilization of mixed reality will permeate neurosurgery. From solving staffing problems in global neurosurgery, to mitigating the deleterious effect of duty-hour reductions, to improving individual operations, mixed reality will have a positive effect on many aspects of neurosurgery.
|
93
|
Steiert C, Behringer SP, Kraus LM, Bissolo M, Demerath T, Beck J, Grauvogel J, Reinacher PC. Augmented reality-assisted craniofacial reconstruction in skull base lesions - an innovative technique for single-step resection and cranioplasty in neurosurgery. Neurosurg Rev 2022; 45:2745-2755. [PMID: 35441994 PMCID: PMC9349131 DOI: 10.1007/s10143-022-01784-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 03/19/2022] [Accepted: 03/30/2022] [Indexed: 10/31/2022]
Abstract
Defects of the cranial vault often require cosmetic reconstruction with patient-specific implants, particularly in cases of craniofacial involvement. However, fabrication takes time and is expensive; therefore, efforts must be made to develop more rapidly available and more cost-effective alternatives. The current study investigated the feasibility of an augmented reality (AR)-assisted single-step procedure for repairing bony defects involving the facial skeleton and the skull base. In an experimental setting, nine neurosurgeons fabricated AR-assisted and conventionally shaped ("freehand") implants from polymethylmethacrylate (PMMA) on a skull model with a craniofacial bony defect. Deviations of the surface profile in comparison with the original model were quantified by volumetry, and the cosmetic results were evaluated using a multicomponent scoring system, each assessed by two blinded neurosurgeons. Handling the AR equipment proved to be quite comfortable. The median volume deviating from the surface profile of the original model was low in the AR-assisted implants (6.40 cm3) and significantly reduced in comparison with the conventionally shaped implants (13.48 cm3). The cosmetic appearance of the AR-assisted implants was rated as very good (median 25.00 out of 30 points) and significantly improved in comparison with the conventionally shaped implants (median 14.75 out of 30 points). Our experiments showed outstanding results regarding the possibilities of AR-assisted procedures for single-step reconstruction of craniofacial defects. Although patient-specific implants still represent the gold standard in esthetic terms, AR-assisted procedures hold high potential as an immediately and widely available, cost-effective alternative providing excellent cosmetic outcomes.
|
94
|
Dolega-Dolegowski D, Proniewska K, Dolega-Dolegowska M, Pregowska A, Hajto-Bryk J, Trojak M, Chmiel J, Walecki P, Fudalej PS. Application of holography and augmented reality based technology to visualize the internal structure of the dental root - a proof of concept. Head Face Med 2022; 18:12. [PMID: 35382839 PMCID: PMC8981712 DOI: 10.1186/s13005-022-00307-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Accepted: 01/18/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Augmented Reality (AR) blends digital information with the real world. Using cameras, sensors, and displays, it can supplement the physical world with holographic images. Nowadays, the applications of AR range from navigated surgery to vehicle navigation. DEVELOPMENT The purpose of this feasibility study was to develop an AR holographic system implementing Vertucci's classification of dental root morphology to facilitate the study of tooth anatomy. It was tailored to run on the AR HoloLens 2 (Microsoft) glasses. The 3D tooth models were created in Autodesk Maya and exported to Unity software. The holograms of dental roots can be projected in the natural setting of a dental office. The application allowed 3D objects to be displayed in such a way that they could be rotated, zoomed in/out, and penetrated. The advantage of the proposed approach was that students could learn the 3D internal anatomy of teeth without environmental visual restrictions. CONCLUSIONS It is feasible to visualize internal dental root anatomy with an AR holographic system. AR holograms appear to be an attractive adjunct for learning root anatomy.
|
95
|
Kang YJ, Kang Y. Mixed reality-based online interprofessional education: a case study in South Korea. KOREAN JOURNAL OF MEDICAL EDUCATION 2022; 34:63-69. [PMID: 35255617 PMCID: PMC8906924 DOI: 10.3946/kjme.2022.220] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 10/13/2021] [Accepted: 11/09/2021] [Indexed: 05/26/2023]
Abstract
PURPOSE The purpose of this study was to explore undergraduate medical and nursing students' satisfaction with their mixed reality (MR)-based online interprofessional learning experience in South Korea. METHODS This study used a case study design. A convenience sample of 30 participants (i.e., 15 third-year medical students and 15 fourth-year nursing students) participated in a 120-minute MR-based online interprofessional education (IPE) session that consisted of visualization of a holographic standardized patient with ischemic stroke, an online interprofessional activity, and debriefing and reflection sessions. Following the MR-based online IPE, data were collected through the Modified Satisfaction with Simulation Experience Scale survey and were analyzed using descriptive analyses and independent t-tests. RESULTS Although both medical and nursing students were highly satisfied with the MR-based online interprofessional learning experience, nursing students were significantly more satisfied with it than medical students. CONCLUSION These results suggest that the integration of MR and an online approach through a structured clinical reasoning process in undergraduate health professions programs can be used as an educational strategy to improve clinical reasoning and critical thinking and to promote interprofessional understanding.
|
96
|
Xi N, Chen J, Gama F, Riar M, Hamari J. The challenges of entering the metaverse: An experiment on the effect of extended reality on workload. INFORMATION SYSTEMS FRONTIERS : A JOURNAL OF RESEARCH AND INNOVATION 2022; 25:659-680. [PMID: 35194390 PMCID: PMC8852991 DOI: 10.1007/s10796-022-10244-x] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 01/13/2022] [Indexed: 06/14/2023]
Abstract
Information technologies exist to enable us to either do things we have not done before or do familiar things more efficiently. The metaverse (i.e. extended reality: XR) enables novel forms of engrossing telepresence, but it may also make mundane tasks less effortful. Such technologies increasingly facilitate our work, education, healthcare, consumption and entertainment; at the same time, however, the metaverse brings a host of challenges. Therefore, we pose the question of whether XR technologies, specifically Augmented Reality (AR) and Virtual Reality (VR), increase or decrease the difficulty of carrying out everyday tasks. In the current study we conducted a 2 (AR: with vs. without) × 2 (VR: with vs. without) between-subjects experiment where participants faced a shopping-related task (including navigation, movement, hand interaction, information processing, information searching, storing, decision making, and simple calculation) to examine a proposed series of hypotheses. The NASA Task Load Index (NASA-TLX) was used to measure subjective workload when using an XR-mediated information system, including the six sub-dimensions of frustration, performance, effort, physical, mental, and temporal demand. The findings indicate that AR was significantly associated with overall workload, especially mental demand and effort, while VR had no significant effect on any workload sub-dimension. There was a significant interaction effect between AR and VR on physical demand, effort, and overall workload. The results imply that the resources and cost of operating in XR-mediated realities are different from, and higher than, those of physical reality.
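The overall NASA-TLX workload the study reports is, in the unweighted ("raw TLX") variant, simply the mean of the six subscale ratings. The sketch below illustrates that scoring for the 2 × 2 (AR × VR) design; the subscale names follow the abstract, but all ratings are invented for illustration.

```python
from statistics import mean

# The six NASA-TLX sub-dimensions named in the abstract; ratings on a 0-100 scale.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict) -> float:
    """Unweighted ('raw TLX') overall workload: the mean of the six subscale ratings."""
    return mean(ratings[s] for s in SUBSCALES)

# Hypothetical ratings for one participant per cell of the 2x2 between-subjects design.
cells = {
    ("AR", "VR"):       dict(zip(SUBSCALES, [70, 55, 50, 40, 65, 45])),
    ("AR", "no-VR"):    dict(zip(SUBSCALES, [65, 40, 45, 35, 60, 40])),
    ("no-AR", "VR"):    dict(zip(SUBSCALES, [50, 35, 40, 30, 45, 30])),
    ("no-AR", "no-VR"): dict(zip(SUBSCALES, [45, 30, 35, 30, 40, 25])),
}

for cell, ratings in cells.items():
    print(cell, round(raw_tlx(ratings), 1))
```

With many participants per cell, the AR and VR main effects and their interaction on these scores would then be tested with a two-way ANOVA, as in the study.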
|
97
|
Zhu LY, Hou JC, Yang L, Liu ZR, Tong W, Bai Y, Zhang YM. Application value of mixed reality in hepatectomy for hepatocellular carcinoma. World J Gastrointest Surg 2022; 14:36-45. [PMID: 35126861 PMCID: PMC8790326 DOI: 10.4240/wjgs.v14.i1.36] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Revised: 11/29/2021] [Accepted: 12/25/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND As a new digital holographic imaging technology, mixed reality (MR) technology has unique advantages in determining the liver anatomy and location of tumor lesions. With the popularization of 5G communication technology, MR shows great potential in preoperative planning and intraoperative navigation, making hepatectomy more accurate and safer.
AIM To evaluate the application value of MR technology in hepatectomy for hepatocellular carcinoma (HCC).
METHODS The clinical data of 95 patients who underwent open hepatectomy for HCC between June 2018 and October 2020 at our hospital were analyzed retrospectively. Patients were selected according to predefined inclusion and exclusion criteria. In 38 patients, hepatectomy was assisted by MR (Group A), and the remaining 57 patients underwent traditional hepatectomy without MR (Group B). The perioperative outcomes of the two groups were collected and compared to evaluate the application value of MR in hepatectomy for patients with HCC.
RESULTS We summarized the technical process of MR-assisted hepatectomy in the treatment of HCC. Compared to traditional hepatectomy in Group B, MR-assisted hepatectomy in Group A yielded a shorter operation time (202.86 ± 46.02 min vs 229.52 ± 57.13 min, P = 0.003), less blood loss (329.29 ± 97.31 mL vs 398.23 ± 159.61 mL, P = 0.028), and a shorter portal vein occlusion time (17.71 ± 4.16 min vs 21.58 ± 5.24 min, P = 0.019). Group A had lower alanine aminotransferase and higher albumin values on the third day after the operation (119.74 ± 29.08 U/L vs 135.53 ± 36.68 U/L, P = 0.029 and 33.60 ± 3.21 g/L vs 31.80 ± 3.51 g/L, P = 0.014, respectively). Total postoperative complications and hospitalization days in Group A were significantly lower than in Group B [14 (37.84%) vs 35 (60.34%), P = 0.032 and 12.05 ± 4.04 d vs 13.78 ± 4.13 d, P = 0.049, respectively].
CONCLUSION MR has some application value in three-dimensional visualization of the liver, surgical planning, and intraoperative navigation during hepatectomy, and it significantly improves the perioperative outcomes of hepatectomy for HCC.
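The group comparisons above are reported as means ± SD with p-values; the abstract does not name the exact test, but a Welch two-sample t statistic computed from the reported summary statistics is one plausible way to sanity-check such figures. The sketch below applies it to the operation-time comparison, under that assumption.

```python
from math import sqrt

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch two-sample t statistic and Welch-Satterthwaite degrees of freedom,
    computed from each group's mean, standard deviation, and size."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Operation time (min) as reported in the abstract: Group A (MR, n=38) vs Group B (n=57).
t, df = welch_t(202.86, 46.02, 38, 229.52, 57.13, 57)
print(round(t, 2), round(df, 1))
```

The two-sided p-value would then come from the t-distribution with `df` degrees of freedom (e.g. via `scipy.stats.t`); the negative t statistic reflects the shorter operation time in the MR group.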
|
98
|
Elawady M, Sarhan A, Alshewimy MAM. Toward a mixed reality domain model for time-Sensitive applications using IoE infrastructure and edge computing (MRIoEF). THE JOURNAL OF SUPERCOMPUTING 2022; 78:10656-10689. [PMID: 35095192 PMCID: PMC8785157 DOI: 10.1007/s11227-022-04307-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 12/31/2021] [Indexed: 06/14/2023]
Abstract
Mixed reality (MR) is one of the technologies with many challenges in the design and implementation phases, especially the problems associated with time-sensitive applications. The main objective of this paper is to introduce a conceptual model for MR applications that adds a new layer of interactivity by using Internet of Things/Internet of Everything models, providing an improved quality of experience for end-users. The model supports cloud and fog compute layers to provide functionality that requires more processing resources and to reduce latency for time-sensitive applications. Validation of the proposed model is performed by demonstrating a prototype applied to a real-time case study and discussing how to enable standard technologies for the various components in the model. Moreover, it shows the applicability of the model, the ease of defining roles, and the coherence of data and processes found in the most common applications.
|
99
|
Mixed Reality Needle Guidance Application on Smartglasses Without Pre-procedural CT Image Import with Manually Matching Coordinate Systems. Cardiovasc Intervent Radiol 2022; 45:349-356. [PMID: 35022858 DOI: 10.1007/s00270-021-03029-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Accepted: 10/28/2021] [Indexed: 11/02/2022]
Abstract
PURPOSE To develop and assess the accuracy of a mixed reality (MR) needle guidance application on smartglasses. MATERIALS AND METHODS An MR needle guidance application on HoloLens 2 was developed that manually matches the spatial and MR coordinate systems, without pre-procedural CT image reconstruction or import. First, the accuracy of target locations in the image overlay (63 points arranged on a 45 × 35 × 21 cm box) and of needle angles from 0° to 80°, placed using the MR application, was verified. Then, needle placement errors from 12 different entry points in a phantom by seven operators (four physicians and three non-physicians) were compared between MR guidance and the conventional method using protractors, with a linear mixed model. RESULTS The average errors of the target locations and needle angles placed using the MR application were 5.9 ± 2.6 mm and 2.3 ± 1.7°, respectively. The average needle insertion error with MR guidance was slightly smaller than with the conventional method (8.4 ± 4.0 mm vs. 9.6 ± 5.1 mm, p = 0.091), particularly in the out-of-plane approach (9.6 ± 3.5 mm vs. 12.3 ± 4.6 mm, p = 0.003). The procedural time was longer with MR guidance than with the conventional method (412 ± 134 s vs. 219 ± 66 s, p < 0.001). CONCLUSION MR needle guidance without pre-procedural CT image import is feasible when the coordinate systems are matched manually, and the accuracy of needle insertion is slightly better than that of the conventional method.
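The study fit a linear mixed model so that repeated insertions by the same operator are not treated as independent. A simplified stand-in for that idea, comparing within-operator mean errors with a paired t statistic, can be sketched in plain Python; the seven operators match the abstract, but every error value below is invented for illustration.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-operator mean needle-insertion errors (mm) under each method,
# one pair per operator, standing in for the study's 7 operators x 12 entry points.
errors = {
    "op1": {"MR": 7.9, "conventional": 9.1},
    "op2": {"MR": 8.8, "conventional": 10.2},
    "op3": {"MR": 8.1, "conventional": 9.4},
    "op4": {"MR": 9.0, "conventional": 10.8},
    "op5": {"MR": 7.5, "conventional": 8.7},
    "op6": {"MR": 8.6, "conventional": 9.9},
    "op7": {"MR": 8.9, "conventional": 9.3},
}

# Within-operator differences: negative values favour MR guidance.
diffs = [e["MR"] - e["conventional"] for e in errors.values()]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
print(round(mean(diffs), 2), round(t, 2))
```

A full mixed model (e.g. random intercepts per operator over all individual insertions, as with `statsmodels` `MixedLM`) generalizes this by using every insertion rather than per-operator means.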
|
100
|
Aslan S, Agrawal A, Alyuz N, Chierichetti R, Durham LM, Manuvinakurike R, Okur E, Sahay S, Sharma S, Sherry J, Raffa G, Nachman L. Exploring Kid Space in the wild: a preliminary study of multimodal and immersive collaborative play-based learning experiences. EDUCATIONAL TECHNOLOGY RESEARCH AND DEVELOPMENT : ETR & D 2022; 70:205-230. [PMID: 35035182 PMCID: PMC8741584 DOI: 10.1007/s11423-021-10072-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 11/18/2021] [Indexed: 06/14/2023]
Abstract
Parents recognize the potential benefits of technology for their young children but are wary of too much screen time and its potential deficits in terms of social engagement and physical activity. To address these concerns, related literature suggests technology use that blends digital and physical learning experiences. Towards this end, we developed Kid Space, incorporating immersive computing experiences designed to engage children more actively in physical movement and social collaboration during play-based learning. The technology features an animated peer learner, Oscar, who aims to understand and respond to children's actions and utterances using extensive multimodal sensing and sensemaking technologies. To investigate student engagement during Kid Space learning experiences, an exploratory case study was designed using a formative research method with eight first-grade students. Multimodal data (audio and video) along with observational, interview, and questionnaire data were collected and analyzed. The results show that the students demonstrated high levels of engagement, less attention focused on the screen (projected wall), and more physical activity. In addition to these promising results, the study also yielded actionable insights for improving Kid Space in future deployments (e.g., the need for real-time personalization). We plan to incorporate the lessons learned from this preliminary study and deploy Kid Space with real-time personalization features for longer periods with more students.
|