1
Tabernée Heijtmeijer S, Glas H, Janssen N, Vosselman N, de Visscher S, Spijkervet F, Raghoebar G, de Bree R, Rosenberg A, Witjes M, Kraeima J. Accuracy of augmented reality navigated surgery for placement of zygomatic implants: a human cadaver study. PeerJ 2024; 12:e18468. [PMID: 39670105] [PMCID: PMC11636531] [DOI: 10.7717/peerj.18468]
Abstract
Purpose: Placement of zygomatic implants in the optimal prosthetic position is considered challenging due to the limited bone mass of the zygoma, limited visibility, the length of the drilling path, and the proximity of critical anatomical structures. Augmented reality (AR) navigation can eliminate some of the disadvantages of surgical guides and conventional surgical navigation, while potentially improving accuracy. In this human cadaver study, we evaluated a newly developed AR navigation approach for the placement of zygomatic implants after total maxillectomy. Methods: The developed AR navigation interface connects a commercial navigation system with the Microsoft HoloLens. AR-navigated surgery was performed to place 20 zygomatic implants in five human cadaver skulls after total maxillectomy. To determine accuracy, postoperative scans were virtually matched with the preoperative three-dimensional virtual surgical planning, and deviations (in mm) at the entry and exit points as well as angular deviations were calculated as outcome measures. Results were compared with a previously conducted study in which zygomatic implants were positioned with 3D-printed surgical guides. Results: The mean entry point deviation was 2.43 ± 1.33 mm, and the mean 3D angle deviation was 5.80 ± 4.12° (range 1.39-19.16°). The mean exit point deviation was 3.28 ± 2.17 mm. The abutment height deviation was on average 2.20 ± 1.35 mm, and the accuracy of the abutment in the occlusal plane was 4.13 ± 2.53 mm. Surgical guides performed significantly better for the entry point (P = 0.012) and 3D angle (P = 0.05); however, there was no significant difference in exit-point accuracy (P = 0.143) between 3D-printed drill guides and AR-navigated surgery. Conclusion: Despite the higher precision of surgical guides, AR navigation demonstrated acceptable accuracy, with potential for improvement and specialized applications.
The study highlights the feasibility of AR navigation for zygomatic implant placement, offering an alternative to conventional methods.
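The entry-point, exit-point, and 3D angle deviations reported above can be computed from the planned and achieved implant axes. A minimal sketch of such a computation (the function name and coordinates are illustrative, not taken from the study):

```python
import numpy as np

def implant_deviation(planned_entry, planned_exit, actual_entry, actual_exit):
    """Return entry/exit point deviations (mm) and the 3D angular
    deviation (degrees) between planned and achieved implant axes."""
    planned_entry = np.asarray(planned_entry, dtype=float)
    planned_exit = np.asarray(planned_exit, dtype=float)
    actual_entry = np.asarray(actual_entry, dtype=float)
    actual_exit = np.asarray(actual_exit, dtype=float)

    # Point deviations: Euclidean distance between planned and actual points.
    entry_dev = np.linalg.norm(actual_entry - planned_entry)
    exit_dev = np.linalg.norm(actual_exit - planned_exit)

    # 3D angle between the two implant axes (entry -> exit direction vectors).
    v_planned = planned_exit - planned_entry
    v_actual = actual_exit - actual_entry
    cos_a = np.dot(v_planned, v_actual) / (
        np.linalg.norm(v_planned) * np.linalg.norm(v_actual))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return entry_dev, exit_dev, angle
```

For example, an implant translated 1 mm sideways but parallel to the plan yields 1 mm entry and exit deviations with a 0° angular deviation.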
Affiliation(s)
- Sander Tabernée Heijtmeijer
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- 3D-Lab, University Medical Center Groningen, Groningen, Netherlands
- Haye Glas
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- Nard Janssen
- Department of Oral and Maxillofacial Surgery & Special Dental Care, Utrecht University Medical Center, Utrecht, Netherlands
- Nathalie Vosselman
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- Sebastiaan de Visscher
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- Fred Spijkervet
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- Gerry Raghoebar
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- Remco de Bree
- Department of Head and Neck Surgical Oncology, University Medical Center Utrecht, Utrecht, Netherlands
- Antoine Rosenberg
- Department of Oral and Maxillofacial Surgery & Special Dental Care, Utrecht University Medical Center, Utrecht, Netherlands
- Max Witjes
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- Joep Kraeima
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, Netherlands
- 3D-Lab, University Medical Center Groningen, Groningen, Netherlands
2
Puleio F, Tosco V, Pirri R, Simeone M, Monterubbianesi R, Lo Giudice G, Lo Giudice R. Augmented Reality in Dentistry: Enhancing Precision in Clinical Procedures-A Systematic Review. Clin Pract 2024; 14:2267-2283. [PMID: 39585006] [PMCID: PMC11587009] [DOI: 10.3390/clinpract14060178]
Abstract
Background: Augmented reality (AR) enhances sensory perception by adding extra information, improving anatomical localization and simplifying treatment views. In dentistry, digital planning on two-dimensional screens lacks real-time feedback, leading to potential errors, and it is not clear whether AR can improve the precision of clinical treatment. The aim of this research was to evaluate whether the use of AR-based instruments could improve the precision of dental procedures. Methods: This review covered studies from January 2018 to June 2023 focusing on AR in dentistry. The PICO question was "Does AR increase the precision of dental interventions compared to non-AR techniques?". The systematic review was carried out on electronic databases, including Ovid MEDLINE, PubMed, and the Web of Science, with the following inclusion criterion: studies comparing the precision of interventions carried out with AR instruments versus non-AR techniques. Results: Thirteen studies were included. Conclusions: The results of this systematic review demonstrate that AR enhances the precision of various dental procedures. The authors advise clinicians to use AR-based tools to improve the precision of their therapies.
Affiliation(s)
- Francesco Puleio
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, Messina University, 98100 Messina, Italy
- Vincenzo Tosco
- Department of Clinical Sciences and Stomatology (DISCO), Università Politecnica delle Marche, 60126 Ancona, Italy
- Michele Simeone
- Department of Neuroscience, Reproductive Science and Dentistry, University of Naples Federico II, 80138 Naples, Italy
- Riccardo Monterubbianesi
- Department of Clinical Sciences and Stomatology (DISCO), Università Politecnica delle Marche, 60126 Ancona, Italy
- Giorgio Lo Giudice
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, Messina University, 98100 Messina, Italy
- Roberto Lo Giudice
- Department of Human Pathology of Adults and Developmental Age, University of Messina, 98100 Messina, Italy
3
Zhang X, Liu D, Liu S, Cai Y, Shan L, Chen C, Chen H, Liu Y, Guo T, Chen H. Toward Intelligent Display with Neuromorphic Technology. Adv Mater 2024; 36:e2401821. [PMID: 38567884] [DOI: 10.1002/adma.202401821]
Abstract
In the era of the Internet and the Internet of Things, display technology has evolved significantly toward full-scene and realistic display. Incorporating "intelligence" into displays is a crucial technical approach to meeting the demands of this development. Traditional display technology relies on distributed hardware systems to achieve intelligent displays but encounters challenges stemming from the physical separation of the sensing, processing, and light-emitting modules: high energy consumption and data-transformation delays have limited the development of intelligent displays, and breaking this physical separation is crucial to overcoming the technology's bottlenecks. Inspired by the biological neural system, neuromorphic technology with all-in-one features is widely employed across various fields. It proves effective in reducing system power consumption, facilitating frequent data transformation, and enabling cross-scene integration. Neuromorphic technology therefore shows great potential to overcome these bottlenecks and realize full-scene, realistic display with high efficiency and low power consumption. This review offers a comprehensive summary of recent advancements in the application of neuromorphic technology in displays, with a focus on interoperability. It delves into state-of-the-art designs and potential future developments aimed at revolutionizing display technology.
Affiliation(s)
- Xianghong Zhang
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Di Liu
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Shuai Liu
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Yongjie Cai
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Liuting Shan
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Cong Chen
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Huimei Chen
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Yaqian Liu
- School of Electronics and Information, Zhengzhou University of Light Industry, Zhengzhou, Henan, 450002, China
- Tailiang Guo
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
- Huipeng Chen
- Institute of Optoelectronic Display, National and Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou, 350002, China
- Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou, Fujian, 350100, China
4
Wu Y, Pan C, Lu C, Zhang Y, Zhang L, Huang Z. Augmented reality display with high eyebox uniformity over the full field of view based on a random mask grating. Opt Express 2024; 32:17409-17423. [PMID: 38858925] [DOI: 10.1364/oe.521992]
Abstract
Ensuring uniform illuminance in waveguide-based augmented reality (AR) display devices is crucial for providing an immersive and comfortable visual experience. However, there is a lack of a straightforward and efficient design method available to achieve illuminance uniformity over the full field of view. To address this issue, we propose a novel design that utilizes random mask gratings (RMGs) as the folding grating and the out-coupling grating. Unlike traditional approaches that modify the grating structure, we control the diffraction efficiency distribution by adjusting the filling factor of the mask while keeping the grating structure unchanged in one RMG. The grating structures are designed and optimized based on rigorous coupled wave analysis and particle swarm optimization. The feasibility of our method is verified by the simulation results in Lighttools. In the FOV range of 20°×15°, the eyebox uniformities of all fields are greater than 0.78, which can provide a good visual experience for users.
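The eyebox uniformity figure quoted above (greater than 0.78) is commonly defined as the ratio of minimum to maximum illuminance sampled across the eyebox; the abstract does not state its exact formula, so this sketch is an assumption:

```python
def eyebox_uniformity(illuminances):
    """Min/max illuminance ratio over eyebox sample points.
    1.0 means perfectly uniform. Assumed metric; the paper does not
    spell out its exact definition."""
    samples = list(illuminances)
    if not samples or min(samples) < 0 or max(samples) == 0:
        raise ValueError("need non-negative samples with a positive maximum")
    return min(samples) / max(samples)
```

For instance, sampled illuminances of 0.9, 1.0, and 0.95 (arbitrary units) give a uniformity of 0.9.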
5
Cattari N, Cutolo F, La Placa L, Ferrari V. Visualization modality for augmented reality guidance of in-depth tumour enucleation procedures. Healthc Technol Lett 2024; 11:101-107. [PMID: 38638490] [PMCID: PMC11022226] [DOI: 10.1049/htl2.12058]
Abstract
Recent studies have reported that employing wearable augmented reality (AR) systems, such as head-mounted displays, for the in situ visualisation of ultrasound (US) images can improve the outcomes of US-guided biopsies through reduced procedure completion times and improved accuracy. Here, the authors continue in this direction and present the first AR system for guiding an in-depth tumour enucleation procedure under US guidance. The system features an innovative visualisation modality with cutting trajectories that 'sink' into the tissue according to the depth reached by the electric scalpel, tracked in real time, and a virtual-to-virtual alignment between the scalpel's tip and the trajectory. The system estimates the scalpel's tip position with high accuracy (mean depth error of 0.4 mm and mean radial error of 1.34 mm). Furthermore, a preliminary user study demonstrated that the system can successfully guide an in-depth tumour enucleation procedure (i.e., preserving the safety margin around the lesion).
Affiliation(s)
- Nadia Cattari
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- EndoCAS Centre, University of Pisa, Pisa, Tuscany, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- EndoCAS Centre, University of Pisa, Pisa, Tuscany, Italy
- Luciana La Placa
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- EndoCAS Centre, University of Pisa, Pisa, Tuscany, Italy
6
Alonso JR, Fernández A, Javidi B. Spatial perception in stereoscopic augmented reality based on multifocus sensing. Opt Express 2024; 32:5943-5955. [PMID: 38439309] [DOI: 10.1364/oe.510688]
Abstract
In many areas, ranging from medical imaging to visual entertainment, 3D information acquisition and display is a key task. In multifocus computational imaging, stacks of images of a 3D scene are acquired under different focus configurations and later combined by post-capture algorithms, based on an image formation model, to synthesize images of the scene from novel viewpoints. Stereoscopic augmented reality devices, through which it is possible to simultaneously view the three-dimensional real world along with an overlaid digital stereoscopic image pair, could benefit from the binocular content that multifocus computational imaging provides. The spatial perception of the displayed stereo pairs can be controlled by synthesizing the desired point of view of each image of the stereo pair along with its parallax setting. The proposed method has the potential to alleviate the accommodation-convergence conflict and make stereoscopic augmented reality devices less prone to causing visual fatigue.
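The parallax control described above rests on the standard similar-triangles relation between on-screen parallax and perceived depth. A sketch of that relation, with illustrative default values (the interocular distance and viewing distance are assumptions, not from the paper):

```python
def perceived_depth_mm(parallax_mm, ipd_mm=63.0, screen_mm=700.0):
    """Perceived depth of a fused stereo point, from similar triangles:
    z = ipd * d / (ipd - p). Zero parallax puts the point on the screen
    plane; positive (uncrossed) parallax pushes it behind the screen,
    negative (crossed) parallax pulls it in front."""
    if parallax_mm >= ipd_mm:
        raise ValueError("parallax at or beyond the IPD cannot be fused")
    return ipd_mm * screen_mm / (ipd_mm - parallax_mm)
```

With these defaults, zero parallax places the point exactly at the 700 mm screen distance, and varying the parallax moves it in depth, which is precisely the lever the stereo-pair synthesis exploits.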
7
Condino S, Cutolo F, Carbone M, Cercenelli L, Badiali G, Montemurro N, Ferrari V. Registration Sanity Check for AR-guided Surgical Interventions: Experience From Head and Face Surgery. IEEE J Transl Eng Health Med 2023; 12:258-267. [PMID: 38410181] [PMCID: PMC10896424] [DOI: 10.1109/jtehm.2023.3332088]
Abstract
Achieving and maintaining proper image registration accuracy is an open challenge in image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC) based on the visual inspection of virtual 3D models of landmarks. We analyze the sensitivity and specificity of the AR-RSC by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set to simulate different registration errors. This study analyzes the performance of the AR-RSC when varying (1) the virtual models selected for misalignment evaluation (e.g., the models of brackets, incisor teeth, and gingival margins in our experiment), (2) the type (translation/rotation) of registration error, and (3) the level of user experience with AR technologies. Results show that: 1) the sensitivity and specificity of the AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth); 2) some error components are more difficult to identify visually; and 3) the level of user experience does not affect the method. In conclusion, the proposed AR-RSC, tested also in the operating room, could represent an efficient method to monitor and optimize registration accuracy during the intervention, but special attention should be paid to the selection of the AR data chosen for the visual inspection of registration accuracy.
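The true positive and true negative rates reported above reduce to standard sensitivity and specificity over the raters' responses. A minimal sketch (the data layout is illustrative, not from the study):

```python
def sensitivity_specificity(flagged, misaligned):
    """Sensitivity and specificity of a visual registration check.
    flagged[i]: rater reported a registration error for image i.
    misaligned[i]: an error was actually injected into image i."""
    tp = sum(1 for f, m in zip(flagged, misaligned) if f and m)
    tn = sum(1 for f, m in zip(flagged, misaligned) if not f and not m)
    fp = sum(1 for f, m in zip(flagged, misaligned) if f and not m)
    fn = sum(1 for f, m in zip(flagged, misaligned) if not f and m)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity
```

Applied per landmark model (brackets, incisor teeth, gingival margins), this yields the per-model rates the abstract compares.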
Affiliation(s)
- Sara Condino
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Marina Carbone
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli
- EDIMES Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), 56127 Pisa, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
8
Sermarini J, Michlowitz RA, LaViola JJ, Walters LC, Azevedo R, Kider JT. Investigating the Impact of Augmented Reality and BIM on Retrofitting Training for Non-Experts. IEEE Trans Vis Comput Graph 2023; 29:4655-4665. [PMID: 37788209] [DOI: 10.1109/tvcg.2023.3320223]
Abstract
Augmented Reality (AR) tools have shown significant potential for on-site visualization of Building Information Modeling (BIM) data and models in support of construction evaluation, inspection, and guidance. Retrofitting existing buildings, however, remains a challenging task requiring more innovative solutions to successfully integrate AR and BIM. This study investigates the impact of AR+BIM technology on the retrofitting training process and assesses its potential for future on-site usage. We conducted a study with 64 non-expert participants, who were asked to perform a common retrofitting procedure, the installation of an electrical outlet, using either an AR+BIM system or a standard printed blueprint documentation set. Our findings indicate that AR+BIM reduced task time significantly and improved performance consistency across participants, while also decreasing the physical and cognitive demands of the training. This study provides a foundation for future retrofitting-construction research that extends the use of AR+BIM technology, thus facilitating more efficient retrofitting of existing buildings. A video presentation of this article and all supplemental materials are available at https://github.com/DesignLabUCF/SENSEable_RetrofittingTraining.
9
Ruan N, Shi F, Tian Y, Xing P, Zhang W, Qiao S. Design method of an ultra-thin two-dimensional geometrical waveguide near-eye display based on forward-ray-tracing and maximum FOV analysis. Opt Express 2023; 31:33799-33814. [PMID: 37859152] [DOI: 10.1364/oe.498011]
Abstract
A two-dimensional geometrical waveguide enables an ultra-thin augmented reality (AR) near-eye display (NED) with a wide field of view (FOV) and a large exit-pupil diameter (EPD). Conventional design methods can efficiently produce waveguides that meet given requirements but cannot fully exploit a waveguide's potential display performance. A forward-ray-tracing waveguide design method with maximum-FOV analysis is proposed, enabling two-dimensional geometrical waveguides to achieve their maximum FOV while maintaining minimum dimensions. The resulting stray-light-suppressed waveguide NED has a thickness of 1.7 mm, a FOV of 50.00°H × 29.92°V, and an eye-box of 12 mm × 12 mm at an eye relief of 18 mm.
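One hard constraint behind any maximum-FOV analysis for a geometrical waveguide is total internal reflection (TIR), which depends only on the refractive indices involved. A minimal sketch of the critical-angle bound (the index value in the example is illustrative, not from the paper):

```python
import math

def critical_angle_deg(n_substrate, n_outside=1.0):
    """Critical angle in degrees, measured from the surface normal.
    Internal rays hitting the surface at angles larger than this are
    totally internally reflected and stay guided; shallower rays escape."""
    if n_substrate <= n_outside:
        raise ValueError("TIR requires the substrate to be optically denser")
    return math.degrees(math.asin(n_outside / n_substrate))
```

For a hypothetical substrate with index 2.0 in air, the critical angle is 30°, so only rays propagating at internal angles above 30° from the normal remain guided, which in turn caps the angular range (FOV) the waveguide can carry.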
10
Wu Y, Pan C, Lu C, Zhang Y, Zhang L, Huang Z. Hybrid waveguide based augmented reality display system with extra large field of view and 2D exit pupil expansion. Opt Express 2023; 31:32799-32812. [PMID: 37859074] [DOI: 10.1364/oe.499177]
Abstract
For a waveguide display device, the field of view (FOV) is a key parameter for evaluating its optical performance, but it is difficult to enlarge. To address this issue, we propose a hybrid waveguide system composed of two projectors, two in-couplers, two half-mirror arrays, and an out-coupler. The two projectors generate the left and right parts of the output image separately, which significantly raises the upper limit of the FOV. Unlike conventional waveguide-based systems, we use half-mirror arrays instead of folding gratings to realize 2D exit pupil expansion; by doing so, the total internal reflection condition is always met during the pupil expansion process. To overcome the difficulty of designing a collimating optical system with a large FOV, we propose tilting the projection system. The hybrid waveguide system can realize a FOV of 88°(H) × 53°(V).
11
Heiliger C, Heiliger T, Deodati A, Winkler A, Grimm M, Kalim F, Esteban J, Mihatsch L, Ehrlich V Treuenstätt VH, Mohamed KA, Andrade D, Frank A, Solyanik O, Mandal S, Werner J, Eck U, Navab N, Karcz K. AR visualizations in laparoscopy: surgeon preferences and depth assessment of vascular anatomy. Minim Invasiv Ther 2023; 32:190-198. [PMID: 37293947] [DOI: 10.1080/13645706.2023.2219739]
Abstract
Introduction: This study compares five augmented reality (AR) vasculature visualization techniques in a mixed-reality laparoscopy simulator with 50 medical professionals and analyzes their impact on the surgeon. Material and methods: Each visualization technique's ability to convey depth was measured by the participants' accuracy in an objective depth-sorting task. Demographic data and subjective measures, such as the preference for each AR visualization technique and potential application areas, were collected with questionnaires. Results: Although differences in the objective measurements were observed across the visualization techniques, they were not statistically significant. In the subjective measures, however, 55% of the participants rated visualization technique II, 'Opaque with single-color Fresnel highlights', as their favorite. Participants felt that AR could be useful for various surgeries, especially complex ones (100%). Almost all participants agreed that AR could potentially improve surgical parameters such as patient safety (88%), complication rate (84%), and the identification of risk structures (96%). Conclusions: More studies are needed on the effect of different visualizations on task performance, as well as more sophisticated and effective visualization techniques for the operating room. With the findings of this study, we encourage the development of new study setups to advance surgical AR.
Affiliation(s)
- Christian Heiliger
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Thomas Heiliger
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Alessandra Deodati
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Alexander Winkler
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Computer Aided Medical Procedures & Augmented Reality (CAMP), Technical University of Munich (TUM), Munich, Germany
- Matthias Grimm
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Computer Aided Medical Procedures & Augmented Reality (CAMP), Technical University of Munich (TUM), Munich, Germany
- Maxer Endoscopy GmbH, Wurmlingen, Germany
- Javier Esteban
- Computer Aided Medical Procedures & Augmented Reality (CAMP), Technical University of Munich (TUM), Munich, Germany
- Lorenz Mihatsch
- Department of Anesthesiology and Intensive Care Medicine, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Viktor H Ehrlich V Treuenstätt
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Khaled Ahmed Mohamed
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Dorian Andrade
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Alexander Frank
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Olga Solyanik
- Department of Radiology, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Jens Werner
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
- Ulrich Eck
- Computer Aided Medical Procedures & Augmented Reality (CAMP), Technical University of Munich (TUM), Munich, Germany
- Nassir Navab
- Computer Aided Medical Procedures & Augmented Reality (CAMP), Technical University of Munich (TUM), Munich, Germany
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Konrad Karcz
- Department of General, Visceral, and Transplantation Surgery, Hospital of the LMU Munich, Ludwig-Maximilians-Universität (LMU), Munich, Germany
12
Affiliation(s)
- Guido A Wanner
- Spine Clinic & Traumatology, Private Hospital Bethanien, Swiss Medical Network, Zurich, Switzerland
- Sandro M Heining
- Department of Traumatology, University Hospital Zurich, Switzerland
- Vladislav Raykov
- Department of Orthopedics & Traumatology, Landeskrankenhaus Bludenz/Feldkirch, Austria
13
Alhumaidi WA, Alqurashi NN, Alnumani RD, Althagafi ES, Bajunaid FR, Alnefaie GO. Perceptions of Doctors in Saudi Arabia Toward Virtual Reality and Augmented Reality Applications in Healthcare. Cureus 2023; 15:e42648. [PMID: 37644952] [PMCID: PMC10461506] [DOI: 10.7759/cureus.42648]
Abstract
Background: Several studies have suggested that artificial intelligence (AI), including virtual reality (VR) and augmented reality (AR), may help improve visualization and diagnostic and therapeutic abilities and reduce medical and surgical errors. These technologies have been revolutionary in Saudi Arabia. We aimed to elucidate physicians' perceptions of these technologies. Methodology: We carried out a cross-sectional electronic questionnaire-based study in November 2021, targeting doctors of different medical and surgical specialties in the western region of Saudi Arabia. Results: In our study, 53.2% of the participants were 25-30 years old. Most participants were residents (53.6%) with less than five years of career experience. Only 32.3% had good familiarity with AR and VR technologies, yet 64.5% agreed that AR and VR technologies have practical applications in the medical field. Moreover, 35% agreed that their diagnostic and therapeutic ability is superior to the clinical experience of a human doctor, and 41.4% agreed they would always use AR and VR technologies for future medical decisions. Conclusion: Doctors are open to using AR and VR technologies in healthcare. Although most are unfamiliar with these technologies, most agree that they positively impact healthcare.
14
Benmahdjoub M, Thabit A, van Veelen MLC, Niessen WJ, Wolvius EB, Walsum TV. Evaluation of AR visualization approaches for catheter insertion into the ventricle cavity. IEEE Trans Vis Comput Graph 2023; 29:2434-2445. [PMID: 37027733] [DOI: 10.1109/tvcg.2023.3247042]
Abstract
Augmented reality (AR) has shown potential in computer-aided surgery. It allows for the visualization of hidden anatomical structures and assists in navigating and locating surgical instruments at the surgical site. Various modalities (devices and/or visualizations) have been used in the literature, but few studies have investigated the adequacy or superiority of one modality over another. For instance, the use of optical see-through (OST) head-mounted displays (HMDs) has not always been scientifically justified. Our goal is to compare various visualization modalities for catheter insertion in external ventricular drain and ventricular shunt procedures. We investigate two AR approaches: (1) 2D approaches, consisting of a smartphone and a 2D window visualized through an OST device (Microsoft HoloLens 2), and (2) 3D approaches, consisting of a fully aligned patient model and a model that is adjacent to the patient and rotationally aligned using an OST device. 32 participants joined this study. For each visualization approach, participants were asked to perform five insertions, after which they filled in NASA-TLX and SUS forms. Moreover, the position and orientation of the needle with respect to the planning were collected during the insertion task. The results show that participants achieved significantly better insertion performance under the 3D visualizations, and the NASA-TLX and SUS forms reflected participants' preference for these approaches over the 2D approaches.
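Deviation metrics of this kind (entry-point distance and 3D angle between planned and actual trajectories) are straightforward to compute once both poses sit in one registered coordinate frame. A minimal sketch; the function name and inputs are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def insertion_deviation(plan_entry, plan_dir, needle_tip, needle_dir):
    """Deviation of a tracked needle from a planned trajectory.

    plan_entry / needle_tip: 3D points in one registered coordinate frame.
    plan_dir / needle_dir: unit direction vectors of the planned and actual
    insertion axes. Returns (entry-point distance, 3D angle in degrees).
    """
    entry_dev = float(np.linalg.norm(np.asarray(needle_tip) - np.asarray(plan_entry)))
    cos_angle = np.clip(np.dot(plan_dir, needle_dir), -1.0, 1.0)
    angle_dev = float(np.degrees(np.arccos(cos_angle)))
    return entry_dev, angle_dev

# Example: needle enters 1 mm off target, tilted 10 degrees in the x-z plane.
theta = np.radians(10.0)
dist, angle = insertion_deviation(
    plan_entry=np.array([0.0, 0.0, 0.0]),
    plan_dir=np.array([0.0, 0.0, 1.0]),
    needle_tip=np.array([1.0, 0.0, 0.0]),
    needle_dir=np.array([np.sin(theta), 0.0, np.cos(theta)]),
)
# dist ≈ 1.0 (mm), angle ≈ 10.0 (degrees)
```

Exit-point deviation, as reported for the zygomatic-implant study above, follows the same pattern with the planned exit point in place of the entry point.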
15
Matinfar S, Salehi M, Suter D, Seibold M, Dehghani S, Navab N, Wanivenhaus F, Fürnstahl P, Farshad M, Navab N. Sonification as a reliable alternative to conventional visual surgical navigation. Sci Rep 2023; 13:5930. [PMID: 37045878 PMCID: PMC10097653 DOI: 10.1038/s41598-023-32778-z] [Received: 06/15/2022] [Accepted: 04/02/2023] [Indexed: 04/14/2023]
Abstract
Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and their integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel four-DOF sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method comprises a novel four-DOF sonification solution for alignment tasks in four degrees of freedom based on frequency modulation synthesis. We compared the resulting accuracy and execution time of the proposed sonification method with visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while decreasing the surgeon's need to focus on visual navigation displays instead of the natural focus on surgical tools and targeted anatomy during task execution.
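The frequency-modulation synthesis at the heart of such a sonification can be illustrated with a toy sketch. The study's actual four-DOF parameter mapping is not reproduced here; the carrier and modulator frequencies and the error-to-modulation-index scaling below are purely illustrative assumptions:

```python
import numpy as np

def fm_alignment_tone(error, duration=0.5, sr=44100,
                      f_carrier=440.0, f_mod=110.0, max_index=8.0):
    """Render an FM tone whose modulation index grows with alignment error.

    error: normalized alignment error in [0, 1] (0 = on target).
    Larger errors yield a rougher, sideband-rich tone, an audible cue of
    misalignment; all parameter values here are illustrative only.
    """
    t = np.arange(int(duration * sr)) / sr
    index = max_index * float(np.clip(error, 0.0, 1.0))
    # Classic FM synthesis: the carrier phase is modulated by a sine.
    return np.sin(2 * np.pi * f_carrier * t + index * np.sin(2 * np.pi * f_mod * t))

tone_on_target = fm_alignment_tone(0.0)   # pure 440 Hz sine
tone_far_off = fm_alignment_tone(1.0)     # heavily modulated tone
```

In a full system, one such mapping would be driven per degree of freedom by the navigation system's live tracking data.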
Affiliation(s)
- Sasan Matinfar
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany.
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675, Munich, Germany.
- Mehrdad Salehi
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Daniel Suter
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Matthias Seibold
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland
- Shervin Dehghani
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675, Munich, Germany
- Navid Navab
- Topological Media Lab, Concordia University, Montreal, H3G 2W1, Canada
- Florian Wanivenhaus
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland
- Mazda Farshad
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
16
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637 DOI: 10.1016/j.media.2023.102757] [Received: 08/22/2022] [Revised: 01/05/2023] [Accepted: 01/18/2023] [Indexed: 01/22/2023]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016, until the year of 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, nevertheless only few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria.
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
17
Heller J, Mahr D, de Ruyter K, Schaap E, Hilken T, Keeling DI, Chylinski M, Flavián C, Jung T, Rauschnabel PA. An interdisciplinary co-authorship networking perspective on AR and human behavior: Taking stock and moving ahead. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2023.107697] [Indexed: 02/10/2023]
18
Jiang J, Zhang J, Sun J, Wu D, Xu S. User's image perception improved strategy and application of augmented reality systems in smart medical care: A review. Int J Med Robot 2023; 19:e2497. [PMID: 36629798 DOI: 10.1002/rcs.2497] [Received: 10/13/2022] [Revised: 12/26/2022] [Accepted: 01/06/2023] [Indexed: 01/12/2023]
Abstract
BACKGROUND Augmented reality (AR) is a new human-computer interaction technology that combines virtual reality, computer vision, and computer networks. With the rapid advancement of the medical field towards intelligence and data visualisation, AR systems are becoming increasingly popular in the medical field because they can provide doctors with clear enough medical images and accurate image navigation in practical applications. However, it has been discovered that different display types of AR systems have different effects on doctors' perception of the image after virtual-real fusion during the actual medical application. If doctors cannot correctly perceive the image, they may be unable to correctly match the virtual information with the real world, which will have a significant impact on their ability to recognise complex structures. METHODS This paper uses Citespace, a literature analysis tool, to visualise and analyse the research hotspots when AR systems are used in the medical field. RESULTS A visual analysis of the 1163 articles retrieved from the Web of Science Core Collection database reveals that display technology and visualisation technology are the key research directions of AR systems at the moment. CONCLUSION This paper categorises AR systems based on their display principles, reviews current image perception optimisation schemes for various types of systems, and analyses and compares different display types of AR systems based on their practical applications in the field of smart medical care so that doctors can select the appropriate display types based on different application scenarios. Finally, the future development direction of AR display technology is anticipated in order for AR technology to be more effectively applied in the field of smart medical care. 
The advancement of display technology for AR systems is critical for their use in the medical field, and the advantages and disadvantages of various display types should be considered in different application scenarios to select the best AR system.
Affiliation(s)
- Jingang Jiang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Robotics & Its Engineering Research Center, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jiawei Zhang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jianpeng Sun
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Dianhao Wu
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Shuainan Xu
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
19
Navab N, Martin-Gomez A, Seibold M, Sommersperger M, Song T, Winkler A, Yu K, Eck U. Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process. J Imaging 2022; 9:4. [PMID: 36662102 PMCID: PMC9866223 DOI: 10.3390/jimaging9010004] [Received: 11/10/2022] [Revised: 12/15/2022] [Accepted: 12/19/2022] [Indexed: 12/28/2022]
Abstract
Three decades after the first set of work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures and deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance throughout the last decades to identify the corresponding basic components. In this regard, the paper is not discussing all past or existing solutions but only aims at defining the principle components and discussing the particular domain modeling for MAR and its design-development-validation process, and providing exemplary cases through the past in-house developments of such solutions.
Affiliation(s)
- Nassir Navab
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alejandro Martin-Gomez
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Matthias Seibold
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, CH-8008 Zurich, Switzerland
- Michael Sommersperger
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Tianyu Song
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alexander Winkler
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University Hospital, DE-80336 Munich, Germany
- Kevin Yu
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- medPhoton GmbH, AT-5020 Salzburg, Austria
- Ulrich Eck
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
20
Phantom study on surgical performance in augmented reality laparoscopy. Int J Comput Assist Radiol Surg 2022. [PMID: 36547767 PMCID: PMC10363058 DOI: 10.1007/s11548-022-02809-7] [Received: 07/29/2022] [Accepted: 12/06/2022] [Indexed: 12/24/2022]
Abstract
Purpose
Only a few studies have evaluated Augmented Reality (AR) in in vivo simulations compared to traditional laparoscopy; further research is especially needed regarding the most effective AR visualization technique. This pilot study aims to determine, under controlled conditions on a 3D-printed phantom, whether an AR laparoscope improves surgical outcomes over conventional laparoscopy without augmentation.
Methods
We selected six surgical residents at a similar level of training and had them perform a laparoscopic task. The participants repeated the experiment three times, using different 3D phantoms and visualizations: Floating AR, Occlusion AR, and without any AR visualization (Control). Surgical performance was determined using objective measurements. Subjective measures, such as task load and potential application areas, were collected with questionnaires.
Results
Differences in operative time, total touching time, and SurgTLX scores showed no statistical significance (p > 0.05). However, when assessing the invasiveness of the simulated intervention, the comparison revealed a statistically significant difference (p = 0.009). Participants felt AR could be useful for various surgeries, especially for liver, sigmoid, and pancreatic resections (100%). Almost all participants agreed that AR could potentially lead to improved surgical parameters, such as operative time (83%), complication rate (83%), and identifying risk structures (83%).
Conclusion
According to our results, AR may have great potential in visceral surgery and, based on the objective measures of the study, may improve surgeons' performance in terms of an atraumatic approach. Without AR, participants consistently took more time to complete the task, had more contact with the vascular tree, were significantly more invasive, and scored higher on the SurgTLX survey than with AR.
21
A novel motionless calibration method for augmented reality surgery navigation system based on optical tracker. Heliyon 2022; 8:e12115. [PMID: 36590529 PMCID: PMC9801086 DOI: 10.1016/j.heliyon.2022.e12115] [Received: 07/05/2022] [Revised: 09/25/2022] [Accepted: 11/28/2022] [Indexed: 12/13/2022]
Abstract
Augmented reality (AR) surgery navigation systems display the preoperatively planned virtual model at the correct position in the real surgical scene to assist the operation. Accurate calibration of the mapping between the virtual coordinate system and the real world is key to the virtual-real fusion effect. Earlier calibration methods required the surgeon to carry out complex manual procedures before use. This paper introduces a novel motionless virtual-real calibration method. The method only requires taking a mixed reality image containing both virtual and real marker balls using the built-in forward camera of the AR glasses. The mapping between the virtual and real spaces is calculated by using the camera coordinate system as a transformation medium. The composition and working process of the AR navigation system are introduced, and the mathematical principle of the calibration is derived. The feasibility of the proposed calibration scheme was verified in a validation experiment; the average registration accuracy is around 5.80 mm, on the same level as previously reported methods. The proposed method is convenient and rapid to implement, and the calibration accuracy does not depend on user experience. Furthermore, it can potentially realize real-time updates of the registration transformation matrix, which can improve the AR fusion accuracy when the AR glasses move. This motionless calibration method has great potential to be applied in future clinical navigation research.
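The idea of using the camera coordinate system as a transformation medium can be sketched as two rigid transforms chained together: virtual-to-camera (from the rendered marker balls) composed with camera-to-real (from the physical ones). A minimal sketch using the Kabsch algorithm on corresponding marker-center points, with synthetic poses standing in for detected markers; this is an illustration, not the paper's actual derivation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch): returns R, t with dst_i ≈ R @ src_i + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def compose(R_ab, t_ab, R_bc, t_bc):
    """Chain frames a -> b -> c: x_c = R_bc @ (R_ab @ x_a + t_ab) + t_bc."""
    return R_bc @ R_ab, R_bc @ t_ab + t_bc

# Synthetic marker-ball centers in the virtual frame (mm).
virt = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)

# Ground-truth virtual -> camera pose (stands in for marker detection).
a = np.radians(30.0)
R_vc = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
t_vc = np.array([10.0, -5.0, 20.0])
cam = virt @ R_vc.T + t_vc

R_est, t_est = rigid_transform(virt, cam)           # virtual -> camera

# Toy camera -> real (tracker) pose; compose to get virtual -> real.
R_cr, t_cr = np.eye(3), np.array([0.0, 0.0, 500.0])
R_vr, t_vr = compose(R_est, t_est, R_cr, t_cr)
```

Because both transforms are estimated from a single mixed reality image, no user motion is needed, which is the convenience the abstract emphasizes.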
22
Doughty M, Ghugre NR, Wright GA. Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery. J Imaging 2022; 8:203. [PMID: 35877647 PMCID: PMC9318659 DOI: 10.3390/jimaging8070203] [Received: 06/24/2022] [Revised: 07/15/2022] [Accepted: 07/18/2022] [Indexed: 02/01/2023]
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy that described the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial surgeries (n=8). For preoperative input data, computed tomography (CT) (n=34), and surface rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly directly superimposed with the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy in the order of 2–5 mm has been demonstrated in phantom models, several human factors and technical challenges—perception, ease of use, context, interaction, and occlusion—remain to be addressed prior to widespread adoption of OST-HMD led surgical navigation.
Affiliation(s)
- Mitchell Doughty
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Nilesh R. Ghugre
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Graham A. Wright
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
23
Wong KC, Sun YE, Kumta SM. Review and Future/Potential Application of Mixed Reality Technology in Orthopaedic Oncology. Orthop Res Rev 2022; 14:169-186. [PMID: 35601186 PMCID: PMC9121991 DOI: 10.2147/orr.s360933] [Received: 02/03/2022] [Accepted: 04/26/2022] [Indexed: 11/23/2022]
Abstract
In orthopaedic oncology, surgical planning and intraoperative execution errors may result in positive tumor resection margins that increase the risk of local recurrence and adversely affect patients’ survival. Computer navigation and 3D-printed resection guides have been reported to address surgical inaccuracy by replicating the surgical plans in complex cases. However, limitations include surgeons’ attention shift from the operative field to view the navigation monitor and expensive navigation facilities in computer navigation surgery. Practical concerns are lacking real-time visual feedback of preoperative images and the lead-time in manufacturing 3D-printed objects. Mixed Reality (MR) is a technology of merging real and virtual worlds to produce new environments with enhanced visualizations, where physical and digital objects coexist and allow users to interact with both in real-time. The unique MR features of enhanced medical images visualization and interaction with holograms allow surgeons real-time and on-demand medical information and remote assistance in their immediate working environment. Early application of MR technology has been reported in surgical procedures. Its role is unclear in orthopaedic oncology. This review aims to provide orthopaedic tumor surgeons with up-to-date knowledge of the emerging MR technology. The paper presents its essential features and clinical workflow, reviews the current literature and potential clinical applications, and discusses the limitations and future development in orthopaedic oncology. The emerging MR technology adds a new dimension to digital assistive tools with a more accessible and less costly alternative in orthopaedic oncology. The MR head-mounted display and hand-free control may achieve clinical point-of-care inside or outside the operating room and improve service efficiency and patient safety. 
However, the lack of accurate hologram-to-patient matching, of an MR platform dedicated to orthopaedic oncology, and of clinical results may hinder its wide adoption. Industry-academic partnerships are essential to advance the technology, with its clinical role to be determined through future clinical studies.
Video abstract: https://youtu.be/t4hl_Anh_kM
Affiliation(s)
- Kwok Chuen Wong
- Department of Orthopaedics and Traumatology, Prince of Wales Hospital, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, People’s Republic of China
- Correspondence: Kwok Chuen Wong, Department of Orthopaedics and Traumatology, Prince of Wales Hospital, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, People’s Republic of China
- Yan Edgar Sun
- New Territories, Hong Kong Special Administrative Region, People’s Republic of China
- Shekhar Madhukar Kumta
- Department of Orthopaedics and Traumatology, Prince of Wales Hospital, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, People’s Republic of China
24
Birlo M, Edwards PJE, Clarkson M, Stoyanov D. Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review. Med Image Anal 2022; 77:102361. [PMID: 35168103 PMCID: PMC10466024 DOI: 10.1016/j.media.2022.102361] [Received: 06/23/2021] [Revised: 11/17/2021] [Accepted: 01/10/2022] [Indexed: 12/11/2022]
Abstract
This article presents a systematic review of optical see-through head mounted display (OST-HMD) usage in augmented reality (AR) surgery applications from 2013 to 2020. Articles were categorised by: OST-HMD device, surgical speciality, surgical application context, visualisation content, experimental design and evaluation, accuracy and human factors of human-computer interaction. 91 articles fulfilled all inclusion criteria. Some clear trends emerge. The Microsoft HoloLens increasingly dominates the field, with orthopaedic surgery being the most popular application (28.6%). By far the most common surgical context is surgical guidance (n=58) and segmented preoperative models dominate visualisation (n=40). Experiments mainly involve phantoms (n=43) or system setup (n=21), with patient case studies ranking third (n=19), reflecting the comparative infancy of the field. Experiments cover issues from registration to perception with very different accuracy results. Human factors emerge as significant to OST-HMD utility. Some factors are addressed by the systems proposed, such as attention shift away from the surgical site and mental mapping of 2D images to 3D patient anatomy. Other persistent human factors remain or are caused by OST-HMD solutions, including ease of use, comfort and spatial perception issues. The significant upward trend in published articles is clear, but such devices are not yet established in the operating room and clinical studies showing benefit are lacking. A focused effort addressing technical registration and perceptual factors in the lab coupled with design that incorporates human factors considerations to solve clear clinical problems should ensure that the significant current research efforts will succeed.
Affiliation(s)
- Manuel Birlo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK.
- P J Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Matthew Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
25
Cheng D, Hou Q, Li Y, Zhang T, Li D, Huang Y, Liu Y, Wang Q, Hou W, Yang T, Feng Z, Wang Y. Optical design and pupil swim analysis of a compact, large EPD and immersive VR head mounted display. Optics Express 2022; 30:6584-6602. [PMID: 35299440 DOI: 10.1364/oe.452747] [Received: 01/03/2022] [Accepted: 02/01/2022] [Indexed: 06/14/2023]
Abstract
Virtual reality head-mounted displays (VR-HMDs) are crucial to the Metaverse, which appears to be one of the most popular terms to have been adopted across the internet recently. They provide the basic infrastructure and entrance to cater for the next evolution of social interaction, and they are already widely used in many fields. VR-HMDs with traditional aspherical or Fresnel optics are not suitable for long-term use because of their image quality, system size, and weight. In this study, we designed and developed a large exit pupil diameter (EPD), compact, and lightweight VR-HMD with catadioptric optics. The mathematical formula for designing the catadioptric VR optics is derived, and the reason why this kind of immersive VR optics can achieve a compact size and large EPD simultaneously is explained. Various catadioptric forms are systematically proposed and compared. The design achieves a diagonal field of view (FOV) of 96° at -1 diopter, with an EPD of 10 mm at 11 mm eye relief (ERF). The overall length (OAL) of the system is less than 20 mm. A prototype of a compact catadioptric VR-HMD system was successfully developed.
26
Davidson TJ, Sanderson PM. A review of the effects of head-worn displays on teamwork for emergency response. Ergonomics 2022; 65:188-218. [PMID: 34445922 DOI: 10.1080/00140139.2021.1968041] [Received: 03/28/2021] [Accepted: 08/09/2021] [Indexed: 06/13/2023]
Abstract
Head-Worn Displays (HWDs) can potentially support the mobile work of emergency responders, but it remains unclear whether teamwork is affected when emergency responders use HWDs. We reviewed studies that examined HWDs in emergency response contexts to evaluate the impact of HWDs on team performance and on the team processes of situation awareness, communication, and coordination. Sixteen studies were identified through manual and systematic literature searches. HWDs appeared to improve the quality of team performance, but they increased time to perform under some conditions; effects on team processes were mixed. We identify five challenges to explain the mixed results. We discuss four theoretical perspectives that might address the challenges and guide research needs: joint cognitive systems, distributed cognition, common ground, and dynamical systems. Researchers and designers should use process-based measures and apply greater theoretical guidance to uncover mechanisms by which HWDs shape team processes, and to understand the impact on team performance. Practitioner Summary: This review examines the effects of head-worn displays on teamwork performance and team processes for emergency response. Results are mixed, but study diversity challenges the search for underlying mechanisms. Guidance from perspectives such as joint cognitive systems, distributed cognition, common ground, and dynamical systems may advance knowledge in the area. Abbreviations: HWD: head-worn display; RC: remote collaboration; DD: data display; ARC: augmented remote collaboration; ACC: augmented collocated collaboration; SA: situation awareness; TSA: team situation awareness; CPR: cardiopulmonary resuscitation; SAGAT: situation awareness global assessment technique; SART: situation awareness rating technique.
Affiliation(s)
- Thomas J Davidson
- School of Psychology, The University of Queensland, Brisbane, Australia
27
Doughty M, Ghugre NR. Head-Mounted Display-Based Augmented Reality for Image-Guided Media Delivery to the Heart: A Preliminary Investigation of Perceptual Accuracy. J Imaging 2022; 8:jimaging8020033. [PMID: 35200735 PMCID: PMC8878166 DOI: 10.3390/jimaging8020033] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Revised: 01/25/2022] [Accepted: 01/28/2022] [Indexed: 01/14/2023] Open
Abstract
By aligning virtual augmentations with real objects, optical see-through head-mounted display (OST-HMD)-based augmented reality (AR) can enhance user-task performance. Our goal was to compare the perceptual accuracy of several visualization paradigms involving an adjacent monitor, or the Microsoft HoloLens 2 OST-HMD, in a targeted task, as well as to assess the feasibility of displaying imaging-derived virtual models aligned with the injured porcine heart. With 10 participants, we performed a user study to quantify and compare the accuracy, speed, and subjective workload of each paradigm in the completion of a point-and-trace task that simulated surgical targeting. To demonstrate the clinical potential of our system, we assessed its use for the visualization of magnetic resonance imaging (MRI)-based anatomical models, aligned with the surgically exposed heart in a motion-arrested open-chest porcine model. Using the HoloLens 2 with alignment of the ground truth target and our display calibration method, users were able to achieve submillimeter accuracy (0.98 mm) and required 1.42 min for calibration in the point-and-trace task. In the porcine study, we observed good spatial agreement between the MRI-models and target surgical site. The use of an OST-HMD led to improved perceptual accuracy and task-completion times in a simulated targeting task.
Affiliation(s)
- Mitchell Doughty
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada;
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Correspondence:
- Nilesh R. Ghugre
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada;
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
28
Augmented reality visualization in brain lesions: a prospective randomized controlled evaluation of its potential and current limitations in navigated microneurosurgery. Acta Neurochir (Wien) 2022; 164:3-14. [PMID: 34904183 PMCID: PMC8761141 DOI: 10.1007/s00701-021-05045-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Accepted: 09/01/2021] [Indexed: 11/16/2022]
Abstract
Background Augmented reality (AR) has the potential to support complex neurosurgical interventions by seamlessly integrating visual information. This study examines intraoperative visualization parameters and the clinical impact of AR in brain tumor surgery. Methods Fifty-five intracranial lesions, operated on either with an AR-navigated microscope (n = 39) or conventional neuronavigation (n = 16) after randomization, were included prospectively. Surgical resection time, duration/type/mode of AR, displayed objects (n, type), pointer-based navigation checks (n), usability of control, quality indicators, and overall surgical usefulness of AR were assessed. Results The AR display was used during 44.4% of resection time. The predominant AR type was navigation view (75.7%), followed by target volumes (20.1%). The predominant AR mode was picture-in-picture (PiP) (72.5%), followed by overlay display (23.3%). In 43.6% of cases, vision of important anatomical structures was partially or entirely blocked by AR information. A total of 7.7% of cases used MRI navigation only, 30.8% used one, 23.1% used two, and 38.5% used three or more object segmentations in AR navigation. A total of 66.7% of surgeons found AR visualization helpful in the individual surgical case. AR depth information and accuracy were rated acceptable (median 3.0 vs. median 5.0 for conventional neuronavigation). The mean utilization of the navigation pointer was 2.6 ×/resection hour (AR) vs. 9.7 ×/resection hour (neuronavigation); navigation effort was significantly reduced with AR (P < 0.001). Conclusions The main benefit of HUD-based AR visualization in brain tumor surgery is the integrated continuous display allowing for pointer-less navigation. Navigation view (PiP) provides the highest usability while blocking the operative field less frequently. Visualization quality will benefit from improvements in registration accuracy and depth impression. German Clinical Trials Register number: DRKS00016955.
Supplementary Information The online version contains supplementary material available at 10.1007/s00701-021-05045-1.
29
Li T, Li C, Zhang X, Liang W, Chen Y, Ye Y, Lin H. Augmented Reality in Ophthalmology: Applications and Challenges. Front Med (Lausanne) 2021; 8:733241. [PMID: 34957138 PMCID: PMC8703032 DOI: 10.3389/fmed.2021.733241] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 11/19/2021] [Indexed: 12/16/2022] Open
Abstract
Augmented reality (AR) has developed rapidly and has been implemented in many fields such as medicine, maintenance, and cultural heritage. Unlike other specialties, ophthalmology is closely connected with AR, since most AR systems are based on vision systems. Here we summarize the applications and challenges of AR in ophthalmology and provide insights for further research. First, we illustrate the structure of a standard AR system and present the essential hardware. Second, we systematically introduce applications of AR in ophthalmology, including therapy, education, and clinical assistance. In conclusion, there is still substantial room for development, which will require further research effort; applications in diagnosis and protection might be worth exploring. Although hardware obstacles restrict the development of AR in ophthalmology at present, AR will realize its potential and play an important role in ophthalmology in the future with rapidly developing technology and more in-depth research.
Affiliation(s)
- Tongkeng Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Chenghao Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wenting Liang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Yongxin Chen
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Yunpeng Ye
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
30
In Situ Visualization for 3D Ultrasound-Guided Interventions with Augmented Reality Headset. Bioengineering (Basel) 2021; 8:bioengineering8100131. [PMID: 34677204 PMCID: PMC8533537 DOI: 10.3390/bioengineering8100131] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 09/16/2021] [Accepted: 09/21/2021] [Indexed: 12/03/2022] Open
Abstract
Augmented Reality (AR) headsets have become the most ergonomic and efficient visualization devices to support complex manual tasks performed under direct vision. Their ability to provide hands-free interaction with the augmented scene makes them perfect for manual procedures such as surgery. This study demonstrates the reliability of an AR head-mounted display (HMD), conceived for surgical guidance, in navigating in-depth high-precision manual tasks guided by a 3D ultrasound imaging system. The integration between the AR visualization system and the ultrasound imaging system provides the surgeon with real-time intra-operative information on unexposed soft tissues that are spatially registered with the surrounding anatomic structures. The efficacy of the AR guiding system was quantitatively assessed with an in vitro study simulating a biopsy intervention aimed at determining the level of accuracy achievable. In the experiments, 10 subjects were asked to perform the biopsy on four spherical lesions of decreasing sizes (10, 7, 5, and 3 mm). The experimental results showed that 80% of the subjects were able to successfully perform the biopsy on the 5 mm lesion, with a 2.5 mm system accuracy. The results confirmed that the proposed integrated system can be used for navigation during in-depth high-precision manual tasks.
31
Usability of Graphical Visualizations on a Tool-Mounted Interface for Spine Surgery. J Imaging 2021; 7:jimaging7080159. [PMID: 34460795 PMCID: PMC8404910 DOI: 10.3390/jimaging7080159] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/29/2021] [Accepted: 08/17/2021] [Indexed: 12/01/2022] Open
Abstract
Screw placement in the correct angular trajectory is one of the most intricate tasks during spinal fusion surgery. Due to the crucial role of pedicle screw placement for the outcome of the operation, spinal navigation has been introduced into the clinical routine. Despite its positive effects on the precision and safety of the surgical procedure, local separation of the navigation information and the surgical site, combined with intricate visualizations, limit the benefits of the navigation systems. Instead of a tech-driven design, a focus on usability is required in new research approaches to enable advanced and effective visualizations. This work presents a new tool-mounted interface (TMI) for pedicle screw placement. By fixing a TMI onto the surgical instrument, physical de-coupling of the anatomical target and navigation information is resolved. A total of 18 surgeons participated in a usability study comparing the TMI to the state-of-the-art visualization on an external screen. With the usage of the TMI, significant improvements in system usability (Kruskal–Wallis test p < 0.05) were achieved. A significant reduction in mental demand and overall cognitive load, measured using a NASA-TLX (p < 0.05), were observed. Moreover, a general improvement in performance was shown by means of the surgical task time (one-way ANOVA p < 0.001).
32
Abdel Al S, Chaar MKA, Mustafa A, Al-Hussaini M, Barakat F, Asha W. Innovative Surgical Planning in Resecting Soft Tissue Sarcoma of the Foot Using Augmented Reality With a Smartphone. J Foot Ankle Surg 2021; 59:1092-1097. [PMID: 32505724 DOI: 10.1053/j.jfas.2020.03.011] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 03/05/2020] [Accepted: 03/18/2020] [Indexed: 02/03/2023]
Abstract
Augmented or hybrid reality is a display technology that combines the real world with the virtual world; it permits digital images of preoperative planning information to be combined with the surgeon's view of the real world. Augmented reality (AR) can extend the surgeon's intraoperative vision by providing virtual transparency of the real patient and has been applied to a wide spectrum of orthopedic procedures, such as tumor resection, fracture fixation, arthroscopy, and component alignment in total joint arthroplasty. We present the case of a male patient with pain in the medial aspect of his left foot after an incomplete mass excision performed elsewhere; the mass proved to be a synovial sarcoma. Because the mass was small, impalpable, and deeply positioned beneath both the plantar and the medial plantar aponeuroses, it was impossible to plan the resection preoperatively. We opted to use AR in the form of an application using the camera of a smartphone, and we were able to excise the tumor with negative surgical margins. At 12-month follow-up, the patient is in complete remission and has optimal mobility and functionality of his foot. In conclusion, AR holds great potential for use in the future of orthopedic surgical oncology. We emphasize using it via a handheld device, which we found to be optimal for planning resection of a small and relatively fixed tumor. Based on our literature review, this is the first case describing surgical planning for resecting an impalpable synovial sarcoma of the foot using AR technology.
Affiliation(s)
- Samer Abdel Al
- Consultant, Department of Orthopedic Oncology, King Hussein Cancer Center, Amman, Jordan.
- Ahmad Mustafa
- Resident, Department of Surgery, King Hussein Cancer Center, Amman, Jordan
- Maysa Al-Hussaini
- Consultant, Department of Pathology and Laboratory Medicine, King Hussein Cancer Center, Amman, Jordan
- Fareed Barakat
- Consultant, Department of Pathology and Laboratory Medicine, King Hussein Cancer Center, Amman, Jordan
- Wafa Asha
- Specialist, Department of Radiation Oncology, King Hussein Cancer Center, Amman, Jordan
33
Iqbal H, Tatti F, Rodriguez Y Baena F. Augmented reality in robotic assisted orthopaedic surgery: A pilot study. J Biomed Inform 2021; 120:103841. [PMID: 34146717 DOI: 10.1016/j.jbi.2021.103841] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Revised: 06/11/2021] [Accepted: 06/14/2021] [Indexed: 01/18/2023]
Abstract
BACKGROUND The research and development of augmented reality (AR) technologies in surgical applications has seen an evolution of the traditional user interfaces (UIs) utilised by clinicians when conducting robot-assisted orthopaedic surgeries. The typical UI for such systems relies on surgeons managing 3D medical imaging data in the 2D space of a touchscreen monitor located away from the operating site. Conversely, AR can provide a composite view overlaying the real surgical scene with co-located virtual holographic representations of medical data, leading to a more immersive and intuitive operator experience. MATERIALS AND METHODS This work explores the integration of AR within an orthopaedic setting by capturing and replicating the UI of an existing surgical robot within an AR head-mounted display worn by the clinician. The resulting mixed-reality workflow enabled users to simultaneously view the operating site and real-time holographic operating informatics when carrying out a robot-assisted patellofemoral arthroplasty (PFA). Ten surgeons were recruited to test the impact of the AR system on procedure completion time and operating surface roughness. RESULTS AND DISCUSSION The integration of AR did not appear to require subjects to significantly alter their surgical techniques, as demonstrated by non-significant changes to the study's clinical metrics: a statistically insignificant mean increase in operating time (+0.778 s, p = 0.488) and a statistically insignificant change in mean surface roughness (p = 0.274). Additionally, a post-operative survey indicated a positive consensus on the usability of the AR system without incurring noticeable physical distress such as eyestrain or fatigue.
CONCLUSIONS Overall, these study results demonstrated a successful integration of AR technologies within the framework of an existing robot-assisted surgical platform with no significant negative effects in two quantitative metrics of surgical performance, and a positive outcome relating to user-centric and ergonomic evaluation criteria.
Affiliation(s)
- Hisham Iqbal
- Mechatronics in Medicine Laboratory, Imperial College London, London, UK.
- Fabio Tatti
- Mechatronics in Medicine Laboratory, Imperial College London, London, UK
34
Abstract
BACKGROUND During a deep inferior epigastric perforator (DIEP) flap harvest, the identification and localization of the epigastric arteries and their perforators are crucial. Holographic augmented reality is an innovative technique that can be used to visualize this patient-specific anatomy, extracted from a computed tomographic scan, directly on the patient. This study describes an innovative workflow to achieve this. METHODS A software application for the Microsoft HoloLens was developed to visualize the anatomy as a hologram. By using abdominal nevi as natural landmarks, the anatomy hologram is registered to the patient. To ensure that the anatomy hologram remains correctly positioned when the patient or the user moves, real-time patient tracking is obtained with a quick response marker attached to the patient. RESULTS Holographic augmented reality can be used to visualize the epigastric arteries and their perforators in preparation for a deep inferior epigastric perforator flap harvest. CONCLUSIONS Potentially, this workflow can be used to visualize the vessels intraoperatively. Furthermore, this workflow is intuitive to use and could be applied for other flaps or other types of surgery.
35
Gsaxner C, Pepe A, Li J, Ibrahimpasic U, Wallner J, Schmalstieg D, Egger J. Augmented Reality for Head and Neck Carcinoma Imaging: Description and Feasibility of an Instant Calibration, Markerless Approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105854. [PMID: 33261944 DOI: 10.1016/j.cmpb.2020.105854] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 11/16/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Augmented reality (AR) can help to overcome current limitations in computer assisted head and neck surgery by granting "X-ray vision" to physicians. Still, the acceptance of AR in clinical applications is limited by technical and clinical challenges. We aim to demonstrate the benefit of a marker-free, instant calibration AR system for head and neck cancer imaging, which we hypothesize to be acceptable and practical for clinical use. METHODS We implemented a novel AR system for visualization of medical image data registered with the head or face of the patient prior to intervention. Our system allows the localization of head and neck carcinoma in relation to the outer anatomy. Our system does not require markers or stationary infrastructure, provides instant calibration and allows 2D and 3D multi-modal visualization for head and neck surgery planning via an AR head-mounted display. We evaluated our system in a pre-clinical user study with eleven medical experts. RESULTS Medical experts rated our application with a system usability scale score of 74.8 ± 15.9, which signifies above average, good usability and clinical acceptance. An average of 12.7 ± 6.6 minutes of training time was needed by physicians, before they were able to navigate the application without assistance. CONCLUSIONS Our AR system is characterized by a slim and easy setup, short training time and high usability and acceptance. Therefore, it presents a promising, novel tool for visualizing head and neck cancer imaging and pre-surgical localization of target structures.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria.
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Una Ibrahimpasic
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jürgen Wallner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria; Department of Cranio-Maxillofacial Surgery, AZ Monica Hospital Antwerp and Antwerp University Hospital, Antwerp, Belgium
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
36
Meng FH, Zhu ZH, Lei ZH, Zhang XH, Shao L, Zhang HZ, Zhang T. Feasibility of the application of mixed reality in mandible reconstruction with fibula flap: A cadaveric specimen study. JOURNAL OF STOMATOLOGY, ORAL AND MAXILLOFACIAL SURGERY 2021; 122:e45-e49. [PMID: 33434746 DOI: 10.1016/j.jormas.2021.01.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 12/02/2020] [Accepted: 01/04/2021] [Indexed: 11/16/2022]
Abstract
BACKGROUND In recent years, a new technology, mixed reality (MR), has emerged that overcomes a key limitation of augmented reality (AR): the inability to interact with holograms. This study aimed to investigate the feasibility of applying MR in mandible reconstruction with a fibula flap. METHODS Computed tomography (CT) examination was performed for one cadaveric mandible and ten fibula bones. Using the professional software Proplan CMF 3.0 (Materialise, Leuven, Belgium), we created a defective mandibular model and simulated the reconstruction design with these 10 fibula bones. The surgical plans were transferred to the HoloLens, which we used to guide the osteotomy and shaping of the fibular bone. After fixing the fibular segments using the Ti template, all segments underwent CT examination. Preoperative plans and postoperative results were compared in terms of the location of the fibular osteotomies, the angular deviation of the fibular segments, and the intergonial angle distances. RESULTS The mean location deviation of the fibular osteotomies, angular deviation of the fibular segments, and intergonial angle distance were 2.11 ± 1.31 mm, 2.85° ± 1.97°, and 7.24 ± 3.42 mm, respectively. CONCLUSION The experimental results revealed that slight deviations remained in the accuracy of the fibular osteotomy. With further development of the technology, it has the potential to improve the efficiency and precision of reconstructive surgery.
Affiliation(s)
- F H Meng
- Chinese PLA General Hospital, Department of Oral and Maxillofacial Surgery, 100853, Beijing, China
- Z H Zhu
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
- Z H Lei
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
- X H Zhang
- Shenzhen Luohu Hospital Group Luohu People's Hospital, Department of Oral and Maxillofacial Surgery, 518020, Shenzhen, China
- L Shao
- Beijing Institute of Technology, Optoelectronic College, 100081, Beijing, China
- H Z Zhang
- Chinese PLA General Hospital, Department of Oral and Maxillofacial Surgery, 100853, Beijing, China
- T Zhang
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
37
Ma L, Fei B. Comprehensive review of surgical microscopes: technology development and medical applications. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200292VRR. [PMID: 33398948 PMCID: PMC7780882 DOI: 10.1117/1.jbo.26.1.010901] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 12/04/2020] [Indexed: 05/06/2023]
Abstract
SIGNIFICANCE Surgical microscopes provide adjustable magnification, bright illumination, and clear visualization of the surgical field and have been increasingly used in operating rooms. State-of-the-art surgical microscopes are integrated with various imaging modalities, such as optical coherence tomography (OCT), fluorescence imaging, and augmented reality (AR) for image-guided surgery. AIM This comprehensive review is based on the literature of over 500 papers that cover the technology development and applications of surgical microscopy over the past century. The aim of this review is threefold: (i) providing a comprehensive technical overview of surgical microscopes, (ii) providing critical references for microscope selection and system development, and (iii) providing an overview of various medical applications. APPROACH More than 500 references were collected and reviewed. A timeline of important milestones during the evolution of surgical microscope is provided in this study. An in-depth technical overview of the optical system, mechanical system, illumination, visualization, and integration with advanced imaging modalities is provided. Various medical applications of surgical microscopes in neurosurgery and spine surgery, ophthalmic surgery, ear-nose-throat (ENT) surgery, endodontics, and plastic and reconstructive surgery are described. RESULTS Surgical microscopy has been significantly advanced in the technical aspects of high-end optics, bright and shadow-free illumination, stable and flexible mechanical design, and versatile visualization. New imaging modalities, such as hyperspectral imaging, OCT, fluorescence imaging, photoacoustic microscopy, and laser speckle contrast imaging, are being integrated with surgical microscopes. Advanced visualization and AR are being added to surgical microscopes as new features that are changing clinical practices in the operating room. 
CONCLUSIONS The combination of new imaging technologies and surgical microscopy will enable surgeons to perform challenging procedures and improve surgical outcomes. With advanced visualization and improved ergonomics, the surgical microscope has become a powerful tool in neurosurgery, spinal, ENT, ophthalmic, plastic and reconstructive surgeries.
Affiliation(s)
- Ling Ma
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
38
Parekh P, Patel S, Patel N, Shah M. Systematic review and meta-analysis of augmented reality in medicine, retail, and games. Vis Comput Ind Biomed Art 2020; 3:21. [PMID: 32954214 PMCID: PMC7492097 DOI: 10.1186/s42492-020-00057-7] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2020] [Accepted: 08/28/2020] [Indexed: 12/16/2022] Open
Abstract
This paper presents a detailed review of the applications of augmented reality (AR) in three important fields where AR use is currently increasing. The objective of this study is to highlight how AR improves and enhances the user experience in entertainment, medicine, and retail. The authors briefly introduce the topic of AR and discuss its differences from virtual reality. They also explain the software and hardware technologies required for implementing an AR system and the different types of displays required for enhancing the user experience. The growth of AR markets is also briefly discussed. The applications of AR are discussed in the three main sections of the paper. The use of AR in multiplayer gaming, computer games, broadcasting, and multimedia videos is highlighted as an aspect of entertainment and gaming. AR in medicine involves the use of AR in medical healing, medical training, medical teaching, surgery, and post-medical treatment. AR in retail is discussed in terms of its uses in advertisement, marketing, fashion retail, and online shopping. The authors conclude the paper by detailing the future use of AR and its advantages and disadvantages in the current scenario.
Affiliation(s)
- Pranav Parekh
- Department of Computer Engineering, Nirma University, Ahmedabad, Gujarat 382481, India
- Shireen Patel
- Department of Computer Engineering, Nirma University, Ahmedabad, Gujarat 382481, India
- Nivedita Patel
- Department of Computer Engineering, Nirma University, Ahmedabad, Gujarat 382481, India
- Manan Shah
- Department of Chemical Engineering, School of Technology, Pandit Deendayal Petroleum University, Gandhinagar, Gujarat, India
39
Kiarostami P, Dennler C, Roner S, Sutter R, Fürnstahl P, Farshad M, Rahm S, Zingg PO. Augmented reality-guided periacetabular osteotomy-proof of concept. J Orthop Surg Res 2020; 15:540. [PMID: 33203429 PMCID: PMC7672946 DOI: 10.1186/s13018-020-02066-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Accepted: 11/04/2020] [Indexed: 01/06/2023] Open
Abstract
BACKGROUND The Ganz periacetabular osteotomy (PAO) consists of four technically challenging osteotomies (OT), namely, supraacetabular (saOT), pubic (pOT), ischial (iOT), and retroacetabular OT (raOT). PURPOSE We performed a proof-of-concept study to test (1) the feasibility of augmented reality (AR) guidance for PAO, (2) the precision of the OTs guided by AR compared with the freehand technique performed by an experienced PAO surgeon, and (3) the effect of AR on performance depending on experience. METHODS A 3D preoperative plan of a PAO was created from segmented computed tomography (CT) data of an anatomic plastic pelvis model (PPM). The plan was then embedded in a software application for an AR head-mounted device. Soft tissue coverage was imitated using foam rubber. The 3D plan was then registered onto the PPM using an anatomical landmark registration. Two surgeons (one experienced and one novice PAO surgeon) each performed 15 freehand (FH) and 15 AR-guided PAOs. The starting-point distances and angulation between the planned and executed OT planes for the FH and the AR-guided PAOs were compared in post-intervention CTs. RESULTS AR guidance did not affect the performance of the expert surgeon in terms of the mean differences between the planned and executed starting points, but the raOT angle was more accurate than with FH PAO (p = 0.0027). AR guidance increased the accuracy of the novice surgeon's performance for iOT (p = 0.03). An intraarticular osteotomy, performed by the novice surgeon with the FH technique, was observed only once. CONCLUSION AR guidance of osteotomies for PAOs is feasible and seems to increase accuracy. The effect is more pronounced for less-experienced surgeons. CLINICAL RELEVANCE This is the first proof-of-concept study documenting the feasibility of AR guidance for PAO.
Based on these findings, further studies are essential for elaborating on the potential merits of AR guidance to increase the accuracy of complex surgical procedures.
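The angular-deviation outcome reported above — the angle between the planned and executed osteotomy planes — can be sketched as the angle between the planes' unit normals. This is an illustrative reconstruction, not code from the paper; the function name and inputs are assumptions.

```python
import numpy as np

def plane_angle_deg(n_planned, n_executed):
    """Angle in degrees between two osteotomy planes, given their normals."""
    a = np.asarray(n_planned, dtype=float)
    b = np.asarray(n_executed, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # abs() makes the result orientation-independent (a plane has two normals)
    cos_theta = np.clip(abs(np.dot(a, b)), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

print(round(plane_angle_deg([0, 0, 1], [0, 1, 1]), 3))  # 45.0
```

The starting-point distance metric would simply be the Euclidean distance between the planned and executed entry points on the bone surface.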
Affiliation(s)
- Pascal Kiarostami
- Department of Orthopaedics, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
- Cyrill Dennler
- Department of Orthopaedics, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
- Simon Roner
- Department of Orthopaedics, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
- Reto Sutter
- Department of Radiology, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
- Philipp Fürnstahl
- Computer Assisted Research & Development Group, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
- Mazda Farshad
- Department of Orthopaedics, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
- Stefan Rahm
- Department of Orthopaedics, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
- Patrick O. Zingg
- Department of Orthopaedics, Balgrist University Hospital, University of Zürich, Forchstrasse 340, 8008 Zürich, Switzerland
40
Southworth MK, Silva JNA, Blume WM, Van Hare GF, Dalal AS, Silva JR. Performance Evaluation of Mixed Reality Display for Guidance During Transcatheter Cardiac Mapping and Ablation. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2020; 8:1900810. [PMID: 32742821 PMCID: PMC7390021 DOI: 10.1109/jtehm.2020.3007031] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Revised: 06/23/2020] [Accepted: 06/24/2020] [Indexed: 01/18/2023]
Abstract
Cardiac electrophysiology procedures present the physician with a wealth of 3D information, typically presented on fixed 2D monitors. New developments in wearable mixed reality displays offer the potential to simplify and enhance 3D visualization while providing hands-free, dynamic control of devices within the procedure room. OBJECTIVE This work aims to evaluate the performance and quality of a mixed reality system designed for intraprocedural use in cardiac electrophysiology. METHOD The Enhanced Electrophysiology Visualization and Interaction System (ĒLVIS) mixed reality system performance criteria, including image quality, hardware performance, and usability, were evaluated using existing display validation procedures adapted to the electrophysiology-specific use case. Additional performance and user validation were performed through a 10-patient, in-human observational study, the Engineering ĒLVIS (E2) Study. RESULTS The ĒLVIS system achieved acceptable frame rate, latency, and battery runtime with acceptable dynamic range and depth distortion as well as minimal geometric distortion. Bench testing results corresponded with physician feedback in the observational study, and potential improvements in geometric understanding were noted. CONCLUSION The ĒLVIS system, based on current commercially available mixed reality hardware, is capable of meeting the hardware performance, image quality, and usability requirements of the electroanatomic mapping display for intraprocedural, real-time use in electrophysiology procedures. Verifying off-the-shelf mixed reality hardware for specific clinical use can accelerate the adoption of this transformative technology and provide novel visualization, understanding, and control of clinically relevant data in real time.
Affiliation(s)
- Jennifer N. Avari Silva
- SentiAR, Inc., St. Louis, MO 63108, USA
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- George F. Van Hare
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Aarti S. Dalal
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Jonathan R. Silva
- SentiAR, Inc., St. Louis, MO 63108, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
41
Gribaudo M, Piazzolla P, Porpiglia F, Vezzetti E, Violante MG. 3D augmentation of the surgical video stream: Toward a modular approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 191:105505. [PMID: 32387863 DOI: 10.1016/j.cmpb.2020.105505] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Revised: 03/29/2020] [Accepted: 04/08/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE We present an original approach to the development of augmented reality (AR) real-time solutions for robotic surgery navigation. The surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operatory scene. In order to improve the surgeon's spatial perception during robot-assisted minimally invasive procedures, we provide him/her with a solid automatic software system to position, rotate and scale in real time the 3D virtual model of a patient's organ aligned over its image captured by the endoscope. METHODS We observed that the surgeon may benefit differently from the 3D augmentation during each stage of the surgical procedure; moreover, each stage may present different visual elements that provide specific challenges and opportunities to exploit for the implementation of organ detection strategies. Hence we integrate different solutions, each dedicated to a specific stage of the surgical procedure, into a single software system. RESULTS We present a formal model that generalizes our approach, describing a system composed of integrated solutions for AR in robot-assisted surgery. Following the proposed framework, an application has been developed which is currently used during in vivo surgery, for extensive testing, by the Urology unit of the San Luigi Hospital in Orbassano (TO), Italy. CONCLUSIONS The main contribution of this paper is in presenting a modular approach to the tracking problem during in vivo robotic surgery, whose efficacy from a medical point of view has been assessed in the cited works. The segmentation of the whole procedure into a set of stages allows associating the best tracking strategy with each of them, as well as re-utilizing implemented software mechanisms in stages with similar features.
Affiliation(s)
- Marco Gribaudo
- Dept. of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Pietro Piazzolla
- Dept. of Management and Production Engineering, Politecnico di Torino, Torino, Italy
- Francesco Porpiglia
- Division of Urology, Department of Oncology, School of Medicine, University of Turin, Italy
- Enrico Vezzetti
- Dept. of Management and Production Engineering, Politecnico di Torino, Torino, Italy
- Maria Grazia Violante
- Dept. of Management and Production Engineering, Politecnico di Torino, Torino, Italy
42
Evangelista A, Ardito L, Boccaccio A, Fiorentino M, Messeni Petruzzelli A, Uva AE. Unveiling the technological trends of augmented reality: A patent analysis. COMPUT IND 2020. [DOI: 10.1016/j.compind.2020.103221] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
43
Léger É, Reyes J, Drouin S, Popa T, Hall JA, Collins DL, Kersten-Oertel M. MARIN: an open-source mobile augmented reality interactive neuronavigation system. Int J Comput Assist Radiol Surg 2020; 15:1013-1021. [DOI: 10.1007/s11548-020-02155-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Accepted: 04/03/2020] [Indexed: 12/20/2022]
44
Schoeb D, Suarez-Ibarrola R, Hein S, Dressler FF, Adams F, Schlager D, Miernik A. Use of Artificial Intelligence for Medical Literature Search: Randomized Controlled Trial Using the Hackathon Format. Interact J Med Res 2020; 9:e16606. [PMID: 32224481 PMCID: PMC7154940 DOI: 10.2196/16606] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Revised: 11/24/2019] [Accepted: 12/15/2019] [Indexed: 12/17/2022] Open
Abstract
Background Mapping out the research landscape around a project is often time-consuming and difficult. Objective This study evaluates a commercial artificial intelligence (AI) search engine (IRIS.AI) for its applicability in an automated literature search on a specific medical topic. Methods To evaluate the AI search engine in a standardized manner, the concept of a science hackathon was applied. Three groups of researchers were tasked with performing a literature search on a clearly defined scientific project. All participants had a high level of expertise in this specific field of research. Two groups were given access to the AI search engine IRIS.AI. All groups were given the same amount of time for their search and were instructed to document their results. Search results were summarized and ranked according to a predetermined scoring system. Results The final scoring awarded 49 and 39 points out of 60 to AI groups 1 and 2, respectively, and the control group received 46 points. A total of 20 scientific studies with high relevance were identified, and 5 highly relevant studies (“spot on”) were reported by each group. Conclusions AI technology is a promising approach to facilitate literature searches and the management of medical libraries. In this study, however, the application of AI technology led to a more focused literature search without a significant improvement in the number of results.
Affiliation(s)
- Dominik Schoeb
- Medical Center - Department of Urology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Rodrigo Suarez-Ibarrola
- Medical Center - Department of Urology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Simon Hein
- Medical Center - Department of Urology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Franz Friedrich Dressler
- Medical Center - Department of Urology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fabian Adams
- Medical Center - Department of Urology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Daniel Schlager
- Medical Center - Department of Urology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Arkadiusz Miernik
- Medical Center - Department of Urology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
45
A Skin-Conformal, Stretchable, and Breathable Fiducial Marker Patch for Surgical Navigation Systems. MICROMACHINES 2020; 11:mi11020194. [PMID: 32070015 PMCID: PMC7074652 DOI: 10.3390/mi11020194] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/06/2020] [Revised: 01/31/2020] [Accepted: 02/11/2020] [Indexed: 11/24/2022]
Abstract
Augmented reality (AR) surgical navigation systems have attracted considerable attention as they assist medical professionals in visualizing the location of ailments within the human body that are not readily seen with the naked eye. Taking medical imaging with a parallel C-shaped arm (C-arm) as an example, surgical sites are typically targeted using an optical tracking device and a fiducial marker in real time. These markers then guide operators who are using a multifunctional endoscope apparatus by signaling the direction or distance needed to reach the affected parts of the body. In this way, fiducial markers are used to accurately protect the vessels and nerves exposed during the surgical process. Although these systems have already shown potential for precision implantation, delamination of the fiducial marker, which is a critical component of the system, from human skin remains a challenge due to a mechanical mismatch between the marker and skin, causing registration problems that lead to poor position alignment and surgical degradation. To overcome this challenge, the mechanical modulus and stiffness of the marker patch should be lowered to approximately 150 kPa, which is comparable to that of the epidermis, while improving functionality. Herein, we present a skin-conformal, stretchable yet breathable fiducial marker for application in AR-based surgical navigation systems. By adopting pore patterns, we were able to create a fiducial marker with a skin-like low modulus and breathability. When attached to the skin, the fiducial marker was easily identified using optical recognition equipment and showed skin-conformal adhesion when repeatedly stretched and shrunk. As such, we believe the marker is a good fiducial marker candidate for use in surgical navigation systems.
46
Chen L, Zhang F, Zhan W, Gan M, Sun L. Optimization of virtual and real registration technology based on augmented reality in a surgical navigation system. Biomed Eng Online 2020; 19:1. [PMID: 31915014 PMCID: PMC6950982 DOI: 10.1186/s12938-019-0745-z] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 12/30/2019] [Indexed: 12/19/2022] Open
Abstract
Background The traditional navigation interface was intended only for two-dimensional observation by doctors; thus, it does not display the full spatial information of the lesion area. Surgical navigation systems have become essential tools that enable doctors to perform complex operations accurately and safely. However, the image navigation interface is separated from the operating area, and the doctor needs to switch the field of vision between the screen and the patient's lesion area. In this paper, augmented reality (AR) technology was applied to spinal surgery to provide more intuitive information to surgeons. The accuracy of virtual and real registration was improved through research on AR technology. During the operation, the doctor could observe the AR image and the true shape of the internal spine through the skin. Methods To improve the accuracy of virtual and real registration, a virtual and real registration technique based on an improved identification method and a robot-assisted method was proposed. The experimental method was optimized using the improved identification method. X-ray images were used to verify the effectiveness of the puncture performed by the robot. Results The final experimental results show that the average accuracy of the virtual and real registration based on the general identification method was 9.73 ± 0.46 mm (range 8.90–10.23 mm). The average accuracy of the virtual and real registration based on the improved identification method was 3.54 ± 0.13 mm (range 3.36–3.73 mm). Compared with the virtual and real registration based on the general identification method, the accuracy was improved by approximately 65%. The highest accuracy of the virtual and real registration based on the robot-assisted method was 2.39 mm, an improvement of approximately 28.5% over the improved identification method. Conclusion The experimental results show that the two optimized methods are highly effective. The proposed AR navigation system has high accuracy and stability. This system may have value in future spinal surgeries.
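Accuracy figures of the form "mean ± SD (range)" reported above are typically computed from per-point Euclidean errors between planned and measured positions. A minimal sketch, assuming paired 3D point lists; the sample coordinates are invented for illustration and are not data from the paper:

```python
import numpy as np

def registration_error_stats(planned, measured):
    """Mean, std, and range (mm) of Euclidean error between paired 3D points."""
    p = np.asarray(planned, dtype=float)
    m = np.asarray(measured, dtype=float)
    errors = np.linalg.norm(p - m, axis=1)  # one error per point pair
    return errors.mean(), errors.std(), (errors.min(), errors.max())

planned = [[0, 0, 0], [10, 0, 0], [0, 10, 0]]
measured = [[0, 0, 3], [10, 0, 4], [0, 10, 3.5]]
mean, std, rng = registration_error_stats(planned, measured)
```

Whether the standard deviation is the population or sample form (numpy's `std` defaults to population, `ddof=0`) is rarely stated in such abstracts.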
Affiliation(s)
- Long Chen
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China
- Fengfeng Zhang
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China
- Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
- Wei Zhan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Minfeng Gan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Lining Sun
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China
- Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
47
Abstract
Augmented reality and virtual reality technologies are increasing in popularity. Augmented reality has thrived to date mainly on mobile applications, with games like Pokémon Go or the new Google Maps utility as some of its ambassadors. On the other hand, virtual reality has been popularized mainly thanks to the videogame industry and cheaper devices. However, what was initially a failure in the industrial field is resurfacing in recent years thanks to the technological improvements in devices and processing hardware. In this work, an in-depth study of the different fields in which augmented and virtual reality have been used has been carried out. The study takes the form of a thorough scoping review of these new technologies, analyzing the evolution of each during recent years across the most important categories and in the countries most involved in these technologies. Finally, we analyze the future trend of these technologies and the areas in which further research is needed to integrate them more fully into society.
48
Gsaxner C, Wallner J, Chen X, Zemann W, Egger J. Facial model collection for medical augmented reality in oncologic cranio-maxillofacial surgery. Sci Data 2019; 6:310. [PMID: 31819060 PMCID: PMC6901520 DOI: 10.1038/s41597-019-0327-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 11/21/2019] [Indexed: 01/25/2023] Open
Abstract
Medical augmented reality (AR) is an increasingly important topic in many medical fields. AR enables "x-ray vision", allowing the user to see through real-world objects. In medicine, this offers pre-, intra- or post-interventional visualization of "hidden" structures. In contrast to a classical monitor view, AR applications provide visualization not only on but also in relation to the patient. However, research and development of medical AR applications is challenging because of unique patient-specific anatomies and pathologies. Working with several patients during development for weeks or even months is not feasible. One alternative is commercial patient phantoms, which are very expensive. Hence, this data set provides a unique collection of head and neck cancer patient PET-CT scans with corresponding 3D models, provided as stereolithography (STL) files. The 3D models are optimized for effective 3D printing at low cost. This data can be used in the development and evaluation of AR applications for head and neck surgery.
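STL files like those in such a data set can be inspected with the standard library alone, since a binary STL is simply an 80-byte header, a little-endian uint32 triangle count, and 50 bytes per triangle (normal, three vertices, attribute byte count). A hedged sketch; the file name and helper functions are illustrative, not part of the published data set:

```python
import struct

def stl_triangle_count(path):
    """Read the triangle count from a binary STL (80-byte header + uint32)."""
    with open(path, "rb") as f:
        f.seek(80)                      # skip the 80-byte header
        (count,) = struct.unpack("<I", f.read(4))
    return count

def write_demo_stl(path, triangles):
    """Write a minimal binary STL: normals zeroed, attribute byte count zero."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                                 # header
        f.write(struct.pack("<I", len(triangles)))          # triangle count
        for tri in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))      # facet normal
            for vx in tri:                                  # three vertices
                f.write(struct.pack("<3f", *vx))
            f.write(struct.pack("<H", 0))                   # attribute bytes

write_demo_stl("demo.stl", [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
print(stl_triangle_count("demo.stl"))  # 1
```

Note that ASCII STL files (starting with the text `solid`) need a different parser; the models in the data set would have to be checked for which variant they use.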
Affiliation(s)
- Christina Gsaxner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010, Graz, Austria
- Jürgen Wallner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Xiaojun Chen
- Shanghai Jiao Tong University, School of Mechanical Engineering, 800 Dong Chuan Road, Shanghai, 200240, China
- Wolfgang Zemann
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria
- Jan Egger
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010, Graz, Austria
- Shanghai Jiao Tong University, School of Mechanical Engineering, 800 Dong Chuan Road, Shanghai, 200240, China
49
Wheeler G, Deng S, Pushparajah K, Schnabel JA, Simpson JM, Gomez A. Virtual linear measurement system for accurate quantification of medical images. Healthc Technol Lett 2019; 6:220-225. [PMID: 32038861 PMCID: PMC6952242 DOI: 10.1049/htl.2019.0074] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2019] [Accepted: 10/02/2019] [Indexed: 11/29/2022] Open
Abstract
Virtual reality (VR) has the potential to aid in the understanding of complex volumetric medical images, by providing an immersive and intuitive experience accessible to both experts and non-imaging specialists. A key feature of any clinical image analysis tool is measurement of clinically relevant anatomical structures. However, this feature has been largely neglected in VR applications. The authors propose a Unity-based system to carry out linear measurements on three-dimensional (3D) images, purposefully designed for the measurement of 3D echocardiographic images. The proposed system is compared to commercially available, widely used image analysis packages that feature both 2D (multi-planar reconstruction) and 3D (volume rendering) measurement tools. The results indicate that the proposed system provides statistically equivalent measurements compared to the reference 2D system, while being more accurate than the commercial 3D system.
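The core of a linear measurement tool like the one described is converting voxel indices to physical coordinates using the image spacing before taking the Euclidean distance; using the indices directly would be wrong for anisotropic echo volumes. A minimal sketch, where the function name and spacing values are illustrative assumptions, not details from the paper:

```python
import numpy as np

def measure_mm(p_voxel, q_voxel, spacing_mm):
    """Distance in mm between two voxel-index points, given per-axis spacing."""
    p = np.asarray(p_voxel, dtype=float) * np.asarray(spacing_mm, dtype=float)
    q = np.asarray(q_voxel, dtype=float) * np.asarray(spacing_mm, dtype=float)
    return float(np.linalg.norm(p - q))

# Anisotropic volume: 0.5 mm in-plane, 1.0 mm between slices
print(round(measure_mm((0, 0, 0), (10, 0, 3), spacing_mm=(0.5, 0.5, 1.0)), 2))  # 5.83
```

In a VR system the endpoints come from the controller's world-space pose rather than voxel picks, but the same spacing-aware conversion is needed wherever the image volume is sampled.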
Affiliation(s)
- Gavin Wheeler
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Shujie Deng
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Kuberan Pushparajah
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Department of Congenital Heart Disease, Evelina London Children's Hospital, London, UK
- Julia A. Schnabel
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- John M. Simpson
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Department of Congenital Heart Disease, Evelina London Children's Hospital, London, UK
- Alberto Gomez
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
50
Talukder JR, Lin HY, Wu ST. Photo- and electrical-responsive liquid crystal smart dimmer for augmented reality displays. OPTICS EXPRESS 2019; 27:18169-18179. [PMID: 31252764 DOI: 10.1364/oe.27.018169] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Accepted: 05/30/2019] [Indexed: 06/09/2023]
Abstract
A dual-stimuli polarizer-free dye-doped liquid crystal (LC) dimmer is demonstrated. The LC composition consists of photo-stable chiral agent, photosensitive azobenzene, and dichroic dye in a nematic host with positive dielectric anisotropy. Upon UV exposure, the LC directors and dye molecules turn from initially vertical alignment (high transmittance state) to twisted fingerprint structure (low transmittance state). The reversal process is accelerated by combining a longitudinal electric field to unwind the LC directors from twisted fingerprint to homeotropic state, and a red light to transform the cis azobenzene back to trans. This device can be used as a smart dimmer to enhance the ambient contrast ratio for augmented reality displays.
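The ambient contrast ratio (ACR) such a dimmer improves can be illustrated with one common definition for see-through displays, ACR = (L_on + T·L_ambient) / (L_off + T·L_ambient), where T is the dimmer transmittance. The formula is a common convention, not taken from this paper, and the luminance numbers below are invented for illustration:

```python
def ambient_contrast_ratio(l_on, l_off, l_ambient, transmittance):
    """ACR for a see-through display: ambient light leaks through with factor T."""
    leak = transmittance * l_ambient  # ambient luminance passing the dimmer
    return (l_on + leak) / (l_off + leak)

# Illustrative numbers only: lowering dimmer transmittance from 70% to 10%
# raises the contrast of the same display in the same ambient light.
high_t = ambient_contrast_ratio(500, 0.5, 1000, 0.70)
low_t = ambient_contrast_ratio(500, 0.5, 1000, 0.10)
```

This is why a low-transmittance state helps in bright surroundings, at the cost of dimming the real-world view.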