1. Tu M, Jung H, Kim J, Kyme A. Head-Mounted Displays in Context-Aware Systems for Open Surgery: A State-of-the-Art Review. IEEE J Biomed Health Inform 2025; 29:1165-1175. [PMID: 39466871] [DOI: 10.1109/jbhi.2024.3485023]
Abstract
Surgical context-aware systems (SCAS), which leverage real-time data and analysis from the operating room to inform surgical activities, can be enhanced through the integration of head-mounted displays (HMDs). Rather than relying on user-agnostic data from conventional, often static, external sensors, HMD-based SCAS rely on dynamic, user-centric sensing of the surgical context. The analyzed context-aware information is then overlaid directly onto the user's field of view via augmented reality (AR) to improve task performance and decision-making. This state-of-the-art review complements previous reviews by exploring the advancement of HMD-based SCAS, including their development and impact on situational awareness and surgical outcomes in the operating room. The survey demonstrates that this technology can mitigate risks associated with gaps in surgical expertise, increase procedural efficiency, and improve patient outcomes. We also highlight key limitations still to be addressed by the research community, including improving prediction accuracy, robustly handling data heterogeneity, and reducing system latency.
2. Ye J, Chen Q, Zhong T, Liu J, Gao H. Is Overlain Display a Right Choice for AR Navigation? A Qualitative Study of Head-Mounted Augmented Reality Surgical Navigation on Accuracy for Large-Scale Clinical Deployment. CNS Neurosci Ther 2025; 31:e70217. [PMID: 39817491] [PMCID: PMC11736426] [DOI: 10.1111/cns.70217]
Abstract
BACKGROUND Over the past two decades, head-mounted augmented reality surgical navigation (HMARSN) systems have been increasingly employed across surgical specialties, driven by advances in augmented reality technologies and by surgeons' desire to overcome drawbacks inherent to conventional surgical navigation systems. Most current experimental HMARSN systems adopt an overlain display (OD) that overlays virtual models and planned tool trajectories on the corresponding physical tissues, organs, and lesions in the surgical field, giving surgeons an intuitive, direct view that improves hand-eye coordination and avoids attention shifts and loss of sight (LOS) during procedures. Yet system accuracy, the most crucial performance indicator of any surgical navigation system, is difficult to ascertain for OD systems because it is highly subjective and user-dependent. The aim of this study was therefore to review currently available experimental OD HMARSN systems qualitatively, explore how their accuracy is affected by the overlain display, and determine whether such systems are suited to large-scale clinical deployment. METHOD We searched PubMed and ScienceDirect with the terms "head mounted augmented reality surgical navigation," which returned 445 records in total. After screening and eligibility assessment, 60 papers were analyzed. Specifically, we focused on how system accuracy was defined and measured, and on whether that accuracy is stable in clinical practice and competitive with corresponding commercially available systems. RESULTS AND CONCLUSIONS The primary finding is that the accuracy of OD HMARSN systems is seriously affected by the transformation between the space of the user's eyes and the surgical field, because measurement of this transformation is heavily individualized and user-dependent. Additionally, the transformation itself is subject to change during surgical procedures, and hence unstable. Therefore, OD HMARSN systems are not suitable for large-scale clinical deployment.
Affiliation(s)
- Jian Ye
- Department of Neurosurgery, Affiliated Qingyuan Hospital, Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, China
- Qingwen Chen
- Department of Neurosurgery, The First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou, China
- Tao Zhong
- Department of Neurosurgery, The First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou, China
- Jian Liu
- Department of Neurosurgery, Affiliated Qingyuan Hospital, Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, China
- Han Gao
- Department of Neurosurgery, Affiliated Qingyuan Hospital, Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, China
3. An Z, Liu Y, Xue M. Design of AR system tracking registration method using dynamic target light-field. Opt Express 2024; 32:16467-16477. [PMID: 38859272] [DOI: 10.1364/oe.521975]
Abstract
In tracking registration for an augmented reality (AR) system, it is essential to first obtain the system's initial state, as its accuracy significantly influences the precision of subsequent three-dimensional tracking registration. At this stage, minor movements of the target can directly cause calibration errors, and current methods fail to capture the initial state of dynamic deformation in optical see-through AR systems effectively. To tackle this issue, the concept of a static light-field is extended to a four-dimensional dynamic light-field, and a tracking registration method for an optical see-through AR system based on this four-dimensional dynamic light-field is introduced. The method first analyzes the relationship between the components of the optical see-through AR system and studies the impact of a dynamic target on the initial-state model. Building on the fundamental principle of light-field correlation, the theory and model for four-dimensional dynamic light-field tracking registration are developed. Extensive experiments confirmed the accuracy and stability of the algorithm and demonstrated the superior performance of the proposed three-dimensional tracking registration method.
4. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Trans Vis Comput Graph 2023; 29:4782-4793. [PMID: 37782599] [DOI: 10.1109/tvcg.2023.3320231]
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
5. Schneider M, Kunz C, Wirtz CR, Mathis-Ullrich F, Pala A, Hlavac M. Augmented Reality-Assisted versus Freehand Ventriculostomy in a Head Model. J Neurol Surg A Cent Eur Neurosurg 2023; 84:562-569. [PMID: 37402395] [DOI: 10.1055/s-0042-1759827]
Abstract
BACKGROUND Ventriculostomy (VST) is a frequent neurosurgical procedure. Freehand catheter placement represents current standard practice; however, multiple attempts are often required. We present augmented reality (AR) headset-guided VST with in-house developed head models. In a proof-of-concept study, we tested both AR-guided and freehand VST, with repeated AR punctures to investigate whether a learning curve can be derived. METHODS Five custom-made 3D-printed head models, each holding an anatomically different ventricular system, were filled with agarose gel. Eleven surgeons placed two AR-guided and two freehand ventricular drains per head. A subgroup of four surgeons performed three series of AR-guided punctures each to test for a learning curve. A Microsoft HoloLens served as the hardware platform; the marker-based tracking did not require rigid head fixation. Catheter tip position was evaluated on computed tomography scans. RESULTS Marker tracking, image segmentation, and holographic display worked satisfactorily. Freehand VST achieved a success rate of 72.7%, higher than under AR guidance (68.2%; difference not statistically significant). Repeated AR-guided punctures increased the success rate from 65% to 95%, suggesting a steep learning curve. Overall user experience yielded positive feedback. CONCLUSIONS These promising results encourage continued development and technical improvement, although several further developmental steps must be taken before application in humans can be considered. In the future, AR headset-based holograms have the potential to serve as a compact navigational aid inside and outside the operating room.
Affiliation(s)
- Max Schneider
- Department of Neurosurgery, University Hospital Ulm, Ulm, Germany
- Christian Kunz
- Institute for Anthropomatics and Robotics - Health Robotics and Automation (HERA), KIT, Karlsruhe, Germany
- Franziska Mathis-Ullrich
- Institute for Anthropomatics and Robotics - Health Robotics and Automation (HERA), KIT, Karlsruhe, Germany
- Andrej Pala
- Department of Neurosurgery, University Hospital Ulm, Ulm, Germany
- Michal Hlavac
- Department of Neurosurgery, University Hospital Ulm, Ulm, Germany
6. Cao B, Yuan B, Xu G, Zhao Y, Sun Y, Wang Z, Zhou S, Xu Z, Wang Y, Chen X. A Pilot Human Cadaveric Study on Accuracy of the Augmented Reality Surgical Navigation System for Thoracolumbar Pedicle Screw Insertion Using a New Intraoperative Rapid Registration Method. J Digit Imaging 2023; 36:1919-1929. [PMID: 37131064] [PMCID: PMC10406793] [DOI: 10.1007/s10278-023-00840-x]
Abstract
To evaluate the feasibility and accuracy of AR-assisted pedicle screw placement using a new intraoperative rapid registration method combining preoperative CT scanning with intraoperative C-arm 2D fluoroscopy in cadavers. Five cadavers with intact thoracolumbar spines were employed in this study. Intraoperative registration was performed using anteroposterior and lateral views from preoperative CT scanning and intraoperative 2D fluoroscopic images. Patient-specific targeting guides were used for pedicle screw placement from Th1 to L5, totaling 166 screws. Instrumentation for each side was randomized (augmented reality surgical navigation (ARSN) vs. C-arm) with an equal distribution of 83 screws per group. CT was performed to evaluate the accuracy of both techniques by assessing screw positions and the deviations between inserted screws and planned trajectories. Postoperative CT showed that 98.80% (82/83) of screws in the ARSN group and 72.29% (60/83) in the C-arm group were within the 2-mm safe zone (p < 0.001). The mean instrumentation time per level in the ARSN group was significantly shorter than in the C-arm group (56.17 ± 3.33 s vs. 99.22 ± 9.03 s, p < 0.001). The overall intraoperative registration time was 17.2 ± 3.5 s per segment. AR-based navigation can provide surgeons with accurate guidance for pedicle screw insertion and reduce operative time through this intraoperative rapid registration method.
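The 2-mm safe-zone grading used in this study can be made concrete. Below is a minimal sketch of how planned-versus-inserted screw deviations might be computed from CT-identified entry and tip points; the function names, the chosen metrics, and the threshold default are illustrative assumptions, not the authors' actual evaluation code:

```python
import numpy as np

def screw_deviation(planned_entry, planned_tip, actual_entry, actual_tip):
    """Deviation of an inserted screw from its planned trajectory:
    entry-point and tip offsets in mm, plus the angle between the two
    screw axes in degrees (illustrative accuracy metrics)."""
    pe, pt = np.asarray(planned_entry, float), np.asarray(planned_tip, float)
    ae, at = np.asarray(actual_entry, float), np.asarray(actual_tip, float)
    d_entry = float(np.linalg.norm(ae - pe))
    d_tip = float(np.linalg.norm(at - pt))
    # Angle between the unit direction vectors of the two trajectories.
    u = (pt - pe) / np.linalg.norm(pt - pe)
    v = (at - ae) / np.linalg.norm(at - ae)
    angle_deg = float(np.degrees(np.arccos(np.clip(u @ v, -1.0, 1.0))))
    return d_entry, d_tip, angle_deg

def within_safe_zone(deviation_mm, threshold_mm=2.0):
    """Grade a deviation against a 2-mm safe zone, as in the abstract."""
    return deviation_mm <= threshold_mm
```

A screw translated 1 mm laterally with no angular error would grade as within the safe zone; one deviating 2.5 mm at the tip would not.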
Affiliation(s)
- Bing Cao
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Bo Yuan
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Guofeng Xu
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Yin Zhao
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Yanqing Sun
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Zhiwei Wang
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Shengyuan Zhou
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Zheng Xu
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
- Yao Wang
- Linyan Medical Technology Company Limited, 528 Ruiqing Road, Pudong New District, Shanghai, China
- Xiongsheng Chen
- Spine Center, Department of Orthopaedics, Shanghai Changzheng Hospital, Second Military Medical University, 415 Fengyang Road, Huangpu District, Shanghai, China
7. Jiang J, Zhang J, Sun J, Wu D, Xu S. User's image perception improved strategy and application of augmented reality systems in smart medical care: A review. Int J Med Robot 2023; 19:e2497. [PMID: 36629798] [DOI: 10.1002/rcs.2497]
Abstract
BACKGROUND Augmented reality (AR) is a human-computer interaction technology that combines virtual reality, computer vision, and computer networks. With the rapid advancement of the medical field towards intelligence and data visualisation, AR systems are becoming increasingly popular because they can provide doctors with sufficiently clear medical images and accurate image navigation in practical applications. However, different display types of AR systems have different effects on doctors' perception of the image after virtual-real fusion. If doctors cannot correctly perceive the image, they may be unable to match the virtual information with the real world, which significantly impairs their ability to recognise complex structures. METHODS This paper uses CiteSpace, a literature analysis tool, to visualise and analyse research hotspots for AR systems in the medical field. RESULTS A visual analysis of the 1163 articles retrieved from the Web of Science Core Collection database reveals that display technology and visualisation technology are currently the key research directions for AR systems. CONCLUSION This paper categorises AR systems by their display principles, reviews current image-perception optimisation schemes for each type of system, and analyses and compares the display types in terms of their practical applications in smart medical care, so that doctors can select the appropriate display type for a given scenario. Finally, the future development of AR display technology is anticipated so that AR can be applied more effectively in smart medical care. The advancement of display technology is critical for medical use of AR systems, and the advantages and disadvantages of each display type should be weighed in each application scenario to select the best system.
Affiliation(s)
- Jingang Jiang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Robotics & Its Engineering Research Center, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jiawei Zhang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jianpeng Sun
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Dianhao Wu
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Shuainan Xu
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
8. Syed TA, Siddiqui MS, Abdullah HB, Jan S, Namoun A, Alzahrani A, Nadeem A, Alkhodre AB. In-Depth Review of Augmented Reality: Tracking Technologies, Development Tools, AR Displays, Collaborative AR, and Security Concerns. Sensors (Basel) 2022; 23:146. [PMID: 36616745] [PMCID: PMC9824627] [DOI: 10.3390/s23010146]
Abstract
Augmented reality (AR) has gained enormous popularity and acceptance in the past few years. AR combines several component technologies into workable solutions across many domains. Tracking maintains a point of reference so that virtual objects remain correctly positioned in a real scene; display technologies merge the virtual and real world in the user's view; and authoring tools provide platforms for developing AR applications by exposing low-level libraries that interact with tracking sensors, cameras, and other hardware. In addition, advances in distributed computing and collaborative augmented reality (CAR), in which multiple participants share an AR setting, require stable solutions. The authors explore the available solutions in each of these areas and present a comprehensive review to support research and business transformation. In the course of this study, we identified a lack of security solutions in several areas of CAR, specifically in distributed trust management. We therefore also propose a trusted CAR architecture, illustrated with a tourism use case, that can serve as a model for researchers interested in securing AR-based remote communication sessions.
Affiliation(s)
- Toqeer Ali Syed
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Muhammad Shoaib Siddiqui
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Hurria Binte Abdullah
- School of Social Sciences and Humanities, National University of Science and Technology (NUST), Islamabad 44000, Pakistan
- Salman Jan
- Malaysian Institute of Information Technology, Universiti Kuala Lumpur, Kuala Lumpur 50250, Malaysia
- Department of Computer Science, Bacha Khan University Charsadda, Charsadda 24420, Pakistan
- Abdallah Namoun
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Ali Alzahrani
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Adnan Nadeem
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Ahmad B. Alkhodre
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
9. Freehand Gestural Selection with Haptic Feedback in Wearable Optical See-Through Augmented Reality. Information 2022. [DOI: 10.3390/info13120566]
Abstract
Augmented reality (AR) technologies can blend digital and physical space and serve a variety of applications intuitively and effectively. Specifically, wearable AR enabled by optical see-through (OST) head-mounted displays (HMDs) can provide users with a direct view of the physical environment containing digital objects, and users can interact directly with three-dimensional (3D) digital artefacts using freehand gestures captured by OST HMD sensors. However, as an emerging interaction paradigm, freehand interaction with OST AR still requires further investigation to improve user performance and satisfaction. We therefore conducted two studies investigating several aspects of freehand selection design in OST AR, including target placement, size, distance, position, and haptic feedback on the hand and body. The evaluation results indicated that 40 cm might be an appropriate target distance for freehand gestural selection. A large target size might lower selection time and error rate, while a small target size could minimise selection effort. Targets positioned in the centre are the easiest to select, while those in the corners require extra time and effort. Furthermore, we found that haptic feedback on the body could lead to high user preference and satisfaction. Based on these findings, we conclude with design recommendations for effective and comfortable freehand gestural interaction in OST AR.
10. Doughty M, Ghugre NR, Wright GA. Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery. J Imaging 2022; 8:203. [PMID: 35877647] [PMCID: PMC9318659] [DOI: 10.3390/jimaging8070203]
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy that described the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial surgeries (n=8). For preoperative input data, computed tomography (CT) (n=34), and surface rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly directly superimposed with the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy in the order of 2–5 mm has been demonstrated in phantom models, several human factors and technical challenges—perception, ease of use, context, interaction, and occlusion—remain to be addressed prior to widespread adoption of OST-HMD led surgical navigation.
Affiliation(s)
- Mitchell Doughty
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Nilesh R. Ghugre
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Graham A. Wright
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
11. Qian L, Song T, Unberath M, Kazanzides P. AR-Loupe: Magnified Augmented Reality by Combining an Optical See-Through Head-Mounted Display and a Loupe. IEEE Trans Vis Comput Graph 2022; 28:2550-2562. [PMID: 33170780] [DOI: 10.1109/tvcg.2020.3037284]
Abstract
Head-mounted loupes can increase the user's visual acuity to observe the details of an object, while optical see-through head-mounted displays (OST-HMDs) can provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, which combines the advantages of loupes and OST-HMDs to offer augmented reality in the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, the Magic Leap One, with binocular Galilean magnifying loupes via customized 3D-printed attachments. We model the combination of the user's eye, the OST-HMD screen, and the optical loupe as a pinhole camera. Calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo Single Point Active Alignment Method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. Users were able to achieve sub-millimeter accuracy (0.82 mm) on average, significantly smaller than with normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With real objects enlarged through optical magnification and registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
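SPAAM-style calibration, as adapted here, reduces to estimating a 3x4 pinhole projection matrix from 2D-3D correspondences collected while the user aligns an on-screen crosshair with a tracked world point. A minimal sketch of that least-squares core via the Direct Linear Transform (the function names and the synthetic setup are illustrative; the paper's adapted Stereo-SPAAM adds view segmentation and a stereo formulation not shown here):

```python
import numpy as np

def spaam_calibrate(points_3d, points_2d):
    """Estimate a 3x4 projection matrix from 2D-3D correspondences via
    the Direct Linear Transform (DLT), the least-squares core of
    SPAAM-style OST-HMD calibration. Needs >= 6 non-degenerate points."""
    assert len(points_3d) == len(points_2d) and len(points_3d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # The solution is the null-space direction of A: the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    P = vt[-1].reshape(3, 4)
    return P / P[2, 3]  # fix the arbitrary scale

def project(P, point_3d):
    """Apply the recovered pinhole model to a 3D point."""
    h = P @ np.append(point_3d, 1.0)
    return h[:2] / h[2]
```

In a real SPAAM session the 2D points come from the user's alignments rather than from a known matrix, so residual reprojection error reflects both tracking noise and alignment precision.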
12. Ferrari V, Cattari N, Fontana U, Cutolo F. Parallax Free Registration for Augmented Reality Optical See-Through Displays in the Peripersonal Space. IEEE Trans Vis Comput Graph 2022; 28:1608-1618. [PMID: 32881688] [DOI: 10.1109/tvcg.2020.3021534]
Abstract
Egocentric augmented reality (AR) interfaces are quickly becoming a key asset for assisting high-precision activities in the peripersonal space in several application fields. In these applications, accurate and robust registration of computer-generated information to the real scene is hard to achieve with traditional Optical See-Through (OST) displays, given that it relies on accurate calibration of the combined eye-display projection model. The calibration is required to efficiently estimate the projection parameters of the pinhole model that encapsulate the optical features of the display and whose values vary with the position of the user's eye. In this article, we describe an approach that prevents any parallax-related AR misregistration at a pre-defined working distance in OST displays with infinity focus; our strategy relies on a magnifier placed in front of the OST display, and features a proper parameterization of the virtual rendering camera achieved through a dedicated calibration procedure that accounts for the contribution of the magnifier. We model the registration error due to viewpoint parallax outside the ideal working distance. Finally, we validate our strategy on an OST display, and we show that sub-millimetric registration accuracy can be achieved for working distances of ±100 mm around the focal length of the magnifier.
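The parallax effect the authors model can be illustrated with a first-order similar-triangles estimate: when the eye-display model is only exact at a calibrated working distance, a lateral viewpoint offset produces an on-object misregistration that grows with the depth mismatch. This is a simplified sketch of the phenomenon, not the paper's actual error model; the function name and formula are assumptions:

```python
def parallax_misregistration_mm(eye_offset_mm, z_obj_mm, z_cal_mm):
    """First-order lateral AR misregistration on the object plane when
    the viewpoint is displaced laterally by eye_offset_mm from its
    calibrated position, the system is registered exactly at distance
    z_cal_mm, and the real object sits at z_obj_mm (similar triangles)."""
    return abs(eye_offset_mm) * abs(z_obj_mm - z_cal_mm) / z_cal_mm
```

Under this toy model a 3 mm eye shift with the object 100 mm beyond a 400 mm calibrated distance yields 0.75 mm of misregistration, and the error vanishes at the calibrated distance, which is consistent with the paper's claim of parallax-free registration at the pre-defined working distance.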
13. Architecture of a Hybrid Video/Optical See-through Head-Mounted Display-Based Augmented Reality Surgical Navigation Platform. Information 2022. [DOI: 10.3390/info13020081]
Abstract
In the context of image-guided surgery, augmented reality (AR) represents a ground-breaking improvement, especially when paired with wearability in the case of open surgery. Commercially available AR head-mounted displays (HMDs), designed for general purposes, are increasingly used outside their indications to develop surgical guidance applications, with the ambition of demonstrating the potential of AR in surgery. The applications proposed in the literature underline the strong demand for AR guidance in the operating room, together with the limitations that prevent commercial HMDs from fully answering that need. The medical domain demands specifically developed devices that address, together with ergonomics, surgical accuracy objectives and compliance with medical device regulations. In the framework of an EU Horizon 2020 project, a hybrid video and optical see-through augmented reality headset, paired with a software architecture, has been developed, both specifically designed to integrate seamlessly into the surgical workflow. In this paper, the overall architecture of the system is described. The developed AR HMD surgical navigation platform was successfully tested on seven patients to aid the surgeon in performing Le Fort 1 osteotomy in cranio-maxillofacial surgery, demonstrating the value of the hybrid approach and the safety and usability of the navigation platform.
Collapse
|
14
|
Doughty M, Ghugre NR. Head-Mounted Display-Based Augmented Reality for Image-Guided Media Delivery to the Heart: A Preliminary Investigation of Perceptual Accuracy. J Imaging 2022; 8:jimaging8020033. [PMID: 35200735 PMCID: PMC8878166 DOI: 10.3390/jimaging8020033] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Revised: 01/25/2022] [Accepted: 01/28/2022] [Indexed: 01/14/2023] Open
Abstract
By aligning virtual augmentations with real objects, optical see-through head-mounted display (OST-HMD)-based augmented reality (AR) can enhance user-task performance. Our goal was to compare the perceptual accuracy of several visualization paradigms involving an adjacent monitor, or the Microsoft HoloLens 2 OST-HMD, in a targeted task, as well as to assess the feasibility of displaying imaging-derived virtual models aligned with the injured porcine heart. With 10 participants, we performed a user study to quantify and compare the accuracy, speed, and subjective workload of each paradigm in the completion of a point-and-trace task that simulated surgical targeting. To demonstrate the clinical potential of our system, we assessed its use for the visualization of magnetic resonance imaging (MRI)-based anatomical models, aligned with the surgically exposed heart in a motion-arrested open-chest porcine model. Using the HoloLens 2 with alignment of the ground truth target and our display calibration method, users were able to achieve submillimeter accuracy (0.98 mm) and required 1.42 min for calibration in the point-and-trace task. In the porcine study, we observed good spatial agreement between the MRI-models and target surgical site. The use of an OST-HMD led to improved perceptual accuracy and task-completion times in a simulated targeting task.
Collapse
Affiliation(s)
- Mitchell Doughty
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada;
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Correspondence:
| | - Nilesh R. Ghugre
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada;
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
| |
Collapse
|
15
|
Zhang F, Zhang S, Sun L, Zhan W, Sun L. Research on registration and navigation technology of augmented reality for ex-vivo hepatectomy. Int J Comput Assist Radiol Surg 2021; 17:147-155. [PMID: 34800225 DOI: 10.1007/s11548-021-02531-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 10/27/2021] [Indexed: 11/29/2022]
Abstract
PURPOSE The application of augmented reality technology to the partial hepatectomy procedure has high practical significance, but existing augmented reality navigation systems have major drawbacks in their display and registration methods, which result in low precision. The augmented reality surgical navigation system proposed in this study improves on both aspects and can significantly improve surgical accuracy. METHODS Using optical see-through head-mounted displays for image display spares doctors from reconstructing the patient's two-dimensional image information in their minds and reduces their psychological burden. In the registration process, the biomechanical properties of the liver are introduced, and a non-rigid registration method based on biomechanics is proposed and realized by a meshless algorithm. In addition, this study uses the moving grid algorithm to carry out experiments on ex-vivo pig liver for experimental verification. RESULTS The marker-based interactive registration error is 4.21 ± 1.6 mm; after taking the biomechanical properties of the liver into account, the registration error is reduced to −0.153 ± 0.398 mm. The cutting error of the liver model is 0.159 ± 0.292 mm. In addition, with the aid of the proposed navigation system, the ex-vivo pig liver cutting experiment was completed with an error of −1.164 ± 0.576 mm. CONCLUSIONS As a proof-of-concept study, the augmented reality navigation system proposed here improves on traditional image-guided surgery in terms of display and registration methods, and the feasibility of the system is verified by ex-vivo pig liver experiments. The navigation system therefore has guiding significance for clinical surgery.
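The marker-based registration step reported above is, in its rigid form, a classic least-squares point-set alignment problem. A minimal sketch of rigid marker-based registration with its fiducial registration error, using the generic Kabsch/Umeyama solution (not the authors' biomechanical non-rigid method):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Kabsch, no scaling) mapping
    marker coordinates src onto dst. Returns the rotation R, the
    translation t, and the fiducial registration error (RMS residual)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation only
    t = cd - R @ cs
    fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, fre
```

For noise-free markers the fiducial registration error is essentially zero; in practice it is the mm-scale figure quoted in abstracts like the one above.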
Collapse
Affiliation(s)
- Fengfeng Zhang
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China.
| | - Shi Zhang
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, 150001, China
| | - Long Sun
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, 150001, China
| | - Wei Zhan
- The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
| | - Lining Sun
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
| |
Collapse
|
16
|
In Situ Visualization for 3D Ultrasound-Guided Interventions with Augmented Reality Headset. Bioengineering (Basel) 2021; 8:bioengineering8100131. [PMID: 34677204 PMCID: PMC8533537 DOI: 10.3390/bioengineering8100131] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 09/16/2021] [Accepted: 09/21/2021] [Indexed: 12/03/2022] Open
Abstract
Augmented Reality (AR) headsets have become the most ergonomic and efficient visualization devices to support complex manual tasks performed under direct vision. Their ability to provide hands-free interaction with the augmented scene makes them perfect for manual procedures such as surgery. This study demonstrates the reliability of an AR head-mounted display (HMD), conceived for surgical guidance, in navigating in-depth high-precision manual tasks guided by a 3D ultrasound imaging system. The integration between the AR visualization system and the ultrasound imaging system provides the surgeon with real-time intra-operative information on unexposed soft tissues that are spatially registered with the surrounding anatomic structures. The efficacy of the AR guiding system was quantitatively assessed with an in vitro study simulating a biopsy intervention aimed at determining the level of accuracy achievable. In the experiments, 10 subjects were asked to perform the biopsy on four spherical lesions of decreasing sizes (10, 7, 5, and 3 mm). The experimental results showed that 80% of the subjects were able to successfully perform the biopsy on the 5 mm lesion, with a 2.5 mm system accuracy. The results confirmed that the proposed integrated system can be used for navigation during in-depth high-precision manual tasks.
Collapse
|
17
|
Condino S, Cutolo F, Cattari N, Colangeli S, Parchi PD, Piazza R, Ruinato AD, Capanna R, Ferrari V. Hybrid Simulation and Planning Platform for Cryosurgery with Microsoft HoloLens. SENSORS 2021; 21:s21134450. [PMID: 34209748 PMCID: PMC8272062 DOI: 10.3390/s21134450] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 06/23/2021] [Accepted: 06/25/2021] [Indexed: 11/16/2022]
Abstract
Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advancement of devices along with surgical technique improvements have turned cryosurgery from an experimental to an established option for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient’s preoperative images. Several works have been aimed at modelling cryoablation through heat transfer simulations; however, most software applications do not meet some key requirements for clinical routine use, such as high computational speed and user-friendliness. This work aims to develop an intuitive platform for anatomical understanding and pre-operative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator, which exploits the potential of the 3D printing and augmented reality functionalities of Microsoft HoloLens. The proposed platform was preliminarily validated for the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, to make surgical planning easier than the traditional method, and to strengthen the memorization of surgical planning.
Collapse
Affiliation(s)
- Sara Condino
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (F.C.); (V.F.)
- Correspondence:
| | - Fabrizio Cutolo
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (F.C.); (V.F.)
| | - Nadia Cattari
- EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy; (N.C.); (R.P.); (A.D.R.)
| | - Simone Colangeli
- Orthopaedic and Traumatology Division, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, 56124 Pisa, Italy; (S.C.); (P.D.P.); (R.C.)
| | - Paolo Domenico Parchi
- Orthopaedic and Traumatology Division, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, 56124 Pisa, Italy; (S.C.); (P.D.P.); (R.C.)
| | - Roberta Piazza
- EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy; (N.C.); (R.P.); (A.D.R.)
| | - Alfio Damiano Ruinato
- EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy; (N.C.); (R.P.); (A.D.R.)
| | - Rodolfo Capanna
- Orthopaedic and Traumatology Division, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, 56124 Pisa, Italy; (S.C.); (P.D.P.); (R.C.)
| | - Vincenzo Ferrari
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (F.C.); (V.F.)
| |
Collapse
|
18
|
Hu X, Baena FRY, Cutolo F. Head-Mounted Augmented Reality Platform for Markerless Orthopaedic Navigation. IEEE J Biomed Health Inform 2021; 26:910-921. [PMID: 34115600 DOI: 10.1109/jbhi.2021.3088442] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Visual augmented reality (AR) has the potential to improve the accuracy, efficiency and reproducibility of computer-assisted orthopaedic surgery (CAOS). AR head-mounted displays (HMDs) further allow non-eye-shift target observation and an egocentric view. Recently, a markerless tracking and registration (MTR) algorithm was proposed to avoid the artificial markers that are conventionally pinned into the target anatomy for tracking, as their use prolongs the surgical workflow, introduces human-induced errors, and necessitates additional surgical invasion of the patient. However, such an MTR-based method has neither been explored for surgical applications nor integrated into current AR HMDs, making ergonomic HMD-based markerless AR CAOS navigation hard to achieve. To these aims, we present a versatile, device-agnostic and accurate HMD-based AR platform. Our software platform, supporting both video see-through (VST) and optical see-through (OST) modes, integrates two proposed fast calibration procedures using a specially designed calibration tool. According to the camera-based evaluation, our AR platform achieves a display error of 6.31 ± 2.55 arcmin for VST and 7.72 ± 3.73 arcmin for OST. A proof-of-concept markerless surgical navigation system to assist in femoral bone drilling was then developed based on the platform and Microsoft HoloLens 1. According to the user study, both VST and OST markerless navigation systems are reliable, with the OST system providing the best usability. The measured navigation error is 4.90 ± 1.04 mm, 5.96 ± 2.22° for the VST system and 4.36 ± 0.80 mm, 5.65 ± 1.42° for the OST system.
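Display errors quoted in arcmin, as in the abstract above, can be related to millimetres on the target anatomy through the working distance. A small helper illustrating the conversion (the 500 mm working distance is an assumed example, not a figure from the paper):

```python
import math

def arcmin_to_mm(error_arcmin: float, distance_mm: float) -> float:
    """Linear extent on the target plane subtended by an angular
    display error, via the projection d * tan(theta)."""
    theta_rad = math.radians(error_arcmin / 60.0)
    return distance_mm * math.tan(theta_rad)

# A 6.31 arcmin display error at an assumed 500 mm working distance
# corresponds to roughly 0.92 mm on the target plane.
print(round(arcmin_to_mm(6.31, 500.0), 2))  # 0.92
```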
Collapse
|
19
|
Li R, Tong Y, Yang T, Guo J, Si W, Zhang Y, Klein R, Heng PA. Towards quantitative and intuitive percutaneous tumor puncture via augmented virtual reality. Comput Med Imaging Graph 2021; 90:101905. [PMID: 33848757 DOI: 10.1016/j.compmedimag.2021.101905] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2020] [Revised: 02/02/2021] [Accepted: 03/14/2021] [Indexed: 11/24/2022]
Abstract
In recent years, radiofrequency ablation (RFA) therapy has become a widely accepted minimally invasive treatment for liver tumor patients. However, it is challenging for doctors to precisely and efficiently perform percutaneous tumor punctures under free-breathing conditions, because traditional RFA is based on 2D CT image information; the missing spatial and dynamic information must be supplied by the surgeon's experience. This paper presents a novel quantitative and intuitive surgical navigation modality for percutaneous respiratory tumor puncture via augmented virtual reality, in which the pre-operative virtual planning information is precisely overlaid on the intra-operative surgical scenario. In the pre-operative stage, we first combine the signed distance fields of feasible structures (like the liver and tumor), which the puncture path may traverse, and unfeasible structures (like large vessels and ribs), which the needle must not cross, to quantitatively generate the 3D feasible region for percutaneous puncture. We then design three constraints according to RFA specialist consensus to automatically determine the optimal puncture trajectory. In the intra-operative stage, we first propose a virtual-real alignment method to precisely superimpose the virtual information on the surgical scenario. Then, a user-friendly collaborative holographic interface is designed for real-time 3D respiratory tumor puncture navigation, which can effectively assist surgeons in locating the target quickly and accurately, step by step. The system was validated on a static abdominal phantom and in vivo in beagle dogs with artificial lesions. Experimental results demonstrate that the accuracy of the proposed planning strategy is better than that of manual planning sketched by experienced doctors. The proposed holographic navigation modality also effectively reduces the needle adjustments needed for precise puncture. Our system shows its clinical feasibility to provide quantitative planning of the optimal needle path and intuitive in situ holographic navigation for percutaneous tumor ablation without depending on the surgeon's experience, while reducing the number of needle adjustments. The proposed augmented virtual reality navigation system can effectively improve precision and reliability in percutaneous tumor ablation and has the potential to be used for other surgical navigation tasks.
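The feasible-region construction described in this abstract, combining signed distance fields (SDFs) of structures the needle may and may not traverse, can be sketched on a toy voxel grid with analytic sphere SDFs (the geometry, safety margin, and grid resolution are illustrative stand-ins for the paper's CT-derived fields):

```python
import numpy as np

# Toy voxel grid: a spherical "liver" containing a smaller spherical
# "vessel". A voxel is puncture-feasible when it lies inside the liver
# but at least `margin` mm away from the vessel.
n = 32
ax = np.linspace(-20.0, 20.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")

sdf_liver = np.sqrt(x**2 + y**2 + z**2) - 15.0        # < 0 inside liver
sdf_vessel = np.sqrt((x - 5)**2 + y**2 + z**2) - 4.0  # < 0 inside vessel

margin = 2.0
feasible = (sdf_liver < 0) & (sdf_vessel > margin)
print(int(feasible.sum()), "feasible voxels")
```

A trajectory planner would then search this boolean volume (plus entry-angle and length constraints) for an optimal needle path, which is the role of the three consensus-based constraints mentioned above.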
Collapse
Affiliation(s)
- Ruotong Li
- Department of Computer Science II, University of Bonn, Germany
| | - Yuqi Tong
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
| | - Tianpei Yang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
| | | | - Weixin Si
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China.
| | | | - Reinhard Klein
- Department of Computer Science II, University of Bonn, Germany
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong SAR, China
| |
Collapse
|
20
|
Cutolo F, Cattari N, Fontana U, Ferrari V. Optical See-Through Head-Mounted Displays With Short Focal Distance: Conditions for Mitigating Parallax-Related Registration Error. Front Robot AI 2020; 7:572001. [PMID: 33501331 PMCID: PMC7806030 DOI: 10.3389/frobt.2020.572001] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 11/17/2020] [Indexed: 11/13/2022] Open
Abstract
Optical see-through (OST) augmented reality head-mounted displays are quickly emerging as a key asset in several application fields, but their ability to profitably assist high-precision activities in the peripersonal space is still sub-optimal due to the calibration procedure required to properly model the user's viewpoint through the see-through display. In this work, we demonstrate the beneficial impact, on the parallax-related AR misregistration, of the use of optical see-through displays whose optical engines collimate the computer-generated image at a depth close to the fixation point of the user in the peripersonal space. To estimate the projection parameters of the OST display for a generic viewpoint position, our strategy relies on a dedicated parameterization of the virtual rendering camera based on a calibration routine that exploits photogrammetry techniques. We model the registration error due to the viewpoint shift and we validate it on an OST display with short focal distance. The results of the tests demonstrate that with our strategy the parallax-related registration error is submillimetric provided that the scene under observation stays within a suitable view volume that falls in a ±10 cm depth range around the focal plane of the display. This finding will pave the way to the development of new multi-focal models of OST HMDs specifically conceived to aid high-precision manual tasks in the peripersonal space.
Collapse
Affiliation(s)
- Fabrizio Cutolo
- Information Engineering Department, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Nadia Cattari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Umberto Fontana
- Information Engineering Department, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Vincenzo Ferrari
- Information Engineering Department, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| |
Collapse
|
21
|
Cercenelli L, Carbone M, Condino S, Cutolo F, Marcelli E, Tarsitano A, Marchetti C, Ferrari V, Badiali G. The Wearable VOSTARS System for Augmented Reality-Guided Surgery: Preclinical Phantom Evaluation for High-Precision Maxillofacial Tasks. J Clin Med 2020; 9:jcm9113562. [PMID: 33167432 PMCID: PMC7694536 DOI: 10.3390/jcm9113562] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 10/29/2020] [Accepted: 11/03/2020] [Indexed: 12/19/2022] Open
Abstract
BACKGROUND In the context of guided surgery, augmented reality (AR) represents a groundbreaking improvement. The Video and Optical See-Through Augmented Reality Surgical System (VOSTARS) is a new AR wearable head-mounted display (HMD), recently developed as an advanced navigation tool for maxillofacial and plastic surgery and other non-endoscopic surgeries. In this study, we report the results of phantom tests with VOSTARS aimed at evaluating its feasibility and accuracy in performing maxillofacial surgical tasks. METHODS An early prototype of VOSTARS was used. Le Fort 1 osteotomy was selected as the experimental task to be performed under VOSTARS guidance. A dedicated set-up was prepared, including the design of a maxillofacial phantom, an ad hoc tracker anchored to the occlusal splint, and cutting templates for accuracy assessment. Both qualitative and quantitative assessments were carried out. RESULTS VOSTARS, used in combination with the designed maxilla tracker, showed excellent tracking robustness under operating room lighting. Accuracy tests showed that 100% of Le Fort 1 trajectories were traced with an accuracy of ±1.0 mm, and on average, 88% of each trajectory's length was within ±0.5 mm accuracy. CONCLUSIONS Our preliminary results suggest that the VOSTARS system can be a feasible and accurate solution for guiding maxillofacial surgical tasks, paving the way to its validation in clinical trials and for a wide spectrum of maxillofacial applications.
Collapse
Affiliation(s)
- Laura Cercenelli
- eDIMES Lab—Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy;
- Correspondence: ; Tel.: +39-0516364603
| | - Marina Carbone
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Sara Condino
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Fabrizio Cutolo
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Emanuela Marcelli
- eDIMES Lab—Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy;
| | - Achille Tarsitano
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy; (A.T.); (C.M.); (G.B.)
| | - Claudio Marchetti
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy; (A.T.); (C.M.); (G.B.)
| | - Vincenzo Ferrari
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Giovanni Badiali
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy; (A.T.); (C.M.); (G.B.)
| |
Collapse
|
22
|
Li M, Seifabadi R, Long D, De Ruiter Q, Varble N, Hecht R, Negussie AH, Krishnasamy V, Xu S, Wood BJ. Smartphone- versus smartglasses-based augmented reality (AR) for percutaneous needle interventions: system accuracy and feasibility study. Int J Comput Assist Radiol Surg 2020; 15:1921-1930. [PMID: 32734314 DOI: 10.1007/s11548-020-02235-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Accepted: 07/14/2020] [Indexed: 11/26/2022]
Abstract
PURPOSE To compare the system accuracy and needle placement performance of smartphone- and smartglasses-based augmented reality (AR) for percutaneous needle interventions. METHODS An AR platform was developed to enable the superimposition of annotated anatomy and a planned needle trajectory onto a patient in real time. The system accuracy of the AR display on smartphone (iPhone7) and smartglasses (HoloLens1) devices was evaluated on a 3D-printed phantom. The target overlay error was measured as the distance between actual and virtual targets (n = 336) on the AR display, derived from preprocedural CT. The needle overlay angle was measured as the angular difference between actual and virtual needles (n = 12) on the AR display. Three operators each used the iPhone (n = 8), HoloLens (n = 8) and CT-guided freehand (n = 8) to guide needles into targets in a phantom. Needle placement error was measured with post-placement CT. Needle placement time was recorded from needle puncture to navigation completion. RESULTS The target overlay error of the iPhone was comparable to the HoloLens (1.75 ± 0.59 mm, 1.74 ± 0.86 mm, respectively, p = 0.9). The needle overlay angle of the iPhone and HoloLens was similar (0.28 ± 0.32°, 0.41 ± 0.23°, respectively, p = 0.26). The iPhone-guided needle placements showed reduced error compared to the HoloLens (2.58 ± 1.04 mm, 3.61 ± 2.25 mm, respectively, p = 0.05) and increased time (87 ± 17 s, 71 ± 27 s, respectively, p = 0.02). Both AR devices reduced placement error compared to CT-guided freehand (15.92 ± 8.06 mm, both p < 0.001). CONCLUSION An augmented reality platform employed on smartphone and smartglasses devices may provide accurate display and navigation guidance for percutaneous needle-based interventions.
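The two accuracy metrics used in the study above, target overlay error and needle overlay angle, are straightforward to compute from tracked positions. A minimal sketch with generic geometry helpers (not the authors' code):

```python
import numpy as np

def target_overlay_error(actual, virtual):
    """Euclidean distance (mm) between an actual target and its
    virtual counterpart as shown on the AR display."""
    return float(np.linalg.norm(np.asarray(actual, float) - np.asarray(virtual, float)))

def needle_overlay_angle(v_actual, v_virtual):
    """Angle (degrees) between the actual and virtual needle axes."""
    a = np.asarray(v_actual, float)
    b = np.asarray(v_virtual, float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

print(target_overlay_error([0, 0, 0], [1.0, 1.0, 1.0]))  # ≈ 1.732 mm
print(needle_overlay_angle([0, 0, 1], [0, 1, 1]))        # ≈ 45.0°
```

Averaging these quantities over many targets and needle placements yields summary figures like the 1.75 ± 0.59 mm and 0.28 ± 0.32° values reported above.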
Collapse
Affiliation(s)
- Ming Li
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA.
| | - Reza Seifabadi
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Dilara Long
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Quirina De Ruiter
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Nicole Varble
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
- Philips Research of North America, Cambridge, MA, USA
| | - Rachel Hecht
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Ayele H Negussie
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Venkatesh Krishnasamy
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Sheng Xu
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Bradford J Wood
- Center for Interventional Oncology, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, 20892, USA
| |
Collapse
|
23
|
Lee KK, Kim JW, Ryu JH, Kim JO. Ambient light robust gamut mapping for optical see-through displays. OPTICS EXPRESS 2020; 28:15392-15406. [PMID: 32403567 DOI: 10.1364/oe.391447] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Accepted: 04/24/2020] [Indexed: 06/11/2023]
Abstract
An optical see-through (OST) display is affected more severely by ambient light than any other type of display when used in a bright outdoor environment, because of its transparency, and its inherent color distortion can thus worsen. Existing gamut mapping methods are hard to apply directly to an OST display because of its morphological gamut characteristic and the effect of ambient light. In this paper, we propose a new gamut mapping method that is robust to bright ambient light. The process is divided into two steps: lightness mapping (LM) and chroma reproduction. LM aligns the lightness level of the sRGB gamut with the OST gamut and partitions the region of the OST gamut based on the relative size of the sRGB gamut and its lightness value. The second step (chroma reproduction) determines an appropriate chroma reproduction method (gamut compression or extension) and a proper direction for gamut mapping based on the characteristics of each region, in order to minimize the effects of ambient light. The quality of color reproduction is qualitatively and quantitatively evaluated based on accurate measurements of the displayed colors. It has been experimentally confirmed that the proposed gamut mapping method can reduce color distortion more than existing parametric gamut mapping algorithms.
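The first step, lightness mapping, aligns the lightness range the display can actually render with the sRGB source range. A deliberately simplified linear version (the proposed method's partition-based mapping is more elaborate; the destination range here is an illustrative assumption reflecting how ambient light lifts black and caps white on a transparent display):

```python
def lightness_map(L_src: float,
                  src_range=(0.0, 100.0),
                  dst_range=(30.0, 90.0)) -> float:
    """Linearly remap a CIELAB L* value from the sRGB source range onto
    the narrower lightness range an OST display can render under
    ambient light. dst_range endpoints are illustrative assumptions."""
    s0, s1 = src_range
    d0, d1 = dst_range
    return d0 + (L_src - s0) * (d1 - d0) / (s1 - s0)

print(lightness_map(0.0))    # 30.0 (black is lifted by ambient light)
print(lightness_map(100.0))  # 90.0
print(lightness_map(50.0))   # 60.0
```

Chroma reproduction would then compress or extend chroma per region; a linear LM like this is only the skeleton of that pipeline.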
Collapse
|
24
|
Condino S, Fida B, Carbone M, Cercenelli L, Badiali G, Ferrari V, Cutolo F. Wearable Augmented Reality Platform for Aiding Complex 3D Trajectory Tracing. SENSORS (BASEL, SWITZERLAND) 2020; 20:E1612. [PMID: 32183212 PMCID: PMC7146390 DOI: 10.3390/s20061612] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2020] [Revised: 03/05/2020] [Accepted: 03/11/2020] [Indexed: 01/28/2023]
Abstract
Augmented reality (AR) Head-Mounted Displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Despite this, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we report the results of a user study aimed at qualitatively and quantitatively validating a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform can be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin lower than 1 mm. The results confirm that the proposed AR platform will boost the profitable adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
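The headline accuracy figure in this abstract, the share of a traced trajectory staying within a 1 mm error margin, can be computed from sampled points as follows (a generic nearest-point approximation, not the authors' template-based measurement):

```python
import numpy as np

def fraction_within_margin(traced, reference, margin_mm=1.0):
    """Fraction of traced points whose distance to the nearest point of
    the reference trajectory is below margin_mm. Both inputs are
    (N, 3) arrays of sampled points in mm."""
    traced = np.asarray(traced, float)
    reference = np.asarray(reference, float)
    # Pairwise distances: traced points (rows) vs reference points (cols).
    d = np.linalg.norm(traced[:, None, :] - reference[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < margin_mm))

# A straight reference line, and a trace running 0.5 mm off it:
ref = np.stack([np.linspace(0, 10, 101), np.zeros(101), np.zeros(101)], axis=1)
traced = ref + np.array([0.0, 0.5, 0.0])
print(fraction_within_margin(traced, ref))  # 1.0
```

With densely sampled references this nearest-point distance is a reasonable stand-in for the point-to-curve error that physical templates measure.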
Collapse
Affiliation(s)
- Sara Condino
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
| | - Benish Fida
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
| | - Marina Carbone
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
| | - Laura Cercenelli
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
| | - Giovanni Badiali
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
| | - Vincenzo Ferrari
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
| | - Fabrizio Cutolo
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
| |
Collapse
|
25
|
Ambiguity-Free Optical-Inertial Tracking for Augmented Reality Headsets. SENSORS 2020; 20:s20051444. [PMID: 32155808 PMCID: PMC7085738 DOI: 10.3390/s20051444] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Revised: 03/04/2020] [Accepted: 03/04/2020] [Indexed: 01/19/2023]
Abstract
The increasing capability of computing power and mobile graphics has made possible the release of self-contained augmented reality (AR) headsets featuring efficient head-anchored tracking solutions. Ego-motion estimation based on well-established infrared tracking of markers ensures sufficient accuracy and robustness. Unfortunately, wearable visible-light stereo cameras with a short baseline and operating under uncontrolled lighting conditions suffer from tracking failures and ambiguities in pose estimation. To improve the accuracy of optical self-tracking and its resilience to marker occlusions, degraded camera calibrations, and inconsistent lighting, in this work we propose a sensor fusion approach based on Kalman filtering that integrates optical tracking data with inertial tracking data when computing motion correlation. To measure improvements in AR overlay accuracy, experiments were performed with a custom-made AR headset designed to support complex manual tasks performed under direct vision. The experimental results show that the proposed solution improves the head-mounted display (HMD) tracking accuracy by one third and increases robustness: the orientation of the target scene is still captured when some of the markers are occluded, and when the optical tracking yields unstable or ambiguous results owing to the limitations of head-anchored stereo tracking cameras under uncontrolled lighting conditions.
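The optical-inertial fusion described above can be sketched, in a deliberately simplified scalar form, as one predict-correct Kalman step per axis: the inertial rate drives the prediction and the marker-based optical pose supplies the correction. The function below is an illustrative sketch, not the authors' implementation; the noise variances `q` and `r` are assumed values.

```python
def fuse_orientation(theta_optical, omega_gyro, dt, theta_est, P,
                     q=1e-4, r=1e-2):
    """One scalar Kalman step: predict from the gyro rate, then
    correct with the optical (marker-based) orientation measurement.
    Angles in radians; q (process) and r (measurement) are
    illustrative noise variances."""
    # Predict: integrate the inertial angular rate
    theta_pred = theta_est + omega_gyro * dt
    P_pred = P + q
    # Correct: blend in the optical measurement via the Kalman gain
    K = P_pred / (P_pred + r)
    theta_new = theta_pred + K * (theta_optical - theta_pred)
    P_new = (1.0 - K) * P_pred
    return theta_new, P_new
```

When the optical measurement is unavailable (e.g., occluded markers), the correction step can simply be skipped so that the inertial prediction carries the pose, which is what lets such a fused tracker degrade gracefully.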
Collapse
|
26
|
Park BJ, Hunt SJ, Martin C, Nadolski GJ, Wood BJ, Gade TP. Augmented and Mixed Reality: Technologies for Enhancing the Future of IR. J Vasc Interv Radiol 2020; 31:1074-1082. [PMID: 32061520 DOI: 10.1016/j.jvir.2019.09.020] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Revised: 08/01/2019] [Accepted: 09/20/2019] [Indexed: 10/25/2022] Open
Abstract
Augmented and mixed reality are emerging interactive and display technologies. These technologies are able to merge virtual objects, in either 2 or 3 dimensions, with the real world. Image guidance is the cornerstone of interventional radiology. With augmented or mixed reality, medical imaging can be more readily accessible or displayed in actual 3-dimensional space during procedures to enhance guidance, at times when this information is most needed. In this review, the current state of these technologies is addressed, followed by a fundamental overview of their inner workings and the challenges of 3-dimensional visualization. Finally, current and potential future applications in interventional radiology are highlighted.
Collapse
Affiliation(s)
- Brian J Park
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104.
| | - Stephen J Hunt
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104
| | - Charles Martin
- Department of Interventional Radiology, Cleveland Clinic, Cleveland, Ohio
| | - Gregory J Nadolski
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104
| | - Bradford J Wood
- Interventional Radiology, National Institutes of Health, Bethesda, Maryland
| | - Terence P Gade
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104
| |
Collapse
|
27
|
Singh G, Ellis SR, Swan JE. The Effect of Focal Distance, Age, and Brightness on Near-Field Augmented Reality Depth Matching. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1385-1398. [PMID: 30222576 DOI: 10.1109/tvcg.2018.2869729] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Many augmented reality (AR) applications operate within near-field reaching distances, and require matching the depth of a virtual object with a real object. The accuracy of this matching was measured in three experiments, which examined the effect of focal distance, age, and brightness, within distances of 33.3 to 50 cm, using a custom-built AR haploscope. Experiment I examined the effect of focal demand, at the levels of collimated (infinite focal distance), consistent with other depth cues, and at the midpoint of reaching distance. Observers were too young to exhibit age-related reductions in accommodative ability. The depth matches of collimated targets were increasingly overestimated with increasing distance, consistent targets were slightly underestimated, and midpoint targets were accurately estimated. Experiment II replicated Experiment I, with older observers. Results were similar to Experiment I. Experiment III replicated Experiment I with dimmer targets, using young observers. Results were again consistent with Experiment I, except that both consistent and midpoint targets were accurately estimated. In all cases, collimated results were explained by a model, where the collimation biases the eyes' vergence angle outwards by a constant amount. Focal demand and brightness affect near-field AR depth matching, while age-related reductions in accommodative ability have no effect.
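The constant-bias vergence model used to explain the collimated results can be sketched in standard binocular geometry (symbols chosen here for illustration: $a$ is the interpupillary distance, $\theta$ the vergence angle, and $\beta$ the constant outward bias induced by collimation):

```latex
d = \frac{a}{2\,\tan(\theta/2)}
\qquad\longrightarrow\qquad
d' = \frac{a}{2\,\tan\!\bigl((\theta - \beta)/2\bigr)}
```

Since $\theta$ decreases with viewing distance, the same bias $\beta$ produces a depth overestimation $d' - d$ that grows with distance, matching the pattern reported for the collimated targets.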
Collapse
|
28
|
Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app10010193] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In recent years, the entry into the market of self-contained optical see-through headsets with integrated multi-sensor capabilities has led the way to innovative, technology-driven augmented reality applications and has encouraged the adoption of these devices across highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for those applications that require high accuracy in the spatial alignment between computer-generated elements and the real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes an off-line fast calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera that replaces the user’s eye must be placed within the eye-motion box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current user’s eye position. Compared to classical SPAAM techniques, which still rely on the human element, and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings.
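The shift-and-scale refinement described above can be illustrated with a toy computation: the off-line display intrinsics are composed with a planar homography built from a scale and a pixel shift. The values below are hypothetical, for illustration only; in the actual method they would be derived from the estimated eye-to-camera translation.

```python
import numpy as np

def refine_projection(K_display, scale, shift_xy):
    """Compose off-line display intrinsics with a planar homography
    modelling the shift-and-scale effect of moving from the
    calibration camera position to the current eye position.
    `scale` and `shift_xy` are illustrative inputs."""
    H = np.array([[scale, 0.0, shift_xy[0]],
                  [0.0, scale, shift_xy[1]],
                  [0.0, 0.0, 1.0]])
    return H @ K_display

# Hypothetical off-line intrinsics of the see-through display model
K0 = np.array([[1500.0, 0.0, 640.0],
               [0.0, 1500.0, 360.0],
               [0.0, 0.0, 1.0]])
# Run-time refinement for an assumed 2% scale and small pixel shift
K1 = refine_projection(K0, 1.02, (4.0, -2.5))
```

The refinement is a cheap 3x3 matrix product, which is why it can be applied at execution time without repeating the full calibration.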
Collapse
|
29
|
Ferrari V, Carbone M, Condino S, Cutolo F. Are augmented reality headsets in surgery a dead end? Expert Rev Med Devices 2019; 16:999-1001. [PMID: 31725347 DOI: 10.1080/17434440.2019.1693891] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- Vincenzo Ferrari
- Dipartimento di Ingegneria dell'Informazione, Università di Pisa, Pisa, Italy
| | - Marina Carbone
- Dipartimento di Ingegneria dell'Informazione, Università di Pisa, Pisa, Italy
| | - Sara Condino
- Dipartimento di Ingegneria dell'Informazione, Università di Pisa, Pisa, Italy
| | - Fabrizio Cutolo
- Dipartimento di Ingegneria dell'Informazione, Università di Pisa, Pisa, Italy
| |
Collapse
|
30
|
Kim JH, Son HJ, Lee SH, Kwon SC. VR/AR Head-mounted Display System-based Measurement and Evaluation of Dynamic Visual Acuity. J Eye Mov Res 2019; 12. [PMID: 33828774 PMCID: PMC7881879 DOI: 10.16910/jemr.12.8.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
This study evaluated the dynamic visual acuity of candidates by implementing a King–Devick (K-D) test chart in a virtual reality head-mounted display (VR HMD) and an augmented reality head-mounted display (AR HMD). Hard-copy K-D (HCKD), VR HMD K-D (VHKD), and AR HMD K-D (AHKD) tests were conducted in 30 male and female candidates in their 10s and 20s, and subjective symptom surveys were administered. In the subjective symptom surveys, all except one of the VHKD questionnaire items showed subjective symptoms of less than 1 point. In the comparison between HCKD and VHKD, HCKD was measured more rapidly than VHKD in all tests. In the comparison between HCKD and AHKD, HCKD was measured more rapidly than AHKD in Tests 1, 2, and 3. In the comparison between VHKD and AHKD, AHKD was measured more rapidly than VHKD in Tests 1, 2, and 3. In the correlation analyses of test platforms, all platforms were correlated with each other, except for the correlation between HCKD and VHKD in Tests 1 and 2. There was no significant difference in the frequency of errors among Tests 1, 2, and 3 across test platforms. VHKD and AHKD, which require the body to be moved to read the chart, required longer measurement times than HCKD. In the measurements of each platform, AHKD was closer to HCKD than VHKD, which may be because the AHKD environment is closer to the actual environment than the VHKD environment. The effectiveness of the VHKD and AHKD tests proposed in this research was evaluated experimentally. The results suggest that treatment and training could be performed concurrently through clinical testing and content development for VHKD and AHKD.
Collapse
Affiliation(s)
- Jung-Ho Kim
- Industry-Academic Collaboration Foundation, Kwangwoon University, Seoul, Korea
| | - Ho-Jun Son
- Strategy and Planning Team, Korea VR AR Industry Association, Seoul, Korea
| | - Seung-Hyun Lee
- Ingenium College of Liberal Arts, Kwangwoon University, Seoul, Korea
| | - Soon-Chul Kwon
- Graduate School of Smart Convergence, Kwangwoon University, Seoul, Korea
| |
Collapse
|
31
|
Letter to the Editor on “Augmented Reality Based Navigation for Computer Assisted Hip Resurfacing: A Proof of Concept Study”. Ann Biomed Eng 2019; 47:2151-2153. [DOI: 10.1007/s10439-019-02299-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 05/29/2019] [Indexed: 01/20/2023]
|
32
|
Condino S, Carbone M, Piazza R, Ferrari M, Ferrari V. Perceptual Limits of Optical See-Through Visors for Augmented Reality Guidance of Manual Tasks. IEEE Trans Biomed Eng 2019; 67:411-419. [PMID: 31059421 DOI: 10.1109/tbme.2019.2914517] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE The focal length of available optical see-through (OST) head-mounted displays (HMDs) is at least 2 m; therefore, during manual tasks, the user's eye cannot keep both the virtual and the real content in focus at the same time. Another perceptual limitation is related to the vergence-accommodation conflict, the latter being present in binocular vision only. This paper investigates the effect of incorrect focus cues on user performance, visual comfort, and workload during the execution of an augmented reality (AR)-guided manual task with one of the most advanced OST HMDs, the Microsoft HoloLens. METHODS An experimental study was designed to investigate the performance of 20 subjects in a connect-the-dots task, with and without the use of AR. The following tests were planned: AR-guided monocular and binocular, and naked-eye monocular and binocular. Each trial was analyzed to evaluate the accuracy in connecting dots. NASA Task Load Index and Likert questionnaires were used to assess the workload and the visual comfort. RESULTS No statistically significant differences were found in the workload or in the perceived comfort between the AR-guided binocular and monocular tests. User performance was significantly better during the naked-eye tests. No statistically significant differences in performance were found between the monocular and binocular tests. The maximum error in the AR tests was 5.9 mm. CONCLUSION Even though there is growing interest in using commercial OST HMDs for guiding high-precision manual tasks, attention should be paid to the limitations of the available technology, which was not designed for the peripersonal space.
Collapse
|
33
|
Hamasaki T, Itoh Y. Varifocal Occlusion for Optical See-Through Head-Mounted Displays using a Slide Occlusion Mask. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:1961-1969. [PMID: 30946658 DOI: 10.1109/tvcg.2019.2899249] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
We propose a varifocal occlusion technique for optical see-through head-mounted displays (OST-HMDs). Occlusion in OST-HMDs is a powerful visual cue that enables depth perception in augmented reality (AR). Without occlusion, virtual objects rendered by an OST-HMD appear semi-transparent and less realistic. A common occlusion technique is to use spatial light modulators (SLMs) to selectively block incoming light rays at each pixel on the SLM. However, most of the existing methods create an occlusion mask only at a single, fixed depth, typically at infinity. With recent advances in varifocal OST-HMDs, such traditional fixed-focus occlusion causes a mismatch in depth between the occlusion mask plane and the virtual object to be occluded, leading to an uncomfortable user experience with blurred occlusion masks. In this paper, we thus propose an OST-HMD system with varifocal occlusion capability: we physically slide a transmissive liquid crystal display (LCD) to optically shift the occlusion plane along the optical path so that the mask appears sharp and aligns to a virtual image at a given depth. Our solution has several benefits over existing varifocal occlusion methods: it is computationally less demanding and, more importantly, it is optically consistent, i.e., when a user loses focus on the corresponding virtual image, the mask again gets blurred consistently as the virtual image does. In the experiment, we build a proof-of-concept varifocal occlusion system implemented with a custom retinal projection display and demonstrate that the system can shift the occlusion plane to depths ranging from 25 cm to infinity.
Collapse
|
34
|
de Oliveira ME, Debarba HG, Lädermann A, Chagué S, Charbonnier C. A hand-eye calibration method for augmented reality applied to computer-assisted orthopedic surgery. Int J Med Robot 2018; 15:e1969. [DOI: 10.1002/rcs.1969] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Revised: 10/25/2018] [Accepted: 10/30/2018] [Indexed: 12/23/2022]
Affiliation(s)
| | | | - Alexandre Lädermann
- Division of Orthopedics and Trauma Surgery; La Tour Hospital; Geneva Switzerland
- Department of Orthopedic Surgery and Traumatology; Geneva University Hospital; Geneva Switzerland
| | - Sylvain Chagué
- Medical Research Department; Artanim Foundation; Geneva Switzerland
| | - Caecilia Charbonnier
- Medical Research Department; Artanim Foundation; Geneva Switzerland
- Faculty of Medicine; University of Geneva; Geneva Switzerland
| |
Collapse
|
35
|
How to Build a Patient-Specific Hybrid Simulator for Orthopaedic Open Surgery: Benefits and Limits of Mixed-Reality Using the Microsoft HoloLens. JOURNAL OF HEALTHCARE ENGINEERING 2018; 2018:5435097. [PMID: 30515284 PMCID: PMC6236521 DOI: 10.1155/2018/5435097] [Citation(s) in RCA: 74] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Accepted: 09/30/2018] [Indexed: 12/19/2022]
Abstract
Orthopaedic simulators are popular in innovative surgical training programs, where trainees gain procedural experience in a safe and controlled environment. Recent studies suggest that an ideal simulator should combine haptic, visual, and audio technology to create an immersive training environment. This article explores the potential of mixed reality, using the HoloLens, to develop a hybrid training system for orthopaedic open surgery. Hip arthroplasty, one of the most common orthopaedic procedures, was chosen as a benchmark to evaluate the proposed system. Patient-specific anatomical 3D models were extracted from a patient computed tomography scan to implement the virtual content and to fabricate the physical components of the simulator. Rapid prototyping was used to create synthetic bones. The Vuforia SDK was utilized to register the virtual and physical content. The Unity3D game engine was employed to develop the software, allowing interaction with the virtual content using head movements, gestures, and voice commands. Quantitative tests were performed to estimate the accuracy of the system by evaluating the perceived position of augmented reality targets. Mean and maximum errors matched the requirements of the target application. Qualitative tests were carried out to evaluate the workload and usability of the HoloLens for our orthopaedic simulator, considering visual and audio perception, interaction, and ergonomics issues. The perceived overall workload was low, and the self-assessed performance was considered satisfactory. Visual and audio perception and gesture and voice interactions received positive feedback. Postural discomfort and visual fatigue received a non-negative evaluation for a simulation session of 40 minutes. These results encourage the use of mixed reality to implement a hybrid simulator for orthopaedic open surgery. An optimal design of the simulation tasks and equipment setup is required to minimize user discomfort.
Future work will include face validity, content validity, and construct validity assessments to complete the evaluation of the hip arthroplasty simulator.
Collapse
|
36
|
Abstract
In non-orthoscopic video see-through (VST) head-mounted displays (HMDs), depth perception through stereopsis is adversely affected by sources of spatial perception error. Solutions for parallax-free and orthoscopic VST HMDs have been proposed to ensure proper space perception, but at the expense of increased bulkiness and weight. In this work, we present a hybrid video-optical see-through HMD whose geometry explicitly violates the rigorous conditions of orthostereoscopy. To properly recover natural stereo fusion of the scene within the personal space in a region around a predefined distance from the observer, we partially resolve the eye-camera parallax by warping the camera images through a perspective-preserving homography that accounts for the geometry of the VST HMD and refers to that distance. To validate our solution, we conducted objective and subjective tests. The goal of the tests was to assess the efficacy of our solution in recovering natural depth perception in the space around the reference distance. The results show that the quasi-orthoscopic setting of the HMD, together with the perspective-preserving image warping, allows a correct perception of the relative depths to be recovered. The perceived distortion of space around the reference plane proved not to be as severe as predicted by the mathematical models.
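A perspective-preserving warp of this kind can be illustrated with the standard plane-induced homography: the camera image is mapped toward the eye viewpoint so that parallax is exactly compensated for points lying on the reference plane at the chosen distance. This is a generic textbook construction, shown here as a sketch rather than the authors' exact formulation; all variable names are illustrative.

```python
import numpy as np

def parallax_homography(K_eye, K_cam, R, t, n, d):
    """Plane-induced homography H = K_eye (R + t n^T / d) K_cam^{-1}
    warping camera pixels to the eye viewpoint. The warp is exact for
    scene points on the plane with unit normal n at distance d, which
    plays the role of the predefined reference distance."""
    n = np.asarray(n, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    H = K_eye @ (R + (t @ n.T) / d) @ np.linalg.inv(K_cam)
    return H / H[2, 2]  # normalize the homography
```

For points off the reference plane the warp is only approximate, which is why the residual distortion is confined to a region around the reference distance, as the perceptual tests above indicate.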
Collapse
|