1
An Z, Liu Y, Xue M. Design of AR system tracking registration method using dynamic target light-field. Optics Express 2024; 32:16467-16477. [PMID: 38859272] [DOI: 10.1364/oe.521975]
Abstract
In the process of tracking registration for an augmented reality (AR) system, it is essential first to obtain the system's initial state, as its accuracy significantly influences the precision of subsequent three-dimensional tracking registration. At this stage, minor movements of the target can directly lead to calibration errors. Current methods fail to effectively capture the initial state of dynamic deformation in optical see-through AR systems. To tackle this issue, the concept of a static light-field is expanded to a four-dimensional dynamic light-field, and a tracking registration method for an optical see-through AR system based on the four-dimensional dynamic light-field is introduced. This method begins by analyzing the relationship between the components of the optical see-through AR system and studying the impact of a dynamic target on the initial state model. Leveraging the fundamental principle of light-field correlation, the theory and model for four-dimensional dynamic light-field tracking registration are developed. Extensive experiments confirm the algorithm's accuracy and stability and demonstrate the superior performance of the three-dimensional tracking registration algorithm.
2
Qian L, Song T, Unberath M, Kazanzides P. AR-Loupe: Magnified Augmented Reality by Combining an Optical See-Through Head-Mounted Display and a Loupe. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2550-2562. [PMID: 33170780] [DOI: 10.1109/tvcg.2020.3037284]
Abstract
Head-mounted loupes can increase the user's visual acuity to observe the details of an object. Optical see-through head-mounted displays (OST-HMDs), on the other hand, are able to provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, which combines the advantages of loupes and OST-HMDs to offer augmented reality in the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, the Magic Leap One, and binocular Galilean magnifying loupes, with customized 3D-printed attachments. We model the combination of the user's eye, the screen of the OST-HMD, and the optical loupe as a pinhole camera. The calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. Users were able to achieve sub-millimeter accuracy (0.82 mm) on average, significantly smaller than with normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With the increased size of real objects through optical magnification and the registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
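SPAAM-style display calibration reduces, per eye, to estimating a 3×4 pinhole projection matrix from screen-point/world-point correspondences collected while the user aligns an on-screen crosshair with a tracked target. As a rough illustration of that estimation step only (not the paper's exact Stereo-SPAAM formulation; function names here are hypothetical), a direct linear transform sketch in NumPy might look like:

```python
import numpy as np

def estimate_projection(points_3d, points_2d):
    """Estimate a 3x4 pinhole projection matrix P from n >= 6
    correspondences (X_i -> x_i) via the direct linear transform (DLT),
    as in single-eye SPAAM-style display calibration."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # P is the right singular vector with the smallest singular value
    # of the stacked constraint matrix (defined up to scale).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Project a 3D point to screen coordinates with P."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

With noise-free correspondences the recovered matrix reproduces the screen points exactly (up to the usual scale ambiguity, which the perspective division removes).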
3
Domeneghetti D, Carbone M, Cutolo F, Ferrari V. A Rendering Engine for Integral Imaging in Augmented Reality Guided Surgery. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2022; 2022:2693-2696. [PMID: 36086410] [DOI: 10.1109/embc48229.2022.9871806]
Abstract
In the field of image-guided surgery, Augmented Reality wearable displays are a widely studied and documented technology for their ability to provide egocentric vision together with the overlap between real and virtual content. In particular, optical see-through (OST) displays have the advantage of maintaining visual perception of the real world. However, OST displays suffer from the vergence-accommodation conflict when virtual content is superimposed on the real world. Furthermore, the calibration methods required to achieve geometric consistency between real and virtual content are inherently error-prone. One previously studied solution to these problems is the use of integral imaging displays. In this paper we present an easy and straightforward real-time rendering strategy, implemented in modern OpenGL, to show the 3D image of a virtual object on a wearable OST display using the integral imaging approach. Clinical Relevance: The proposed algorithm opens the way toward more effective AR surgical navigation in terms of comfort of the AR experience and accuracy of the AR guidance.
4
Ferrari V, Cattari N, Fontana U, Cutolo F. Parallax Free Registration for Augmented Reality Optical See-Through Displays in the Peripersonal Space. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1608-1618. [PMID: 32881688] [DOI: 10.1109/tvcg.2020.3021534]
Abstract
Egocentric augmented reality (AR) interfaces are quickly becoming a key asset for assisting high precision activities in the peripersonal space in several application fields. In these applications, accurate and robust registration of computer-generated information to the real scene is hard to achieve with traditional Optical See-Through (OST) displays given that it relies on the accurate calibration of the combined eye-display projection model. The calibration is required to efficiently estimate the projection parameters of the pinhole model that encapsulate the optical features of the display and whose values vary according to the position of the user's eye. In this article, we describe an approach that prevents any parallax-related AR misregistration at a pre-defined working distance in OST displays with infinity focus; our strategy relies on the use of a magnifier placed in front of the OST display, and features a proper parameterization of the virtual rendering camera achieved through a dedicated calibration procedure that accounts for the contribution of the magnifier. We model the registration error due to the viewpoint parallax outside the ideal working distance. Finally, we validate our strategy on an OST display, and we show that sub-millimetric registration accuracy can be achieved for working distances of ±100 mm around the focal length of the magnifier.
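The parallax-related misregistration modeled above can be illustrated with elementary geometry: if virtual content is rendered on a plane at the working distance z0 and the uncalibrated eye shifts laterally by e, the overlay at a real depth z is displaced by roughly e·|z − z0|/z0, vanishing exactly at the working distance. The sketch below is a generic first-order model under those assumptions, not the paper's exact error formula:

```python
def parallax_misregistration(eye_shift_mm, target_depth_mm, focus_depth_mm):
    """First-order parallax error for an OST display whose virtual image
    lies on a plane at focus_depth_mm: a lateral eye shift displaces the
    overlay at the real target depth z by eye_shift * |z - z0| / z0.
    Generic geometric model; all parameter names are illustrative."""
    z, z0 = target_depth_mm, focus_depth_mm
    return abs(eye_shift_mm) * abs(z - z0) / z0
```

For example, a 5 mm eye shift with a 400 mm working distance produces no error at 400 mm and about 1.25 mm of error at 500 mm, consistent with the idea of a parallax-free working distance with degradation outside it.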
5
Doughty M, Ghugre NR. Head-Mounted Display-Based Augmented Reality for Image-Guided Media Delivery to the Heart: A Preliminary Investigation of Perceptual Accuracy. J Imaging 2022; 8:33. [PMID: 35200735] [PMCID: PMC8878166] [DOI: 10.3390/jimaging8020033]
Abstract
By aligning virtual augmentations with real objects, optical see-through head-mounted display (OST-HMD)-based augmented reality (AR) can enhance user-task performance. Our goal was to compare the perceptual accuracy of several visualization paradigms, involving an adjacent monitor or the Microsoft HoloLens 2 OST-HMD, in a targeted task, and to assess the feasibility of displaying imaging-derived virtual models aligned with the injured porcine heart. With 10 participants, we performed a user study to quantify and compare the accuracy, speed, and subjective workload of each paradigm in the completion of a point-and-trace task that simulated surgical targeting. To demonstrate the clinical potential of our system, we assessed its use for the visualization of magnetic resonance imaging (MRI)-based anatomical models, aligned with the surgically exposed heart in a motion-arrested open-chest porcine model. Using the HoloLens 2 with alignment of the ground truth target and our display calibration method, users were able to achieve submillimeter accuracy (0.98 mm) and required 1.42 min for calibration in the point-and-trace task. In the porcine study, we observed good spatial agreement between the MRI-based models and the target surgical site. The use of an OST-HMD led to improved perceptual accuracy and task-completion times in a simulated targeting task.
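The perceptual-accuracy figures quoted above (e.g., 0.98 mm) are mean distances between user-indicated points and ground-truth targets. A minimal sketch of such a metric, with hypothetical function and argument names (the study's exact protocol is not reproduced here):

```python
import numpy as np

def mean_target_error_mm(traced_pts, truth_pts):
    """Mean Euclidean distance (mm) between user-traced points and
    ground-truth targets -- the kind of perceptual-accuracy summary
    reported in targeting studies like the one above."""
    traced = np.asarray(traced_pts, dtype=float)
    truth = np.asarray(truth_pts, dtype=float)
    return float(np.linalg.norm(traced - truth, axis=1).mean())
```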
Affiliation(s)
- Mitchell Doughty
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Nilesh R. Ghugre
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
6
Cutolo F, Cattari N, Fontana U, Ferrari V. Optical See-Through Head-Mounted Displays With Short Focal Distance: Conditions for Mitigating Parallax-Related Registration Error. Front Robot AI 2020; 7:572001. [PMID: 33501331] [PMCID: PMC7806030] [DOI: 10.3389/frobt.2020.572001]
Abstract
Optical see-through (OST) augmented reality head-mounted displays are quickly emerging as a key asset in several application fields, but their ability to profitably assist high-precision activities in the peripersonal space is still sub-optimal due to the calibration procedure required to properly model the user's viewpoint through the see-through display. In this work, we demonstrate the beneficial impact, on the parallax-related AR misregistration, of using optical see-through displays whose optical engines collimate the computer-generated image at a depth close to the fixation point of the user in the peripersonal space. To estimate the projection parameters of the OST display for a generic viewpoint position, our strategy relies on a dedicated parameterization of the virtual rendering camera based on a calibration routine that exploits photogrammetry techniques. We model the registration error due to the viewpoint shift and validate it on an OST display with short focal distance. The results of the tests demonstrate that with our strategy the parallax-related registration error is submillimetric provided that the scene under observation stays within a suitable view volume that falls in a ±10 cm depth range around the focal plane of the display. This finding paves the way toward the development of new multi-focal models of OST HMDs specifically conceived to aid high-precision manual tasks in the peripersonal space.
Affiliation(s)
- Fabrizio Cutolo
- Information Engineering Department, University of Pisa, Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Nadia Cattari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Umberto Fontana
- Information Engineering Department, University of Pisa, Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Vincenzo Ferrari
- Information Engineering Department, University of Pisa, Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
7
Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays. Applied Sciences (Basel) 2019. [DOI: 10.3390/app10010193]
Abstract
In recent years, the entry into the market of self-contained optical see-through headsets with integrated multi-sensor capabilities has led the way to innovative and technology-driven augmented reality applications and has encouraged the adoption of these devices also across highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for those applications that require high accuracy in the spatial alignment between computer-generated elements and a real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes an off-line fast calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera that replaces the user's eye must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current user's eye position. Compared to classical SPAAM techniques that still rely on the human element and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings.
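The run-time refinement described above can be sketched as a plane-induced homography: for a pure translation t of the viewpoint relative to the display's virtual image plane at depth d, H = K(I + t·nᵀ/d)K⁻¹ collapses to exactly a shift-and-scale correction of the off-line projection. A rough NumPy illustration under those assumptions (sign conventions and function names are ours, not the paper's):

```python
import numpy as np

def refinement_homography(K, t, plane_depth):
    """Plane-induced homography for a pure viewpoint translation t
    relative to the display's virtual image plane (assumed normal
    [0, 0, 1] at distance plane_depth). Encodes a shift-and-scale
    correction of projections computed for the old camera position."""
    n = np.array([0.0, 0.0, 1.0])
    H = K @ (np.eye(3) + np.outer(t, n) / plane_depth) @ np.linalg.inv(K)
    return H / H[2, 2]

def apply_homography(H, uv):
    """Warp a pixel (u, v) with homography H."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]
```

With zero translation the correction is the identity; a translation along the optical axis produces a pure scaling about the principal point, matching the "shift and scaling effect" the abstract describes.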
8
de Oliveira ME, Debarba HG, Lädermann A, Chagué S, Charbonnier C. A hand-eye calibration method for augmented reality applied to computer-assisted orthopedic surgery. Int J Med Robot 2018; 15:e1969. [DOI: 10.1002/rcs.1969]
Affiliation(s)
- Alexandre Lädermann
- Division of Orthopedics and Trauma Surgery, La Tour Hospital, Geneva, Switzerland
- Department of Orthopedic Surgery and Traumatology, Geneva University Hospital, Geneva, Switzerland
- Sylvain Chagué
- Medical Research Department, Artanim Foundation, Geneva, Switzerland
- Caecilia Charbonnier
- Medical Research Department, Artanim Foundation, Geneva, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
9
Grubert J, Itoh Y, Moser K, Swan JE. A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2018; 24:2649-2662. [PMID: 28961115] [DOI: 10.1109/tvcg.2017.2754257]
Abstract
Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality and have seen significant growth in popularity and usage among the general public due to the growing release of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-Degree-of-Freedom tracking system. However, in order to properly render virtual objects that are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this needed eye position. However, to date, there has not been a comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques, and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it also identifies opportunities for future research.
10
Grubert J, Langlotz T, Zollmann S, Regenbrecht H. Towards Pervasive Augmented Reality: Context-Awareness in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1706-1724. [PMID: 27008668] [DOI: 10.1109/tvcg.2016.2543720]
Abstract
Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While it has been researched for decades, Augmented Reality has recently moved out of the research labs and into the field. While most current applications are used sporadically and for one particular task only, future scenarios will provide a continuous and multi-purpose user experience. Therefore, in this paper, we present the concept of Pervasive Augmented Reality, which aims to provide such an experience by sensing the user's current context and adapting the AR system based on the changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based on our taxonomy and survey, we identify challenges and future research directions in Pervasive Augmented Reality.
11
Langlotz T, Cook M, Regenbrecht H. Real-Time Radiometric Compensation for Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2016; 22:2385-2394. [PMID: 27479973] [DOI: 10.1109/tvcg.2016.2593781]
Abstract
Optical see-through head-mounted displays are currently seeing a transition out of research labs towards the consumer-oriented market. However, whilst availability has improved and prices have decreased, the technology has not matured much. Most commercially available optical see-through head-mounted displays follow a similar principle and use an optical combiner blending the physical environment with digital information. This approach yields problems, as the colors of the overlaid digital information cannot be correctly reproduced. The perceived pixel colors are always a result of the displayed pixel color and the color of the current physical environment seen through the head-mounted display. In this paper we present an initial approach for mitigating the effect of color blending in optical see-through head-mounted displays by introducing a real-time radiometric compensation. Our approach is based on a novel prototype for an optical see-through head-mounted display that allows the capture of the current environment as seen by the user's eye. We present three different algorithms using this prototype to compensate color blending in real time and with pixel accuracy. We demonstrate the benefits and performance as well as the results of a user study. We see applications in all common Augmented Reality scenarios, but also in other areas such as Diminished Reality or support for color-blind users.
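The color-blending model underlying radiometric compensation is roughly additive: the perceived color is approximately the displayed pixel plus the background light passing through the combiner. The simplest compensation therefore subtracts the captured background from the target color and clips at the display's limits. The sketch below is deliberately naive (the paper's three real-time algorithms are more sophisticated):

```python
import numpy as np

def compensate(target_rgb, background_rgb):
    """Naive per-pixel radiometric compensation for an additive optical
    combiner: perceived ~= displayed + background, so display the target
    minus the captured background. Values in [0, 1]; clipping at 0 marks
    colors that cannot be reproduced, since the display cannot emit
    'negative' light to darken the scene."""
    target = np.asarray(target_rgb, dtype=float)
    background = np.asarray(background_rgb, dtype=float)
    return np.clip(target - background, 0.0, 1.0)
```

Where no clipping occurs, adding the background back to the compensated value recovers the target exactly; where the background is brighter than the target, the residual error is exactly the unreproducible portion.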
12
Itoh Y, Amano T, Iwai D, Klinker G. Gaussian Light Field: Estimation of Viewpoint-Dependent Blur for Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2016; 22:2368-2376. [PMID: 27479971] [DOI: 10.1109/tvcg.2016.2593779]
Abstract
We propose a method to calibrate viewpoint-dependent, channel-wise image blur of near-eye displays, especially of Optical See-Through Head-Mounted Displays (OST-HMDs). Imperfections in HMD optics cause channel-wise image shift and blur that degrade the image quality of the display at a user's viewpoint. If we could estimate such characteristics perfectly, we could mitigate the effect by applying correction techniques from computational photography, analogous to cameras. Unfortunately, directly applying existing camera calibration techniques to OST-HMDs is not straightforward. Unlike ordinary imaging systems, image blur in OST-HMDs is viewpoint-dependent, i.e., the optical characteristic of a display dynamically changes depending on the current viewpoint of the user. This constraint makes the problem challenging, since we must measure the image blur of an HMD, ideally, over the entire 3D eyebox in which a user can see an image. To overcome this problem, we model the viewpoint-dependent blur as a Gaussian Light Field (GLF) that stores spatial information of the display screen as a (4D) light field with depth information, and the blur as point-spread functions in the form of Gaussian kernels. We first describe both our GLF model and a calibration procedure to learn a GLF for a given OST-HMD. We then apply our calibration method to two HMDs that use different optics: a cubic prism or holographic gratings. The results show that our method achieves significantly better accuracy in Point-Spread Function (PSF) estimation, with gains of about 2 to 7 dB in peak SNR.
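The blur stored in the Gaussian Light Field is a set of per-channel Gaussian point-spread functions. As a toy illustration of what such a PSF does to one image row (this is not the paper's calibration procedure or 4D light-field machinery; names and defaults are ours):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian point-spread function sampled on integer taps,
    normalized to unit sum."""
    if radius is None:
        radius = max(1, int(3 * sigma))  # cover ~3 standard deviations
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_channel(row, sigma):
    """Apply the separable Gaussian PSF along one image row, a stand-in
    for the per-channel, viewpoint-dependent blur a GLF-style model
    would predict for a given eye position."""
    return np.convolve(row, gaussian_kernel(sigma), mode="same")
```

An impulse blurred this way stays centered but spreads, which is exactly the channel-wise degradation the calibration is trying to characterize.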
13
Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers. Electronics 2016. [DOI: 10.3390/electronics5030059]
15
Itoh Y, Klinker G. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2015; 21:471-480. [PMID: 26357097] [DOI: 10.1109/tvcg.2015.2391859]
Abstract
A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMDs) is to project 3D information correctly into the current viewpoint of the user, more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, such methods are still prone to systematic calibration errors. These errors stem from eye- and HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors: the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen, and each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.
Affiliation(s)
- Yuta Itoh
- Department of Informatics, Technical University of Munich
- Gudrun Klinker
- Department of Informatics, Technical University of Munich