1. Hiroi Y, Hiraki T, Itoh Y. StainedSweeper: Compact, Variable-Intensity Light-Attenuation Display with Sweeping Tunable Retarders. IEEE Transactions on Visualization and Computer Graphics 2024;30:2682-2692. [PMID: 38437084] [DOI: 10.1109/tvcg.2024.3372058]
Abstract
Light Attenuation Displays (LADs) are a type of Optical See-Through Head-Mounted Display (OST-HMD) that present images by attenuating incoming light with a pixel-wise polarizing color filter. Although LADs can display images in bright environments, there is a trade-off between the number of Spatial Light Modulators (SLMs) and the color gamut and contrast that can be expressed, making it difficult to achieve both high-fidelity image display and a small form factor. To address this problem, we propose StainedSweeper, a LAD that achieves both a wide color gamut and variable intensity with a single SLM. Our system synchronously controls a pixel-wise Digital Micromirror Device (DMD) and a non-pixelated polarizing color filter so that light passes only while each pixel shows its desired color. By sweeping this control at high speed, the human eye perceives the time-multiplexed images in an integrated manner. To achieve this, we develop an OST-HMD design using a reflective Solc filter as the polarizing color filter and a color reproduction algorithm based on optimizing the time-multiplexing matrix for the selected primary color filters. Our proof-of-concept prototype showed that our single-SLM design can produce subtractive images with variable contrast and a wider color gamut than conventional LADs.
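The color reproduction step mentioned in this abstract can be pictured as a small per-pixel optimization: given the (linearized) RGB transmittances of the selectable primary filter states, find non-negative time shares whose time-averaged mixture best matches the target color within one frame. The sketch below illustrates that idea with a non-negative least-squares solve; the primary transmittances and frame budget are made-up values, not the paper's calibration data.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical linear-RGB transmittances of the selectable primary filter
# states (columns): each column is the color seen when that filter state
# is held for the whole frame time.
P = np.array([
    [0.8, 0.1, 0.1, 0.3],   # R transmittance of each primary state
    [0.1, 0.7, 0.1, 0.3],   # G
    [0.1, 0.1, 0.9, 0.3],   # B
])

def time_weights(target_rgb, budget=1.0):
    """Non-negative time shares per primary whose average best matches target_rgb."""
    w, _ = nnls(P, np.asarray(target_rgb, dtype=float))
    total = w.sum()
    if total > budget:          # keep the schedule within one frame period
        w *= budget / total
    return w                    # fraction of the frame spent in each primary state

# Example: a desaturated orange target
print(time_weights([0.6, 0.4, 0.2]))
```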

2. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics 2023;29:4782-4793. [PMID: 37782599] [DOI: 10.1109/tvcg.2023.3320231]
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.

3. Zhang Y, Hu X, Kiyokawa K, Yang X. Add-on Occlusion: Turning Off-the-Shelf Optical See-through Head-mounted Displays Occlusion-capable. IEEE Transactions on Visualization and Computer Graphics 2023;29:2700-2709. [PMID: 37027617] [DOI: 10.1109/tvcg.2023.3247064]
Abstract
Occlusion-capable optical see-through head-mounted displays (OC-OSTHMDs) have been actively developed in recent years because they allow mutual occlusion between virtual objects and the physical world to be presented correctly in augmented reality (AR). However, requiring this special type of OSTHMD has kept the appealing feature from wide application. In this paper, a novel approach for realizing mutual occlusion on common OSTHMDs is proposed. A wearable device with per-pixel occlusion capability is designed, and off-the-shelf OSTHMDs are upgraded to be occlusion-capable by attaching the device in front of their optical combiners. A prototype built with HoloLens 1 demonstrates a virtual display with mutual occlusion in real time. A color correction algorithm is proposed to mitigate the color aberration caused by the occlusion device. Potential applications, including texture replacement of real objects and more realistic display of semi-transparent objects, are demonstrated. The proposed system is expected to enable a universal implementation of mutual occlusion in AR.
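The color correction algorithm is not detailed in the abstract; a common generic way to compensate a roughly linear color shift introduced by an added optical element is to fit a 3x3 matrix from measured color pairs and pre-apply its inverse when rendering. The sketch below shows that generic approach with hypothetical measurements; it is an illustration, not the paper's method.

```python
import numpy as np

# Hypothetical measured pairs: colors sent to the display vs. colors
# actually observed through the attached occlusion device, in linear RGB.
sent = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [0, 1, 1], [1, 1, 1]], dtype=float)
seen = np.array([[0.85, 0.05, 0.02], [0.04, 0.80, 0.06], [0.03, 0.05, 0.75],
                 [0.88, 0.83, 0.07], [0.06, 0.84, 0.80], [0.90, 0.88, 0.82]])

# Fit M so that seen ~= sent @ M, then pre-apply its inverse when rendering.
M, *_ = np.linalg.lstsq(sent, seen, rcond=None)
M_inv = np.linalg.inv(M)

def precompensate(rgb):
    """Color to send so that the observed color approximates the intended rgb."""
    return np.clip(np.asarray(rgb, dtype=float) @ M_inv, 0.0, 1.0)

print(precompensate([0.5, 0.5, 0.5]))
```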

4. Macedo MCF, Apolinario AL. Occlusion Handling in Augmented Reality: Past, Present and Future. IEEE Transactions on Visualization and Computer Graphics 2023;29:1590-1609. [PMID: 34613916] [DOI: 10.1109/tvcg.2021.3117866]
Abstract
One of the main goals of many augmented reality applications is to provide a seamless integration of a real scene with additional virtual data. To fully achieve that goal, such applications must typically provide high-quality real-world tracking, support real-time performance, and handle the mutual occlusion problem, estimating the position of the virtual data in the real scene and rendering the virtual content accordingly. In this survey, we focus on the occlusion handling problem in augmented reality applications and provide a detailed review of 161 articles published in this field between January 1992 and August 2020. To do so, we present a historical overview of the most common strategies employed to determine the depth order between real and virtual objects, to visualize hidden objects in a real scene, and to build occlusion-capable visual displays. Moreover, we look at the state-of-the-art techniques, highlight the recent research trends, discuss the current open problems of occlusion handling in augmented reality, and suggest future directions for research.

5. Jiang J, Zhang J, Sun J, Wu D, Xu S. User's image perception improved strategy and application of augmented reality systems in smart medical care: A review. Int J Med Robot 2023;19:e2497. [PMID: 36629798] [DOI: 10.1002/rcs.2497]
Abstract
BACKGROUND Augmented reality (AR) is a human-computer interaction technology that combines virtual reality, computer vision, and computer networking. As the medical field moves rapidly toward intelligence and data visualisation, AR systems are becoming increasingly popular because they can provide doctors with sufficiently clear medical images and accurate image navigation in practical applications. However, different display types of AR systems affect how doctors perceive the image after virtual-real fusion in actual medical use. If doctors cannot perceive the image correctly, they may be unable to match the virtual information to the real world, which significantly impairs their ability to recognise complex structures. METHODS This paper uses CiteSpace, a literature analysis tool, to visualise and analyse research hotspots in medical applications of AR systems. RESULTS A visual analysis of 1163 articles retrieved from the Web of Science Core Collection database shows that display technology and visualisation technology are currently the key research directions for AR systems. CONCLUSION This paper categorises AR systems by their display principles, reviews current image perception optimisation schemes for each type of system, and compares the different display types in terms of their practical applications in smart medical care, so that doctors can select an appropriate display type for a given scenario. Finally, future directions for AR display technology are outlined; the advantages and disadvantages of each display type should be weighed for each application scenario to select the most suitable AR system.
Affiliation(s)
- Jingang Jiang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China; Robotics & Its Engineering Research Center, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jiawei Zhang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jianpeng Sun: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Dianhao Wu: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Shuainan Xu: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China

6. Wilson A, Hua H. Design of a Pupil-Matched Occlusion-Capable Optical See-Through Wearable Display. IEEE Transactions on Visualization and Computer Graphics 2022;28:4113-4126. [PMID: 33905332] [DOI: 10.1109/tvcg.2021.3076069]
Abstract
State-of-the-art optical see-through head-mounted displays (OST-HMDs) for augmented reality applications lack the ability to correctly render light-blocking behavior between digital and physical objects, known as mutual occlusion capability. In this article, we present a novel optical architecture for enabling a high-performance, occlusion-capable optical see-through head-mounted display (OCOST-HMD). The design utilizes a single-layer, double-pass architecture, creating a compact OCOST-HMD that is capable of rendering per-pixel mutual occlusion, a correctly pupil-matched viewing perspective between virtual and real scenes, and a wide see-through field of view (FOV). Based on this architecture, we present a design embodiment and a compact prototype implementation. The prototype demonstrates a virtual display with an FOV of 34° by 22°, an angular resolution of 1.06 arc minutes per pixel, and an average image contrast greater than 40 percent at the Nyquist frequency of 53 cycles/mm. Furthermore, the device achieves a see-through FOV of 90° by 50°, within which about 40° diagonally is occlusion-enabled, with an angular resolution of 1.0 arc minutes (comparable to 20/20 vision) and a dynamic range greater than 100:1. We conclude the paper with a quantitative comparison of key optical performance metrics, such as the modulation transfer function, image contrast, and color rendering accuracy, of our OCOST-HMD system with and without occlusion enabled for various lighting environments.

7. Zhang Y, Wang R, Peng Y, Hua W, Bao H. Color Contrast Enhanced Rendering for Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2022;28:4490-4502. [PMID: 34161241] [DOI: 10.1109/tvcg.2021.3091686]
Abstract
Most commercially available optical see-through head-mounted displays (OST-HMDs) utilize optical combiners to simultaneously visualize the physical background and virtual objects. The displayed images perceived by users are a blend of rendered pixels and background colors. Enabling high-fidelity color perception in mixed reality (MR) scenarios using OST-HMDs is an important but challenging task. We propose a real-time rendering scheme to enhance the color contrast between virtual objects and the surrounding background for OST-HMDs. Inspired by findings on color perception in psychophysics, we first formulate the color contrast enhancement as a constrained optimization problem. We then design an end-to-end algorithm to search for the optimal complementary shift in both chromaticity and luminance of the displayed color. This aims at enhancing the contrast between virtual objects and the real background while keeping consistency with the original displayed color. We assess the performance of our approach using a simulated OST-HMD environment and an off-the-shelf OST-HMD. Experimental results from objective evaluations and subjective user studies demonstrate that the proposed approach makes rendered virtual objects more distinguishable from the surrounding background, thereby bringing a better visual experience.
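As a rough illustration of the constrained optimization described here, the toy sketch below shifts a displayed CIELAB color away from the local background color while penalizing deviation from the original color and bounding the total shift; the weights, bound, and objective form are assumptions for illustration and differ from the paper's end-to-end algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def enhance(display_lab, background_lab, w_contrast=1.0, w_fidelity=0.4, max_shift=15.0):
    """Shift a displayed color (CIELAB) away from the local background while
    staying close to the original color; a toy stand-in for the paper's objective."""
    d0 = np.asarray(display_lab, dtype=float)
    bg = np.asarray(background_lab, dtype=float)

    def cost(d):
        contrast = np.sum((d - bg) ** 2)   # want this large
        fidelity = np.sum((d - d0) ** 2)   # want this small
        return -w_contrast * contrast + w_fidelity * fidelity

    # Keep the complementary shift inside a perceptually small radius.
    cons = {"type": "ineq", "fun": lambda d: max_shift**2 - np.sum((d - d0) ** 2)}
    res = minimize(cost, d0, constraints=cons)
    return res.x

# Example: mid-gray virtual pixel over a greenish background
print(enhance([60.0, 0.0, 0.0], [55.0, -30.0, 20.0]))
```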

8. Qian L, Song T, Unberath M, Kazanzides P. AR-Loupe: Magnified Augmented Reality by Combining an Optical See-Through Head-Mounted Display and a Loupe. IEEE Transactions on Visualization and Computer Graphics 2022;28:2550-2562. [PMID: 33170780] [DOI: 10.1109/tvcg.2020.3037284]
Abstract
Head-mounted loupes can increase the user's visual acuity to observe the details of an object. On the other hand, optical see-through head-mounted displays (OST-HMDs) are able to provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, which combines the advantages of loupes and OST-HMDs to offer augmented reality in the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, the Magic Leap One, and binocular Galilean magnifying loupes, with customized 3D-printed attachments. We model the combination of the user's eye, the screen of the OST-HMD, and the optical loupe as a pinhole camera. The calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. Users were able to achieve sub-millimeter accuracy (0.82 mm) on average, which is significantly smaller than with normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With the increased size of real objects through optical magnification and the registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
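The pinhole-camera calibration referred to here can be illustrated with the standard direct linear transform behind SPAAM-style methods: each interactive alignment of a screen point with a world point contributes two linear equations in the entries of a 3x4 projection matrix, which is then recovered by SVD. The sketch below shows that generic step with made-up correspondences; Stereo-SPAAM as used in the paper extends this to both eyes and the loupe optics.

```python
import numpy as np

def spaam_projection(points_3d, points_2d):
    """Estimate a 3x4 projection matrix from screen/world alignments (DLT)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    P = vt[-1].reshape(3, 4)          # smallest-singular-vector (least-squares) solution
    return P / np.linalg.norm(P)      # fix the arbitrary scale

def project(P, point_3d):
    x = P @ np.append(np.asarray(point_3d, dtype=float), 1.0)
    return x[:2] / x[2]

# Hypothetical alignment data: world points (meters) and aligned screen pixels.
world = [(0.00, 0.00, 1.0), (0.10, 0.00, 1.2), (-0.10, 0.05, 0.9),
         (0.05, -0.10, 1.1), (-0.05, 0.10, 1.3), (0.08, 0.08, 0.8)]
pixels = [(640, 360), (720, 355), (560, 320), (680, 430), (600, 290), (705, 300)]
P = spaam_projection(world, pixels)
print(project(P, (0.0, 0.0, 1.0)))
```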

9. Yoo D, Lee S, Jo Y, Cho J, Choi S, Lee B. Volumetric Head-Mounted Display With Locally Adaptive Focal Blocks. IEEE Transactions on Visualization and Computer Graphics 2022;28:1415-1427. [PMID: 32746283] [DOI: 10.1109/tvcg.2020.3011468]
Abstract
A commercial head-mounted display (HMD) for virtual reality (VR) presents three-dimensional imagery at a fixed focal distance, which can cause visual discomfort to the observer. In this article, we propose a novel design for a compact VR HMD supporting near-correct focus cues over a wide depth of field (from 18 cm to optical infinity). The proposed HMD consists of a low-resolution binary backlight, a liquid crystal display panel, and focus-tunable lenses. In the proposed system, the backlight locally illuminates the display panel, which is floated by the focus-tunable lens at a specific distance. The illumination timing and the focus-tunable lens's focal power are synchronized to generate focal blocks at the desired distances. The distance of each focal block is determined by the depth information of the three-dimensional imagery to provide near-correct focus cues. We evaluate the focus-cue fidelity of the proposed system considering the fill factor and resolution of the backlight. Finally, we verify the display performance with experimental results.
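The synchronization described here boils down to simple bookkeeping: each backlight block takes the depth of the content in front of it, that depth is converted to the tunable-lens power that floats the panel image there (thin-lens relation), and the block is flashed in the sub-frame whose swept power is closest. A minimal sketch of that scheduling, with made-up geometry and a hypothetical linear power sweep, follows.

```python
import numpy as np

D_PANEL = 0.05                        # panel-to-lens distance in meters (hypothetical)
SWEEP = np.linspace(25.0, 5.0, 12)    # lens power (diopters) in each of 12 sub-frames

def power_for_depth(depth_m):
    """Thin-lens power that floats the panel image to the requested depth."""
    return 1.0 / D_PANEL - 1.0 / depth_m

def schedule(block_depths):
    """Map each backlight block to the sub-frame whose lens power is closest."""
    slots = {}
    for block, depth in block_depths.items():
        p = power_for_depth(depth)
        slots[block] = int(np.argmin(np.abs(SWEEP - p)))
    return slots

# Hypothetical 2x2 block depth map: near object top-left, far scene elsewhere.
print(schedule({(0, 0): 0.18, (0, 1): 1.0, (1, 0): 2.5, (1, 1): 100.0}))
```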

10. Hiroi Y, Kaminokado T, Ono S, Itoh Y. Focal surface occlusion. Optics Express 2021;29:36581-36597. [PMID: 34809066] [DOI: 10.1364/oe.440024]
Abstract
This paper proposes focal surface occlusion to provide focal cues of occlusion masks for multiple virtual objects at continuous depths in an occlusion-capable optical see-through head-mounted display. A phase-only spatial light modulator (PSLM) that acts as a dynamic free-form lens is used to conform the focal surface of an occlusion mask to the geometry of the virtual scene. To reproduce multiple and continuous focal blurs while reducing the distortion of the see-through view, an optical design based on afocal optics and an edge-based optimization that exploits a property of the occlusion mask are established. The prototype with the PSLM and a transmissive liquid crystal display can reproduce the focus blur of occluded objects at multiple and continuous depths with a field of view of 14.6°.

11. Chae M, Bang K, Jo Y, Yoo C, Lee B. Occlusion-capable see-through display without the screen-door effect using a photochromic mask. Optics Letters 2021;46:4554-4557. [PMID: 34525045] [DOI: 10.1364/ol.430478]
Abstract
Conventional occlusion-capable see-through display systems have many practical limitations such as the form factor, narrow field of view, screen-door effect, and diffraction of a real scene. In this Letter, we propose an occlusion-capable see-through display using lens arrays and a photochromic plate. By imaging the occlusion mask on the photochromic plate with near-UV light, the visible light transmittance of the plate changes. Since no black matrix lies on the photochromic plate, our system provides a clear real scene view without the grid structure of the pixels and can prevent diffraction defects of the real scene. We also alleviate the drawback of a narrow field of view using the lens arrays for a reduced form factor.

12. Zhang Y, Hu X, Kiyokawa K, Isoyama N, Sakata N, Hua H. Optical see-through augmented reality displays with wide field of view and hard-edge occlusion by using paired conical reflectors. Optics Letters 2021;46:4208-4211. [PMID: 34469976] [DOI: 10.1364/ol.428714]
Abstract
Optical see-through head-mounted displays have been actively developed in recent years. An appropriate method for mutual occlusion is essential to provide a decent user experience in many application scenarios of augmented reality. However, existing mutual occlusion methods fail to work well with a large field of view (FOV). In this Letter, we propose a double-parabolic-mirror structure that renders hard-edge occlusion within a wide FOV. The parabolic mirror increases the numerical aperture of the system significantly, and the use of paired parabolic mirrors eliminates most optical aberrations. A liquid-crystal-on-silicon device is introduced as the spatial light modulator for imaging a bright see-through view and rendering sharp occlusion patterns. A loop structure is built to eliminate vertical parallax. The system is designed to achieve a maximum monocular FOV of 114° (H) × 95° (V) with hard-edge occlusion, and an FOV of 83.5° (H) × 53.1° (V) is demonstrated with our benchtop prototype.

13. Ueda T, Iwai D, Sato K. IlluminatedZoom: spatially varying magnified vision using periodically zooming eyeglasses and a high-speed projector. Optics Express 2021;29:16377-16395. [PMID: 34154202] [DOI: 10.1364/oe.427616]
Abstract
Spatial zooming and magnification, which control the size of only a portion of a scene while maintaining its context, is an essential interaction technique in augmented reality (AR) systems. It has been applied in various AR applications, including surgical navigation, visual search support, and human behavior control. However, spatial zooming has been implemented only on video see-through displays and has not been supported by optical see-through displays, as it is not trivial to achieve spatial zooming of an observed real scene using near-eye optics. This paper presents the first optical see-through spatial zooming glasses, which enable interactive control of the perceived sizes of real-world appearances in a spatially varying manner. The key to our technique is the combination of periodically fast-zooming eyeglasses and a synchronized high-speed projector. We stack two electrically focus-tunable lenses (ETLs) for each eyeglass and sweep their focal lengths to modulate the magnification periodically from one (unmagnified) to a higher value (magnified) at 60 Hz, in a manner that prevents the user from perceiving the modulation. We use a 1,000 fps high-speed projector to provide high-resolution spatial illumination of the real scene around the user. The portion of the scene that is to appear magnified is illuminated by the projector when the magnification is greater than one, while the remainder is illuminated when the magnification is equal to one. Through experiments, we demonstrate spatial zooming results of up to 30% magnification using a prototype system. Our technique has the potential to expand the application field of spatial zooming interaction in optical see-through AR.
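The projector/ETL synchronization can be sketched as a per-frame decision: the ETL sweep yields a periodic magnification profile, and each high-speed projector frame illuminates either the region to be magnified (while the instantaneous magnification exceeds one) or the rest of the scene (while it is one). The toy schedule below assumes a sinusoidal 60 Hz magnification profile between 1.0 and 1.3; the actual sweep waveform and thresholds in the paper differ.

```python
import numpy as np

PROJ_FPS = 1000            # high-speed projector frame rate
SWEEP_HZ = 60              # zoom modulation frequency
M_MIN, M_MAX = 1.0, 1.3    # hypothetical magnification range (up to 30 %)

def magnification(t):
    """Toy sinusoidal magnification profile of the stacked ETLs."""
    return M_MIN + (M_MAX - M_MIN) * 0.5 * (1 + np.sin(2 * np.pi * SWEEP_HZ * t))

def frame_schedule(n_frames=17):
    """For each projector frame, choose which illumination mask to show."""
    plan = []
    for i in range(n_frames):
        m = magnification(i / PROJ_FPS)
        plan.append("magnified-region mask" if m > 1.05 else "background mask")
    return plan

for i, mask in enumerate(frame_schedule()):
    print(f"frame {i:2d}: {mask}")
```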

14. Itoh Y, Kaminokado T, Aksit K. Beaming Displays. IEEE Transactions on Visualization and Computer Graphics 2021;27:2659-2668. [PMID: 33750701] [DOI: 10.1109/tvcg.2021.3067764]
Abstract
Existing near-eye display designs struggle to balance multiple trade-offs such as form factor, weight, computational requirements, and battery life. These design trade-offs are major obstacles on the path towards an all-day usable near-eye display. In this work, we address these trade-offs by, paradoxically, removing the display from near-eye displays. We present beaming displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses and install it in the environment to beam images from a distance to the passive wearable headset. The beaming projection system tracks the current position of the wearable headset to project distortion-free images with the correct perspective. In our system, the wearable headset guides the beamed images to the user's retina, where they are perceived as an augmented scene within the user's field of view. In addition to providing the system design of the beaming display, we provide a physical prototype and show that the beaming display can provide resolutions as high as those of consumer-level near-eye displays. We also discuss different aspects of the design space for our proposal.

15. Kaminokado T, Hiroi Y, Itoh Y. StainedView: Variable-Intensity Light-Attenuation Display with Cascaded Spatial Color Filtering for Improved Color Fidelity. IEEE Transactions on Visualization and Computer Graphics 2020;26:3576-3586. [PMID: 32941143] [DOI: 10.1109/tvcg.2020.3023569]
Abstract
We present StainedView, an optical see-through display that spatially filters the spectral distribution of light to form an image with improved color fidelity. Existing light-attenuation displays have limited color fidelity and contrast, resulting in a degraded appearance of virtual images. To use these displays to present virtual images that are more consistent with the real world, we require three things: intensity modulation of incoming light, spatial color filtering with narrower bandwidth, and appropriate light modulation for incoming light with an arbitrary spectral distribution. In StainedView, we address these three requirements by cascading two phase-only spatial light modulators (PSLMs), a digital micromirror device, and polarization optics to control both the intensity and the spectral distribution of light. We show that our design has a 1.8-times wider color gamut (75.8% coverage of the sRGB color space) than the existing single-PSLM approach (41.4%) under a reference white light. We demonstrated the design with a proof-of-concept display system. We further introduce our optics design and pixel-selection algorithm for the given light input, evaluate the spatial color filter, and discuss the limitations of the current prototype.

16. Suzuki K, Fukano Y, Oku H. 1000-volume/s high-speed volumetric display for high-speed HMD. Optics Express 2020;28:29455-29468. [PMID: 33114845] [DOI: 10.1364/oe.401778]
Abstract
In this paper, we propose a high-speed volumetric display principle that can solve two problems faced by three-dimensional displays based on the parallax stereo principle, namely the vergence-accommodation conflict and display latency, and we report evaluation results. The proposed display method can update a set of images at different depths at 1000 Hz and is consistent with accommodation. The method selects the depth position in microseconds by combining a high-speed variable-focus lens that vibrates at about 69 kHz with sub-microsecond control of the illumination light using an LED. By turning on the LED for only a few hundred nanoseconds when the refractive power of the lens is at a certain value, an image can be presented at that specific refractive power. The optical system is combined with a DMD to form an image at each depth. 3D information consisting of multiple planes in the depth direction can be presented at a high refresh rate by switching the images and changing the refractive power at high speed. A proof-of-concept system was developed to show the validity of the proposed display principle. The system successfully displayed 3D information consisting of six binary images at an update rate of 1,000 volumes/s.
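The depth selection described here amounts to solving for the phase at which the sinusoidally oscillating lens power reaches the desired value and strobing the LED for a very short window around that instant. The toy calculation below sketches that timing; the oscillation offset, amplitude, and pulse width are hypothetical values.

```python
import numpy as np

F_LENS = 69_000.0           # lens resonance frequency in Hz
P_MEAN, P_AMP = 12.0, 8.0   # hypothetical mean and amplitude of lens power (diopters)
PULSE = 300e-9              # hypothetical LED pulse width in seconds

def strobe_time(target_power):
    """Time within one oscillation period at which the lens power hits the target."""
    x = (target_power - P_MEAN) / P_AMP
    if abs(x) > 1.0:
        raise ValueError("target power outside the sweep range")
    phase = np.arcsin(x) % (2 * np.pi)    # P(t) = P_MEAN + P_AMP * sin(2*pi*F_LENS*t)
    return phase / (2 * np.pi * F_LENS)   # seconds after the zero-phase crossing

for p in (6.0, 12.0, 18.0):
    t = strobe_time(p)
    print(f"power {p:4.1f} D -> fire LED at t = {t*1e9:8.1f} ns for {PULSE*1e9:.0f} ns")
```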

17. Ju YG, Choi MH, Liu P, Hellman B, Lee TL, Takashima Y, Park JH. Occlusion-capable optical-see-through near-eye display using a single digital micromirror device. Optics Letters 2020;45:3361-3364. [PMID: 32630845] [DOI: 10.1364/ol.393194]
Abstract
Occlusion of a real scene by displayed virtual images mitigates incorrect depth cues and enhances image visibility in augmented reality applications. In this Letter, we propose a novel optical scheme for an occlusion-capable optical see-through near-eye display. The proposed scheme uses only a single spatial light modulator, which serves simultaneously as the real-scene mask and the virtual image display. A polarization-based double-pass configuration is also employed, enabling a compact implementation. The proposed scheme is verified by optical experiments that demonstrate a 60 Hz red-green-blue video display with 4-bit depth per color channel and per-pixel dynamic occlusion with a maximum occlusion ratio of 90.6%.

18. Krajancich B, Padmanaban N, Wetzstein G. Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display. IEEE Transactions on Visualization and Computer Graphics 2020;26:1871-1879. [PMID: 32070978] [DOI: 10.1109/tvcg.2020.2973443]
Abstract
Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners - an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.
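A very simplified picture of the factorization is: approximate the target image by a sum, over time-multiplexed sub-frames, of (binary DMD mask) × (LED intensity), alternating between a least-squares update of the LED values and a per-pixel re-thresholding of the masks. The single-channel toy below illustrates that loop on random data; the paper's formulation additionally models the multiplicative see-through light path, full color, and gaze-contingent weighting.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 32, 32, 8                      # image size and number of sub-frames
target = rng.random((H, W))              # toy target (single channel)

masks = rng.random((K, H, W)) > 0.5      # binary DMD patterns, one per sub-frame
leds = rng.random(K)                     # LED intensity per sub-frame

for _ in range(50):
    # LED update: least squares given the current binary masks.
    M = masks.reshape(K, -1).astype(float).T               # (H*W, K)
    leds, *_ = np.linalg.lstsq(M, target.ravel(), rcond=None)
    leds = np.clip(leds, 0.0, None)
    # Mask update: flip each sub-frame's pixels wherever that reduces the residual.
    for k in range(K):
        others = np.tensordot(leds, masks.astype(float), axes=1) - leds[k] * masks[k]
        masks[k] = (target - others) > leds[k] / 2.0        # per-pixel threshold test

approx = np.tensordot(leds, masks.astype(float), axes=1)
print("RMS error:", np.sqrt(np.mean((approx - target) ** 2)))
```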

19. Rathinavel K, Wetzstein G, Fuchs H. Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics. IEEE Transactions on Visualization and Computer Graphics 2019;25:3125-3134. [PMID: 31502977] [DOI: 10.1109/tvcg.2019.2933120]
Abstract
Optical see-through augmented reality (AR) systems are a next-generation computing platform that offer unprecedented user experiences by seamlessly combining physical and digital content. Many of the traditional challenges of these displays have been significantly improved over the last few years, but AR experiences offered by today's systems are far from seamless and perceptually realistic. Mutually consistent occlusions between physical and digital objects are typically not supported. When mutual occlusion is supported, it is only supported for a fixed depth. We propose a new optical see-through AR display system that renders mutual occlusion in a depth-dependent, perceptually realistic manner. To this end, we introduce varifocal occlusion displays based on focus-tunable optics, which comprise a varifocal lens system and spatial light modulators that enable depth-corrected hard-edge occlusions for AR experiences. We derive formal optimization methods and closed-form solutions for driving this tunable lens system and demonstrate a monocular varifocal occlusion-capable optical see-through AR display capable of perceptually realistic occlusion across a large depth range.