1. The Statistics of Eye Movements and Binocular Disparities during VR Gaming: Implications for Headset Design. ACM Transactions on Graphics 2023;42:7. [PMID: 37122317; PMCID: PMC10139447; DOI: 10.1145/3549529]
Abstract
The human visual system evolved in environments with statistical regularities. Binocular vision is adapted to these regularities, such that depth perception and eye movements are more precise, faster, and performed more comfortably in environments that are consistent with them. We measured the statistics of eye movements and binocular disparities in virtual-reality (VR) gaming environments and found that they are quite different from those in the natural environment. Fixation distance and direction are more restricted in VR, and fixation distance is farther. The pattern of disparity across the visual field is less regular in VR and does not conform to a prominent property of naturally occurring disparities. From this we predict that double vision is more likely in VR than in the natural environment. We also determined the optimal screen distance to minimize discomfort due to the vergence-accommodation conflict, and the optimal nasal-temporal positioning of head-mounted display (HMD) screens to maximize the binocular field of view. Finally, in a user study we investigated how VR content affects comfort and performance. Content that is more consistent with the statistics of the natural world yields less discomfort than content that is not, and it also yields slightly better performance.
2. Super multi-view near-eye virtual reality with directional backlights from waveguides. Optics Express 2023;31:1721-1736. [PMID: 36785201; DOI: 10.1364/oe.478267]
Abstract
Directional backlights have often been employed to generate multiple view zones in three-dimensional (3D) displays, with each backlight converging onto a corresponding view zone. By designing the view-zone interval at each pupil to be smaller than the pupil's diameter, super multi-view (SMV) display can be implemented free of the vergence-accommodation conflict (VAC). However, expanding the backlight from a light source to cover the corresponding display panel often requires extra thickness, which results in a bulkier structure that is undesirable for a near-eye display. In this paper, two waveguides are introduced into a near-eye virtual reality (NEVR) system to sequentially guide more than one directional backlight to each display panel, enabling SMV display without adding obvious extra thickness. A prototype SMV NEVR system is demonstrated, with two backlights from each waveguide converging onto two view zones for the corresponding pupil. Although the additional light sources are positioned far from the corresponding waveguide in our proof-of-concept prototype, multiple light sources could be attached compactly to the waveguide if necessary. As a demonstration, a 3D scene with defocus-blur effects is displayed. The design range of the backlights' total-internal-reflection angles in the waveguide is also discussed.
3. Optical modelling of an accommodative light field display system and prediction of human eye responses. Optics Express 2022;30:37193-37212. [PMID: 36258312; DOI: 10.1364/oe.458651]
Abstract
The spatio-angular resolution of a light field (LF) display is a crucial factor for delivering adequate spatial image quality and eliciting an accommodation response. Previous studies have modelled retinal image formation with an LF display and evaluated whether accommodation would be evoked correctly. The models were mostly based on ray tracing and a schematic eye model, which pose computational complexity and inaccurately represent the behaviour of the human eye population. We propose an efficient wave-optics-based framework to model the human eye and a general LF display. With the model, we simulated the retinal point spread function (PSF) of a point rendered by an LF display at various depths to characterise the retinal image quality. Additionally, accommodation responses to the rendered point were estimated by computing the visual Strehl ratio based on the optical transfer function (VSOTF) from the PSFs. We assumed an ideal LF display that had infinite spatial resolution and was free from optical aberrations in the simulation. We tested points rendered at depths of 0-4 dioptres with angular resolutions of up to 4 × 4 viewpoints within a pupil. The simulation predicted small and constant accommodation errors, which contradict the findings of previous studies. An evaluation of the optical resolution on the retina suggested a trade-off between the maximum achievable resolution and the depth range of a rendered point over which in-focus resolution is kept high. The proposed framework can be used to evaluate the upper bound of the optical performance of an LF display for realistically aberrated eyes, which may help to find the optimal spatio-angular resolution required to render a high-quality 3D scene.
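The VSOTF metric used above can be illustrated with a short numpy sketch (the function names and the Gaussian test PSFs are illustrative stand-ins, not the authors' code): the OTF is the Fourier transform of the PSF, and the visual Strehl ratio weights the real part of the OTF by a neural contrast-sensitivity surface before normalizing against the diffraction-limited case.

```python
import numpy as np

def otf_from_psf(psf):
    """OTF = Fourier transform of the PSF, normalized to 1 at DC."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    return otf / otf[psf.shape[0] // 2, psf.shape[1] // 2]

def vsotf(psf, psf_diff_limited, csf):
    """Visual Strehl ratio based on the OTF: the CSF-weighted integral of
    the real part of the OTF, relative to the diffraction-limited eye."""
    num = np.sum(csf * np.real(otf_from_psf(psf)))
    den = np.sum(csf * np.real(otf_from_psf(psf_diff_limited)))
    return num / den

# Illustrative PSFs on a 64x64 grid: a sharp and a defocused Gaussian.
ax = np.arange(64) - 32
xx, yy = np.meshgrid(ax, ax)
psf_sharp = np.exp(-(xx**2 + yy**2) / (2 * 1.0**2))
psf_blurred = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))
csf = np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))  # stand-in neural weighting

v = vsotf(psf_blurred, psf_sharp, csf)  # < 1: the defocused eye loses contrast
```

In an accommodation simulation like the one described, the predicted response is the focus state that maximizes this ratio.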
4. Liquid crystal lens set in augmented reality systems and virtual reality systems for rapidly varifocal images and vision correction. Optics Express 2022;30:22768-22778. [PMID: 36224967; DOI: 10.1364/oe.461378]
Abstract
The major challenges for augmented reality (AR) and virtual reality (VR) systems are varifocal imaging, to resolve the vergence-accommodation conflict (VAC), and vision correction. In this paper, we design a liquid crystal (LC) lens set consisting of three LC lenses for varifocal images and vision correction in AR and VR. Four operating modes of the LC lens set provide electrically tunable lens powers of 0, -0.79, -2, and -3.06 dioptres by manipulating the polarization of the incident light with electrically tunable half-wave plates. The response time is fast (<50 ms). We also demonstrate AR and VR systems that adopt the LC lens set to provide varifocal images and vision correction, addressing the VAC as well as the user's refractive needs.
5. A calibration-free workflow for image-based mixed reality navigation of total shoulder arthroplasty. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2021.2009378]
6. Multifocal Stereoscopic Projection Mapping. IEEE Transactions on Visualization and Computer Graphics 2021;27:4256-4266. [PMID: 34449374; DOI: 10.1109/tvcg.2021.3106486]
Abstract
Stereoscopic projection mapping (PM) allows a user to see a three-dimensional (3D) computer-generated (CG) object floating over physical surfaces of arbitrary shapes using projected imagery. However, current stereoscopic PM technology only satisfies binocular cues and cannot provide correct focus cues, which causes a vergence-accommodation conflict (VAC). We therefore propose a multifocal approach to mitigate the VAC in stereoscopic PM. Our primary technical contribution is to attach electrically focus-tunable lenses (ETLs) to active shutter glasses to control both vergence and accommodation. Specifically, we apply fast, periodic focal sweeps to the ETLs, which causes the virtual image (in the optical sense) of a scene observed through the ETLs to move back and forth during each sweep period. A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at the desired distance, providing the observer with the required focus cues. In this study, we solve three technical issues unique to stereoscopic PM: (1) the 3D CG object must be displayed on non-planar and even moving surfaces; (2) the physical surfaces must be shown without the focus modulation; (3) the shutter glasses must additionally be synchronized with the ETLs and the projector. We also develop a novel compensation technique for the "lens breathing" artifact, in which the focal-length modulation varies the retinal size of the virtual image. Using a proof-of-concept prototype, we demonstrate that our technique can present the virtual image of a target 3D CG object at the correct depth. Finally, we validate the advantage of our technique over conventional stereoscopic PM with a user study on a depth-matching task.
7. Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display. IEEE Transactions on Visualization and Computer Graphics 2020;26:1871-1879. [PMID: 32070978; DOI: 10.1109/tvcg.2020.2973443]
Abstract
Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners - an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.
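The flavour of the binary/continuous factorization described above can be caricatured with a toy alternating scheme (a hypothetical simplification for intuition, not the paper's algorithm): approximate a grayscale target as a sum over time slots of a binary DMD pattern times a nonnegative LED intensity, alternately solving the intensities by least squares and re-thresholding the patterns.

```python
import numpy as np

def factorize_frames(target, slots=4, iters=10, seed=0):
    """Toy alternating factorization: target ~= sum_k led[k] * binary[k],
    with led[k] >= 0 and binary[k] in {0, 1} (one pattern per time slot)."""
    rng = np.random.default_rng(seed)
    h, w = target.shape
    binary = rng.random((slots, h, w)) > 0.5
    led = np.zeros(slots)
    for _ in range(iters):
        # Fix the binary patterns; least-squares update of LED intensities.
        A = binary.reshape(slots, -1).astype(float).T
        led, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
        led = np.clip(led, 0.0, None)
        # Fix the LEDs; greedily re-threshold each pattern in turn.
        for k in range(slots):
            rest = np.tensordot(led, binary.astype(float), axes=(0, 0)) \
                   - led[k] * binary[k]
            binary[k] = (target - rest) > led[k] / 2
    return led, binary

# Hypothetical target: a smooth gradient.
target = np.linspace(0.0, 1.0, 16 * 16).reshape(16, 16)
led, patterns = factorize_frames(target)
recon = np.tensordot(led, patterns.astype(float), axes=(0, 0))
```

The real system adds the multiplicative see-through light path and perceptual weighting; this sketch only shows why time-multiplexed binary patterns with per-slot LED colors can approximate a continuous-tone target.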
8. Comparison of wavefront recording plane-based hologram calculations: ray-tracing method versus look-up table method. Applied Optics 2020;59:2400-2408. [PMID: 32225774; DOI: 10.1364/ao.386722]
Abstract
In this study, we compare the ray-tracing method with the look-up table (LUT) method in order to optimize computer-generated hologram (CGH) calculation based on the wavefront recording plane (WRP) method. The speed of WRP-based CGH calculation depends largely on implementation factors such as the calculation method, hardware, and parallelization scheme. We therefore evaluated the calculation time and the image quality of the reconstructed three-dimensional (3D) image for the ray-tracing and LUT methods in both central processing unit (CPU) and graphics processing unit (GPU) implementations. We then ran these implementations while varying the number of object points and the distance from the 3D objects to the WRP, and confirmed that the CPU and GPU implementations exhibit different characteristics.
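The WRP idea being compared can be sketched as follows (a simplified numpy illustration with assumed wavelength, pitch, and resolution, not the paper's implementation): each object point writes a small spherical-wave patch onto a recording plane placed close to the object; the ray-tracing method evaluates that patch per point, while the LUT method precomputes one patch per depth and reuses it.

```python
import numpy as np

WAVELENGTH = 532e-9  # assumed green laser
PITCH = 8e-6         # assumed SLM pixel pitch
N = 256              # hologram resolution (assumption)

def point_patch(dz, half=16):
    """Spherical-wave patch a point at depth dz writes onto the WRP.
    Ray tracing evaluates this per point; a LUT stores it once per depth."""
    ax = np.arange(-half, half + 1) * PITCH
    xx, yy = np.meshgrid(ax, ax)
    r = np.sqrt(xx**2 + yy**2 + dz**2)
    return np.exp(1j * 2 * np.pi / WAVELENGTH * r) / r

def wrp_from_points(points, lut, half=16):
    """Accumulate every point's (precomputed) patch onto the WRP; a second
    diffraction step (not shown) then propagates the WRP to the hologram."""
    wrp = np.zeros((N, N), dtype=complex)
    for ix, iy, dz, amp in points:
        wrp[iy - half:iy + half + 1, ix - half:ix + half + 1] += amp * lut[dz]
    return wrp

lut = {2e-3: point_patch(2e-3)}                  # one depth, 2 mm from the WRP
wrp = wrp_from_points([(128, 128, 2e-3, 1.0)], lut)
```

Because the patch is small when the point sits near the WRP, the per-point cost is tiny compared with evaluating the full hologram plane, which is the speed advantage both compared methods exploit.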
9. Retinal projection type lightguide-based near-eye display with switchable viewpoints. Optics Express 2020;28:3116-3135. [PMID: 32121986; DOI: 10.1364/oe.383386]
Abstract
We present a retinal-projection-based near-eye display with multiple viewpoints that are switchable by polarization multiplexing. Active switching of viewpoints is provided by a polarization grating, multiplexed holographic optical elements, and a polarization-dependent eyepiece lens that can generate one of two divided focus groups according to the pupil position. The lightguide-combined optical devices have the potential to enable a wide field of view (FOV) and short eye relief in a compact form factor. Our proposed system can accommodate pupil movement through an extended eyebox and mitigates the image problems caused by duplicated viewpoints. We discuss the optical design of the guiding system and demonstrate that the proof-of-concept system provides all-in-focus images with a 37-degree FOV and a 16 mm horizontal eyebox.
10. Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics. IEEE Transactions on Visualization and Computer Graphics 2019;25:3125-3134. [PMID: 31502977; DOI: 10.1109/tvcg.2019.2933120]
Abstract
Optical see-through augmented reality (AR) systems are a next-generation computing platform that offer unprecedented user experiences by seamlessly combining physical and digital content. Many of the traditional challenges of these displays have been significantly improved over the last few years, but AR experiences offered by today's systems are far from seamless and perceptually realistic. Mutually consistent occlusions between physical and digital objects are typically not supported. When mutual occlusion is supported, it is only supported for a fixed depth. We propose a new optical see-through AR display system that renders mutual occlusion in a depth-dependent, perceptually realistic manner. To this end, we introduce varifocal occlusion displays based on focus-tunable optics, which comprise a varifocal lens system and spatial light modulators that enable depth-corrected hard-edge occlusions for AR experiences. We derive formal optimization methods and closed-form solutions for driving this tunable lens system and demonstrate a monocular varifocal occlusion-capable optical see-through AR display capable of perceptually realistic occlusion across a large depth range.
11. Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes. Science Advances 2019;5:eaav6187. [PMID: 31259239; PMCID: PMC6598771; DOI: 10.1126/sciadv.aav6187]
Abstract
As humans age, they gradually lose the ability to accommodate, or refocus, to near distances because of the stiffening of the crystalline lens. This condition, known as presbyopia, affects nearly 20% of people worldwide. We design and build a new presbyopia correction, autofocals, to externally mimic the natural accommodation response, combining eye tracker and depth sensor data to automatically drive focus-tunable lenses. We evaluated 19 users on visual acuity, contrast sensitivity, and a refocusing task. Autofocals exhibit better visual acuity when compared to monovision and progressive lenses while maintaining similar contrast sensitivity. On the refocusing task, autofocals are faster and, compared to progressives, also significantly more accurate. In a separate study, a majority of 23 of 37 users ranked autofocals as the best correction in terms of ease of refocusing. Our work demonstrates the superiority of autofocals over current forms of presbyopia correction and could affect the lives of millions.
12. Real-time three-dimensional video reconstruction of real scenes with deep depth using electro-holographic display system. Optics Express 2019;27:15662-15678. [PMID: 31163760; DOI: 10.1364/oe.27.015662]
Abstract
Herein, we demonstrate a real-time, three-dimensional (3D) video-reconstruction system using electro-holography in real-world scenes with deep depth. We calculated computer-generated holograms using 3D information obtained through an RGB-D camera. We successfully reconstructed a 3D video (in real time) of a person moving in real-world space and confirmed that the proposed system operates at ~14 frames per second. In addition, we successfully reconstructed a full-color 3D video of the person. Furthermore, we varied the number of persons moving in the real-world space and evaluated the proposed system's performance by varying the distance between the RGB-D camera and the person(s).
13.
Abstract
Blur occurs naturally when the eye is focused at one distance and an object is presented at another distance. Computer-graphics engineers and vision scientists often wish to create display images that reproduce such depth-dependent blur, but their methods are incorrect for that purpose. They take into account the scene geometry, pupil size, and focal distances, but do not properly take into account the optical aberrations of the human eye. We developed a method that, by incorporating the viewer's optics, yields displayed images that produce retinal images close to the ones that occur in natural viewing. We concentrated on the effects of defocus, chromatic aberration, astigmatism, and spherical aberration and evaluated their effectiveness by conducting experiments in which we attempted to drive the eye's focusing response (accommodation) through the rendering of these aberrations. We found that accommodation is not driven at all by conventional rendering methods, but that it is driven surprisingly quickly and accurately by our method with defocus and chromatic aberration incorporated. We found some effect of astigmatism but none of spherical aberration. We discuss how the rendering approach can be used in vision science experiments and in the development of ophthalmic/optometric devices and augmented- and virtual-reality displays.
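The core of such aberration-aware rendering can be caricatured in a few lines (a toy numpy sketch under loudly simplified assumptions: a Gaussian blur stands in for the eye's true PSF, and the chromatic offsets and pixels-per-dioptre scale are illustrative): each colour channel is blurred by its own defocus, equal to the stimulus vergence minus the eye's accommodation plus that channel's longitudinal chromatic aberration offset.

```python
import numpy as np

def gaussian_blur_fft(img, sigma_px):
    """Gaussian blur via multiplication with a Gaussian OTF in the frequency domain."""
    f0 = np.fft.fftfreq(img.shape[0])[:, None]
    f1 = np.fft.fftfreq(img.shape[1])[None, :]
    otf = np.exp(-2 * (np.pi * sigma_px) ** 2 * (f0 ** 2 + f1 ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

# Assumed longitudinal chromatic aberration offsets relative to green
# (dioptres; illustrative values) and an assumed blur scale.
LCA = {"r": +0.35, "g": 0.0, "b": -0.55}
PX_PER_DIOPTRE = 3.0  # sigma in pixels per dioptre of defocus (assumption)

def render_chromatic_defocus(rgb, stimulus_d, accommodation_d):
    """Blur each channel by its own defocus: stimulus vergence minus the
    eye's accommodation, plus the channel's chromatic offset."""
    out = np.empty_like(rgb)
    for i, ch in enumerate("rgb"):
        defocus = stimulus_d - accommodation_d + LCA[ch]
        out[..., i] = gaussian_blur_fft(rgb[..., i], abs(defocus) * PX_PER_DIOPTRE)
    return out

rgb = np.random.default_rng(1).random((32, 32, 3))
focused_green = render_chromatic_defocus(rgb, stimulus_d=2.0, accommodation_d=2.0)
```

When accommodation matches the stimulus, the green channel stays sharp while red and blue blur in opposite depth directions, which is the sign cue the study found drives accommodation.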
14. Binocular holographic three-dimensional display using a single spatial light modulator and a grating. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2018;35:1477-1486. [PMID: 30110285; DOI: 10.1364/josaa.35.001477]
Abstract
In this paper, a binocular holographic three-dimensional (3D) display system combining a single spatial light modulator (SLM) and a grating is proposed and implemented. A synthetic phase-only hologram of the left and right 3D perspective images of an object is calculated by the layer-based Fresnel diffraction method according to the depth information, and uploaded onto the SLM for holographic 3D reconstruction with correct depth cues. The grating is designed and fabricated to guide the reconstructed left and right 3D perspective images to the corresponding eyes. Optical experiments demonstrate that the proposed system can successfully present binocular holographic 3D images with both the accommodation effect and binocular parallax, which enables observation free of the accommodation-vergence conflict and visual fatigue problem.
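The layer-based Fresnel method referenced above can be sketched as follows (the wavelength, pitch, and random per-layer initial phases are illustrative assumptions, not the authors' parameters): each depth layer of a perspective image is propagated to the hologram plane with the Fresnel transfer function, the complex fields are summed, and only the phase is retained for the phase-only SLM.

```python
import numpy as np

def fresnel_transfer(n, pitch, wavelength, z):
    """Fresnel transfer function H(fx, fy) for propagation over distance z."""
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    return np.exp(1j * 2 * np.pi * z / wavelength) * \
           np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))

def layer_hologram(layers, pitch=8e-6, wavelength=532e-9, seed=0):
    """Propagate each depth layer to the hologram plane, sum the complex
    fields, and keep only the phase (phase-only hologram)."""
    rng = np.random.default_rng(seed)
    n = next(iter(layers.values())).shape[0]
    field = np.zeros((n, n), dtype=complex)
    for z, amplitude in layers.items():
        # A random initial phase per layer spreads energy across the SLM.
        u0 = amplitude * np.exp(2j * np.pi * rng.random(amplitude.shape))
        field += np.fft.ifft2(np.fft.fft2(u0) * fresnel_transfer(n, pitch, wavelength, z))
    return np.angle(field)

# Two illustrative layers at 0.10 m and 0.12 m for one eye's perspective image.
near = np.zeros((64, 64)); near[20:30, 20:30] = 1.0
far = np.zeros((64, 64)); far[40:50, 40:50] = 1.0
holo_phase = layer_hologram({0.10: near, 0.12: far})
```

In the binocular system described, two such holograms (left and right perspectives) are synthesized side by side on the single SLM, and the grating routes each reconstruction to the matching eye.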
15. Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm. Optics Express 2018;26:10140-10151. [PMID: 29715954; DOI: 10.1364/oe.26.010140]
Abstract
In this paper, a method is proposed to implement a noise-reduced three-dimensional (3D) holographic near-eye display with a phase-only computer-generated hologram (CGH). The CGH is calculated with a double-convergence-light Gerchberg-Saxton (GS) algorithm, in which the phases of two virtual convergence lights are introduced into the GS algorithm. The phase of the first convergence light replaces the random phase as the initial value of the iteration, and the phase of the second convergence light modulates the phase distribution calculated by the GS algorithm. Both simulations and experiments were carried out to verify the feasibility of the proposed method. The results indicate that the method effectively reduces noise in the reconstruction. The field of view (FOV) of the reconstructed image reaches 40 degrees, and the experimental light path of the 4-f system is shortened. For 3D scenes, the results demonstrate that the proposed algorithm can present images with a 180 cm zooming range and continuous depth cues. This method may provide a promising solution for future 3D augmented reality (AR) displays.
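The double-convergence idea can be sketched in a few lines of numpy (the parameters and the single-FFT propagation model are illustrative assumptions, not the paper's exact optical setup): a quadratic convergence-light phase replaces the usual random initial phase of GS, and a second convergence phase modulates the phase GS returns.

```python
import numpy as np

def convergence_phase(n, pitch, wavelength, focal):
    """Quadratic phase of a virtual light converging at distance `focal`."""
    ax = (np.arange(n) - n / 2) * pitch
    xx, yy = np.meshgrid(ax, ax)
    return -np.pi / (wavelength * focal) * (xx**2 + yy**2)

def double_convergence_gs(target, pitch=8e-6, wavelength=532e-9,
                          f1=0.3, f2=0.5, iters=20):
    """GS between image and hologram planes; two convergence phases added."""
    n = target.shape[0]
    amp = np.sqrt(target)
    # First convergence light: replaces the random initial image-plane phase.
    phase = convergence_phase(n, pitch, wavelength, f1)
    for _ in range(iters):
        holo = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
        holo_phase = np.angle(holo)                      # phase-only constraint
        recon = np.fft.fftshift(np.fft.fft2(np.exp(1j * holo_phase)))
        phase = np.angle(recon)                          # keep phase, reimpose amp
    # Second convergence light: modulates the final hologram phase.
    return np.mod(holo_phase + convergence_phase(n, pitch, wavelength, f2),
                  2 * np.pi)

target = np.zeros((64, 64)); target[28:36, 28:36] = 1.0
holo = double_convergence_gs(target, iters=10)
```

The first convergence phase gives the iteration a smooth, physically plausible starting point (reducing speckle-like noise versus a random seed), while the second shifts the reconstruction plane, which is what shortens the 4-f light path in the reported system.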
16. Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter. Optics Express 2017;25:8412-8424. [PMID: 28380953; DOI: 10.1364/oe.25.008412]
Abstract
A compact see-through three-dimensional head-mounted display (3D-HMD) is proposed and investigated in this paper. Two phase holograms are analytically extracted from the object wavefront and uploaded onto different zones of the spatial light modulator (SLM). A holographic grating is then used as a frequency filter to couple the separated holograms together for wavefront modulation. The preliminary prototype has a simple optical layout and a compact structure (133.8 mm × 40.4 mm × 35.4 mm, with a 47.7 mm viewing accessory). Optical experiments demonstrated that the proposed system can present 3D images to the human eye with full depth cues; it is therefore free of the accommodation-vergence conflict and the associated visual fatigue. The dynamic display capability was also tested in the experiments, which shows promising potential for true interactive 3D display.
17. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays. Proceedings of the National Academy of Sciences 2017;114:2183-2188. [PMID: 28193871; DOI: 10.1073/pnas.1617251114]
Abstract
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
18.
Abstract
Creating realistic three-dimensional (3D) experiences has been a very active area of research and development, and this article describes both the progress made and what remains to be solved. A particularly active area of technical development has been building displays that create the correct relationship between viewing parameters and the triangulation depth cues: stereo, motion, and focus. Several disciplines are involved in the design, construction, evaluation, and use of 3D displays, but an understanding of human vision is crucial to this enterprise because, in the end, the goal is to provide the desired perceptual experience for the viewer. In this article, we review research and development concerning displays that create 3D experiences and highlight areas in which further work is needed.