1. Hiroi Y, Hiraki T, Itoh Y. StainedSweeper: Compact, Variable-Intensity Light-Attenuation Display with Sweeping Tunable Retarders. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2682-2692. [PMID: 38437084] [DOI: 10.1109/tvcg.2024.3372058]
Abstract
Light Attenuation Displays (LADs) are a type of Optical See-Through Head-Mounted Display (OST-HMD) that present images by attenuating incoming light with a pixel-wise polarizing color filter. Although LADs can display images in bright environments, there is a trade-off between the number of Spatial Light Modulators (SLMs) and the color gamut and contrast that can be expressed, making it difficult to achieve both high-fidelity image display and a small form factor. To address this problem, we propose StainedSweeper, a LAD that achieves both a wide color gamut and variable intensity with a single SLM. Our system synchronously controls a pixel-wise Digital Micromirror Device (DMD) and a non-pixelated polarizing color filter to pass light when each pixel is the desired color. Because this control sweeps at high speed, the human eye perceives the image in a time-multiplexed, integrated manner. To achieve this, we develop an OST-HMD design using a reflective Solc filter as a polarized color filter and a color reproduction algorithm based on optimization of the time-multiplexing matrix for the selected primary color filters. Our proof-of-concept prototype showed that our single-SLM design can produce subtractive images with variable contrast and a wider color gamut than conventional LADs.
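The time-multiplexed color reproduction described above (choosing how long to dwell on each primary filter state so the eye integrates the target color) can be illustrated with a toy linear model. The primary matrix, target color, and function name below are made-up values for illustration, not the paper's calibration data or algorithm:

```python
import numpy as np

# Each column: the linear-RGB color the eye integrates while one
# tunable-filter state is active (illustrative numbers, not measured primaries).
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.9]])

def duty_cycles(target_rgb):
    """Fraction of each frame to spend in each filter state so the
    time-averaged color is proportional to target_rgb."""
    w = np.linalg.solve(P, target_rgb)   # exact mix for an invertible P
    w = np.clip(w, 0.0, None)            # a state cannot run for negative time
    return w / max(w.sum(), 1e-9)        # normalize to one frame period

# Overall brightness would be set separately (e.g., by the DMD); only the
# chromaticity is matched here.
w = duty_cycles(np.array([0.5, 0.4, 0.3]))
```

The real system optimizes over spectral, not RGB, primaries and handles non-invertible filter sets; this sketch only shows why a single modulator plus time multiplexing suffices in principle.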
2. Cholok DJ, Fischer MJ, Leuze CW, Januszyk M, Daniel BL, Momeni A. Spatial Fidelity of Microvascular Perforating Vessels as Perceived by Augmented Reality Virtual Projections. Plast Reconstr Surg 2024; 153:524-534. [PMID: 37092985] [DOI: 10.1097/prs.0000000000010592]
Abstract
BACKGROUND: Autologous breast reconstruction yields improved long-term aesthetic results but requires increased resources from practitioners and hospital systems. Innovations in radiographic imaging have been increasingly used to improve the efficiency and success of free flap harvest. Augmented reality affords the opportunity to superimpose relevant imaging on a surgeon's native field of view, potentially facilitating dissection of anatomically variable structures. To validate the spatial fidelity of augmented reality projections of deep inferior epigastric perforator (DIEP) flap-relevant anatomy, four independent observers compared three-dimensional (3D) models with their virtual renderings, and measured discrepancies between the real and holographic models were evaluated.
METHODS: 3D-printed models of DIEP flap-relevant anatomy were fabricated from computed tomographic angiography data from 19 de-identified patients. The corresponding computed tomographic angiography data were similarly formatted for the Microsoft HoloLens to generate corresponding projections. Anatomic points were initially measured on the 3D models, after which the corresponding points were measured on the HoloLens projections from two separate vantage points (V1 and V2). Statistical analyses, including generalized linear modeling, were performed to characterize the spatial fidelity of the holographic projections with regard to translation, rotation, and scale.
RESULTS: Among all participants, the median translational displacement at corresponding points was 9.0 mm between the real 3D model and V1, 12.1 mm between the 3D model and V2, and 13.5 mm between V1 and V2.
CONCLUSION: Corresponding points, including the topography of perforating vessels, can be identified within millimeters for the purposes of breast reconstruction, but multiple independent contributors of error remain, most notably the participant and the location at which the projection is perceived.
Affiliation(s)
- Marc J Fischer, Department of Radiology, Stanford University School of Medicine
- Bruce L Daniel, Department of Radiology, Stanford University School of Medicine
- Arash Momeni, Division of Plastic and Reconstructive Surgery
3. Ebner C, Mohr P, Langlotz T, Peng Y, Schmalstieg D, Wetzstein G, Kalkofen D. Off-Axis Layered Displays: Hybrid Direct-View/Near-Eye Mixed Reality with Focus Cues. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2816-2825. [PMID: 37027729] [DOI: 10.1109/tvcg.2023.3247077]
Abstract
This work introduces off-axis layered displays, the first approach to stereoscopic direct-view displays with support for focus cues. Off-axis layered displays combine a head-mounted display with a traditional direct-view display to encode a focal stack and thus provide focus cues. To explore the novel display architecture, we present a complete processing pipeline for the real-time computation and post-render warping of off-axis display patterns. We build two prototypes, pairing a head-mounted display with a stereoscopic direct-view display and with a more widely available monoscopic direct-view display. We also show how extending off-axis layered displays with an attenuation layer and with eye tracking can improve image quality. We thoroughly analyze each component in a technical evaluation and present examples captured through our prototypes.
4. Zhang Y, Hu X, Kiyokawa K, Yang X. Add-on Occlusion: Turning Off-the-Shelf Optical See-through Head-mounted Displays Occlusion-capable. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2700-2709. [PMID: 37027617] [DOI: 10.1109/tvcg.2023.3247064]
Abstract
The occlusion-capable optical see-through head-mounted display (OC-OSTHMD) has been actively developed in recent years, since it allows mutual occlusion between virtual objects and the physical world to be presented correctly in augmented reality (AR). However, because occlusion has so far been implemented only on this special type of OSTHMD, the appealing feature has not seen wide application. In this paper, a novel approach for realizing mutual occlusion on common OSTHMDs is proposed. A wearable device with per-pixel occlusion capability is designed; OSTHMDs are upgraded to be occlusion-capable by attaching the device in front of their optical combiners. A prototype based on the HoloLens 1 is built, and a virtual display with mutual occlusion is demonstrated in real time. A color correction algorithm is proposed to mitigate the color aberration caused by the occlusion device. Potential applications, including texture replacement of real objects and more realistic display of semi-transparent objects, are demonstrated. The proposed system is expected to enable a universal implementation of mutual occlusion in AR.
5. Macedo MCF, Apolinario AL. Occlusion Handling in Augmented Reality: Past, Present and Future. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1590-1609. [PMID: 34613916] [DOI: 10.1109/tvcg.2021.3117866]
Abstract
One of the main goals of many augmented reality applications is to provide a seamless integration of a real scene with additional virtual data. To fully achieve that goal, such applications must typically provide high-quality real-world tracking, support real-time performance, and handle the mutual occlusion problem, estimating the position of the virtual data within the real scene and rendering the virtual content accordingly. In this survey, we focus on the occlusion handling problem in augmented reality applications and provide a detailed review of 161 articles published in this field between January 1992 and August 2020. To do so, we present a historical overview of the most common strategies employed to determine the depth order between real and virtual objects, to visualize hidden objects in a real scene, and to build occlusion-capable visual displays. Moreover, we look at the state-of-the-art techniques, highlight recent research trends, discuss the current open problems of occlusion handling in augmented reality, and suggest future directions for research.
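The first strategy family the survey covers, determining depth order between real and virtual objects, reduces to a per-pixel depth comparison: show the virtual pixel only where the virtual surface is nearer than the real one. A minimal sketch with synthetic depth maps (all values illustrative, not from any system in the survey):

```python
import numpy as np

# Synthetic per-pixel depths in meters (illustrative values).
real_depth    = np.array([[1.0, 1.0],
                          [0.5, 2.0]])
virtual_depth = np.array([[0.8, 1.5],
                          [0.7, 1.5]])
real_rgb    = np.zeros((2, 2, 3))   # black background scene
virtual_rgb = np.ones((2, 2, 3))    # white virtual object

# Virtual content is visible only where it is nearer than the real scene.
mask = (virtual_depth < real_depth)[..., None]
composite = np.where(mask, virtual_rgb, real_rgb)
```

Real systems must additionally estimate `real_depth` from sensors and handle soft edges and sensor noise, which is where most of the surveyed complexity lies.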
6. Jiang J, Zhang J, Sun J, Wu D, Xu S. User's image perception improved strategy and application of augmented reality systems in smart medical care: A review. Int J Med Robot 2023; 19:e2497. [PMID: 36629798] [DOI: 10.1002/rcs.2497]
Abstract
BACKGROUND: Augmented reality (AR) is a new human-computer interaction technology that combines virtual reality, computer vision, and computer networks. With the rapid advancement of the medical field towards intelligence and data visualisation, AR systems are becoming increasingly popular because they can provide doctors with sufficiently clear medical images and accurate image navigation in practical applications. However, different display types of AR systems affect doctors' perception of the image after virtual-real fusion differently in actual medical applications. If doctors cannot correctly perceive the image, they may be unable to correctly match the virtual information with the real world, which significantly impairs their ability to recognise complex structures.
METHODS: This paper uses CiteSpace, a literature analysis tool, to visualise and analyse research hotspots in the application of AR systems to the medical field.
RESULTS: A visual analysis of the 1163 articles retrieved from the Web of Science Core Collection database reveals that display technology and visualisation technology are currently the key research directions for AR systems.
CONCLUSION: This paper categorises AR systems based on their display principles, reviews current image-perception optimisation schemes for each type of system, and analyses and compares the display types in terms of their practical applications in smart medical care, so that doctors can select an appropriate display type for a given scenario. Finally, the future development of AR display technology is anticipated so that AR can be applied more effectively in smart medical care. The advancement of display technology is critical for the medical use of AR systems, and the advantages and disadvantages of each display type should be weighed in each application scenario to select the best system.
Affiliation(s)
- Jingang Jiang, Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China; Robotics & Its Engineering Research Center, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jiawei Zhang, Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jianpeng Sun, Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Dianhao Wu, Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Shuainan Xu, Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
7. Wilson A, Hua H. Design of a Pupil-Matched Occlusion-Capable Optical See-Through Wearable Display. IEEE Transactions on Visualization and Computer Graphics 2022; 28:4113-4126. [PMID: 33905332] [DOI: 10.1109/tvcg.2021.3076069]
Abstract
State-of-the-art optical see-through head-mounted displays (OST-HMDs) for augmented reality applications lack the ability to correctly render light-blocking behavior between digital and physical objects, known as mutual occlusion capability. In this article, we present a novel optical architecture for a high-performance, occlusion-capable optical see-through head-mounted display (OCOST-HMD). The design utilizes a single-layer, double-pass architecture, creating a compact OCOST-HMD capable of rendering per-pixel mutual occlusion, a correctly pupil-matched viewing perspective between virtual and real scenes, and a wide see-through field of view (FOV). Based on this architecture, we present a design embodiment and a compact prototype implementation. The prototype demonstrates a virtual display with an FOV of 34° by 22°, an angular resolution of 1.06 arcminutes per pixel, and an average image contrast greater than 40 percent at the Nyquist frequency of 53 cycles/mm. Furthermore, the device achieves a see-through FOV of 90° by 50°, of which about 40° diagonally is occlusion-enabled, with an angular resolution of 1.0 arcminutes (comparable to 20/20 vision) and a dynamic range greater than 100:1. We conclude the paper with a quantitative comparison of key optical performance measures, such as the modulation transfer function, image contrast, and color rendering accuracy, of our OCOST-HMD system with and without occlusion enabled in various lighting environments.
8. Zhang Y, Wang R, Peng Y, Hua W, Bao H. Color Contrast Enhanced Rendering for Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2022; 28:4490-4502. [PMID: 34161241] [DOI: 10.1109/tvcg.2021.3091686]
Abstract
Most commercially available optical see-through head-mounted displays (OST-HMDs) utilize optical combiners to simultaneously visualize the physical background and virtual objects. The displayed images perceived by users are a blend of rendered pixels and background colors. Enabling high fidelity color perception in mixed reality (MR) scenarios using OST-HMDs is an important but challenging task. We propose a real-time rendering scheme to enhance the color contrast between virtual objects and the surrounding background for OST-HMDs. Inspired by the discovery of color perception in psychophysics, we first formulate the color contrast enhancement as a constrained optimization problem. We then design an end-to-end algorithm to search the optimal complementary shift in both chromaticity and luminance of the displayed color. This aims at enhancing the contrast between virtual objects and the real background as well as keeping the consistency with the original displayed color. We assess the performance of our approach using a simulated OST-HMD environment and an off-the-shelf OST-HMD. Experimental results from objective evaluations and subjective user studies demonstrate that the proposed approach makes rendered virtual objects more distinguishable from the surrounding background, thereby bringing a better visual experience.
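This is not the paper's actual constrained optimization, but the underlying idea (shift the displayed color away from the blended background while bounding the deviation from the original) can be sketched as follows. The colors, the shift bound, and the function name are illustrative assumptions:

```python
import numpy as np

def contrast_shift(display_rgb, background_rgb, max_shift=0.1):
    """Push the displayed color directly away from the background color,
    limited to a step of length max_shift: a toy stand-in for the paper's
    constrained chromaticity/luminance optimization."""
    direction = display_rgb - background_rgb      # away from the background
    norm = np.linalg.norm(direction)
    if norm < 1e-9:                               # identical colors: any direction works
        direction = np.ones_like(display_rgb)
        norm = np.linalg.norm(direction)
    shifted = display_rgb + max_shift * direction / norm
    return np.clip(shifted, 0.0, 1.0)             # stay in the displayable range

out = contrast_shift(np.array([0.6, 0.5, 0.4]), np.array([0.5, 0.5, 0.5]))
```

The paper works in a perceptual color space and searches for a complementary shift under consistency constraints; this Euclidean-RGB step only conveys the direction of the trade-off.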
9. Pinilla S, Miri Rostami SR, Shevkunov I, Katkovnik V, Egiazarian K. Hybrid diffractive optics design via hardware-in-the-loop methodology for achromatic extended-depth-of-field imaging. Optics Express 2022; 30:32633-32649. [PMID: 36242320] [DOI: 10.1364/oe.461549]
Abstract
End-to-end optimization of diffractive optical element (DOE) profiles through a digital differentiable model combined with computational imaging has gained increasing attention in emerging applications due to the compactness of the resulting physical setups. Although recent works have shown the potential of this methodology for designing optics, its performance in physical setups is still limited, affected by manufacturing artefacts of the DOE, mismatch between simulated and experimental point spread functions, and calibration errors. Additionally, the computational burden of the digital differentiable model needed to design the DOE effectively keeps growing, limiting the size of the DOE that can be designed. To overcome these limitations, a co-design of hybrid optics and an image reconstruction algorithm is produced following an end-to-end hardware-in-the-loop strategy, using for optimization a convolutional neural network equipped with quantitative and qualitative loss functions. The optics of the imaging system consist of a phase-only spatial light modulator (SLM), acting as the DOE, and a refractive lens. The SLM phase pattern is optimized with the hardware-in-the-loop technique, which eliminates the mismatch between numerical modelling and the physical reality of image formation, since light propagation is performed physically rather than modelled numerically. Comparison with the compound multi-lens optics of a latest-generation smartphone and of a mirrorless commercial camera shows that the proposed system is superior in all-in-focus sharp imaging over a depth range of 0.4-1.9 m.
10. Hiroi Y, Kaminokado T, Ono S, Itoh Y. Focal surface occlusion. Optics Express 2021; 29:36581-36597. [PMID: 34809066] [DOI: 10.1364/oe.440024]
Abstract
This paper proposes focal surface occlusion to provide focal cues of occlusion masks for multiple virtual objects at continuous depths in an occlusion-capable optical see-through head-mounted display. A phase-only spatial light modulator (PSLM) that acts as a dynamic free-form lens is used to conform the focal surface of an occlusion mask to the geometry of the virtual scene. To reproduce multiple and continuous focal blurs while reducing the distortion of the see-through view, an optical design based on afocal optics and edge-based optimization to exploit a property of the occlusion mask is established. The prototype with the PSLM and transmissive liquid crystal display can reproduce the focus blur of occluded objects at multiple and continuous depths with a field of view of 14.6°.
11. Rostami SRM, Pinilla S, Shevkunov I, Katkovnik V, Egiazarian K. Power-balanced hybrid optics boosted design for achromatic extended depth-of-field imaging via optimized mixed OTF. Applied Optics 2021; 60:9365-9378. [PMID: 34807073] [DOI: 10.1364/ao.434852]
Abstract
This paper introduces a power-balanced hybrid optical imaging system: a diffractive computational camera whose image is formed by a refractive lens and a multilevel phase mask (MPM). The system provides a long focal depth with low chromatic aberrations thanks to the MPM, and high light-energy concentration thanks to the refractive lens. We introduce the concept of optical power balance between the lens and the MPM, which controls the contribution of each element to the modulation of the incoming light. Additional features of our MPM design are quantization of the MPM's shape in the number of levels and in the Fresnel order (thickness), handled with a smoothing function. To optimize the optical power balance as well as the MPM, we built a fully differentiable image formation model for joint optimization of the optical and imaging parameters of the proposed camera using neural network techniques. We also optimized a single Wiener-like optical transfer function (OTF), invariant to depth, to reconstruct a sharp image. We numerically and experimentally compare the designed system with its counterparts, lensless and just-lens optical systems, for the visible wavelength interval (400-700 nm) and a depth-of-field range of 0.5-∞ m in simulation and 0.5-2 m in experiments. The attained results demonstrate that the proposed system equipped with the optimal OTF surpasses its counterparts (even when they are used with an optimized OTF) in reconstruction quality at off-focus distances. The simulation results also reveal that optimizing the optical power balance, the Fresnel order, and the number of levels is essential for system performance, attaining an improvement of up to 5 dB in PSNR with the optimized OTF over the counterpart lensless setup.
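The depth-invariant Wiener-like OTF inversion mentioned above follows the classic Wiener deconvolution formula. A minimal single-channel sketch, where the image, PSF, and regularizer k are made-up stand-ins (and the PSF is assumed to share the image's shape):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Fourier-domain Wiener inversion, X = conj(H) * Y / (|H|^2 + k),
    with k a hand-picked noise-to-signal regularizer. The psf array must
    have the same shape as the image, centered in the middle."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # move the PSF center to the origin
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(X))
```

In the paper, a single such OTF is jointly optimized to remain valid across the whole depth range; here k is simply fixed by hand.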
12. Chae M, Bang K, Jo Y, Yoo C, Lee B. Occlusion-capable see-through display without the screen-door effect using a photochromic mask. Optics Letters 2021; 46:4554-4557. [PMID: 34525045] [DOI: 10.1364/ol.430478]
Abstract
Conventional occlusion-capable see-through display systems have many practical limitations such as the form factor, narrow field of view, screen-door effect, and diffraction of a real scene. In this Letter, we propose an occlusion-capable see-through display using lens arrays and a photochromic plate. By imaging the occlusion mask on the photochromic plate with near-UV light, the visible light transmittance of the plate changes. Since no black matrix lies on the photochromic plate, our system provides a clear real scene view without the grid structure of the pixels and can prevent diffraction defects of the real scene. We also alleviate the drawback of a narrow field of view using the lens arrays for a reduced form factor.
13. Neves CA, Leuze C, Gomez AM, Navab N, Blevins N, Vaisbuch Y, McNab JA. Augmented Reality for Retrosigmoid Craniotomy Planning. Skull Base Surg 2021; 83:e564-e573. [DOI: 10.1055/s-0041-1735509]
Abstract
While medical imaging data have traditionally been viewed on two-dimensional (2D) displays, augmented reality (AR) allows physicians to project the medical imaging data onto patients' bodies to locate important anatomy. We present a surgical AR application to plan the retrosigmoid craniotomy, a standard approach to access the posterior fossa and the internal auditory canal. As a simple and accurate alternative to surface landmarks and conventional surgical navigation systems, our AR application augments the surgeon's vision to guide the optimal location of cortical bone removal. In this work, two surgeons performed a retrosigmoid approach 14 times on eight cadaver heads. In each case, the surgeon manually aligned a computed tomography (CT)-derived virtual rendering of the sigmoid sinus on the real cadaveric head using a see-through AR display, allowing the surgeon to plan and perform the craniotomy accordingly. Postprocedure CT scans were acquired to assess the accuracy of the retrosigmoid craniotomies with respect to their intended location relative to the dural sinuses. The two surgeons had mean margins of davg = 0.6 ± 4.7 mm and davg = 3.7 ± 2.3 mm between the osteotomy border and the dural sinuses over all their cases, respectively, and only positive margins in 12 of the 14 cases. The intended surgical approach to the internal auditory canal was successfully achieved in all cases using the proposed method, and the relatively small and consistent margins suggest that our system has the potential to be a valuable tool for planning a variety of similar skull-base procedures.
Affiliation(s)
- Caio A. Neves, Department of Otolaryngology, Stanford School of Medicine, Stanford, United States; Faculty of Medicine, University of Brasília, Brasília, Brazil
- Christoph Leuze, Department of Radiology, Stanford School of Medicine, Stanford, United States
- Alejandro M. Gomez, Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Germany; Laboratory for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Nassir Navab, Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Germany; Laboratory for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Nikolas Blevins, Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
- Yona Vaisbuch, Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
- Jennifer A. McNab, Department of Radiology, Stanford School of Medicine, Stanford, United States
14. Zhang Y, Hu X, Kiyokawa K, Isoyama N, Sakata N, Hua H. Optical see-through augmented reality displays with wide field of view and hard-edge occlusion by using paired conical reflectors. Optics Letters 2021; 46:4208-4211. [PMID: 34469976] [DOI: 10.1364/ol.428714]
Abstract
Optical see-through head-mounted displays have been actively developed in recent years. An appropriate method for mutual occlusion is essential to provide a decent user experience in many augmented reality application scenarios. However, existing mutual occlusion methods fail to work well with a large field of view (FOV). In this Letter, we propose a double-parabolic-mirror structure that renders hard-edge occlusion within a wide FOV. The parabolic mirror increases the numerical aperture of the system significantly, and the use of paired parabolic mirrors eliminates most optical aberrations. A liquid-crystal-on-silicon device is introduced as the spatial light modulator for imaging a bright see-through view and rendering sharp occlusion patterns. A loop structure is built to eliminate vertical parallax. The system is designed to achieve a maximum monocular FOV of 114° (H) × 95° (V) with hard-edge occlusion, and an FOV of 83.5° (H) × 53.1° (V) is demonstrated with our bench-top prototype.
15. Augmented Reality Vector Light Field Display with Large Viewing Distance Based on Pixelated Multilevel Blazed Gratings. Photonics 2021. [DOI: 10.3390/photonics8080337]
Abstract
Glasses-free augmented reality (AR) 3D display has attracted great interest for its ability to merge virtual 3D objects with real scenes naturally, without the aid of any wearable device. Here we propose an AR vector light field display based on a view combiner and an off-the-shelf projector. The view combiner is sparsely covered with pixelated multilevel blazed gratings (MBGs) for the projection of perspective virtual images. Multi-order diffraction of the MBG is designed to increase the viewing distance and the vertical viewing angle. In a 20-inch prototype, multiple sets of 16 horizontal views form a smooth parallax. The viewing distance of the 3D scene is larger than 5 m, the vertical viewing angle is 15.6°, and the light efficiencies of all views are larger than 53%. We demonstrate that the displayed virtual 3D scene retains natural motion parallax and high brightness while maintaining a consistent occlusion effect with natural objects. This research can be extended to applications in areas such as human-computer interaction, entertainment, education, and medical care.
16. Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. [PMID: 33283563] [DOI: 10.1080/17434440.2021.1860750]
Abstract
Background: Research shows that the apprenticeship model, the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace it. Applying Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in surgical simulators increases their fidelity, level of immersion, and overall experience.
Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR, and MR across distinct surgical disciplines, including maxillofacial surgery and neurosurgery. Current developments in these areas, as well as potential future directions, are discussed.
Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact with it as in the physical world. The key components for applying AR and MR to surgical simulators are the tracking system and visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu, Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels, Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen, Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu, Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger, Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria; Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen, Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
17. Kaminokado T, Hiroi Y, Itoh Y. StainedView: Variable-Intensity Light-Attenuation Display with Cascaded Spatial Color Filtering for Improved Color Fidelity. IEEE Transactions on Visualization and Computer Graphics 2020; 26:3576-3586. [PMID: 32941143] [DOI: 10.1109/tvcg.2020.3023569]
Abstract
We present StainedView, an optical see-through display that spatially filters the spectral distribution of light to form an image with improved color fidelity. Existing light-attenuation displays have limited color fidelity and contrast, resulting in a degraded appearance of virtual images. To use these displays to present virtual images that are more consistent with the real world, we require three things: intensity modulation of incoming light, spatial color filtering with narrower bandwidth, and appropriate light modulation for incoming light with an arbitrary spectral distribution. In StainedView, we address the three requirements by cascading two phase-only spatial light modulators (PSLMs), a digital micromirror device, and polarization optics to control both light intensity and spectrum distribution. We show that our design has a 1.8 times wider color gamut fidelity (75.8% fulfillment of sRGB color space) compared to the existing single-PSLM approach (41.4%) under a reference white light. We demonstrated the design with a proof-of-concept display system. We further introduce our optics design and pixel-selection algorithm for the given light input, evaluate the spatial color filter, and discuss the limitation of the current prototype.
18. Ju YG, Choi MH, Liu P, Hellman B, Lee TL, Takashima Y, Park JH. Occlusion-capable optical-see-through near-eye display using a single digital micromirror device. Optics Letters 2020; 45:3361-3364. [PMID: 32630845] [DOI: 10.1364/ol.393194]
Abstract
Occlusion of a real scene by displayed virtual images mitigates incorrect depth cues and enhances image visibility in augmented reality applications. In this Letter, we propose a novel optical scheme for an occlusion-capable optical-see-through near-eye display. The proposed scheme uses only a single spatial light modulator, which serves simultaneously as the real-scene mask and the virtual-image display. A polarization-based double-pass configuration is also employed, enabling a compact implementation. The proposed scheme is verified by optical experiments demonstrating a 60 Hz red-green-blue video display with 4-bit depth per color channel and per-pixel dynamic occlusion with a maximum occlusion ratio of 90.6%.