1. Rabia S, Allain G, Tremblay R, Thibault S. Orthoscopic elemental image synthesis for 3D light field display using lens design software and real-world captured neural radiance field. Optics Express 2024;32:7800-7815. PMID: 38439452. DOI: 10.1364/oe.510579.
Abstract
The generation of elemental images (EIs) of complex real-world scenes can be challenging for conventional integral imaging (InIm) capture techniques, since the pseudoscopic effect, characterized by a depth inversion of the reconstructed 3D scene, occurs in this process. To address this problem, we present a new approach that uses a custom neural radiance field (NeRF) model to form real and/or virtual 3D image reconstructions from a complex real-world scene while avoiding distortion and depth inversion. One advantage of using a NeRF is that the 3D information of a complex scene (including transparency and reflection) is stored not in meshes or a voxel grid but in a neural network that can be queried to extract the desired data. The Nerfstudio API was used to generate a custom NeRF-based model while avoiding the need for a bulky acquisition system. A general workflow that includes the use of ray-tracing-based lens design software is proposed to facilitate the different processing steps involved in managing NeRF data. Through this workflow, we introduce a new mapping method for extracting the desired data from the custom-trained NeRF model, enabling the generation of undistorted orthoscopic EIs. An experimental 3D reconstruction was conducted using an InIm-based 3D light field display (LFD) prototype to validate the effectiveness of the proposed method. A qualitative comparison with the actual real-world scene showed that the reconstructed 3D scene is accurately rendered. The proposed work can be used to manage and render undistorted orthoscopic 3D images from custom-trained NeRF models for various InIm applications.
2. Alonso JR, Fernández A, Javidi B. Spatial perception in stereoscopic augmented reality based on multifocus sensing. Optics Express 2024;32:5943-5955. PMID: 38439309. DOI: 10.1364/oe.510688.
Abstract
In many areas, ranging from medical imaging to visual entertainment, 3D information acquisition and display is a key task. In multifocus computational imaging, stacks of images of a 3D scene are acquired under different focus configurations and are later combined by post-capture algorithms based on an image formation model in order to synthesize images with novel viewpoints of the scene. Stereoscopic augmented reality devices, through which it is possible to simultaneously view the three-dimensional real world along with an overlaid digital stereoscopic image pair, could benefit from the binocular content enabled by multifocus computational imaging. The spatial perception of the displayed stereo pairs can be controlled by synthesizing the desired point of view of each image of the stereo pair along with its parallax setting. The proposed method has the potential to alleviate the accommodation-convergence conflict and make stereoscopic augmented reality devices less prone to causing visual fatigue.
3. Zhao BC, Yang F, Wu F. High-Aperture-Ratio Dual-View Integral Imaging Display. Micromachines 2022;13:2213. PMID: 36557512. PMCID: PMC9785181. DOI: 10.3390/mi13122213.
Abstract
A low aperture ratio is a problem in the conventional dual-view integral imaging (DVII) display using a point light source array. A high-aperture-ratio DVII display using a gradient-width point light source array is reported in this work. Elemental images 1 and 2, which are alternately aligned on a liquid crystal panel, are illuminated by the light rays emitted from an assigned point light source. The optical path is improved by optimizing the widths of the point light sources. The aperture ratio of the proposed DVII display was demonstrated to be 1.88 times that of the conventional DVII display. Experiments showed that the vertical viewing range is related to the vertical width of the first row of point light sources, whereas the aperture ratio is related to the vertical widths of all point light sources. By optimizing the widths of the point light sources, the aperture ratio is enhanced without loss of viewing range.
Affiliation(s)
- Bai-Chuan Zhao, School of Information Engineering, Chengdu Aeronautic Polytechnic, Chengdu 610218, China
- Fan Yang, Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu 610041, China
- Fei Wu, School of Electronic Engineering, Chengdu Technological University, Chengdu 610073, China
4. Mao Y, Wang W, Jiang X, Zhang T, Yu H, Li P, Liu X, Le S. Elemental image array generation algorithm with accurate depth information for integral imaging. Applied Optics 2021;60:9875-9886. PMID: 34807176. DOI: 10.1364/ao.441189.
Abstract
In integral imaging, accurately reproducing the depth information of three-dimensional (3D) objects is one of the goals of researchers. Building on existing research, this paper proposes a new, to the best of our knowledge, elemental image array (EIA) generation algorithm that does not require prior knowledge of the depth of the spatial scene. By dividing the distance between the display lens array (LA) and the synthetic LA equally, and comparing the variance of the pixels corresponding to part of the display LA at different positions, the algorithm obtains the depth information of the 3D objects accurately, and the value of each synthetic pixel can then be calculated. Thus, a new EIA with accurate depth information is generated. Finally, the proposed algorithm is verified in experiments with both virtual and real objects.
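The variance test this abstract describes can be sketched generically: for each candidate depth plane, collect the pixel values that the different elemental images project onto the same scene point, and select the depth at which the views agree most, i.e. where the variance is smallest. A minimal illustration (the function name and data layout are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def select_depth(candidate_samples):
    """candidate_samples maps each candidate depth to the pixel values
    that the elemental images project onto that depth plane for one
    scene point.  The correct depth is where the views agree, i.e.
    where the variance across elemental images is smallest."""
    return min(candidate_samples, key=lambda d: np.var(candidate_samples[d]))

# Toy example: at 40 mm the views disagree strongly; at 60 mm they agree.
samples = {40.0: np.array([10.0, 200.0, 90.0]),
           60.0: np.array([120.0, 121.0, 119.0])}
best = select_depth(samples)
```

In the paper this test is run at equally spaced positions between the display LA and the synthetic LA; the sketch shows only the selection rule for a single point.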
5. Kopycki P, Tolosa A, Luque MJ, Garcia-Domene MC, Diez-Ajenjo M, Saavedra G, Martinez-Corral M. Examining the utility of pinhole-type screens for lightfield display. Optics Express 2021;29:33357-33366. PMID: 34809149. DOI: 10.1364/oe.438827.
Abstract
The use of microlens arrays for lightfield display has the drawback of providing images with strong chromatic aliasing. To overcome this problem, pinhole-type lightfield monitors have been proposed. This paper is devoted to evaluating the capability of such lightfield monitors to offer the user a convincing 3D experience, with images of sufficient brightness and a continuous aspect. To this end, we designed a psychophysical test specifically adapted to lightfield monitors, which allowed us to confirm the usability of pinhole-type monitors.
6. Ai L, Shi X, Wang X, Cao H, Wang S. Motion parallax enhanced 3-D integral imaging display from the commercial plenoptic camera. Optics Express 2020;28:31127-31139. PMID: 33115094. DOI: 10.1364/oe.402926.
Abstract
The direct pickup of integral imaging typically needs to overcome several limitations, especially the restricted depth of field (DoF) under a lenslet array. To solve this problem, we design a motion-parallax-enhancing approach for three-dimensional (3-D) integral optical display that relies only on a commercial Lytro camera. First, the non-uniform axial compression from the zoom lens of the Lytro camera is analyzed and experimentally investigated. Next, using depth slicing, locating, and retargeting, the parallax of the integral optical display is significantly enhanced. Additionally, the displayed depth information can be presented with uniform compression, in the same proportion as the real scene, even without prior knowledge of the actual object distance. The experimental results prove the feasibility of the proposed method, which provides an efficient way to acquire the elemental image array. It is also a new attempt to expand the application scope of the Lytro camera from 2-D refocusing to content acquisition for the integral display.
7. Chen D, Sang X, Peng W, Yu X, Wang HC. Multi-parallax views synthesis for three-dimensional light-field display using unsupervised CNN. Optics Express 2018;26:27585-27598. PMID: 30469822. DOI: 10.1364/oe.26.027585.
Abstract
Multi-view applications are used in a wide range of fields, especially three-dimensional (3D) display. Since capturing dense multiple views for 3D light-field display is still difficult, view synthesis becomes an accessible alternative. Convolutional neural networks (CNNs) have been used to synthesize new views of a scene. However, training targets are sometimes difficult to obtain, and views are very difficult to synthesize at arbitrary positions. Here, an unsupervised network, Multi-Parallax View Net (MPVN), is proposed, which can synthesize multi-parallax views for 3D light-field display. Existing parallax views are re-projected to the target position to build input towers. The network operates on these towers and outputs a color tower and a selection tower. These two towers yield the final output image by per-pixel weighted summing. MPVN adopts end-to-end unsupervised training to minimize prediction errors at existing positions. It can predict virtual views at any parallax position between existing views with high quality. Experimental results demonstrate the validity of the proposed network, and the SSIM of synthesized views is mostly above 0.95. We believe that this method can effectively provide enough views for 3D light-field display in future work.
8. New Method of Microimages Generation for 3D Display. Sensors 2018;18:2805. PMID: 30149639. PMCID: PMC6164900. DOI: 10.3390/s18092805.
Abstract
In this paper, we propose a new method for the generation of microimages, which processes real 3D scenes captured with any method that permits the extraction of their depth information. The depth map of the scene, together with its color information, is used to create a point cloud. A set of elemental images of this point cloud is captured synthetically, and from it the microimages are computed. The main feature of this method is that the reference plane of the displayed images can be set at will, while empty pixels are avoided. Another advantage is that the center point of the displayed images, as well as their scale and field of view, can be set. To show the final results, a 3D InI display prototype is implemented using a tablet and a microlens array. We demonstrate that this new technique overcomes the drawbacks of previous similar ones and provides more flexibility in setting the characteristics of the final image.
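The first step the abstract describes, turning a depth map and its color image into a point cloud, is commonly done by pinhole back-projection. A generic sketch under that assumption (the intrinsics fx, fy, cx, cy and the function name are illustrative, not taken from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project a depth map into a colored point cloud using a
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = color.reshape(h * w, -1)          # one color row per point
    return points, colors

# 2x2 depth map, all points 1 unit away, principal point at the corner.
depth = np.ones((2, 2))
color = np.zeros((2, 2, 3), dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, color, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The synthetic capture of elemental images from this point cloud, the paper's second step, would then project each point through each lenslet of a virtual array.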
9. Hu J, Lou Y, Wu F, Chen A. Twin imaging phenomenon of integral imaging. Optics Express 2018;26:13301-13310. PMID: 29801355. DOI: 10.1364/oe.26.013301.
Abstract
The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of the conclusions are only suited to integral imaging systems using diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused-illumination reflective integral imaging system. Interactive twin images, including a real and a virtual 3D image of one object, can be activated in the system. The imaging phenomenon is similar to the conjugate imaging effect of a hologram, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems with 80 µm thickness have also been made to verify the imaging phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) light source. When the LED moves, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.
10. Yan Z, Yan X, Jiang X, Ai L. Computational integral imaging reconstruction of perspective and orthographic view images by common patches analysis. Optics Express 2017;25:21887-21900. PMID: 29041480. DOI: 10.1364/oe.25.021887.
Abstract
A novel method to computationally reconstruct perspective and orthographic view images at the full resolution of the recording device from a single integral photograph is proposed. First, a group of image slices that contain full yet redundant information for reconstructing the view image is generated, and the object surface is divided into pieces by the points that correspond to the centers of the image slices. Second, the image slices that contribute to each piece are extracted, and the redundant information embedded in them is identified by common patches analysis. Finally, the view image is reconstructed by excluding the redundant information and resampling at the maximum sampling rate. Each piece of the object surface is represented by at most 9 patches from 4 adjacent elemental images, and view images of high quality are reconstructed. Both simulations and experiments verify the validity of the method.
11. Yim J, Choi KH, Min SW. Real object pickup method of integral imaging using offset lens array. Applied Optics 2017;56:F167-F172. PMID: 28463313. DOI: 10.1364/ao.56.00f167.
Abstract
We propose a pickup system for integral imaging using an offset lens array (OLA), which is a useful optical component for both the pickup and display processes. The main purpose of our system is to resolve the pseudoscopic image problem of integral imaging. In addition, the flipped image of integral imaging, which carries wrong perspective information, can be removed by adding an external barrier in the display process. In this paper, the above properties are explained in detail, and experimental results verifying the feasibility of the proposed system are presented. We are confident that our system can also be applied to various other pickup systems based on integral imaging.
12. Martinez-Uso A, Latorre-Carmona P, Sotoca JM, Pla F, Javidi B. Depth estimation in Integral Imaging based on a maximum voting strategy. Journal of Display Technology 2016. DOI: 10.1109/jdt.2016.2615565.
13. Jeong Y, Kim J, Yeom J, Lee CK, Lee B. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera. Applied Optics 2015;54:10333-10341. PMID: 26836855. DOI: 10.1364/ao.54.010333.
Abstract
In this paper, we develop a real-time depth-controllable integral imaging system. With a high-frame-rate camera and a focus-controllable lens, light fields from various depth ranges can be captured. Depending on the image plane of the light field camera, objects in virtual and real space are recorded simultaneously. The captured light field information is converted to elemental images in real time without pseudoscopic problems. In addition, we derive the characteristics and limitations of the light field camera as a 3D broadcasting capture device using precise geometrical optics. With further analysis, the implemented system provides more accurate light fields than existing devices, without depth distortion. We adopt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method demonstrates the possibility of a handheld real-time 3D broadcasting system that is cheaper and more practical than previous methods.
14. Wang Z, Wang A, Wang S, Ma X, Ming H. Resolution-enhanced integral imaging using two micro-lens arrays with different focal lengths for capturing and display. Optics Express 2015;23:28970-28977. PMID: 26561165. DOI: 10.1364/oe.23.028970.
Abstract
We propose a resolution-enhanced integral imaging display method using two micro-lens arrays (MLAs) with different focal lengths for capturing and display, respectively. An elemental image array (EIA) is captured with an MLA of focal length f1, and a processed EIA is displayed with an MLA of focal length f2, which is larger than f1. We enlarge the "effective area" in the processed EIA to increase the information obtained by the viewer; in other words, we enhance the viewing resolution. The two micro-lens arrays for capturing and display are placed at distances g and mg from the display device, respectively, and an m² resolution enhancement is obtained.
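As a quick arithmetic check of the claim above (the function name is illustrative; the m² gain and the relation m = f2/f1 are taken from the abstract's geometry, where the display MLA sits m times farther from the display device):

```python
def resolution_gain(f1, f2):
    """Resolution-enhancement factor claimed in the abstract: with a
    display-MLA focal length f2 = m * f1, the gain is m**2."""
    m = f2 / f1
    return m * m

# Example: a display MLA with three times the capture focal length
# gives a nine-fold viewing-resolution enhancement.
gain = resolution_gain(1.0, 3.0)
```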
15. Xiao X, Shen X, Martinez-Corral M, Javidi B. Multiple-Planes Pseudoscopic-to-Orthoscopic Conversion for 3D Integral Imaging Display. Journal of Display Technology 2015. DOI: 10.1109/jdt.2014.2387854.
16. Wang J, Xiao X, Hua H, Javidi B. Augmented Reality 3D Displays With Micro Integral Imaging. Journal of Display Technology 2015. DOI: 10.1109/jdt.2014.2361147.
17. Jang JY, Cho M. Orthoscopic real image reconstruction in integral imaging by rotating an elemental image based on the reference point of object space. Applied Optics 2015;54:5877-5881. PMID: 26193043. DOI: 10.1364/ao.54.005877.
Abstract
We propose a new approach for depth conversion of three-dimensional (3D) reconstructions from pseudoscopic to orthoscopic real images in resolution-priority integral imaging. In integral imaging, the depth of the scene is recorded in an elemental image array. In the proposed method, the depth information is converted by a 180° rotation of each elemental image in the array about a reference point of conversion, which is determined by a reference point in object space. Orthoscopic real images can then be reconstructed in 3D space by using the depth-converted elemental image array. The feasibility of the proposed method has been confirmed through preliminary experiments as well as ray-optical analysis.
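The core operation, rotating every elemental image by 180°, is straightforward to express with array slicing. A minimal sketch (this shows only the basic per-image rotation; the paper's choice of rotation center based on the object-space reference point is not modeled here, and the function name is hypothetical):

```python
import numpy as np

def rotate_elemental_images(eia, ei_h, ei_w):
    """Rotate each (ei_h x ei_w) elemental image inside the elemental
    image array by 180 degrees (flip both axes), which converts the
    recorded depth from pseudoscopic to orthoscopic ordering."""
    out = np.empty_like(eia)
    H, W = eia.shape[:2]
    for r in range(0, H, ei_h):
        for c in range(0, W, ei_w):
            # reversing both slice axes is a 180-degree rotation
            out[r:r + ei_h, c:c + ei_w] = eia[r:r + ei_h, c:c + ei_w][::-1, ::-1]
    return out

# 4x4 array holding four 2x2 elemental images.
eia = np.arange(16).reshape(4, 4)
rotated = rotate_elemental_images(eia, 2, 2)
```

Note that the rotation is applied per elemental image, not to the array as a whole; rotating the whole EIA would also swap the images' positions, which is a different transformation.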
18. Zhang J, Wang X, Chen Y, Zhang Q, Yu S, Yuan Y, Guo B. Feasibility study for pseudoscopic problem in integral imaging using negative refractive index materials. Optics Express 2014;22:20757-20769. PMID: 25321279. DOI: 10.1364/oe.22.020757.
Abstract
To solve the pseudoscopic problem, we propose a one-step integral imaging system with negative-refractive-index materials, which avoids the deterioration in resolution inherent to optical or digital two-step processes. Specifically, the proposed method is based on the defining feature of negative-refractive-index materials: bending light to a negative angle relative to the surface normal. The pseudoscopic imaging property of a negative-refractive-index material slab is theoretically investigated. For the formation of orthoscopic reconstructed images, the matching condition between the negative-index lens array and the positive-index lens array is deduced. Two conceptual prototypes of integral imaging systems with negative-refractive-index materials are designed. Experimental results show the validity of the proposed method. To the best of our knowledge, this is the first exploration of the application of negative-index materials to eliminating the pseudoscopic effect in integral imaging.
19. Martínez-Corral M, Dorado A, Navarro H, Saavedra G, Javidi B. Three-dimensional display by smart pseudoscopic-to-orthoscopic conversion with tunable focus. Applied Optics 2014;53:E19-E25. PMID: 25090349. DOI: 10.1364/ao.53.000e19.
Abstract
The original aim of the integral-imaging concept, reported by Gabriel Lippmann more than a century ago, was the capture of images of 3D scenes for their projection onto an autostereoscopic display. In this paper we report a new algorithm for the efficient generation of microimages for direct projection onto an integral-imaging monitor. Like our previous algorithm, the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm, this algorithm produces microimages ready for 3D display with full parallax. However, the new algorithm is much simpler than the previous one, produces microimages free of black pixels, and permits fixing at will, within certain limits, the reference plane and the field of view of the displayed 3D scene. Proofs of concept are illustrated with 3D capture and 3D display experiments.
20. Xiao X, Javidi B, Martinez-Corral M, Stern A. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Applied Optics 2013;52:546-560. PMID: 23385893. DOI: 10.1364/ao.52.000546.
Abstract
Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
Affiliation(s)
- Xiao Xiao, Electrical and Computer Engineering Department, University of Connecticut, Storrs, Connecticut 06269-4157, USA
21. Jung JH, Kim J, Lee B. Solution of pseudoscopic problem in integral imaging for real-time processing. Optics Letters 2013;38:76-78. PMID: 23282843. DOI: 10.1364/ol.38.000076.
Abstract
We propose a very effective method for converting pseudoscopic (PS) elemental images to orthoscopic ones, with an adjustable depth position of the reconstructed three-dimensional (3D) object, for real-time integral imaging (InIm). The proposed method is based on the interweaving process used in multi-view displays (MVD), taking into account the difference between the ray sampling methods of MVD and InIm. A simple transformation matrix formalism enables real-time conversion from the pickup image to the display image in InIm without the PS problem.
Affiliation(s)
- Jae-Hyun Jung, School of Electrical Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 151-744, South Korea
22. Li G, Kwon KC, Shin GH, Jeong JS, Yoo KH, Kim N. Simplified Integral Imaging Pickup Method for Real Objects Using a Depth Camera. Journal of the Optical Society of Korea 2012. DOI: 10.3807/josk.2012.16.4.381.
23. Xu Y, Wang X, Sun Y, Zhang J. Homogeneous light field model for interactive control of viewing parameters of integral imaging displays. Optics Express 2012;20:14137-14151. PMID: 22714478. DOI: 10.1364/oe.20.014137.
Abstract
A novel model for three-dimensional (3D) interactive control of the viewing parameters of integral imaging systems is established in this paper. Specifically, transformation matrices are derived in an extended homogeneous light field coordinate space based on the interactive-control requirements of integral imaging displays. In this model, new elemental images can be synthesized directly from those captured in the recording process to display 3D images with the expected viewing parameters, and no extra geometric information about the 3D scene is required in the synthesis process. Computer simulations and optical experimental results show that reconstructed 3D scenes with depth control, lateral translation, and rotation can be achieved.
Affiliation(s)
- Yin Xu, School of Technical Physics, Xidian University, Xi’an Shaanxi 710071, China