1. Wani P, Usmani K, Krishnan G, Javidi B. 3D object tracking using integral imaging with mutual information and Bayesian optimization. OPTICS EXPRESS 2024; 32:7495-7512. [PMID: 38439428] [DOI: 10.1364/oe.517312]
Abstract
Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame. Then, one bounding box can be selected out of many for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained using traditional 2D imaging systems. A growing literature demonstrates the advantage of using 3D integral imaging instead of traditional 2D imaging for object detection and visualization in adverse environmental conditions, and integral imaging's depth sectioning ability has also proven beneficial for these tasks. Integral imaging captures an object's depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for 3D reconstruction of the scene for object classification and utilizes the mutual information between the object's bounding box in this 3D reconstructed scene and the 2D central perspective to achieve passive depth estimation. We build on this method by using Bayesian optimization to track the object's depth with as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which corresponds to the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
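The mutual-information score at the heart of this method can be estimated from a joint histogram of the reconstructed depth slice and the 2D central perspective. The abstract does not give the authors' estimator, so the following is an illustrative numpy sketch of a standard plug-in MI estimate; the bin count is an arbitrary choice.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Plug-in estimate of mutual information (in nats) between two
    equally sized grayscale patches, via their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint probability table
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # skip empty cells: 0*log(0) = 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In a depth search, such a score would be evaluated for reconstructions at candidate depths; the depth whose reconstruction shares the most information with the central view is taken as the estimate.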
2. Krishnan G, Goswami S, Joshi R, Javidi B. Three-dimensional integral imaging-based image descattering and recovery using physics informed unsupervised CycleGAN. OPTICS EXPRESS 2024; 32:1825-1835. [PMID: 38297725] [DOI: 10.1364/oe.510830]
Abstract
Image restoration and denoising have been challenging problems in optics and computer vision. There has been active research in the optics and imaging communities to develop robust, data-efficient systems for image restoration tasks. Recently, physics-informed deep learning has received wide interest in scientific problems. In this paper, we introduce a three-dimensional integral imaging-based, physics-informed, unsupervised CycleGAN (cycle-consistent generative adversarial network) algorithm for underwater image descattering and recovery. The system consists of a forward and a backward pass. The base architecture consists of an encoder and a decoder. The encoder takes the clean image along with the depth map and the degradation parameters to produce the degraded image. The decoder takes the degraded image generated by the encoder along with the depth map and produces the clean image along with the degradation parameters. To give the input degradation parameters physical significance with respect to a physical model of the degradation, we also incorporate the physical model into the loss function. The proposed model is assessed on a dataset curated through underwater experiments at various levels of turbidity. In addition to recovering the original image from the degraded image, the proposed algorithm also helps to model the distribution from which the degraded images have been sampled. Furthermore, the proposed three-dimensional integral imaging approach is compared with the traditional deep learning-based approach and the 2D imaging approach under turbid and partially occluded environments. The results suggest the proposed approach is promising, especially under the above experimental conditions.
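The abstract does not spell out the physical degradation model, but a common choice for turbid water is a transmission-plus-veiling-light model in which scene radiance decays exponentially with depth while backscatter fills in. The sketch below is a hedged illustration of such a forward model; `beta` (attenuation) and `veil` (veiling light) are hypothetical parameters standing in for the paper's degradation parameters.

```python
import numpy as np

def degrade(clean, depth, beta=0.8, veil=0.7):
    """Forward degradation model commonly used for turbid/underwater
    scenes: direct transmission plus depth-dependent backscatter.
    clean: (H, W) image in [0, 1]; depth: (H, W) depth map in metres."""
    t = np.exp(-beta * depth)          # transmission falls with depth
    return clean * t + veil * (1 - t)  # attenuated signal + veiling light
```

A physics-informed loss term could then penalize the mismatch between the network's degraded output and `degrade(clean, depth, beta, veil)`.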
3. Wani P, Javidi B. 3D integral imaging depth estimation of partially occluded objects using mutual information and Bayesian optimization. OPTICS EXPRESS 2023; 31:22863-22884. [PMID: 37475387] [DOI: 10.1364/oe.492160]
Abstract
Integral imaging (InIm) is useful for passive ranging and 3D visualization of partially occluded objects. We consider 3D object localization within a scene and under occlusion. 2D localization can be achieved using machine learning and non-machine-learning-based techniques. These techniques aim to provide a 2D bounding box around each of the objects of interest. A recent study uses InIm for the 3D reconstruction of the scene with occlusions and utilizes mutual information (MI) between the bounding box in this 3D reconstructed scene and the corresponding bounding box in the central elemental image to achieve passive depth estimation of partially occluded objects. Here, we improve upon this InIm method by using Bayesian optimization to minimize the number of required 3D scene reconstructions. We evaluate the performance of the proposed approach by analyzing different kernel functions, acquisition functions, and parameter estimation algorithms for Bayesian optimization-based inference for simultaneous depth estimation of objects and occlusion. In our optical experiments, mutual information-based depth estimation with Bayesian optimization achieves depth estimation with a handful of 3D reconstructions. To the best of our knowledge, this is the first report to use Bayesian optimization for mutual information-based InIm depth estimation.
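Bayesian optimization replaces an exhaustive depth sweep with a surrogate model that decides where to reconstruct next. As a toy illustration (not the authors' implementation), the following numpy sketch runs 1-D Bayesian optimization with a Gaussian-process surrogate and an upper-confidence-bound acquisition; the RBF kernel, its length scale, and the UCB weight are arbitrary choices, and `objective` stands in for the MI score of a reconstruction at depth `z`.

```python
import numpy as np

def rbf(a, b, length=10.0):
    """Squared-exponential kernel between two 1-D arrays of depths."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def bayes_opt_depth(objective, z_lo, z_hi, n_iter=8, noise=1e-6, seed=0):
    """Minimal 1-D Bayesian optimisation: GP surrogate + UCB acquisition."""
    rng = np.random.default_rng(seed)
    cand = np.linspace(z_lo, z_hi, 200)          # candidate depths
    zs = list(rng.uniform(z_lo, z_hi, size=2))   # two initial probes
    ys = [objective(z) for z in zs]
    for _ in range(n_iter):
        Z, Y = np.array(zs), np.array(ys)
        K = rbf(Z, Z) + noise * np.eye(len(Z))   # jitter for stability
        Ks = rbf(cand, Z)
        mu = Ks @ np.linalg.solve(K, Y)          # GP posterior mean
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
        z_next = float(cand[np.argmax(ucb)])     # most promising depth
        zs.append(z_next)
        ys.append(objective(z_next))
    return zs[int(np.argmax(ys))]                # best depth sampled
```

Each iteration costs one scene reconstruction, so the sample efficiency of the acquisition function translates directly into fewer reconstructions.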
4. Usmani K, O'Connor T, Wani P, Javidi B. 3D object detection through fog and occlusion: passive integral imaging vs active (LiDAR) sensing. OPTICS EXPRESS 2023; 31:479-491. [PMID: 36606982] [DOI: 10.1364/oe.478125]
Abstract
In this paper, we address the problem of object recognition in degraded environments including fog and partial occlusion. Both long wave infrared (LWIR) imaging systems and LiDAR (time-of-flight) imaging systems using Azure Kinect, which combine conventional visible and lidar sensing information, have previously been demonstrated for object recognition in ideal conditions. However, the object detection performance of Azure Kinect depth imaging systems may decrease significantly in adverse weather conditions such as fog, rain, and snow. Fog degrades the depth images of the Azure Kinect camera and the overall visibility of RGBD images (fused RGB and depth images), which can make object recognition tasks challenging. LWIR imaging may avoid these issues of lidar-based imaging systems. However, due to the poor spatial resolution of LWIR cameras, thermal imaging provides limited textural information within a scene and hence may fail to provide adequate discriminatory information to distinguish between objects of similar texture, shape, and size. To improve object detection in fog and occlusion, we use a three-dimensional (3D) integral imaging (InIm) system with a visible range camera. 3D InIm provides depth information, mitigates occlusion and fog in front of the object, and improves object recognition capabilities. For object recognition, the YOLOv3 neural network is used for each of the tested imaging systems. Since the concentration of fog affects the images from different sensors (visible, LWIR, and Azure Kinect depth cameras) in different ways, we compared the performance of the network on these images in terms of average precision and average miss rate. For the experiments we conducted, the results indicate that in degraded environments 3D InIm using visible range cameras can provide better image reconstruction than the LWIR camera and the Azure Kinect RGBD camera, and therefore it may improve the detection accuracy of the network. To the best of our knowledge, this is the first report comparing the performance of object detection between a passive integral imaging system and active (LiDAR) sensing in degraded environments such as fog and partial occlusion.
5. Wani P, Krishnan G, O'Connor T, Javidi B. Information theoretic performance evaluation of 3D integral imaging. OPTICS EXPRESS 2022; 30:43157-43171. [PMID: 36523020] [DOI: 10.1364/oe.475086]
Abstract
Integral imaging (InIm) has proved useful for three-dimensional (3D) object sensing, visualization, and classification of partially occluded objects. This paper presents an information-theoretic approach for simulating and evaluating the integral imaging capture and reconstruction process. We utilize mutual information (MI) as a metric for evaluating the fidelity of the reconstructed 3D scene. We also consider passive depth estimation using mutual information. We apply this formulation to optimal pitch estimation for integral imaging capture and reconstruction to maximize the longitudinal resolution. The effect of partial occlusion on integral imaging 3D reconstruction using mutual information is evaluated. Computer simulation tests and experiments are presented.
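The capture-and-reconstruction process whose fidelity is being evaluated is commonly implemented by shifting each elemental image in proportion to its grid index and the chosen depth plane, then averaging. The sketch below is a simplified numpy illustration: it uses periodic `np.roll` shifts instead of proper cropping, and the `pitch_px`/`depth_ratio` parametrization is an assumption for illustration, not the paper's notation.

```python
import numpy as np

def reconstruct_plane(elemental, pitch_px, depth_ratio):
    """Computational integral-imaging reconstruction by shift-and-sum.
    elemental: dict mapping (row, col) grid index -> 2-D image array.
    pitch_px: camera pitch expressed in sensor pixels.
    depth_ratio: gap over reconstruction depth; objects at that depth
    realign across views, while content at other depths blurs out."""
    shift = pitch_px * depth_ratio
    acc, count = None, 0
    for (r, c), img in elemental.items():
        dy, dx = int(round(r * shift)), int(round(c * shift))
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        acc = shifted if acc is None else acc + shifted
        count += 1
    return acc / count
```

Sweeping `depth_ratio` produces the stack of depth slices over which an MI-based fidelity or depth-estimation criterion can be evaluated.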
6. Wani P, Usmani K, Krishnan G, O'Connor T, Javidi B. Lowlight object recognition by deep learning with passive three-dimensional integral imaging in visible and long wave infrared wavelengths. OPTICS EXPRESS 2022; 30:1205-1218. [PMID: 35209285] [DOI: 10.1364/oe.443657]
Abstract
Traditionally, long wave infrared imaging has been used in photon-starved conditions for object detection and classification. We investigate passive three-dimensional (3D) integral imaging (InIm) in the visible spectrum for object classification using deep neural networks in photon-starved conditions and under partial occlusion. We compare the proposed passive 3D InIm operating in the visible domain with long wave infrared sensing in both 2D and 3D imaging cases for object classification in degraded conditions. This comparison is based on average precision, recall, and miss rates. Our experimental results demonstrate that cold and hot object classification using 3D InIm in the visible spectrum may outperform both 2D and 3D imaging implemented in the long wave infrared spectrum for photon-starved and partially occluded scenes. While these experiments are not comprehensive, they demonstrate the potential of 3D InIm in the visible spectrum for low light applications. Imaging in the visible spectrum provides higher spatial resolution, more compact optics, and lower cost hardware compared with long wave infrared imaging. In addition, the higher spatial resolution obtained in the visible spectrum can improve object classification accuracy. Our experimental results provide a proof of concept for implementing visible spectrum imaging in place of traditional LWIR spectrum imaging for certain object recognition tasks.
7. Krishnan G, Joshi R, O'Connor T, Javidi B. Optical signal detection in turbid water using multidimensional integral imaging with deep learning. OPTICS EXPRESS 2021; 29:35691-35701. [PMID: 34808998] [DOI: 10.1364/oe.440114]
Abstract
Optical signal detection in turbid and occluded environments is a challenging task due to light scattering and beam attenuation inside the medium. Three-dimensional (3D) integral imaging is an imaging approach that integrates two-dimensional images from multiple perspectives and has proved useful in challenging conditions such as occlusion and turbidity. In this manuscript, we present an approach for the detection of optical signals in turbid water and occluded environments using multidimensional integral imaging employing temporal encoding with deep learning. In our experiments, an optical signal is temporally encoded with a Gold code and transmitted through turbid water via a light-emitting diode (LED). A camera array captures videos of the optical signals from multiple perspectives and performs 3D reconstruction of the temporal signal. A convolutional neural network-based bidirectional long short-term memory (CNN-BiLSTM) network is trained on clear water video sequences to classify the binary transmitted signal. The testing data was collected in turbid water scenes with partial signal occlusion, and sliding-window CNN-BiLSTM-based classification was performed on the reconstructed 3D video data to detect the encoded binary data sequence. The proposed approach is compared to previously presented correlation-based detection models. Furthermore, we compare 3D integral imaging to conventional two-dimensional (2D) imaging for signal detection using the proposed deep learning strategy. The experimental results show that the multidimensional integral imaging-based methodology significantly outperforms the previously reported approaches and conventional 2D sensing-based methods. To the best of our knowledge, this is the first report on underwater signal detection using multidimensional integral imaging with deep neural networks.
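The correlation-based baseline the CNN-BiLSTM is compared against can be sketched as a matched filter: slide the known bipolar code over the received intensity trace and take the lag with the largest correlation. This is a hedged, generic illustration of correlation detection, not the authors' exact detector.

```python
import numpy as np

def detect_code(received, code):
    """Correlation-based detection of a binary template in a noisy
    1-D intensity signal: correlate at every lag, return best start."""
    c = 2.0 * np.asarray(code, dtype=float) - 1.0  # {0,1} -> {-1,+1}
    r = np.asarray(received, dtype=float)
    r = r - r.mean()                               # remove DC offset
    corr = np.correlate(r, c, mode="valid")        # matched filter
    return int(np.argmax(corr))
```

Spreading codes such as Gold codes are attractive here because their sharp autocorrelation peak keeps this detector's sidelobes low even under attenuation.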
8. Krishnan G, Huang Y, Joshi R, O'Connor T, Javidi B. Spatio-temporal continuous gesture recognition under degraded environments: performance comparison between 3D integral imaging (InIm) and RGB-D sensors. OPTICS EXPRESS 2021; 29:30937-30951. [PMID: 34614809] [DOI: 10.1364/oe.438110]
Abstract
In this paper, we introduce a deep learning-based spatio-temporal continuous human gesture recognition algorithm under degraded conditions using three-dimensional (3D) integral imaging. The proposed system is shown to be an efficient continuous human gesture recognition system for degraded environments such as partial occlusion. In addition, we compare the performance of 3D integral imaging-based sensing and RGB-D sensing for continuous gesture recognition under degraded environments. Captured 3D data serves as the input to a You Only Look Once (YOLOv2) neural network for hand detection. Then, a temporal segmentation algorithm is employed to segment the individual gestures from a continuous video sequence. Following segmentation, the output is fed to a convolutional neural network-based bidirectional long short-term memory network (CNN-BiLSTM) for gesture classification. Our experimental results suggest that the proposed deep learning-based spatio-temporal continuous human gesture recognition provides substantial improvement over both RGB-D sensing and conventional 2D imaging systems. To the best of our knowledge, this is the first report of 3D integral imaging-based continuous human gesture recognition with deep learning and the first comparison between 3D integral imaging and RGB-D sensors for this task.
9. Usmani K, O'Connor T, Javidi B. Three-dimensional polarimetric image restoration in low light with deep residual learning and integral imaging. OPTICS EXPRESS 2021; 29:29505-29517. [PMID: 34615059] [DOI: 10.1364/oe.435900]
Abstract
Polarimetric imaging can become challenging in degraded environments such as low light illumination conditions or partial occlusion. In this paper, we propose a denoising convolutional neural network (DnCNN) model with three-dimensional (3D) integral imaging to enhance the reconstructed image quality of polarimetric imaging in degraded environments such as low light and partial occlusion. The DnCNN is trained on the physical model of image capture in degraded environments to enhance the visualization of polarimetric imaging, where simulated low light polarimetric images are used in the training process. The DnCNN model is experimentally tested on real polarimetric images captured in low light environments and under partial occlusion. The performance of the DnCNN model is compared with that of total variation denoising. Experimental results demonstrate that DnCNN performs better than total variation denoising for polarimetric integral imaging in terms of signal-to-noise ratio and structural similarity index measure in low light environments, with and without partial occlusion. To the best of our knowledge, this is the first report of polarimetric 3D object visualization and restoration in low light environments and occlusions using DnCNN with integral imaging. The proposed approach is also useful for 3D image restoration in conventional (non-polarimetric) integral imaging in degraded environments.
10. Usmani K, Krishnan G, O'Connor T, Javidi B. Deep learning polarimetric three-dimensional integral imaging object recognition in adverse environmental conditions. OPTICS EXPRESS 2021; 29:12215-12228. [PMID: 33984986] [DOI: 10.1364/oe.421287]
Abstract
Polarimetric imaging is useful for object recognition and material classification because of its ability to discriminate objects based on the polarimetric signatures of materials. Polarimetric imaging of an object captures important physical properties such as shape and surface properties and can be effective even in low light environments. Integral imaging is a passive three-dimensional (3D) imaging approach that takes advantage of multiple 2D imaging perspectives to perform 3D reconstruction. In this paper, we propose unified polarimetric detection and classification of objects in degraded environments such as low light and the presence of occlusion. This task is accomplished using a deep learning model for 3D polarimetric integral imaging data captured in the visible spectral domain. The neural network system is designed and trained for 3D object detection and classification using polarimetric integral images. We compare the detection and classification results between polarimetric and non-polarimetric 2D and 3D imaging. The system performance in degraded environmental conditions is evaluated using average miss rate, average precision, and F-1 score. The results indicate that, for the experiments we have performed, polarimetric 3D integral imaging outperforms 2D polarimetric imaging as well as non-polarimetric 2D and 3D imaging for object recognition in adverse conditions such as low light and occlusions. To the best of our knowledge, this is the first report of polarimetric 3D object recognition in low light environments and occlusions using deep learning-based integral imaging. The proposed approach is attractive because low light polarimetric object recognition in the visible spectral band benefits from much higher spatial resolution, more compact optics, and lower system cost compared with long wave infrared imaging, which is the conventional imaging approach for low light environments.
11. Javidi B, Carnicer A, Arai J, Fujii T, Hua H, Liao H, Martínez-Corral M, Pla F, Stern A, Waller L, Wang QH, Wetzstein G, Yamaguchi M, Yamamoto H. Roadmap on 3D integral imaging: sensing, processing, and display. OPTICS EXPRESS 2020; 28:32266-32293. [PMID: 33114917] [DOI: 10.1364/oe.402193]
Abstract
This Roadmap article provides an overview of some of the research activities in the field of three-dimensional integral imaging. The article discusses various aspects of the field, including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from experts presenting various aspects of the field: sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents the vision of its author in describing the progress, potential, vision, and challenging issues in this field.
12. Krishnan G, Joshi R, O'Connor T, Pla F, Javidi B. Human gesture recognition under degraded environments using 3D-integral imaging and deep learning. OPTICS EXPRESS 2020; 28:19711-19725. [PMID: 32672242] [DOI: 10.1364/oe.396339]
Abstract
In this paper, we propose a spatio-temporal human gesture recognition algorithm under degraded conditions using three-dimensional integral imaging and deep learning. The proposed algorithm leverages the advantages of integral imaging with deep learning to provide an efficient human gesture recognition system under degraded environments such as occlusion and low illumination conditions. The 3D data captured using integral imaging serves as the input to a convolutional neural network (CNN). The spatial features extracted by the convolutional and pooling layers of the neural network are fed into a bi-directional long short-term memory (BiLSTM) network. The BiLSTM network is designed to capture the temporal variation in the input data. We have compared the proposed approach with conventional 2D imaging and with the previously reported approaches using spatio-temporal interest points with support vector machines (STIP-SVMs) and distortion invariant non-linear correlation-based filters. Our experimental results suggest that the proposed approach is promising, especially in degraded environments. Using the proposed approach, we find a substantial improvement over previously published methods and find 3D integral imaging to provide superior performance over the conventional 2D imaging system. To the best of our knowledge, this is the first report that examines deep learning algorithms based on 3D integral imaging for human activity recognition in degraded environments.
13. Usmani K, O'Connor T, Shen X, Marasco P, Carnicer A, Dey D, Javidi B. Three-dimensional polarimetric integral imaging in photon-starved conditions: performance comparison between visible and long wave infrared imaging. OPTICS EXPRESS 2020; 28:19281-19294. [PMID: 32672208] [DOI: 10.1364/oe.395301]
Abstract
Three-dimensional (3D) polarimetric integral imaging (InIm) for extracting the 3D polarimetric information of objects in photon-starved conditions is investigated using a low noise visible range camera and a long wave infrared (LWIR) range camera, and the performance of the two sensors is compared. Stokes polarization parameters and degree of polarization (DoP) are calculated to extract the polarimetric information of the 3D scene, while integral imaging reconstruction provides depth information and improves the performance of low-light imaging tasks. An LWIR wire grid polarizer and a linear polarizer film are used as polarimetric objects for the LWIR range and visible range cameras, respectively. To account for the limited number of photons per pixel using the visible range camera in low light conditions, we apply a mathematical restoration model to each elemental image of the visible camera to enhance the signal. We show that the low noise visible range camera may outperform the LWIR camera in detection of polarimetric objects under low illumination conditions. Our experiments indicate that for 3D polarimetric measurements under photon-starved conditions, visible range sensing may produce a signal-to-noise ratio (SNR) that is not lower than that of LWIR range sensing. We derive the probability density function (PDF) of the 2D and 3D DoP images and show that the theoretical model agrees with the experimentally obtained results. To the best of our knowledge, this is the first report comparing the polarimetric imaging performance between visible range and infrared (IR) range sensors under photon-starved conditions, together with the relevant statistical models of 3D polarimetric integral imaging.
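The linear Stokes parameters and degree of polarization used here can be computed from four intensity images captured behind a linear polarizer at 0°, 45°, 90°, and 135°. A minimal numpy sketch of that standard reduction (the circular component S3 is omitted, so this is the degree of linear polarization):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarisation from
    four intensities measured behind a polariser at 0/45/90/135 deg."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45 degrees
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dop
```

Applied pixel-wise to reconstructed depth slices rather than raw 2D captures, the same reduction yields the 3D DoP images whose statistics the paper models.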
14. Scrofani G, Sola-Pikabea J, Llavador A, Sanchez-Ortiga E, Barreiro JC, Saavedra G, Garcia-Sucerquia J, Martínez-Corral M. FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples. BIOMEDICAL OPTICS EXPRESS 2018; 9:335-346. [PMID: 29359107] [PMCID: PMC5772586] [DOI: 10.1364/boe.9.000335]
Abstract
In this work, the Fourier integral microscope (FIMic), an ultimate design of 3D-integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the microscope objective of the host microscope, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold enhanced depth of field and [Formula: see text]-time better spatial resolution in comparison with conventional integral microscopy are reported. Our claims are supported by theoretical analysis and experimental images of a resolution test target, cotton fibers, and in-vivo 3D imaging of biological specimens.
Affiliation(s)
- G. Scrofani, Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- J. Sola-Pikabea, Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- A. Llavador, Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- E. Sanchez-Ortiga, Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- J. C. Barreiro, Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- G. Saavedra, Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- J. Garcia-Sucerquia, Universidad Nacional de Colombia, Sede Medellin, School of Physics, A.A. 3840 Medellín 050034, Colombia
15. Stevens RF, Davies N, Milnethorpe G. Lens arrays and optical system for orthoscopic three-dimensional imaging. IMAGING SCIENCE JOURNAL 2016. [DOI: 10.1080/13682199.2001.11784378]
16. Xiao X, Shen X, Martinez-Corral M, Javidi B. Multiple-Planes Pseudoscopic-to-Orthoscopic Conversion for 3D Integral Imaging Display. JOURNAL OF DISPLAY TECHNOLOGY 2015. [DOI: 10.1109/jdt.2014.2387854]
17. Tolosa Á, Martinez-Cuenca R, Navarro H, Saavedra G, Martínez-Corral M, Javidi B, Pons A. Enhanced field-of-view integral imaging display using multi-Köhler illumination. OPTICS EXPRESS 2014; 22:31853-31863. [PMID: 25607153] [DOI: 10.1364/oe.22.031853]
Abstract
A common drawback of 3D integral imaging displays is the appearance of pseudoimages beyond the viewing angle. These pseudoimages appear when the light rays coming from an elemental image do not pass through the corresponding microlens, and a set of barriers must be used to avoid this flipping effect. We present a purely optical arrangement based on Köhler illumination that generates these barriers and thus avoids the pseudoimages. The proposed system does not use additional lenses to project the elemental images, so no optical aberrations are introduced. As an added benefit, Köhler illumination provides a higher contrast 3D display.
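The viewing angle beyond which flipping occurs is set by the elemental-image pitch and the gap between the microimages and the microlenses. As a rough geometric illustration (a textbook estimate under a pinhole model, not this paper's derivation):

```python
import math

def viewing_angle_deg(pitch_mm, gap_mm):
    """Geometric viewing angle of an InIm display: each elemental image
    of width `pitch_mm` is seen through its own microlens at distance
    `gap_mm`; outside this cone, rays reach neighbouring lenses and the
    flipped pseudoimages appear."""
    return math.degrees(2.0 * math.atan(pitch_mm / (2.0 * gap_mm)))
```

The optical barriers described above block exactly the rays that fall outside this cone, which is why they suppress the pseudoimages without extra projection lenses.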
18. Martínez-Corral M, Dorado A, Navarro H, Saavedra G, Javidi B. Three-dimensional display by smart pseudoscopic-to-orthoscopic conversion with tunable focus. APPLIED OPTICS 2014; 53:E19-25. [PMID: 25090349] [DOI: 10.1364/ao.53.000e19]
Abstract
The original aim of the integral-imaging concept, reported by Gabriel Lippmann more than a century ago, was the capture of images of 3D scenes for their projection onto an autostereoscopic display. In this paper we report a new algorithm for the efficient generation of microimages for direct projection onto an integral-imaging monitor. Like our previous algorithm, the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm, this algorithm produces microimages ready for 3D display with full parallax. However, the new algorithm is much simpler than the previous one, produces microimages free of black pixels, and permits fixing at will, within certain limits, the reference plane and the field of view of the displayed 3D scene. Proofs of concept are illustrated with 3D capture and 3D display experiments.
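The baseline pseudoscopic-to-orthoscopic conversion that SPOC generalizes can be sketched very compactly: rotating every microimage by 180° within its own cell converts the displayed scene from pseudoscopic (depth-inverted) to orthoscopic. This is the classical pixel mapping, shown for orientation only; it is not the new algorithm reported here:

```python
import numpy as np

def po_convert(microimages):
    """Classical pseudoscopic-to-orthoscopic conversion: rotate each
    elemental/micro-image by 180 degrees in place on the lens grid.
    microimages: array of shape (rows, cols, h, w)."""
    return microimages[:, :, ::-1, ::-1]
```

SPOC-style algorithms go further by resampling pixels across cells, which is what allows the reference plane and field of view to be chosen at will.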
19. Navarro Fructuoso H, Martinez-Corral M, Saavedra Tortosa G, Pons Marti A, Javidi B. Photoelastic Analysis of Partially Occluded Objects With an Integral-Imaging Polariscope. JOURNAL OF DISPLAY TECHNOLOGY 2014. [DOI: 10.1109/jdt.2013.2287767]
20. Wu C, Wang Q, Wang H, Lan J. Spatial-resolution analysis and optimal design of integral imaging. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A 2013; 30:2328-2333. [PMID: 24322932] [DOI: 10.1364/josaa.30.002328]
Abstract
Integral imaging is a promising technology for 3D imaging and display. This paper reports 3D spatial-resolution research based on the reconstructed 3D space. Through geometric analysis of the reconstructed optical distribution from all the elemental images that take part in recording, the relationship among the microlens parameters, the planar-recording resolution, and the 3D spatial resolution is obtained. The effect of microlens parameter accuracy on the reconstructed position error is also discussed. The research was carried out on a depth-priority integral imaging (DPII) system. The results can be used in the optimal design of integral imaging.
21
Makanjuola JK, Aggoun A, Swash M, Grange PCR, Challacombe B, Dasgupta P. 3D-holoscopic imaging: a new dimension to enhance imaging in minimally invasive therapy in urologic oncology. Journal of Endourology 2013;27:535-9. [PMID: 23216303] [PMCID: PMC3643331] [DOI: 10.1089/end.2012.0368]
Abstract
BACKGROUND AND PURPOSE: Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine (DICOM) data taken from CT and MRI to produce 3D-holographic representations of anatomy, viewable without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. The technology is based on physical principles of the duplication of light fields: the 3D content is captured in real time and viewed by multiple viewers independently of their position, without 3D eyewear.
METHODS: We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear.
RESULTS: Medical 3D-holoscopic content can be displayed on a commercially available multiview autostereoscopic display.
CONCLUSION: The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging.
22
Arai J, Kawakita M, Yamashita T, Sasaki H, Miura M, Hiura H, Okui M, Okano F. Integral three-dimensional television with video system using pixel-offset method. Optics Express 2013;21:3474-3485. [PMID: 23481805] [DOI: 10.1364/oe.21.003474]
Abstract
Integral three-dimensional (3D) television based on integral imaging requires huge amounts of information. Previously, we constructed an integral 3D television using Super Hi-Vision (SHV) technology, with 7680 pixels horizontally and 4320 pixels vertically. Here we report improved image quality through the development of a video system with the equivalent of 8000 scan lines for use with integral 3D television. We conducted experiments to evaluate the resolution of 3D images and show that the pixel-offset method eliminates the aliasing produced by full-resolution SHV video equipment. We confirmed that applying the pixel-offset method to integral 3D television is effective in increasing the resolution of reconstructed images.
Affiliation(s)
- Jun Arai
- Science and Technology Research Laboratories, NHK (Japan Broadcasting Corporation), 1-10-11 Kinuta, Setagaya, Tokyo 1578510, Japan.
23
Xiao X, Javidi B, Martinez-Corral M, Stern A. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Applied Optics 2013;52:546-60. [PMID: 23385893] [DOI: 10.1364/ao.52.000546]
Abstract
Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
Affiliation(s)
- Xiao Xiao
- Electrical and Computer Engineering Department, University of Connecticut, Storrs, Connecticut 06269-4157, USA
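The computational reconstruction underlying several of the applications surveyed in this review (for example, 3D imaging of occluded objects) is commonly implemented by back-projecting and averaging the elemental images at a candidate depth. Below is a minimal shift-and-sum sketch for a 1D row of elemental images; the function and the parameters `pitch_px`, `gap`, and `z` are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def reconstruct_plane(elemental, pitch_px, gap, z):
    """Shift-and-sum reconstruction of one depth plane from a 1D row of
    elemental images. `elemental` has shape (K, H, W); `pitch_px` is the
    elemental-image pitch in pixels, `gap` the lens-to-sensor distance,
    and `z` the reconstruction depth (same length unit). Names are
    illustrative, not from the cited papers."""
    K, H, W = elemental.shape
    # Disparity between adjacent elemental images when refocusing at depth z.
    d = int(round(pitch_px * gap / z))
    out_w = W + d * (K - 1)
    acc = np.zeros((H, out_w))
    cnt = np.zeros((H, out_w))
    for k in range(K):
        acc[:, k * d : k * d + W] += elemental[k]   # back-project view k
        cnt[:, k * d : k * d + W] += 1.0            # count overlapping views
    return acc / np.maximum(cnt, 1.0)               # average the overlaps
```

Objects at depth z add coherently under this shift, while objects at other depths (and partial occluders) blur out, which is the depth-sectioning property the review highlights.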
24
Cho M, Javidi B. Three-dimensional photon counting integral imaging using moving array lens technique. Optics Letters 2012;37:1487-1489. [PMID: 22555713] [DOI: 10.1364/ol.37.001487]
Abstract
In this Letter, we present three-dimensional (3D) photon counting integral imaging using the moving array-lens technique (MALT) to improve the visualization of a reconstructed 3D scene. In the 3D scene reconstruction of photon counting integral imaging, techniques such as maximum likelihood estimation may be used. However, the visual quality depends on the number of scene photons or detector pixels activated by photons. We show that MALT may improve the viewing resolution of integral imaging for a reconstructed 3D scene under photon-starved conditions.
Affiliation(s)
- Myungjin Cho
- Electrical and Computer Engineering Department, University of Connecticut, Storrs, Connecticut 06269, USA
25
Navarro H, Barreiro JC, Saavedra G, Martínez-Corral M, Javidi B. High-resolution far-field integral-imaging camera by double snapshot. Optics Express 2012;20:890-5. [PMID: 22274435] [DOI: 10.1364/oe.20.000890]
Abstract
In multi-view three-dimensional imaging, to capture the elemental images of distant objects, the use of a field-like lens that projects the reference plane onto the microlens array is necessary. In this case, the spatial resolution of reconstructed images is equal to the spatial density of microlenses in the array. In this paper we report a simple method, based on the realization of double snapshots, to double the 2D pixel density of reconstructed scenes. Experiments are reported to support the proposed approach.
Affiliation(s)
- H Navarro
- Department of Optics, University of Valencia, Burjassot, Spain
26
27
Park JH, Jeong KM. Frequency domain depth filtering of integral imaging. Optics Express 2011;19:18729-18741. [PMID: 21935243] [DOI: 10.1364/oe.19.018729]
Abstract
A novel technique for the depth filtering of integral imaging is proposed. Integral imaging captures the spatio-angular distribution of light rays, which delivers three-dimensional information about the object scene. The proposed method performs the filtering operation in the frequency domain of the captured spatio-angular light-ray distribution, achieving depth-selective reconstruction. Grating projection further enhances the depth-discrimination performance. The principle is verified experimentally.
Affiliation(s)
- Jae-Hyeung Park
- School of Electrical & Computer Engineering, Chungbuk National University, Chungbuk, Korea.
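The idea behind frequency-domain depth filtering can be illustrated on an epipolar image (views x pixels): a point at disparity d concentrates its spectrum near the line f_view = -d * f_x, so keeping a band around that line selects a single depth. The sketch below is a toy version of this principle, not the authors' exact algorithm; the function name, the `d_keep` parameter, and the bandwidth `bw` are illustrative assumptions.

```python
import numpy as np

def depth_filter_epi(epi, d_keep, bw=0.05):
    """Depth-selective filtering of an epipolar image `epi` (views x pixels).
    A feature with integer disparity d appears shifted by d per view, so its
    2D spectrum lies on the line f_view = -d * f_x; masking a narrow band
    around the line for d = d_keep passes that depth and rejects others."""
    U, X = epi.shape
    F = np.fft.fft2(epi)
    fu = np.fft.fftfreq(U)[:, None]   # frequency along the view axis
    fx = np.fft.fftfreq(X)[None, :]   # spatial frequency along x
    mask = np.abs(fu + d_keep * fx) < bw
    return np.real(np.fft.ifft2(F * mask))
```

With two features at disparities 1 and 3 superposed, filtering at d_keep=1 retains the first and strongly attenuates the second, which is the depth-selective reconstruction the abstract describes.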
28
Arai J, Okano F, Kawakita M, Okui M, Haino Y, Yoshimura M, Furuya M, Sato M. Integral three-dimensional television using a 33-megapixel imaging system. Journal of Display Technology 2010. [DOI: 10.1109/jdt.2010.2050192]
29
Park JH, Hong K, Lee B. Recent progress in three-dimensional information processing based on integral imaging. Applied Optics 2009;48:H77-94. [PMID: 19956305] [DOI: 10.1364/ao.48.000h77]
Abstract
Recently developed integral imaging techniques are reviewed. Integral imaging captures and reproduces the light rays from the object space, enabling the acquisition and the display of the three-dimensional information of the object in an efficient way. Continuous effort on integral imaging has been improving the performance of the capture and display process in various aspects, including distortion, resolution, viewing angle, and depth range. Digital data processing of the captured light rays can now visualize the three-dimensional structure of the object with a high degree of freedom and enhanced quality. This recent progress is of high interest for both industrial applications and academic research.
Affiliation(s)
- Jae-Hyeung Park
- School of Electrical & Computer Engineering, Chungbuk National University, 410 SungBong-Ro, Heungduk-Gu, Cheongju-Si, Chungbuk, 361-763, Korea
30
Kim Y, Park G, Jung JH, Kim J, Lee B. Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array. Applied Optics 2009;48:2178-2187. [PMID: 19363558] [DOI: 10.1364/ao.48.002178]
Abstract
We propose a color moiré pattern simulation and analysis method in integral imaging for finding the moiré-reducing tilt angle of a lens array. For each tilt angle, the color moiré patterns are simulated under the assumption of ray optics. The spatial frequencies of the color moiré patterns are analyzed numerically using a spatial Fourier transform to find the optimal angle at which the moiré is reduced. The proposed technique enables the visualization and analysis of the color moiré pattern, and moiré-reduced three-dimensional images can be displayed. The principle of the proposed method, simulation results, and their analysis are provided. Experimental results verify the validity of the proposed method.
Affiliation(s)
- Yunhee Kim
- School of Electrical Engineering, Seoul National University, Gwanak-Gu Gwanakro 599, Seoul 151-744, Korea
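The angle search this entry describes, simulating the moiré pattern versus lens-array tilt and picking the tilt with the least visible moiré, can be illustrated with a toy monochrome model: multiply a pixel grating by a tilted lens grating, measure the low-spatial-frequency (visible beat) energy with an FFT, and scan candidate angles. The pitches, window size, and angle range below are illustrative assumptions, not the cited paper's parameters.

```python
import numpy as np

def moire_energy(angle_deg, lens_pitch=7.0, pixel_pitch=6.0, n=256):
    """Low-frequency (visible moiré) energy of the product of a pixel grid
    and a lens-array grating tilted by `angle_deg`; a ray-optics toy model
    in the spirit of the cited simulation."""
    y, x = np.mgrid[0:n, 0:n].astype(float)
    pixels = 0.5 + 0.5 * np.cos(2 * np.pi * x / pixel_pitch)
    t = np.deg2rad(angle_deg)
    xr = x * np.cos(t) + y * np.sin(t)            # rotate the lens grating
    lenses = 0.5 + 0.5 * np.cos(2 * np.pi * xr / lens_pitch)
    F = np.fft.fftshift(np.abs(np.fft.fft2(pixels * lenses)))
    c = n // 2
    lo = F[c - 8 : c + 9, c - 8 : c + 9].copy()   # near-DC window
    lo[8, 8] = 0.0                                # ignore the DC term itself
    return float(np.sum(lo ** 2))

# Scan candidate tilt angles and keep the one with the least visible moiré.
angles = np.linspace(0.0, 45.0, 91)
best = angles[int(np.argmin([moire_energy(a) for a in angles]))]
```

At 0 degrees the two gratings beat at a very low frequency and the moiré is strong; tilting moves the beat to higher, less visible frequencies, which is what the argmin scan exploits.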
31
Jung JH, Kim Y, Kim Y, Kim J, Hong K, Lee B. Integral imaging system using an electroluminescent film backlight for three-dimensional-two-dimensional convertibility and a curved structure. Applied Optics 2009;48:998-1007. [PMID: 19209217] [DOI: 10.1364/ao.48.000998]
Abstract
We propose a thin and compact integral imaging system that uses electroluminescent (EL) films as a backlight. An EL film has the advantage that it continues to operate even when it is cut or punctured. Using this characteristic, we generate an array of pinholes on an EL film to form a point-light-source array for reconstructing three-dimensional (3D) images based on integral imaging. The EL pinhole film is attached to another EL film, and the two are electrically controlled to generate either a point-light-source array or a surface light source; hence, the system converts between 3D and two-dimensional (2D) modes. Taking advantage of the flexibility of EL films, we also propose a flexible 3D/2D convertible integral imaging system with a wide viewing angle that uses a curved EL film. We explain the principle of the proposed methods and present experimental results.
Affiliation(s)
- Jae-Hyun Jung
- School of Electrical Engineering, Seoul National University, Gwanak-Gu Sillim-Dong, Seoul 151-744, South Korea
32
Kim Y, Choi H, Kim J, Cho SW, Kim Y, Park G, Lee B. Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers. Applied Optics 2007;46:3766-73. [PMID: 17538673] [DOI: 10.1364/ao.46.003766]
Abstract
A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. To implement the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system for integral imaging. Since the transparencies of the PDLC films are electrically controllable, each film can be made to diffuse the projected light in succession at a different depth from the lens array. As a result, the proposed method enables electrical control of the location of the image planes and enhances the expressible depth. The principle of the proposed method is described, and experimental results are presented.
Affiliation(s)
- Yunhee Kim
- School of Electrical Engineering, Seoul National University, Kwanak-Gu Shinlim-Dong, Seoul, Korea
33
Martínez-Cuenca R, Saavedra G, Pons A, Javidi B, Martínez-Corral M. Facet braiding: a fundamental problem in integral imaging. Optics Letters 2007;32:1078-80. [PMID: 17410241] [DOI: 10.1364/ol.32.001078]
Abstract
A rigorous explanation of a phenomenon that produces significant distortions in the three-dimensional images produced by integral imaging systems is provided. The phenomenon, which we refer to as the facet-braiding effect, has been recognized in some previous publications, but to our knowledge its nature has never been analyzed. We propose a technique for attenuating the facet-braiding effect. We have conducted experiments to illustrate the consequences of the facet-braiding effect on three-dimensional integral images, and we show the usefulness of the proposed technique in eliminating this effect.
34
35
Kim Y, Park JH, Choi H, Kim J, Cho SW, Lee B. Depth-enhanced three-dimensional integral imaging by use of multilayered display devices. Applied Optics 2006;45:4334-43. [PMID: 16778943] [DOI: 10.1364/ao.45.004334]
Abstract
Integral imaging is one of the promising three-dimensional display techniques and has many advantages. However, one disadvantage of integral imaging is its limited image depth: the image can be displayed only around the central depth plane. We propose depth-enhanced integral imaging using multilayered display devices. We place transparent liquid-crystal display devices parallel to each other and incorporate them into an integral imaging system. As a result, the proposed method has multiple central depth planes and overcomes the limitation on expressible depth. The principle of the proposed method is explained, and some experimental results are presented.
Affiliation(s)
- Yunhee Kim
- School of Electrical Engineering, Seoul National University, Seoul, Korea
36
Javidi B, Hong SH, Matoba O. Multidimensional optical sensor and imaging system. Applied Optics 2006;45:2986-94. [PMID: 16639446] [DOI: 10.1364/ao.45.002986]
Abstract
We describe a multidimensional optical sensor and imaging system (MOSIS). Using a time-multiplexing, polarimetric, and multispectral imaging system, we are able to reconstruct a fully integrated multidimensional scene. Image fusion is used to integrate the multidimensional images. The fused image contains more information than the single two-dimensional and three-dimensional (3D) images. The multidimensional imaging system utilizes polarimetric imaging, multispectral imaging, 3D integral imaging with time and space multiplexing, and 3D image-fusion techniques to reconstruct the multidimensionally integrated scene. Optical experiments and computer simulations are presented.
Affiliation(s)
- Bahram Javidi
- Department of Electrical and Computer Engineering, University of Connecticut, Storrs 06269-2157, USA
37
Martinez-Corral M, Javidi B, Martínez-Cuenca R, Saavedra G. Formation of real, orthoscopic integral images by smart pixel mapping. Optics Express 2005;13:9175-9180. [PMID: 19503116] [DOI: 10.1364/opex.13.009175]
Abstract
Integral imaging systems are imaging devices that provide 3D images of 3D objects. When integral imaging systems work in their standard configuration, the reconstructed images they provide are pseudoscopic; that is, they are reversed in depth. In this paper we present, for the first time we believe, a technique for the formation of real, undistorted, orthoscopic integral images by direct pickup. The technique is based on a smart mapping of the pixels of an elemental-image set. Simulated imaging experiments are presented to support our proposal.
38
Martínez-Corral M, Javidi B, Martínez-Cuenca R, Saavedra G. Multifacet structure of observed reconstructed integral images. Journal of the Optical Society of America A 2005;22:597-603. [PMID: 15839266] [DOI: 10.1364/josaa.22.000597]
Abstract
Three-dimensional images generated by an integral imaging system suffer from degradations in the form of a grid of multiple facets. This multifacet structure breaks the continuity of the observed image and therefore reduces its visual quality. We analyze this effect and present guidelines for the design of lenslet imaging parameters to optimize viewing conditions with respect to the multifacet degradation. We consider the optimization of the system in terms of field of view, observer position and pupil function, lenslet parameters, and type of reconstruction. Numerical tests are presented to verify the theoretical analysis.
Affiliation(s)
- Manuel Martínez-Corral
- Electrical and Computer Engineering Dept., University of Connecticut, Storrs, Connecticut 06269-1157, USA.
39
Min SW, Hong J, Lee B. Analysis of an optical depth converter used in a three-dimensional integral imaging system. Applied Optics 2004;43:4539-4549. [PMID: 15376430] [DOI: 10.1364/ao.43.004539]
Abstract
An optical depth converter that uses a lens-array pair is analyzed theoretically and experimentally. We present a theory of depth conversion and explain the effects of the system parameters of the optical depth converter by using wave-optical analysis. Ray-optical analysis is applied to investigate the trends of these parameter effects. We also show that the optical depth converter can be used as the three-dimensional screen in projection-type integral imaging systems.
Affiliation(s)
- Sung-Wook Min
- National Research Laboratory of Holography Technologies, School of Electrical Engineering, Seoul National University, Kwanak-Gu Shinlim-Dong, Seoul 151-744, Korea
40
Hong J, Park JH, Jung S, Lee B. Depth-enhanced integral imaging by use of optical path control. Optics Letters 2004;29:1790-1792. [PMID: 15352371] [DOI: 10.1364/ol.29.001790]
Abstract
The image depth of integral imaging is enhanced by doubling the number of central depth planes through optical path control. To accomplish this, the optical path lengths are changed by controlling whether reflections occur behind the lens array. We propose three schemes that use mirrors, a combination of beam splitters and polarizers, and polarization beam splitters, respectively. In experiments we implement systems that are fully electronically controllable and compact, and that provide two central depth planes with a 50.4-mm separation.
Affiliation(s)
- Jisoo Hong
- National Research Laboratory of Holography Technologies, School of Electrical Engineering, Seoul National University, Kwanak-Gu Shinlim-Dong, Seoul 151-744, South Korea
41
Min SW, Javidi B, Lee B. Enhanced three-dimensional integral imaging system by use of double display devices. Applied Optics 2003;42:4186-4195. [PMID: 12856731] [DOI: 10.1364/ao.42.004186]
Abstract
We propose an enhanced three-dimensional (3D) integral imaging system that uses multiple display devices. Experimental results with double devices demonstrate improved image depth for a given image quality. In the proposed system, two 3D subimages that cover different depth ranges are generated separately on each device and then combined with a beam splitter to reconstruct the whole 3D image with an enhanced depth of view. In a similar manner, the double-device system can also be used to obtain a wider viewing angle by combining two images with different viewing-angle ranges. We discuss the possibility of 3D integral imaging systems using multiple display devices as extensions of the double-device system.
Affiliation(s)
- Sung-Wook Min
- Electrical and Computer Engineering Department, University of Connecticut, Storrs, Connecticut 06269-2157, USA
42
Forman MC, Davies N, McCormick M. Continuous parallax in discrete pixelated integral three-dimensional displays. Journal of the Optical Society of America A 2003;20:411-420. [PMID: 12630827] [DOI: 10.1364/josaa.20.000411]
Abstract
An evaluation of the retention of continuous parallax in pixelated integral three-dimensional image displays is presented. The integral image capture process is first considered, to provide a starting point for the investigation. The complementary display system is then examined in detail. The viewing geometry of the display system is analyzed to provide a foundation for the work to follow, and an experimental investigation and simulations of the characteristics of emitted ray bundles are presented. Next, an analytical model of decoding lenslet array operation is derived, leading to an understanding of the process responsible for production of continuous parallax in replay. It is found that if the lateral resolution of the lenslet is matched to that of the display, continuous parallax is retained in the replayed image, where the finite aberration-limited resolution of the lenslet acts to produce a low-pass reconstruction filter. A condition is derived for optimal continuous parallax in replay, based on a relationship between pixel width and lenslet rms spot size.
Affiliation(s)
- Matthew C Forman
- 3D & Imaging Technologies Group, De Montfort University, The Gateway, Leicester, LE1 9BH, UK.
43
Lee BH, Jung SY, Park JH, Choi HJ. Recent progress in three-dimensional display based on integral imaging. Journal of the Optical Society of Korea 2002. [DOI: 10.3807/josk.2002.6.4.133]
44
Lee B, Min SW, Javidi B. Theoretical analysis for three-dimensional integral imaging systems with double devices. Applied Optics 2002;41:4856-4865. [PMID: 12197653] [DOI: 10.1364/ao.41.004856]
Abstract
By adopting double-device systems, integral imaging can be enhanced in image depth, viewing angle, or image size. Theoretical analyses are performed for double-image-plane integral imaging systems. Both ray-optics and wave-optics analyses confirm that double-device integral imaging systems can pick up and display images at two separate image planes. The results are also valuable for understanding conventional integral imaging systems at image positions off the central depth plane.
Affiliation(s)
- Byoungho Lee
- National Research Laboratory of Holography Technologies, School of Electrical Engineering, Seoul National University, Kwanak-Gu Shinlim-Dong, Korea.
45
Lee B, Jung S, Park JH. Viewing-angle-enhanced integral imaging by lens switching. Optics Letters 2002;27:818-820. [PMID: 18007938] [DOI: 10.1364/ol.27.000818]
Abstract
In spite of the many advantages of integral imaging, its narrow viewing angle has been a disadvantage. We propose a method to enhance the viewing angle of integral imaging by opening and shutting each lens in the array (i.e., the elemental lenses) sequentially. We prove our idea by using a mask that has a pattern of an on-off vertical array of apertures. Moving the mask prevents the aliasing of a neighboring lens. Thus image overlap or image flipping is reduced and the viewing angle of the system is increased.
46
Shin SH, Javidi B. Speckle-reduced three-dimensional volume holographic display by use of integral imaging. Applied Optics 2002;41:2644-2649. [PMID: 12022663] [DOI: 10.1364/ao.41.002644]
Abstract
We propose a method to implement a speckle-reduced coherent three-dimensional (3D) display system by a combination of integral imaging and photorefractive volume holographic storage. The 3D real object is imaged through the microlens array and stored in the photorefractive crystal. During the reconstruction process a phase conjugate reading beam is used to minimize aberration, and a rotating diffuser located on the imaging plane of the lens array is employed to reduce the speckle noise. The speckle-reduced 3D image with a wide viewing angle can be reconstructed by use of the proposed system. Experimental results are presented and optical parameters of the proposed system are discussed in detail.
Affiliation(s)
- Seung-Ho Shin
- Kangwon National University, Department of Physics, Chunchon, Korea
47
Jeong Y, Jung S, Park JH, Lee B. Reflection-type integral imaging scheme for displaying three-dimensional images. Optics Letters 2002;27:704-706. [PMID: 18007905] [DOI: 10.1364/ol.27.000704]
Abstract
A reflection-type integral imaging scheme for displaying three-dimensional images is proposed. By using a concave-mirror array instead of a lens array, three-dimensional images are integrated in reflection, and experimental results are demonstrated. This scheme can readily be scaled to a large integral imaging system by using a beam projector located at a distance from the mirror-array plane.
48
Park JH, Min SW, Jung S, Lee B. Analysis of viewing parameters for two display methods based on integral photography. Applied Optics 2001;40:5217-5232. [PMID: 18364803] [DOI: 10.1364/ao.40.005217]
Abstract
We describe and compare two methods of displaying autostereoscopic three-dimensional images by integral photography. One method is to display the image in front of the lens array, and the other method is to display the image behind the lens array. We compare and discuss these two methods from the viewpoints of lateral resolution, depth resolution, and viewing angle. We also discuss the effect of the optical parameter difference in the pickup and display.
49
Lee B, Jung S, Min SW, Park JH. Three-dimensional display by use of integral photography with dynamically variable image planes. Optics Letters 2001;26:1481-2. [PMID: 18049641] [DOI: 10.1364/ol.26.001481]
Abstract
A computer-generated integral photography system operating with a variable image plane is proposed. In this scheme, the gap between a lens array and a display panel is adjusted in real time. A synchronized elemental image array for real or virtual mode is integrated in front of or behind the lens array. This integration gives an observer an enhanced perception of depth. The proposed method can be applied to animated three-dimensional imaging.
50
Manolache S, Aggoun A, McCormick M, Davies N, Kung SY. Analytical model of a three-dimensional integral image recording system that uses circular- and hexagonal-based spherical surface microlenses. Journal of the Optical Society of America A 2001;18:1814-1821. [PMID: 11488485] [DOI: 10.1364/josaa.18.001814]
Abstract
A mathematical model for a three-dimensional omnidirectional integral recording camera system that uses either circular- or hexagonal-based spherical surface microlens arrays is derived. The geometry of the image formation and recording process is fully described. Matlab is then used to establish the number of recorded micro-intensity distributions representing a single object point and their dependence on spatial position. The point-spread function for the entire optical process for both close and remote imaging is obtained, and the influence of depth on the point-spread dimensions for each type of microlens and imaging condition is discussed. Comparisons of the two arrangements are made, based on the illustrative numerical results presented.
Affiliation(s)
- S Manolache
- Department of Electrical and Electronic Engineering, De Montfort University, The Gateway, Leicester, UK.