1.
Bai YC, Deng H, Yang CN, Chen YA, Zhao CJ, Tang J. Sub-pixel marking and depth-based correction methods for the elimination of voxel drifting in integral imaging display. Optics Express 2024; 32:12243-12256. [PMID: 38571053] [DOI: 10.1364/oe.515111]
Abstract
Integral imaging is a true three-dimensional (3D) display technology that uses a lens array to reconstruct vivid 3D images with full parallax and true color. To present a high-quality 3D image, it is vital to correct the axial position error caused by misalignment and deformation of the lens array, which makes the reconstructed rays deviate from their correct directions and results in severe voxel drifting and image blurring. We propose a sub-pixel marking method that measures the axial position error of the lenses with high accuracy by addressing the sub-pixels under each lens and forming homologous sub-pixel pairs. The measurement relies on the geometric center alignment of image points, expressed as the overlap between a test 3D voxel and a reference 3D voxel, which yields higher measurement accuracy. Additionally, we propose a depth-based sub-pixel correction method to eliminate the voxel drifting. The correction incorporates the voxel depth into the correction coefficient and achieves accurate error correction for 3D images at different depths. Experimental results confirm that the proposed measurement and correction methods greatly suppress the voxel drifting caused by the axial position error of the lenses and substantially improve 3D image quality.
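The depth dependence of such a correction can be illustrated with a toy pinhole-lens model. This is a sketch under assumed geometry: the symbols (panel-to-lens gap `g`, axial error `dg`, lens center `lens_c`, voxel plane `voxel_z`) and the numbers are illustrative, not the paper's actual formulation.

```python
# Toy pinhole model: a lens displaced axially by dg bends the reconstructed
# ray onto a drifted voxel; re-addressing the sub-pixel lets the displaced
# lens reproduce the intended voxel. Note that the voxel depth enters the
# corrected sub-pixel position, hence a depth-based correction.

def reconstruct_x(sub_px: float, lens_c: float, lens_z: float, voxel_z: float) -> float:
    """Lateral position where the ray (sub-pixel -> lens center) crosses voxel_z."""
    slope = (lens_c - sub_px) / lens_z          # lateral change per unit depth
    return lens_c + slope * (voxel_z - lens_z)

def corrected_subpixel(sub_px: float, lens_c: float, g: float, dg: float,
                       voxel_z: float) -> float:
    """Sub-pixel position at which the actual lens (at g+dg) reproduces the
    voxel that the ideal lens (at g) would have formed."""
    target = reconstruct_x(sub_px, lens_c, g, voxel_z)   # intended voxel position
    # solve target = lens_c + (lens_c - x')/(g+dg) * (voxel_z - (g+dg)) for x'
    return lens_c - (target - lens_c) * (g + dg) / (voxel_z - (g + dg))

g, dg, c, x, Z = 3.0, 0.2, 10.0, 9.4, 53.0      # mm; made-up numbers
intended = reconstruct_x(x, c, g, Z)             # voxel from the ideal lens
drifted = reconstruct_x(x, c, g + dg, Z)         # voxel from the misplaced lens
x_fix = corrected_subpixel(x, c, g, dg, Z)
print(intended - drifted)                        # nonzero voxel drift
print(reconstruct_x(x_fix, c, g + dg, Z) - intended)  # ~0 after correction
```

Because `voxel_z` appears in the correction, a single per-lens coefficient cannot serve all depths, which is the motivation the abstract gives for a depth-based correction.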
2.
Xu S, Shi S. Analysis of error propagation: from raw light-field data to depth estimation. Applied Optics 2023; 62:8704-8715. [PMID: 38038015] [DOI: 10.1364/ao.500897]
Abstract
In micro-lens-array-based light-field imaging, the micro-lens centers serve as the origins of the local micro-lens coordinate systems, and each micro-lens receives angular/depth information coded according to its center location. Errors in positioning the micro-lens centers therefore lead to errors in depth estimation. This paper proposes a method that characterizes error propagation from raw light-field data to depth estimation by analyzing large numbers of simulated images with various aperture sizes, noise levels, and object distances. The simulation employs backward ray tracing and Monte Carlo sampling to improve computational efficiency. The errors are counted and accumulated stepwise, from center positioning and generation of sub-aperture images to depth estimation. The disparity errors computed during depth estimation become more pronounced either with larger center-positioning errors or with a greater defocusing distance. An experiment using an industrial light-field camera confirms that disparity errors at considerable object distances can be reduced significantly when the micro-lens centers are positioned with higher accuracy.
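The first propagation step — center-positioning error turning into disparity error — can be sketched with a Monte Carlo toy model. This assumes Gaussian positioning noise and treats disparity as a difference of coordinates in two micro-lens frames; it is far simpler than the paper's ray-traced pipeline and only illustrates the accumulation idea.

```python
# Monte Carlo sketch: if each matched micro-lens center is mislocated by
# N(0, sigma) pixels along the baseline, the disparity (a coordinate
# difference between the two local frames) inherits an RMS error of about
# sigma * sqrt(2). Toy model, not the paper's simulation.
import math
import random

def disparity_error_rms(sigma_px: float, trials: int = 20000, seed: int = 0) -> float:
    """RMS disparity error for Gaussian center-positioning noise sigma_px."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(trials):
        e1 = rng.gauss(0.0, sigma_px)   # center error, reference micro-lens
        e2 = rng.gauss(0.0, sigma_px)   # center error, matched micro-lens
        sq += (e2 - e1) ** 2            # error inherited by the disparity
    return math.sqrt(sq / trials)

for sigma in (0.05, 0.1, 0.2):
    print(sigma, disparity_error_rms(sigma))   # grows roughly as sigma * sqrt(2)
```

The monotone growth of disparity error with center-positioning error is the qualitative behavior the abstract reports; the paper additionally accumulates errors through sub-aperture image generation and depth estimation.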
3.
Chen M, He W, Wei D, Hu C, Shi J, Zhang X, Wang H, Xie C. Depth-of-Field-Extended Plenoptic Camera Based on Tunable Multi-Focus Liquid-Crystal Microlens Array. Sensors 2020; 20:4142. [PMID: 32722494] [PMCID: PMC7435381] [DOI: 10.3390/s20154142]
Abstract
Plenoptic cameras have attracted wide research interest because they can record the 4D plenoptic function, or radiance, including both radiation power and ray direction. One important application is digital refocusing, which produces 2D images focused at different depths. Digital refocusing over a wide range requires a large depth of field (DOF), but there are fundamental optical limitations to this. In this paper, we propose a plenoptic camera with an extended DOF that integrates a main lens, a tunable multi-focus liquid-crystal microlens array (TMF-LCMLA), and a complementary metal-oxide-semiconductor (CMOS) sensor. The TMF-LCMLA was fabricated by traditional photolithography and standard microelectronic techniques, and its optical characteristics, including interference patterns, focal lengths, and point spread functions (PSFs), were experimentally analyzed. Experiments demonstrate that the proposed plenoptic camera offers a wider digital refocusing range than a plenoptic camera based on a conventional liquid-crystal microlens array (LCMLA) with only one focal length at a given voltage, which is equivalent to an extension of the DOF. It also provides a 2D/3D switchable function that is not available in conventional plenoptic cameras.
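The digital refocusing mentioned above is commonly implemented as shift-and-sum over sub-aperture views: each view is shifted in proportion to its aperture coordinate, then all views are averaged. The minimal integer-shift sketch below works on toy nested-list images; the parameter names (`views`, `slope`) are illustrative, not the paper's notation.

```python
# Shift-and-sum digital refocusing sketch. Each sub-aperture view, tagged
# with its aperture coordinate (u, v), is shifted by slope*(u, v) and the
# shifted views are averaged; the slope selects the refocused depth plane.
from typing import List, Tuple

Image = List[List[float]]

def refocus(views: List[Tuple[Tuple[int, int], Image]], slope: int) -> Image:
    """views: list of ((u, v), image), all images the same size.
    slope: integer pixel shift per unit aperture coordinate."""
    h, w = len(views[0][1]), len(views[0][1][0])
    out = [[0.0] * w for _ in range(h)]
    for (u, v), img in views:
        dy, dx = slope * u, slope * v
        for y in range(h):
            for x in range(w):
                out[y][x] += img[(y - dy) % h][(x - dx) % w]  # wrap-around shift
    n = len(views)
    return [[p / n for p in row] for row in out]
```

With `slope = 0` the average is focused at the plane of zero disparity; varying the slope stacks points with the matching disparity, i.e. refocuses at other depths. A tunable multi-focus microlens array widens the range of depths over which such refocused images stay sharp.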
Affiliation(s)
- Mingce Chen
  - National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China
  - School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Wenda He
  - National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China
  - School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Dong Wei
  - National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China
  - School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Chai Hu
  - National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China
  - School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
  - Innovation Institute, Huazhong University of Science & Technology, Wuhan 430074, China
- Jiashuo Shi
  - National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China
  - School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Xinyu Zhang
  - National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China
  - School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
  - Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, China
  - Correspondence:
- Haiwei Wang
  - Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, China
- Changsheng Xie
  - Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, China