1. Lu Z, Zuo S, Shi M, Fan J, Xie J, Xiao G, Yu L, Wu J, Dai Q. Long-term intravital subcellular imaging with confocal scanning light-field microscopy. Nat Biotechnol 2025;43:569-580. PMID: 38802562; PMCID: PMC11994454; DOI: 10.1038/s41587-024-02249-5.
Abstract
Long-term observation of subcellular dynamics in living organisms is limited by background fluorescence originating from tissue scattering or dense labeling. Existing confocal approaches face an inevitable tradeoff among parallelization, resolution and phototoxicity. Here we present confocal scanning light-field microscopy (csLFM), which integrates axially elongated line-confocal illumination with the rolling shutter in scanning light-field microscopy (sLFM). csLFM enables high-fidelity, high-speed, three-dimensional (3D) imaging at near-diffraction-limit resolution with both optical sectioning and low phototoxicity. By simultaneous 3D excitation and detection, the excitation intensity can be reduced below 1 mW mm⁻², with a 15-fold higher signal-to-background ratio than sLFM. We imaged subcellular dynamics over 25,000 timeframes in optically challenging environments in different species, such as migrasome delivery in mouse spleen, retractosome generation in mouse liver and 3D voltage imaging in Drosophila. Moreover, csLFM facilitates high-fidelity, large-scale neural recording with reduced crosstalk, leading to high orientation selectivity to visual stimuli, similar to two-photon microscopy, which aids understanding of neural coding mechanisms.
Affiliation(s)
- Zhi Lu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Zhejiang Hehu Technology, Hangzhou, China
- Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou, China
- Siqing Zuo
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Minghui Shi
- State Key Laboratory of Membrane Biology, Tsinghua University-Peking University Joint Center for Life Sciences, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, Beijing, China
- Jiaqi Fan
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Jingyu Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Guihua Xiao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Li Yu
- State Key Laboratory of Membrane Biology, Tsinghua University-Peking University Joint Center for Life Sciences, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Shanghai AI Laboratory, Shanghai, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
2. Shen Z, Ni Y, Yang Y. Baseline-free structured light 3D imaging using a metasurface double-helix dot projector. Nanophotonics 2025;14:1265-1272. PMID: 40290276; PMCID: PMC12019951; DOI: 10.1515/nanoph-2024-0668.
Abstract
Structured light is a widely used 3D imaging method, but it typically requires a long baseline between the laser projector and the camera sensor, which hinders its use in space-constrained scenarios. On the other hand, the application of passive 3D imaging methods, such as depth estimation from depth-dependent point spread functions (PSFs), is impeded by the challenge of measuring textureless scenes. Here, we combine the advantages of both structured light and depth-dependent PSFs and propose a baseline-free structured light 3D imaging system. A metasurface is designed to project a structured dot array while simultaneously encoding depth information in the double-helix pattern of each dot. Combined with a straightforward and fast algorithm, we demonstrate accurate 3D point cloud acquisition for various real-world scenes, including multiple cardboard boxes and a living human face. Such a technique may find application in a broad range of areas, including consumer electronics and precision metrology.
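To make the decoding step concrete, the sketch below turns the rotation angle of one double-helix dot into a depth value. The lobe-finding heuristic and the linear angle-to-depth mapping are illustrative assumptions, standing in for the calibration curve and algorithm of the actual system.

```python
import numpy as np
from scipy.ndimage import center_of_mass, label

def dh_depth(dot_patch, angle_to_depth=lambda a: 0.5 * a):
    """Estimate depth from a single double-helix dot.

    The rotation angle of the two lobes encodes depth; `angle_to_depth`
    is a placeholder for a real calibration curve measured with targets
    at known distances.
    """
    mask = dot_patch > 0.5 * dot_patch.max()      # keep the two bright lobes
    labels, n = label(mask)
    if n < 2:
        raise ValueError("expected two lobes in the dot patch")
    sizes = np.bincount(labels.ravel())[1:]       # lobe areas, label order
    top2 = np.argsort(sizes)[-2:] + 1             # ids of the two largest
    (y1, x1), (y2, x2) = (center_of_mass(mask, labels, i) for i in top2)
    angle = np.arctan2(y2 - y1, x2 - x1) % np.pi  # helix is 180-deg periodic
    return angle_to_depth(angle)
```

Because each dot is decoded independently from its own lobe pair, no triangulation baseline is needed, which is the point of the design.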
Affiliation(s)
- Zicheng Shen
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yibo Ni
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yuanmu Yang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
3. Zhang X, Wang L, Chen J, Fang C, Yang G, Wang Y, Yang L, Song Z, Liu L, Zhang X, Xu B, Li Z, Yang Q, Li J, Zhang Z, Wang W, Ge SS. Dual Radar: a multi-modal dataset with dual 4D radar for autonomous driving. Sci Data 2025;12:439. PMID: 40082463; PMCID: PMC11907064; DOI: 10.1038/s41597-025-04698-2.
Abstract
4D radar offers higher point cloud density and more precise vertical resolution than conventional 3D radar, making it promising for environmental perception in adverse scenarios in autonomous driving. However, 4D radar is noisier than LiDAR and requires different filtering strategies that affect point cloud density and noise level. Comparative analyses of different point cloud densities and noise levels are still lacking, mainly because available datasets use only one type of 4D radar, making it difficult to compare different 4D radars in the same scenario. We introduce a novel large-scale multi-modal dataset that captures both types of 4D radar, consisting of 151 sequences, most of which are 20 s long, with 10,007 synchronized and annotated frames in total. Our dataset covers a variety of challenging driving scenarios, including multiple road conditions, weather conditions, and different lighting intensities and times of day. It supports 3D object detection and tracking as well as multi-modal tasks. We experimentally validate the dataset, providing valuable insights for studying different types of 4D radar.
Affiliation(s)
- Xinyu Zhang
- School of Vehicle and Mobility, Tsinghua University, Beijing, 100084, China
- Suzhou Automotive Research Institute, Tsinghua University, Suzhou, 215200, China
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Li Wang
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing, 100081, China
- Jian Chen
- School of Mechanical and Electrical Engineering, China University of Mining and Technology-Beijing, Beijing, 100083, China
- Cheng Fang
- School of Artificial Intelligence, China University of Mining and Technology-Beijing, Beijing, 100083, China
- Guangqi Yang
- College of Computer Science and Technology, Jilin University, Changchun, 130012, China
- Yichen Wang
- School of Excellence in Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Lei Yang
- School of Vehicle and Mobility, Tsinghua University, Beijing, 100084, China
- Ziying Song
- School of Computer and Information Technology, Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, 100044, China
- Lin Liu
- School of Computer and Information Technology, Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, 100044, China
- Xiaofei Zhang
- School of Vehicle and Mobility, Tsinghua University, Beijing, 100084, China
- Bin Xu
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing, 100081, China
- Zhiwei Li
- College of Information Science and Technology, Beijing University of Chemical Technology, Beijing, 100029, China
- Qingshan Yang
- Beijing Jingwei Hirain Technologies Co., Inc., Beijing, 100191, China
- Jun Li
- School of Vehicle and Mobility, Tsinghua University, Beijing, 100084, China
- Zhenlin Zhang
- China Automotive Innovation Cooperation, Nanjing, 211113, China
- Weida Wang
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing, 100081, China
- Shuzhi Sam Ge
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
4. Cheng K, Pan L, Lai Z, Jiang M, Xu Y, Qi J, Feng X. Blind aberration correction for light field photography. Opt Lett 2025;50:209-212. PMID: 39718890; DOI: 10.1364/ol.542480.
Abstract
Aberration correction is critical for obtaining sharp images but remains a challenging task. Owing to its ability to record both the spatial and angular information of light rays, light field imaging is a powerful method for measuring and correcting optical aberrations. However, current methods need extensive calibration to obtain prior information about the camera, which is restrictive in real-world applications. In this work, we propose a two-stage blind aberration correction method for light field imaging, which leverages self-supervised learning for general blind aberration correction and low-rank approximation to exploit the specific correlations of light fields to further suppress aberrations. We experimentally demonstrate the superiority of our method over the current state of the art.
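To illustrate the second stage, the sketch below applies a generic rank truncation across sub-aperture views: angular views of a light field are highly correlated, so projecting onto the leading singular vectors suppresses view-dependent residuals. The array shapes and the plain SVD are our assumptions, not the authors' exact algorithm.

```python
import numpy as np

def low_rank_lightfield(views, rank=3):
    """Rank-truncated approximation across sub-aperture views.

    views : (n_views, H, W) stack of sub-aperture images.
    Each view becomes one row of a matrix; keeping only the top
    singular components retains the shared scene content while
    discarding weakly correlated, view-dependent residuals.
    """
    n, h, w = views.shape
    mat = views.reshape(n, h * w)
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return approx.reshape(n, h, w)
```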
5. Liu Y, Chen X. Monocular meta-imaging camera sees depth. Light Sci Appl 2025;14:5. PMID: 39741126; DOI: 10.1038/s41377-024-01666-0.
Abstract
A novel monocular depth-sensing camera based on meta-imaging sensor technology has been developed, offering more precise depth sensing with millimeter-level accuracy and enhanced robustness compared to conventional 2D and light-field cameras.
Affiliation(s)
- Yujin Liu
- Innovation Center for Advanced Medical Imaging and Intelligent Medicine, Guangzhou Institute of Technology, Xidian University, Guangzhou, 510555, Guangdong, China
- Xueli Chen
- Innovation Center for Advanced Medical Imaging and Intelligent Medicine, Guangzhou Institute of Technology, Xidian University, Guangzhou, 510555, Guangdong, China
- Center for Biomedical-photonics and Molecular Imaging, Advanced Diagnostic-Therapy Technology and Equipment Key Laboratory of Higher Education Institutions in Shaanxi Province, School of Life Science and Technology, Xidian University, Xi'an, 710126, Shaanxi, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi'an, 710126, Shaanxi, China
6. Gu T, Wang K, Cai A, Wu F, Chang Y, Zhao H, Wang L. Metasurface-coated liquid microlens for super resolution imaging. Micromachines 2024;16:25. PMID: 39858681; PMCID: PMC11767574; DOI: 10.3390/mi16010025.
Abstract
Inspired by metasurfaces' control over light fields, this study created a liquid microlens coated with a layer of Au@TiO2 core-shell nanospheres. Utilizing the surface plasmon resonance (SPR) effect of the Au@TiO2 core-shell nanospheres and the formation of photonic nanojets (PNJs), this study aimed to extend the imaging system's cutoff frequency, improve microlens focusing, enhance the capture of evanescent waves, and use the nanospheres to improve the conversion of evanescent waves into propagating waves, thus boosting the liquid microlens's super-resolution capabilities. The finite-difference time-domain (FDTD) method was used to analyze the impact of parameters including nanosphere size, microlens-sample contact width, and the droplet's initial contact angle on super-resolution imaging. The results indicate that the full width at half maximum (FWHM) of the field distribution produced by the uncoated microlens is 1.083 times that produced by the microlens coated with Au@TiO2 core-shell nanospheres. As the nanosphere radius, droplet contact angle, and droplet base diameter increased, the microlens's light intensity correspondingly increased. These findings confirm that the metasurface coating enhances the super-resolution capabilities of the microlens.
Affiliation(s)
- Tongkai Gu
- School of Mechanical and Electrical Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- State Key Laboratory for Manufacturing System Engineering, Xi'an Jiaotong University, Xi'an 710054, China
- Kang Wang
- School of Mechanical and Electrical Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Anjiang Cai
- School of Mechanical and Electrical Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Fan Wu
- School of Textile Science and Engineering, Xi'an Polytechnic University, Xi'an 710699, China
- Yasheng Chang
- School of Optical and Electronic Information, Suzhou City University, Suzhou 215104, China
- Haiyan Zhao
- School of Architecture and Design, Kunshan Dengyun College of Science and Technology, Suzhou 215300, China
- Lanlan Wang
- State Key Laboratory for Manufacturing System Engineering, Xi'an Jiaotong University, Xi'an 710054, China
7. Bian L, Chang X, Xu H, Zhang J. Ultra-fast light-field microscopy with event detection. Light Sci Appl 2024;13:306. PMID: 39511142; PMCID: PMC11544014; DOI: 10.1038/s41377-024-01603-1.
Abstract
The event detection technique has been introduced to light-field microscopy, boosting its imaging speed by orders of magnitude while simultaneously enhancing axial resolution in scattering media.
Affiliation(s)
- Liheng Bian
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China
- Xuyang Chang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China
- Hanwen Xu
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China
- Jun Zhang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China
8. Bian L, Wang Z, Zhang Y, Li L, Zhang Y, Yang C, Fang W, Zhao J, Zhu C, Meng Q, Peng X, Zhang J. A broadband hyperspectral image sensor with high spatio-temporal resolution. Nature 2024;635:73-81. PMID: 39506154; PMCID: PMC11541218; DOI: 10.1038/s41586-024-08109-1.
Abstract
Hyperspectral imaging provides high-dimensional spatial-temporal-spectral information showing intrinsic matter characteristics [1-5]. Here we report an on-chip computational hyperspectral imaging framework with high spatial and temporal resolution. By integrating different broadband modulation materials on the image sensor chip, the target spectral information is non-uniformly and intrinsically coupled to each pixel with high light throughput. Using intelligent reconstruction algorithms, multi-channel images can be recovered from each frame, realizing real-time hyperspectral imaging. Following this framework, we fabricated a broadband visible-near-infrared (400-1,700 nm) hyperspectral image sensor using photolithography, with an average light throughput of 74.8% and 96 wavelength channels. The demonstrated resolution is 1,024 × 1,024 pixels at 124 fps. We demonstrated its wide applications, including chlorophyll and sugar quantification for intelligent agriculture, blood oxygen and water quality monitoring for human health, textile classification and apple bruise detection for industrial automation, and remote lunar detection for astronomy. The integrated hyperspectral image sensor weighs only tens of grams and can be assembled on various resource-limited platforms or equipped with off-the-shelf optical systems. The technique transforms the challenge of high-dimensional imaging from a high-cost manufacturing and cumbersome system to one that is solvable through on-chip compression and agile computation.
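As a toy illustration of the per-super-pixel recovery principle, the sketch below reconstructs a 96-channel spectrum from a group of differently modulated pixels by ridge-regularized least squares. The sensing matrix, group size, and solver are invented for illustration; the paper uses calibrated responses and learned reconstruction algorithms.

```python
import numpy as np

def recover_spectrum(y, A, lam=1e-2):
    """Recover a spectrum from broadband-modulated pixel readings.

    y : (m,) readings of m differently coated pixels in one super-pixel.
    A : (m, c) calibrated transmission matrix (c wavelength channels).
    """
    m, c = A.shape
    x = np.linalg.solve(A.T @ A + lam * np.eye(c), A.T @ y)
    return np.clip(x, 0.0, None)                  # spectra are non-negative

# Toy usage with a made-up sensing matrix and a Gaussian test spectrum
rng = np.random.default_rng(0)
A = rng.uniform(0.2, 1.0, (16, 96))               # 16 pixels, 96 channels
x_true = np.exp(-0.5 * ((np.arange(96) - 40) / 8.0) ** 2)
y = A @ x_true + 0.01 * rng.standard_normal(16)
x_hat = recover_spectrum(y, A)
```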
Affiliation(s)
- Liheng Bian
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Zhen Wang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Yuzhe Zhang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Lianjie Li
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Yinuo Zhang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Chen Yang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Wen Fang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Jiajun Zhao
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Chunli Zhu
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Qinghao Meng
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Xuan Peng
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
- Jun Zhang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
9. Zhang Y, Wang M, Zhu Q, Guo Y, Liu B, Li J, Yao X, Kong C, Zhang Y, Huang Y, Qi H, Wu J, Guo ZV, Dai Q. Long-term mesoscale imaging of 3D intercellular dynamics across a mammalian organ. Cell 2024;187:6104-6122.e25. PMID: 39276776; DOI: 10.1016/j.cell.2024.08.026.
Abstract
A comprehensive understanding of physio-pathological processes necessitates non-invasive intravital three-dimensional (3D) imaging over varying spatial and temporal scales. However, huge data throughput, optical heterogeneity, surface irregularity, and phototoxicity pose great challenges, leading to an inevitable trade-off between volume size, resolution, speed, sample health, and system complexity. Here, we introduce a compact real-time, ultra-large-scale, high-resolution 3D mesoscope (RUSH3D), achieving uniform resolutions of 2.6 × 2.6 × 6 μm³ across a volume of 8,000 × 6,000 × 400 μm³ at 20 Hz with low phototoxicity. Through the integration of multiple computational imaging techniques, RUSH3D facilitates a 13-fold improvement in data throughput and an orders-of-magnitude reduction in system size and cost. With these advantages, we observed premovement neural activity and cross-day visual representational drift across the mouse cortex, the formation and progression of multiple germinal centers in mouse inguinal lymph nodes, and heterogeneous immune responses following traumatic brain injury-all at single-cell resolution, opening up a horizon for intravital mesoscale study of large-scale intercellular interactions at the organ level.
Affiliation(s)
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Mingrui Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Qiyu Zhu
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Yuduo Guo
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Bo Liu
- School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China
- Jiamin Li
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Xiao Yao
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Chui Kong
- School of Information Science and Technology, Fudan University, Shanghai 200433, China
- Yi Zhang
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Yuchao Huang
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Hai Qi
- School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Zengcai V Guo
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
10. Badloe T, Yang Y, Lee S, Jeon D, Youn J, Kim DS, Rho J. Artificial intelligence-enhanced metasurfaces for instantaneous measurements of dispersive refractive index. Adv Sci (Weinh) 2024;11:e2403143. PMID: 39225343; PMCID: PMC11497055; DOI: 10.1002/advs.202403143.
Abstract
Measurements of the refractive index of liquids are in high demand in numerous fields such as agriculture, food and beverages, and medicine. However, conventional ellipsometric refractive index measurements are too expensive and labor-intensive for consumer devices, while Abbe refractometry is limited to measurement at a single wavelength. Here, a new approach is proposed that uses machine learning to unlock the potential of colorimetric metasurfaces for real-time measurement of the dispersive refractive index of liquids over the entire visible spectrum. We further demonstrate the platform with a proof-of-concept experiment measuring glucose concentration, which has profound implications for non-invasive medical sensing. High-index-dielectric metasurfaces are designed and fabricated, and their experimentally measured reflectance and reflected colors, captured through microscopy and a standard smartphone, are used to train deep-learning models that measure the dispersive background refractive index with a resolution of ≈10⁻⁴, comparable to the known index as measured with ellipsometry. These results show the potential of combining the unique optical properties of metasurfaces with machine learning to create a platform for quick, simple, and high-resolution measurement of the dispersive refractive index of liquids, without the need for highly specialized experts and optical procedures.
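A minimal sketch of the regression step is given below: a small neural network maps measured metasurface colors to a sampled dispersion curve. All shapes and the synthetic data are invented placeholders; the paper trains deep models on experimentally measured reflectance and smartphone colors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Hypothetical data: RGB colors of 16 metasurface tiles per liquid sample,
# and the liquid's refractive index sampled at 8 visible wavelengths.
colors = rng.uniform(0.0, 1.0, (500, 3 * 16))
indices = 1.33 + 0.1 * rng.random((500, 8))       # stand-in dispersion curves

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
model.fit(colors, indices)                        # learn color -> dispersion
n_pred = model.predict(colors[:1])                # predicted index curve
```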
Affiliation(s)
- Trevon Badloe
- Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea
- Younghwan Yang
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Seokho Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Dongmin Jeon
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Jaeseung Youn
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Dong Sung Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Junsuk Rho
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Department of Chemical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Department of Electrical Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- POSCO-POSTECH-RIST Convergence Research Center for Flat Optics and Metaphotonics, Pohang 37673, Republic of Korea
- National Institute of Nanomaterials Technology (NINT), Pohang 37673, Republic of Korea
11. Huang Y, Cao J, Shi X, Wang J, Chang J. Stereo imaging inspired by bionic optics. Opt Lett 2024;49:5647-5650. PMID: 39353028; DOI: 10.1364/ol.537074.
Abstract
Stereo imaging has been a focal point in fields such as robotics and autonomous driving. This Letter discusses the imaging mechanisms of jumping spiders and human eyes from a biomimetic perspective and proposes a monocular stereo imaging solution with low computational cost and high stability. The stereo imaging mechanism of jumping spiders enables monocular imaging without relying on multiple viewpoints, thus avoiding complex large-scale feature point matching and significantly conserving computational resources. The foveal imaging mechanism of the human eye allows complex imaging tasks to be completed only on local regions of interest, resulting in more efficient execution of various visual tasks. By combining these two advantages, we have developed a more computationally efficient monocular stereo imaging method that can achieve stereo imaging on only the local regions of interest without sacrificing wide field-of-view (FOV) imaging performance. Finally, through experimental validation, we demonstrate that the proposed method exhibits excellent stereo imaging performance.
12. Cao Z, Li N, Zhu L, Wu J, Dai Q, Qiao H. Aberration-robust monocular passive depth sensing using a meta-imaging camera. Light Sci Appl 2024;13:236. PMID: 39237492; PMCID: PMC11377717; DOI: 10.1038/s41377-024-01609-9.
Abstract
Depth sensing plays a crucial role in various applications, including robotics, augmented reality, and autonomous driving. Monocular passive depth sensing techniques have come into their own for the cost-effectiveness and compact design, offering an alternative to the expensive and bulky active depth sensors and stereo vision systems. While the light-field camera can address the defocus ambiguity inherent in 2D cameras and achieve unambiguous depth perception, it compromises the spatial resolution and usually struggles with the effect of optical aberration. In contrast, our previously proposed meta-imaging sensor [1] has overcome such hurdles by reconciling the spatial-angular resolution trade-off and achieving the multi-site aberration correction for high-resolution imaging. Here, we present a compact meta-imaging camera and an analytical framework for the quantification of monocular depth sensing precision by calculating the Cramér-Rao lower bound of depth estimation. Quantitative evaluations reveal that the meta-imaging camera exhibits not only higher precision over a broader depth range than the light-field camera but also superior robustness against changes in signal-background ratio. Moreover, both the simulation and experimental results demonstrate that the meta-imaging camera maintains the capability of providing precise depth information even in the presence of aberrations. Showing the promising compatibility with other point-spread-function engineering methods, we anticipate that the meta-imaging camera may facilitate the advancement of monocular passive depth sensing in various applications.
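For reference, the bound being evaluated takes the standard form for Poisson-limited measurements (notation ours; the paper's exact imaging and noise model may differ):

```latex
\operatorname{Var}(\hat{z}) \;\ge\; \mathrm{CRLB}(z) = I(z)^{-1},
\qquad
I(z) = \sum_{k} \frac{1}{\mu_k(z)} \left( \frac{\partial \mu_k(z)}{\partial z} \right)^{2},
```

where \mu_k(z) is the expected photon count at sensor element k for a point source at depth z. The sharper the point spread function's dependence on depth, the larger the Fisher information I(z) and the tighter the achievable depth precision, which is why aberration-corrected, angle-resolved PSFs help.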
Affiliation(s)
- Zhexuan Cao
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, China
- Ning Li
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, China
- Laiyu Zhu
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, China
- Hui Qiao
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, 100084, China
13. Jie K, Yao Z, Zheng Y, Wang M, Yuan D, Lin Z, Chen S, Qin F, Ou H, Li X, Cao Y. Ultrahigh precision laser nanoprinting based on defect-compensated digital holography for fast-fabricating optical metalenses. Opt Lett 2024;49:3288-3291. PMID: 38875602; DOI: 10.1364/ol.522575.
Abstract
The 3D structured light field manipulated by a digital-micromirror-device (DMD)-based digital hologram has demonstrated its superiority in the fast fabrication of stereo nanostructures. However, this technique intrinsically suffers from light-intensity defects when generating modulated focal spots, which prevents the fabrication of high-precision micro/nanodevices. In this Letter, we demonstrate a compensation approach based on adapting the spatial voxel density for fabricating optical metalenses with ultrahigh precision. The modulated focal spot experiences intensity fluctuations of up to 3% as its spatial position changes, leading to a 20% variation in structural dimensions during fabrication. By altering the voxel density to improve the uniformity of the cumulative laser exposure dosage over the fabrication region, we increased the dimensional uniformity of fabricated pillars from 94.4% to 97.6%. This approach enables the fast fabrication of metalenses capable of sub-diffraction focusing of 0.44λ/NA, with the mainlobe-sidelobe ratio increased from 1:0.34 to 1:0.14. A 6 × 5 supercritical lens array was fabricated within 2 min, paving the way for the fast fabrication of large-scale photonic devices.
14. Zheng X, Zan S, Lv X, Zhang F, Zhang L. Thermal active optical technology to achieve in-orbit wavefront aberration correction for optical remote sensing satellites. Appl Opt 2024;63:3842-3853. PMID: 38856347; DOI: 10.1364/ao.517834.
Abstract
Image quality and resolution are important factors affecting the application value of remote sensing images. Although increasing the optical aperture of space optical remote sensors (SORSs) improves image resolution, it exacerbates the effects of the space environment on imaging quality. This study therefore proposes a thermal active optical technology (TAO) to enhance image quality while increasing the optical aperture of SORSs, by actively correcting in-orbit wavefront aberrations. By replacing traditional wavefront detection and reconstruction with numerical calculation and simulation analysis, more realistic in-orbit SORS wavefront aberrations are obtained. Numerical and finite element analyses demonstrate that nonlinearities in TAO control lead to the failure of traditional wavefront correction algorithms. To address this, we use a neural network algorithm combining a CNN and ResNet. Simulation results show that the residual systematic wavefront RMS error of the SORS is reduced to λ/100. The static and dynamic modulation transfer functions are improved, and the structural similarity index is recovered by over 23%, highlighting the effectiveness of TAO in image quality enhancement. Static and thermal vacuum experiments demonstrate the wide applicability and engineering prospects of TAO.
15. Choi M, Munley C, Fröch JE, Chen R, Majumdar A. Nonlocal, flat-band meta-optics for monolithic, high-efficiency, compact photodetectors. Nano Lett 2024;24:3150-3156. PMID: 38477059; DOI: 10.1021/acs.nanolett.3c05139.
Abstract
Miniaturized photodetectors are becoming increasingly sought-after components for next-generation technologies, such as autonomous vehicles, integrated wearable devices, or gadgets embedded in the Internet of Things. A major challenge, however, lies in shrinking the device footprint while maintaining high efficiency. This conundrum can be solved by realizing a nontrivial relation between the energy and momentum of photons: a dispersion-free band, known as a flat band. Here, we leverage flat-band meta-optics to achieve critical absorption simultaneously over a wide range of incidence angles. For a monolithic silicon meta-optical photodiode, we achieved an ∼10-fold enhancement in photon-to-electron conversion efficiency. Such enhancement over a large angular range of ∼36° allows incoming light to be collected via a large-aperture lens and focused on a compact photodiode, potentially enabling high-speed and low-light operation. Our research unveils new possibilities for creating compact and efficient optoelectronic devices with far-reaching impact on various applications, including augmented reality and light detection and ranging.
Affiliation(s)
- Minho Choi
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington 98195, United States
- Christopher Munley
- Department of Physics, University of Washington, Seattle, Washington 98195, United States
- Johannes E Fröch
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington 98195, United States
- Department of Physics, University of Washington, Seattle, Washington 98195, United States
- Rui Chen
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington 98195, United States
- Arka Majumdar
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington 98195, United States
- Department of Physics, University of Washington, Seattle, Washington 98195, United States
16. Zhang X, Wang L, Cao XW, Jiang S, Yu YH, Xu WW, Juodkazis S, Chen QD. Single femtosecond pulse writing of a bifocal lens. Opt Lett 2024;49:911-914. PMID: 38359214; DOI: 10.1364/ol.515811.
Abstract
In this Letter, a method for fabricating bifocal lenses is presented that combines surface ablation and bulk modification in a single laser exposure, followed by a wet etching step. The intensity of a single femtosecond laser pulse was modulated axially into two foci with a designed computer-generated hologram (CGH). Such a pulse simultaneously induced an ablation region on the surface and a modified volume inside the fused silica. After etching in hydrofluoric acid (HF), the two exposed regions evolved into a bifocal lens. The area ratio (diameter) of the two lenses can be flexibly adjusted by controlling the pulse energy distribution through the CGH. In addition, bifocal lenses with a center offset, as well as convex lenses, were obtained by a replication technique. This method simplifies the fabrication of micro-optical elements and opens a highly efficient and simple pathway toward complex optical surfaces and integrated imaging systems.
17. Mei D, Luan Y, Li X, Wu X. Light field camera calibration and point spread function calculation based on differentiable ray tracing. Opt Lett 2024;49:965-968. PMID: 38359237; DOI: 10.1364/ol.507898.
Abstract
The imaging process of a light field (LF) camera with a micro-lens array (MLA) may suffer from multiple aberrations. It is thus difficult to precisely calibrate the intrinsic hardware parameters and calculate the corresponding point spread function (PSF). To build an aberration-aware solution with better generalization, we propose an end-to-end imaging model based on differentiable ray tracing. The input is the point source location, and the output is the rendered LF image, namely the PSF. Specifically, a projection method is incorporated into the imaging model, eliminating the huge memory overhead induced by a large array of periodic elements. Taking captured PSF images as the ground truth, the LF camera is calibrated first with a genetic algorithm and then with gradient-based optimization. This method is promising for various LF camera applications, especially in challenging imaging conditions with severe aberrations.
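The calibrate-by-gradient loop can be sketched with a toy one-dimensional "ray tracer" whose gradients are written out analytically, playing the role of automatic differentiation through the full model. The model, parameters, and data below are all invented for illustration.

```python
import numpy as np

def spot_centers(pitch, f_mla, angles):
    """Toy paraxial model: chief-ray spot center behind each microlens.

    Microlens i sits at i * pitch; a point source at field angle a
    shifts its spot by f_mla * tan(a). A stand-in for the paper's
    full differentiable ray tracer.
    """
    return np.arange(len(angles)) * pitch + f_mla * np.tan(angles)

# "Measured" spot centers from hypothetical ground-truth hardware
angles = np.deg2rad(np.linspace(-5, 5, 11))
measured = spot_centers(0.1, 2.0, angles)

# Gradient descent on (pitch, f_mla); per-parameter step sizes act as
# a crude preconditioner because the two sensitivities differ widely.
p, f, lr_p, lr_f = 0.12, 1.5, 1e-3, 10.0
idx = np.arange(len(angles))
for _ in range(5000):
    r = spot_centers(p, f, angles) - measured        # residuals
    p -= lr_p * 2.0 * np.sum(r * idx) / len(angles)
    f -= lr_f * 2.0 * np.sum(r * np.tan(angles)) / len(angles)
# p and f now approach the ground-truth 0.1 and 2.0
```

In the paper's setting, a genetic search would supply the initial parameters before this local refinement takes over.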
18. Wakayama T, Zama A, Higuchi Y, Takahashi Y, Aizawa K, Higashiguchi T. Simultaneous detection of polarization states and wavefront by an angular variant micro-retarder-lens array. Opt Express 2024;32:2405-2417. PMID: 38297771; DOI: 10.1364/oe.509574.
Abstract
We have demonstrated simultaneous detection of the polarization states and wavefront of light using a 7 × 7 array of angular variant micro-retarder-lenses. Manipulating the angular variant polarization with our optical element allows us to determine the two-dimensional distribution of polarization states. We have also proposed a calibration method for polarization measurements using our micro-retarder-lens array, allowing accurate detection of polarization states with an ellipticity of ±0.01 and an azimuth of ±1.0°. We made wavefront measurements using the micro-retarder-lens array, achieving a resolution of 25 nm. We conducted simultaneous detection of the polarization states and wavefront on four types of structured beams as samples. The resulting two-dimensional distributions of polarization states and wavefront identify the four beams as radially and azimuthally polarized beams and left- and right-handed optical vortices. Our sensing technology has the potential to enhance our understanding of the nature of light in the fields of laser science, astrophysics, and even ophthalmology.
19. Chen Z, Zheng S, Wang W, Song J, Yuan X. Temporal structured illumination and vision-transformer enables large field-of-view binary snapshot ptychography. Opt Express 2024;32:1540-1551. PMID: 38297703; DOI: 10.1364/oe.504721.
Abstract
Ptychography, a widely used computational imaging method, generates images by processing coherent interference patterns scattered from an object of interest. To capture scenes with a large field-of-view (FoV) and high spatial resolution simultaneously in a single shot, we propose a temporal-compressive structured-light ptychography system. A novel three-step reconstruction algorithm composed of multi-frame spectra reconstruction, phase retrieval, and multi-frame image stitching is developed, where we employ an emerging Transformer-based network in the first step. Experimental results demonstrate that our system can expand the FoV by 20× without losing spatial resolution. Our results offer great potential for enabling lensless imaging of molecules with a large FoV as well as high spatio-temporal resolution. We also note that, due to the loss of low-intensity information in the compressed sensing process, our method is so far only applicable to binary targets.
20. Wang J, Zhao X, Wang Y, Li D. Quantitative real-time phase microscopy for extended depth-of-field imaging based on the 3D single-shot differential phase contrast (ssDPC) imaging method. Opt Express 2024;32:2081-2096. PMID: 38297745; DOI: 10.1364/oe.512285.
Abstract
Optical diffraction tomography (ODT) is a promising label-free imaging method capable of quantitatively measuring the three-dimensional (3D) refractive index distribution of transparent samples. In recent years, partially coherent ODT (PC-ODT) has attracted increasing attention due to its system simplicity and absence of laser speckle noise. Quantitative phase imaging (QPI) technologies represented by Fourier ptychographic microscopy (FPM), differential phase contrast (DPC) imaging, and intensity diffraction tomography (IDT) need to collect several or even hundreds of intensity images, which usually introduces motion artifacts when shooting fast-moving targets, leading to a decrease in image quality. Hence, a quantitative real-time phase microscopy (qRPM) method for extended depth-of-field (DOF) imaging based on 3D single-shot differential phase contrast (ssDPC) imaging is proposed in this study. qRPM incorporates a microlens array (MLA) to simultaneously collect spatial and angular information. In the subsequent optical information processing, a deconvolution method is used to obtain intensity stacks under different illumination angles from a raw light field image. Importing the obtained intensity stacks into the 3D DPC imaging model finally yields the 3D refractive index distribution. The captured four-dimensional light field information enables the reconstruction of 3D information in a single snapshot and extends the DOF of qRPM. The imaging capability of the proposed qRPM system is experimentally verified on different samples, achieving single-exposure 3D label-free imaging with an extended DOF of 160 µm, nearly 30 times that of a traditional microscope system.
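To make the deconvolution step concrete, here is a minimal Wiener-filter sketch that extracts per-angle intensity images from one raw light-field frame given calibrated PSFs. The filter choice and array shapes are our assumptions; the paper's pipeline may use a different deconvolution scheme.

```python
import numpy as np

def wiener_view_extraction(raw, psfs, eps=1e-3):
    """Extract per-illumination-angle intensity images by deconvolution.

    raw  : (H, W) raw image behind the microlens array.
    psfs : (n_views, H, W) calibrated PSFs, one per illumination angle.
    """
    R = np.fft.fft2(raw)
    views = []
    for h in psfs:
        H = np.fft.fft2(np.fft.ifftshift(h))   # PSF centered at the origin
        views.append(np.real(np.fft.ifft2(R * np.conj(H) / (np.abs(H) ** 2 + eps))))
    return np.stack(views)
```

The resulting stack is what would then be fed into the 3D DPC model to solve for the refractive index.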
21. Li L, Wang S, Zhao F, Zhang Y, Wen S, Chai H, Gao Y, Wang W, Cao L, Yang Y. Single-shot deterministic complex amplitude imaging with a single-layer metalens. Sci Adv 2024;10:eadl0501. PMID: 38181086; PMCID: PMC10776002; DOI: 10.1126/sciadv.adl0501.
Abstract
Conventional imaging systems can only capture light intensity. Meanwhile, the lost phase information may be critical for a variety of applications such as label-free microscopy and optical metrology. Existing phase retrieval techniques typically require a bulky setup, multiframe measurements, or prior knowledge of the target scene. Here, we propose an extremely compact system for complex amplitude imaging, leveraging the versatility of a single-layer metalens to generate spatially multiplexed and polarization phase-shifted point spread functions. Combining the metalens with a polarization camera, the system can simultaneously record four polarization shearing interference patterns along both in-plane directions, thus allowing the deterministic reconstruction of the complex amplitude light field in a single shot. Using an incoherent light-emitting diode as the illumination, we experimentally demonstrated speckle-noise-free complex amplitude imaging for both static and moving objects with a tailored magnification ratio and field of view. The miniaturized and robust system may open the door for complex amplitude imaging in portable devices for point-of-care applications.
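The deterministic step the abstract refers to is, at its core, four-step phase shifting; a minimal sketch in that textbook form follows. The paper's full pipeline additionally unwraps and integrates the shear phase along both in-plane directions, which is omitted here.

```python
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Phase from four phase-shifted shearing interferograms.

    i0..i270 : images behind the 0/90/180/270-degree polarizer pixels
    of the polarization camera, captured in a single shot.
    """
    return np.arctan2(i270 - i90, i0 - i180)
```

Integrating the two orthogonal shear phases then yields the object phase up to a constant offset.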
Affiliation(s)
- Feng Zhao
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yixin Zhang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Shun Wen
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Huichao Chai
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yunhui Gao
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Wenhui Wang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Liangcai Cao
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
22. Su D, Gao W, Li H, Guo C, Zhao W. Highly flexible and compact volumetric endoscope by integrating multiple micro-imaging devices. Opt Lett 2023;48:6416-6419. PMID: 38099762; DOI: 10.1364/ol.506261.
Abstract
A light-field endoscope can simultaneously capture the three-dimensional information of in situ lesions and enables single-shot quantitative depth perception with minimal invasion, improving surgical and diagnostic accuracy. However, due to oversized rigid probes, clinical applications of current techniques are limited by their cumbersome devices. To minimize the size and enhance flexibility, here we report a highly flexible and compact volumetric endoscope employing precision-machined multiple micro-imaging devices (MIRDs). To further preserve flexibility, the designed MIRD, with a diameter and height of 5 mm, is packaged in pliable polyamide and uses soft data cables for data transmission. It achieves an optimal lateral resolvability of 31 µm and axial resolvability of 255 µm, with an imaging volume over 2.3 × 2.3 × 10 mm³. Our technique allows easy access to the interior of an organism through a natural orifice, which we verified through observational experiments on the stomach and rectum of a rabbit. Together, we expect this device can assist in the removal of tumors and polyps as well as the identification of certain early cancers of the digestive tract.
23. Guzmán F, Skowronek J, Vera E, Brady DJ. Compressive video via IR-pulsed illumination. Opt Express 2023;31:39201-39212. PMID: 38018004; DOI: 10.1364/oe.506011.
Abstract
We propose and demonstrate a compressive temporal imaging system based on pulsed illumination to encode temporal dynamics into the signal received by the imaging sensor during exposure time. Our approach enables >10x increase in effective frame rate without increasing camera complexity. To mitigate the complexity of the inverse problem during reconstruction, we introduce two keyframes: one before and one after the coded frame. We also craft what we believe to be a novel deep learning architecture for improved reconstruction of the high-speed scenes, combining specialized convolutional and transformer architectures. Simulation and experimental results clearly demonstrate the reconstruction of high-quality, high-speed videos from the compressed data.
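The sketch below writes out the coded-exposure forward model implied by the abstract: the sensor integrates the scene while the IR illumination pulses on and off, and two uncoded keyframes bracket the coded frame. Shapes, the pulse code, and the toy data are assumptions; the reconstruction network is omitted.

```python
import numpy as np

def coded_exposure(frames, codes):
    """Integrate a pulsed scene into one coded frame.

    frames : (T, H, W) high-speed sub-frames within one exposure.
    codes  : (T,) binary on/off pulse pattern of the illumination.
    """
    return np.tensordot(codes, frames, axes=1)   # sum_t codes[t] * frames[t]

rng = np.random.default_rng(0)
frames = rng.random((16, 64, 64))                # toy 16x-compressed clip
codes = rng.integers(0, 2, 16)                   # IR pulse pattern
key_before, key_after = frames[0], frames[-1]    # uncoded keyframes
coded = coded_exposure(frames, codes)            # single coded measurement
```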
24. Chen Y, Nazhamaiti M, Xu H, Meng Y, Zhou T, Li G, Fan J, Wei Q, Wu J, Qiao F, Fang L, Dai Q. All-analog photoelectronic chip for high-speed vision tasks. Nature 2023;623:48-57. PMID: 37880362; PMCID: PMC10620079; DOI: 10.1038/s41586-023-06558-8.
Abstract
Photonic computing enables faster and more energy-efficient processing of vision data [1-5]. However, experimental superiority of deployable systems remains a challenge because of complicated optical nonlinearities, considerable power consumption of analog-to-digital converters (ADCs) for downstream digital processing and vulnerability to noises and system errors [1,6-8]. Here we propose an all-analog chip combining electronic and light computing (ACCEL). It has a systemic energy efficiency of 74.8 peta-operations per second per watt and a computing speed of 4.6 peta-operations per second (more than 99% implemented by optics), corresponding to more than three and one order of magnitude higher than state-of-the-art computing processors, respectively. After applying diffractive optical computing as an optical encoder for feature extraction, the light-induced photocurrents are directly used for further calculation in an integrated analog computing chip without the requirement of analog-to-digital converters, leading to a low computing latency of 72 ns for each frame. With joint optimizations of optoelectronic computing and adaptive training, ACCEL achieves competitive classification accuracies of 85.5%, 82.0% and 92.6%, respectively, for Fashion-MNIST, 3-class ImageNet classification and a time-lapse video recognition task experimentally, while showing superior system robustness in low-light conditions (0.14 fJ μm⁻² each frame). ACCEL can be used across a broad range of applications such as wearable devices, autonomous driving and industrial inspections.
Affiliation(s)
- Yitong Chen: Department of Automation, Tsinghua University, Beijing, China
- Han Xu: Department of Electronic Engineering, Tsinghua University, Beijing, China
- Yao Meng: Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Tiankuang Zhou: Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China; Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Guangpu Li: Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Jingtao Fan: Department of Automation, Tsinghua University, Beijing, China
- Qi Wei: Department of Precision Instruments, Tsinghua University, Beijing, China
- Jiamin Wu: Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Fei Qiao: Department of Electronic Engineering, Tsinghua University, Beijing, China
- Lu Fang: Department of Electronic Engineering, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
25
Meng Y, Zhong H, Xu Z, He T, Kim JS, Han S, Kim S, Park S, Shen Y, Gong M, Xiao Q, Bae SH. Functionalizing nanophotonic structures with 2D van der Waals materials. NANOSCALE HORIZONS 2023; 8:1345-1365. [PMID: 37608742 DOI: 10.1039/d3nh00246b] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/24/2023]
Abstract
The integration of two-dimensional (2D) van der Waals materials with nanostructures has triggered a wide spectrum of optical and optoelectronic applications. Photonic structures of conventional materials typically lack efficient reconfigurability or multifunctionality. Atomically thin 2D materials can thus add functionality and reconfigurability to a well-established library of photonic structures, including integrated waveguides, optical fibers, photonic crystals, and metasurfaces. Meanwhile, the interaction between light and van der Waals materials can be drastically enhanced by leveraging micro-cavities or resonators with high optical confinement. The unique van der Waals surfaces of 2D materials allow them to be transferred onto and combined with various prefabricated photonic templates with high degrees of freedom, serving as optical gain, modulation, sensing, or plasmonic media for diverse applications. Here, we review recent advances in synergizing 2D materials with nanophotonic structures to prototype novel functionality or enhance performance. Challenges in scalable 2D material preparation and transfer, as well as emerging opportunities in integrating van der Waals building blocks beyond 2D materials, are also discussed.
Affiliation(s)
- Yuan Meng: Department of Mechanical Engineering and Materials Science, Washington University in St. Louis, St. Louis, MO, USA
- Hongkun Zhong: State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
- Zhihao Xu: Institute of Materials Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Tiantian He: State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
- Justin S Kim: Institute of Materials Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Sangmoon Han: Department of Mechanical Engineering and Materials Science, Washington University in St. Louis, St. Louis, MO, USA
- Sunok Kim: Department of Mechanical Engineering and Materials Science, Washington University in St. Louis, St. Louis, MO, USA
- Seoungwoong Park: Institute of Materials Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Yijie Shen: Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, Singapore; Optoelectronics Research Centre, University of Southampton, Southampton, UK
- Mali Gong: State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
- Qirong Xiao: State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
- Sang-Hoon Bae: Department of Mechanical Engineering and Materials Science, Washington University in St. Louis, St. Louis, MO, USA; Institute of Materials Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA
26
Kong C, Wang Y, Xiao G. Neuron populations across layer 2-6 in the mouse visual cortex exhibit different coding abilities in the awake mice. Front Cell Neurosci 2023; 17:1238777. [PMID: 37817884 PMCID: PMC10560757 DOI: 10.3389/fncel.2023.1238777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Accepted: 09/05/2023] [Indexed: 10/12/2023] Open
Abstract
Introduction: The visual cortex is a key region of the mouse brain responsible for processing visual information. Comprising six distinct layers, each with unique neuronal types and connections, it exhibits diverse decoding properties across its layers. This study investigated the relationship between visual-stimulus decoding properties and the cortical layers of the visual cortex, and how this relationship varies across decoders and brain regions.
Methods: We analyzed two publicly available datasets of visual cortex neuronal responses obtained by two-photon microscopy and tested various types of decoders for visual cortex decoding.
Results: The decoding accuracy of neuronal populations of matched size varies among visual cortical layers for visual stimuli such as drifting gratings and natural images. In particular, layer 4 neurons in VISp exhibited significantly higher decoding accuracy for visual stimulus identity than other layers. In VISm, however, populations of the same size in layer 2/3 decoded more accurately than those in layer 4, although overall accuracy was lower than in VISp and VISl. Furthermore, the SVM surpassed the other decoders in accuracy, and the variation in decoding performance across layers was consistent among decoders. Additionally, the difference in decoding accuracy across imaging depths was not associated with the mean orientation selectivity index (OSI) or the mean direction selectivity index (DSI) of the neurons, but showed a significant positive correlation with the mean reliability and mean signal-to-noise ratio (SNR) of each layer's neuronal population.
Discussion: These findings lend new insights into the decoding properties of the visual cortex, highlighting the roles of cortical layer and decoder choice in determining decoding accuracy. The correlations identified between decoding accuracy and factors such as reliability and SNR pave the way for a more nuanced understanding of visual cortex function.
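The population-decoding analysis described above follows a standard recipe: pool single-trial responses of a fixed-size neuronal population, then cross-validate a classifier on stimulus identity. A minimal sketch with simulated data is below; the trial counts, population size, and noise level are hypothetical stand-ins for the two-photon datasets.

```python
# Minimal sketch of fixed-size population decoding with an SVM
# (simulated responses; shapes and noise are hypothetical).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_neurons, n_stimuli = 600, 80, 8    # e.g. 8 drifting-grating IDs
y = rng.integers(0, n_stimuli, n_trials)       # stimulus identity per trial
tuning = rng.normal(size=(n_stimuli, n_neurons))
X = tuning[y] + rng.normal(scale=1.5, size=(n_trials, n_neurons))

decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_stimuli:.2f})")
```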
Affiliation(s)
- Chui Kong: School of Information Science and Technology, Fudan University, Shanghai, China; Department of Communication Science and Engineering, Fudan University, Shanghai, China
- Yangzhen Wang: School of Information Science and Technology, Fudan University, Shanghai, China; Department of Automation, Tsinghua University, Beijing, China
- Guihua Xiao: School of Information Science and Technology, Fudan University, Shanghai, China; Department of Automation, Tsinghua University, Beijing, China; BNRist, Tsinghua University, Beijing, China
27
Yu Y, Xiong T, Kang J, Zhou Z, Long H, Liu DY, Liu L, Liu YY, Yang J, Wei Z. Dual-band real-time object identification via polarization reversal based on 2D GeSe image sensor. Sci Bull (Beijing) 2023; 68:1867-1870. [PMID: 37563030 DOI: 10.1016/j.scib.2023.08.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 06/21/2023] [Accepted: 08/01/2023] [Indexed: 08/12/2023]
Affiliation(s)
- Yali Yu: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Tao Xiong: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Jun Kang: Beijing Computational Science Research Center, Beijing 100193, China
- Ziqi Zhou: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Haoran Long: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Duan-Yang Liu: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Liyuan Liu: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Yue-Yang Liu: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Juehan Yang: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Zhongming Wei: State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
28
Chen K, Xie W, Deng Y, Han J, Zhu Y, Sun J, Yuan K, Wu L, Deng Y. Alkaloid Precipitant Reaction Inspired Controllable Synthesis of Mesoporous Tungsten Oxide Spheres for Biomarker Sensing. ACS NANO 2023; 17:15763-15775. [PMID: 37556610 DOI: 10.1021/acsnano.3c03549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/11/2023]
Abstract
Highly porous sensitive materials with well-defined structures and morphologies are extremely desirable for developing high-performance chemiresistive gas sensors. Herein, inspired by the classical alkaloid precipitant reaction, a robust and reliable synthesis method directed by active mesoporous nitrogen-containing polymer spheres was demonstrated for the controllable construction of heteroatom-doped mesoporous tungsten oxide spheres. In a typical synthesis, P-doped mesoporous WO3 monodisperse spheres with radially oriented channels (P-mWO3-R) were obtained with a diameter of ∼180 nm, a high specific surface area, and a crystalline skeleton. The in situ-introduced P atoms effectively adjust the coordination environment of the W atoms (Wδ+-Ov), giving rise to dramatically enhanced active surface-adsorbed oxygen species and unusual metastable ε-WO3 crystallites. The P-mWO3-R spheres were applied to the sensing of 3-hydroxy-2-butanone (3H2B), a biomarker of the foodborne pathogenic bacterium Listeria monocytogenes (LM). The sensor exhibited a high response (Ra/Rg = 29) to 3 ppm 3H2B, fast response/recovery dynamics (26/7 s), outstanding selectivity, and good long-term stability. Furthermore, the device was integrated into a wireless sensing module to enable remote, real-time, and precise detection of LM in practical applications, making it possible to conveniently evaluate food quality using gas sensors.
Affiliation(s)
- Keyu Chen: Department of Chemistry, Department of Gastroenterology and Hepatology, Zhongshan Hospital, State Key Laboratory of Molecular Engineering of Polymers, State Key Lab of Transducer Technology, Shanghai Key Laboratory of Molecular Catalysis and Innovative Materials, Fudan University, Shanghai 200433, China
- Wenhe Xie: Department of Chemistry, Department of Gastroenterology and Hepatology, Zhongshan Hospital, State Key Laboratory of Molecular Engineering of Polymers, State Key Lab of Transducer Technology, Shanghai Key Laboratory of Molecular Catalysis and Innovative Materials, Fudan University, Shanghai 200433, China
- Yu Deng: Department of Chemistry, Department of Gastroenterology and Hepatology, Zhongshan Hospital, State Key Laboratory of Molecular Engineering of Polymers, State Key Lab of Transducer Technology, Shanghai Key Laboratory of Molecular Catalysis and Innovative Materials, Fudan University, Shanghai 200433, China
- Jingting Han: Ministry of Agriculture and Shanghai Engineering Research Center of Aquatic Product Processing & Preservation, Shanghai Ocean University, Shanghai 201306, China
- Yongheng Zhu: Ministry of Agriculture and Shanghai Engineering Research Center of Aquatic Product Processing & Preservation, Shanghai Ocean University, Shanghai 201306, China
- Jianguo Sun: Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University; NHC Key Laboratory of Myopia (Fudan University), Shanghai 200031, China
- Kaiping Yuan: Frontier Institute of Chip and System, State Key Laboratory of Integrated Chips and Systems, Fudan University, Shanghai 200433, China
- Limin Wu: Institute of Energy and Materials Chemistry, Inner Mongolia University, Hohhot 010021, China
- Yonghui Deng: Department of Chemistry, Department of Gastroenterology and Hepatology, Zhongshan Hospital, State Key Laboratory of Molecular Engineering of Polymers, State Key Lab of Transducer Technology, Shanghai Key Laboratory of Molecular Catalysis and Innovative Materials, Fudan University, Shanghai 200433, China
29
Zhang Y, Song X, Xie J, Hu J, Chen J, Li X, Zhang H, Zhou Q, Yuan L, Kong C, Shen Y, Wu J, Fang L, Dai Q. Large depth-of-field ultra-compact microscope by progressive optimization and deep learning. Nat Commun 2023; 14:4118. [PMID: 37433856 DOI: 10.1038/s41467-023-39860-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 06/28/2023] [Indexed: 07/13/2023] Open
Abstract
The optical microscope is customarily an instrument of substantial size and expense but limited performance. Here we report an integrated microscope that achieves optical performance beyond that of a commercial microscope with a 5×, NA 0.1 objective while occupying only 0.15 cm3 and weighing 0.5 g, five orders of magnitude smaller than a conventional microscope. To achieve this, we propose a progressive optimization pipeline that systematically optimizes both aspherical lenses and diffractive optical elements with over 30 times less memory than end-to-end optimization. By designing a simulation-supervised deep neural network for spatially varying deconvolution during optical design, we accomplish over 10 times improvement in depth of field compared with traditional microscopes, with strong generalization across a wide variety of samples. To show its unique advantages, the integrated microscope is mounted on a cell phone without any accessories for portable diagnostics. We believe our method provides a new framework for the design of miniaturized high-performance imaging systems that integrate aspherical optics, computational optics, and deep learning.
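To illustrate what "spatially varying deconvolution" must undo, here is a toy forward model in which the blur kernel changes across the field of view; pairs of (blurred, sharp) images of this kind are what a simulation-supervised network can be trained on. The tiling, kernel shape, and widths are hypothetical, not the paper's optical model.

```python
# Toy spatially varying blur: tile the field of view and convolve each
# tile with its own local PSF (Gaussian widths are hypothetical).
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=9):
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def spatially_varying_blur(img, tiles=4):
    out = np.zeros_like(img)
    h = img.shape[0] // tiles
    for i in range(tiles):            # PSF widens toward the field edge
        band = fftconvolve(img, gaussian_psf(0.8 + 0.5 * i), mode="same")
        out[i * h:(i + 1) * h] = band[i * h:(i + 1) * h]
    return out

sharp = np.random.default_rng(2).random((64, 64))
blurred = spatially_varying_blur(sharp)   # training pair: (blurred, sharp)
```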
Affiliation(s)
- Yuanlong Zhang: Department of Automation, Tsinghua University, 100084, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China; Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
- Xiaofei Song: Tsinghua Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Jiachen Xie: Department of Automation, Tsinghua University, 100084, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China; Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
- Jing Hu: State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, 310027, Hangzhou, China
- Jiawei Chen: OPPO Research Institute, 518101, Shenzhen, China
- Xiang Li: OPPO Research Institute, 518101, Shenzhen, China
- Haiyu Zhang: OPPO Research Institute, 518101, Shenzhen, China
- Qiqun Zhou: OPPO Research Institute, 518101, Shenzhen, China
- Lekang Yuan: Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, 518055, Shenzhen, China
- Chui Kong: School of Information Science and Technology, Fudan University, 200433, Shanghai, China
- Yibing Shen: State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, 310027, Hangzhou, China
- Jiamin Wu: Department of Automation, Tsinghua University, 100084, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China; Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
- Lu Fang: Department of Electronic Engineering, Tsinghua University, 100084, Beijing, China
- Qionghai Dai: Department of Automation, Tsinghua University, 100084, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China; Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
30
Affiliation(s)
- Xinyang Li: Department of Automation, Tsinghua University, Beijing, China
- Yuanlong Zhang: Department of Automation, Tsinghua University, Beijing, China
- Jiamin Wu: Department of Automation, Tsinghua University, Beijing, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China
31
Feng BY, Guo H, Xie M, Boominathan V, Sharma MK, Veeraraghavan A, Metzler CA. NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media. SCIENCE ADVANCES 2023; 9:eadg4671. [PMID: 37379386 DOI: 10.1126/sciadv.adg4671] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 05/23/2023] [Indexed: 06/30/2023]
Abstract
Diffraction-limited optical imaging through scattering media has the potential to transform many applications such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators, but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, static and dynamic scenes captured through static and dynamic aberrations.
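The core estimation idea, jointly fitting the scene and the aberration to measurements taken under known phase modulations, can be sketched as a toy gradient-descent loop. This is our compact re-implementation of the concept only: the pixel-grid scene below stands in for NeuWS's neural signal representation, and the sizes, iteration count, and noise-free measurements are all hypothetical.

```python
# Toy joint scene/aberration estimation under known phase modulations
# (concept sketch, not the NeuWS code).
import torch

N, K = 32, 12                            # image size, number of modulations
torch.manual_seed(0)
true_x = torch.rand(N, N)                # ground-truth scene
true_phi = 2.0 * torch.randn(N, N)       # unknown aberration phase
mods = 3.0 * torch.randn(K, N, N)        # known SLM phase modulations

def psf(phi):                            # incoherent PSF from pupil phase
    return torch.fft.fft2(torch.exp(1j * phi)).abs() ** 2 / N**2

def forward(x, phi):                     # blur via FFT-domain convolution
    return torch.stack([
        torch.fft.ifft2(torch.fft.fft2(x) * torch.fft.fft2(psf(phi + m))).real
        for m in mods])

meas = forward(true_x, true_phi)         # K modulated measurements

x = torch.rand(N, N, requires_grad=True)     # stand-in for the neural scene
phi = torch.zeros(N, N, requires_grad=True)  # aberration estimate
opt = torch.optim.Adam([x, phi], lr=0.05)
for _ in range(300):                     # maximum-likelihood (MSE) fitting
    opt.zero_grad()
    loss = ((forward(x, phi) - meas) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final data-fit loss: {loss.item():.2e}")
```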
Affiliation(s)
- Brandon Y Feng: Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
- Haiyun Guo: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Mingyang Xie: Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
- Vivek Boominathan: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Manoj K Sharma: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Ashok Veeraraghavan: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Christopher A Metzler: Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
32
Liu Z, Hu G, Ye H, Wei M, Guo Z, Chen K, Liu C, Tang B, Zhou G. Mold-free self-assembled scalable microlens arrays with ultrasmooth surface and record-high resolution. LIGHT, SCIENCE & APPLICATIONS 2023; 12:143. [PMID: 37286533 DOI: 10.1038/s41377-023-01174-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 04/11/2023] [Accepted: 04/26/2023] [Indexed: 06/09/2023]
Abstract
Microlens arrays (MLAs) based on selective wetting have opened new avenues for compact, miniaturized imaging and display techniques with ultrahigh resolution, beyond traditional bulky and volumetric optics. However, the selective-wetting lenses explored so far have lacked precisely defined patterns for highly controllable wettability contrast, limiting the achievable droplet curvature and numerical aperture, which is a major challenge for practical high-performance MLAs. Here we report a mold-free, self-assembly approach for the mass production of scalable MLAs with ultrasmooth surfaces, ultrahigh resolution, and a large tuning range of curvature. Selective surface modification based on tunable oxygen plasma enables precise patterning with adjustable chemical contrast, creating large-scale microdroplet arrays with controlled curvature. The numerical aperture of the MLAs reaches up to 0.26 and can be precisely tuned by adjusting the modification intensity or the droplet dose. The fabricated MLAs have high-quality surfaces with subnanometer roughness and allow record-high-resolution imaging up to an equivalent of 10,328 ppi, as we demonstrate. This study provides a cost-effective roadmap for the mass production of high-performance MLAs, which may find applications in the rapidly proliferating integral imaging industry and high-resolution displays.
Affiliation(s)
- Zhihao Liu: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
- Guangwei Hu: School of Electrical and Electronic Engineering, 50 Nanyang Avenue, Nanyang Technological University, Singapore, 639798, Singapore
- Huapeng Ye: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
- Miaoyang Wei: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
- Zhenghao Guo: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
- Kexu Chen: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
- Chen Liu: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
- Biao Tang: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
- Guofu Zhou: Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute of Electronic Paper Displays, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China; National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China; Shenzhen Guohua Optoelectronics Tech. Co. Ltd, Shenzhen, 518110, China
33
Detectors that encode angles of incoming light as colour. Nature 2023:10.1038/d41586-023-01366-6. [PMID: 37165216 DOI: 10.1038/d41586-023-01366-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
34
Yi L, Hou B, Zhao H, Liu X. X-ray-to-visible light-field detection through pixelated colour conversion. Nature 2023:10.1038/s41586-023-05978-w. [PMID: 37165192 DOI: 10.1038/s41586-023-05978-w] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 03/20/2023] [Indexed: 05/12/2023]
Abstract
Light-field detection measures both the intensity of light rays and their precise direction in free space. However, current light-field detection techniques either require complex microlens arrays or are limited to the ultraviolet-visible light wavelength ranges1-4. Here we present a robust, scalable method based on lithographically patterned perovskite nanocrystal arrays that can be used to determine radiation vectors from X-rays to visible light (0.002-550 nm). With these multicolour nanocrystal arrays, light rays from specific directions can be converted into pixelated colour outputs with an angular resolution of 0.0018°. We find that three-dimensional light-field detection and spatial positioning of light sources are possible by modifying nanocrystal arrays with specific orientations. We also demonstrate three-dimensional object imaging and visible light and X-ray phase-contrast imaging by combining pixelated nanocrystal arrays with a colour charge-coupled device. The ability to detect light direction beyond optical wavelengths through colour-contrast encoding could enable new applications, for example, in three-dimensional phase-contrast imaging, robotics, virtual reality, tomographic biological imaging and satellite autonomous navigation.
Affiliation(s)
- Luying Yi: Department of Chemistry, National University of Singapore, Singapore, Singapore
- Bo Hou: Department of Chemistry, National University of Singapore, Singapore, Singapore
- He Zhao: Department of Chemistry, National University of Singapore, Singapore, Singapore; Joint School of National University of Singapore and Tianjin University, Fuzhou, China
- Xiaogang Liu: Department of Chemistry, National University of Singapore, Singapore, Singapore; Joint School of National University of Singapore and Tianjin University, Fuzhou, China; Center for Functional Materials, National University of Singapore Suzhou Research Institute, Suzhou, China; Institute of Materials Research and Engineering, Agency for Science, Technology and Research, Singapore, Singapore
35
Zhao Z, Zhou Y, Liu B, He J, Zhao J, Cai Y, Fan J, Li X, Wang Z, Lu Z, Wu J, Qi H, Dai Q. Two-photon synthetic aperture microscopy for minimally invasive fast 3D imaging of native subcellular behaviors in deep tissue. Cell 2023; 186:2475-2491.e22. [PMID: 37178688 DOI: 10.1016/j.cell.2023.04.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 02/21/2023] [Accepted: 04/10/2023] [Indexed: 05/15/2023]
Abstract
Holistic understanding of physiopathological processes requires noninvasive 3D imaging in deep tissue across multiple spatial and temporal scales, to link diverse transient subcellular behaviors with long-term physiogenesis. Despite the broad applications of two-photon microscopy (TPM), there remains an inevitable tradeoff among spatiotemporal resolution, imaging volume, and duration because of the point-scanning scheme, accumulated phototoxicity, and optical aberrations. Here, we harnessed the concept of synthetic aperture radar in TPM to achieve aberration-corrected 3D imaging of subcellular dynamics at the millisecond scale for over 100,000 large volumes in deep tissue, with a three-orders-of-magnitude reduction in photobleaching. With these advantages, we identified direct intercellular communications through migrasome generation following traumatic brain injury, visualized the formation process of the germinal center in the mouse lymph node, and characterized heterogeneous cellular states in the mouse visual cortex, opening up a horizon for intravital imaging to understand the organization and functions of biological systems at a holistic level.
Affiliation(s)
- Zhifeng Zhao: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Yiliang Zhou: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Bo Liu: Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China; Department of Basic Medical Sciences, School of Medicine, Tsinghua University, Beijing 100084, China
- Jing He: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Jiayin Zhao: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Yeyi Cai: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Jingtao Fan: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China
- Xinyang Li: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Zilin Wang: Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Department of Anesthesiology, the First Medical Center, Chinese PLA General Hospital, Beijing 100853, China
- Zhi Lu: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Jiamin Wu: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Hai Qi: Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China; Department of Basic Medical Sciences, School of Medicine, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory for Immunological Research on Chronic Diseases, Tsinghua University, Beijing 100084, China; Beijing Frontier Research Center for Biological Structure, Tsinghua University, Beijing 100084, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
36
Lu Z, Liu Y, Jin M, Luo X, Yue H, Wang Z, Zuo S, Zeng Y, Fan J, Pang Y, Wu J, Yang J, Dai Q. Virtual-scanning light-field microscopy for robust snapshot high-resolution volumetric imaging. Nat Methods 2023; 20:735-746. [PMID: 37024654 PMCID: PMC10172145 DOI: 10.1038/s41592-023-01839-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Accepted: 03/07/2023] [Indexed: 04/08/2023]
Abstract
High-speed three-dimensional (3D) intravital imaging in animals is useful for studying transient subcellular interactions and functions in health and disease. Light-field microscopy (LFM) provides a computational solution for snapshot 3D imaging with low phototoxicity but is restricted by low resolution and by reconstruction artifacts induced by optical aberrations, motion, and noise. Here, we propose virtual-scanning LFM (VsLFM), a physics-based deep learning framework that increases the resolution of LFM up to the diffraction limit within a snapshot. By constructing a 40 GB high-resolution scanning LFM dataset across different species, we exploit physical priors between phase-correlated angular views to address the frequency aliasing problem. This enables us to bypass hardware scanning and its associated motion artifacts. We show that VsLFM achieves ultrafast 3D imaging of diverse processes such as the beating heart in embryonic zebrafish, voltage activity in Drosophila brains, and neutrophil migration in the mouse liver at up to 500 volumes per second.
Affiliation(s)
- Zhi Lu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Yu Liu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Manchang Jin
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Xin Luo
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Huanjing Yue
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Zian Wang
- Department of Automation, Tsinghua University, Beijing, China
| | - Siqing Zuo
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Yunmin Zeng
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Jiaqi Fan
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Yanwei Pang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
| | - Jingyu Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China.
| | - Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
| |
37
Chen H, Zhang H, He Y, Wei L, Yang J, Li X, Huang L, Wei K. Direct wavefront sensing with a plenoptic sensor based on deep learning. OPTICS EXPRESS 2023; 31:10320-10332. [PMID: 37157581 DOI: 10.1364/oe.481433] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Traditional plenoptic wavefront sensors (PWS) suffer from a pronounced step change in their slope response, which degrades the performance of phase retrieval. In this paper, a neural network model combining the transformer architecture with the U-Net model is used to restore the wavefront directly from the plenoptic image of the PWS. Simulation results show that the averaged root mean square error (RMSE) of the residual wavefront is less than 1/14λ (the Marechal criterion), demonstrating that the proposed method overcomes the nonlinearity inherent in PWS wavefront sensing. In addition, our model performs better than recently developed deep learning models and the traditional modal approach. Furthermore, the robustness of our model to turbulence strength and signal level is tested, demonstrating good generalizability. To the best of our knowledge, this is the first direct wavefront detection with a deep-learning-based method in PWS-based applications, achieving state-of-the-art performance.
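The λ/14 threshold quoted above is the Marechal criterion; under the common extended-Marechal approximation it corresponds to a Strehl ratio of about 0.8, as a quick check shows:

```python
# Marechal criterion check: an RMS residual of lambda/14 (in waves) implies
# a Strehl ratio near 0.8 under the extended Marechal approximation.
import math

rms_waves = 1 / 14
strehl = math.exp(-(2 * math.pi * rms_waves) ** 2)
print(f"Strehl ratio ~ {strehl:.2f}")   # ~0.82, near-diffraction-limited
```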
38
Qiu Y, Zhao Z, Yang J, Cheng Y, Liu Y, Yang BR, Qin Z. Light field displays with computational vision correction for astigmatism and high-order aberrations with real-time implementation. OPTICS EXPRESS 2023; 31:6262-6280. [PMID: 36823887 DOI: 10.1364/oe.485547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 01/29/2023] [Indexed: 06/18/2023]
Abstract
Vision-correcting near-eye displays are necessary given the large population with refractive errors. However, varifocal optics cannot effectively address astigmatism (AST) and high-order aberrations (HOAs), and freeform optics offer little prescription flexibility. A computational solution is therefore desired that corrects AST and HOAs with high prescription flexibility and no increase in volume or hardware complexity; the computational complexity should also support real-time rendering. We propose that light field displays can achieve such computational vision correction by manipulating sampling rays so that the rays forming a voxel are re-focused on the retina. The ray manipulation merely requires updating the elemental image array (EIA), making this a fully computational solution. The correction is first calculated from the eye's wavefront map and then refined by a simulator performing iterative optimization with a schematic eye model. Using examples of HOAs and AST, we demonstrate that corrected EIAs confine the sampling rays to within ±1 arcmin on the retina; correspondingly, the synthesized image is recovered to nearly the clarity of normal vision. We also propose a new voxel-based EIA generation method that addresses computational complexity: all voxel positions and the mapping between voxels and their homogeneous pixels are acquired in advance and stored in a lookup table, yielding an ultra-fast rendering speed of 10 ms per frame at no cost in computing hardware or rendering accuracy. Finally, experimental verification is carried out by introducing HOAs and AST with customized lenses in front of a camera, and significantly recovered images are reported.
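The lookup-table rendering step is simple to sketch: the expensive geometry (which elemental-image pixels are homogeneous to which voxel) is computed once offline, and each frame is then rendered by scattering voxel values through the table. The sketch below is our illustration of that idea; the voxel count, table contents, and image size are hypothetical, not the paper's parameters.

```python
# Sketch of voxel-based EIA rendering from a precomputed lookup table (LUT).
import numpy as np

rng = np.random.default_rng(3)
n_voxels, eia_h, eia_w = 500, 128, 128
# Offline: for each voxel, record the indices of its homogeneous pixels
# (elemental-image pixels whose rays re-focus on that retinal point).
lut = [rng.integers(0, eia_h * eia_w, size=9) for _ in range(n_voxels)]

def render(voxel_intensity):
    """Online step per frame: pure scatter through the LUT, no ray tracing."""
    eia = np.zeros(eia_h * eia_w)
    for v, pixels in enumerate(lut):
        eia[pixels] = voxel_intensity[v]
    return eia.reshape(eia_h, eia_w)

frame = render(rng.random(n_voxels))    # one EIA frame, O(voxels x pixels)
```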