1
Liu W, Delikoyun K, Chen Q, Yildiz A, Myo SK, Kuan WS, Soong JTY, Cove ME, Hayden O, Lee HK. OAH-Net: a deep neural network for efficient and robust hologram reconstruction for off-axis digital holographic microscopy. Biomed Opt Express 2025;16:894-909. [PMID: 40109528; PMCID: PMC11919354; DOI: 10.1364/boe.547292]
Abstract
Off-axis digital holographic microscopy is a high-throughput, label-free imaging technology that provides three-dimensional, high-resolution information about samples, which is particularly useful in large-scale cellular imaging. However, the hologram reconstruction process poses a significant bottleneck for timely data analysis. To address this challenge, we propose a novel reconstruction approach that integrates deep learning with the physical principles of off-axis holography. We initialized part of the network weights based on the physical principle and then fine-tuned them via supervised learning. Our off-axis hologram network (OAH-Net) retrieves phase and amplitude images with errors that fall within the measurement error range attributable to hardware, and its reconstruction speed significantly surpasses the microscope's acquisition rate. Crucially, OAH-Net, trained and validated on diluted whole blood samples, demonstrates remarkable external generalization capabilities on unseen samples with distinct patterns. Additionally, it can be seamlessly integrated with other models for downstream tasks, enabling end-to-end real-time hologram analysis. This capability further expands off-axis holography's applications in both biological and medical studies.
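For context, the physics-based initialization OAH-Net builds on corresponds to the classical Fourier-filtering reconstruction of an off-axis hologram: isolate the +1 diffraction order in the spectrum, recentre it to remove the carrier, and invert. A minimal NumPy sketch of that textbook baseline (not the paper's network; the carrier offset and filter radius here are illustrative assumptions):

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier=(32, 32), radius=16):
    """Baseline off-axis reconstruction by Fourier sideband filtering.

    `carrier` (sideband offset from the spectrum centre, in pixels) and
    `radius` are illustrative; in practice they depend on the
    reference-beam tilt and the sensor sampling.
    """
    ny, nx = hologram.shape
    H = np.fft.fftshift(np.fft.fft2(hologram))
    # Circular mask centred on the +1 diffraction order (the sideband).
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    sideband = np.where(mask, H, 0)
    # Recentre the sideband to demodulate the carrier, then invert.
    sideband = np.roll(sideband, (-carrier[0], -carrier[1]), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.abs(field), np.angle(field)
```

A learned reconstruction like OAH-Net effectively replaces the hand-tuned mask and demodulation with trainable layers initialized from this physical model.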
Affiliation(s)
- Wei Liu
- Bioinformatics Institute, Agency for Science, Technology and Research, 30 Biopolis Street, 138671, Singapore
- Kerem Delikoyun
- School of Computation, Information and Technology, Technical University of Munich, Arcisstr. 21, 80333, Munich, Germany
- TUMCREATE, 1 Create Way, 138602, Singapore
- Qianyu Chen
- School of Computation, Information and Technology, Technical University of Munich, Arcisstr. 21, 80333, Munich, Germany
- TUMCREATE, 1 Create Way, 138602, Singapore
- Si Ko Myo
- TUMCREATE, 1 Create Way, 138602, Singapore
- Win Sen Kuan
- Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr, 117597, Singapore
- Emergency Medicine Department, National University Hospital, 5 Lower Kent Ridge Road, 119074, Singapore
- John Tshon Yit Soong
- Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr, 117597, Singapore
- Department of Medicine, National University of Singapore, NUHS Tower Block, 119228, Singapore
- Matthew Edward Cove
- Department of Medicine, National University of Singapore, NUHS Tower Block, 119228, Singapore
- Oliver Hayden
- School of Computation, Information and Technology, Technical University of Munich, Arcisstr. 21, 80333, Munich, Germany
- TUMCREATE, 1 Create Way, 138602, Singapore
- Hwee Kuan Lee
- Bioinformatics Institute, Agency for Science, Technology and Research, 30 Biopolis Street, 138671, Singapore
- Centre for Frontier AI Research, Agency for Science, Technology and Research, 1 Fusionopolis Way, 138671, Singapore
- International Research Laboratory on Artificial Intelligence, Agency for Science, Technology and Research, 1 Fusionopolis Way, 138671, Singapore
- School of Biological Sciences, Nanyang Technological University, 60 Nanyang Dr, 639798, Singapore
- School of Computing, National University of Singapore, 13 Computing Dr, 117417, Singapore
2
Xu C, Xu H, Giannarou S. Distance regression enhanced with temporal information fusion and adversarial training for robot-assisted endomicroscopy. IEEE Trans Med Imaging 2024;43:3895-3908. [PMID: 38801689; DOI: 10.1109/tmi.2024.3405794]
Abstract
Probe-based confocal laser endomicroscopy (pCLE) has a role in characterising tissue intraoperatively to guide tumour resection during surgery. To capture good-quality pCLE data, which is important for diagnosis, the probe-tissue contact needs to be maintained within a working range on the micrometre scale. This can be achieved through micro-surgical robotic manipulation, which requires automatic estimation of the probe-tissue distance. In this paper, we propose a novel deep regression framework composed of a Deep Regression Generative Adversarial Network (DR-GAN) and a Sequence Attention (SA) module. The aim of DR-GAN is to train the network using an enhanced image-based supervision approach. It extends the standard generator by using a well-defined function for image generation instead of a learnable decoder. DR-GAN also uses a novel learnable neural perceptual loss which, for the first time, combines spatial and frequency domain features. This effectively suppresses the adverse effects of noise in the pCLE data. To incorporate temporal information, we designed the SA module, a cross-attention module enhanced with Radial Basis Function based encoding (SA-RBF). Furthermore, to train the regression framework, we designed a multi-step training mechanism. During inference, the trained network is used to generate data representations which are fused along time in the SA-RBF module to boost regression stability. Our proposed network advances state-of-the-art (SOTA) networks by addressing the challenge of excessive noise in pCLE data and enhancing regression stability. It outperforms SOTA networks on the pCLE Regression dataset (PRD) in terms of accuracy, data quality and stability.
3
Chen N, Cao Y, Li J, Yang Q, Cao K, Tan L. Holography optimization based on combining iterative Green's function algorithm and deep learning method. Opt Lett 2024;49:5619-5622. [PMID: 39353020; DOI: 10.1364/ol.531648]
Abstract
In this Letter, we present a novel, to the best of our knowledge, approach that combines a new numerical iterative algorithm with a physics-informed neural network (PINN) architecture to solve the Helmholtz equation, thereby achieving highly generalized refractive index modulation holography. Firstly, we design a non-uniform refractive index convolutional neural network (NRI-CNN) to modify the refractive index and extract a feature vector. Then we propose an iterative Green's function algorithm (IGFA) to approximately solve the Helmholtz equation. In order to enhance the generalization ability of the solution, the extracted vector is utilized as a multiplier term in IGFA, obtaining an approximate spatial distribution of the light field. Ultimately, we design a U-net to handle residuals of the Helmholtz equation and phases of optical fields (ERPU-net). We apply this method to holographic reconstructions of random Gaussian beams, beams carrying image data, and beams altered by simulated turbulent phases.
4
Fanous MJ, Casteleiro Costa P, Işıl Ç, Huang L, Ozcan A. Neural network-based processing and reconstruction of compromised biophotonic image data. Light Sci Appl 2024;13:231. [PMID: 39237561; PMCID: PMC11377739; DOI: 10.1038/s41377-024-01544-9]
Abstract
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, followed by compensating for the resulting defects through deep learning models trained on a large amount of ideal, superior or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recuperate them through the application of deep learning networks, but also to enhance, in return, other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
Affiliation(s)
- Michael John Fanous
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Paloma Casteleiro Costa
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA.
- Bioengineering Department, University of California, Los Angeles, CA, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA.
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, USA.
5
Yang T, Lu Z. Holo-U2Net for High-Fidelity 3D Hologram Generation. Sensors (Basel) 2024;24:5505. [PMID: 39275416; PMCID: PMC11398203; DOI: 10.3390/s24175505]
Abstract
Traditional methods of hologram generation, such as point-, polygon-, and layer-based physical simulation approaches, suffer from substantial computational overhead and generate low-fidelity holograms. Deep learning-based computer-generated holography demonstrates effective performance in terms of speed and hologram fidelity. There is still potential to enhance a network's capacity for fitting and modeling in deep learning-based computer-generated holography; in particular, the ability of such networks to simulate Fresnel diffraction from a given hologram dataset requires further improvement to meet expectations for high-fidelity holograms. We propose a neural architecture called Holo-U2Net to address the challenge of generating a high-fidelity hologram within an acceptable time frame. Holo-U2Net shows notable performance on hologram evaluation metrics, including an average structural similarity of 0.9988, an average peak signal-to-noise ratio of 46.75 dB, an enhanced correlation coefficient of 0.9996, and a learned perceptual image patch similarity of 0.0008 on the MIT-CGH-4K large-scale hologram dataset.
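As a reference point for the fidelity numbers quoted above, the peak signal-to-noise ratio is defined in decibels from the mean squared error. A generic NumPy sketch (not code from the cited paper):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction; `peak` is the maximum possible pixel value."""
    mse = np.mean((np.asarray(ref) - np.asarray(test)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

On images normalized to [0, 1], the 46.75 dB reported above corresponds to a root-mean-square reconstruction error of roughly 0.5% of full scale.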
Affiliation(s)
- Tian Yang
- School of Computer Science and Technology, Xidian University, South Taibai Road No. 2, Xi'an 710071, China
- Xi'an Key Laboratory of Big Data and Intelligent Vision, Xidian University, South Taibai Road No. 2, Xi'an 710071, China
- Guangzhou Institute of Technology, Xidian University, Zhimin Road No. 83, Guangzhou 510555, China
- Zixiang Lu
- School of Computer Science and Technology, Xidian University, South Taibai Road No. 2, Xi'an 710071, China
- Xi'an Key Laboratory of Big Data and Intelligent Vision, Xidian University, South Taibai Road No. 2, Xi'an 710071, China
- Guangzhou Institute of Technology, Xidian University, Zhimin Road No. 83, Guangzhou 510555, China
6
Yang Q, Guo R, Hu G, Xue Y, Li Y, Tian L. Wide-field, high-resolution reconstruction in computational multi-aperture miniscope using a Fourier neural network. Optica 2024;11:860-871. [PMID: 39895923; PMCID: PMC11784641; DOI: 10.1364/optica.523636]
Abstract
Traditional fluorescence microscopy is constrained by inherent trade-offs among resolution, field of view, and system complexity. To navigate these challenges, we introduce a simple and low-cost computational multi-aperture miniature microscope, utilizing a microlens array for single-shot wide-field, high-resolution imaging. Addressing the challenges posed by extensive view multiplexing and non-local, shift-variant aberrations in this device, we present SV-FourierNet, a multi-channel Fourier neural network. SV-FourierNet facilitates high-resolution image reconstruction across the entire imaging field through its learned global receptive field. We establish a close relationship between the physical spatially varying point-spread functions and the network's learned effective receptive field. This ensures that SV-FourierNet has effectively encapsulated the spatially varying aberrations in our system and learned a physically meaningful function for image reconstruction. Training of SV-FourierNet is conducted entirely on a physics-based simulator. We showcase wide-field, high-resolution video reconstructions on colonies of freely moving C. elegans and imaging of a mouse brain section. Our computational multi-aperture miniature microscope, augmented with SV-FourierNet, represents a major advancement in computational microscopy and may find broad applications in biomedical research and other fields requiring compact microscopy solutions.
Affiliation(s)
- Qianwan Yang
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Ruipeng Guo
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Guorong Hu
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yujia Xue
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yunzhe Li
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, USA
- Neurophotonics Center, Boston University, Boston, Massachusetts 02215, USA
7
Kim J, Lee SJ. Digital in-line holographic microscopy for label-free identification and tracking of biological cells. Mil Med Res 2024;11:38. [PMID: 38867274; PMCID: PMC11170804; DOI: 10.1186/s40779-024-00541-8]
Abstract
Digital in-line holographic microscopy (DIHM) is a non-invasive, real-time, label-free technique that captures three-dimensional (3D) positional, orientational, and morphological information from digital holographic images of living biological cells. Unlike conventional microscopies, the DIHM technique enables precise measurements of dynamic behaviors exhibited by living cells within a 3D volume. This review outlines the fundamental principles and comprehensive digital image processing procedures employed in DIHM-based cell tracking methods. In addition, recent applications of DIHM technique for label-free identification and digital tracking of various motile biological cells, including human blood cells, spermatozoa, diseased cells, and unicellular microorganisms, are thoroughly examined. Leveraging artificial intelligence has significantly enhanced both the speed and accuracy of digital image processing for cell tracking and identification. The quantitative data on cell morphology and dynamics captured by DIHM can effectively elucidate the underlying mechanisms governing various microbial behaviors and contribute to the accumulation of diagnostic databases and the development of clinical treatments.
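The numerical refocusing underlying DIHM is conventionally done with the angular spectrum method: decompose the recorded field into plane waves, apply each component's propagation phase, and transform back. A minimal NumPy sketch of that standard kernel (the wavelength, pixel pitch, and distance values used in any call are illustrative assumptions, not parameters from this review):

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Numerically refocus a complex optical field by distance z (metres)
    with the angular spectrum method; `dx` is the pixel pitch in metres."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny, d=dx)  # spatial frequencies, cycles/m
    fx = np.fft.fftfreq(nx, d=dx)
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    # Axial wavenumber k_z of each plane-wave component;
    # evanescent components (arg < 0) are suppressed.
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Because the transfer function is unitary on propagating components, refocusing by z and then by -z returns the original field, which makes the kernel convenient for scanning a sample volume slice by slice.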
Affiliation(s)
- Jihwan Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Gyeongbuk, 37673, Republic of Korea
- Sang Joon Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Gyeongbuk, 37673, Republic of Korea.
8
Li J, Li Y, Gan T, Shen CY, Jarrahi M, Ozcan A. All-optical complex field imaging using diffractive processors. Light Sci Appl 2024;13:120. [PMID: 38802376; PMCID: PMC11130282; DOI: 10.1038/s41377-024-01482-6]
Abstract
Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating at the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing and material science, among others.
Affiliation(s)
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yuhang Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tianyi Gan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Che-Yung Shen
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA.
9
Buitrago-Duque C, Tobón-Maya H, Gómez-Ramírez A, Zapata-Valencia SI, Lopera MJ, Trujillo C, Garcia-Sucerquia J. Open-access database for digital lensless holographic microscopy and its application on the improvement of deep-learning-based autofocusing models. Appl Opt 2024;63:B49-B58. [PMID: 38437255; DOI: 10.1364/ao.507412]
Abstract
Among modern optical microscopy techniques, digital lensless holographic microscopy (DLHM) is one of the simplest label-free coherent imaging approaches. However, the hardware simplicity provided by the lensless configuration is often offset by the demanding computational postprocessing required to match the retrieved sample information to the user's expectations. A promising avenue to simplify this stage is the integration of artificial intelligence and machine learning (ML) solutions into the DLHM workflow. The biggest challenge in doing so is the preparation of an extensive, high-quality experimental dataset of curated DLHM recordings to train ML models. In this work, a diverse, open-access dataset of DLHM recordings is presented as support for future research, contributing to the data needs of the applied research community. The database comprises 11,760 experimental DLHM holograms of bio and non-bio samples with diversity on the main recording parameters of the DLHM architecture. The database is divided into two datasets of 10 independent imaged samples. The first group, named the multi-wavelength dataset, includes 8160 holograms and was recorded using laser diodes emitting at 654 nm, 510 nm, and 405 nm; the second group, named the single-wavelength dataset, is composed of 3600 recordings and was acquired using a 633 nm He-Ne laser. All the experimental parameters related to the dataset acquisition, preparation, and calibration are described in this paper. The advantages of this large dataset are validated by re-training an existing autofocusing model for DLHM and by using it as the training set for a simpler architecture that achieves comparable performance, proving its feasibility for improving existing ML-based models and for developing new ones.
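Autofocusing of the kind these models learn is classically posed as maximizing a sharpness metric over a stack of numerically refocused images. A generic gradient-energy baseline sketch (the stack layout and metric are illustrative, not the benchmarked model from the dataset paper):

```python
import numpy as np

def autofocus_index(refocus_stack):
    """Pick the sharpest slice from a stack of refocused amplitude images
    using a simple gradient-energy (Tenengrad-like) metric, the classical
    baseline that learning-based DLHM autofocusing models are compared
    against. `refocus_stack` has shape (n_slices, H, W)."""
    scores = []
    for img in refocus_stack:
        gy, gx = np.gradient(img)       # finite-difference gradients
        scores.append(np.sum(gy ** 2 + gx ** 2))
    return int(np.argmax(scores))       # index of the in-focus slice
```

A learned autofocusing model replaces this exhaustive metric search with a single regression of the focus distance, which is where the speed advantage over the classical baseline comes from.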
10
Wang D, Li ZS, Zheng Y, Zhao YR, Liu C, Xu JB, Zheng YW, Huang Q, Chang CL, Zhang DW, Zhuang SL, Wang QH. Liquid lens based holographic camera for real 3D scene hologram acquisition using end-to-end physical model-driven network. Light Sci Appl 2024;13:62. [PMID: 38424072; PMCID: PMC10904790; DOI: 10.1038/s41377-024-01410-8]
Abstract
With the development of artificial intelligence, neural networks provide unique opportunities for holography, such as high fidelity and dynamic calculation. How to capture a real 3D scene and generate a high-fidelity hologram in real time remains an urgent problem. Here, we propose a liquid lens based holographic camera for real 3D scene hologram acquisition using an end-to-end physical model-driven network (EEPMD-Net). As the core component of the liquid camera, the first 10 mm large-aperture electrowetting-based liquid lens is realized using a specially fabricated solution. The design of the liquid camera ensures that the multiple layers of the real 3D scene can be captured quickly and with high imaging performance. The EEPMD-Net takes the information of the real 3D scene as input and uses two new encoder and decoder network structures to realize low-noise phase generation. By comparing the intensity information between the reconstructed image after depth fusion and the target scene, a composite loss function is constructed for phase optimization, and high-fidelity training of the hologram with the true depth of the 3D scene is realized for the first time. The holographic camera achieves high-fidelity, fast generation of holograms of real 3D scenes, and reconstruction experiments prove that the holographic image has the advantage of low noise. The proposed holographic camera is unique and can be used in 3D display, measurement, encryption and other fields.
Affiliation(s)
- Di Wang
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Zhao-Song Li
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Yi Zheng
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- You-Ran Zhao
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Chao Liu
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Jin-Bo Xu
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Yi-Wei Zheng
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Qian Huang
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Chen-Liang Chang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Da-Wei Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Song-Lin Zhuang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Qiong-Hua Wang
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China.
11
Pan T, Ye J, Liu H, Zhang F, Xu P, Xu O, Xu Y, Qin Y. Non-orthogonal optical multiplexing empowered by deep learning. Nat Commun 2024;15:1580. [PMID: 38383508; PMCID: PMC10881499; DOI: 10.1038/s41467-024-45845-4]
Abstract
Orthogonality among channels is a canonical basis for optical multiplexing featuring division multiplexing, which substantially reduces the complexity of signal post-processing in demultiplexing. However, it inevitably imposes an upper limit on multiplexing capacity. Herein, we report non-orthogonal optical multiplexing over a multimode fiber (MMF) leveraged by a deep neural network, termed the speckle light field retrieval network (SLRnet), which can learn the complicated mapping between multiple non-orthogonal input light fields encoded with information and their corresponding single intensity output. As a proof-of-principle experimental demonstration, it is shown that the SLRnet can effectively solve the ill-posed problem of non-orthogonal optical multiplexing over an MMF, where multiple non-orthogonal input signals mediated by the same polarization, wavelength and spatial position can be explicitly retrieved utilizing a single-shot speckle output with fidelity as high as ~98%. Our results represent an important step toward harnessing non-orthogonal channels for high-capacity optical multiplexing.
Affiliation(s)
- Tuqiang Pan
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China
- Jianwei Ye
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China
- Haotian Liu
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China
- Fan Zhang
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China
- Pengbai Xu
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China
| | - Ou Xu
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China
| | - Yi Xu
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China.
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China.
| | - Yuwen Qin
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangzhou, 510006, China.
- Guangdong Provincial Key Laboratory of Information Photonics Technology, Institute of Advanced Photonic Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China.
| |
Collapse
|
12
|
Ciaparrone G, Pirone D, Fiore P, Xin L, Xiao W, Li X, Bardozzo F, Bianco V, Miccio L, Pan F, Memmolo P, Tagliaferri R, Ferraro P. Label-free cell classification in holographic flow cytometry through an unbiased learning strategy. LAB ON A CHIP 2024; 24:924-932. [PMID: 38264771 DOI: 10.1039/d3lc00385j] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/25/2024]
Abstract
Nowadays, label-free imaging flow cytometry at the single-cell level is considered the next step forward in lab-on-a-chip technology for addressing challenges in clinical diagnostics, biology, the life sciences, and healthcare. In this framework, digital holography in microscopy promises to be a powerful imaging modality thanks to its multi-refocusing and label-free quantitative phase imaging capabilities, along with encoding the highest information content within the imaged samples. Moreover, recent achievements in data analysis tools for cell classification based on deep/machine learning, combined with holographic imaging, are pushing these systems toward effective implementation in point-of-care devices. However, the generalization capabilities of learning-based models may be limited by biases in data obtained from other holographic imaging settings and/or different processing approaches. In this paper, we propose a combination of a Mask R-CNN to detect the cells; a convolutional auto-encoder for image feature extraction, which operates on unlabelled data and thus overcomes the bias due to data coming from different experimental settings; and a feedforward neural network for single-cell classification, which operates on the extracted features. We demonstrate the proposed approach on the challenging classification task of identifying drug-resistant endometrial cancer cells.
Collapse
Affiliation(s)
- Gioele Ciaparrone
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
| | - Daniele Pirone
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
| | - Pierpaolo Fiore
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
| | - Lu Xin
- Key Laboratory of Precision Opto-Mechatronics Technology of Ministry of Education, School of Instrumentation Science & Optoelectronics Engineering, Beihang University, 100191 Beijing, China.
| | - Wen Xiao
- Key Laboratory of Precision Opto-Mechatronics Technology of Ministry of Education, School of Instrumentation Science & Optoelectronics Engineering, Beihang University, 100191 Beijing, China.
| | - Xiaoping Li
- Department of Obstetrics and Gynecology, Peking University People's Hospital, Beijing 100044, China
| | - Francesco Bardozzo
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
| | - Vittorio Bianco
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
| | - Lisa Miccio
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
| | - Feng Pan
- Key Laboratory of Precision Opto-Mechatronics Technology of Ministry of Education, School of Instrumentation Science & Optoelectronics Engineering, Beihang University, 100191 Beijing, China.
| | - Pasquale Memmolo
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
| | - Roberto Tagliaferri
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
| | - Pietro Ferraro
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
| |
Collapse
|
13
|
Rogalski M, Arcab P, Stanaszek L, Micó V, Zuo C, Trusiak M. Physics-driven universal twin-image removal network for digital in-line holographic microscopy. OPTICS EXPRESS 2024; 32:742-761. [PMID: 38175095 DOI: 10.1364/oe.505440] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Accepted: 11/22/2023] [Indexed: 01/05/2024]
Abstract
Digital in-line holographic microscopy (DIHM) enables efficient and cost-effective computational quantitative phase imaging with a large field of view, making it valuable for studying cell motility, migration, and bio-microfluidics. However, the quality of DIHM reconstructions is compromised by twin-image noise, posing a significant challenge. Conventional methods for mitigating this noise involve complex hardware setups or time-consuming algorithms with often limited effectiveness. In this work, we propose UTIRnet, a deep learning solution for fast, robust, and universally applicable twin-image suppression, trained exclusively on numerically generated datasets. The availability of open-source UTIRnet codes facilitates its implementation in various DIHM systems without the need for extensive experimental training data. Notably, our network ensures the consistency of reconstruction results with the input holograms, imparting a physics-based foundation and enhancing reliability compared with conventional deep learning approaches. Experimental verification was conducted, among other samples, on live neural glial cell culture migration sensing, which is crucial for neurodegenerative disease research.
Collapse
|
14
|
Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. LIGHT, SCIENCE & APPLICATIONS 2024; 13:4. [PMID: 38161203 PMCID: PMC10758000 DOI: 10.1038/s41377-023-01340-x] [Citation(s) in RCA: 25] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Revised: 11/13/2023] [Accepted: 11/16/2023] [Indexed: 01/03/2024]
Abstract
Phase recovery (PR) refers to calculating the phase of a light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and for correcting the aberrations of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages: pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
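Among the conventional PR methods such a review covers, the Gerchberg-Saxton algorithm is the classic iterative baseline: alternate between the object and Fourier planes, enforcing the measured modulus in each. A minimal NumPy sketch on synthetic data (the test problem and sizes here are illustrative, not from the review):

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_far, n_iter=200, seed=0):
    """Classic Gerchberg-Saxton loop: alternate between the object and
    Fourier planes, enforcing the measured modulus in each plane."""
    rng = np.random.default_rng(seed)
    field = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = amp_far * np.exp(1j * np.angle(far))       # Fourier-modulus constraint
        field = np.fft.ifft2(far)
        field = amp_obj * np.exp(1j * np.angle(field))   # object-modulus constraint
    return np.angle(field)

def far_modulus_error(phase, amp_obj, amp_far):
    """Relative mismatch between the simulated and measured Fourier moduli."""
    sim = np.abs(np.fft.fft2(amp_obj * np.exp(1j * phase)))
    return np.linalg.norm(sim - amp_far) / np.linalg.norm(amp_far)

# Synthetic test: a known smooth phase and its two intensity measurements
n = 32
xx = np.arange(n)[None, :] * np.ones((n, 1))
true_phase = 0.5 * np.sin(2 * np.pi * xx / n)
amp_obj = np.ones((n, n))
amp_far = np.abs(np.fft.fft2(amp_obj * np.exp(1j * true_phase)))

err_init = far_modulus_error(gerchberg_saxton(amp_obj, amp_far, n_iter=0),
                             amp_obj, amp_far)
rec = gerchberg_saxton(amp_obj, amp_far)
err_final = far_modulus_error(rec, amp_obj, amp_far)
```

The Fourier-modulus error of this loop is provably non-increasing, which is the property the DL-assisted methods in the review aim to beat in speed and robustness rather than in principle.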
Collapse
Affiliation(s)
- Kaiqiang Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China.
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China.
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China.
| | - Li Song
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
| | - Chutian Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
| | - Zhenbo Ren
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
| | - Guangyuan Zhao
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Jiazhen Dou
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
| | - Jianglei Di
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
| | - George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Renjie Zhou
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Jianlin Zhao
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China.
| | - Edmund Y Lam
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China.
| |
Collapse
|
15
|
Park J, Bai B, Ryu D, Liu T, Lee C, Luo Y, Lee MJ, Huang L, Shin J, Zhang Y, Ryu D, Li Y, Kim G, Min HS, Ozcan A, Park Y. Artificial intelligence-enabled quantitative phase imaging methods for life sciences. Nat Methods 2023; 20:1645-1660. [PMID: 37872244 DOI: 10.1038/s41592-023-02041-4] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2023] [Accepted: 09/11/2023] [Indexed: 10/25/2023]
Abstract
Quantitative phase imaging, integrated with artificial intelligence, allows for the rapid and label-free investigation of the physiology and pathology of biological systems. This review presents the principles of various two-dimensional and three-dimensional label-free phase imaging techniques that exploit refractive index as an intrinsic optical imaging contrast. In particular, we discuss artificial intelligence-based analysis methodologies for biomedical studies including image enhancement, segmentation of cellular or subcellular structures, classification of types of biological samples and image translation to furnish subcellular and histochemical information from label-free phase images. We also discuss the advantages and challenges of artificial intelligence-enabled quantitative phase imaging analyses, summarize recent notable applications in the life sciences, and cover the potential of this field for basic and industrial research in the life sciences.
Collapse
Affiliation(s)
- Juyeon Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
| | - Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - DongHun Ryu
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Chungha Lee
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
| | - Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Mahn Jae Lee
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Jeongwon Shin
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Department of Biological Sciences, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
| | - Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | | | - Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Geon Kim
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
| | | | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA.
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA.
| | - YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea.
- Tomocube, Daejeon, Republic of Korea.
| |
Collapse
|
16
|
Narayan SK, Vithin AVS, Gannavarpu R. Deep learning assisted non-contact defect identification method using diffraction phase microscopy. APPLIED OPTICS 2023; 62:5433-5442. [PMID: 37706860 DOI: 10.1364/ao.489867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 06/18/2023] [Indexed: 09/15/2023]
Abstract
Reliable detection of defects from optical fringe patterns is a crucial problem in non-destructive optical interferometric metrology. In this work, we propose a deep-learning-based method for fringe-pattern defect identification. By attributing the defect information to the fringe pattern's phase gradient, we compute the spatial phase derivatives using a deep learning model and apply the resulting gradient map to localize the defect. The robustness of the proposed method is illustrated on multiple numerically synthesized fringe-pattern defects at various noise levels. Further, its practical utility is substantiated through experimental defect identification in diffraction phase microscopy.
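The core idea, that a defect shows up as an anomaly in the spatial phase gradient, can be illustrated without the network. The sketch below uses a synthetic phase, simple finite-difference gradients, and a z-score threshold in place of the paper's learned derivatives, and localizes a sharp local bump against a uniform fringe tilt:

```python
import numpy as np

def defect_map(phase, k=3.0):
    """Flag pixels whose phase-gradient magnitude deviates from the
    background by more than k standard deviations (z-score threshold)."""
    gy, gx = np.gradient(phase)          # finite-difference derivatives
    mag = np.hypot(gx, gy)               # gradient magnitude
    z = (mag - mag.mean()) / (mag.std() + 1e-12)
    return z > k

# Synthetic fringe phase: a uniform carrier tilt plus one sharp, local bump
n = 128
yy, xx = np.mgrid[:n, :n]
r2 = (xx - 64) ** 2 + (yy - 64) ** 2
phase = 0.2 * xx + 5.0 * np.exp(-r2 / 20.0)
mask = defect_map(phase)
```

The tilt contributes a constant gradient everywhere, so only the bump's steep flanks exceed the threshold; the flagged pixels cluster around the defect at (64, 64).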
Collapse
|
17
|
Manisha, Mandal AC, Rathor M, Zalevsky Z, Singh RK. Randomness assisted in-line holography with deep learning. Sci Rep 2023; 13:10986. [PMID: 37419990 PMCID: PMC10329003 DOI: 10.1038/s41598-023-37810-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Accepted: 06/28/2023] [Indexed: 07/09/2023] Open
Abstract
We propose and demonstrate a holographic imaging scheme that exploits random illumination to record a hologram, followed by numerical reconstruction and twin-image removal. We use an in-line holographic geometry to record the hologram in terms of the second-order intensity correlation and apply a numerical approach to reconstruct it. This strategy helps reconstruct high-quality quantitative images compared with conventional holography, where the hologram is recorded in intensity rather than in the second-order intensity correlation. The twin-image issue of the in-line holographic scheme is resolved by an unsupervised deep-learning method based on an auto-encoder. The proposed learning technique leverages the main characteristic of autoencoders to perform blind single-shot hologram reconstruction; it does not require a training dataset of samples with available ground truth and can reconstruct the hologram solely from the captured sample. Experimental results are presented for two objects, and the reconstruction quality is compared between conventional in-line holography and the proposed technique.
Collapse
Affiliation(s)
- Manisha
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
| | - Aditya Chandra Mandal
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
- Department of Mining Engineering, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
| | - Mohit Rathor
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
| | - Zeev Zalevsky
- Faculty of Engineering and Nano Technology Center, Bar-Ilan University, Ramat Gan, Israel
| | - Rakesh Kumar Singh
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India.
| |
Collapse
|
18
|
Mach M, Psota P, Žídek K, Mokrý P. On-chip digital holographic interferometry for measuring wavefront deformation in transparent samples. OPTICS EXPRESS 2023; 31:17185-17200. [PMID: 37381459 DOI: 10.1364/oe.486997] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Accepted: 04/25/2023] [Indexed: 06/30/2023]
Abstract
This paper describes on-chip digital holographic interferometry for measuring the wavefront deformation of transparent samples. The interferometer is based on a Mach-Zehnder arrangement with a waveguide in the reference arm, which allows for a compact on-chip layout. The method thus exploits the sensitivity of digital holographic interferometry together with the advantages of the on-chip approach: high spatial resolution over a large area, simplicity, and compactness of the system. The method's performance is demonstrated by measuring a model glass sample, fabricated by depositing SiO2 layers of different thicknesses on a planar glass substrate, and by visualizing the domain structure in periodically poled lithium niobate. Finally, the measurements made with the on-chip digital holographic interferometer were compared with those made with a conventional lens-based Mach-Zehnder digital holographic interferometer and with a commercial white-light interferometer. The comparison indicates that the on-chip digital holographic interferometer provides accuracy comparable to conventional methods while offering the benefits of a large field of view and simplicity.
Collapse
|
19
|
Chen X, Wang H, Razi A, Kozicki M, Mann C. DH-GAN: a physics-driven untrained generative adversarial network for holographic imaging. OPTICS EXPRESS 2023; 31:10114-10135. [PMID: 37157567 DOI: 10.1364/oe.480894] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Digital holography is a 3D imaging technique in which a laser beam with a plane wavefront illuminates an object and the intensity of the diffracted wavefield, called a hologram, is measured. The object's 3D shape can be obtained by numerical analysis of the captured holograms and recovery of the incurred phase. Recently, deep learning (DL) methods have been used for more accurate holographic processing. However, most supervised methods require large datasets to train the model, which are rarely available in most DH applications due to the scarcity of samples or privacy concerns. A few one-shot DL-based recovery methods exist that do not rely on large datasets of paired images, but most of them neglect the underlying physics governing wave propagation. They offer black-box operation that is not explainable, generalizable, or transferable to other samples and applications. In this work, we propose a new DL architecture based on generative adversarial networks that uses a discriminative network to realize a semantic measure of reconstruction quality while using a generative network as a function approximator to model the inverse of hologram formation. We impose smoothness on the background part of the recovered image using a progressive masking module powered by simulated annealing to enhance reconstruction quality. The proposed method exhibits high transferability to similar samples, facilitating fast deployment in time-sensitive applications without retraining the network from scratch. The results show a considerable improvement over competing methods in reconstruction quality (about 5 dB PSNR gain) and robustness to noise (about 50% reduction in the rate of PSNR loss as noise increases).
Collapse
|
20
|
Zhao X, Liu G, Jin R, Gong H, Luo Q, Yang X. Partially interpretable image deconvolution framework based on the Richardson-Lucy model. OPTICS LETTERS 2023; 48:940-943. [PMID: 36790980 DOI: 10.1364/ol.478885] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 01/09/2023] [Indexed: 06/18/2023]
Abstract
Fluorescence microscopy typically suffers from aberrations induced by the system and the sample, which can be circumvented by image deconvolution. We propose a novel, to the best of our knowledge, Richardson-Lucy (RL) model-driven deconvolution framework to improve reconstruction performance and speed. Two kinds of neural networks were devised within this framework; they are partially interpretable compared with previous deep learning methods. We first introduce RL into the deep feature space, which offers superior generalizability over convolutional neural networks (CNNs). We further accelerate it with an unmatched backprojector, providing a five-times-faster reconstruction speed than classic RL. Our deconvolution approaches outperform both CNN-based and traditional methods in image quality for images blurred by defocus or imaging-system aberrations.
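For reference, the classic RL iteration that such a framework builds on is a short multiplicative update: blur the current estimate, take the ratio to the data, and back-project with the flipped PSF. The 1-D NumPy sketch below uses a toy signal and Gaussian PSF (not the paper's microscopy pipeline) to show the loop that an unmatched backprojector then accelerates:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Classic RL iteration: blur the estimate, compare with the data,
    back-project the ratio with the flipped PSF (multiplicative update)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                 # matched backprojector
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Blur a sparse 1-D signal with a Gaussian PSF, then deconvolve it
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
psf = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
y = np.convolve(x, psf / psf.sum(), mode="same")
rec = richardson_lucy(y, psf)
```

The multiplicative form keeps the estimate non-negative and sharpens the blurred peaks toward the original impulses; replacing `psf_flip` with a cheaper unmatched backprojector is the acceleration idea the paper exploits.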
Collapse
|
21
|
Dong Z, Xu C, Ling Y, Li Y, Su Y. Fourier-inspired neural module for real-time and high-fidelity computer-generated holography. OPTICS LETTERS 2023; 48:759-762. [PMID: 36723582 DOI: 10.1364/ol.477630] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Accepted: 12/16/2022] [Indexed: 06/18/2023]
Abstract
Learning-based computer-generated holography (CGH) algorithms have emerged as novel alternatives for generating phase-only holograms. However, most existing learning-based approaches underperform their iterative peers in display quality. Here, we recognize that current convolutional neural networks have difficulty learning cross-domain tasks due to their limited receptive field. To overcome this limitation, we propose a Fourier-inspired neural module that can be easily integrated into various CGH frameworks and significantly enhances the quality of reconstructed images. By explicitly leveraging Fourier transforms within the neural network architecture, the mesoscopic information within the phase-only hologram can be extracted more readily. Both simulation and experiment were performed to showcase its capability. By incorporating it into U-Net and HoloNet, the peak signal-to-noise ratio of reconstructed images is measured at 29.16 dB and 33.50 dB in simulation, 4.97 dB and 1.52 dB higher than those of the baseline U-Net and HoloNet, respectively. Similar trends are observed in the experimental results. We also experimentally demonstrated that U-Net and HoloNet with the proposed module can generate a monochromatic 1080p hologram in 0.015 s and 0.020 s, respectively.
Collapse
|
22
|
Matlock A, Zhu J, Tian L. Multiple-scattering simulator-trained neural network for intensity diffraction tomography. OPTICS EXPRESS 2023; 31:4094-4107. [PMID: 36785385 DOI: 10.1364/oe.477396] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 12/29/2022] [Indexed: 06/18/2023]
Abstract
Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering C. elegans worms. We benchmark the network's performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm. We highlight the network's robustness by reconstructing dynamic samples from a living worm video. We further emphasize the network's generalization capabilities by recovering algae samples imaged from different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.
Collapse
|
23
|
Computational Portable Microscopes for Point-of-Care-Test and Tele-Diagnosis. Cells 2022; 11:cells11223670. [PMID: 36429102 PMCID: PMC9688637 DOI: 10.3390/cells11223670] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 11/11/2022] [Accepted: 11/16/2022] [Indexed: 11/22/2022] Open
Abstract
In bio-medical mobile workstations, e.g., for epidemic virus/bacteria prevention, outdoor field medical treatment, and bio-chemical pollution monitoring, conventional bench-top microscopic imaging equipment is of limited use. Comprehensive multi-mode (bright/dark-field imaging, fluorescence excitation imaging, polarized-light imaging, differential interference contrast imaging, etc.) biomedical microscopy imaging systems are generally large and expensive. They also require professional operation, which entails high labor, money, and time costs. These characteristics prevent them from being applied in bio-medical mobile workstations, which need microscopy systems that are inexpensive and capable of fast, timely, and large-scale deployment. The development of lightweight, low-cost, and portable microscopic imaging devices can meet these demands. At present, driven by the increasing needs of point-of-care testing and tele-diagnosis, high-performance computational portable microscopes are being widely developed. Bluetooth, WLAN, and 3G/4G/5G modules are generally very small and inexpensive, and industrial imaging lenses, microscope objective lenses, and CMOS/CCD photoelectric image sensors are likewise available in small sizes and at low prices. Here we review and discuss typical computational, portable, and low-cost microscopes, with refined specifications and schematics, from the aspects of optics, electronics, algorithm principles, and typical bio-medical applications.
Collapse
|
24
|
Terbe D, Orzó L, Zarándy Á. Classification of Holograms with 3D-CNN. SENSORS (BASEL, SWITZERLAND) 2022; 22:8366. [PMID: 36366064 PMCID: PMC9654288 DOI: 10.3390/s22218366] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 10/26/2022] [Accepted: 10/28/2022] [Indexed: 06/16/2023]
Abstract
A hologram, measured under appropriate coherent illumination, records all substantial volumetric information of the measured sample. This information is encoded in its interference patterns, from which the image of the sample objects can be reconstructed at different depths using standard techniques of digital holography. We claim that a 2D convolutional neural network (CNN) cannot efficiently decode this volumetric information spread across the whole image, as it inherently operates on local spatial features. Therefore, we propose a method in which we extract the volumetric information of the hologram by mapping it to a volume, using a standard wavefield propagation algorithm, and then feed it to a 3D-CNN-based architecture. We apply this method to a challenging real-life classification problem and compare its performance with an equivalent 2D-CNN counterpart. Furthermore, we inspect the robustness of both methods to slightly defocused inputs and find that the 3D method is inherently more robust in such cases. Additionally, we introduce a hologram-specific augmentation technique, called hologram defocus augmentation, that improves the performance of both methods for slightly defocused inputs. The proposed 3D model outperforms the standard 2D method in classification accuracy for both in-focus and defocused input samples. Our results confirm and support our fundamental hypothesis that a 2D-CNN-based architecture is limited in extracting the volumetric information globally encoded in the reconstructed hologram image.
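The hologram-to-volume mapping relies on standard numerical refocusing. Below is a minimal angular-spectrum propagation sketch, with a toy random hologram and illustrative optical parameters (not the authors' setup), that builds the kind of depth stack a 3D-CNN would consume:

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex wavefield by distance z with the angular
    spectrum method (exact scalar propagation between parallel planes)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent cut-off
    H = np.exp(1j * kz * z)                         # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Map one hologram to a refocused stack (the volume fed to the 3D-CNN)
n = 64
hologram = np.random.default_rng(1).random((n, n))  # toy intensity hologram
depths = np.linspace(-50e-6, 50e-6, 7)              # refocus distances [m]
stack = np.stack([angular_spectrum(hologram.astype(complex), z,
                                   wavelength=532e-9, dx=1e-6)
                  for z in depths])
```

Each slice of `stack` is the hologram refocused to one depth; stacking slices along a new axis turns the globally encoded volumetric information into a local 3D neighborhood that 3D convolutions can exploit.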
Collapse
|