1
Rawat S, Trius Béjar J, Wang A. Characterization of Optical, Thermal, and Viscoelastic Properties of Pollenkitt in Angiosperm Pollen Using In-Line Digital Holographic Microscopy. ACS Applied Bio Materials 2024;7:4029-4038. PMID: 38756048. DOI: 10.1021/acsabm.4c00367.
Abstract
Pollen grains are remarkable material composites, with various organelles in their fragile interior protected by a strong shell made of sporopollenin. The outermost layer of angiosperm pollen grains contains a lipid-rich substance called pollenkitt, which is a natural bioadhesive that helps preserve structural integrity when the pollen grain is exposed to external environmental stresses. In addition, its viscous nature enables it to adhere to various floral and insect surfaces, facilitating the pollination process. To examine the physicochemical properties of aqueous pollenkitt droplets, we used in-line digital holographic microscopy to capture light scattering from individual pollenkitt particles. Comparison of pollenkitt holograms to those modeled using the Lorenz-Mie theory enables investigations into the minute variations in the refractive index and size resulting from changes in local temperature and pollen aging.
Affiliation(s)
- Siddharth Rawat
- School of Chemistry, UNSW Sydney, Sydney, New South Wales 2052, Australia
- School of Physics, UNSW Sydney, Sydney, New South Wales 2052, Australia
- Australian Centre for Astrobiology, UNSW Sydney, Sydney, New South Wales 2052, Australia
- ARC CoE in Synthetic Biology, UNSW Sydney, Sydney, New South Wales 2052, Australia
- Juan Trius Béjar
- Departament de Física, Universitat Politècnica de Catalunya, Barcelona 08034, Spain
- Anna Wang
- School of Chemistry, UNSW Sydney, Sydney, New South Wales 2052, Australia
- Australian Centre for Astrobiology, UNSW Sydney, Sydney, New South Wales 2052, Australia
- ARC CoE in Synthetic Biology, UNSW Sydney, Sydney, New South Wales 2052, Australia
2
Kim J, Lee SJ. Digital in-line holographic microscopy for label-free identification and tracking of biological cells. Military Medical Research 2024;11:38. PMID: 38867274; PMCID: PMC11170804. DOI: 10.1186/s40779-024-00541-8.
Abstract
Digital in-line holographic microscopy (DIHM) is a non-invasive, real-time, label-free technique that captures three-dimensional (3D) positional, orientational, and morphological information from digital holographic images of living biological cells. Unlike conventional microscopy techniques, DIHM enables precise measurements of dynamic behaviors exhibited by living cells within a 3D volume. This review outlines the fundamental principles and comprehensive digital image processing procedures employed in DIHM-based cell tracking methods. In addition, recent applications of the DIHM technique for label-free identification and digital tracking of various motile biological cells, including human blood cells, spermatozoa, diseased cells, and unicellular microorganisms, are thoroughly examined. Leveraging artificial intelligence has significantly enhanced both the speed and accuracy of digital image processing for cell tracking and identification. The quantitative data on cell morphology and dynamics captured by DIHM can effectively elucidate the underlying mechanisms governing various microbial behaviors and contribute to the accumulation of diagnostic databases and the development of clinical treatments.
Affiliation(s)
- Jihwan Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Gyeongbuk 37673, Republic of Korea
- Sang Joon Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Gyeongbuk 37673, Republic of Korea
3
Pan K, Wu X, Li P, Liu S, Wei B, Li D, Yang D, Chen X, Zhao J, Wen D. Cylindrical Vector Beam Holography without Preservation of OAM Modes. Nano Letters 2024;24:6761-6766. PMID: 38775803. DOI: 10.1021/acs.nanolett.4c01490.
Abstract
Orbital angular momentum (OAM) multiplexed holograms have attracted a great deal of attention recently due to their physically unbounded set of orthogonal helical modes. However, preserving the OAM property in each pixel hinders fine sampling of the target image in principle and requires a fundamental filtering aperture array in the detector plane. Here, we demonstrate the concept of metasurface-based vectorial holography with cylindrical vector beams (CVBs), whose unlimited polarization orders and unique polarization distributions can be used to boost information storage capacity. Although CVBs are composed of OAM modes, the holographic images do not preserve the OAM modes in our design, enabling fine sampling of the target image in a quasi-continuous way like traditional computer-generated holograms. Moreover, the images can be directly observed by passing them through a polarizer without the need for a fundamental mode filter array. We anticipate that our method may pave the way for high-capacity holographic devices.
Affiliation(s)
- Kai Pan, Xuanguang Wu, Peng Li, Sheng Liu, Bingyan Wei, Dong Li, Dexing Yang, Jianlin Zhao, Dandan Wen
- Key Laboratory of Light Field Manipulation and Information Acquisition, Ministry of Industry and Information Technology, and Shaanxi Key Laboratory of Optical Information Technology, School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an 710129, China
- Xianzhong Chen
- Institute of Photonics and Quantum Sciences, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, U.K.
4
He X, Tao H, Veetil SP, Chang C, Liu C, Zhu J. Fast reconstruction of laser beam near-field and focal spot profiles using deep neural network and numerical propagation. Optics Express 2024;32:21649-21662. PMID: 38859514. DOI: 10.1364/oe.510088.
Abstract
Inertial confinement fusion (ICF) experiments demand precise knowledge of laser beam parameters on high-power laser facilities. Among these parameters, near-field and focal spot distributions are crucial for characterizing laser beam quality. While iterative phase retrieval shows promise for laser beam reconstruction, its utility is hindered by extensive iterative calculations. To address this limitation, we propose an online laser beam reconstruction method based on a deep neural network. In this method, we utilize coherent modulation imaging (CMI) to obtain labels for training the neural network. The neural network reconstructs the complex near-field distribution, including amplitude and phase, directly from a defocused diffraction pattern without iteration. Subsequently, the focal spot distribution is obtained by propagating the reconstructed complex near-field distribution to the far field. Proof-of-principle experiments validate the feasibility of the proposed method.
5
Castañeda R, Trujillo C, Doblas A. A human erythrocytes hologram dataset for learning-based model training. Data in Brief 2024;54:110424. PMID: 38708305; PMCID: PMC11068518. DOI: 10.1016/j.dib.2024.110424.
Abstract
This manuscript presents a paired dataset with experimental holograms and their corresponding reconstructed phase maps of human red blood cells (RBCs). The holographic images were recorded using an off-axis telecentric Digital Holographic Microscope (DHM). The imaging system consists of a 40×/0.65 NA infinity-corrected microscope objective (MO) lens and a tube lens (TL) with a focal length of 200 mm, recording diffraction-limited holograms. A CMOS camera with 1920 × 1200 pixels and a pixel pitch of 5.86 µm was located at the back focal plane of the TL lens, capturing image-plane holograms. The off-axis, telecentric, and diffraction-limited DHM system guarantees accurate quantitative phase maps. Initially comprising 300 holograms, the dataset was augmented to 36,864 instances, enabling the investigation (i.e., training and testing) of learning-based models to reconstruct aberration-free phase images from raw holograms. This dataset facilitates the training and testing of end-to-end models for quantitative phase imaging using DHM systems operating in the telecentric regime and non-telecentric DHM systems where the spherical wavefront has been compensated physically. In other words, this dataset holds promise for advancing investigations in digital holographic microscopy and computational imaging.
Affiliation(s)
- Raul Castañeda
- Applied Optics Group, School of Applied Sciences and Engineering, EAFIT University, Medellin 050037, Colombia
- Carlos Trujillo
- Applied Optics Group, School of Applied Sciences and Engineering, EAFIT University, Medellin 050037, Colombia
- Ana Doblas
- Electrical and Computer Engineering Department, University of Massachusetts Dartmouth, USA
6
Wang Z, Ma H, Chen Y, Liu D. Autofocusing in digital holography based on an adaptive genetic algorithm. Journal of the Optical Society of America A 2024;41:976-987. PMID: 38856405. DOI: 10.1364/josaa.518105.
Abstract
In digital holography (DH), determining the reconstruction distance is critical to the quality of the reconstructed image. However, traditional focal plane detection methods require considerable time investment to reconstruct and evaluate holograms at multiple distances. To address this inefficiency, this paper proposes a fast and accurate autofocusing method based on an adaptive genetic algorithm. This method only needs to find several reconstruction distances in the search area as an initial population, and then adaptively optimizes the reconstruction distance through iteration to determine the optimal focal plane in the search area. In addition, an off-axis digital holographic optical system was used to capture holograms of a USAF resolution test target and a coin. The simulation and experimental results indicated that, compared with traditional autofocusing, the proposed method can reduce the computation time by about 70% and improve the focal plane accuracy by up to 0.5 mm.
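The general refocus-and-score idea underlying such autofocusing — numerically propagate the recorded field to candidate distances and pick the one maximizing a sharpness metric — can be sketched with angular-spectrum propagation and a gradient-energy (Tenengrad-style) metric. This is a minimal illustration, not the paper's adaptive genetic algorithm; the disk object, wavelength, pixel pitch, and search grid are illustrative choices of ours:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    f2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = 1.0 / wavelength ** 2 - f2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def sharpness(img):
    """Gradient-energy focus metric (peaks for a sharply focused amplitude object)."""
    gy, gx = np.gradient(img)
    return float(np.sum(gx ** 2 + gy ** 2))

# Synthetic amplitude object: opaque disk on a unit background
n, dx, wavelength = 256, 4e-6, 633e-9
y, x = np.indices((n, n)) - n // 2
obj = 1.0 - (x ** 2 + y ** 2 < 10 ** 2).astype(float)

z_true = 2e-3                                            # object-to-sensor distance
sensor_field = angular_spectrum(obj, wavelength, dx, z_true)

# Coarse search: back-propagate to each candidate distance and score sharpness
candidates = np.arange(0.5e-3, 3.6e-3, 0.1e-3)
scores = [sharpness(np.abs(angular_spectrum(sensor_field, wavelength, dx, -z)))
          for z in candidates]
z_best = float(candidates[int(np.argmax(scores))])
print(f"true z = {z_true * 1e3:.1f} mm, recovered z = {z_best * 1e3:.1f} mm")
```

A genetic or adaptive search, as in the cited work, replaces the exhaustive scan over `candidates` with a population of distances that is iteratively refined, which is what yields the reported speed-up.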
7
Li J, Li Y, Gan T, Shen CY, Jarrahi M, Ozcan A. All-optical complex field imaging using diffractive processors. Light: Science & Applications 2024;13:120. PMID: 38802376; PMCID: PMC11130282. DOI: 10.1038/s41377-024-01482-6.
Abstract
Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating at the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing and material science, among others.
Affiliation(s)
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yuhang Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Tianyi Gan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Che-Yung Shen
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
8
Chia Y, Liao W, Vyas S, Chu CH, Yamaguchi T, Liu X, Tanaka T, Huang Y, Chen MK, Chen W, Tsai DP, Luo Y. In Vivo Intelligent Fluorescence Endo-Microscopy by Varifocal Meta-Device and Deep Learning. Advanced Science 2024;11:e2307837. PMID: 38488694; PMCID: PMC11132035. DOI: 10.1002/advs.202307837.
Abstract
Endo-microscopy is crucial for real-time 3D visualization of internal tissues and subcellular structures. Conventional methods rely on axial movement of optical components for precise focus adjustment, limiting miniaturization and complicating procedures. Meta-devices, composed of artificial nanostructures, are emerging flat optical devices that can freely manipulate the phase and amplitude of light. Here, an intelligent fluorescence endo-microscope is developed based on a varifocal meta-lens and deep learning (DL). The breakthrough enables in vivo 3D imaging of mouse brains, where the varifocal meta-lens focal length is adjusted through a relative rotation angle. The system offers key advantages such as invariant magnification, a large field of view, and optical sectioning over a maximum focal length tuning range of ≈2 mm with 3 µm lateral resolution. Using a DL network, image acquisition time and system complexity are significantly reduced, and in vivo high-resolution brain images of detailed vessels and the surrounding perivascular space are clearly observed within 0.1 s (≈50 times faster). The approach will benefit various surgical procedures, such as gastrointestinal biopsies, neural imaging, and brain surgery.
Grants
- NSTC 112-2221-E-002-055-MY3 National Science and Technology Council, Taiwan
- NSTC 112-2221-E-002-212-MY3 National Science and Technology Council, Taiwan
- MOST-108-2221-E-002-168-MY4 National Science and Technology Council, Taiwan
- NTU-CC-113L891102 National Taiwan University
- NTU-113L8507 National Taiwan University
- NTU-CC-112L892902 National Taiwan University
- NTU-107L7728 National Taiwan University
- NTU-107L7807 National Taiwan University
- NTU-YIH-08HZT49001 National Taiwan University
- AoE/P-502/20 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- C1015-21E University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- C5031-22G University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU15303521 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU11310522 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU11305223 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- CityU11300123 University Grants Committee / Research Grants Council of the Hong Kong Special Administrative Region, China
- 2020B1515120073 Department of Science and Technology of Guangdong Province
- 9380131 City University of Hong Kong
- 9610628 City University of Hong Kong
- 7005867 City University of Hong Kong
- JPMJCR1904 JST CREST
- NHRI-EX113-11327EI National Health Research Institutes
Affiliation(s)
- Yu-Hsin Chia
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Wei-Hao Liao
- Department of Physical Medicine and Rehabilitation, National Taiwan University Hospital & National Taiwan University College of Medicine, Taipei 10051, Taiwan
- Sunil Vyas
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Cheng Hung Chu
- YongLin Institute of Health, National Taiwan University, Taipei 10087, Taiwan
- Takeshi Yamaguchi
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
- Xiaoyuan Liu
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Takuo Tanaka
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
- Yi-You Huang
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Department of Biomedical Engineering, National Taiwan University Hospital, Taipei 10051, Taiwan
- Mu Ku Chen
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Centre for Biosystems, Neuroscience and Nanotechnology, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- The State Key Laboratory of Terahertz and Millimeter Waves, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Wen-Shiang Chen
- Department of Physical Medicine and Rehabilitation, National Taiwan University Hospital & National Taiwan University College of Medicine, Taipei 10051, Taiwan
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli 35053, Taiwan
- Din Ping Tsai
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Centre for Biosystems, Neuroscience and Nanotechnology, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- The State Key Laboratory of Terahertz and Millimeter Waves, City University of Hong Kong, Kowloon 999077, Hong Kong, China
- Yuan Luo
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- YongLin Institute of Health, National Taiwan University, Taipei 10087, Taiwan
- Molecular Imaging Center, National Taiwan University, Taipei 10672, Taiwan
- Program for Precision Health and Intelligent Medicine, National Taiwan University, Taipei 106319, Taiwan
9
Yu H, Fang Q, Song Q, Montresor S, Picart P, Xia H. Unsupervised speckle denoising in digital holographic interferometry based on 4-f optical simulation integrated cycle-consistent generative adversarial network. Applied Optics 2024;63:3557-3569. PMID: 38856541. DOI: 10.1364/ao.521701.
Abstract
The speckle noise generated during digital holographic interferometry (DHI) is unavoidable and difficult to eliminate, thus reducing its accuracy. We propose a self-supervised deep-learning speckle denoising method using a cycle-consistent generative adversarial network to mitigate the effect of speckle noise. The proposed method integrates a 4-f optical speckle noise simulation module with a parameter generator. In addition, it uses an unpaired dataset for training to overcome the difficulty in obtaining noise-free images and paired data from experiments. The proposed method was tested on both simulated and experimental data, with results showing a 6.9% performance improvement compared with a conventional method and a 2.6% performance improvement compared with unsupervised deep learning in terms of the peak signal-to-noise ratio. Thus, the proposed method exhibits superior denoising performance and potential for DHI, being particularly suitable for processing large datasets.
10
Shi K, Zhang X, Wang X, Xu J, Mu B, Yan J, Wang F, Ding Y, Wang Z. ICF-PR-Net: a deep phase retrieval neural network for X-ray phase contrast imaging of inertial confinement fusion capsules. Optics Express 2024;32:14356-14376. PMID: 38859383. DOI: 10.1364/oe.518249.
Abstract
X-ray phase contrast imaging (XPCI) has demonstrated capability to characterize inertial confinement fusion (ICF) capsules, and phase retrieval can reconstruct phase information from intensity images. This study introduces ICF-PR-Net, a novel deep learning-based phase retrieval method for ICF-XPCI. We numerically constructed datasets based on ICF capsule shape features, and proposed an object-image loss function to add image formation physics to network training. ICF-PR-Net outperformed traditional methods as it exhibited satisfactory robustness against strong noise and nonuniform background and was well-suited for ICF-XPCI's constrained experimental conditions and single exposure limit. Numerical and experimental results showed that ICF-PR-Net accurately retrieved the phase and absorption while maintaining retrieval quality in different situations. Overall, the ICF-PR-Net enables the diagnosis of the inner interface and electron density of capsules to address ignition-preventing problems, such as hydrodynamic instability growth.
11
Zhang Y, Liu X, Lam EY. Single-shot inline holography using a physics-aware diffusion model. Optics Express 2024;32:10444-10460. PMID: 38571256. DOI: 10.1364/oe.517233.
Abstract
Among holographic imaging configurations, inline holography excels in its compact design and portability, making it the preferred choice for on-site or field applications with unique imaging requirements. However, effective holographic reconstruction from a single-shot measurement remains a challenge. While several approaches have been proposed, our novel unsupervised algorithm, the physics-aware diffusion model for digital holographic reconstruction (PadDH), offers distinct advantages. By seamlessly integrating physical information with a pre-trained diffusion model, PadDH overcomes the need for a holographic training dataset and significantly reduces the number of parameters involved. Through comprehensive experiments using both synthetic and experimental data, we validate the capabilities of PadDH in reducing twin-image contamination and generating high-quality reconstructions. Our work represents significant advancements in unsupervised holographic imaging by harnessing the full potential of the pre-trained diffusion prior.
12
Hughes MR, McCall C. Improved resolution in fiber bundle inline holographic microscopy using multiple illumination sources. Biomedical Optics Express 2024;15:1500-1514. PMID: 38495718; PMCID: PMC10942680. DOI: 10.1364/boe.516030.
Abstract
Recent work has shown that high-quality inline holographic microscopy images can be captured through fiber imaging bundles. Speckle patterns arising from modal interference within the bundle cores can be minimized by use of a partially-coherent optical source such as an LED delivered via a multimode fiber. This allows numerical refocusing of holograms from samples at working distances of up to approximately 1 mm from the fiber bundle before the finite coherence begins to degrade the lateral resolution. However, at short working distances the lateral resolution is limited not by coherence, but by sampling effects due to core-to-core spacing in the bundle. In this article we demonstrate that multiple shifted holograms can be combined to improve the resolution by a factor of two. The shifted holograms can be rapidly acquired by sequentially firing LEDs, which are each coupled to their own, mutually offset, illumination fiber. Following a one-time calibration, resolution-enhanced images are created in real-time at an equivalent net frame rate of up to 7.5 Hz. The resolution improvement is demonstrated quantitatively using a resolution target and qualitatively using mounted biological slides. At longer working distances, beyond 0.6 mm, the improvement is reduced as resolution becomes limited by the source spatial and temporal coherence.
Affiliation(s)
- Michael R. Hughes
- Applied Optics Group, School of Physics and Astronomy, University of Kent, Canterbury, Kent, CT2 7NH, United Kingdom
- Callum McCall
- Applied Optics Group, School of Physics and Astronomy, University of Kent, Canterbury, Kent, CT2 7NH, United Kingdom
13
Glückstad J, Gejl Madsen AE. HoloTile light engine: new digital holographic modalities and applications. Reports on Progress in Physics 2024;87:034401. PMID: 38373355. DOI: 10.1088/1361-6633/ad2aca.
Abstract
HoloTile is a patented computer-generated holography approach that aims to reduce the speckle noise caused by the overlap of the non-trivial point spread functions of adjacent frequency components in Fourier holographic systems. By combining tiling of rapidly generated phase-only sub-holograms with a PSF-shaping phase profile, each frequency component (or output 'pixel') in the Fourier domain is shaped to a desired non-overlapping profile. In this paper, we show the high-resolution, speckle-reduced reconstructions that can be achieved with HoloTile, as well as present new HoloTile modalities, including an expanded list of PSF options with new key properties. In addition, we discuss numerous applications for which HoloTile, its rapid hologram generation, and the new PSF options may be an ideal fit, including optical trapping and manipulation of particles, volumetric additive printing, information transfer, and quantum communication.
Affiliation(s)
- Jesper Glückstad
- SDU Centre for Photonics Engineering, University of Southern Denmark, Campusvej 55, Odense-M 5230, Denmark
| | - Andreas Erik Gejl Madsen
- SDU Centre for Photonics Engineering, University of Southern Denmark, Campusvej 55, Odense-M 5230, Denmark
|
14
|
Wang Z, Zheng S, Ding Z, Guo C. Dual-constrained physics-enhanced untrained neural network for lensless imaging. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2024; 41:165-173. [PMID: 38437329 DOI: 10.1364/josaa.510147]
Abstract
An untrained neural network (UNN) paves a new way to realize lensless imaging from single-frame intensity data. Built on a physics engine, such methods exploit the smoothness prior of a convolutional kernel and provide an iterative self-supervised learning framework that removes the need for an end-to-end training scheme with a large dataset. However, the intrinsic overfitting problem of UNNs is a challenging issue for stable and robust reconstruction. To address it, we model the phase retrieval problem as a dual-constrained untrained network, in which a phase-amplitude alternating optimization framework splits the intensity-to-phase problem into two tasks: phase optimization and amplitude optimization. In the phase optimization step, we combine a deep image prior with a total variation prior to constrain the loss function for the phase update. In the amplitude optimization step, a total variation denoising-based Wirtinger gradient descent method is constructed to form an amplitude constraint. Alternating iterations of the two tasks result in high-performance wavefield reconstruction. Experimental results demonstrate the superiority of our method.
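The phase-amplitude alternation underlying this dual-constrained scheme generalizes classical alternating-projection phase retrieval. As a minimal point of reference (not the paper's untrained-network method, and without its deep image prior or total variation terms), a textbook Gerchberg-Saxton loop for a 1-D Fourier-magnitude measurement looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
phase_true = rng.uniform(-np.pi, np.pi, n)
object_amp = np.ones(n)                                    # known object amplitude
meas_amp = np.abs(np.fft.fft(object_amp * np.exp(1j * phase_true)))

# Alternate between the two constraint sets, keeping the current phase estimate:
# (1) the measured Fourier magnitude, (2) the known object amplitude.
x = np.exp(1j * rng.uniform(-np.pi, np.pi, n))             # random initial guess
errors = []
for _ in range(200):
    Z = np.fft.fft(x)
    Z = meas_amp * np.exp(1j * np.angle(Z))                # Fourier-magnitude constraint
    x = np.fft.ifft(Z)
    errors.append(np.linalg.norm(np.abs(x) - object_amp))  # object-domain error
    x = object_amp * np.exp(1j * np.angle(x))              # object-amplitude constraint
```

By the error-reduction argument, the object-domain error in this loop is non-increasing; the overfitting and stagnation of such classical iterations are what the network priors above are designed to mitigate.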
|
15
|
Rogalski M, Arcab P, Stanaszek L, Micó V, Zuo C, Trusiak M. Physics-driven universal twin-image removal network for digital in-line holographic microscopy. OPTICS EXPRESS 2024; 32:742-761. [PMID: 38175095 DOI: 10.1364/oe.505440]
Abstract
Digital in-line holographic microscopy (DIHM) enables efficient and cost-effective computational quantitative phase imaging with a large field of view, making it valuable for studying cell motility, migration, and bio-microfluidics. However, the quality of DIHM reconstructions is compromised by twin-image noise, posing a significant challenge. Conventional methods for mitigating this noise involve complex hardware setups or time-consuming algorithms with often limited effectiveness. In this work, we propose UTIRnet, a deep learning solution for fast, robust, and universally applicable twin-image suppression, trained exclusively on numerically generated datasets. The availability of open-source UTIRnet codes facilitates its implementation in various DIHM systems without the need for extensive experimental training data. Notably, our network ensures the consistency of reconstruction results with the input holograms, imparting a physics-based foundation and enhancing reliability compared to conventional deep learning approaches. Experimental verification included, among other samples, migration sensing of live neural glial cell cultures, which is crucial for neurodegenerative disease research.
|
16
|
Thapa V, Galande AS, Ram GHP, John R. TIE-GANs: single-shot quantitative phase imaging using transport of intensity equation with integration of GANs. JOURNAL OF BIOMEDICAL OPTICS 2024; 29:016010. [PMID: 38293292 PMCID: PMC10826717 DOI: 10.1117/1.jbo.29.1.016010]
Abstract
Significance: Artificial intelligence (AI) has become a prominent technology in computational imaging over the past decade. The expeditious and label-free characteristics of quantitative phase imaging (QPI) render it a promising contender for AI investigation. Though interferometric methodologies exhibit potential efficacy, their implementation involves complex experimental platforms and computationally intensive reconstruction procedures. Hence, non-interferometric methods, such as the transport of intensity equation (TIE), are preferred over interferometric methods. Aim: The TIE method, despite its effectiveness, is tedious, as it requires the acquisition of many images at varying defocus planes. The proposed methodology can generate a phase image from a single intensity image using generative adversarial networks (GANs). We present a method called TIE-GANs to overcome the multi-shot scheme of conventional TIE. Approach: The present investigation employs the TIE as a QPI methodology, which necessitates reduced experimental and computational effort; the TIE is also used for dataset preparation. The proposed method captures images from different defocus planes for training. Our approach is based on GANs and uses an image-to-image translation technique to produce phase maps. The main contribution of this work is the introduction of GANs with the TIE (TIE-GANs), which can give better phase reconstruction results with shorter computation times. This is the first time GANs have been proposed for TIE phase retrieval. Results: The characterization of the system was carried out with microbeads of 4 μm size, and the structural similarity index (SSIM) for microbeads was found to be 0.98. We demonstrated the application of the proposed method with oral cells, which yielded a maximum SSIM value of 0.95. The key metrics include mean squared error and peak signal-to-noise ratio values of 140 and 26.42 dB for oral cells and 100 and 28.10 dB for microbeads.
Conclusions: The proposed methodology can generate a phase image from a single intensity image. Our method is feasible for digital cytology because of its reported high SSIM values. Our approach can handle defocused images in such a way that it can take an intensity image from any defocus plane within the provided range and generate the corresponding phase map.
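For comparison, the conventional TIE reconstruction that TIE-GANs aims to replace reduces, for a near-uniform beam, to a central z-difference followed by an FFT-based Poisson solve (Teague's approach). The sketch below is that generic baseline with illustrative parameters, not the authors' implementation:

```python
import numpy as np

def tie_phase(I_minus, I_plus, dz, wavelength, pitch, I0=1.0, eps=1e-9):
    """Teague-style TIE phase retrieval for a near-uniform beam: solves
    laplacian(phi) = -(k / I0) * dI/dz with an FFT-based Poisson solver."""
    k = 2.0 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2.0 * dz)        # central difference in z
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    freq2 = (2.0 * np.pi) ** 2 * (FX ** 2 + FY ** 2)
    rhs = -(k / I0) * dIdz
    phi_hat = -np.fft.fft2(rhs) / (freq2 + eps)   # invert the Laplacian
    phi_hat[0, 0] = 0.0                           # mean phase is unobservable
    return np.real(np.fft.ifft2(phi_hat))

# Self-check: fabricate defocus data consistent with a known cosine phase.
n, pitch, wl, dz = 128, 1e-6, 633e-9, 1e-4            # illustrative values
phi_true = np.tile(np.cos(2.0 * np.pi * np.arange(n) / n), (n, 1))
lap = -(2.0 * np.pi / (n * pitch)) ** 2 * phi_true    # analytic Laplacian
dIdz = -lap / (2.0 * np.pi / wl)                      # TIE with I0 = 1
I_plus, I_minus = 1.0 + dz * dIdz, 1.0 - dz * dIdz
phi_rec = tie_phase(I_minus, I_plus, dz, wl, pitch)
```

The two defocused intensities I_plus and I_minus stand in for the multi-shot acquisition that the single-shot GAN approach above removes.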
Affiliation(s)
- Vikas Thapa
- Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
| | - Ashwini Subhash Galande
- Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
| | - Gurram Hanu Phani Ram
- Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
| | - Renu John
- Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
|
17
|
Lu JY. Modulation of Point Spread Function for Super-Resolution Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2024; 71:153-171. [PMID: 37988211 DOI: 10.1109/tuffc.2023.3335883]
Abstract
High image resolution is desired in wave-related areas such as ultrasound, acoustics, optics, and electromagnetics. However, the spatial resolution of an imaging system is limited by the spatial frequency of the point spread function (PSF) of the system due to diffraction. In this article, the PSF is modulated in amplitude, phase, or both to increase the spatial frequency in order to reconstruct super-resolution images of objects or wave sources/fields, where the modulator can be a focused shear wave produced remotely by, for example, a radiation force from a focused Bessel beam or X-wave, or can be a small particle manipulated remotely by a radiation force (such as acoustic and optical tweezers) or by electrical and magnetic forces. A theory of the PSF-modulation method was developed, and computer simulations and experiments were conducted. The result of an ultrasound experiment shows that a reconstructed pulse-echo (two-way) image has super-resolution (0.65 mm), as compared to the diffraction limit (2.65 mm), using a 0.5-mm-diameter modulator at a 1.483-mm wavelength, and the signal-to-noise ratio (SNR) of the image was about 31 dB. If the minimal SNR of a "visible" image is 3, the resolution can be further increased to about 0.19 mm by decreasing the size of the modulator. Another ultrasound experiment shows that a wave source was imaged (one-way) at about 30-dB SNR using the same modulator size and wavelength as above. The image clearly separated two lines spaced 0.5 mm apart, which gives a 7.26-fold higher resolution than the diffraction limit (3.63 mm). Although, in theory, the method has no limit on the highest achievable image resolution, in practice the resolution is limited by noise. In addition, a PSF-weighted super-resolution imaging method based on the PSF-modulation method was developed; this method is easier to implement but may have some limitations.
Finally, the methods above can be applied to imaging systems of an arbitrary PSF and can produce 4-D super-resolution images. With a proper choice of a modulator (e.g., a quantum dot) and imaging system, nanoscale (a few nanometers) imaging is possible.
|
18
|
Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. LIGHT, SCIENCE & APPLICATIONS 2024; 13:4. [PMID: 38161203 PMCID: PMC10758000 DOI: 10.1038/s41377-023-01340-x]
Abstract
Phase recovery (PR) refers to calculating the phase of a light field from intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and for correcting the aberrations of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages, namely pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
Affiliation(s)
- Kaiqiang Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China.
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China.
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China.
| | - Li Song
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
| | - Chutian Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
| | - Zhenbo Ren
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
| | - Guangyuan Zhao
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Jiazhen Dou
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
| | - Jianglei Di
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
| | - George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Renjie Zhou
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Jianlin Zhao
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China.
| | - Edmund Y Lam
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China.
|
19
|
Hu X, Jia X, Zhang K, Lo TW, Fan Y, Liu D, Wen J, Yong H, Rahmani M, Zhang L, Lei D. Deep-learning-augmented microscopy for super-resolution imaging of nanoparticles. OPTICS EXPRESS 2024; 32:879-890. [PMID: 38175110 DOI: 10.1364/oe.505060]
Abstract
Conventional optical microscopes generally provide blurry and indistinguishable images of subwavelength nanostructures. However, a wealth of intensity and phase information is hidden in the corresponding diffraction-limited optical patterns and can be used for the recognition of structural features such as size, shape, and spatial arrangement. Here, we apply a deep-learning framework to improve the spatial resolution of optical imaging for metal nanostructures with regular shapes yet varied arrangements. A convolutional neural network (CNN) is constructed and pre-trained on optical images of randomly distributed gold nanoparticles as input, with the corresponding scanning electron microscopy images as ground truth. The CNN then learns to recover the non-diffracted super-resolution images of both regularly arranged nanoparticle dimers and randomly clustered nanoparticle multimers from their blurry optical images. The profiles and orientations of these structures can also be reconstructed accurately. Moreover, the same network is extended to deblur the optical images of randomly cross-linked silver nanowires. Most sections of these intricate nanowire nets are recovered well, with a slight discrepancy near their intersections. This deep-learning-augmented framework opens new opportunities for computational super-resolution optical microscopy, with many potential applications in the fields of bioimaging and nanoscale fabrication and characterization. It could also be applied to significantly enhance the resolving capability of low-magnification scanning electron microscopy.
|
20
|
Nagahama Y. Digital holography without a dark room environment: extraction of interference fringes by using deep learning. APPLIED OPTICS 2023; 62:8911-8917. [PMID: 38038037 DOI: 10.1364/ao.497889]
Abstract
When obtaining digital holograms, dark rooms are used to prevent the influence of natural light on the formation of the holograms. In recent years, researchers have actively studied machine learning techniques such as deep learning to resolve image-related problems. In this study, we acquired pairs of holograms influenced by natural light and holograms unaffected by natural light, and trained a U-Net to perform an image transformation that removes the effects of natural light from holograms. This study thus proposes a method for eliminating the effects of natural light from holograms using the trained U-Net. To verify the effectiveness of the proposed method, we evaluated the image quality of the reconstructed images of holograms before and after image processing by the U-Net. The results showed that the peak signal-to-noise ratio (PSNR) increased by 7.38 dB after processing by the U-Net, and the structural similarity index (SSIM) increased by 0.0453. This study confirmed that, in digital holography, holograms can be acquired without the use of a dark room and that the proposed method can eliminate the effects of natural light and produce high-quality reconstructed images.
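The PSNR and SSIM figures quoted here follow standard definitions; a plain-NumPy sketch is below. Note that published SSIM values are normally a mean over local windows, whereas the single-window variant here only illustrates the formula.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=1.0):
    """SSIM computed over the whole image as one window (reported SSIM
    values are normally an average over small local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

# Illustrative use on a synthetic reconstruction and a noisy copy of it.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + 0.05 * rng.standard_normal((64, 64))
```

An improvement such as the +7.38 dB quoted above corresponds to psnr(ref, after) - psnr(ref, before) against the same reference reconstruction.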
|
21
|
Park J, Bai B, Ryu D, Liu T, Lee C, Luo Y, Lee MJ, Huang L, Shin J, Zhang Y, Ryu D, Li Y, Kim G, Min HS, Ozcan A, Park Y. Artificial intelligence-enabled quantitative phase imaging methods for life sciences. Nat Methods 2023; 20:1645-1660. [PMID: 37872244 DOI: 10.1038/s41592-023-02041-4]
Abstract
Quantitative phase imaging, integrated with artificial intelligence, allows for the rapid and label-free investigation of the physiology and pathology of biological systems. This review presents the principles of various two-dimensional and three-dimensional label-free phase imaging techniques that exploit refractive index as an intrinsic optical imaging contrast. In particular, we discuss artificial intelligence-based analysis methodologies for biomedical studies including image enhancement, segmentation of cellular or subcellular structures, classification of types of biological samples and image translation to furnish subcellular and histochemical information from label-free phase images. We also discuss the advantages and challenges of artificial intelligence-enabled quantitative phase imaging analyses, summarize recent notable applications in the life sciences, and cover the potential of this field for basic and industrial research in the life sciences.
Affiliation(s)
- Juyeon Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
| | - Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - DongHun Ryu
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Chungha Lee
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
| | - Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Mahn Jae Lee
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Jeongwon Shin
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Department of Biological Sciences, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
| | - Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | | | - Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
| | - Geon Kim
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
| | | | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA.
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA.
| | - YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea.
- Tomocube, Daejeon, Republic of Korea.
|
22
|
Fan C, Li J, Du Y, Hu Z, Chen H, Yang Z, Zhang G, Zhang L, Zhao Z, Zhao H. Flexible dynamic quantitative phase imaging based on division of focal plane polarization imaging technique. OPTICS EXPRESS 2023; 31:33830-33841. [PMID: 37859154 DOI: 10.1364/oe.498239]
Abstract
This paper proposes a flexible and accurate dynamic quantitative phase imaging (QPI) method using single-shot transport of intensity equation (TIE) phase retrieval achieved by division of focal plane (DoFP) polarization imaging technique. By exploiting the polarization property of the liquid crystal spatial light modulator (LC-SLM), two intensity images of different defocus distances contained in orthogonal polarization directions can be generated simultaneously. Then, with the help of the DoFP polarization imaging, these images can be captured with single exposure, enabling accurate dynamic QPI by solving the TIE. In addition, our approach gains great flexibility in defocus distance adjustment by adjusting the pattern loaded on the LC-SLM. Experiments on microlens array, phase plate, and living human gastric cancer cells demonstrate the accuracy, flexibility, and dynamic measurement performance for various objects. The proposed method provides a simple, flexible, and accurate approach for real-time QPI without sacrificing the field of view.
|
23
|
Wang N, Zhang C, Wei X, Yan T, Zhou W, Zhang J, Kang H, Yuan Z, Chen X. Harnessing the power of optical microscopy for visualization and analysis of histopathological images. BIOMEDICAL OPTICS EXPRESS 2023; 14:5451-5465. [PMID: 37854561 PMCID: PMC10581782 DOI: 10.1364/boe.501893]
Abstract
Histopathology is the foundation and gold standard for identifying diseases, and precise quantification of histopathological images can provide the pathologist with objective clues for a more convincing diagnosis. Optical microscopy (OM), an important branch of optical imaging technology that provides high-resolution images of tissue cytology and structural morphology, has been used in histopathological diagnosis and has evolved into the new disciplinary direction of optical microscopic histopathology (OMH). A number of ex-vivo studies have demonstrated the applicability of different OMH approaches, and a transfer of these techniques toward in-vivo diagnosis is currently in progress. Furthermore, combined with advanced artificial intelligence algorithms, OMH allows for improved diagnostic reliability and convenience due to the complementarity of the retrieved information. In this review, we cover recent advances in OMH, including the exploration of new OMH techniques as well as their applications, and look ahead to new challenges in OMH. These typical application examples demonstrate the application potential and clinical value of OMH techniques in histopathological diagnosis.
Affiliation(s)
- Nan Wang
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
| | - Chang Zhang
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
| | - Xinyu Wei
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
| | - Tianyu Yan
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
| | - Wangting Zhou
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
| | - Jiaojiao Zhang
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
| | - Huan Kang
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
| | - Zhen Yuan
- Faculty of Health Sciences, University of Macau, Macau, 999078, China
| | - Xueli Chen
- Center for Biomedical-photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
- Inovation Center for Advanced Medical Imaging and Intelligent Medicine, Guangzhou Institute of Technology, Xidian University, Guangzhou, Guangdong 510555, China
|
24
|
Zhao J, Liu L, Wang T, Zhang J, Wang X, Du X, Hao R, Liu J, Liu Y, Liu Y. Quantitative phase imaging of living red blood cells combining digital holographic microscopy and deep learning. JOURNAL OF BIOPHOTONICS 2023; 16:e202300090. [PMID: 37321984 DOI: 10.1002/jbio.202300090]
Abstract
Digital holographic microscopy, as a non-contact, non-invasive, and highly accurate measurement technology, is becoming a valuable method for quantitatively investigating cells and tissues. Reconstruction of phases from a digital hologram is a key step in quantitative phase imaging for biological and biomedical research. This study proposes a two-stage deep convolutional neural network named VY-Net to realize effective and robust phase reconstruction of living red blood cells. VY-Net can obtain the phase information of an object directly from a single-shot off-axis digital hologram. We also propose two new indices to evaluate the reconstructed phases. In experiments, the mean structural similarity index of the reconstructed phases reaches 0.9309, and the mean reconstruction accuracy of the reconstructed phases is as high as 91.54%. An unseen phase map of a living human white blood cell is successfully reconstructed by the trained VY-Net, demonstrating its strong generality.
Affiliation(s)
- Jiaxi Zhao
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Lin Liu
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Tianhe Wang
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Jing Zhang
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Xiangzhou Wang
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Xiaohui Du
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Ruqian Hao
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Juanxiu Liu
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Yi Liu
- School of Physics, University of Electronic Science and Technology of China, Chengdu, China
| | - Yong Liu
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
|
25
|
He H, Tang C, Zhang L, Xu M, Lei Z. UN-PUNet for phase unwrapping from a single uneven and noisy ESPI phase pattern. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2023; 40:1969-1978. [PMID: 37855553 DOI: 10.1364/josaa.499453]
Abstract
The wrapped phase patterns of objects with varying materials exhibit uneven gray values. Phase unwrapping from a single wrapped phase pattern in electronic speckle pattern interferometry (ESPI) is therefore a tricky problem due to gray-level unevenness and noise. In this paper, we propose a convolutional neural network (CNN) model named UN-PUNet for phase unwrapping from a single wrapped phase pattern with uneven grayscale and noise. UN-PUNet leverages the benefits of a dual-branch encoder structure, a multi-scale feature fusion structure, a convolutional block attention module, and skip connections. Additionally, we have created an abundant dataset for phase unwrapping with varying degrees of unevenness, fringe density, and noise levels. We also propose a mixed loss function, MS_SSIM + L2. Employing the proposed dataset and loss function, we successfully train UN-PUNet, ultimately realizing effective and robust phase unwrapping from a single uneven and noisy wrapped phase pattern. We evaluate the performance of our method on both simulated and experimental ESPI wrapped phase patterns, comparing it with DLPU, VUR-Net, and PU-M-Net. The unwrapping performance is assessed quantitatively and qualitatively. Furthermore, we conduct ablation experiments to evaluate the impact of the different loss functions and of the attention module utilized in our method. The results demonstrate that our proposed method outperforms the compared methods, eliminating the need for pre-processing, post-processing, and parameter fine-tuning. Moreover, our method effectively solves the phase unwrapping problem while preserving structure and shape, eliminating speckle noise, and addressing uneven grayscale.
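For context on what the learned unwrapper must improve on, classical line-scan (Itoh) unwrapping is essentially two NumPy calls; it is exact for clean, well-sampled phase, and it is precisely under the speckle noise and uneven modulation discussed above that it fails:

```python
import numpy as np

def itoh_unwrap(wrapped):
    """Classical line-scan unwrapping: unwrap down the columns, then along
    the rows. Exact when every neighbouring-pixel phase step is below pi."""
    return np.unwrap(np.unwrap(wrapped, axis=0), axis=1)

# Synthetic smooth phase: a tilted plane giving a few fringes across the field.
ny, nx = 64, 64
Y, X = np.mgrid[0:ny, 0:nx]
phi = 0.3 * X + 0.2 * Y               # true phase, radians
wrapped = np.angle(np.exp(1j * phi))  # wrapped into (-pi, pi]
recovered = itoh_unwrap(wrapped)
```

Adding speckle-like noise to `wrapped` quickly produces 2π-jump streaks in this line-scan result, which is the failure mode the CNN above is trained to avoid.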
26
Wang C, Zhou P, Zhu J. Deep learning-based end-to-end 3D depth recovery from a single-frame fringe pattern with the MSUNet++ network. Opt Express 2023; 31:33287-33298. PMID: 37859112. DOI: 10.1364/oe.501067.
Abstract
Deep learning (DL)-based reconstruction of 3D depth from a single-frame fringe pattern has attracted extensive research interest. The goal is to estimate a high-precision 3D shape from the limited information in a single fringe pattern. This work therefore proposes an end-to-end DL-based 3D reconstruction method from a single fringe pattern that achieves high-accuracy depth recovery while preserving the geometric details of the tested objects. We construct a multi-scale feature fusion convolutional neural network (CNN) called MSUNet++, which incorporates the discrete wavelet transform (DWT) in data preprocessing to extract the high-frequency signals of the fringe patterns as network input. Additionally, a loss function that combines structural similarity with edge perception is established. Through these measures, the high-frequency geometric details of the reconstruction are clearly enhanced, while the overall geometric shape is effectively maintained. Ablation experiments validate the effectiveness of the proposed solution. Reconstruction results and generalization experiments on different test samples show that the proposed method offers higher accuracy, better detail preservation, and greater robustness than the compared methods.
27
Karpov DV, Kurdiumov S, Horak P. Convolutional neural networks for mode on-demand high finesse optical resonator design. Sci Rep 2023; 13:15567. PMID: 37730758. PMCID: PMC10511533. DOI: 10.1038/s41598-023-42223-w.
Abstract
We demonstrate the use of machine learning through convolutional neural networks to solve inverse design problems of optical resonator engineering. The neural network finds a harmonic modulation of a spherical mirror to generate a resonator mode with a given target topology ("mode on-demand"). The procedure allows us to optimize the shape of mirrors to achieve a significantly enhanced coupling strength and cooperativity between a resonator photon and a quantum emitter located at the center of the resonator. In a second example, a double-peak mode is designed which would enhance the interaction between two quantum emitters, e.g., for quantum information processing.
Affiliation(s)
- Denis V Karpov
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Sergei Kurdiumov
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Peter Horak
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
28
Shang R, O'Brien MA, Wang F, Situ G, Luke GP. Approximating the uncertainty of deep learning reconstruction predictions in single-pixel imaging. Commun Eng 2023; 2:53. PMID: 38463559. PMCID: PMC10923550. DOI: 10.1038/s44172-023-00103-1.
Abstract
Single-pixel imaging (SPI) has the advantages of high-speed acquisition over a broad wavelength range and system compactness. Deep learning (DL) is a powerful tool that can achieve higher image quality than conventional reconstruction approaches. Here, we propose a Bayesian convolutional neural network (BCNN) to approximate the uncertainty of the DL predictions in SPI. Each pixel in the predicted image represents a probability distribution rather than an image intensity value, indicating the uncertainty of the prediction. We show that the BCNN uncertainty predictions are correlated to the reconstruction errors. When the BCNN is trained and used in practical applications where the ground truths are unknown, the level of the predicted uncertainty can help to determine whether system, data, or network adjustments are needed. Overall, the proposed BCNN can provide a reliable tool to indicate the confidence levels of DL predictions as well as the quality of the model and dataset for many applications of SPI.
Affiliation(s)
- Ruibo Shang
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Fei Wang
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Guohai Situ
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
- Geoffrey P. Luke
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
29
Ryu D, Bak T, Ahn D, Kang H, Oh S, Min HS, Lee S, Lee J. Deep learning-based label-free hematology analysis framework using optical diffraction tomography. Heliyon 2023; 9:e18297. PMID: 37576294. PMCID: PMC10412892. DOI: 10.1016/j.heliyon.2023.e18297.
Abstract
Hematology analysis, a common clinical test for screening various diseases, has conventionally required a chemical staining process that is time-consuming and labor-intensive. To reduce the costs of chemical staining, label-free imaging can be utilized for hematology analysis. In this work, we exploit optical diffraction tomography and the fully convolutional one-stage object detector (FCOS), a deep learning architecture for object detection, to develop a label-free hematology analysis framework. Detected cells are classified into four groups: red blood cells, abnormal red blood cells, platelets, and white blood cells. The trained object detection model showed strong detection performance for blood cells in refractive index tomograms (0.977 mAP) and high accuracy in the four-class classification of blood cells (0.9708 weighted F1 score, 0.9712 total accuracy). For further verification, mean corpuscular volume (MCV) and mean corpuscular hemoglobin (MCH) were compared with values obtained from reference hematology equipment, showing reasonable correlation for both MCV (0.905) and MCH (0.889). This study demonstrates that the proposed framework can detect and classify blood cells using optical diffraction tomography for label-free hematology analysis.
Affiliation(s)
- Dongmin Ryu
- Tomocube Inc., Daejeon, 34109, Republic of Korea
- Taeyoung Bak
- Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, Republic of Korea
- Daewoong Ahn
- Tomocube Inc., Daejeon, 34109, Republic of Korea
- Hayoung Kang
- Tomocube Inc., Daejeon, 34109, Republic of Korea
- Sanggeun Oh
- Tomocube Inc., Daejeon, 34109, Republic of Korea
- Sumin Lee
- Tomocube Inc., Daejeon, 34109, Republic of Korea
- Jimin Lee
- Department of Nuclear Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, Republic of Korea
- Graduate School of Artificial Intelligence (AIGS), Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, Republic of Korea
30
Wang X, Zhu K, Zhu K, Li B, Shen D, Zheng ZG. A simple polarimetric measurement based on a computational algorithm. Opt Lett 2023; 48:4085-4088. PMID: 37527124. DOI: 10.1364/ol.494727.
Abstract
A simple and compact polarimeter comprising two electrically controlled liquid-crystal variable retarders (LCVRs) and a linear polarizer is demonstrated, which is enabled by analyzing the intensity variation of the modulated output light based on a computational algorithm. A proof-of-concept prototype is presented, which is mounted onto a power meter or a CMOS camera for the intensity data collection. The polarimetric measurement for the spatial variant polarization states of light is also verified, indicating the possibility of achieving a resolution-lossless polarimeter. Thus, our proposed method shows a cost-effective way to realize a compact polarimeter in polarization optics.
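The computational inversion underlying such polarimeters can be sketched generically: each detector reading is an inner product of a known analyzer vector with the unknown Stokes vector, so four or more well-conditioned states allow recovery by least squares. The analyzer matrix below is a hypothetical textbook example, not the paper's LCVR model:

```python
import numpy as np

# Hypothetical analyzer matrix (rows: measurement states); each reading
# obeys I_k = a_k . S for the Stokes vector S = (S0, S1, S2, S3).
A = np.array([
    [0.5,  0.5,  0.0,  0.0],   # linear polarizer at 0 deg
    [0.5, -0.5,  0.0,  0.0],   # linear polarizer at 90 deg
    [0.5,  0.0,  0.5,  0.0],   # linear polarizer at 45 deg
    [0.5,  0.0,  0.0,  0.5],   # right-circular analyzer
])
S_true = np.array([1.0, 0.3, -0.2, 0.5])   # input Stokes vector
I = A @ S_true                             # modulated intensity readings
S_est = np.linalg.pinv(A) @ I              # computational inversion
print(np.allclose(S_est, S_true))          # True (noise-free, well-conditioned)
```

With noisy readings, the same pseudoinverse gives the least-squares estimate, and extra measurement states improve conditioning.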
31
Liu T, Li Y, Koydemir HC, Zhang Y, Yang E, Eryilmaz M, Wang H, Li J, Bai B, Ma G, Ozcan A. Rapid and stain-free quantification of viral plaque via lens-free holography and deep learning. Nat Biomed Eng 2023; 7:1040-1052. PMID: 37349390. PMCID: PMC10427422. DOI: 10.1038/s41551-023-01057-7.
Abstract
The plaque assay, the gold-standard method for measuring the concentration of replication-competent lytic virions, requires staining and usually more than 48 h of runtime. Here we show that lens-free holographic imaging and deep learning can be combined to expedite and automate the assay. The compact imaging device captures phase information label-free at a rate of approximately 0.32 gigapixels per hour per well, covers an area of about 30 × 30 mm² and a 10-fold larger dynamic range of virus concentration than standard assays, and quantifies the infected area and the number of plaque-forming units. For the vesicular stomatitis virus, the automated plaque assay detected the first cell-lysing events caused by viral replication as early as 5 h after incubation, and in less than 20 h it detected plaque-forming units at rates higher than 90% at 100% specificity. Furthermore, it reduced the incubation time of the herpes simplex virus type 1 by about 48 h and that of the encephalomyocarditis virus by about 20 h. The stain-free assay should be amenable for use in virology research, vaccine development and clinical diagnosis.
Affiliation(s)
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Hatice Ceylan Koydemir
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Center for Remote Health Technologies and Systems, Texas A&M University, College Station, TX, USA
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Ethan Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Department of Mathematics, University of California, Los Angeles, CA, USA
- Merve Eryilmaz
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- Hongda Wang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Guangdong Ma
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- School of Physics, Xi'an Jiaotong University, Xi'an, China
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Department of Surgery, University of California, Los Angeles, CA, USA
32
Mandal AC, Rathor M, Zalevsky Z, Singh RK. Randomness assisted in-line holography with deep learning. Sci Rep 2023; 13:10986. PMID: 37419990. DOI: 10.1038/s41598-023-37810-w.
Abstract
We propose and demonstrate a holographic imaging scheme that exploits random illumination for recording the hologram, followed by numerical reconstruction and twin-image removal. We use an in-line holographic geometry to record the hologram in terms of the second-order intensity correlation and apply a numerical approach to reconstruct it. This strategy yields higher-quality quantitative images than conventional holography, where the hologram is recorded in intensity rather than in the second-order intensity correlation. The twin-image issue of the in-line holographic scheme is resolved with an unsupervised deep-learning method based on an autoencoder. The proposed technique leverages the main characteristic of autoencoders to perform blind single-shot hologram reconstruction: it does not require a training dataset with ground truth and reconstructs the hologram solely from the captured sample. Experimental results are presented for two objects, and the reconstruction quality is compared between conventional in-line holography and the proposed technique.
Affiliation(s)
- Aditya Chandra Mandal
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
- Department of Mining Engineering, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
- Mohit Rathor
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
- Zeev Zalevsky
- Faculty of Engineering and Nano Technology Center, Bar-Ilan University, Ramat Gan, Israel
- Rakesh Kumar Singh
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh, 221005, India
33
Pan T, Jin S, Miller MD, Kyrillidis A, Phillips GN. A deep learning solution for crystallographic structure determination. IUCrJ 2023; 10:487-496. PMID: 37409806. DOI: 10.1107/s2052252523004293.
Abstract
The general de novo solution of the crystallographic phase problem is difficult and only possible under certain conditions. This paper develops an initial pathway to a deep learning neural network approach for the phase problem in protein crystallography, based on a synthetic dataset of small fragments derived from a large well curated subset of solved structures in the Protein Data Bank (PDB). In particular, electron-density estimates of simple artificial systems are produced directly from corresponding Patterson maps using a convolutional neural network architecture as a proof of concept.
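The network's input here, the Patterson map, is the Fourier transform of the measured intensities |F|², i.e. the autocorrelation of the electron density with the phases already lost. A one-dimensional toy "crystal" (illustrative data, not from the paper) makes the relationship concrete:

```python
import numpy as np

# Toy 1D crystal: three point atoms in a 64-sample unit cell
rho = np.zeros(64)
rho[[5, 20, 33]] = [1.0, 2.0, 1.5]

F = np.fft.fft(rho)                          # structure factors (phases known here)
patterson = np.fft.ifft(np.abs(F) ** 2).real # what experiment gives access to

# Wiener-Khinchin: the Patterson map equals the circular autocorrelation
autocorr = np.array([np.sum(rho * np.roll(rho, -u)) for u in range(64)])
print(np.allclose(patterson, autocorr))      # True
```

The phase problem is inverting this map back to rho, which is what the paper's CNN learns to approximate for simple fragments.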
Affiliation(s)
- Tom Pan
- Department of Computer Science, Rice University, Houston, Texas, USA
- Shikai Jin
- Department of Biosciences, Rice University, Houston, Texas, USA
34
Bouchama L, Dorizzi B, Thellier M, Klossa J, Gottesman Y. Fourier ptychographic microscopy image enhancement with bi-modal deep learning. Biomed Opt Express 2023; 14:3172-3189. PMID: 37497486. PMCID: PMC10368047. DOI: 10.1364/boe.489776.
Abstract
Digital pathology based on whole-slide imaging is about to enable a major breakthrough in automated diagnosis for rapid and highly sensitive disease detection. High-resolution Fourier ptychographic microscopy (FPM) slide scanners that deliver rich information on biological samples are becoming available, allowing effective data exploitation for automated diagnosis. However, when the sample thickness becomes comparable to or greater than the microscope depth of field, we observe an undesirable contrast change of sub-cellular compartments in phase images around the optimal focal plane, reducing their usability. In this article, a bi-modal U-Net artificial neural network (i.e., a two-channel U-Net fed with intensity and phase images) is trained to reinforce the contrast of specifically targeted sub-cellular compartments in both intensity and phase images. The procedure used to construct the reference database is detailed: the FPM reconstruction algorithm is exploited to explore images around the optimal focal plane through virtual Z-stacking, and those with adequate contrast and focus are selected. By construction, the trained U-Net simultaneously reinforces the visibility of targeted cell compartments and compensates for focus imprecision, and it is efficient over a large field of view at high resolution. The interest of the approach is illustrated with the use case of Plasmodium falciparum detection in blood smears, where improved detection sensitivity is demonstrated without degradation of specificity. Post-reconstruction FPM image processing with such a U-Net and its training procedure is general and applicable to demanding biological screening applications.
Affiliation(s)
- Lyes Bouchama
- Samovar, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
- TRIBVN/T-life, 92800 Puteaux, France
- Bernadette Dorizzi
- Samovar, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
- Marc Thellier
- Sorbonne Université, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique, AP-HP, Hôpital Pitié-Salpêtrière, 75013 Paris, France
- Yaneck Gottesman
- Samovar, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
35
Seong B, Kim I, Moon T, Ranathunga M, Kim D, Joo C. Untrained deep learning-based differential phase-contrast microscopy. Opt Lett 2023; 48:3607-3610. PMID: 37390192. DOI: 10.1364/ol.493391.
Abstract
Quantitative differential phase-contrast (DPC) microscopy produces phase images of transparent objects based on a number of intensity images. To reconstruct the phase, in DPC microscopy, a linearized model for weakly scattering objects is considered; this limits the range of objects to be imaged, and requires additional measurements and complicated algorithms to correct for system aberrations. Here, we present a self-calibrated DPC microscope using an untrained neural network (UNN), which incorporates the nonlinear image formation model. Our method alleviates the restrictions on the object to be imaged and simultaneously reconstructs the complex object information and aberrations, without any training dataset. We demonstrate the viability of UNN-DPC microscopy through both numerical simulations and LED microscope-based experiments.
36
Bhatt S, Butola A, Kumar A, Thapa P, Joshi A, Jadhav S, Singh N, Prasad DK, Agarwal K, Mehta DS. Single-shot multispectral quantitative phase imaging of biological samples using deep learning. Appl Opt 2023; 62:3989-3999. PMID: 37706710. DOI: 10.1364/ao.482788.
Abstract
Multispectral quantitative phase imaging (MS-QPI) is a high-contrast, label-free technique for morphological imaging of specimens. The aim of the present study is to extract spectrally dependent quantitative information in a single shot using a highly spatially sensitive digital holographic microscope assisted by a deep neural network. Three different wavelengths are used in our method: λ = 532, 633, and 808 nm. The first step is to acquire the interferometric data for each wavelength. The acquired datasets are used to train a generative adversarial network that generates multispectral (MS) quantitative phase maps from a single input interferogram. The network was trained and validated on two different samples: an optical waveguide and MG63 osteosarcoma cells. The present approach is validated by comparing the predicted MS phase maps with numerically reconstructed (FT + TIE) phase maps and quantifying the agreement with different image quality assessment metrics.
37
Sun J, Czarske JW. Compressive holographic sensing simplifies quantitative phase imaging. Light Sci Appl 2023; 12:121. PMID: 37198148. DOI: 10.1038/s41377-023-01145-y.
Abstract
Quantitative phase imaging (QPI) has emerged as a method for investigating biological specimens and technical objects. However, conventional methods often suffer from shortcomings in image quality, such as the twin-image artifact. A novel computational framework for QPI is presented that achieves high-quality inline holographic imaging from a single intensity image. This paradigm shift is promising for advanced QPI of cells and tissues.
Affiliation(s)
- Jiawei Sun
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany
- Juergen W Czarske
- Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany
- Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany
- Institute of Applied Physics, TU Dresden, Dresden, Germany
38
Huang T, Zhang Q, Li J, Lu X, Di J, Zhong L, Qin Y. Single-shot Fresnel incoherent correlation holography via deep learning based phase-shifting technology. Opt Express 2023; 31:12349-12356. PMID: 37157396. DOI: 10.1364/oe.486289.
Abstract
Fresnel incoherent correlation holography (FINCH) realizes non-scanning three-dimensional (3D) imaging with spatially incoherent illumination, but it requires phase-shifting technology to remove the disturbance of the DC term and twin term that appear in the reconstructed field; this increases the complexity of the experiment and limits the real-time performance of FINCH. Here, we propose a single-shot Fresnel incoherent correlation holography via deep-learning-based phase shifting (FINCH/DLPS) method to realize rapid and high-precision image reconstruction from a single collected interferogram. A phase-shifting network is designed to implement the phase-shifting operation of FINCH: the trained network predicts two interferograms with phase shifts of 2π/3 and 4π/3 from one input interferogram. Using the conventional three-step phase-shifting algorithm, we can then remove the DC term and twin term of the FINCH reconstruction and obtain a high-precision reconstruction through the back-propagation algorithm. The Modified National Institute of Standards and Technology (MNIST) dataset is used to verify the feasibility of the proposed method through experiments. On the MNIST dataset, the reconstruction results demonstrate that, in addition to high-precision reconstruction, the proposed FINCH/DLPS method can effectively retain 3D information by calibrating the back-propagation distance while reducing the complexity of the experiment, further indicating its feasibility and superiority.
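The conventional three-step phase-shifting recovery that the predicted frames feed into can be sketched on synthetic fringes (illustrative background/modulation values, not FINCH data):

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, (16, 16))       # phase to recover
a, b = 1.0, 0.7                                  # background and modulation

# Three interferograms with shifts 0, 2*pi/3, 4*pi/3 (in the paper, the
# network predicts the second and third frames from the first)
I = [a + b * np.cos(phi + d) for d in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]

# Classical three-step formula
phi_rec = np.arctan2(np.sqrt(3) * (I[2] - I[1]), 2 * I[0] - I[1] - I[2])

print(np.allclose(phi_rec, phi))                 # True
```

The identity follows from cos-difference algebra: sqrt(3)*(I3 - I2) = 3b sin(phi) and 2*I1 - I2 - I3 = 3b cos(phi), so the arctangent returns phi exactly, with the DC term a cancelling out.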
39
Wiggins L, Lord A, Murphy KL, Lacy SE, O'Toole PJ, Brackenbury WJ, Wilson J. The CellPhe toolkit for cell phenotyping using time-lapse imaging and pattern recognition. Nat Commun 2023; 14:1854. PMID: 37012230. PMCID: PMC10070448. DOI: 10.1038/s41467-023-37447-3.
Abstract
With phenotypic heterogeneity in whole cell populations widely recognised, the demand for quantitative and temporal analysis approaches to characterise single cell morphology and dynamics has increased. We present CellPhe, a pattern recognition toolkit for the unbiased characterisation of cellular phenotypes within time-lapse videos. CellPhe imports tracking information from multiple segmentation and tracking algorithms to provide automated cell phenotyping from different imaging modalities, including fluorescence. To maximise data quality for downstream analysis, our toolkit includes automated recognition and removal of erroneous cell boundaries induced by inaccurate tracking and segmentation. We provide an extensive list of features extracted from individual cell time series, with custom feature selection to identify variables that provide greatest discrimination for the analysis in question. Using ensemble classification for accurate prediction of cellular phenotype and clustering algorithms for the characterisation of heterogeneous subsets, we validate and prove adaptability using different cell types and experimental conditions.
Affiliation(s)
- Laura Wiggins
- York Biomedical Research Institute, University of York, York, UK
- Department of Biology, University of York, York, UK
- Alice Lord
- Department of Biology, University of York, York, UK
- Killian L Murphy
- Wolfson Atmospheric Chemistry Laboratories, University of York, York, UK
- Stuart E Lacy
- Wolfson Atmospheric Chemistry Laboratories, University of York, York, UK
- Peter J O'Toole
- York Biomedical Research Institute, University of York, York, UK
- Department of Biology, University of York, York, UK
- William J Brackenbury
- York Biomedical Research Institute, University of York, York, UK
- Department of Biology, University of York, York, UK
- Julie Wilson
- Department of Mathematics, University of York, York, UK
40
Bian L, Wang X, Chang X, Gao Z, Qin T. Phase retrieval via nonlocal complex-domain sparsity. Opt Lett 2023; 48:1854-1857. PMID: 37221783. DOI: 10.1364/ol.481953.
Abstract
Phase retrieval is indispensable for a number of coherent imaging systems. Owing to limited exposure, it is a challenge for traditional phase retrieval algorithms to reconstruct fine details in the presence of noise. In this Letter, we report an iterative framework for noise-robust phase retrieval with high fidelity. In the framework, we investigate nonlocal structural sparsity in the complex domain by low-rank regularization, which effectively suppresses artifacts caused by measurement noise. The joint optimization of sparsity regularization and data fidelity with forward models enables satisfying detail recovery. To further improve computational efficiency, we develop an adaptive iteration strategy that automatically adjusts matching frequency. The effectiveness of the reported technique has been validated for coherent diffraction imaging and Fourier ptychography, with ≈7 dB higher peak SNR (PSNR) on average, compared with conventional alternating projection reconstruction.
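Low-rank regularization of the kind described here is commonly implemented by soft thresholding of singular values (the proximal operator of the nuclear norm): stacks of similar patches form a nearly low-rank matrix, and shrinking small singular values suppresses noise. A minimal sketch on a synthetic rank-2 matrix, with a hypothetical noise level and threshold (not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
clean = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))   # rank-2 "patch stack"
noisy = clean + 0.1 * rng.normal(size=clean.shape)

def svt(M, tau):
    """Singular value soft thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

denoised = svt(noisy, tau=1.0)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)   # thresholding reduces the error here
```

In the paper's complex-domain setting the same operation is applied to complex-valued patch stacks inside an alternating-projection loop; the sketch only shows the denoising core.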
41
Valentino M, Sirico DG, Memmolo P, Miccio L, Bianco V, Ferraro P. Digital holographic approaches to the detection and characterization of microplastics in water environments. Appl Opt 2023; 62:D104-D118. PMID: 37132775. DOI: 10.1364/ao.478700.
Abstract
Microplastic (MP) pollution seriously threatens environmental health worldwide, which has accelerated the development of new identification and characterization methods. Digital holography (DH) is one of the emerging tools for detecting MPs in a high-throughput flow. Here, we review advances in MP screening by DH. We examine the problem from both the hardware and software viewpoints, and we report on automatic analysis based on smart DH processing, highlighting the role played by artificial intelligence in classification and regression tasks. In this framework, the continuous development and recent availability of field-portable holographic flow cytometers for water monitoring is also discussed.
|
42
|
Chen X, Wang H, Razi A, Kozicki M, Mann C. DH-GAN: a physics-driven untrained generative adversarial network for holographic imaging. OPTICS EXPRESS 2023; 31:10114-10135. [PMID: 37157567 DOI: 10.1364/oe.480894] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Digital holography (DH) is a 3D imaging technique in which a laser beam with a plane wavefront illuminates an object and the intensity of the diffracted wavefield, called a hologram, is recorded. The object's 3D shape can be obtained by numerically analyzing the captured holograms and recovering the incurred phase. Recently, deep learning (DL) methods have been used for more accurate holographic processing. However, most supervised methods require large datasets to train the model, which are rarely available in DH applications because of the scarcity of samples or privacy concerns. A few one-shot DL-based recovery methods exist that do not rely on large datasets of paired images. Still, most of these methods neglect the underlying physics that governs wave propagation, offering a black-box operation that is not explainable, generalizable, or transferable to other samples and applications. In this work, we propose a new DL architecture based on generative adversarial networks that uses a discriminative network to realize a semantic measure of reconstruction quality while using a generative network as a function approximator to model the inverse of hologram formation. We impose smoothness on the background of the recovered image using a progressive masking module powered by simulated annealing to enhance reconstruction quality. The proposed method exhibits high transferability to similar samples, which facilitates fast deployment in time-sensitive applications without retraining the network from scratch. The results show a considerable improvement over competing methods in reconstruction quality (about 5 dB PSNR gain) and robustness to noise (about a 50% reduction in the rate at which PSNR degrades as noise increases).
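The hologram-formation physics that such physics-driven networks embed is typically modeled with free-space propagation. A minimal angular-spectrum propagator, the standard textbook forward model rather than the paper's GAN, might look like this (names are illustrative):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the angular
    spectrum of plane waves (wavelength, dx, and z in the same units)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Free-space transfer function; evanescent components (arg < 0) are dropped.
    kz = 2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)) / wavelength
    H = np.where(arg > 0, np.exp(kz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Back-propagating the square root of the recorded intensity by -z yields the twin-image-corrupted object estimate that phase-recovery methods, learned or iterative, then refine.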
|
43
|
Qin Y, Wan Y, Gong Q, Zhang M. Deep-learning-based cross-talk free and high-security compressive encryption with spatially incoherent illumination. OPTICS EXPRESS 2023; 31:9800-9816. [PMID: 37157543 DOI: 10.1364/oe.483136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Incoherent optical cryptosystems are promising for their immunity to coherent noise and insensitivity to misalignment, and compressive encryption is desirable given the increasing demand for exchanging encrypted data over the Internet. In this paper, we propose a novel optical compressive encryption approach with spatially incoherent illumination based on deep learning (DL) and space multiplexing. For encryption, the plaintexts are individually sent through the scattering-imaging-based encryption (SIBE) scheme, which transforms them into scattering images with a noise-like appearance. These images are then randomly sampled and integrated into a single package (i.e., the ciphertext) by space multiplexing. Decryption is essentially the inverse of encryption, but it involves an ill-posed problem: recovering a noise-like scattering image from its randomly sampled version. We demonstrate that this problem can be resolved well by DL. The proposal is entirely free of the cross-talk noise present in many current multiple-image encryption schemes. It also eliminates the linearity that afflicts the SIBE and is hence robust against ciphertext-only attacks based on phase retrieval algorithms. We present a series of experimental results to confirm the effectiveness and feasibility of the proposal.
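As a toy illustration of why space multiplexing avoids cross-talk, several noise-like images can be packed into one ciphertext by randomly partitioning the pixel grid, so each ciphertext pixel carries exactly one plaintext's sample. This is a schematic sketch with invented names, not the authors' scheme; in their method the missing pixels are what the trained network must inpaint during decryption.

```python
import numpy as np

def space_multiplex(images, seed=0):
    """Randomly partition the pixel grid among the input images and pack
    the selected samples into a single ciphertext array."""
    rng = np.random.default_rng(seed)
    shape = images[0].shape
    # Each pixel is owned by exactly one image, so there is no cross-talk.
    assignment = rng.integers(0, len(images), size=shape)
    cipher = np.zeros(shape)
    for k, img in enumerate(images):
        cipher[assignment == k] = img[assignment == k]
    return cipher, assignment

def extract(cipher, assignment, k):
    """Recover image k's random samples; the NaN pixels are the ill-posed
    part that a learned decoder would have to fill in."""
    out = np.full(cipher.shape, np.nan)
    out[assignment == k] = cipher[assignment == k]
    return out
```

Because the partition is disjoint, each recovered sample set is exact; the compression (and the need for DL) comes entirely from the unobserved pixels.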
|
44
|
Siu DMD, Lee KCM, Chung BMF, Wong JSJ, Zheng G, Tsia KK. Optofluidic imaging meets deep learning: from merging to emerging. LAB ON A CHIP 2023; 23:1011-1033. [PMID: 36601812 DOI: 10.1039/d2lc00813k] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Propelled by striking advances in optical microscopy and deep learning (DL), the role of imaging in lab-on-a-chip has been dramatically transformed from a siloed inspection tool into a quantitative "smart" engine. A suite of advanced optical microscopes now enables imaging over a range of spatial scales (from molecules to organisms) and temporal windows (from microseconds to hours). On the other hand, the staggering diversity of DL algorithms has revolutionized image processing and analysis at a scale and complexity that were once inconceivable. Recognizing these exciting but overwhelming developments, we provide a timely review of the latest trends in lab-on-a-chip imaging, here coined optofluidic imaging. More importantly, we discuss the strengths and caveats of adopting, reinventing, and integrating these imaging techniques and DL algorithms to tailor different lab-on-a-chip applications. In particular, we highlight three areas where the latest advances in lab-on-a-chip imaging and DL can form unique synergies: image formation, image analytics, and intelligent image-guided autonomous lab-on-a-chip systems. Despite the ongoing challenges, we anticipate that these will represent the next frontiers in lab-on-a-chip imaging, spearheading new capabilities in advancing analytical chemistry research, accelerating biological discovery, and empowering new intelligent clinical applications.
Affiliation(s)
- Dickson M D Siu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong.
| | - Kelvin C M Lee
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong.
| | - Bob M F Chung
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
| | - Justin S J Wong
- Conzeb Limited, Hong Kong Science Park, Shatin, New Territories, Hong Kong
| | - Guoan Zheng
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT, USA
| | - Kevin K Tsia
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong.
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
|
45
|
A sub-wavelength Si LED integrated in a CMOS platform. Nat Commun 2023; 14:882. [PMID: 36797286 PMCID: PMC9935894 DOI: 10.1038/s41467-023-36639-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Accepted: 02/08/2023] [Indexed: 02/18/2023] Open
Abstract
A nanoscale on-chip light source with high intensity is desired for various applications in integrated photonics systems. However, it is challenging to realize such an emitter using materials and fabrication processes compatible with standard integrated circuit technology. In this letter, we report an electrically driven Si light-emitting diode with a sub-wavelength emission area fabricated in an open-foundry microelectronics complementary metal-oxide-semiconductor (CMOS) platform. The light-emitting diode's emission spectrum is centered around 1100 nm, and its emission area is smaller than 0.14 μm2 (~[Formula: see text] nm). This light-emitting diode has a spatial intensity of >50 mW/cm2, comparable with state-of-the-art Si-based emitters with much larger emission areas. Owing to sub-wavelength confinement, the emission exhibits a high degree of spatial coherence, which we demonstrate by incorporating the light-emitting diode into a compact lensless in-line holographic microscope. This centimeter-scale, all-silicon microscope uses a single emitter to simultaneously illuminate ~9.5 million pixels of a complementary metal-oxide-semiconductor imager.
|
46
|
Matlock A, Zhu J, Tian L. Multiple-scattering simulator-trained neural network for intensity diffraction tomography. OPTICS EXPRESS 2023; 31:4094-4107. [PMID: 36785385 DOI: 10.1364/oe.477396] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 12/29/2022] [Indexed: 06/18/2023]
Abstract
Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering C. elegans worms. We benchmark the network's performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm. We highlight the network's robustness by reconstructing dynamic samples from a living worm video. We further emphasize the network's generalization capabilities by recovering algae samples imaged from different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.
|
47
|
Picazo-Bueno JÁ, Sanz M, Granero L, García J, Micó V. Multi-Illumination Single-Holographic-Exposure Lensless Fresnel (MISHELF) Microscopy: Principles and Biomedical Applications. SENSORS (BASEL, SWITZERLAND) 2023; 23:1472. [PMID: 36772511 PMCID: PMC9918952 DOI: 10.3390/s23031472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2022] [Revised: 01/13/2023] [Accepted: 01/19/2023] [Indexed: 06/18/2023]
Abstract
Lensless holographic microscopy (LHM) has emerged as a promising label-free technique, as it provides high-quality imaging and adaptive magnification in a lens-free, compact, and cost-effective way. The compact size and reduced price of LHMs make them a perfect instrument for point-of-care diagnosis and increase their usability in resource-limited laboratories, remote areas, and low-income countries. LHM can provide excellent intensity and phase imaging when the twin image is removed. In that sense, multi-illumination single-holographic-exposure lensless Fresnel (MISHELF) microscopy appears as a single-shot, phase-retrieved imaging technique employing multiple illumination/detection channels and a fast iterative phase-retrieval algorithm. In this contribution, we review MISHELF microscopy, describing its principles, analyzing its performance, presenting the microscope prototypes, and covering the main biomedical applications reported so far.
Affiliation(s)
- José Ángel Picazo-Bueno
- Department of Optics, Optometry and Vision Science, University of Valencia, 46100 Burjassot, Spain
- Biomedical Technology Center of the Medical Faculty, University of Muenster, Mendelstr. 17, D-48149 Muenster, Germany
| | - Martín Sanz
- Department of Optics, Optometry and Vision Science, University of Valencia, 46100 Burjassot, Spain
| | - Luis Granero
- Department of Optics, Optometry and Vision Science, University of Valencia, 46100 Burjassot, Spain
| | - Javier García
- Department of Optics, Optometry and Vision Science, University of Valencia, 46100 Burjassot, Spain
| | - Vicente Micó
- Department of Optics, Optometry and Vision Science, University of Valencia, 46100 Burjassot, Spain
|
48
|
Lee C, Song G, Kim H, Ye JC, Jang M. Deep learning based on parameterized physical forward model for adaptive holographic imaging with unpaired data. NAT MACH INTELL 2023. [DOI: 10.1038/s42256-022-00584-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
49
|
Luo G, He Y, Shu X, Zhou R, Blu T. Complex wave and phase retrieval from a single off-axis interferogram. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2023; 40:85-95. [PMID: 36607078 DOI: 10.1364/josaa.473726] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 11/15/2022] [Indexed: 06/17/2023]
Abstract
Single-frame off-axis holographic reconstruction is promising for quantitative phase imaging. However, reconstruction accuracy and contrast are degraded by noise, frequency-spectrum overlap in the interferogram, severe phase distortion, and other factors. In this work, we propose an iterative single-frame complex wave retrieval based on an explicit model of the object and reference waves. We also develop a phase restoration algorithm that does not resort to phase unwrapping. Both simulations and real experiments demonstrate higher accuracy and robustness than state-of-the-art methods, for both complex wave estimation and phase reconstruction. Importantly, the allowed bandwidth of the object wave is significantly improved under realistic experimental conditions (similar amplitudes for the object and reference waves), which makes the method attractive for large field-of-view, high-resolution imaging applications.
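The classical single-frame baseline that such explicit-model methods improve upon is Fourier-domain sideband filtering: isolate the +1 diffraction order around the carrier frequency, then remove the carrier tilt. A minimal sketch (illustrative names; the authors' iterative method goes well beyond this) is:

```python
import numpy as np

def sideband_demodulate(interferogram, carrier, radius):
    """Recover a complex object wave from one off-axis interferogram by
    isolating the +1 order at `carrier` (cycles/pixel) and removing the tilt."""
    ny, nx = interferogram.shape
    F = np.fft.fft2(interferogram)
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Circular passband around the carrier selects the +1 cross term.
    mask = (fx - carrier[1]) ** 2 + (fy - carrier[0]) ** 2 < radius ** 2
    filtered = np.fft.ifft2(F * mask)
    y = np.arange(ny)[:, None]
    x = np.arange(nx)[None, :]
    # Demodulate the carrier tilt to re-center the object spectrum.
    return filtered * np.exp(-2j * np.pi * (carrier[0] * y + carrier[1] * x))
```

The hard filter limits the usable object bandwidth to the mask radius, which is exactly the restriction that model-based single-frame retrieval relaxes.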
|
50
|
Luo Y, Zhang Y, Liu T, Yu A, Wu Y, Ozcan A. Virtual Impactor-Based Label-Free Pollen Detection using Holography and Deep Learning. ACS Sens 2022; 7:3885-3894. [PMID: 36414385 DOI: 10.1021/acssensors.2c01890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Exposure to bio-aerosols such as pollen can lead to adverse health effects. There is a need for a portable and cost-effective device for long-term monitoring and quantification of various types of pollen. To address this need, we present a mobile and cost-effective label-free sensor that takes holographic images of flowing particulate matter (PM) concentrated by a virtual impactor, which selectively slows down and guides particles larger than 6 μm to fly through an imaging window. The flowing particles are illuminated by a pulsed laser diode, casting their in-line holograms on a complementary metal-oxide-semiconductor (CMOS) image sensor in a lens-free mobile imaging device. The illumination contains three short pulses, with negligible shift of a flowing particle within each pulse, so triplicate holograms of the same particle are recorded in a single frame before it exits the imaging field of view, revealing different perspectives of each particle. The particles within the virtual impactor are localized through a differential detection scheme, and a deep neural network classifies the pollen type in a label-free manner based on the acquired holographic images. We demonstrated the success of this mobile pollen detector using different types of pollen (i.e., Bermuda, elm, oak, pine, sycamore, and wheat) and achieved a blind classification accuracy of 92.91%. This mobile and cost-effective device weighs ~700 g and can be used for label-free sensing and quantification of various bio-aerosols over extended periods, since it is based on a cartridge-free virtual impactor that does not capture or immobilize PM.
Affiliation(s)
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States
- Bioengineering Department, University of California, Los Angeles, California 90095, United States
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, United States
| | - Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States
- Bioengineering Department, University of California, Los Angeles, California 90095, United States
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, United States
| | - Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States
- Bioengineering Department, University of California, Los Angeles, California 90095, United States
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, United States
| | - Alan Yu
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States
- Computer Science Department, University of California, Los Angeles, California 90095, United States
| | - Yichen Wu
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States
- Bioengineering Department, University of California, Los Angeles, California 90095, United States
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, United States
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States
- Bioengineering Department, University of California, Los Angeles, California 90095, United States
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, United States
|