1. Li Y, Zhang Z, Tian F, Luna-Palacios YY, Rocha-Mendoza I, Yang W. V-shaped PSF for 3D imaging over an extended depth of field in wide-field microscopy. Optics Letters 2025; 50:383-386. [PMID: 39815517] [DOI: 10.1364/ol.544552]
Abstract
Single-shot 3D optical microscopy that can capture high-resolution information over a large volume has broad applications in biology. Existing 3D imaging methods using point-spread-function (PSF) engineering often have a limited depth of field (DOF) or require custom, frequently complex phase-mask designs. We propose a new (to the best of our knowledge) PSF approach that is easy to implement and offers a large DOF. The PSF is axially V-shaped, engineered by replacing the conventional tube lens with a pair of axicon lenses behind the objective lens of a wide-field microscope. The 3D information can be reconstructed from a single-shot image using a deep neural network. Simulations of a 10× magnification wide-field microscope show that the V-shaped PSF offers excellent 3D resolution (<2.5 µm lateral and ∼15 µm axial) over a ∼350 µm DOF at a 550 nm wavelength. Compared to other popular PSFs designed for 3D imaging, the V-shaped PSF is simple to deploy and provides high 3D reconstruction quality over an extended DOF.
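To illustrate how a conical (axicon-like) pupil phase yields a depth-dependent, laterally walking focus, the following is a minimal scalar Fourier-optics sketch. It is not the authors' implementation; the numerical aperture, grid size, and axicon phase slope are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: conical pupil phase + defocus, PSF sampled through depth.
# All parameters are illustrative, not the values used in the paper.
wavelength = 550e-9      # m
NA = 0.28                # numerical aperture (assumed)
n_pix = 512              # pupil grid size
axicon_slope = 60.0      # radial phase slope across the pupil, radians (assumed)

x = np.linspace(-1, 1, n_pix)
X, Y = np.meshgrid(x, x)
rho = np.sqrt(X**2 + Y**2)               # normalized pupil radius
aperture = (rho <= 1.0).astype(float)
axicon_phase = axicon_slope * rho        # conical phase from the axicon pair

def psf_at_defocus(z):
    """Lateral intensity PSF at defocus z (m), scalar Fourier-optics model."""
    defocus_phase = (np.pi * NA**2 * z / wavelength) * rho**2
    pupil = aperture * np.exp(1j * (axicon_phase + defocus_phase))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field)**2
    return psf / psf.sum()

# Sample the PSF through depth; the off-axis lobes trace a V through focus.
depths = np.linspace(-175e-6, 175e-6, 9)
stack = np.stack([psf_at_defocus(z) for z in depths])
print(stack.shape)  # (9, 512, 512)
```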
2. Kodandaramaiah SB, Aharoni D, Gibson EA. Special Section Guest Editorial: Open-source neurophotonic tools for neuroscience. Neurophotonics 2024; 11:034301. [PMID: 39350913] [PMCID: PMC11441622] [DOI: 10.1117/1.nph.11.3.034301]
Abstract
This editorial concludes the Neurophotonics special series on open-source neurophotonic tools for neuroscience.
Affiliation(s)
- Daniel Aharoni: University of California, Los Angeles (UCLA), Los Angeles, California, United States
- Emily A. Gibson: University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
3. Zhang Y, Yuan L, Zhu Q, Wu J, Nöbauer T, Zhang R, Xiao G, Wang M, Xie H, Guo Z, Dai Q, Vaziri A. A miniaturized mesoscope for the large-scale single-neuron-resolved imaging of neuronal activity in freely behaving mice. Nat Biomed Eng 2024; 8:754-774. [PMID: 38902522] [DOI: 10.1038/s41551-024-01226-2]
Abstract
Exploring the relationship between neuronal dynamics and ethologically relevant behaviour involves recording neuronal-population activity using technologies that are compatible with unrestricted animal behaviour. However, head-mounted microscopes that accommodate weight limits to allow for free animal behaviour typically compromise field of view, resolution or depth range, and are susceptible to movement-induced artefacts. Here we report a miniaturized head-mounted fluorescent mesoscope that we systematically optimized for calcium imaging at single-neuron resolution, for increased fields of view and depth of field, and for robustness against motion-generated artefacts. Weighing less than 2.5 g, the mesoscope enabled recordings of neuronal-population activity at up to 16 Hz, with 4 μm resolution over a 300 μm depth of field across a field of view of 3.6 × 3.6 mm² in the cortex of freely moving mice. We used the mesoscope to record large-scale neuronal-population activity in socially interacting mice during free exploration and during fear-conditioning experiments, and to investigate neurovascular coupling across multiple cortical regions.
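As a companion to the recording specifications above, here is a generic sketch of how per-neuron ΔF/F traces are typically extracted from a head-mounted-microscope frame stack. It is not the authors' processing pipeline; the ROI masks, frame rate, and baseline window are assumed for illustration.

```python
import numpy as np

# Generic calcium-imaging post-processing sketch (not the authors' pipeline).
def delta_f_over_f(frames, roi_masks, fps=16, baseline_window_s=30):
    """frames: (T, H, W) movie; roi_masks: (N, H, W) boolean masks, one per neuron."""
    T = frames.shape[0]
    # Raw fluorescence per ROI: mean pixel value inside each mask, per frame.
    traces = np.stack([frames[:, m].mean(axis=1) for m in roi_masks])  # (N, T)
    # Baseline F0: running low percentile over a sliding window.
    win = int(baseline_window_s * fps)
    f0 = np.empty_like(traces)
    for t in range(T):
        lo, hi = max(0, t - win // 2), min(T, t + win // 2 + 1)
        f0[:, t] = np.percentile(traces[:, lo:hi], 10, axis=1)
    return (traces - f0) / np.maximum(f0, 1e-9)

# Toy usage with synthetic data.
frames = np.random.poisson(100, size=(160, 64, 64)).astype(float)
masks = np.zeros((3, 64, 64), dtype=bool)
masks[0, 10:14, 10:14] = masks[1, 30:34, 30:34] = masks[2, 50:54, 50:54] = True
dff = delta_f_over_f(frames, masks)
print(dff.shape)  # (3, 160)
```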
Affiliation(s)
- Yuanlong Zhang: Department of Automation, Tsinghua University, Beijing, China; Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Lekang Yuan: Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Qiyu Zhu: School of Medicine, Tsinghua University, Beijing, China; Tsinghua-Peking Joint Center for Life Sciences, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Jiamin Wu: Department of Automation, Tsinghua University, Beijing, China
- Tobias Nöbauer: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Rujin Zhang: Department of Anesthesiology, the First Medical Center, Chinese PLA General Hospital, Beijing, China
- Guihua Xiao: Department of Automation, Tsinghua University, Beijing, China
- Mingrui Wang: Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Hao Xie: Department of Automation, Tsinghua University, Beijing, China
- Zengcai Guo: School of Medicine, Tsinghua University, Beijing, China; Tsinghua-Peking Joint Center for Life Sciences, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Alipasha Vaziri: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA
4. Alido J, Greene J, Xue Y, Hu G, Gilmore M, Monk KJ, DiBenedictis BT, Davison IG, Tian L, Li Y. Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network. Optics Express 2024; 32:6241-6257. [PMID: 38439332] [PMCID: PMC11018337] [DOI: 10.1364/oe.514072]
Abstract
Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals due to scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering exacerbates the condition of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a strong, heterogeneous background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed computational miniature mesoscope and demonstrate the robustness of our deep learning algorithm on scattering phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths of up to a scattering length. We analyze fundamental tradeoffs, based on network design factors and out-of-distribution data, that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques where paired experimental training data are lacking.
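To make the simulator idea concrete, below is a toy generator of the kind of synthetic training pair described: sparse targets buried in a strong, heterogeneous background scaled to a chosen SBR. It is a sketch under assumed definitions (SBR taken here as mean total intensity over mean background), not the authors' simulator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_training_pair(shape=(256, 256), n_emitters=30, sbr=1.05, rng=None):
    """Return (noisy low-SBR measurement, clean target) as a toy training pair."""
    rng = np.random.default_rng(rng)
    # Ground-truth targets: a few point emitters blurred by a small PSF.
    target = np.zeros(shape)
    ys = rng.integers(0, shape[0], n_emitters)
    xs = rng.integers(0, shape[1], n_emitters)
    target[ys, xs] = rng.uniform(0.5, 1.0, n_emitters)
    target = gaussian_filter(target, sigma=1.5)

    # Heterogeneous background: smoothed random field, normalized to mean 1.
    background = gaussian_filter(rng.random(shape), sigma=20)
    background /= background.mean()

    # Scale background so that mean(target + background) / mean(background) == sbr
    # (one possible SBR definition, assumed for this sketch).
    background *= target.mean() / (sbr - 1.0)
    measurement = rng.poisson(1000 * (target + background)) / 1000.0  # shot noise
    return measurement.astype(np.float32), target.astype(np.float32)

meas, gt = make_training_pair(sbr=1.05, rng=0)
print(meas.shape, gt.shape)
```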
Affiliation(s)
- Jeffrey Alido: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Joseph Greene: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yujia Xue: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Guorong Hu: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Mitchell Gilmore: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Kevin J. Monk: Department of Biology, Boston University, Boston, Massachusetts 02215, USA
- Brett T. DiBenedictis: Department of Psychology and Brain Sciences, Boston University, Boston, Massachusetts 02215, USA
- Ian G. Davison: Department of Biology, Boston University, Boston, Massachusetts 02215, USA
- Lei Tian: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA; Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yunzhe Li: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA; current address: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720, USA
5. Wang J, Zhao X, Wang Y, Li D. Quantitative real-time phase microscopy for extended depth-of-field imaging based on the 3D single-shot differential phase contrast (ssDPC) imaging method. Optics Express 2024; 32:2081-2096. [PMID: 38297745] [DOI: 10.1364/oe.512285]
Abstract
Optical diffraction tomography (ODT) is a promising label-free imaging method capable of quantitatively measuring the three-dimensional (3D) refractive index distribution of transparent samples. In recent years, partially coherent ODT (PC-ODT) has attracted increasing attention due to its system simplicity and absence of laser speckle noise. Quantitative phase imaging (QPI) technologies, represented by Fourier ptychographic microscopy (FPM), differential phase contrast (DPC) imaging, and intensity diffraction tomography (IDT), need to collect several to hundreds of intensity images, which usually introduces motion artifacts when imaging fast-moving targets and degrades image quality. Hence, a quantitative real-time phase microscopy (qRPM) method for extended depth-of-field (DOF) imaging based on the 3D single-shot differential phase contrast (ssDPC) imaging method is proposed in this study. qRPM incorporates a microlens array (MLA) to simultaneously collect spatial and angular information. In the subsequent optical information processing, a deconvolution method is used to obtain intensity stacks under different illumination angles from a raw light-field image. Importing the obtained intensity stacks into the 3D DPC imaging model then yields the 3D refractive index distribution. The captured four-dimensional light-field information enables the reconstruction of 3D information in a single snapshot and extends the DOF of qRPM. The imaging capability of the proposed qRPM system is experimentally verified on different samples, achieving single-exposure 3D label-free imaging with an extended DOF of 160 µm, nearly 30 times that of a conventional microscope.
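The core DPC inversion step can be illustrated with a short, generic 2D sketch: a Tikhonov-regularized transfer-function deconvolution in the Fourier domain. This is a simplified stand-in for the paper's 3D DPC model, and the transfer functions here are placeholders assumed to be precomputed for each asymmetric-illumination pair.

```python
import numpy as np

def dpc_phase_retrieval(dpc_images, transfer_functions, reg=1e-3):
    """Generic 2D DPC phase retrieval by regularized least squares.

    dpc_images: list of DPC images, e.g. (I_top - I_bottom) / (I_top + I_bottom).
    transfer_functions: list of matching phase transfer functions H(u) in Fourier space.
    """
    numerator = np.zeros_like(dpc_images[0], dtype=complex)
    denominator = np.full(dpc_images[0].shape, reg)  # Tikhonov regularization term
    for img, H in zip(dpc_images, transfer_functions):
        I = np.fft.fft2(img)
        numerator += np.conj(H) * I
        denominator += np.abs(H) ** 2
    return np.real(np.fft.ifft2(numerator / denominator))

# Toy usage with placeholder transfer functions (shape checking only).
shape = (128, 128)
fake_dpc = [np.random.randn(*shape) for _ in range(2)]
fake_H = [1j * np.random.randn(*shape) for _ in range(2)]
phi = dpc_phase_retrieval(fake_dpc, fake_H)
print(phi.shape)
```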
6. Alido J, Greene J, Xue Y, Hu G, Li Y, Gilmore M, Monk KJ, DiBenedictis BT, Davison IG, Tian L. Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network. arXiv 2023; arXiv:2303.12573v2. [PMID: 36994164] [PMCID: PMC10055497]
Abstract
Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals due to scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering exacerbates the condition of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a strong, heterogeneous background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed Computational Miniature Mesoscope and demonstrate the robustness of our deep learning algorithm on scattering phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths of up to a scattering length. We analyze fundamental tradeoffs, based on network design factors and out-of-distribution data, that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques where paired experimental training data are lacking.
Affiliation(s)
- Jeffrey Alido: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Joseph Greene: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yujia Xue: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Guorong Hu: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yunzhe Li: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Mitchell Gilmore: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Kevin J. Monk: Department of Biology, Boston University, Boston, MA 02215, USA
- Brett T. DiBenedictis: Department of Psychology and Brain Sciences, Boston University, Boston, MA 02215, USA
- Ian G. Davison: Department of Biology, Boston University, Boston, MA 02215, USA
- Lei Tian: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA; Department of Psychology and Brain Sciences, Boston University, Boston, MA 02215, USA
7. Seong B, Kim W, Kim Y, Hyun KA, Jung HI, Lee JS, Yoo J, Joo C. E2E-BPF microscope: extended depth-of-field microscopy using learning-based implementation of binary phase filter and image deconvolution. Light: Science & Applications 2023; 12:269. [PMID: 37953314] [PMCID: PMC10641084] [DOI: 10.1038/s41377-023-01300-5]
Abstract
Several image-based biomedical diagnoses require high-resolution imaging capabilities at large spatial scales. However, conventional microscopes exhibit an inherent trade-off between depth-of-field (DoF) and spatial resolution, and thus require objects to be refocused at each lateral location, which is time-consuming. Here, we present a computational imaging platform, termed the E2E-BPF microscope, which enables large-area, high-resolution imaging of large-scale objects without serial refocusing. The method involves a physics-incorporated, deep-learned design of a binary phase filter (BPF) and a jointly optimized deconvolution neural network, which together produce high-resolution, high-contrast images over extended depth ranges. We demonstrate the method through numerical simulations and experiments with fluorescently labeled beads, cells, and tissue sections, and present high-resolution imaging over a 15.5-fold larger DoF than that of a conventional microscope. Our method provides a highly effective and scalable strategy for DoF-extended optical imaging systems and is expected to find numerous applications in rapid image-based diagnosis, optical vision, and metrology.
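The end-to-end idea, a binary phase filter and a deconvolution network optimized jointly through a differentiable imaging model, can be sketched in a few lines of PyTorch. The straight-through binarization, grid size, and tiny CNN below are illustrative assumptions and do not reproduce the paper's architecture or training scheme.

```python
import math
import torch
import torch.nn as nn

# Toy end-to-end sketch: learnable binary (0/pi) pupil phase + small deconvolution CNN,
# trained jointly through a differentiable image-formation model.
N = 64  # pupil / image grid size (toy scale)
x = torch.linspace(-1, 1, N)
X, Y = torch.meshgrid(x, x, indexing="ij")
aperture = ((X**2 + Y**2) <= 1.0).float()

mask_logits = nn.Parameter(torch.randn(N, N) * 0.1)   # continuous BPF parameters
deconv_net = nn.Sequential(                            # toy deconvolution network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam([mask_logits, *deconv_net.parameters()], lr=1e-3)

def forward(obj):
    # Straight-through binarization: hard 0/1 forward pass, sigmoid gradient backward.
    soft = torch.sigmoid(mask_logits)
    binary = (soft > 0.5).float() + soft - soft.detach()
    pupil = aperture * torch.exp(1j * math.pi * binary)        # phase of 0 or pi
    psf = torch.abs(torch.fft.fftshift(torch.fft.fft2(pupil)))**2
    psf = psf / psf.sum()
    otf = torch.fft.fft2(torch.fft.ifftshift(psf))             # centered PSF -> OTF
    blurred = torch.real(torch.fft.ifft2(torch.fft.fft2(obj) * otf))
    return deconv_net(blurred[None, None])                     # restore with the CNN

for step in range(5):                                          # toy training loop
    obj = (torch.rand(N, N) > 0.98).float()                    # random sparse scene
    loss = nn.functional.mse_loss(forward(obj), obj[None, None])
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    print(step, loss.item())
```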
Affiliation(s)
- Baekcheon Seong: Department of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Woovin Kim: Department of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Younghun Kim: Department of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Kyung-A Hyun: Department of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Hyo-Il Jung: Department of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea; The DABOM Inc., Seoul, 03722, Republic of Korea
- Jong-Seok Lee: School of Integrated Technology, Yonsei University, Incheon, 21983, Republic of Korea
- Jeonghoon Yoo: Department of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Chulmin Joo: Department of Mechanical Engineering, Yonsei University, Seoul, 03722, Republic of Korea