1. Manley J, Vaziri A. Whole-brain neural substrates of behavioral variability in the larval zebrafish. bioRxiv 2024:2024.03.03.583208. PMID: 38496592; PMCID: PMC10942351; DOI: 10.1101/2024.03.03.583208.
Abstract
Animals engaged in naturalistic behavior can exhibit a large degree of behavioral variability even under sensory-invariant conditions. Such behavioral variability can include not only variations of the same behavior, but also variability across qualitatively different behaviors driven by divergent cognitive states, such as fight-or-flight decisions. However, the neural circuit mechanisms that generate such divergent behaviors across trials are not well understood. To investigate this question, here we studied the visually evoked responses of larval zebrafish to moving objects of various sizes, which we found to be highly variable and divergent across repetitions of the same stimulus. Given that the neuronal circuits underlying such behaviors span sensory, motor, and other brain areas, we built a novel Fourier light-field microscope which enables high-resolution, whole-brain imaging of larval zebrafish during behavior. This enabled us to screen for neural loci whose activity patterns correlated with behavioral variability. We found that despite the highly variable activity of single neurons, visual stimuli were robustly encoded at the population level, and the visual-encoding dimensions of neural activity did not explain behavioral variability. This robustness despite apparent single-neuron variability was due to the multi-dimensional geometry of the neuronal population dynamics: almost all neural dimensions that were variable across individual trials, i.e. the "noise" modes, were orthogonal to those encoding sensory information. Investigating this neuronal variability further, we identified two sparsely distributed, brain-wide neuronal populations whose pre-motor activity predicted whether the larva would respond to a stimulus and, if so, which direction it would turn on a single-trial level. These populations predicted single-trial behavior seconds before stimulus onset, indicating that they encoded time-varying internal states modulating behavior, perhaps organizing behavior over longer timescales or enabling flexible behavioral routines dependent on the animal's internal state. Our results provide the first whole-brain confirmation that sensory, motor, and internal variables are encoded in a highly mixed fashion throughout the brain and demonstrate that de-mixing each of these components at the neuronal population level is critical to understanding the mechanisms underlying the brain's remarkable flexibility and robustness.
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
2. Hua X, Han K, Mandracchia B, Radmand A, Liu W, Kim H, Yuan Z, Ehrlich SM, Li K, Zheng C, Son J, Silva Trenkle AD, Kwong GA, Zhu C, Dahlman JE, Jia S. Light-field flow cytometry for high-resolution, volumetric and multiparametric 3D single-cell analysis. Nat Commun 2024; 15:1975. PMID: 38438356; PMCID: PMC10912605; DOI: 10.1038/s41467-024-46250-7.
Abstract
Imaging flow cytometry (IFC) combines flow cytometry and fluorescence microscopy to enable high-throughput, multiparametric single-cell analysis with rich spatial details. However, current IFC techniques remain limited in their ability to reveal subcellular information with high 3D resolution, throughput, sensitivity, and instrumental simplicity. In this study, we introduce a light-field flow cytometer (LFC), an IFC system capable of high-content, single-shot, and multi-color acquisition of up to 5,750 cells per second with a near-diffraction-limited resolution of 400-600 nm in all three dimensions. The LFC system integrates optical, microfluidic, and computational strategies to facilitate the volumetric visualization of various 3D subcellular characteristics through convenient access to commonly used epi-fluorescence platforms. We demonstrate the effectiveness of LFC in assaying, analyzing, and enumerating intricate subcellular morphology, function, and heterogeneity using various phantoms and biological specimens. The advancement offered by the LFC system presents a promising methodological pathway for broad cell biological and translational discoveries, with the potential for widespread adoption in biomedical research.
Affiliation(s)
- Xuanwen Hua
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Keyi Han
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Biagio Mandracchia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Afsane Radmand
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- Department of Chemical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Wenhao Liu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Hyejin Kim
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Zhou Yuan
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- Georgia W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Samuel M Ehrlich
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- Georgia W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Kaitao Li
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Corey Zheng
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Jeonghwan Son
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Aaron D Silva Trenkle
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Gabriel A Kwong
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- Cheng Zhu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- James E Dahlman
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- Shu Jia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
3. Wani P, Usmani K, Krishnan G, Javidi B. 3D object tracking using integral imaging with mutual information and Bayesian optimization. Optics Express 2024; 32:7495-7512. PMID: 38439428; DOI: 10.1364/oe.517312.
Abstract
Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame. Then, one bounding box can be selected out of many for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained using traditional 2D imaging systems. A growing literature demonstrates the advantage of using 3D integral imaging instead of traditional 2D imaging for object detection and visualization in adverse environmental conditions. Integral imaging's depth-sectioning ability has also proven beneficial for object detection and visualization, as integral imaging captures an object's depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for the 3D reconstruction of the scene for object classification and utilizes the mutual information between the object's bounding box in this 3D reconstructed scene and the 2D central perspective to achieve passive depth estimation. We build on this method by using Bayesian optimization to track the object's depth in as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which corresponds to the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
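The depth-tracking loop described here, a mutual-information score maximized over reconstruction depth with Bayesian optimization, can be sketched generically. The sketch below is not the authors' implementation: the `mi_proxy` objective, the RBF kernel length scale, and the upper-confidence-bound acquisition are illustrative assumptions, and each call to the objective stands in for one 3D reconstruction of the scene.

```python
import numpy as np

def mi_proxy(z, z_true=42.0, width=10.0):
    # Hypothetical stand-in for the mutual-information score between the
    # object's bounding box at reconstruction depth z and the central
    # perspective: it peaks when z matches the true object depth.
    return float(np.exp(-((z - z_true) / width) ** 2))

def rbf(a, b, length=10.0):
    # Squared-exponential kernel between two 1-D sets of depths.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def bayes_opt_depth(objective, z_lo, z_hi, n_init=2, n_iter=12, seed=0):
    # Generic GP-based Bayesian optimization with an upper-confidence-bound
    # acquisition; returns the sampled depth with the best objective value.
    rng = np.random.default_rng(seed)
    zs = list(rng.uniform(z_lo, z_hi, n_init))
    ys = [objective(z) for z in zs]
    grid = np.linspace(z_lo, z_hi, 400)
    for _ in range(n_iter):
        Z, Y = np.array(zs), np.array(ys)
        K = rbf(Z, Z) + 1e-4 * np.eye(len(Z))   # jitter for stability
        Ks = rbf(grid, Z)
        Kinv = np.linalg.inv(K)
        mu = Ks @ Kinv @ Y                      # GP posterior mean
        var = np.clip(1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks), 0.0, None)
        z_next = grid[int(np.argmax(mu + 2.0 * np.sqrt(var)))]
        zs.append(float(z_next))
        ys.append(objective(z_next))            # one "3D reconstruction"
    return zs[int(np.argmax(ys))]
```

In a tracking setting, the previous frame's depth estimate would seed the search range, which is what lets the per-frame reconstruction count shrink toward the two-evaluation minimum reported above.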
4. Shi W, Quan H, Kong L. High-resolution 3D imaging in light-field microscopy through Stokes matrices and data fusion. Optics Express 2024; 32:3710-3722. PMID: 38297586; DOI: 10.1364/oe.510728.
Abstract
The trade-off between lateral and vertical resolution has long posed challenges to the efficient and widespread application of Fourier light-field microscopy, a highly scalable 3D imaging tool. Although existing methods for resolution enhancement can improve measurement results to a certain extent, they come with limitations in accuracy and applicable specimen types. To address these problems, this paper proposes a resolution enhancement scheme for Fourier light-field microscopy systems that fuses polarization Stokes vectors with light-field information. By introducing surface-normal information obtained from polarization measurements and integrating it with the light-field 3D point cloud, the accuracy of the 3D reconstruction is greatly improved in the axial direction. Experimental results with a Fourier light-field 3D imaging microscope demonstrate a substantial enhancement of vertical resolution, with a depth-resolution-to-depth-of-field ratio of 0.19%, approximately a 44-fold improvement over the theoretical ratio before data fusion, enabling the system to access more detailed information with finer measurement accuracy for test samples. This work not only provides a feasible solution for breaking the limitations imposed by traditional light-field microscope hardware configurations but also offers a superior 3D measurement approach that is cost-effective and practical.
5. Mandracchia B, Liu W, Hua X, Forghani P, Lee S, Hou J, Nie S, Xu C, Jia S. Optimal sparsity allows reliable system-aware restoration of fluorescence microscopy images. Science Advances 2023; 9:eadg9245. PMID: 37647399; PMCID: PMC10468132; DOI: 10.1126/sciadv.adg9245.
Abstract
Fluorescence microscopy is one of the most indispensable and informative driving forces for biological research, but the extent of observable biological phenomena is essentially determined by the content and quality of the acquired images. To address the different noise sources that can degrade these images, we introduce an algorithm for multiscale image restoration through optimally sparse representation (MIRO). MIRO is a deterministic framework that models the acquisition process and uses pixelwise noise correction to improve image quality. Our study demonstrates that this approach yields a remarkable restoration of the fluorescence signal for a wide range of microscopy systems, regardless of the detector used (e.g., electron-multiplying charge-coupled device, scientific complementary metal-oxide semiconductor, or photomultiplier tube). MIRO improves current imaging capabilities, enabling fast, low-light optical microscopy, accurate image analysis, and robust machine intelligence when integrated with deep neural networks. This expands the range of biological knowledge that can be obtained from fluorescence microscopy.
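The pixelwise noise correction mentioned above rests on standard per-pixel camera calibration. The sketch below shows only that generic step, converting raw counts to photoelectron estimates with offset and gain maps from dark and flat-field calibration, and is a simplified stand-in, not MIRO's full multiscale sparse-restoration pipeline; the map values in the usage note are illustrative.

```python
import numpy as np

def pixelwise_correct(raw, offset_map, gain_map):
    # Convert raw detector counts to photoelectron estimates using
    # per-pixel offset and gain calibration maps (standard practice for
    # sCMOS sensors, where offset and gain vary pixel to pixel).
    electrons = (raw - offset_map) / gain_map
    # Negative estimates are pure read noise; clamp them to zero.
    return np.clip(electrons, 0.0, None)
```

For example, a frame synthesized as `offset_map + gain_map * signal` is mapped back to `signal` exactly, which is the invariant such a correction is meant to preserve before any denoising or restoration step runs.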
Affiliation(s)
- Biagio Mandracchia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Scientific-Technical Central Units, Instituto de Salud Carlos III (ISCIII), Majadahonda, Spain
- ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
- Wenhao Liu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Xuanwen Hua
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parvin Forghani
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Soojung Lee
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Jessica Hou
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Shuyi Nie
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Chunhui Xu
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Shu Jia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
6. Wani P, Javidi B. 3D integral imaging depth estimation of partially occluded objects using mutual information and Bayesian optimization. Optics Express 2023; 31:22863-22884. PMID: 37475387; DOI: 10.1364/oe.492160.
Abstract
Integral imaging (InIm) is useful for passive ranging and 3D visualization of partially occluded objects. We consider 3D object localization within a scene under occlusion. 2D localization can be achieved using machine learning and non-machine-learning-based techniques, which aim to provide a 2D bounding box around each object of interest. A recent study uses InIm for the 3D reconstruction of the scene with occlusions and utilizes mutual information (MI) between the bounding box in this 3D reconstructed scene and the corresponding bounding box in the central elemental image to achieve passive depth estimation of partially occluded objects. Here, we improve upon this InIm method by using Bayesian optimization to minimize the number of required 3D scene reconstructions. We evaluate the performance of the proposed approach by analyzing different kernel functions, acquisition functions, and parameter estimation algorithms for Bayesian optimization-based inference for simultaneous depth estimation of objects and occlusions. In our optical experiments, mutual-information-based depth estimation with Bayesian optimization achieves depth estimation with a handful of 3D reconstructions. To the best of our knowledge, this is the first report to use Bayesian optimization for mutual information-based InIm depth estimation.
7. Ardebili M, Saavedra G. Analytic plenoptic camera diffraction model and radial distortion analysis due to vignetting. Journal of the Optical Society of America A 2023; 40:1451-1467. PMID: 37706747; DOI: 10.1364/josaa.485284.
Abstract
Using a mathematical approach, this paper presents a generalization of semi-analytical expressions for the point spread function (PSF) of plenoptic cameras. The model is applicable in the standard regime of scalar diffraction theory, while the extension to arbitrary main-lens transmission functions generalizes the prior formalism. The accuracy and applicability of the model are verified against the exact Rayleigh-Sommerfeld diffraction integral, and a rigorous proof of convergence for the PSF series expression is given. Since vignetting can never be fully eliminated, it is critical to inspect the image degradation it introduces through distortions. For what we believe is the first time, diffractive distortions in the diffraction-limited plenoptic camera are closely examined and demonstrated to exceed those that would otherwise be estimated by a geometrical optics formalism, further justifying the necessity of an approach based on wave optics. Microlenses subject to the edge-diffraction effects of the main-lens vignetting are shown to translate into radial distortions of increasing severity and instability with defocus. The distortions due to vignetting are found to be typically bound by the radius of the geometrical defocus in the image plane, while objects confined to the depth of field give rise to merely subpixel distortions.
8. Yun H, Saavedra G, Garcia-Sucerquia J, Tolosa A, Martinez-Corral M, Sanchez-Ortiga E. Practical guide for setting up a Fourier light-field microscope. Applied Optics 2023; 62:4228-4235. PMID: 37706910; DOI: 10.1364/ao.491369.
Abstract
A practical guide for the easy implementation of a Fourier light-field microscope is reported. The Fourier light-field concept applied to microscopy allows real-time capture of a series of 2D orthographic images of thick, dynamic microscopic samples. Such perspective images contain spatial and angular information about the light field emitted by the sample. A feature of this technology is the tight requirement of a double optical-conjugation relationship, together with NA matching. For these reasons, although the Fourier light-field microscope is not a complex optical system, a clear protocol for setting up the optical elements accurately is needed. This guide aims to simplify the implementation process using an optical bench and off-the-shelf components, helping the widespread adoption of this recent technology.
9. Sung Y. Optical projection tomography of fluorescent microscopic specimens using lateral translation of tube lens. Optics Letters 2023; 48:2623-2626. PMID: 37186724; PMCID: PMC10798857; DOI: 10.1364/ol.491499.
Abstract
Optical projection tomography (OPT) is a three-dimensional (3D) fluorescence imaging technique in which projection images are acquired for varying orientations of a sample using a large depth of field. OPT is typically applied to millimeter-sized specimens, because the rotation of a microscopic specimen is challenging and not compatible with live-cell imaging. In this Letter, we demonstrate fluorescence optical tomography of a microscopic specimen by laterally translating the tube lens of a wide-field optical microscope, which allows for high-resolution OPT without rotating the sample. The cost is a reduction of the field of view to about half along the direction of the tube-lens translation. Using bovine pulmonary artery endothelial cells and 0.1 µm beads, we compare the 3D imaging performance of the proposed method with that of the conventional objective-focus scan method.
Affiliation(s)
- Yongjin Sung
- College of Engineering & Applied Science, University of Wisconsin, Milwaukee, WI 53211, USA
10. Yuan RY, Ma XL, Zheng Y, Jiang Z, Wang X, Liu C, Wang QH. 3D microscope image acquisition method based on zoom objective. Optics Express 2023; 31:16067-16080. PMID: 37157693; DOI: 10.1364/oe.487720.
Abstract
Microscopy is continually pushed to obtain richer and more accurate information, yet imaging depth and display dimensionality remain challenging. In this paper, we propose a three-dimensional (3D) microscope image acquisition method based on a zoom objective. It enables 3D imaging of thick microscopic specimens with continuously adjustable optical magnification. The zoom objective, based on liquid lenses, can quickly adjust its focal length via applied voltages, extending the imaging depth and changing the magnification. Around the zoom objective, an arc-shaped shooting mount is designed to accurately rotate the objective, obtain the parallax information of the specimen, and generate parallax synthesis images for 3D display. A 3D display screen is used to verify the acquisition results. The experimental results show that the obtained parallax synthesis images can accurately and efficiently restore the 3D characteristics of the specimen. The proposed method has promising applications in industrial inspection, microbial observation, and medical surgery.
11. Nöbauer T, Zhang Y, Kim H, Vaziri A. Mesoscale volumetric light-field (MesoLF) imaging of neuroactivity across cortical areas at 18 Hz. Nat Methods 2023; 20:600-609. PMID: 36823333; PMCID: PMC11057224; DOI: 10.1038/s41592-023-01789-z.
Abstract
Various implementations of mesoscopes provide optical access for calcium imaging across multi-millimeter fields of view in the mammalian brain; however, capturing the activity of the neuronal population within such fields of view near-simultaneously and in a volumetric fashion has remained challenging, because approaches for imaging scattering brain tissue are typically based on sequential acquisition. Here we present a modular, mesoscale light-field (MesoLF) imaging hardware and software solution that allows recording from thousands of neurons within volumes of ⌀ 4 × 0.2 mm, located at up to 350 µm depth in the mouse cortex, at 18 volumes per second and an effective voxel rate of ~40 megavoxels per second. Using our optical design and computational approach, we demonstrate recording of ~10,000 neurons across multiple cortical areas in mice using workstation-grade computing resources.
Affiliation(s)
- Tobias Nöbauer
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Yuanlong Zhang
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Department of Automation, Tsinghua University, Beijing, China
- Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA
12. Nöbauer T, Zhang Y, Kim H, Vaziri A. Mesoscale volumetric light field (MesoLF) imaging of neuroactivity across cortical areas at 18 Hz. bioRxiv 2023:2023.03.20.533476. PMID: 36993596; PMCID: PMC10055306; DOI: 10.1101/2023.03.20.533476.
Abstract
Various implementations of mesoscopes provide optical access for calcium imaging across multi-millimeter fields-of-view (FOV) in the mammalian brain. However, capturing the activity of the neuronal population within such FOVs near-simultaneously and in a volumetric fashion has remained challenging since approaches for imaging scattering brain tissues typically are based on sequential acquisition. Here, we present a modular, mesoscale light field (MesoLF) imaging hardware and software solution that allows recording from thousands of neurons within volumes of 4000 × 200 μm, located at up to 400 μm depth in the mouse cortex, at 18 volumes per second. Our optical design and computational approach enable up to hour-long recording of ~10,000 neurons across multiple cortical areas in mice using workstation-grade computing resources.
Affiliation(s)
- Tobias Nöbauer
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Yuanlong Zhang
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Department of Automation, Tsinghua University, Beijing, China
- Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA
13. Kwon KH, Erdenebat MU, Kim N, Khuderchuluun A, Imtiaz SM, Kim MY, Kwon KC. High-Quality 3D Visualization System for Light-Field Microscopy with Fine-Scale Shape Measurement through Accurate 3D Surface Data. Sensors (Basel) 2023; 23:2173. PMID: 36850772; PMCID: PMC9967073; DOI: 10.3390/s23042173.
Abstract
We propose a light-field microscopy display system that provides improved image quality and realistic three-dimensional (3D) measurement information. Our approach sequentially acquires both high-resolution two-dimensional (2D) images and light-field images of the specimen. We put forward a matting-Laplacian-based depth estimation algorithm that obtains nearly realistic 3D surface data from the light-field images of specimens, yielding depth values relatively close to the actual surface along with measurement information. High-reliability regions of the focus-measure map and the spatial-affinity information of the matting Laplacian are used to estimate these depths, providing a reference value for the light-field microscopy depth range that was not previously available. A 3D model is regenerated by combining the depth data and the high-resolution 2D image. The elemental image array is rendered from the 3D model through a simplified direction-reversal calculation method driven by user interaction and is displayed on the 3D display device. We confirm that the proposed system increases the accuracy of depth estimation and measurement and improves the quality of visualization and 3D display images.
Affiliation(s)
- Ki Hoon Kwon
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Munkh-Uchral Erdenebat
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Nam Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Anar Khuderchuluun
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Shariar Md Imtiaz
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Min Young Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Ki-Chul Kwon
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
14. Galdón L, Garcia-Sucerquia J, Saavedra G, Martínez-Corral M, Sánchez-Ortiga E. Resolution limit in opto-digital systems revisited. Optics Express 2023; 31:2000-2012. PMID: 36785223; DOI: 10.1364/oe.479458.
Abstract
The resolution limit achievable with an optical system is a fundamental piece of information when characterizing its performance, especially in microscopy imaging. Usually this information is given in the form of a distance, often expressed in microns, or in the form of a cutoff spatial frequency, often expressed in line pairs per mm. In modern imaging systems, where the final image is collected by pixelated digital cameras, the resolution limit is determined by the performance of both the optical system and the digital sensor. Usually, one of these factors is considered dominant over the other when estimating the spatial resolution, so the global performance of the imaging system is taken to be ruled either by the classical Abbe resolution limit, based on physical diffraction, or by the Nyquist resolution limit, based on the digital sensor's features. This estimation fails significantly to predict the global performance of opto-digital imaging systems, like 3D microscopes, where neither factor is negligible. In that case, which is in fact the most common, neither the Abbe formula nor the Nyquist formula by itself provides a reliable prediction of the resolution limit. This is a serious drawback, since system designers often use these formulae as design input parameters. To overcome this limitation, a simple mathematical expression, obtained by finely articulating the Abbe and Nyquist formulas, is proposed here to easily predict the spatial resolution limit of opto-digital imaging systems. The derived expression is tested experimentally and shown to be valid over a broad range of opto-digital combinations.
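The two classical limits contrasted above are straightforward to evaluate. Since the abstract does not reproduce the paper's combined expression, the sketch below only computes each factor separately and takes the worse of the two as the naive baseline the authors argue against; the numerical values in the usage note are illustrative assumptions.

```python
def abbe_limit_um(wavelength_um: float, na: float) -> float:
    # Classical diffraction-limited lateral resolution (Abbe): lambda / (2 NA).
    return wavelength_um / (2.0 * na)

def nyquist_limit_um(pixel_pitch_um: float, magnification: float) -> float:
    # Sampling-limited resolution: two pixels per resolvable period,
    # referred back to object space through the magnification.
    return 2.0 * pixel_pitch_um / magnification

def naive_resolution_limit_um(wavelength_um, na, pixel_pitch_um, magnification):
    # Naive combination: whichever factor is worse dominates. The paper's
    # point is that a finer articulation of the two formulas is needed
    # precisely when neither term clearly dominates.
    return max(abbe_limit_um(wavelength_um, na),
               nyquist_limit_um(pixel_pitch_um, magnification))
```

For example, a 1.0-NA objective at a 0.5 µm wavelength gives an Abbe limit of 0.25 µm, but a camera with 6.5 µm pixels behind 20× magnification gives a Nyquist limit of 0.65 µm, so the sensor, not diffraction, sets the naive limit in that configuration.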
15. Moreschini S, Gama F, Bregovic R, Gotchev A. CIVIT dataset: Integral microscopy with Fourier plane recording. Data in Brief 2022; 46:108819. [PMID: 36591387] [PMCID: PMC9801074] [DOI: 10.1016/j.dib.2022.108819]
Abstract
This article describes a dataset of synthetic images representing biological scenery as captured by a Fourier Lightfield Microscope (FLMic). It includes 22,416 images related to eight scenes composed of 3D models of objects typical for biological samples, such as red blood cells and bacteria, and categorized into Cells and Filaments groups. For each scene, two types of image data structures are provided: 51 × 51 Elemental Images (EIs) representing Densely Sampled Light Fields (DSLF) and 201 images composing Z-Scans of the scenes. Auxiliary data also includes information about camera intrinsic and extrinsic calibration parameters, object descriptions, and MATLAB scripts for camera pose compensation. The images have been generated using Blender. The dataset can be used to develop and assess methods for volumetric reconstruction from Light Field (LF) images captured by a FLMic.
Key Words
- Blender
- Cells
- DCR, Dynamic Cutting Region
- DSLF, Densely Sampled Light Field
- EI, Elemental Image
- FLMic, Fourier Lightfield Microscope
- FiMic, Fourier Integral Microscope
- Filaments
- Fourier lightfield microscopy
- GT, Ground Truth
- LF, Light Field
- Light field
- NA, Numerical Aperture
- RBC, Red Blood Cell
- RoI, Region of Interest
- Z-scan
- Z-stack
16. Wani P, Krishnan G, O'Connor T, Javidi B. Information theoretic performance evaluation of 3D integral imaging. Optics Express 2022; 30:43157-43171. [PMID: 36523020] [DOI: 10.1364/oe.475086]
Abstract
Integral imaging (InIm) has proved useful for three-dimensional (3D) object sensing, visualization, and classification of partially occluded objects. This paper presents an information-theoretic approach for simulating and evaluating the integral imaging capture and reconstruction process. We utilize mutual information (MI) as a metric for evaluating the fidelity of the reconstructed 3D scene, and we also consider passive depth estimation using mutual information. We apply this formulation to optimal pitch estimation in integral-imaging capture and reconstruction so as to maximize the longitudinal resolution. The effect of partial occlusion on integral imaging 3D reconstruction is evaluated using mutual information. Computer simulation tests and experiments are presented.
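A histogram-based mutual-information estimate between a reference scene and its reconstruction is the kind of fidelity metric this abstract describes. The sketch below is a generic estimator; the bin count and function name are assumptions, not the paper's implementation.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A perfect reconstruction shares maximal information with the
# reference; an unrelated image shares close to none.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
unrelated = rng.random((64, 64))
mi_self = mutual_information(ref, ref)
mi_cross = mutual_information(ref, unrelated)
```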
17. Han K, Hua X, Vasani V, Kim GAR, Liu W, Takayama S, Jia S. 3D super-resolution live-cell imaging with radial symmetry and Fourier light-field microscopy. Biomedical Optics Express 2022; 13:5574-5584. [PMID: 36733732] [PMCID: PMC9872894] [DOI: 10.1364/boe.471967]
Abstract
Live-cell imaging reveals the phenotypes and mechanisms of cellular function, and their dysfunction, that underscore cell physiology, development, and pathology. Here, we report a 3D super-resolution live-cell microscopy method integrating radiality analysis and Fourier light-field microscopy (rad-FLFM). We demonstrated the method using various live-cell specimens, including actin in HeLa cells, microtubules in mammary organoid cells, and peroxisomes in COS-7 cells. Compared with conventional wide-field microscopy, rad-FLFM realizes scanning-free, volumetric 3D live-cell imaging with sub-diffraction-limited resolution of ∼150 nm (x-y) and ∼300 nm (z), millisecond-scale volume acquisition time, a six-fold extended depth of focus of ∼6 µm, and low photodamage. The method provides a promising avenue to explore spatiotemporally challenging subcellular processes in a wide range of cell biology research.
Affiliation(s)
- Keyi Han: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Xuanwen Hua: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Vishwa Vasani: George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Ge-Ah R. Kim: School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Wenhao Liu: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Shuichi Takayama: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University; Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Shu Jia: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University; Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA 30332, USA
18. Xue Y, Yang Q, Hu G, Guo K, Tian L. Deep-learning-augmented computational miniature mesoscope. Optica 2022; 9:1009-1021. [PMID: 36506462] [PMCID: PMC9731182] [DOI: 10.1364/optica.464700]
Abstract
Fluorescence microscopy is essential to study biological structures and dynamics. However, existing systems suffer from a trade-off between field of view (FOV), resolution, and system complexity, and thus cannot fulfill the emerging need for miniaturized platforms providing micron-scale resolution across centimeter-scale FOVs. To overcome this challenge, we developed a computational miniature mesoscope (CM2) that exploits a computational imaging strategy to enable single-shot, 3D high-resolution imaging across a wide FOV in a miniaturized platform. Here, we present CM2 V2, which significantly advances both the hardware and computation. We complement the 3 × 3 microlens array with a hybrid emission filter that improves the imaging contrast by 5×, and design a 3D-printed free-form collimator for the LED illuminator that improves the excitation efficiency by 3×. To enable high-resolution reconstruction across a large volume, we develop an accurate and efficient 3D linear shift-variant (LSV) model to characterize spatially varying aberrations. We then train a multimodule deep learning model called CM2Net, using only the 3D-LSV simulator. We quantify the detection performance and localization accuracy of CM2Net to reconstruct fluorescent emitters under different conditions in simulation. We then show that CM2Net generalizes well to experiments and achieves accurate 3D reconstruction across a ~7-mm FOV and 800-μm depth, and provides ~6-μm lateral and ~25-μm axial resolution. This provides an ~8× better axial resolution and ~1400× faster speed compared to the previous model-based algorithm. We anticipate this simple, low-cost computational miniature imaging system will be useful for many large-scale 3D fluorescence imaging applications.
Affiliation(s)
- Yujia Xue: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Qianwan Yang: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Guorong Hu: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Kehan Guo: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Lei Tian: Department of Electrical and Computer Engineering; Department of Biomedical Engineering; Neurophotonics Center, Boston University, Boston, Massachusetts 02215, USA
19. Juntunen C, Abramczyk AR, Woller IM, Sung Y. Hyperspectral three-dimensional absorption imaging using snapshot optical tomography. Physical Review Applied 2022; 18:034055. [PMID: 37274485] [PMCID: PMC10237288] [DOI: 10.1103/physrevapplied.18.034055]
Abstract
Hyperspectral imaging (HSI) records a series of two-dimensional (2D) images at different wavelengths to provide the chemical fingerprint at each pixel. Combining HSI with a tomographic data-acquisition method, we can obtain the chemical fingerprint of a sample at each point in three-dimensional (3D) space. Such 3D HSI typically suffers from low imaging throughput due to the need to scan the wavelength and rotate the beam or sample. In this paper we present an optical system which captures the entire four-dimensional (4D) dataset of a sample, i.e., 3D structure and 1D spectrum, with the same throughput as conventional HSI systems. Our system works by combining snapshot projection optical tomography (SPOT), which collects multiple projection images with a single snapshot, and Fourier-transform spectroscopy (FTS), which achieves superior spectral resolution by collecting and processing a series of interferogram images. Using this hyperspectral SPOT system, we imaged the volumetric absorbance of dyed polystyrene microbeads, oxygenated red blood cells (RBCs), and deoxygenated RBCs. The 4D optical system demonstrated in this paper provides a tool for high-throughput chemical imaging of complex microscopic specimens.
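The FTS half of this pipeline recovers a spectrum as the Fourier transform of the interferogram recorded while the optical path delay is scanned. The one-pixel toy below illustrates only that principle; the function name, sampling step, and source wavelength are assumptions, not the authors' processing code.

```python
import numpy as np

def fts_spectrum(interferogram, delay_step_um):
    """Recover (wavenumber axis in cycles/um, magnitude spectrum)
    from an interferogram sampled at uniform path-delay steps."""
    signal = interferogram - np.mean(interferogram)   # drop the DC term
    spectrum = np.abs(np.fft.rfft(signal))
    wavenumbers = np.fft.rfftfreq(len(signal), d=delay_step_um)
    return wavenumbers, spectrum

# Toy source: monochromatic light at 0.5 um gives a cosine
# interferogram with period 0.5 um in optical path delay.
delays = np.arange(256) * 0.05                 # 0.05-um delay steps
interferogram = 1.0 + np.cos(2 * np.pi * delays / 0.5)
k, s = fts_spectrum(interferogram, 0.05)
peak = k[np.argmax(s)]                         # near 1/0.5 = 2 cycles/um
```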
Affiliation(s)
- Cory Juntunen: College of Engineering and Applied Science, University of Wisconsin, Milwaukee, Wisconsin 53211, USA
- Andrew R. Abramczyk: College of Engineering and Applied Science, University of Wisconsin, Milwaukee, Wisconsin 53211, USA
- Isabel M. Woller: College of Health Sciences, University of Wisconsin, Milwaukee, Wisconsin 53211, USA
- Yongjin Sung: College of Engineering and Applied Science, University of Wisconsin, Milwaukee, Wisconsin 53211, USA
20. Fourier light-field imaging of human organoids with a hybrid point-spread function. Biosensors and Bioelectronics 2022; 208:114201. [DOI: 10.1016/j.bios.2022.114201]
21. Alonso JR, Silva A, Fernández A, Arocena M. Computational multifocus fluorescence microscopy for three-dimensional visualization of multicellular tumor spheroids. Journal of Biomedical Optics 2022; 27:066501. [PMID: 35655357] [PMCID: PMC9162503] [DOI: 10.1117/1.jbo.27.6.066501]
Abstract
SIGNIFICANCE: Three-dimensional (3D) visualization of multicellular tumor spheroids (MCTS) in fluorescence microscopy can rapidly provide qualitative morphological information about the architecture of these cellular aggregates, which can recapitulate key aspects of their in vivo counterpart.
AIM: The present work is aimed at overcoming the shallow depth-of-field (DoF) limitation in fluorescence microscopy while achieving 3D visualization of thick biological samples under study.
APPROACH: A custom-built fluorescence microscope with an electrically focus-tunable lens was developed to optically sweep in depth the structure of MCTS. Acquired multifocus stacks were combined by means of postprocessing algorithms performed in the Fourier domain.
RESULTS: Images with relevant characteristics such as extended DoF, stereoscopic pairs, and reconstructed viewpoints of MCTS were obtained without segmentation of the focused regions or estimation of the depth map. The reconstructed images allowed us to observe the 3D morphology of the cell aggregates.
CONCLUSIONS: Computational multifocus fluorescence microscopy can provide 3D visualization of MCTS. This tool is a promising development for assessing the morphological structure of different cellular aggregates while preserving a robust yet simple optical setup.
Affiliation(s)
- Julia R. Alonso: Universidad de la República, Instituto de Física, Facultad de Ingeniería, Montevideo, Uruguay
- Alejandro Silva: Universidad de la República, Instituto de Física, Facultad de Ingeniería, Montevideo, Uruguay
- Ariel Fernández: Universidad de la República, Instituto de Física, Facultad de Ingeniería, Montevideo, Uruguay
- Miguel Arocena: Instituto de Investigaciones Biológicas Clemente Estable, Departamento de Genómica, Montevideo, Uruguay; Universidad de la República, Cátedra de Bioquímica y Biofísica, Facultad de Odontología, Montevideo, Uruguay
22. Rostan J, Incardona N, Sanchez-Ortiga E, Martinez-Corral M, Latorre-Carmona P. Machine Learning-Based View Synthesis in Fourier Lightfield Microscopy. Sensors 2022; 22:3487. [PMID: 35591177] [PMCID: PMC9099650] [DOI: 10.3390/s22093487]
Abstract
Current interest in Fourier lightfield microscopy is increasing, due to its ability to acquire 3D images of thick dynamic samples. This technique is based on simultaneously capturing, in a single shot, and with a monocular setup, a number of orthographic perspective views of 3D microscopic samples. An essential feature of Fourier lightfield microscopy is that the number of acquired views is low, due to the trade-off relationship existing between the number of views and their corresponding lateral resolution. Therefore, it is important to have a tool for the generation of a high number of synthesized view images, without compromising their lateral resolution. In this context we investigate here the use of a neural radiance field view synthesis method, originally developed for its use with macroscopic scenes acquired with a moving (or an array of static) digital camera(s), for its application to the images acquired with a Fourier lightfield microscope. The results obtained and presented in this paper are analyzed in terms of lateral resolution and of continuous and realistic parallax. We show that, in terms of these requirements, the proposed technique works efficiently in the case of the epi-illumination microscopy mode.
Affiliation(s)
- Julen Rostan: Departamento de Ingenieria Informatica, Universidad de Burgos, E09006 Burgos, Spain
- Nicolo Incardona: 3D Imaging and Display Laboratory, Department of Optics, University of Valencia, E46100 Burjassot, Spain
- Emilio Sanchez-Ortiga: 3D Imaging and Display Laboratory, Department of Optics, University of Valencia, E46100 Burjassot, Spain; School of Science, Universidad Europea de Valencia, Passeig de l'Albereda, 7, E46010 Valencia, Spain
- Manuel Martinez-Corral: 3D Imaging and Display Laboratory, Department of Optics, University of Valencia, E46100 Burjassot, Spain
- Pedro Latorre-Carmona: Departamento de Ingenieria Informatica, Universidad de Burgos, E09006 Burgos, Spain
23. Galdón L, Saavedra G, Garcia-Sucerquia J, Martínez-Corral M, Sánchez-Ortiga E. Fourier lightfield microscopy: a practical design guide. Applied Optics 2022; 61:2558-2564. [PMID: 35471323] [DOI: 10.1364/ao.453723]
Abstract
In this work, a practical guide for the design of a Fourier lightfield microscope is reported. The fundamentals of the Fourier lightfield are presented and condensed on a set of contour plots from which the user can select the design values of the spatial resolution, the field of view, and the depth of field, as function of the specifications of the hardware of the host microscope. This work guides the reader to select the parameters of the infinity-corrected microscope objective, the optical relay lenses, the aperture stop, the microlens array, and the digital camera. A user-friendly graphic calculator is included to ease the design, even to those who are not familiar with the lightfield technology. The guide is aimed to simplify the design process of a Fourier lightfield microscope, which sometimes could be a daunting task, and in this way, to invite the widespread use of this technology. An example of a design and experimental results on imaging different types of samples is also presented.
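The trade-offs such a guide tabulates follow a simple textbook scaling, sketched below. This is not the paper's set of design equations but an illustrative assumption: splitting the objective aperture into n views along one direction reduces each view's effective NA roughly n-fold, which degrades lateral resolution linearly and extends the depth of field quadratically.

```python
# Illustrative FLMic scaling, assuming the aperture stop is divided
# into n_views sub-apertures along one direction (effective per-view
# NA ~ NA / n_views). Rough scalings only, not design equations.

def flmic_lateral_resolution(lam, NA, n_views):
    """Approximate object-space lateral resolution (um)."""
    return n_views * lam / (2 * NA)

def flmic_depth_of_field(lam, NA, n_views, n_medium=1.0):
    """Approximate depth of field (um): the classical n*lam/NA^2
    scaling evaluated with the reduced per-view NA."""
    na_eff = NA / n_views
    return n_medium * lam / na_eff ** 2

# Example: tripling the number of views triples the resolution cell
# and extends the depth of field ninefold.
res_1 = flmic_lateral_resolution(0.52, 0.5, 1)
res_3 = flmic_lateral_resolution(0.52, 0.5, 3)
dof_ratio = flmic_depth_of_field(0.52, 0.5, 3) / flmic_depth_of_field(0.52, 0.5, 1)
```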
24. Serabyn E. Improving image resolution on point-like sources in a type 1 light-field camera. Journal of the Optical Society of America A 2022; 39:364-376. [PMID: 35297419] [DOI: 10.1364/josaa.445024]
Abstract
A ray-trace simulation of a type 1 light-field imager is used to show that resolutions significantly better than the lenslet scale can be deterministically reached in reconstructed images of isolated point-like sources. This is enabled by computationally projecting the system pupil onto the lenslet-array plane to better estimate the lenslet-plane-crossing locations through which the rays from a point source have passed on their way to the detector array. Improving light-field type 1 image resolution from the lenslet scale to the pixel scale can significantly enhance signal-to-noise ratios on faint point-like sources such as fluorescent microbes, making the technique of interest in, e.g., in situ microbial life searches in extreme environments.
25. Galdon L, Yun H, Saavedra G, Garcia-Sucerquia J, Barreiro JC, Martinez-Corral M, Sanchez-Ortiga E. Handheld and Cost-Effective Fourier Lightfield Microscope. Sensors 2022; 22:1459. [PMID: 35214359] [PMCID: PMC8879591] [DOI: 10.3390/s22041459]
Abstract
In this work, the design, building, and testing of the most portable, easy-to-build, robust, handheld, and cost-effective Fourier Lightfield Microscope (FLMic) to date is reported. The FLMic is built by means of a surveillance camera lens and additional off-the-shelf optical elements, resulting in a cost-effective FLMic exhibiting all the regular sought features in lightfield microscopy, such as refocusing and gathering 3D information of samples by means of a single-shot approach. The proposed FLMic features reduced dimensions and light weight, which, combined with its low cost, turn the presented FLMic into a strong candidate for in-field application where 3D imaging capabilities are pursued. The use of cost-effective optical elements has a relatively low impact on the optical performance, regarding the figures dictated by the theory, while its price can be at least 100 times lower than that of a regular FLMic. The system operability is tested in both bright-field and fluorescent modes by imaging a resolution target, a honeybee wing, and a knot of dyed cotton fibers.
Affiliation(s)
- Laura Galdon: 3D Imaging and Display Laboratory, Department of Optics, Universidad de Valencia, 46100 Burjassot, Spain
- Hui Yun: 3D Imaging and Display Laboratory, Department of Optics, Universidad de Valencia, 46100 Burjassot, Spain
- Genaro Saavedra: 3D Imaging and Display Laboratory, Department of Optics, Universidad de Valencia, 46100 Burjassot, Spain
- Jorge Garcia-Sucerquia: 3D Imaging and Display Laboratory, Department of Optics, Universidad de Valencia, 46100 Burjassot, Spain; School of Physics, Universidad Nacional de Colombia, Medellin 050034, Colombia
- Juan C. Barreiro: 3D Imaging and Display Laboratory, Department of Optics, Universidad de Valencia, 46100 Burjassot, Spain
- Manuel Martinez-Corral: 3D Imaging and Display Laboratory, Department of Optics, Universidad de Valencia, 46100 Burjassot, Spain
- Emilio Sanchez-Ortiga: 3D Imaging and Display Laboratory, Department of Optics, Universidad de Valencia, 46100 Burjassot, Spain
26. Wani P, Usmani K, Krishnan G, O'Connor T, Javidi B. Lowlight object recognition by deep learning with passive three-dimensional integral imaging in visible and long wave infrared wavelengths. Optics Express 2022; 30:1205-1218. [PMID: 35209285] [DOI: 10.1364/oe.443657]
Abstract
Traditionally, long wave infrared imaging has been used in photon starved conditions for object detection and classification. We investigate passive three-dimensional (3D) integral imaging (InIm) in visible spectrum for object classification using deep neural networks in photon-starved conditions and under partial occlusion. We compare the proposed passive 3D InIm operating in the visible domain with that of the long wave infrared sensing in both 2D and 3D imaging cases for object classification in degraded conditions. This comparison is based on average precision, recall, and miss rates. Our experimental results demonstrate that cold and hot object classification using 3D InIm in the visible spectrum may outperform both 2D and 3D imaging implemented in long wave infrared spectrum for photon-starved and partially occluded scenes. While these experiments are not comprehensive, they demonstrate the potential of 3D InIm in the visible spectrum for low light applications. Imaging in the visible spectrum provides higher spatial resolution, more compact optics, and lower cost hardware compared with long wave infrared imaging. In addition, higher spatial resolution obtained in the visible spectrum can improve object classification accuracy. Our experimental results provide a proof of concept for implementing visible spectrum imaging in place of the traditional LWIR spectrum imaging for certain object recognition tasks.
27. Krishnan G, Joshi R, O'Connor T, Javidi B. Optical signal detection in turbid water using multidimensional integral imaging with deep learning. Optics Express 2021; 29:35691-35701. [PMID: 34808998] [DOI: 10.1364/oe.440114]
Abstract
Optical signal detection in turbid and occluded environments is a challenging task due to the light scattering and beam attenuation inside the medium. Three-dimensional (3D) integral imaging is an imaging approach which integrates two-dimensional images from multiple perspectives and has proved to be useful for challenging conditions such as occlusion and turbidity. In this manuscript, we present an approach for the detection of optical signals in turbid water and occluded environments using multidimensional integral imaging employing temporal encoding with deep learning. In our experiments, an optical signal is temporally encoded with gold code and transmitted through turbid water via a light-emitting diode (LED). A camera array captures videos of the optical signals from multiple perspectives and performs the 3D signal reconstruction of temporal signal. The convolutional neural network-based bidirectional Long Short-Term Network (CNN-BiLSTM) network is trained with clear water video sequences to perform classification on the binary transmitted signal. The testing data was collected in turbid water scenes with partial signal occlusion, and a sliding window with CNN-BiLSTM-based classification was performed on the reconstructed 3D video data to detect the encoded binary data sequence. The proposed approach is compared to previously presented correlation-based detection models. Furthermore, we compare 3D integral imaging to conventional two-dimensional (2D) imaging for signal detection using the proposed deep learning strategy. The experimental results using the proposed approach show that the multidimensional integral imaging-based methodology significantly outperforms the previously reported approaches and conventional 2D sensing-based methods. To the best of our knowledge, this is the first report on underwater signal detection using multidimensional integral imaging with deep neural networks.
28. Kwon KC, Erdenebat MU, Khuderchuluun A, Kwon KH, Kim MY, Kim N. High-quality 3D display system for an integral imaging microscope using a simplified direction-inversed computation based on user interaction. Optics Letters 2021; 46:5079-5082. [PMID: 34653119] [DOI: 10.1364/ol.436201]
Abstract
We propose and implement a high-quality three-dimensional (3D) display system for an integral imaging microscope using a simplified direction-inversed computation method based on user interaction. A model of the specimen is generated from the estimated depth information (via a convolutional neural network-based algorithm), and the quality of the model is defined by the high-resolution two-dimensional image. New elemental image arrays are generated from the model via the simplified direction-inversed computation method according to the user interaction and directly displayed on the display device. A high-quality 3D visualization of the specimen is reconstructed and displayed when the lens array is placed in front of the display device. The user interaction enables more viewpoints of the specimen to be reconstructed by the proposed system within the basic viewing zone. A remarkable quality improvement is confirmed through quantitative evaluations of the experimental results.
29. Incardona N, Tolosa Á, Scrofani G, Martinez-Corral M, Saavedra G. The Lightfield Microscope Eyepiece. Sensors 2021; 21:6619. [PMID: 34640939] [PMCID: PMC8512604] [DOI: 10.3390/s21196619]
Abstract
Lightfield microscopy has raised growing interest in the last few years. Its ability to capture three-dimensional information about the sample in a single shot makes it suitable for many applications in which time resolution is fundamental. In this paper we present a novel device capable of converting any conventional microscope into a lightfield microscope. Based on the Fourier integral microscope concept, we designed the lightfield microscope eyepiece, which is coupled to the eyepiece port so that the user can exploit all of the host microscope's components (objective turret, illumination systems, translation stage, etc.) and obtain a 3D reconstruction of the sample. After the optical design, a proof-of-concept device was built with off-the-shelf optomechanical components. Its measured optical performance shows good agreement with theory. Pictures of different samples taken with the lightfield eyepiece are shown, along with the corresponding reconstructions. We demonstrated the functioning of the lightfield eyepiece and laid the foundation for the development of a commercial device that works with any microscope.
Affiliation(s)
- Nicolò Incardona: 3D Imaging and Display Laboratory, Department of Optics, Universitat de València, 46100 Burjassot, Spain; Doitplenoptic S.L., 46980 Paterna, Spain
- Ángel Tolosa: 3D Imaging and Display Laboratory, Department of Optics, Universitat de València, 46100 Burjassot, Spain; Doitplenoptic S.L., 46980 Paterna, Spain
- Gabriele Scrofani: 3D Imaging and Display Laboratory, Department of Optics, Universitat de València, 46100 Burjassot, Spain
- Manuel Martinez-Corral: 3D Imaging and Display Laboratory, Department of Optics, Universitat de València, 46100 Burjassot, Spain
- Genaro Saavedra: 3D Imaging and Display Laboratory, Department of Optics, Universitat de València, 46100 Burjassot, Spain
30. Usmani K, O'Connor T, Javidi B. Three-dimensional polarimetric image restoration in low light with deep residual learning and integral imaging. Optics Express 2021; 29:29505-29517. [PMID: 34615059] [DOI: 10.1364/oe.435900]
Abstract
Polarimetric imaging can become challenging in degraded environments such as low light illumination conditions or in partial occlusions. In this paper, we propose the denoising convolutional neural network (DnCNN) model with three-dimensional (3D) integral imaging to enhance the reconstructed image quality of polarimetric imaging in degraded environments such as low light and partial occlusions. The DnCNN is trained based on the physical model of the image capture in degraded environments to enhance the visualization of polarimetric imaging where simulated low light polarimetric images are used in the training process. The DnCNN model is experimentally tested on real polarimetric images captured in real low light environments and in partial occlusion. The performance of DnCNN model is compared with that of total variation denoising. Experimental results demonstrate that DnCNN performs better than total variation denoising for polarimetric integral imaging in terms of signal-to-noise ratio and structural similarity index measure in low light environments as well as low light environments under partial occlusions. To the best of our knowledge, this is the first report of polarimetric 3D object visualization and restoration in low light environments and occlusions using DnCNN with integral imaging. The proposed approach is also useful for 3D image restoration in conventional (non-polarimetric) integral imaging in a degraded environment.

31
Sung Y. Snapshot three-dimensional absorption imaging of microscopic specimens. Phys Rev Applied 2021;15:064065. PMID: 34377738. PMCID: PMC8351404. DOI: 10.1103/physrevapplied.15.064065.
Abstract
Snapshot projection optical tomography (SPOT) uses a micro-lens array (MLA) to simultaneously capture the projection images of a three-dimensional (3D) specimen corresponding to different viewing directions. Compared with other light-field imaging techniques using an MLA, SPOT is dual telecentric and can block high-angle stray rays without sacrificing light collection efficiency. Using SPOT, we recently demonstrated snapshot 3D fluorescence imaging. Here we demonstrate snapshot 3D absorption imaging of microscopic specimens. For the illumination, we focus the incoherent light from a light-emitting diode onto a pinhole placed at a plane conjugate to the sample plane. SPOT allows us to capture the ray bundles passing through the specimen along different directions. The images recorded by the array of lenslets can be related to projections of the 3D absorption coefficient along the viewing directions of the lenslets. Using a tomographic reconstruction algorithm, we obtain the 3D map of the absorption coefficient. We apply the developed system to different types of samples, demonstrating its optical sectioning capability. The transverse and axial resolutions, measured with gold nanoparticles, are 1.3 μm and 2.3 μm, respectively.
Affiliation(s)
- Yongjin Sung
- College of Engineering & Applied Science, University of Wisconsin, Milwaukee, WI 53211, USA

32
Juntunen C, Woller IM, Sung Y. Hyperspectral three-dimensional fluorescence imaging using snapshot optical tomography. Sensors 2021;21:3652. PMID: 34073956. PMCID: PMC8197295. DOI: 10.3390/s21113652.
Abstract
Hyperspectral three-dimensional (3D) imaging can provide both 3D structural and functional information about a specimen. Its imaging throughput is typically very low, however, because scanning mechanisms are required to cover different depths and wavelengths. Here we demonstrate hyperspectral 3D imaging using snapshot projection optical tomography (SPOT) and Fourier-transform spectroscopy (FTS). SPOT allows us to instantaneously acquire projection images corresponding to different viewing angles, while FTS allows us to perform hyperspectral imaging at high spectral resolution. Using fluorescent beads and sunflower pollen, we demonstrate the imaging performance of the developed system.
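As background to the FTS step described above: Fourier-transform spectroscopy recovers a spectrum from an interferogram recorded as a function of optical path difference (OPD). The sketch below is the generic textbook procedure, not the authors' implementation; all sampling parameters and the simulated source are illustrative assumptions.

```python
import numpy as np

# Fourier-transform spectroscopy (FTS): the spectrum is the Fourier
# transform of the interferogram recorded versus optical path difference.
# Everything below (sampling step, line position) is made up for the demo.

def spectrum_from_interferogram(interferogram, opd_step_cm):
    """Return (wavenumbers in cm^-1, spectral magnitude) via real FFT."""
    signal = interferogram - interferogram.mean()   # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    wavenumbers = np.fft.rfftfreq(len(signal), d=opd_step_cm)
    return wavenumbers, spectrum

# Simulate an interferogram for a monochromatic line at 15800 cm^-1 (~633 nm).
opd_step = 1e-5                       # OPD sampling interval in cm
n = 4096
opd = np.arange(n) * opd_step
line_cm = 15800.0
interferogram = 1.0 + np.cos(2 * np.pi * line_cm * opd)

wn, spec = spectrum_from_interferogram(interferogram, opd_step)
recovered = wn[np.argmax(spec)]       # peak lands within one bin of 15800
```

The spectral resolution here is set by the maximum OPD (1/(n·Δ) ≈ 24 cm⁻¹), which is why FTS reaches high spectral resolution simply by scanning a longer path difference.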
Affiliation(s)
- Cory Juntunen
- College of Engineering and Applied Science, University of Wisconsin-Milwaukee, 3200 N Cramer St, Milwaukee, WI 53211, USA
- Isabel M. Woller
- College of Health Sciences, University of Wisconsin-Milwaukee, 2025 E Newport Ave, Milwaukee, WI 53211, USA
- Yongjin Sung
- College of Engineering and Applied Science, University of Wisconsin-Milwaukee, 3200 N Cramer St, Milwaukee, WI 53211, USA

33
Hua X, Liu W, Jia S. High-resolution Fourier light-field microscopy for volumetric multi-color live-cell imaging. Optica 2021;8:614-620. PMID: 34327282. PMCID: PMC8318351. DOI: 10.1364/optica.419236.
Abstract
Volumetric interrogation of the organization and processes of intracellular organelles and molecules with high spatiotemporal resolution is essential for understanding cell physiology, development, and pathology. Here, we report high-resolution Fourier light-field microscopy (HR-FLFM) for fast, volumetric live-cell imaging. HR-FLFM transforms conventional cell microscopy and enables exploration of less accessible, spatiotemporally limiting regimes for single-cell studies. The results show near-diffraction-limited resolution in all three dimensions, a five-fold extended focal depth of several micrometers, and scanning-free volume acquisition times down to milliseconds. The system offers accessible instrumentation, low photodamage for continuous observation, and high compatibility with general cell assays. We anticipate that HR-FLFM will offer a promising methodological pathway for investigating a wide range of intracellular processes and functions with exquisite spatiotemporal contextual detail.

34
Wagner N, Beuttenmueller F, Norlin N, Gierten J, Boffi JC, Wittbrodt J, Weigert M, Hufnagel L, Prevedel R, Kreshuk A. Deep learning-enhanced light-field imaging with continuous validation. Nat Methods 2021;18:557-563. PMID: 33963344. DOI: 10.1038/s41592-021-01136-0.
Abstract
Visualizing dynamic processes over large, three-dimensional fields of view at high speed is essential for many applications in the life sciences. Light-field microscopy (LFM) has emerged as a tool for fast volumetric image acquisition, but its effective throughput and widespread use in biology have been hampered by a computationally demanding and artifact-prone image reconstruction process. Here, we present a framework for artificial intelligence-enhanced microscopy, integrating a hybrid light-field light-sheet microscope and deep learning-based volume reconstruction. In our approach, concomitantly acquired, high-resolution two-dimensional light-sheet images continuously serve as training data and validation for the convolutional neural network reconstructing the raw LFM data during extended volumetric time-lapse imaging experiments. Our network delivers high-quality three-dimensional reconstructions at video-rate throughput, which can be further refined based on the high-resolution light-sheet images. We demonstrate the capabilities of our approach by imaging medaka heart dynamics and zebrafish neural activity with volumetric imaging rates up to 100 Hz.
Affiliation(s)
- Nils Wagner
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany; Department of Informatics, Technical University of Munich, Garching, Germany; Munich School for Data Science (MUDS), Munich, Germany
- Fynn Beuttenmueller
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany; Collaboration for joint PhD degree between EMBL and Heidelberg University, Faculty of Biosciences, Heidelberg University, Heidelberg, Germany
- Nils Norlin
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany; Department of Experimental Medical Science, Lund University, Lund, Sweden; Lund Bioimaging Centre, Lund University, Lund, Sweden
- Jakob Gierten
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany; Department of Pediatric Cardiology, University Hospital Heidelberg, Heidelberg, Germany
- Juan Carlos Boffi
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
- Joachim Wittbrodt
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Martin Weigert
- Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland
- Lars Hufnagel
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
- Robert Prevedel
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany; Developmental Biology Unit, European Molecular Biology Laboratory, Heidelberg, Germany; Epigenetics and Neurobiology Unit, European Molecular Biology Laboratory, Monterotondo, Italy; Molecular Medicine Partnership Unit (MMPU), European Molecular Biology Laboratory, Heidelberg, Germany
- Anna Kreshuk
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany

35
Usmani K, Krishnan G, O'Connor T, Javidi B. Deep learning polarimetric three-dimensional integral imaging object recognition in adverse environmental conditions. Opt Express 2021;29:12215-12228. PMID: 33984986. DOI: 10.1364/oe.421287.
Abstract
Polarimetric imaging is useful for object recognition and material classification because of its ability to discriminate objects based on the polarimetric signatures of materials. Polarimetric imaging of an object captures important physical properties such as shape and surface characteristics and can be effective even in low light environments. Integral imaging is a passive three-dimensional (3D) imaging approach that takes advantage of multiple 2D imaging perspectives to perform 3D reconstruction. In this paper, we propose a unified approach to polarimetric detection and classification of objects in degraded environments such as low light and the presence of occlusion. This task is accomplished using a deep learning model for 3D polarimetric integral imaging data captured in the visible spectral domain. The neural network system is designed and trained for 3D object detection and classification using polarimetric integral images. We compare the detection and classification results between polarimetric and non-polarimetric 2D and 3D imaging. The system performance in degraded environmental conditions is evaluated using average miss rate, average precision, and F1 score. The results indicate that, for the experiments we performed, polarimetric 3D integral imaging outperforms 2D polarimetric imaging as well as non-polarimetric 2D and 3D imaging for object recognition in adverse conditions such as low light and occlusion. To the best of our knowledge, this is the first report of polarimetric 3D object recognition in low light environments and occlusions using deep learning-based integral imaging. The proposed approach is attractive because low light polarimetric object recognition in the visible spectral band benefits from much higher spatial resolution, more compact optics, and lower system cost compared with long wave infrared imaging, which is the conventional imaging approach for low light environments.

36
Inoue K, Anand A, Cho M. Angular spectrum matching for digital holographic microscopy under extremely low light conditions. Opt Lett 2021;46:1470-1473. PMID: 33720214. DOI: 10.1364/ol.416002.
Abstract
Digital holographic microscopy (DHM) is a promising three-dimensional (3D) microscopy technique owing to its high-resolution, high-precision 3D images, and it is attracting attention in areas such as bioinformatics and semiconductor defect detection. However, some limitations remain. In particular, high-speed holographic imaging requires high-power lasers, which makes it difficult to image highly absorbent or light-sensitive samples. To overcome these issues, we propose a new, to the best of our knowledge, digital hologram recovery algorithm called angular spectrum matching (ASM), which imitates a reference hologram to recover holograms recorded in digital holography at low light intensities. The hologram used for the background phase comparison is recorded without objects, so no power limitation applies. The ASM utilizes this background hologram to recover dark holograms. We present experimental results showing improved DHM numerical reconstructions and recovered holograms under extremely low light conditions.

37
Zhang Z, Cong L, Bai L, Wang K. Light-field microscopy for fast volumetric brain imaging. J Neurosci Methods 2021;352:109083. PMID: 33484746. DOI: 10.1016/j.jneumeth.2021.109083.
Abstract
Recording neural activities over large populations is critical for a better understanding of the functional mechanisms of animal brains. Traditional optical imaging technologies for in vivo neural activity recording are usually limited in throughput and cannot cover a large imaging volume at high speed. Light-field microscopy features a highly parallelized imaging collection mechanism and can simultaneously record optical signals from different depths. Therefore, it can potentially increase the imaging throughput substantially. Furthermore, its unique instantaneous volumetric imaging capability enables the capture of highly dynamic processes, such as recording whole-animal neural activities in freely moving Caenorhabditis elegans and whole-brain neural activity in freely swimming larval zebrafish during prey capture. Here, we summarize the principles of and considerations in the practical implementation of light-field microscopy as currently applied in biological imaging experiments. We also discuss the strategies that light-field microscopy can employ when imaging thick tissues in the presence of scattering and background interference. Finally, we present a few examples of applying light-field microscopy in neuroscientific studies in several important animal models.
Affiliation(s)
- Zhenkun Zhang
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Lin Cong
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- Lu Bai
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Kai Wang
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai, 201210, China

38
Liu W, Jia S. wFLFM: enhancing the resolution of Fourier light-field microscopy using a hybrid wide-field image. Appl Phys Express 2021;14:012007. PMID: 33889222. PMCID: PMC8059709. DOI: 10.35848/1882-0786/abd3b7.
Abstract
We introduce wFLFM, an approach that enhances the resolution of Fourier light-field microscopy (FLFM) through a hybrid wide-field image. The system exploits the intrinsic compatibility of image formation between the on-axis FLFM elemental image and the wide-field image, allowing for minimal instrumental and computational complexity. The numerical and experimental results of wFLFM present a two- to three-fold improvement in the lateral resolution without compromising the 3D imaging capability in comparison with conventional FLFM.
Affiliation(s)
- Wenhao Liu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Shu Jia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA

39
Javidi B, Carnicer A, Arai J, Fujii T, Hua H, Liao H, Martínez-Corral M, Pla F, Stern A, Waller L, Wang QH, Wetzstein G, Yamaguchi M, Yamamoto H. Roadmap on 3D integral imaging: sensing, processing, and display. Opt Express 2020;28:32266-32293. PMID: 33114917. DOI: 10.1364/oe.402193.
Abstract
This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging. The article discusses various aspects of the field including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from the experts presenting various aspects of the field on sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents the vision of its author to describe the progress, potential, vision, and challenging issues in this field.

40
Scrofani G, Saavedra G, Martínez-Corral M, Sánchez-Ortiga E. Three-dimensional real-time darkfield imaging through Fourier lightfield microscopy. Opt Express 2020;28:30513-30519. PMID: 33115051. DOI: 10.1364/oe.404961.
Abstract
We report a protocol that exploits the Fourier lightfield microscopy concept to provide 3D darkfield images of volumetric samples in a single shot. The microscope uses the Fourier lightfield configuration, in which a lens array is placed at the Fourier plane of the microscope objective, directly multiplexing the spatio-angular information of the sample. With the proper illumination beam, the system collects the light scattered by the sample while the background light is blocked out. This produces a set of orthographic perspective images with shifted spatial-frequency components that can be recombined into a 3D darkfield image. With the appropriate reconstruction algorithm, high-contrast darkfield optical sections are computed in real time. The method is applied to fast volumetric reconstructions of unstained 3D samples.

41
Yanny K, Antipa N, Liberti W, Dehaeck S, Monakhova K, Liu FL, Shen K, Ng R, Waller L. Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy. Light Sci Appl 2020;9:171. PMID: 33082940. PMCID: PMC7532148. DOI: 10.1038/s41377-020-00403-7.
Abstract
Miniature fluorescence microscopes are a standard tool in systems biology. However, widefield miniature microscopes capture only 2D information, and modifications that enable 3D capabilities increase the size and weight and have poor resolution outside a narrow depth range. Here, we achieve 3D capability by replacing the tube lens of a conventional 2D Miniscope with an optimized multifocal phase mask at the objective's aperture stop. Placing the phase mask at the aperture stop significantly reduces the size of the device, and varying the focal lengths enables uniform resolution across a wide depth range. The phase mask encodes the 3D fluorescence intensity into a single 2D measurement, and the 3D volume is recovered by solving a sparsity-constrained inverse problem. We provide methods for designing and fabricating the phase mask and an efficient forward model that accounts for the field-varying aberrations in miniature objectives. We demonstrate a prototype that is 17 mm tall, weighs 2.5 grams, and achieves 2.76 μm lateral and 15 μm axial resolution across most of the 900 × 700 × 390 μm³ volume at 40 volumes per second. The performance is validated experimentally on resolution targets, dynamic biological samples, and mouse brain tissue. Compared with existing miniature single-shot volume-capture implementations, our system is smaller and lighter and achieves a more than 2× better lateral and axial resolution throughout a 10× larger usable depth range. Our microscope design provides single-shot 3D imaging for applications where a compact platform matters, such as volumetric neural imaging in freely moving animals and 3D motion studies of dynamic samples in incubators and lab-on-a-chip devices.
Affiliation(s)
- Kyrollos Yanny
- UCB/UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, CA 94720, USA
- Nick Antipa
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA
- William Liberti
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA
- Sam Dehaeck
- TIPs Department, Université libre de Bruxelles (ULB), 1050 Brussels, Belgium
- Kristina Monakhova
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA
- Fanglin Linda Liu
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA
- Konlin Shen
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA
- Ren Ng
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA
- Laura Waller
- UCB/UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, CA 94720, USA; Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA

42
Liu FL, Kuo G, Antipa N, Yanny K, Waller L. Fourier DiffuserScope: single-shot 3D Fourier light field microscopy with a diffuser. Opt Express 2020;28:28969-28986. PMID: 33114805. DOI: 10.1364/oe.400876.
Abstract
Light field microscopy (LFM) uses a microlens array (MLA) near the sensor plane of a microscope to achieve single-shot 3D imaging of a sample without any moving parts. Unfortunately, the 3D capability of LFM comes with a significant loss of lateral resolution at the focal plane. Placing the MLA near the pupil plane of the microscope, instead of the image plane, can mitigate the artifacts and provide an efficient forward model, at the expense of field-of-view (FOV). Here, we demonstrate improved resolution across a large volume with Fourier DiffuserScope, which uses a diffuser in the pupil plane to encode 3D information, then computationally reconstructs the volume by solving a sparsity-constrained inverse problem. Our diffuser consists of randomly placed microlenses with varying focal lengths; the random positions provide a larger FOV compared to a conventional MLA, and the diverse focal lengths improve the axial depth range. To predict system performance based on diffuser parameters, we, for the first time, establish a theoretical framework and design guidelines, which are verified by numerical simulations, and then build an experimental system that achieves < 3 µm lateral and 4 µm axial resolution over a 1000 × 1000 × 280 µm3 volume. Our diffuser design outperforms the MLA used in LFM, providing more uniform resolution over a larger volume, both laterally and axially.

43
Usmani K, O'Connor T, Shen X, Marasco P, Carnicer A, Dey D, Javidi B. Three-dimensional polarimetric integral imaging in photon-starved conditions: performance comparison between visible and long wave infrared imaging. Opt Express 2020;28:19281-19294. PMID: 32672208. DOI: 10.1364/oe.395301.
Abstract
Three-dimensional (3D) polarimetric integral imaging (InIm) for extracting the 3D polarimetric information of objects in photon-starved conditions is investigated using a low-noise visible range camera and a long wave infrared (LWIR) range camera, and the performance of the two sensors is compared. Stokes polarization parameters and degree of polarization (DoP) are calculated to extract the polarimetric information of the 3D scene, while integral imaging reconstruction provides depth information and improves the performance of low-light imaging tasks. An LWIR wire grid polarizer and a linear polarizer film are used as polarimetric objects for the LWIR range and visible range cameras, respectively. To account for the limited number of photons per pixel captured by the visible range camera in low light conditions, we apply a mathematical restoration model to each elemental image of the visible camera to enhance the signal. We show that the low-noise visible range camera may outperform the LWIR camera in the detection of polarimetric objects under low illumination conditions. Our experiments indicate that, for 3D polarimetric measurements under photon-starved conditions, visible range sensing may produce a signal-to-noise ratio (SNR) that is not lower than that of LWIR range sensing. We derive the probability density function (PDF) of the 2D and 3D degree of polarization (DoP) images and show that the theoretical model agrees with the experimentally obtained results. To the best of our knowledge, this is the first report comparing the polarimetric imaging performance of visible range and infrared (IR) range sensors under photon-starved conditions, together with the relevant statistical models of 3D polarimetric integral imaging.
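The Stokes parameters and degree of polarization referenced in this entry are standard quantities computed from intensity images taken behind a linear polarizer. The sketch below uses the textbook four-angle (0°, 45°, 90°, 135°) measurement scheme for the linear Stokes parameters and degree of linear polarization; this scheme is an assumption for illustration and may differ from the paper's exact acquisition procedure.

```python
import numpy as np

# Linear Stokes parameters and degree of linear polarization (DoLP) from
# four polarizer-angle intensity images. Textbook formulation, shown as a
# generic illustration of the quantities this abstract refers to.

def stokes_dolp(i0, i45, i90, i135):
    s0 = i0 + i90                       # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical preference
    s2 = i45 - i135                     # +45 deg vs. -45 deg preference
    # Guard against division by zero in dark pixels.
    dolp = np.sqrt(s1**2 + s2**2) / np.where(s0 == 0, 1, s0)
    return s0, s1, s2, dolp

# Fully horizontally polarized light: all intensity passes at 0 degrees,
# half passes at 45 and 135 degrees, none at 90 degrees.
s0, s1, s2, dolp = stokes_dolp(
    i0=np.array([1.0]), i45=np.array([0.5]),
    i90=np.array([0.0]), i135=np.array([0.5]))
print(dolp)  # [1.] -> fully polarized
```

Applied pixel-wise to the four reconstructed elemental-image stacks, this yields the 2D and 3D DoP maps whose statistics the paper models.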

44
Stefanoiu A, Scrofani G, Saavedra G, Martínez-Corral M, Lasser T. What about computational super-resolution in fluorescence Fourier light field microscopy? Opt Express 2020;28:16554-16568. PMID: 32549475. DOI: 10.1364/oe.391189.
Abstract
Recently, Fourier light field microscopy was proposed to overcome the limitations in conventional light field microscopy by placing a micro-lens array at the aperture stop of the microscope objective instead of the image plane. In this way, a collection of orthographic views from different perspectives are directly captured. When inspecting fluorescent samples, the sensitivity and noise of the sensors are a major concern and large sensor pixels are required to cope with low-light conditions, which implies under-sampling issues. In this context, we analyze the sampling patterns in Fourier light field microscopy to understand to what extent computational super-resolution can be triggered during deconvolution in order to improve the resolution of the 3D reconstruction of the imaged data.

45
Abstract
We present a new plenoptic microscopy configuration for 3D snapshot imaging, which is dual telecentric and can directly record true projection images corresponding to different viewing angles. It also allows blocking high-angle stray rays without sacrificing light collection efficiency. This configuration, named snapshot projection optical tomography (SPOT), arranges an objective lens and a microlens array (MLA) in a 4-f telecentric configuration and places an aperture stop at the back focal plane of a relay lens. We develop a forward imaging model for SPOT, which can also be applied to existing light field microscopy techniques that use an MLA as the tube lens. Using the developed system, we demonstrate snapshot 3D imaging of various fluorescent beads and a biological cell, which confirms the capability of SPOT for imaging specimens with an extended fluorophore distribution as well as isolated fluorochromes. The transverse and vertical resolutions are measured to be 0.8 μm and 1.6 μm, respectively.
Affiliation(s)
- Yongjin Sung
- College of Engineering & Applied Science, University of Wisconsin, Milwaukee, WI 53211, USA

46
Guo C, Liu W, Hua X, Li H, Jia S. Fourier light-field microscopy. Opt Express 2019;27:25573-25594. PMID: 31510428. PMCID: PMC6825611. DOI: 10.1364/oe.27.025573.
Abstract
Observing the varied anatomical and functional information that spans many spatiotemporal scales at high resolution provides a deep understanding of the fundamentals of biological systems. Light-field microscopy (LFM) has recently emerged as a scanning-free, scalable method that allows for high-speed, volumetric imaging ranging from single-cell specimens to the mammalian brain. However, prohibitive reconstruction artifacts and severe computational cost have thus far limited broader applications of LFM. To address the challenge, in this work we report Fourier LFM (FLFM), a system that processes the light-field information through the Fourier domain. We established a complete theoretical and algorithmic framework that describes light propagation, image formation, and system characterization in FLFM. Compared with conventional LFM, FLFM fundamentally mitigates the artifacts, allowing high-resolution imaging across a two- to three-fold extended depth. In addition, the system reduces the reconstruction time by roughly two orders of magnitude. FLFM was validated by high-resolution, artifact-free imaging of various calibration and biological samples. Furthermore, we propose a generic design principle for FLFM as a highly scalable method to meet broader imaging needs across various spatial scales. We anticipate FLFM to be a particularly powerful tool for imaging diverse phenotypic and functional information spanning broad molecular, cellular, and tissue systems.
Affiliation(s)
- Changliang Guo
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- These authors contributed equally to this work
- Wenhao Liu
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- These authors contributed equally to this work
- Xuanwen Hua
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Haoyu Li
- Ultra-Precision Optoelectronic Instrument Engineering Center, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Shu Jia
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA

47
Palmieri L, Scrofani G, Incardona N, Saavedra G, Martínez-Corral M, Koch R. Robust Depth Estimation for Light Field Microscopy. SENSORS 2019; 19:s19030500. [PMID: 30691038 PMCID: PMC6387340 DOI: 10.3390/s19030500] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Revised: 01/21/2019] [Accepted: 01/22/2019] [Indexed: 11/22/2022]
Abstract
Light-field technologies have risen in prominence in recent years, and microscopy is a field on which they have had a deep impact. The ability to capture spatial and angular information simultaneously, in a single shot, brings several advantages and enables new applications. A common goal in these applications is the calculation of a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit the sharp colors of natural scenes. Under these conditions, standard approaches produce noisy depth maps. In this work, a robust approach is proposed in which accurate depth maps are produced by exploiting the information recorded in the light field, in particular in images produced with a Fourier integral microscope. The proposed approach can be divided into three main parts. First, it creates two cost volumes using different focal cues, namely correspondences and defocus. Second, it applies filtering methods that exploit multi-scale and superpixel cost aggregation to reduce noise and enhance accuracy. Finally, it merges the two cost volumes and extracts a depth map through multi-label optimization.
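The defocus-cue branch of such a pipeline (a per-depth cost volume from a focus measure, local cost aggregation, then label extraction) can be illustrated with a minimal depth-from-focus sketch. This is not the paper's method: it substitutes a plain per-pixel argmax for the multi-label optimization and a box filter for the multi-scale, superpixel aggregation, and all names are illustrative.

```python
import numpy as np

def laplacian(img):
    # 4-neighbour Laplacian via rolls (periodic borders; fine for a sketch)
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def box_filter(x, r):
    # naive (2r+1) x (2r+1) box aggregation of a cost slice
    out = np.zeros_like(x)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(x, (dy, dx), axis=(0, 1))
    return out / (2 * r + 1) ** 2

def depth_from_focus(stack, r=2):
    """stack: (D, H, W) focal stack; returns a per-pixel depth label map.

    Cost volume = aggregated squared Laplacian (sharpness); the label with
    the strongest focus response wins at each pixel.
    """
    cost = np.stack([box_filter(laplacian(s) ** 2, r) for s in stack])
    return cost.argmax(axis=0)
```

On feature-poor biological images the raw focus measure is exactly where the noise enters, which is why the aggregation and optimization stages the abstract describes matter in practice.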
Affiliation(s)
- Luca Palmieri
- Department of Computer Science, Christian-Albrecht-University, 24118 Kiel, Germany.
- Gabriele Scrofani
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain.
- Nicolò Incardona
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain.
- Genaro Saavedra
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain.
- Reinhard Koch
- Department of Computer Science, Christian-Albrecht-University, 24118 Kiel, Germany.
|
48
|
Large Depth-of-Field Integral Microscopy by Use of a Liquid Lens. SENSORS 2018; 18:s18103383. [PMID: 30309009 PMCID: PMC6210099 DOI: 10.3390/s18103383] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Revised: 09/28/2018] [Accepted: 10/05/2018] [Indexed: 11/17/2022]
Abstract
Integral microscopy is a 3D imaging technique that permits the recording of the spatial and angular information of microscopic samples. From this information it is possible to calculate a collection of orthographic views with full parallax and to refocus computationally, at will, through the 3D specimen. An important drawback of integral microscopy, especially when dealing with thick samples, is the limited depth of field (DOF) of the perspective views, which imposes a significant limitation on the depth range of computationally refocused images. To overcome this problem, we propose a new method based on the insertion, at the pupil plane of the microscope objective, of an electrically controlled liquid lens (LL) whose optical power can be changed simply by tuning the applied voltage. This apparatus has the advantage of controlling the axial position of the objective's focal plane while keeping the essential parameters of the integral microscope constant, that is, the magnification, the numerical aperture and the amount of parallax. Thus, given a 3D sample, the new microscope can provide a stack of integral images with complementary depth ranges. Fusing the set of refocused images enlarges the reconstruction range, yielding images in focus over the whole region.
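The final fusion step, combining refocused images with complementary depth ranges into one extended-DOF composite, can be sketched as per-pixel sharpest-source selection. This is only an illustration of the idea: the squared-gradient focus measure and the function name are assumptions, not the authors' fusion algorithm.

```python
import numpy as np

def fuse_refocused(stacks):
    """Fuse refocused images with complementary depth ranges into one
    composite by selecting, at each pixel, the sharpest source image.

    stacks: (K, H, W) array of refocused images.
    """
    stacks = np.asarray(stacks, dtype=float)
    # squared gradient magnitude as a simple per-pixel focus measure
    gy = np.roll(stacks, -1, axis=1) - stacks   # forward difference, rows
    gx = np.roll(stacks, -1, axis=2) - stacks   # forward difference, cols
    sharp = gy ** 2 + gx ** 2
    best = sharp.argmax(axis=0)                 # index of sharpest source
    return np.take_along_axis(stacks, best[None], axis=0)[0]
```

Because each LL voltage setting keeps magnification, numerical aperture and parallax fixed, the member images are already registered, which is what makes this kind of pixel-wise selection plausible.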
|