1
Xue Y, Boivin JR, Wadduwage DN, Park JK, Nedivi E, So PTC. Multiline orthogonal scanning temporal focusing (mosTF) microscopy for scattering reduction in in vivo brain imaging. Sci Rep 2024; 14:10954. PMID: 38740797; PMCID: PMC11091065; DOI: 10.1038/s41598-024-57208-6.
Abstract
Temporal focusing two-photon microscopy has been utilized for high-resolution imaging of neuronal and synaptic structures across volumes spanning hundreds of microns in vivo. However, a limitation of temporal focusing is the rapid degradation of the signal-to-background ratio and resolution with increasing imaging depth. This degradation is due to scattered emission photons being widely distributed, resulting in a strong background. To overcome this challenge, we have developed multiline orthogonal scanning temporal focusing (mosTF) microscopy. mosTF captures a sequence of images at each scan location of the excitation line. A reconstruction algorithm then reassigns scattered photons back to their correct scan positions. We demonstrate the effectiveness of mosTF by acquiring neuronal images of mice in vivo. Our results show remarkable improvements in in vivo brain imaging with mosTF, while maintaining its speed advantage.
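For intuition only, a minimal numerical sketch of the photon-reassignment idea described above: photons detected near each known excitation-line position are summed back onto that line, while the widely spread scattered background elsewhere in the frame is discarded. The function name, the simple column-window model, and all parameters are illustrative assumptions, not the authors' published mosTF reconstruction algorithm.

```python
# Hypothetical sketch of line-wise photon reassignment. NOT the published
# mosTF reconstruction; the column-window model and parameters are assumed.
import numpy as np

def reassign_line_scan(frames: np.ndarray, scan_cols: np.ndarray,
                       window: int = 3) -> np.ndarray:
    """Rebuild an image from a stack of line-scan camera frames.

    frames    : (n_scans, H, W) array, one frame per excitation-line position
    scan_cols : (n_scans,) column index of the excitation line in each frame
    window    : half-width in pixels around the line; photons inside it are
                assumed to originate from that line and are reassigned to it
    """
    n_scans, H, W = frames.shape
    out = np.zeros((H, W))
    for frame, col in zip(frames, scan_cols):
        lo, hi = max(0, col - window), min(W, col + window + 1)
        # Sum photons detected near the excitation line back onto that line;
        # scattered photons far from the line are excluded from the image.
        out[:, col] += frame[:, lo:hi].sum(axis=1)
    return out

# Toy usage: 64 line positions scanned across a 64 x 64 field of view.
rng = np.random.default_rng(0)
frames = rng.poisson(1.0, size=(64, 64, 64)).astype(float)
image = reassign_line_scan(frames, scan_cols=np.arange(64))
```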
Affiliation(s)
- Yi Xue
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Laser Biomedical Research Center, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Department of Biomedical Engineering, University of California, Davis, CA, 95616, USA
- Josiah R Boivin
- Picower Institute, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Dushan N Wadduwage
- Laser Biomedical Research Center, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA, 02138, USA
- Jong Kang Park
- Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Elly Nedivi
- Picower Institute, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Peter T C So
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Laser Biomedical Research Center, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
2
Alido J, Greene J, Xue Y, Hu G, Li Y, Gilmore M, Monk KJ, Dibenedictis BT, Davison IG, Tian L. Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network. arXiv 2023; arXiv:2303.12573v2. PMID: 36994164; PMCID: PMC10055497.
Abstract
Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals caused by scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering worsens the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a heterogeneous strong background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed Computational Miniature Mesoscope and demonstrate the robustness of our deep learning algorithm on scattering phantoms with different scattering conditions. The network robustly reconstructs emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths up to one scattering length. We analyze fundamental tradeoffs based on network design factors and out-of-distribution data that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques for which paired experimental training data are lacking.
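For intuition, a minimal sketch of how one kind of low-SBR synthetic training pair (sparse emitters buried in a smooth, heterogeneous background, with shot noise) could be generated; the blur and background models, function names, and parameters here are assumptions for illustration, not the authors' published simulator or the Computational Miniature Mesoscope forward model.

```python
# Hypothetical low-SBR training-pair generator: sparse emitters plus a
# smooth heterogeneous background, scaled to a target signal-to-background
# ratio (SBR) and corrupted by shot noise. Models and parameters assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_pair(shape=(128, 128), n_emitters=20, sbr=1.05, seed=0):
    rng = np.random.default_rng(seed)
    # Ground truth: a handful of point-like emitters.
    truth = np.zeros(shape)
    ys = rng.integers(0, shape[0], n_emitters)
    xs = rng.integers(0, shape[1], n_emitters)
    truth[ys, xs] = 1.0
    signal = gaussian_filter(truth, sigma=1.5)      # crude emitter blur
    # Heterogeneous background from a heavily smoothed random texture,
    # normalized to unit mean.
    background = gaussian_filter(rng.random(shape), sigma=10)
    background /= background.mean()
    # Scale the signal so mean (signal + background) / background == sbr.
    signal *= (sbr - 1.0) / max(signal.mean(), 1e-12)
    # Shot noise on the combined image.
    measurement = rng.poisson(1e3 * (signal + background)) / 1e3
    return measurement, truth

# One (measurement, ground-truth) pair at SBR ~ 1.05, as in the abstract.
meas, gt = synth_pair()
```

Under these assumptions, pairs like this could serve as training data for a descattering reconstruction network; the actual network architecture and the light-field 2D-to-3D forward model are beyond this sketch.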
Affiliation(s)
- Jeffrey Alido
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Joseph Greene
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Yujia Xue
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Guorong Hu
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Yunzhe Li
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Mitchell Gilmore
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Kevin J. Monk
- Department of Biology, Boston University, Boston, MA, 02215, USA
- Brett T. Dibenedictis
- Department of Psychology and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Ian G. Davison
- Department of Biology, Boston University, Boston, MA, 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Department of Psychology and Brain Sciences, Boston University, Boston, MA, 02215, USA