1. Lu Z, Jin M, Chen S, Wang X, Sun F, Zhang Q, Zhao Z, Wu J, Yang J, Dai Q. Physics-driven self-supervised learning for fast high-resolution robust 3D reconstruction of light-field microscopy. Nat Methods 2025. doi: 10.1038/s41592-025-02698-z. PMID: 40355725.
Abstract
Light-field microscopy (LFM) and its variants have significantly advanced intravital high-speed 3D imaging. However, their practical applications remain limited due to trade-offs among processing speed, fidelity, and generalization in existing reconstruction methods. Here we propose a physics-driven self-supervised reconstruction network (SeReNet) for unscanned LFM and scanning LFM (sLFM) to achieve near-diffraction-limited resolution at millisecond-level processing speed. SeReNet leverages 4D information priors to not only achieve better generalization than existing deep-learning methods, especially under challenging conditions such as strong noise, optical aberration, and sample motion, but also improve processing speed by 700 times over iterative tomography. Axial performance can be further enhanced via fine-tuning as an optional add-on with compromised generalization. We demonstrate these advantages by imaging living cells, zebrafish embryos and larvae, Caenorhabditis elegans, and mice. Equipped with SeReNet, sLFM now enables continuous day-long high-speed 3D subcellular imaging with over 300,000 volumes of large-scale intercellular dynamics, such as immune responses and neural activities, leading to widespread practical biological applications.
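For readers unfamiliar with the physics-driven, self-supervised idea described above, the toy sketch below fits a volume by enforcing consistency between its re-projection through a forward model and the measured light-field views. The shift-and-sum parallax model, the view geometry, and the optimization settings are invented stand-ins for illustration only; they are not SeReNet's wave-optics point spread function, network architecture, or training scheme.

```python
# Toy physics-consistency ("self-supervised") fit: no ground-truth volume is
# used, only the measurement and an assumed forward model.
import torch
import torch.nn.functional as F

def forward_lightfield(volume, view_shifts):
    """Project a (Z, H, W) volume into angular views by shifting each depth
    plane in proportion to depth and view angle, then summing over depth."""
    views = []
    for dx, dy in view_shifts:
        planes = [torch.roll(volume[z], shifts=(dx * z, dy * z), dims=(1, 0))
                  for z in range(volume.shape[0])]
        views.append(torch.stack(planes).sum(dim=0))
    return torch.stack(views)                      # (n_views, H, W)

# synthetic "measurement": a single bead rendered through the same toy model
Z, H, W = 8, 64, 64
ground_truth = torch.zeros(Z, H, W)
ground_truth[3, 30, 30] = 1.0
view_shifts = [(-1, 0), (0, 0), (1, 0), (0, 1), (0, -1)]
measured = forward_lightfield(ground_truth, view_shifts)

# optimise a candidate volume so its re-projection matches the measured views
estimate = torch.zeros(Z, H, W, requires_grad=True)
optimizer = torch.optim.Adam([estimate], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    loss = F.mse_loss(forward_lightfield(estimate, view_shifts), measured)
    loss.backward()
    optimizer.step()
print(f"final physics-consistency loss: {loss.item():.2e}")
```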
Affiliation(s)
- Zhi Lu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Cognitive Intelligence, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Zhejiang Hehu Technology, Hangzhou, China
- Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou, China
- Manchang Jin
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Zhejiang Hehu Technology, Hangzhou, China
- Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou, China
- School of Information Science and Technology, Fudan University, Shanghai, China
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Shanghai Innovation Institute, Shanghai, China
- Shuai Chen
- Department of Gastroenterology and Hepatology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Xiaoge Wang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Feihao Sun
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Qi Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Zhifeng Zhao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Cognitive Intelligence, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Beijing Visual Science and Translational Eye Research Institute (BERI), Beijing, China.
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.
- Jingyu Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Cognitive Intelligence, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.
2. Ling Z, Liu W, Yoon K, Hou J, Forghani P, Hua X, Yoon H, Bagheri M, Dasi LP, Mandracchia B, Xu C, Nie S, Jia S. Multiscale and recursive unmixing of spatiotemporal rhythms for live-cell and intravital cardiac microscopy. Nature Cardiovascular Research 2025;4:637-648. doi: 10.1038/s44161-025-00649-7. PMID: 40335723.
Abstract
Cardiovascular diseases remain a pressing public health issue, necessitating the development of advanced therapeutic strategies underpinned by precise cardiac observations. While fluorescence microscopy is an invaluable tool for probing biological processes, cardiovascular signals are often complicated by persistent autofluorescence, overlaying dynamic cardiovascular entities and nonspecific labeling from tissue microenvironments. Here we present multiscale recursive decomposition for the precise extraction of dynamic cardiovascular signals. Multiscale recursive decomposition constructs a comprehensive framework for cardiac microscopy that includes pixel-wise image enhancement, robust principal component analysis and recursive motion segmentation. This method has been validated in various cardiac systems, including in vitro studies with human induced pluripotent stem cell-derived cardiomyocytes and in vivo studies of cardiovascular morphology and function in Xenopus embryos. The approach advances light-field cardiac microscopy, facilitating simultaneous, multiparametric and volumetric analysis of cardiac activities with minimum photodamage. We anticipate that the methodology will advance cardiovascular studies across a broad spectrum of cardiac models.
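The abstract names robust principal component analysis as one ingredient for separating a quasi-static background from sparse, moving cardiac signal. The sketch below is a generic principal component pursuit (inexact augmented Lagrangian) run on synthetic data; the parameter choices and the toy movie are assumptions for illustration and do not reproduce the authors' multiscale recursive pipeline.

```python
# Generic robust PCA (principal component pursuit): M ~ L (low rank) + S (sparse).
import numpy as np

def shrink(X, tau):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding, the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank (background) update
        S = shrink(M - L + Y / mu, lam / mu)       # sparse (dynamic signal) update
        residual = M - L - S
        Y += mu * residual                          # dual ascent
        if np.linalg.norm(residual) / np.linalg.norm(M) < tol:
            break
    return L, S

# toy "movie": static background (rank 1) plus a few bright transient pixels
rng = np.random.default_rng(0)
pixels, frames = 32 * 32, 60
background = np.outer(rng.random(pixels), np.ones(frames))
transients = (rng.random((pixels, frames)) > 0.99) * 2.0
L, S = rpca(background + transients)
print("background error:", np.linalg.norm(L - background) / np.linalg.norm(background))
print("signal error:", np.linalg.norm(S - transients) / np.linalg.norm(transients))
```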
Affiliation(s)
- Zhi Ling
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Wenhao Liu
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Kyungduck Yoon
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Jessica Hou
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Parvin Forghani
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Xuanwen Hua
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Hansol Yoon
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Maryam Bagheri
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Lakshmi P Dasi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Biagio Mandracchia
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Laboratorio de Procesado de Imagen, Universidad de Valladolid, Valladolid, Spain
- Chunhui Xu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Shuyi Nie
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Shu Jia
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA, USA.
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA.
- Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA.
3. Lin B, Xing F, Su L, Wang K, Liu Y, Zhang D, Yang X, Tan H, Zhu Z, Wang D. Real-time and universal network for volumetric imaging from microscale to macroscale at high resolution. Light, Science & Applications 2025;14:178. doi: 10.1038/s41377-025-01842-w. PMID: 40301329. PMCID: PMC12041240.
Abstract
Light-field imaging has wide applications across domains, including microscale life science imaging, mesoscale neuroimaging, and macroscale fluid dynamics imaging. The development of deep learning-based reconstruction methods has greatly facilitated high-resolution light-field image processing; however, current deep learning-based light-field reconstruction methods have concentrated predominantly on the microscale. Given the multiscale imaging capacity of the light-field technique, a network that can reconstruct light-field images across different scales would significantly benefit the development of volumetric imaging. To our knowledge, however, no universal high-resolution light-field reconstruction algorithm compatible with the microscale, mesoscale, and macroscale has been reported. To fill this gap, we present a real-time and universal network (RTU-Net) that reconstructs high-resolution light-field images at any scale. RTU-Net, the first network to address multiscale light-field image reconstruction, employs an adaptive loss function based on generative adversarial theory and consequently exhibits strong generalization capability. We comprehensively assessed the performance of RTU-Net on multiscale light-field images, including a microscale tubulin and mitochondrion dataset, a mesoscale synthetic mouse neural dataset, and a macroscale light-field particle imaging velocimetry dataset. The results show that RTU-Net achieves real-time, high-resolution light-field reconstruction for volumes ranging from 300 μm × 300 μm × 12 μm to 25 mm × 25 mm × 25 mm and delivers higher resolution than recently reported light-field reconstruction networks. The high resolution, strong robustness, high efficiency, and, above all, general applicability of RTU-Net will significantly deepen our insight into high-resolution volumetric imaging.
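The adaptive loss "based on generative adversarial theory" mentioned above follows the usual recipe of combining a pixel-fidelity term with an adversarial term. The sketch below shows that recipe with a deliberately tiny generator and discriminator, synthetic tensors, and a fixed weighting; these are illustrative assumptions, not the RTU-Net architecture or its adaptive weighting scheme.

```python
# Minimal GAN-regularised reconstruction loop: pixel loss + adversarial loss.
import torch
import torch.nn as nn

generator = nn.Sequential(              # maps a light-field stack to a toy "volume"
    nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 16, 3, padding=1),    # 16 output planes stand in for depth
)
discriminator = nn.Sequential(          # judges whether a volume looks realistic
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
)

pixel_loss = nn.L1Loss()
adv_loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

lf_views = torch.rand(2, 9, 64, 64)     # batch of 9-view light-field stacks (synthetic)
target_vol = torch.rand(2, 16, 64, 64)  # matching target volumes (synthetic)

for step in range(5):
    # discriminator update
    d_opt.zero_grad()
    fake = generator(lf_views).detach()
    d_real, d_fake = discriminator(target_vol), discriminator(fake)
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
    d_loss.backward(); d_opt.step()
    # generator update: pixel fidelity + adversarial realism
    g_opt.zero_grad()
    fake = generator(lf_views)
    d_fake = discriminator(fake)
    g_loss = pixel_loss(fake, target_vol) + 1e-2 * adv_loss(d_fake, torch.ones_like(d_fake))
    g_loss.backward(); g_opt.step()
print(f"d_loss {d_loss.item():.3f}  g_loss {g_loss.item():.3f}")
```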
Affiliation(s)
- Bingzhi Lin
- College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Feng Xing
- College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Liwei Su
- College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Kekuan Wang
- College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yulan Liu
- College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Diming Zhang
- Key Laboratory of Soybean Molecular Design Breeding, National Key Laboratory of Black Soils Conservation and Utilization, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun, China
- Xusan Yang
- Institute of Physics Chinese Academy of Sciences, Beijing, China
- Huijun Tan
- College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China.
- Zhijing Zhu
- Key Laboratory of Novel Targets and Drug Study for Neural Repair of Zhejiang Province, School of Medicine, Hangzhou City University, Hangzhou, China.
- Depeng Wang
- College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China.
4. Hsieh HC, Han Q, Brenes D, Bishop KW, Wang R, Wang Y, Poudel C, Glaser AK, Freedman BS, Vaughan JC, Allbritton NL, Liu JTC. Imaging 3D cell cultures with optical microscopy. Nat Methods 2025. doi: 10.1038/s41592-025-02647-w. PMID: 40247123.
Abstract
Three-dimensional (3D) cell cultures have gained popularity in recent years due to their ability to represent complex tissues or organs more faithfully than conventional two-dimensional (2D) cell culture. This article reviews the application of both 2D and 3D microscopy approaches for monitoring and studying 3D cell cultures. We first summarize the most popular optical microscopy methods that have been used with 3D cell cultures. We then discuss the general advantages and disadvantages of various microscopy techniques for several broad categories of investigation involving 3D cell cultures. Finally, we provide perspectives on key areas of technical need in which there are clear opportunities for innovation. Our goal is to guide microscope engineers and biomedical end users toward optimal imaging methods for specific investigational scenarios and to identify use cases in which additional innovations in high-resolution imaging could be helpful.
Affiliation(s)
- Huai-Ching Hsieh
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Qinghua Han
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- David Brenes
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Kevin W Bishop
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Rui Wang
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Yuli Wang
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Chetan Poudel
- Department of Chemistry, University of Washington, Seattle, WA, USA
- Adam K Glaser
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Benjamin S Freedman
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Department of Medicine, Division of Nephrology, Kidney Research Institute and Institute for Stem Cell and Regenerative Medicine, Seattle, WA, USA
- Plurexa LLC, Seattle, WA, USA
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Joshua C Vaughan
- Department of Chemistry, University of Washington, Seattle, WA, USA
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Nancy L Allbritton
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Jonathan T C Liu
- Department of Bioengineering, University of Washington, Seattle, WA, USA.
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA.
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA.
5. Lu Z, Zuo S, Shi M, Fan J, Xie J, Xiao G, Yu L, Wu J, Dai Q. Long-term intravital subcellular imaging with confocal scanning light-field microscopy. Nat Biotechnol 2025;43:569-580. doi: 10.1038/s41587-024-02249-5. PMID: 38802562. PMCID: PMC11994454.
Abstract
Long-term observation of subcellular dynamics in living organisms is limited by background fluorescence originating from tissue scattering or dense labeling. Existing confocal approaches face an inevitable tradeoff among parallelization, resolution and phototoxicity. Here we present confocal scanning light-field microscopy (csLFM), which integrates axially elongated line-confocal illumination with the rolling shutter in scanning light-field microscopy (sLFM). csLFM enables high-fidelity, high-speed, three-dimensional (3D) imaging at near-diffraction-limit resolution with both optical sectioning and low phototoxicity. By simultaneous 3D excitation and detection, the excitation intensity can be reduced below 1 mW mm-2, with 15-fold higher signal-to-background ratio over sLFM. We imaged subcellular dynamics over 25,000 timeframes in optically challenging environments in different species, such as migrasome delivery in mouse spleen, retractosome generation in mouse liver and 3D voltage imaging in Drosophila. Moreover, csLFM facilitates high-fidelity, large-scale neural recording with reduced crosstalk, leading to high orientation selectivity to visual stimuli, similar to two-photon microscopy, which aids understanding of neural coding mechanisms.
Affiliation(s)
- Zhi Lu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Zhejiang Hehu Technology, Hangzhou, China
- Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou, China
- Siqing Zuo
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Minghui Shi
- State Key Laboratory of Membrane Biology, Tsinghua University-Peking University Joint Center for Life Sciences, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, Beijing, China
- Jiaqi Fan
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Jingyu Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Guihua Xiao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Li Yu
- State Key Laboratory of Membrane Biology, Tsinghua University-Peking University Joint Center for Life Sciences, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, Beijing, China.
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Shanghai AI Laboratory, Shanghai, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.
6. Tian W, Chen R, Chen L. Computational Super-Resolution: An Odyssey in Harnessing Priors to Enhance Optical Microscopy Resolution. Anal Chem 2025;97:4763-4792. doi: 10.1021/acs.analchem.4c07047. PMID: 40013618. PMCID: PMC11912138.
Affiliation(s)
- Wenfeng Tian
- New Cornerstone Science Laboratory, National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, College of Future Technology, Peking University, Beijing 100871, China
- Riwang Chen
- New Cornerstone Science Laboratory, State Key Laboratory of Membrane Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Liangyi Chen
- New Cornerstone Science Laboratory, National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, College of Future Technology, Center for Life Sciences, Peking University, Beijing 100871, China
- PKU-IDG/McGovern Institute for Brain Research, Beijing 100084, China
7. Nguyen TN, Shalaby RA, Lee E, Kim SS, Ro Kim Y, Kim S, Je HS, Kwon HS, Chung E. Ultrafast optical imaging techniques for exploring rapid neuronal dynamics. Neurophotonics 2025;12:S14608. doi: 10.1117/1.nph.12.s1.s14608. PMID: 40017464. PMCID: PMC11867703.
Abstract
Optical neuroimaging has significantly advanced our understanding of brain function, particularly through techniques such as two-photon microscopy, which captures three-dimensional brain structures with sub-cellular resolution. However, traditional methods struggle to record fast, complex neuronal interactions in real time, which are crucial for understanding brain networks and developing treatments for neurological conditions such as Alzheimer's disease, Parkinson's disease, and chronic pain. Recent advancements in ultrafast imaging technologies, including kilohertz two-photon microscopy, light field microscopy, and event-based imaging, are pushing the boundaries of temporal resolution in neuroimaging. These techniques enable the capture of rapid neural events with unprecedented speed and detail. This review examines the principles, applications, and limitations of these technologies, highlighting their potential to revolutionize neuroimaging and improve the diagnosis and treatment of neurological disorders. Despite challenges such as photodamage risks and spatial resolution trade-offs, integrating these approaches promises to enhance our understanding of brain function and drive future breakthroughs in neuroscience and medicine. Continued interdisciplinary collaboration is essential to fully leverage these innovations for advancements in both basic and clinical neuroscience.
Affiliation(s)
- Tien Nhat Nguyen
- Gwangju Institute of Science and Technology, Department of Biomedical Science and Engineering, Gwangju, Republic of Korea
- Reham A. Shalaby
- Gwangju Institute of Science and Technology, Department of Biomedical Science and Engineering, Gwangju, Republic of Korea
- Eunbin Lee
- Gwangju Institute of Science and Technology, Department of Biomedical Science and Engineering, Gwangju, Republic of Korea
- Sang Seong Kim
- Gwangju Institute of Science and Technology, Department of Biomedical Science and Engineering, Gwangju, Republic of Korea
- Young Ro Kim
- Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, United States
- Harvard Medical School, Department of Radiology, Boston, Massachusetts, United States
- Seonghoon Kim
- Tsinghua University, Institute for Brain and Cognitive Sciences, Beijing, China
- Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou, China
- Hyunsoo Shawn Je
- Duke-NUS Medical School, Program in Neuroscience and Behavioral Disorders, Singapore
- Hyuk-Sang Kwon
- Gwangju Institute of Science and Technology, Department of Biomedical Science and Engineering, Gwangju, Republic of Korea
- Euiheon Chung
- Gwangju Institute of Science and Technology, Department of Biomedical Science and Engineering, Gwangju, Republic of Korea
- Gwangju Institute of Science and Technology, AI Graduate School, Gwangju, Republic of Korea
8. Saberigarakani A, Patel RP, Almasian M, Zhang X, Brewer J, Hassan SS, Chai J, Lee J, Fei B, Yuan J, Carroll K, Ding Y. Volumetric imaging and computation to explore contractile function in zebrafish hearts. bioRxiv [Preprint] 2024:2024.11.14.623621. doi: 10.1101/2024.11.14.623621. PMID: 39605398. PMCID: PMC11601419.
Abstract
Despite advancements in cardiovascular engineering, heart diseases remain a leading cause of mortality. The limited understanding of the underlying mechanisms of cardiac dysfunction at the cellular level restricts the development of effective screening and therapeutic methods. To address this, we have developed a framework that incorporates light field detection and individual cell tracking to capture real-time volumetric data in zebrafish hearts, which share structural and electrical similarities with the human heart and generate 120 to 180 beats per minute. Our results indicate that the in-house system achieves an acquisition speed of 200 volumes per second, with resolutions of up to 5.02 ± 0.54 µm laterally and 9.02 ± 1.11 µm axially across the entire depth, using the estimated-maximized-smoothed deconvolution method. The subsequent deep learning-based cell trackers enable further investigation of contractile dynamics, including cellular displacement and velocity, followed by volumetric tracking of specific cells of interest from end-systole to end-diastole in an interactive environment. Collectively, our strategy facilitates real-time volumetric imaging and assessment of contractile dynamics across the entire ventricle at the cellular resolution over multiple cycles, providing significant potential for exploring intercellular interactions in both health and disease.
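The "estimated-maximized-smoothed" deconvolution mentioned above belongs to the family of expectation-maximization deconvolution schemes whose best-known member is Richardson-Lucy. The sketch below implements plain Richardson-Lucy on a synthetic two-point image as a generic illustration (PSF, image sizes, and iteration count are assumptions); it omits the smoothing step and is not the authors' implementation.

```python
# Richardson-Lucy deconvolution on a synthetic two-point object.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Iteratively sharpen `image` given a known point-spread function."""
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# synthetic test: two nearby points blurred by a Gaussian PSF
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[30, 28] = truth[30, 36] = 1.0
blurry = fftconvolve(truth, psf, mode="same")
sharp = richardson_lucy(blurry, psf)
print("brightest recovered pixel:", np.unravel_index(sharp.argmax(), sharp.shape))
```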
9. Bian L, Chang X, Xu H, Zhang J. Ultra-fast light-field microscopy with event detection. Light, Science & Applications 2024;13:306. doi: 10.1038/s41377-024-01603-1. PMID: 39511142. PMCID: PMC11544014.
Abstract
The event detection technique has been introduced to light-field microscopy, boosting its imaging speed by orders of magnitude while simultaneously enhancing axial resolution in scattering media.
Affiliation(s)
- Liheng Bian
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China
- Xuyang Chang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China
- Hanwen Xu
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China
- Jun Zhang
- State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, No 5 Zhongguancun South Street, Haidian District, 100081, Beijing, China.
10. Bai L, Cong L, Shi Z, Zhao Y, Zhang Y, Lu B, Zhang J, Xiong ZQ, Xu N, Mu Y, Wang K. Volumetric voltage imaging of neuronal populations in the mouse brain by confocal light-field microscopy. Nat Methods 2024;21:2160-2170. doi: 10.1038/s41592-024-02458-5. PMID: 39379535.
Abstract
Voltage imaging measures neuronal activity directly and holds promise for understanding information processing within individual neurons and across populations. However, imaging voltage over large neuronal populations has been challenging owing to the simultaneous requirements of high imaging speed and signal-to-noise ratio, large volume coverage and low photobleaching rate. Here, to overcome this challenge, we developed a confocal light-field microscope that surpassed the traditional limits in speed and noise performance by incorporating a speed-enhanced camera, a fast and robust scanning mechanism, laser-speckle-noise elimination and optimized light efficiency. With this method, we achieved simultaneous recording from more than 300 spiking neurons within an 800-µm-diameter and 180-µm-thick volume in the mouse cortex, for more than 20 min. By integrating the spatial and voltage activity profiles, we have mapped three-dimensional neural coordination patterns in awake mouse brains. Our method is robust for routine application in volumetric voltage imaging.
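As a generic first pass on traces extracted from such voltage recordings, one can compute dF/F against a running baseline and flag threshold-crossing events, as sketched below. The baseline window, percentile, and threshold are illustrative assumptions, not the processing pipeline used in the paper.

```python
# dF/F computation and simple threshold-based event detection on a synthetic trace.
import numpy as np

def dff(trace, baseline_win=200):
    """dF/F against a running 20th-percentile baseline over the previous window."""
    baseline = np.array([
        np.percentile(trace[max(0, i - baseline_win):i + 1], 20)
        for i in range(len(trace))
    ])
    return (trace - baseline) / np.maximum(baseline, 1e-9)

def detect_events(dff_trace, k=4.0):
    """Onsets where dF/F rises above k robust standard deviations (MAD-based)."""
    mad = np.median(np.abs(dff_trace - np.median(dff_trace)))
    thresh = np.median(dff_trace) + k * 1.4826 * mad
    above = dff_trace > thresh
    return np.flatnonzero(above & ~np.roll(above, 1))

# synthetic 1 kHz trace: slow drift + noise + three fast transients
rng = np.random.default_rng(1)
t = np.arange(5000)
trace = 100 + 0.002 * t + rng.normal(0, 1.0, t.size)
for s in (1200, 2600, 4100):
    trace[s:s + 5] += 25.0
print("detected event frames:", detect_events(dff(trace)))
```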
Affiliation(s)
- Lu Bai
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Lin Cong
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Ziqi Shi
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yuchen Zhao
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yujie Zhang
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Bin Lu
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Jing Zhang
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Zhi-Qi Xiong
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- School of Life Science and Technology, ShanghaiTech University, Shanghai, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai, China
- Ninglong Xu
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai, China
- Yu Mu
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Kai Wang
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China.
- University of Chinese Academy of Sciences, Beijing, China.
- School of Life Science and Technology, ShanghaiTech University, Shanghai, China.
11. Zhang Y, Wang M, Zhu Q, Guo Y, Liu B, Li J, Yao X, Kong C, Zhang Y, Huang Y, Qi H, Wu J, Guo ZV, Dai Q. Long-term mesoscale imaging of 3D intercellular dynamics across a mammalian organ. Cell 2024;187:6104-6122.e25. doi: 10.1016/j.cell.2024.08.026. PMID: 39276776.
Abstract
A comprehensive understanding of physio-pathological processes necessitates non-invasive intravital three-dimensional (3D) imaging over varying spatial and temporal scales. However, huge data throughput, optical heterogeneity, surface irregularity, and phototoxicity pose great challenges, leading to an inevitable trade-off between volume size, resolution, speed, sample health, and system complexity. Here, we introduce a compact real-time, ultra-large-scale, high-resolution 3D mesoscope (RUSH3D), achieving uniform resolutions of 2.6 × 2.6 × 6 μm3 across a volume of 8,000 × 6,000 × 400 μm3 at 20 Hz with low phototoxicity. Through the integration of multiple computational imaging techniques, RUSH3D facilitates a 13-fold improvement in data throughput and an orders-of-magnitude reduction in system size and cost. With these advantages, we observed premovement neural activity and cross-day visual representational drift across the mouse cortex, the formation and progression of multiple germinal centers in mouse inguinal lymph nodes, and heterogeneous immune responses following traumatic brain injury-all at single-cell resolution, opening up a horizon for intravital mesoscale study of large-scale intercellular interactions at the organ level.
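To make the "huge data throughput" concrete, the snippet below gives a back-of-the-envelope estimate from the field of view, resolution, and volume rate quoted above, assuming roughly two samples per resolution element; the result is illustrative arithmetic, not a figure reported in the paper.

```python
# Rough voxel-throughput estimate from the stated imaging specifications.
vol_um = (8000, 6000, 400)      # stated field of view (x, y, z), micrometres
res_um = (2.6, 2.6, 6.0)        # stated resolution (x, y, z), micrometres
rate_hz = 20                    # stated volume rate
voxels = 1.0
for extent, res in zip(vol_um, res_um):
    voxels *= extent / (res / 2)            # assume ~2 samples per resolution element
print(f"~{voxels:.1e} voxels per volume, ~{voxels * rate_hz:.1e} voxels per second")
```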
Affiliation(s)
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Mingrui Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Qiyu Zhu
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Yuduo Guo
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Bo Liu
- School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China
- Jiamin Li
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Xiao Yao
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Chui Kong
- School of Information Science and Technology, Fudan University, Shanghai 200433, China
- Yi Zhang
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Yuchao Huang
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China
- Hai Qi
- School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China.
- Zengcai V Guo
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; School of Basic Medical Sciences, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Center for Life Sciences, Beijing 100084, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China.
12. Zhu E, Li YR, Margolis S, Wang J, Wang K, Zhang Y, Wang S, Park J, Zheng C, Yang L, Chu A, Zhang Y, Gao L, Hsiai TK. Frontiers in artificial intelligence-directed light-sheet microscopy for uncovering biological phenomena and multi-organ imaging. VIEW 2024;5:20230087. doi: 10.1002/viw.20230087. PMID: 39478956. PMCID: PMC11521201.
Abstract
Light-sheet fluorescence microscopy (LSFM) introduces fast scanning of biological phenomena with deep photon penetration and minimal phototoxicity. This advancement represents a significant shift in 3-D imaging of large-scale biological tissues and 4-D (space + time) imaging of small live animals. The large data associated with LSFM requires efficient imaging acquisition and analysis with the use of artificial intelligence (AI)/machine learning (ML) algorithms. To this end, AI/ML-directed LSFM is an emerging area for multi-organ imaging and tumor diagnostics. This review will present the development of LSFM and highlight various LSFM configurations and designs for multi-scale imaging. Optical clearance techniques will be compared for effective reduction in light scattering and optimal deep-tissue imaging. This review will further depict a diverse range of research and translational applications, from small live organisms to multi-organ imaging to tumor diagnosis. In addition, this review will address AI/ML-directed imaging reconstruction, including the application of convolutional neural networks (CNNs) and generative adversarial networks (GANs). In summary, the advancements of LSFM have enabled effective and efficient post-imaging reconstruction and data analyses, underscoring LSFM's contribution to advancing fundamental and translational research.
Affiliation(s)
- Enbo Zhu
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Yan-Ruide Li
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Samuel Margolis
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Jing Wang
- Department of Bioengineering, UCLA, California, 90095, USA
- Kaidong Wang
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
- Yaran Zhang
- Department of Bioengineering, UCLA, California, 90095, USA
- Shaolei Wang
- Department of Bioengineering, UCLA, California, 90095, USA
- Jongchan Park
- Department of Bioengineering, UCLA, California, 90095, USA
- Charlie Zheng
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Lili Yang
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research, UCLA, California, 90095, USA
- Jonsson Comprehensive Cancer Center, David Geffen School of Medicine, UCLA, California, 90095, USA
- Molecular Biology Institute, UCLA, California, 90095, USA
- Alison Chu
- Division of Neonatology and Developmental Biology, Department of Pediatrics, David Geffen School of Medicine, UCLA, California, 90095, USA
- Yuhua Zhang
- Doheny Eye Institute, Department of Ophthalmology, UCLA, California, 90095, USA
- Liang Gao
- Department of Bioengineering, UCLA, California, 90095, USA
- Tzung K. Hsiai
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
13. Ma Y, Yi C, Zhou Y, Wang Z, Zhao Y, Zhu L, Wang J, Gao S, Liu J, Yuan X, Wang Z, Liu B, Fei P. Semantic redundancy-aware implicit neural compression for multidimensional biomedical image data. Commun Biol 2024;7:1081. doi: 10.1038/s42003-024-06788-0. PMID: 39227646. PMCID: PMC11371832.
Abstract
The surge in advanced imaging techniques has generated vast biomedical image data with diverse dimensions in space, time, and spectrum, posing major challenges to conventional compression techniques for image storage, transmission, and sharing. Here, we propose an intelligent image compression approach that, for the first time, demonstrates and exploits the semantic redundancy of biomedical data in the implicit neural function domain. This semantic-redundancy-based implicit neural compression guided by a saliency map (SINCS) notably improves compression efficiency for image data of arbitrary dimensionality in terms of both compression ratio and fidelity. Moreover, with weight-transfer and residual entropy coding strategies, it improves compression speed while maintaining high quality. SINCS achieves high-quality compression with over 2,000-fold compression ratios on 2D, 2D-T, 3D, and 4D biomedical images of diverse targets, ranging from single viruses to entire human organs, and allows downstream tasks such as object segmentation and quantitative analysis to be performed reliably and efficiently.
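The core idea behind implicit neural compression is to fit a small coordinate-to-intensity network to an image and store the network weights instead of the pixels. The sketch below shows that idea on a synthetic 2D image; the network size, positional encoding, and training budget are illustrative assumptions and omit the saliency guidance, weight transfer, and residual entropy coding that SINCS adds.

```python
# Fit a small coordinate MLP to an image; the weights are the "compressed" file.
import torch
import torch.nn as nn

def posenc(xy, n_freq=8):
    """Fourier positional encoding of (N, 2) coordinates in [-1, 1]."""
    feats = [xy]
    for k in range(n_freq):
        feats += [torch.sin((2**k) * torch.pi * xy), torch.cos((2**k) * torch.pi * xy)]
    return torch.cat(feats, dim=-1)

H = W = 256
yy, xx = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xx, yy], dim=-1).reshape(-1, 2)
image = (torch.sin(4 * torch.pi * xx) * torch.cos(3 * torch.pi * yy)).reshape(-1, 1)  # stand-in image

net = nn.Sequential(
    nn.Linear(2 + 2 * 2 * 8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
enc = posenc(coords)                       # encode coordinates once
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(enc), image)
    loss.backward()
    opt.step()

n_params = sum(p.numel() for p in net.parameters())
print(f"fit MSE {loss.item():.4f}; {n_params} weights vs {H * W} pixels "
      f"-> ~{H * W / n_params:.1f}x smaller before any weight quantization")
```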
Affiliation(s)
- Yifan Ma
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Chengqiang Yi
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Yao Zhou
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Zhaofei Wang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Yuxuan Zhao
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Lanxin Zhu
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Jie Wang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Shimeng Gao
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Jianchao Liu
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Xinyue Yuan
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Zhaoqiang Wang
- Department of Bioengineering, Henry Samueli School of Engineering and Applied Science, University of California, Los Angeles, Los Angeles, 90095, USA
- Binbing Liu
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China.
- Peng Fei
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China.
- Advanced Biomedical Imaging Facility Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.
14. Ma C, Tan W, He R, Yan B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat Methods 2024;21:1558-1567. doi: 10.1038/s41592-024-02244-3. PMID: 38609490.
Abstract
Fluorescence microscopy-based image restoration has received widespread attention in the life sciences and, benefiting from deep learning technology, has led to significant progress. However, most current task-specific methods have limited generalizability to different fluorescence microscopy-based image restoration problems. Here, we seek to improve generalizability and explore the potential of applying a pretrained foundation model to fluorescence microscopy-based image restoration. We provide a universal fluorescence microscopy-based image restoration (UniFMIR) model to address different restoration problems, and show that UniFMIR offers higher image restoration precision, better generalization and increased versatility. Evaluations on five tasks and 14 datasets covering a wide range of microscopy imaging modalities and biological samples show that the pretrained UniFMIR can effectively transfer knowledge to a specific situation via fine-tuning, uncover clear nanoscale biomolecular structures and facilitate high-quality imaging. This work has the potential to inspire new research directions in fluorescence microscopy-based image restoration.
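The fine-tuning step referred to above follows the standard transfer-learning pattern: reuse a pretrained backbone and adapt a lightweight head to the new restoration task. The sketch below illustrates that pattern with a tiny convolutional stand-in and synthetic data; it is not the UniFMIR architecture or training protocol.

```python
# Transfer-learning pattern: freeze a "pretrained" backbone, train a new head.
import torch
import torch.nn as nn

backbone = nn.Sequential(                    # pretend this was pretrained elsewhere
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
)
head = nn.Conv2d(32, 1, 3, padding=1)        # new head for the target task

for p in backbone.parameters():              # freeze transferred knowledge
    p.requires_grad_(False)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
noisy = torch.rand(4, 1, 64, 64)             # synthetic paired data for illustration
clean = torch.rand(4, 1, 64, 64)

for step in range(20):
    opt.zero_grad()
    restored = head(backbone(noisy))
    loss = nn.functional.l1_loss(restored, clean)
    loss.backward()
    opt.step()
print(f"fine-tuning loss after {step + 1} steps: {loss.item():.4f}")
```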
Affiliation(s)
- Chenxi Ma
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Weimin Tan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Ruian He
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Bo Yan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China.
15. Wang K, Xing F, Lin B, Su L, Liu J, Yang X, Tan H, Wang D. Synthetic color-and-depth encoded (sCade) illumination-based high-resolution light field particle imaging velocimetry. Optics Express 2024;32:27042-27057. doi: 10.1364/oe.526089. PMID: 39538552.
Abstract
Light-field particle imaging velocimetry (LF-PIV) is widely used in large-scale flow field measurement scenarios due to its instant 3D imaging capability. However, conventional LF-PIV systems suffer from low axial resolution and therefore have limited application in high-resolution, volumetric velocity measurements. Here, we report the use of synthetic color-and-depth-encoded (sCade) illumination to improve the axial resolution of LF-PIV. sCade LF-PIV illuminates the imaging region with a color-and-depth-encoded beam synthesized from structured beams of three lasers with distinct wavelengths and attains high-fidelity particle localization by decoding the color and depth information encoded in the acquired image. We systematically characterized the system performance by imaging particles and obtained a 29-fold improvement in axial resolution compared with traditional LF-PIV. The high axial resolution of sCade LF-PIV allowed it to reconstruct vortices generated by square lid-driven cavity flow and a stirring disk with higher accuracy and smaller errors than the conventional method, highlighting the promise of sCade LF-PIV for high-resolution, volumetric flow measurement applications. This approach can favorably advance the development of fluid measurement technology.
16. Opatovski N, Nehme E, Zoref N, Barzilai I, Orange Kedem R, Ferdman B, Keselman P, Alalouf O, Shechtman Y. Depth-enhanced high-throughput microscopy by compact PSF engineering. Nat Commun 2024;15:4861. doi: 10.1038/s41467-024-48502-y. PMID: 38849376. PMCID: PMC11161645.
Abstract
High-throughput microscopy is vital for screening applications, where three-dimensional (3D) cellular models play a key role. However, due to defocus susceptibility, current 3D high-throughput microscopes require axial scanning, which lowers throughput and increases photobleaching and photodamage. Point spread function (PSF) engineering is an optical method that enables various 3D imaging capabilities, yet it has not been implemented in high-throughput microscopy due to the cumbersome optical extension it typically requires. Here we demonstrate compact PSF engineering in the objective lens, which allows us to enhance the imaging depth of field and, combined with deep learning, recover 3D information using single snapshots. Beyond the applications shown here, this work showcases the usefulness of high-throughput microscopy in obtaining training data for deep learning-based algorithms, applicable to a variety of microscopy modalities.
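One classic flavor of PSF engineering encodes depth in the shape of the PSF so that a single 2D snapshot reports z. The sketch below uses a toy astigmatic Gaussian calibration (widths changing in opposite directions with defocus) and a moment-based decoder; both are invented illustrations and differ from the in-objective phase design and deep-learning decoder described in the paper.

```python
# Toy astigmatism-based depth decoding: spot ellipticity encodes z.
import numpy as np

def widths(z_um):
    """Toy astigmatic calibration: sigma_x grows, sigma_y shrinks with z."""
    return 1.5 + 0.4 * z_um, 1.5 - 0.4 * z_um

def render_spot(z_um, size=25):
    sx, sy = widths(z_um)
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    return np.exp(-(xx**2) / (2 * sx**2) - (yy**2) / (2 * sy**2))

def estimate_z(spot):
    """Recover z from the second moments of the measured spot."""
    total = spot.sum()
    r = np.arange(spot.shape[0]) - spot.shape[0] // 2
    xx, yy = np.meshgrid(r, r)
    sx = np.sqrt((spot * xx**2).sum() / total)
    sy = np.sqrt((spot * yy**2).sum() / total)
    return (sx - sy) / (2 * 0.4)           # invert the toy calibration

for z_true in (-1.0, 0.0, 1.0):
    print(f"true z {z_true:+.1f} um -> estimated {estimate_z(render_spot(z_true)):+.2f} um")
```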
Affiliation(s)
- Nadav Opatovski
- Russell Berrie Nanotechnology Institute, Technion - Israel Institute of Technology, Haifa, Israel
- Elias Nehme
- Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Department of Electrical and Computer Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Noam Zoref
- Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Ilana Barzilai
- Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Reut Orange Kedem
- Russell Berrie Nanotechnology Institute, Technion - Israel Institute of Technology, Haifa, Israel
- Boris Ferdman
- Russell Berrie Nanotechnology Institute, Technion - Israel Institute of Technology, Haifa, Israel
- Paul Keselman
- Sartorius Stedim North America Inc., Bohemia, NY, USA
- Onit Alalouf
- Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Yoav Shechtman
- Russell Berrie Nanotechnology Institute, Technion - Israel Institute of Technology, Haifa, Israel.
- Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel.
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA.
17. Wang N, Dong G, Qiao R, Yin X, Lin S. Bringing Artificial Intelligence (AI) into Environmental Toxicology Studies: A Perspective of AI-Enabled Zebrafish High-Throughput Screening. Environmental Science & Technology 2024;58:9487-9499. doi: 10.1021/acs.est.4c00480. PMID: 38691763.
Abstract
The booming development of artificial intelligence (AI) has brought excitement to many research fields that could benefit from its big data analysis capability for causative relationship establishment and knowledge generation. In toxicology studies using zebrafish, the microscopic images and videos that illustrate the developmental stages, phenotypic morphologies, and animal behaviors possess great potential to facilitate rapid hazard assessment and dissection of the toxicity mechanism of environmental pollutants. However, the traditional manual observation approach is both labor-intensive and time-consuming. In this Perspective, we aim to summarize the current AI-enabled image and video analysis tools to realize the full potential of AI. For image analysis, AI-based tools allow fast and objective determination of morphological features and extraction of quantitative information from images of various sorts. The advantages of providing accurate and reproducible results while avoiding human intervention play a critical role in speeding up the screening process. For video analysis, AI-based tools enable the tracking of dynamic changes in both microscopic cellular events and macroscopic animal behaviors. The subtle changes revealed by video analysis could serve as sensitive indicators of adverse outcomes. With AI-based toxicity analysis in its infancy, exciting developments and applications are expected to appear in the years to come.
Affiliation(s)
- Nan Wang, Gongqing Dong, Ruxia Qiao, Xiang Yin, Sijie Lin: College of Environmental Science and Engineering, Biomedical Multidisciplinary Innovation Research Institute, Shanghai East Hospital, Tongji University, Shanghai 200092, People's Republic of China; Key Laboratory of Yangtze River Water Environment, Ministry of Education, Shanghai Institute of Pollution Control and Ecological Security, Shanghai 200092, People's Republic of China

18
Luo X, Lu Z, Jin M, Chen S, Yang J. Efficient high-resolution fluorescence projection imaging over an extended depth of field through optical hardware and deep learning optimizations. BIOMEDICAL OPTICS EXPRESS 2024; 15:3831-3847. [PMID: 38867796 PMCID: PMC11166417 DOI: 10.1364/boe.523312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/07/2024] [Revised: 04/27/2024] [Accepted: 05/14/2024] [Indexed: 06/14/2024]
Abstract
Optical microscopy has witnessed notable advancements but has also become more costly and complex. Conventional wide-field microscopy (WFM) has low resolution and a shallow depth of field (DOF), which limits its applications in practical biological experiments. Recently, confocal and light-sheet microscopy have become major workhorses for biology that incorporate high-precision scanning to perform imaging within an extended DOF, but at the cost of added expense, complexity, and reduced imaging speed. Here, we propose deep focus microscopy, an efficient framework optimized in both hardware and algorithms to address the tradeoff between resolution and DOF. Our deep focus microscopy achieves large-DOF and high-resolution projection imaging by integrating a deep focus network (DFnet) into light-field microscopy (LFM) setups. Based on our constructed dataset, deep focus microscopy features a significantly enhanced spatial resolution of ∼260 nm, an extended DOF of over 30 µm, and broad generalization across diverse sample structures. It also reduces the computational costs by four orders of magnitude compared to conventional LFM technologies. We demonstrate the excellent performance of deep focus microscopy in vivo, including long-term observations of cell division and migrasome formation in zebrafish embryos and mouse livers at high resolution without background contamination.
Affiliation(s)
- Xin Luo: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Zhi Lu: Department of Automation, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Manchang Jin: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Shuai Chen: Department of Gastroenterology and Hepatology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Jingyu Yang: School of Electrical and Information Engineering, Tianjin University, Tianjin, China

19
Barros BJ, Cunha JPS. Neurophotonics: a comprehensive review, current challenges and future trends. Front Neurosci 2024; 18:1382341. [PMID: 38765670 PMCID: PMC11102054 DOI: 10.3389/fnins.2024.1382341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2024] [Accepted: 03/21/2024] [Indexed: 05/22/2024] Open
Abstract
The human brain, with its vast network of billions of neurons and trillions of synapses (connections) between diverse cell types, remains one of the greatest mysteries in science and medicine. Despite extensive research, an understanding of the underlying mechanisms that drive normal behaviors and response to disease states is still limited. Advancement in the Neuroscience field and development of therapeutics for related pathologies requires innovative technologies that can provide a dynamic and systematic understanding of the interactions between neurons and neural circuits. In this work, we provide an up-to-date overview of the evolution of neurophotonic approaches in the last 10 years through a multi-source, literature analysis. From an initial corpus of 243 papers retrieved from Scopus, PubMed and WoS databases, we have followed the PRISMA approach to select 56 papers in the area. Following a full-text evaluation of these 56 scientific articles, six main areas of applied research were identified and discussed: (1) Advanced optogenetics, (2) Multimodal neural interfaces, (3) Innovative therapeutics, (4) Imaging devices and probes, (5) Remote operations, and (6) Microfluidic platforms. For each area, the main technologies selected are discussed according to the photonic principles applied, the neuroscience application evaluated and the more indicative results of efficiency and scientific potential. This detailed analysis is followed by an outlook of the main challenges tackled over the last 10 years in the Neurophotonics field, as well as the main technological advances regarding specificity, light delivery, multimodality, imaging, materials and system designs. We conclude with a discussion of considerable challenges for future innovation and translation in Neurophotonics, from light delivery within the brain to physical constraints and data management strategies.
Affiliation(s)
- Beatriz Jacinto Barros: INESC TEC – Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- João P. S. Cunha: INESC TEC – Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; Faculty of Engineering, University of Porto, Porto, Portugal

20
Ryu J, Nejatbakhsh A, Torkashvand M, Gangadharan S, Seyedolmohadesin M, Kim J, Paninski L, Venkatachalam V. Versatile multiple object tracking in sparse 2D/3D videos via deformable image registration. PLoS Comput Biol 2024; 20:e1012075. [PMID: 38768230 PMCID: PMC11142724 DOI: 10.1371/journal.pcbi.1012075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Revised: 05/31/2024] [Accepted: 04/14/2024] [Indexed: 05/22/2024] Open
Abstract
Tracking body parts in behaving animals, extracting fluorescence signals from cells embedded in deforming tissue, and analyzing cell migration patterns during development all require tracking objects with partially correlated motion. As dataset sizes increase, manual tracking of objects becomes prohibitively inefficient and slow, necessitating automated and semi-automated computational tools. Unfortunately, existing methods for multiple object tracking (MOT) are either developed for specific datasets and hence do not generalize well to other datasets, or require large amounts of training data that are not readily available. This is further exacerbated when tracking fluorescent sources in moving and deforming tissues, where the lack of unique features and sparsely populated images create a challenging environment, especially for modern deep learning techniques. By leveraging technology recently developed for spatial transformer networks, we propose ZephIR, an image registration framework for semi-supervised MOT in 2D and 3D videos. ZephIR can generalize to a wide range of biological systems by incorporating adjustable parameters that encode spatial (sparsity, texture, rigidity) and temporal priors of a given data class. We demonstrate the accuracy and versatility of our approach in a variety of applications, including tracking the body parts of a behaving mouse and neurons in the brain of a freely moving C. elegans. We provide an open-source package along with a web-based graphical user interface that allows users to provide small numbers of annotations to interactively improve tracking results.
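For readers unfamiliar with registration-style tracking, the following minimal sketch propagates sparse keypoint annotations from one frame to the next using patch-wise phase correlation from scikit-image. It conveys the general idea only and is not ZephIR's deformable model; the window size and the (row, col) keypoint format are assumptions.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def track_keypoints(frame_prev, frame_next, keypoints, win=15):
    """Shift each annotated keypoint (row, col) by the local displacement
    estimated from phase correlation between patches of consecutive frames."""
    half = win // 2
    updated = []
    for r, c in keypoints:
        r, c = int(round(r)), int(round(c))
        ref = frame_prev[r - half:r + half + 1, c - half:c + half + 1]
        mov = frame_next[r - half:r + half + 1, c - half:c + half + 1]
        if ref.shape != (win, win) or mov.shape != (win, win):
            updated.append((r, c))          # keypoint too close to the border: keep it
            continue
        shift, _, _ = phase_cross_correlation(ref, mov, upsample_factor=10)
        # 'shift' aligns mov to ref, so the object itself moved by -shift
        updated.append((r - shift[0], c - shift[1]))
    return updated
```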
Affiliation(s)
- James Ryu, Mahdi Torkashvand, Sahana Gangadharan, Maedeh Seyedolmohadesin, Jinmahn Kim, Vivek Venkatachalam: Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
- Amin Nejatbakhsh, Liam Paninski: Department of Neuroscience, Columbia University, New York, New York, United States of America

21
Song P, Jadan HV, Howe CL, Foust AJ, Dragotti PL. Model-Based Explainable Deep Learning for Light-Field Microscopy Imaging. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2024; 33:3059-3074. [PMID: 38656840 PMCID: PMC11100862 DOI: 10.1109/tip.2024.3387297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 01/27/2024] [Accepted: 03/12/2024] [Indexed: 04/26/2024]
Abstract
In modern neuroscience, observing the dynamics of large populations of neurons is a critical step in understanding how networks of neurons process information. Light-field microscopy (LFM) has emerged as a type of scanless, high-speed, three-dimensional (3D) imaging tool, particularly attractive for this purpose. Imaging neuronal activity using LFM calls for the development of novel computational approaches that fully exploit domain knowledge embedded in physics and optics models, as well as enabling high interpretability and transparency. To this end, we propose a model-based explainable deep learning approach for LFM. Different from purely data-driven methods, the proposed approach integrates wave-optics theory, sparse representation and non-linear optimization with the artificial neural network. In particular, the architecture of the proposed neural network is designed following precise signal and optimization models. Moreover, the network's parameters are learned from a training dataset using a novel training strategy that integrates layer-wise training with tailored knowledge distillation. This design allows the network to take advantage of both domain knowledge and newly learned features, combining the benefits of model-based and learning-based methods and thereby contributing to superior interpretability, transparency and performance. By evaluating on both structural and functional LFM data obtained from scattering mammalian brain tissues, we demonstrate the capabilities of the proposed approach to achieve fast, robust 3D localization of neuron sources and accurate neural activity identification.
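The paper's architecture follows its own signal and optimization models; as a generic illustration of the underlying idea of unrolling an optimization algorithm into network layers, a minimal LISTA-style sketch in PyTorch is shown below. The dimensions, initialization, and layer count are arbitrary assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Generic algorithm-unrolling sketch: each 'layer' is one ISTA step
    x <- soft_threshold(x + step * W^T (y - W x), theta),
    with the operator W, step sizes and thresholds learned from data."""
    def __init__(self, m=128, n=512, n_layers=8):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(m, n))        # measurement/dictionary
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers))  # per-layer thresholds
        self.step = nn.Parameter(torch.ones(n_layers))         # per-layer step sizes
        self.n_layers = n_layers

    def forward(self, y):                                      # y: (batch, m) measurements
        x = torch.zeros(y.shape[0], self.W.shape[1], device=y.device)
        for k in range(self.n_layers):
            residual = y - x @ self.W.T                        # (batch, m)
            x = x + self.step[k] * residual @ self.W           # gradient step, (batch, n)
            x = torch.sign(x) * torch.relu(x.abs() - self.theta[k])  # soft threshold
        return x

# Training would minimize, e.g., the MSE between forward(y) and reference reconstructions.
```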
Affiliation(s)
- Pingfan Song: Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, U.K.
- Herman Verinaz Jadan: Faculty of Electrical and Computer Engineering, Escuela Superior Politécnica del Litoral (ESPOL), Guayaquil EC090903, Ecuador
- Carmel L. Howe: Department of Chemical Physiology and Biochemistry, Oregon Health and Science University, Portland, OR 97239, USA
- Amanda J. Foust: Center for Neurotechnology, Department of Bioengineering, Imperial College London, London SW7 2AZ, U.K.
- Pier Luigi Dragotti: Department of Electronic and Electrical Engineering, Imperial College London, London SW7 2AZ, U.K.

22
Porta-de-la-Riva M, Morales-Curiel LF, Carolina Gonzalez A, Krieg M. Bioluminescence as a functional tool for visualizing and controlling neuronal activity in vivo. NEUROPHOTONICS 2024; 11:024203. [PMID: 38348359 PMCID: PMC10861157 DOI: 10.1117/1.nph.11.2.024203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 01/18/2024] [Accepted: 01/19/2024] [Indexed: 02/15/2024]
Abstract
The use of bioluminescence as a reporter for physiology in neuroscience is as old as the discovery of the calcium-dependent photon emission of aequorin. Over the years, luciferases have been largely replaced by fluorescent reporters, but recently, the field has seen a renaissance of bioluminescent probes, catalyzed by unique developments in imaging technology, bioengineering, and biochemistry to produce luciferases with previously unseen colors and intensity. This is not surprising as the advantages of bioluminescence make luciferases very attractive for noninvasive, longitudinal in vivo observations without the need of an excitation light source. Here, we review how the development of dedicated and specific sensor-luciferases afforded, among others, transcranial imaging of calcium and neurotransmitters, or cellular metabolites and physical quantities such as forces and membrane voltage. Further, the increased versatility and light output of luciferases have paved the way for a new field of functional bioluminescence optogenetics, in which the photon emission of the luciferase is coupled to the gating of a photosensor, e.g., a channelrhodopsin and we review how they have been successfully used to engineer synthetic neuronal connections. Finally, we provide a primer to consider important factors in setting up functional bioluminescence experiments, with a particular focus on the genetic model Caenorhabditis elegans, and discuss the leading challenges that the field needs to overcome to regain a competitive advantage over fluorescence modalities. Together, our paper caters to experienced users of bioluminescence as well as novices who would like to experience the advantages of luciferases in their own hand.
Affiliation(s)
- Montserrat Porta-de-la-Riva, Luis-Felipe Morales-Curiel, Adriana Carolina Gonzalez, Michael Krieg: ICFO—Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology, Castelldefels, Barcelona, Spain

23
Page Vizcaíno J, Symvoulidis P, Wang Z, Jelten J, Favaro P, Boyden ES, Lasser T. Fast light-field 3D microscopy with out-of-distribution detection and adaptation through conditional normalizing flows. BIOMEDICAL OPTICS EXPRESS 2024; 15:1219-1232. [PMID: 38404325 PMCID: PMC10890860 DOI: 10.1364/boe.504039] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Revised: 11/09/2023] [Accepted: 11/20/2023] [Indexed: 02/27/2024]
Abstract
Real-time 3D fluorescence microscopy is crucial for the spatiotemporal analysis of live organisms, such as neural activity monitoring. The eXtended field-of-view light field microscope (XLFM), also known as Fourier light field microscope, is a straightforward, single snapshot solution to achieve this. The XLFM acquires spatial-angular information in a single camera exposure. In a subsequent step, a 3D volume can be algorithmically reconstructed, making it exceptionally well-suited for real-time 3D acquisition and potential analysis. Unfortunately, traditional reconstruction methods (like deconvolution) require lengthy processing times (0.0220 Hz), hampering the speed advantages of the XLFM. Neural network architectures can overcome the speed constraints but do not automatically provide a way to certify the realism of their reconstructions, which is essential in the biomedical realm. To address these shortcomings, this work proposes a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity based on a conditional normalizing flow. It reconstructs volumes at 8 Hz spanning 512x512x96 voxels, and it can be trained in under two hours due to the small dataset requirements (50 image-volume pairs). Furthermore, normalizing flows provide a way to compute the exact likelihood of a sample. This allows us to certify whether a predicted output is in- or out-of-distribution (OOD) and to retrain the system when a novel sample is detected. We evaluate the proposed method on a cross-validation approach involving multiple in-distribution samples (genetically identical zebrafish) and various out-of-distribution ones.
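The out-of-distribution check relies on the exact likelihood a normalizing flow assigns to a sample. As a minimal stand-in for that mechanism (a diagonal Gaussian density takes the place of a trained flow; the feature arrays and the percentile threshold are assumptions), the decision rule can be sketched as follows.

```python
import numpy as np

def fit_gaussian(features):
    """Fit a diagonal Gaussian to in-distribution feature vectors of shape (N, D)."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6
    return mu, var

def log_likelihood(x, mu, var):
    """Per-sample log density under the fitted diagonal Gaussian."""
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var), axis=-1)

def is_out_of_distribution(x, mu, var, threshold):
    """Flag samples whose log-likelihood falls below a calibration threshold,
    e.g. the 1st percentile of log-likelihoods on held-out in-distribution data."""
    return log_likelihood(x, mu, var) < threshold

# Usage sketch (the arrays below are hypothetical):
# mu, var = fit_gaussian(train_features)
# thr = np.percentile(log_likelihood(val_features, mu, var), 1)
# flags = is_out_of_distribution(test_features, mu, var, thr)
```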
Affiliation(s)
- Josué Page Vizcaíno: Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information and Technology, Technical University of Munich, Germany; Munich Institute of Biomedical Engineering, Technical University of Munich, Germany
- Zeguan Wang: Synthetic Neurobiology Group, Massachusetts Institute of Technology, USA
- Jonas Jelten: Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information and Technology, Technical University of Munich, Germany; Munich Institute of Biomedical Engineering, Technical University of Munich, Germany
- Paolo Favaro: Computer Vision Group, University of Bern, Switzerland
- Edward S. Boyden: Synthetic Neurobiology Group, Massachusetts Institute of Technology, USA
- Tobias Lasser: Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information and Technology, Technical University of Munich, Germany; Munich Institute of Biomedical Engineering, Technical University of Munich, Germany

24
Shi W, Quan H, Kong L. High-resolution 3D imaging in light-field microscopy through Stokes matrices and data fusion. OPTICS EXPRESS 2024; 32:3710-3722. [PMID: 38297586 DOI: 10.1364/oe.510728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Accepted: 01/08/2024] [Indexed: 02/02/2024]
Abstract
The trade-off between lateral and vertical resolution has long posed challenges to the efficient and widespread application of Fourier light-field microscopy, a highly scalable 3D imaging tool. Although existing methods for resolution enhancement can improve the measurement result to a certain extent, they come with limitations in terms of accuracy and applicable specimen types. To address these problems, this paper proposes a resolution enhancement scheme utilizing data fusion of polarization Stokes vectors and light-field information for a Fourier light-field microscopy system. By introducing surface normal vector information obtained from polarization measurements and integrating it with the light-field 3D point cloud data, the accuracy of the 3D reconstruction is greatly improved in the axial direction. Experimental results with a Fourier light-field 3D imaging microscope demonstrated a substantial enhancement of vertical resolution, with a depth resolution to depth of field ratio of 0.19%. This represents an approximately 44-fold improvement over the theoretical ratio before data fusion, enabling the system to access more detailed information with finer measurement accuracy for the test samples. This work not only provides a feasible solution for breaking the limitations imposed by traditional light-field microscope hardware configurations but also offers a superior 3D measurement approach in a more cost-effective and practical manner.
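As background on the polarization side of such data fusion, the linear Stokes parameters and the derived degree and angle of linear polarization can be computed from four polarizer-angle images as in the sketch below. This is a textbook formulation, not the authors' calibration or fusion pipeline.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from intensity images taken through a
    polarizer oriented at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def polarization_cues(s0, s1, s2):
    """Degree and angle of linear polarization; in shape-from-polarization these
    constrain the azimuth (and, via Fresnel models, the zenith) of the surface
    normal that is fused with the light-field point cloud."""
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)       # radians, defined modulo pi
    return dolp, aolp
```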

25
Hussen E, Aakel N, Shaito AA, Al-Asmakh M, Abou-Saleh H, Zakaria ZZ. Zebrafish ( Danio rerio) as a Model for the Study of Developmental and Cardiovascular Toxicity of Electronic Cigarettes. Int J Mol Sci 2023; 25:194. [PMID: 38203365 PMCID: PMC10779276 DOI: 10.3390/ijms25010194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2023] [Revised: 11/03/2023] [Accepted: 11/08/2023] [Indexed: 01/12/2024] Open
Abstract
The increasing popularity of electronic cigarettes (e-cigarettes) as an alternative to conventional tobacco products has raised concerns regarding their potential adverse effects. The cardiovascular system undergoes intricate processes forming the heart and blood vessels during fetal development. However, the precise impact of e-cigarette smoke and aerosols on these delicate developmental processes remains elusive. Previous studies have revealed changes in gene expression patterns, disruptions in cellular signaling pathways, and increased oxidative stress resulting from e-cigarette exposure. These findings indicate the potential for e-cigarettes to cause developmental and cardiovascular harm. This comprehensive review article discusses various aspects of electronic cigarette use, emphasizing the relevance of cardiovascular studies in Zebrafish for understanding the risks to human health. It also highlights novel experimental approaches and technologies while addressing their inherent challenges and limitations.
Affiliation(s)
- Eman Hussen: Biological Science Program, Department of Biological and Environmental Sciences, College of Arts and Sciences, Qatar University, Doha P.O. Box 2713, Qatar
- Nada Aakel: Biomedical Sciences Department, College of Health Sciences, Qatar University, Doha P.O. Box 2713, Qatar
- Abdullah A. Shaito: Biomedical Research Center, Qatar University, Doha P.O. Box 2713, Qatar
- Maha Al-Asmakh: Biomedical Sciences Department, College of Health Sciences, Qatar University, Doha P.O. Box 2713, Qatar
- Haissam Abou-Saleh: Biomedical Sciences Department, College of Health Sciences, Qatar University, Doha P.O. Box 2713, Qatar; Biomedical Research Center, Qatar University, Doha P.O. Box 2713, Qatar
- Zain Z. Zakaria: Medical and Health Sciences Office, QU Health, Qatar University, Doha P.O. Box 2713, Qatar

26
Yi C, Zhu L, Sun J, Wang Z, Zhang M, Zhong F, Yan L, Tang J, Huang L, Zhang YH, Li D, Fei P. Video-rate 3D imaging of living cells using Fourier view-channel-depth light field microscopy. Commun Biol 2023; 6:1259. [PMID: 38086994 PMCID: PMC10716377 DOI: 10.1038/s42003-023-05636-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 11/27/2023] [Indexed: 12/18/2023] Open
Abstract
Interrogation of subcellular biological dynamics occurring in a living cell often requires noninvasive imaging of the fragile cell with high spatiotemporal resolution across all three dimensions. It thereby poses big challenges to modern fluorescence microscopy implementations, because the limited photon budget in a live-cell imaging task forces conventional microscopy approaches to compromise between spatial resolution, volumetric imaging speed, and phototoxicity. Here, we incorporate a two-stage view-channel-depth (VCD) deep-learning reconstruction strategy with a Fourier light-field microscope based on a diffractive optical element to realize fast 3D super-resolution reconstructions of intracellular dynamics from single diffraction-limited 2D light-field measurements. This VCD-enabled Fourier light-field imaging approach (F-VCD) achieves video-rate (50 volumes per second) 3D imaging of intracellular dynamics at a high spatiotemporal resolution of ~180 nm × 180 nm × 400 nm with strong noise resistance, with which light-field images with a signal-to-noise ratio (SNR) down to -1.62 dB could be well reconstructed. With this approach, we successfully demonstrate 4D imaging of intracellular organelle dynamics, e.g., mitochondrial fission and fusion, across ~5,000 rounds of observation.
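A VCD-style network consumes angular views extracted from the raw sensor image. The sketch below illustrates only that rearrangement step, assuming a square lenslet grid with a hypothetical 13x13 pixels per microlens, which is not necessarily the geometry used in the paper.

```python
import numpy as np

def lenslet_to_views(raw, n_pix=13):
    """Rearrange a microlens-array image of shape (H*n_pix, W*n_pix) into an
    angular view stack of shape (n_pix*n_pix, H, W): view (u, v) collects
    pixel (u, v) from under every microlens."""
    H = raw.shape[0] // n_pix
    W = raw.shape[1] // n_pix
    raw = raw[:H * n_pix, :W * n_pix]                  # crop to a whole number of lenslets
    views = raw.reshape(H, n_pix, W, n_pix).transpose(1, 3, 0, 2)  # (u, v, H, W)
    return views.reshape(n_pix * n_pix, H, W)
```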
Affiliation(s)
- Chengqiang Yi, Lanxin Zhu, Jiahao Sun, Zhaofei Wang, Fenghe Zhong, Jiang Tang, Dongyu Li, Peng Fei: School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Meng Zhang: School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China; Britton Chance Center for Biomedical Photonics-MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Luxin Yan: State Education Commission Key Laboratory for Image Processing and Intelligent Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Liang Huang: Department of Hematology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430030, China
- Yu-Hui Zhang: Britton Chance Center for Biomedical Photonics-MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China

27
Sun J, Zhao F, Zhu L, Liu B, Fei P. Optical projection tomography reconstruction with few views using highly-generalizable deep learning at sinogram domain. BIOMEDICAL OPTICS EXPRESS 2023; 14:6260-6270. [PMID: 38420331 PMCID: PMC10898583 DOI: 10.1364/boe.500152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Revised: 10/31/2023] [Accepted: 10/31/2023] [Indexed: 03/02/2024]
Abstract
Optical projection tomography (OPT) reconstruction using a minimal number of measured views offers the potential to significantly reduce excitation dosage and greatly enhance temporal resolution in biomedical imaging. However, traditional algorithms for tomographic reconstruction exhibit severe quality degradation, e.g., the presence of streak artifacts, when the number of views is reduced. In this study, we introduce a novel domain evaluation method that quantifies domain complexity, thereby validating that the sinogram domain exhibits lower complexity than the conventional spatial domain. We then achieve robust deep-learning-based reconstruction with a feedback-based data initialization method in the sinogram domain, which shows strong generalization ability and notably improves the overall performance of OPT image reconstruction. This learning-based approach, termed SinNet, enables 4-view OPT reconstructions of diverse biological samples with robust generalization. It surpasses conventional OPT reconstruction approaches in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics, showing its potential to augment widely used OPT techniques.
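A purely classical baseline helps convey why working in the sinogram domain is attractive for few-view reconstruction: densifying the sinogram along the angle axis before filtered back-projection already suppresses streak artifacts. The sketch below uses naive periodic interpolation in place of the learned network; it is a crude approximation for illustration and is not the paper's method.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Simulate a 4-view acquisition of a test phantom.
image = rescale(shepp_logan_phantom(), 0.5)
theta_sparse = np.linspace(0.0, 180.0, 4, endpoint=False)
sino_sparse = radon(image, theta=theta_sparse)               # shape: (detectors, 4)

# Densify the sinogram along the angle axis by simple interpolation
# (treating each detector row as 180-degree periodic, an approximation).
theta_dense = np.linspace(0.0, 180.0, 90, endpoint=False)
sino_dense = np.stack(
    [np.interp(theta_dense, theta_sparse, row, period=180.0) for row in sino_sparse],
    axis=0,
)                                                            # shape: (detectors, 90)

recon_sparse = iradon(sino_sparse, theta=theta_sparse)       # heavy streak artifacts
recon_dense = iradon(sino_dense, theta=theta_dense)          # noticeably fewer streaks
```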
Affiliation(s)
- Jiahao Sun, Fang Zhao, Lanxin Zhu, BinBing Liu: School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Peng Fei: School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China; Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China

28
Gonzalez-Ramos S, Wang J, Cho JM, Zhu E, Park SK, In JG, Reddy ST, Castillo EF, Campen MJ, Hsiai TK. Integrating 4-D light-sheet fluorescence microscopy and genetic zebrafish system to investigate ambient pollutants-mediated toxicity. THE SCIENCE OF THE TOTAL ENVIRONMENT 2023; 902:165947. [PMID: 37543337 PMCID: PMC10659062 DOI: 10.1016/j.scitotenv.2023.165947] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 07/28/2023] [Accepted: 07/29/2023] [Indexed: 08/07/2023]
Abstract
Ambient air pollutants, including PM2.5 (aerodynamic diameter d ~2.5 μm), PM10 (d ~10 μm), and ultrafine particles (UFP: d < 0.1 μm) impart both short- and long-term toxicity to various organs, including cardiopulmonary, central nervous, and gastrointestinal systems. While rodents have been the principal animal model to elucidate air pollution-mediated organ dysfunction, zebrafish (Danio rerio) is genetically tractable for its short husbandry and life cycle to study ambient pollutants. Its electrocardiogram (ECG) resembles that of humans, and the fluorescent reporter-labeled tissues in the zebrafish system allow for screening a host of ambient pollutants that impair cardiovascular development, organ regeneration, and gut-vascular barriers. In parallel, the high spatiotemporal resolution of light-sheet fluorescence microscopy (LSFM) enables investigators to take advantage of the transparent zebrafish embryos and genetically labeled fluorescent reporters for imaging the dynamic cardiac structure and function at a single-cell resolution. In this context, our review highlights the integrated strengths of the genetic zebrafish system and LSFM for high-resolution and high-throughput investigation of ambient pollutants-mediated cardiac and intestinal toxicity.
Affiliation(s)
- Sheila Gonzalez-Ramos: Division of Cardiology, Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles, CA, USA; Department of Bioengineering, School of Engineering & Applied Science, University of California, Los Angeles, CA, USA
- Jing Wang: Department of Bioengineering, School of Engineering & Applied Science, University of California, Los Angeles, CA, USA
- Jae Min Cho, Enbo Zhu, Seul-Ki Park: Division of Cardiology, Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles, CA, USA
- Julie G In, Eliseo F Castillo: Division of Gastroenterology and Hepatology, Department of Internal Medicine, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
- Srinivasa T Reddy: Division of Cardiology, Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles, CA, USA; Department of Molecular and Medical Pharmacology, University of California, Los Angeles, CA, USA; Molecular Toxicology Interdepartmental Degree Program, Fielding School of Public Health, University of California, Los Angeles, CA, USA
- Matthew J Campen: Department of Pharmaceutical Sciences, College of Pharmacy, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
- Tzung K Hsiai: Division of Cardiology, Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles, CA, USA; Department of Bioengineering, School of Engineering & Applied Science, University of California, Los Angeles, CA, USA; Greater Los Angeles VA Healthcare System, Department of Medicine, Los Angeles, California, USA

29
Shi W, Kong L. Light field measurement of specular surfaces by multi-polarization and hybrid modulated illumination. APPLIED OPTICS 2023; 62:8060-8069. [PMID: 38038101 DOI: 10.1364/ao.499319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Accepted: 10/01/2023] [Indexed: 12/02/2023]
Abstract
Specular highlights present a challenge in light-field microscopy imaging, leading to loss of target information and incorrect observation results. Existing highlight-elimination methods suffer from high computational complexity, spurious information, and limited applicability. To address these issues, an adaptive multi-polarization illumination scheme is proposed to effectively eliminate highlight reflections and ensure uniform illumination without a complex optical setup or mechanical rotation. Using a multi-polarized light source with hybrid modulated illumination, the system achieves combined multi-polarized illumination and physical elimination of specular highlights. This is achieved by exploiting the different light contributions at different polarization angles and by using optimal-solution algorithms and precise electronic control. Experimental results show that the proposed adaptive illumination system can efficiently compute control parameters and precisely adjust the light source output in real time, resulting in a significant reduction of specular highlight pixels to less than 0.001% of the original image. In addition, the system ensures uniform illumination of the target area under different illumination configurations, further improving overall image quality. This study presents a multi-polarization-based adaptive de-highlighting system with potential applications in miniaturization, biological imaging, and materials analysis.
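A much simpler, non-adaptive baseline conveys why multi-polarization acquisition helps at all: specular highlights are strongly polarized while most diffuse signal is not, so a per-pixel minimum over polarizer angles already suppresses them. The sketch below is that baseline only, not the proposed hybrid-modulation scheme; the saturation threshold is an illustrative choice.

```python
import numpy as np

def suppress_specular(stack):
    """stack: (n_angles, H, W) images taken through a rotating or switchable
    polarizer. The per-pixel minimum retains mostly the unpolarized (diffuse)
    component and strongly attenuates polarized specular highlights."""
    return stack.min(axis=0)

def highlight_fraction(img, saturation=0.98):
    """Fraction of pixels near saturation, a simple metric for residual
    specular highlights."""
    return float((img >= saturation * img.max()).mean())
```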

30
Fazel M, Grussmayer KS, Ferdman B, Radenovic A, Shechtman Y, Enderlein J, Pressé S. Fluorescence Microscopy: a statistics-optics perspective. ARXIV 2023:arXiv:2304.01456v3. [PMID: 37064525 PMCID: PMC10104198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 04/18/2023]
Abstract
Fundamental properties of light unavoidably impose features on images collected using fluorescence microscopes. Modeling these features is ever more important in quantitatively interpreting microscopy images collected at scales on par or smaller than light's wavelength. Here we review the optics responsible for generating fluorescent images, fluorophore properties, microscopy modalities leveraging properties of both light and fluorophores, in addition to the necessarily probabilistic modeling tools imposed by the stochastic nature of light and measurement.
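A minimal example of the kind of probabilistic image-formation model discussed in such reviews combines PSF convolution with Poisson shot noise; the PSF, background, and gain values below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_widefield_image(obj, psf, background=2.0, gain=1.0, rng=None):
    """Blur the fluorophore density with the PSF, add a constant background,
    and draw Poisson-distributed photon counts (shot noise)."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = fftconvolve(obj, psf, mode="same")
    expected = gain * np.clip(blurred, 0, None) + background   # mean photon count
    return rng.poisson(expected).astype(float)
```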
Affiliation(s)
- Mohamadreza Fazel, Steve Pressé: Department of Physics, Arizona State University, Tempe, Arizona, USA; Center for Biological Physics, Arizona State University, Tempe, Arizona, USA
- Kristin S Grussmayer: Department of Bionanoscience, Faculty of Applied Science and Kavli Institute for Nanoscience, Delft University of Technology, Delft, Netherlands
- Boris Ferdman, Yoav Shechtman: Russell Berrie Nanotechnology Institute and Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Aleksandra Radenovic: Laboratory of Nanoscale Biology, Institute of Bioengineering, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Jörg Enderlein: III. Institute of Physics - Biophysics, Georg August University, Göttingen, Germany

31
Petkidis A, Andriasyan V, Greber UF. Machine learning for cross-scale microscopy of viruses. CELL REPORTS METHODS 2023; 3:100557. [PMID: 37751685 PMCID: PMC10545915 DOI: 10.1016/j.crmeth.2023.100557] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 06/05/2023] [Accepted: 07/20/2023] [Indexed: 09/28/2023]
Abstract
Despite advances in virological sciences and antiviral research, viruses continue to emerge, circulate, and threaten public health. We still lack a comprehensive understanding of how cells and individuals remain susceptible to infectious agents. This deficiency is in part due to the complexity of viruses, including the cell states controlling virus-host interactions. Microscopy samples distinct cellular infection stages in a multi-parametric, time-resolved manner at molecular resolution and is increasingly enhanced by machine learning and deep learning. Here we discuss how state-of-the-art artificial intelligence (AI) augments light and electron microscopy and advances virological research of cells. We describe current procedures for image denoising, object segmentation, tracking, classification, and super-resolution and showcase examples of how AI has improved the acquisition and analyses of microscopy data. The power of AI-enhanced microscopy will continue to help unravel virus infection mechanisms, develop antiviral agents, and improve viral vectors.
Affiliation(s)
- Anthony Petkidis, Vardan Andriasyan, Urs F Greber: Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland

32
Chen R, Peng S, Zhu L, Meng J, Fan X, Feng Z, Zhang H, Qian J. Enhancing Total Optical Throughput of Microscopy with Deep Learning for Intravital Observation. SMALL METHODS 2023; 7:e2300172. [PMID: 37183924 DOI: 10.1002/smtd.202300172] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 04/17/2023] [Indexed: 05/16/2023]
Abstract
The significance of performing large-depth dynamic microscopic imaging in vivo for life science research cannot be overstated. However, the optical throughput of the microscope limits the available information per unit of time, i.e., it is difficult to obtain both high spatial and temporal resolution at once. Here, a method is proposed to construct a kind of intravital microscopy with high optical throughput, by making near-infrared-II (NIR-II, 900-1880 nm) wide-field fluorescence microscopy learn from two-photon fluorescence microscopy based on a scale-recurrent network. Using this upgraded NIR-II fluorescence microscope, vessels in the opaque brain of a rodent are reconstructed three-dimensionally. Five-fold axial and thirteen-fold lateral resolution improvements are achieved without sacrificing temporal resolution and light utilization. Also, tiny cerebral vessel dilatations in early acute respiratory failure mice are observed, with this high optical throughput NIR-II microscope at an imaging speed of 30 fps.
Affiliation(s)
- Runze Chen, Shiyi Peng, Jia Meng, Xiaoxiao Fan, Hequn Zhang: College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
- Liang Zhu: College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology (ZIINT), Zhejiang University, 310027, Hangzhou, China
- Zhe Feng, Jun Qian: College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China; Dr. Li Dak Sum & Yip Yio Chin Center for Stem Cell and Regenerative Medicine, Zhejiang University, 310058, Hangzhou, China

33
Goda K, Lu H, Fei P, Guck J. Revolutionizing microfluidics with artificial intelligence: a new dawn for lab-on-a-chip technologies. LAB ON A CHIP 2023; 23:3737-3740. [PMID: 37503818 DOI: 10.1039/d3lc90061d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
Keisuke Goda, Hang Lu, Peng Fei, and Jochen Guck introduce the AI in Microfluidics themed collection, on revolutionizing microfluidics with artificial intelligence: a new dawn for lab-on-a-chip technologies.
Affiliation(s)
- Keisuke Goda: Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan; Department of Bioengineering, University of California, Los Angeles, California 90095, USA; Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Hang Lu: School of Chemical and Biomolecular Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA; Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
- Peng Fei: School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, China
- Jochen Guck: Max Planck Institute for the Science of Light and Max-Planck-Zentrum für Physik und Medizin, Erlangen, Germany

34
Taylor JM. Fast algorithm for 3D volume reconstruction from light field microscopy datasets. OPTICS LETTERS 2023; 48:4177-4180. [PMID: 37581986 DOI: 10.1364/ol.490061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 07/05/2023] [Indexed: 08/17/2023]
Abstract
Light field microscopy can capture 3D volume datasets in a snapshot, making it a valuable tool for high-speed 3D imaging of dynamic biological events. However, subsequent computational reconstruction of the raw data into a human-interpretable 3D+time image is very time-consuming, limiting the technique's utility as a routine imaging tool. Here we derive improved equations for 3D volume reconstruction from light field microscopy datasets, leading to dramatic speedups. We characterize our open-source Python implementation of these algorithms and demonstrate real-world reconstruction speedups of more than an order of magnitude compared with established approaches. The scale of this performance improvement opens up new possibilities for studying large timelapse datasets in light field microscopy.
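Without reproducing the paper's derivation, one generic source of such reconstruction speedups can be illustrated: the large PSF convolutions that dominate light-field forward and back-projection are far cheaper as FFT-domain multiplications than as direct spatial convolutions. The array sizes below are arbitrary and the timing depends on the machine.

```python
import numpy as np
from time import perf_counter
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
plane = rng.random((512, 512))   # one depth plane of a toy volume
psf = rng.random((63, 63))       # stand-in for a large light-field PSF component

t0 = perf_counter()
direct = convolve2d(plane, psf, mode="same")   # direct spatial convolution
t1 = perf_counter()
fast = fftconvolve(plane, psf, mode="same")    # same operation via the FFT
t2 = perf_counter()

print(f"direct: {t1 - t0:.2f} s, fft: {t2 - t1:.3f} s, "
      f"max abs diff: {np.max(np.abs(direct - fast)):.2e}")
```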

35
Komuro J, Kusumoto D, Hashimoto H, Yuasa S. Machine learning in cardiology: Clinical application and basic research. J Cardiol 2023; 82:128-133. [PMID: 37141938 DOI: 10.1016/j.jjcc.2023.04.020] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 04/23/2023] [Accepted: 04/28/2023] [Indexed: 05/06/2023]
Abstract
Machine learning is a subfield of artificial intelligence. The quality and versatility of machine learning have been rapidly improving and playing a critical role in many aspects of social life. This trend is also observed in the medical field. Generally, there are three main types of machine learning: supervised, unsupervised, and reinforcement learning. Each type of learning is adequately selected for the purpose and type of data. In the field of medicine, various types of information are collected and used, and research using machine learning is becoming increasingly relevant. Many clinical studies are conducted using electronic health and medical records, including in the cardiovascular area. Machine learning has also been applied in basic research. Machine learning has been widely used for several types of data analysis, such as clustering of microarray analysis and RNA sequence analysis. Machine learning is essential for genome and multi-omics analyses. This review summarizes the recent advancements in the use of machine learning in clinical applications and basic cardiovascular research.
Affiliation(s)
- Jin Komuro, Dai Kusumoto, Hisayuki Hashimoto, Shinsuke Yuasa: Department of Cardiology, Keio University School of Medicine, Tokyo, Japan

36
Jia D, Zhang Y, Yang Q, Xue Y, Tan Y, Guo Z, Zhang M, Tian L, Cheng JX. 3D Chemical Imaging by Fluorescence-detected Mid-Infrared Photothermal Fourier Light Field Microscopy. CHEMICAL & BIOMEDICAL IMAGING 2023; 1:260-267. [PMID: 37388959 PMCID: PMC10302888 DOI: 10.1021/cbmi.3c00022] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2023] [Revised: 03/04/2023] [Accepted: 03/08/2023] [Indexed: 07/01/2023]
Abstract
Three-dimensional molecular imaging of living organisms and cells plays a significant role in modern biology. Yet, current volumetric imaging modalities are largely fluorescence-based and thus lack chemical content information. Mid-infrared photothermal microscopy as a chemical imaging technology provides infrared spectroscopic information at submicrometer spatial resolution. Here, by harnessing thermosensitive fluorescent dyes to sense the mid-infrared photothermal effect, we demonstrate 3D fluorescence-detected mid-infrared photothermal Fourier light field (FMIP-FLF) microscopy at the speed of 8 volumes per second and submicron spatial resolution. Protein contents in bacteria and lipid droplets in living pancreatic cancer cells are visualized. Altered lipid metabolism in drug-resistant pancreatic cancer cells is observed with the FMIP-FLF microscope.
Affiliation(s)
- Danchen Jia, Qianwan Yang, Yujia Xue: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, United States
- Yi Zhang: Department of Physics, Boston University, Boston, Massachusetts 02215, United States
- Yuying Tan, Zhongyue Guo, Meng Zhang: Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States
- Lei Tian, Ji-Xin Cheng: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, United States; Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States

37
Zhang B, Sun X, Mai J, Wang W. Deep learning-enhanced fluorescence microscopy via confocal physical imaging model. OPTICS EXPRESS 2023; 31:19048-19064. [PMID: 37381330 DOI: 10.1364/oe.490037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 05/09/2023] [Indexed: 06/30/2023]
Abstract
Confocal microscopy is one of the most widely used tools for high-resolution cellular and tissue imaging and for industrial inspection. Micrograph reconstruction based on deep learning has become an effective tool for modern microscopy imaging techniques. Most deep-learning methods, however, neglect the physical imaging process, and considerable work is required to solve the problem of aligning multi-scale image pairs. We show that these limitations can be mitigated via an image degradation model based on the Richards-Wolf vectorial diffraction integral and confocal imaging theory. The low-resolution images required for network training are generated by model degradation from their high-resolution counterparts, thereby eliminating the need for accurate image alignment. The image degradation model ensures the generalization and fidelity of the confocal images. Combining a residual neural network and a lightweight feature attention module with the confocal degradation model ensures high fidelity and generalization. Experiments on different measured data show that, compared with two deconvolution algorithms (non-negative least squares and Richardson-Lucy), the structural similarity index between the network output and the real image reaches above 0.82, and the peak signal-to-noise ratio is improved by more than 0.6 dB. The approach also shows good applicability across different deep-learning networks.
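For orientation, the degradation-then-restore workflow and the Richardson-Lucy baseline mentioned above can be sketched as follows; the Gaussian PSF, noise level, and toy object are stand-ins, not the paper's calibrated Richards-Wolf model.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def confocal_degrade(hr, psf, noise_std=0.01, rng=None):
    """Simplified degradation model: blur a high-resolution image with the
    system PSF and add noise, producing a low-resolution training counterpart
    without any image registration."""
    rng = np.random.default_rng() if rng is None else rng
    lr = fftconvolve(hr, psf, mode="same")
    return np.clip(lr + rng.normal(0, noise_std, lr.shape), 0, None)

# Classical baseline the paper compares against: Richardson-Lucy deconvolution.
x, y = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8))
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()                                   # normalized Gaussian stand-in PSF
hr = np.zeros((128, 128))
hr[40:45, 60:100] = 1.0                            # toy high-resolution structure
lr = confocal_degrade(hr, psf)
restored = richardson_lucy(lr, psf, 30)            # 30 RL iterations
```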

38
Zhang X, Almasian M, Hassan SS, Jotheesh R, Kadam VA, Polk AR, Saberigarakani A, Rahat A, Yuan J, Lee J, Carroll K, Ding Y. 4D Light-sheet imaging and interactive analysis of cardiac contractility in zebrafish larvae. APL Bioeng 2023; 7:026112. [PMID: 37351330 PMCID: PMC10283270 DOI: 10.1063/5.0153214] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 06/05/2023] [Indexed: 06/24/2023] Open
Abstract
Despite ongoing efforts in cardiovascular research, the acquisition of high-resolution and high-speed images for the purpose of assessing cardiac contraction remains challenging. Light-sheet fluorescence microscopy (LSFM) offers superior spatiotemporal resolution and minimal photodamage, providing an indispensable opportunity for the in vivo study of cardiac micro-structure and contractile function in zebrafish larvae. To track the myocardial architecture and contractility, we have developed an imaging strategy ranging from LSFM system construction, retrospective synchronization, single cell tracking, to user-directed virtual reality (VR) analysis. Our system enables the four-dimensional (4D) investigation of individual cardiomyocytes across the entire atrium and ventricle during multiple cardiac cycles in a zebrafish larva at the cellular resolution. To enhance the throughput of our model reconstruction and assessment, we have developed a parallel computing-assisted algorithm for 4D synchronization, resulting in a nearly tenfold enhancement of reconstruction efficiency. The machine learning-based nuclei segmentation and VR-based interaction further allow us to quantify cellular dynamics in the myocardium from end-systole to end-diastole. Collectively, our strategy facilitates noninvasive cardiac imaging and user-directed data interpretation with improved efficiency and accuracy, holding great promise to characterize functional changes and regional mechanics at the single cell level during cardiac development and regeneration.
Affiliation(s)
- Xinyuan Zhang
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Milad Almasian
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Sohail S. Hassan
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Rosemary Jotheesh
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Vinay A. Kadam
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Austin R. Polk
- Department of Computer Science, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Alireza Saberigarakani
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Aayan Rahat
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Jie Yuan
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Juhyun Lee
- Department of Bioengineering, The University of Texas at Arlington, Arlington, Texas 76019, USA
- Kelli Carroll
- Department of Biology, Austin College, Sherman, Texas 75090, USA
- Yichen Ding
- Author to whom correspondence should be addressed. Tel.: 972-883-7217
39
Yun H, Saavedra G, Garcia-Sucerquia J, Tolosa A, Martinez-Corral M, Sanchez-Ortiga E. Practical guide for setting up a Fourier light-field microscope. APPLIED OPTICS 2023; 62:4228-4235. [PMID: 37706910 DOI: 10.1364/ao.491369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 04/26/2023] [Indexed: 09/15/2023]
Abstract
A practical guide for the easy implementation of a Fourier light-field microscope is reported. The Fourier light-field concept applied to microscopy allows the real-time capture of a series of 2D orthographic images of thick, dynamic microscopic samples. Such perspective images contain spatial and angular information about the light field emitted by the sample. A feature of this technology is the tight requirement of a double optical-conjugation relationship together with NA matching. For these reasons, although the Fourier light-field microscope is not a complex optical system, a clear protocol for setting up the optical elements accurately is needed. This guide aims to simplify the implementation process using an optical bench and off-the-shelf components, which should help the widespread adoption of this recent technology.
40
Cho JM, Poon MLS, Zhu E, Wang J, Butcher JT, Hsiai T. Quantitative 4D imaging of biomechanical regulation of ventricular growth and maturation. CURRENT OPINION IN BIOMEDICAL ENGINEERING 2023; 26:100438. [PMID: 37424697 PMCID: PMC10327868 DOI: 10.1016/j.cobme.2022.100438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Abnormal cardiac development is intimately associated with congenital heart disease. During development, a sponge-like network of muscle fibers in the endocardium, known as trabeculation, becomes compacted. Biomechanical forces regulate myocardial differentiation and proliferation to form trabeculation, while the molecular mechanism remains enigmatic. Biomechanical forces, including intracardiac hemodynamic flow and myocardial contractile force, activate a host of molecular signaling pathways to mediate cardiac morphogenesis. While the mechanotransduction pathways that initiate ventricular trabeculation are well studied, deciphering the relative importance of hemodynamic shear versus mechanical contractile forces in modulating the transition from trabeculation to compaction requires advanced imaging tools and genetically tractable animal models. These needs have driven the advent of 4D multi-scale light-sheet imaging in the beating zebrafish heart and complementary multiplex live imaging via micro-CT in live chick embryos. This review therefore highlights the complementary animal models and advanced imaging needed to elucidate the mechanotransduction underlying cardiac ventricular development.
Affiliation(s)
- Jae Min Cho
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA
- Department of Medicine, Greater Los Angeles VA Healthcare System
- Mong Lung Steve Poon
- Nancy E. and Peter C. Meinig School of Biomedical Engineering, Cornell University
- Enbo Zhu
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA
- Department of Medicine, Greater Los Angeles VA Healthcare System
- Jonathan T. Butcher
- Nancy E. and Peter C. Meinig School of Biomedical Engineering, Cornell University
- Tzung Hsiai
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA
- Department of Medicine, Greater Los Angeles VA Healthcare System
- Department of Bioengineering, UCLA
41
Chen R, Tang X, Zhao Y, Shen Z, Zhang M, Shen Y, Li T, Chung CHY, Zhang L, Wang J, Cui B, Fei P, Guo Y, Du S, Yao S. Single-frame deep-learning super-resolution microscopy for intracellular dynamics imaging. Nat Commun 2023; 14:2854. [PMID: 37202407 DOI: 10.1038/s41467-023-38452-2] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 04/28/2023] [Indexed: 05/20/2023] Open
Abstract
Single-molecule localization microscopy (SMLM) can be used to resolve subcellular structures and achieve a tenfold improvement in spatial resolution compared to that obtained by conventional fluorescence microscopy. However, the separation of single-molecule fluorescence events that requires thousands of frames dramatically increases the image acquisition time and phototoxicity, impeding the observation of instantaneous intracellular dynamics. Here we develop a deep-learning based single-frame super-resolution microscopy (SFSRM) method which utilizes a subpixel edge map and a multicomponent optimization strategy to guide the neural network to reconstruct a super-resolution image from a single frame of a diffraction-limited image. Under a tolerable signal density and an affordable signal-to-noise ratio, SFSRM enables high-fidelity live-cell imaging with spatiotemporal resolutions of 30 nm and 10 ms, allowing for prolonged monitoring of subcellular dynamics such as interplays between mitochondria and endoplasmic reticulum, the vesicle transport along microtubules, and the endosome fusion and fission. Moreover, its adaptability to different microscopes and spectra makes it a useful tool for various imaging systems.
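The multicomponent optimization mentioned above can be sketched as a loss that combines a pixel term with an edge-map consistency term. The Sobel-based edge operator and the weighting below are illustrative assumptions, not the published SFSRM objective, and the tensors are placeholders.

# Minimal sketch of a multi-component training objective: pixel loss plus
# edge-map consistency computed with Sobel filters.
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def edge_map(img):
    """img: (N, 1, H, W) tensor; returns gradient magnitude as a soft edge map."""
    gx = F.conv2d(img, _SOBEL_X, padding=1)
    gy = F.conv2d(img, _SOBEL_Y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def multicomponent_loss(pred, target, edge_weight=0.1):
    pixel_term = F.l1_loss(pred, target)
    edge_term = F.l1_loss(edge_map(pred), edge_map(target))
    return pixel_term + edge_weight * edge_term

# toy usage with random tensors standing in for network output and reference
pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
loss = multicomponent_loss(pred, target)
loss.backward()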
Affiliation(s)
- Rong Chen
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiao Tang
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Yuxuan Zhao
- School of Optical and Electronic Information, Huazhong University of Science and Technology, 430074, Wuhan, China
- Zeyu Shen
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Meng Zhang
- School of Optical and Electronic Information, Huazhong University of Science and Technology, 430074, Wuhan, China
- Yusheng Shen
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Tiantian Li
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Casper Ho Yin Chung
- Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Lijuan Zhang
- School of Pharmaceutical Sciences, Guizhou University, 550025, Guizhou, China
- Ji Wang
- Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Binbin Cui
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Peng Fei
- School of Optical and Electronic Information, Huazhong University of Science and Technology, 430074, Wuhan, China
- Yusong Guo
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China.
- Shengwang Du
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China.
- Department of Physics, The Hong Kong University of Science and Technology, Hong Kong, China.
- Department of Physics, The University of Texas at Dallas, Richardson, TX, 75080, USA.
- Shuhuai Yao
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China.
- Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong, China.
42
Zhao Z, Zhou Y, Liu B, He J, Zhao J, Cai Y, Fan J, Li X, Wang Z, Lu Z, Wu J, Qi H, Dai Q. Two-photon synthetic aperture microscopy for minimally invasive fast 3D imaging of native subcellular behaviors in deep tissue. Cell 2023; 186:2475-2491.e22. [PMID: 37178688 DOI: 10.1016/j.cell.2023.04.016] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 02/21/2023] [Accepted: 04/10/2023] [Indexed: 05/15/2023]
Abstract
Holistic understanding of physio-pathological processes requires noninvasive 3D imaging in deep tissue across multiple spatial and temporal scales to link diverse transient subcellular behaviors with long-term physiogenesis. Despite broad applications of two-photon microscopy (TPM), there remains an inevitable tradeoff among spatiotemporal resolution, imaging volumes, and durations due to the point-scanning scheme, accumulated phototoxicity, and optical aberrations. Here, we harnessed the concept of synthetic aperture radar in TPM to achieve aberration-corrected 3D imaging of subcellular dynamics at a millisecond scale for over 100,000 large volumes in deep tissue, with three orders of magnitude reduction in photobleaching. With its advantages, we identified direct intercellular communications through migrasome generation following traumatic brain injury, visualized the formation process of germinal center in the mouse lymph node, and characterized heterogeneous cellular states in the mouse visual cortex, opening up a horizon for intravital imaging to understand the organizations and functions of biological systems at a holistic level.
Affiliation(s)
- Zhifeng Zhao
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Yiliang Zhou
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Bo Liu
- Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China; Department of Basic Medical Sciences, School of Medicine, Tsinghua University, Beijing 100084, China
- Jing He
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Jiayin Zhao
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Yeyi Cai
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Jingtao Fan
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China
- Xinyang Li
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Zilin Wang
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Department of Anesthesiology, the First Medical Center, Chinese PLA General Hospital, Beijing 100853, China
- Zhi Lu
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China; Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China.
- Hai Qi
- Tsinghua-Peking Center for Life Sciences, Beijing 100084, China; Laboratory of Dynamic Immunobiology, Institute for Immunology, Tsinghua University, Beijing 100084, China; Department of Basic Medical Sciences, School of Medicine, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory for Immunological Research on Chronic Diseases, Tsinghua University, Beijing 100084, China; Beijing Frontier Research Center for Biological Structure, Tsinghua University, Beijing 100084, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing 100084, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China; Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China.
43
Lu Z, Liu Y, Jin M, Luo X, Yue H, Wang Z, Zuo S, Zeng Y, Fan J, Pang Y, Wu J, Yang J, Dai Q. Virtual-scanning light-field microscopy for robust snapshot high-resolution volumetric imaging. Nat Methods 2023; 20:735-746. [PMID: 37024654 PMCID: PMC10172145 DOI: 10.1038/s41592-023-01839-6] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Accepted: 03/07/2023] [Indexed: 04/08/2023]
Abstract
High-speed three-dimensional (3D) intravital imaging in animals is useful for studying transient subcellular interactions and functions in health and disease. Light-field microscopy (LFM) provides a computational solution for snapshot 3D imaging with low phototoxicity but is restricted by low resolution and reconstruction artifacts induced by optical aberrations, motion and noise. Here, we propose virtual-scanning LFM (VsLFM), a physics-based deep learning framework to increase the resolution of LFM up to the diffraction limit within a snapshot. By constructing a 40 GB high-resolution scanning LFM dataset across different species, we exploit physical priors between phase-correlated angular views to address the frequency aliasing problem. This enables us to bypass hardware scanning and associated motion artifacts. Here, we show that VsLFM achieves ultrafast 3D imaging of diverse processes such as the beating heart in embryonic zebrafish, voltage activity in Drosophila brains and neutrophil migration in the mouse liver at up to 500 volumes per second.
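A small sketch of the light-field bookkeeping assumed by methods of this kind: a raw LFM frame is a grid of microlens macro-pixels, and rearranging one pixel per macro-pixel yields the stack of phase-correlated angular views on which view-to-view priors can operate. The array sizes and angle count below are illustrative, not the parameters of the published system.

# Rearranging a raw light-field image into a stack of angular views.
import numpy as np

def extract_views(raw, n_angles):
    """raw: (H, W) image whose microlens macro-pixels are n_angles x n_angles pixels.
    Returns views of shape (n_angles, n_angles, H // n_angles, W // n_angles)."""
    H, W = raw.shape
    h, w = H // n_angles, W // n_angles
    raw = raw[:h * n_angles, :w * n_angles]           # trim partial macro-pixels
    views = raw.reshape(h, n_angles, w, n_angles).transpose(1, 3, 0, 2)
    return views

raw = np.random.rand(13 * 30, 13 * 30)   # toy 13 x 13-angle light field
views = extract_views(raw, 13)
print(views.shape)                        # (13, 13, 30, 30)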
Affiliation(s)
- Zhi Lu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yu Liu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Manchang Jin
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Xin Luo
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Huanjing Yue
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Zian Wang
- Department of Automation, Tsinghua University, Beijing, China
- Siqing Zuo
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yunmin Zeng
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiaqi Fan
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yanwei Pang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Jingyu Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
44
Hasani H, Sun J, Zhu SI, Rong Q, Willomitzer F, Amor R, McConnell G, Cossairt O, Goodhill GJ. Whole-brain imaging of freely-moving zebrafish. Front Neurosci 2023; 17:1127574. [PMID: 37139528 PMCID: PMC10150962 DOI: 10.3389/fnins.2023.1127574] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Accepted: 03/28/2023] [Indexed: 05/05/2023] Open
Abstract
One of the holy grails of neuroscience is to record the activity of every neuron in the brain while an animal moves freely and performs complex behavioral tasks. While important steps forward have been taken recently in large-scale neural recording in rodent models, single neuron resolution across the entire mammalian brain remains elusive. In contrast the larval zebrafish offers great promise in this regard. Zebrafish are a vertebrate model with substantial homology to the mammalian brain, but their transparency allows whole-brain recordings of genetically-encoded fluorescent indicators at single-neuron resolution using optical microscopy techniques. Furthermore zebrafish begin to show a complex repertoire of natural behavior from an early age, including hunting small, fast-moving prey using visual cues. Until recently work to address the neural bases of these behaviors mostly relied on assays where the fish was immobilized under the microscope objective, and stimuli such as prey were presented virtually. However significant progress has recently been made in developing brain imaging techniques for zebrafish which are not immobilized. Here we discuss recent advances, focusing particularly on techniques based on light-field microscopy. We also draw attention to several important outstanding issues which remain to be addressed to increase the ecological validity of the results obtained.
Affiliation(s)
- Hamid Hasani
- Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, United States
- Jipeng Sun
- Department of Computer Science, Northwestern University, Evanston, IL, United States
- Shuyu I. Zhu
- Departments of Developmental Biology and Neuroscience, Washington University in St. Louis, St. Louis, MO, United States
- Qiangzhou Rong
- Departments of Developmental Biology and Neuroscience, Washington University in St. Louis, St. Louis, MO, United States
- Florian Willomitzer
- Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, United States
- Rumelo Amor
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD, Australia
- Gail McConnell
- Centre for Biophotonics, Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, United Kingdom
- Oliver Cossairt
- Department of Computer Science, Northwestern University, Evanston, IL, United States
- Geoffrey J. Goodhill
- Departments of Developmental Biology and Neuroscience, Washington University in St. Louis, St. Louis, MO, United States
45
Zhu T, Nie J, Yu T, Zhu D, Huang Y, Chen Z, Gu Z, Tang J, Li D, Fei P. Large-scale high-throughput 3D culture, imaging, and analysis of cell spheroids using microchip-enhanced light-sheet microscopy. BIOMEDICAL OPTICS EXPRESS 2023; 14:1659-1669. [PMID: 37078040 PMCID: PMC10110308 DOI: 10.1364/boe.485217] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 02/24/2023] [Accepted: 03/06/2023] [Indexed: 05/03/2023]
Abstract
Light sheet microscopy combined with a microchip is an emerging tool in biomedical research that notably improves efficiency. However, microchip-enhanced light-sheet microscopy is limited by noticeable aberrations induced by the complex refractive indices in the chip. Herein, we report a droplet microchip that is specifically engineered to be capable of large-scale culture of 3D spheroids (over 600 samples per chip) and has a polymer index matched to water (difference <1%). When combined with a lab-built open-top light-sheet microscope, this microchip-enhanced microscopy technique allows 3D time-lapse imaging of the cultivated spheroids with ∼2.5-µm single-cell resolution and a high throughput of ∼120 spheroids per minute. This technique was validated by a comparative study on the proliferation and apoptosis rates of hundreds of spheroids with or without treatment with the apoptosis-inducing drug Staurosporine.
Affiliation(s)
- Tingting Zhu
- School of Optical and Electronic Information - Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, China
- Jun Nie
- Institute for Cell Analysis, Shenzhen Bay Laboratory, Shenzhen 518132, China
- Tingting Yu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
- Dan Zhu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
- Yanyi Huang
- Institute for Cell Analysis, Shenzhen Bay Laboratory, Shenzhen 518132, China
- College of Chemistry, Biomedical Pioneering Innovation Center (BIOPIC), Peking University, Beijing 100871, China
- Zaozao Chen
- State Key Laboratory of Bioelectronics School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Zhongze Gu
- State Key Laboratory of Bioelectronics School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Jiang Tang
- School of Optical and Electronic Information - Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, China
- Dongyu Li
- School of Optical and Electronic Information - Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, China
- Peng Fei
- School of Optical and Electronic Information - Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan 430074, China
46
Zhu L, Yi C, Fei P. A practical guide to deep-learning light-field microscopy for 3D imaging of biological dynamics. STAR Protoc 2023; 4:102078. [PMID: 36853699 PMCID: PMC9898296 DOI: 10.1016/j.xpro.2023.102078] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 12/12/2022] [Accepted: 01/11/2023] [Indexed: 01/30/2023] Open
Abstract
Here, we present a step-by-step protocol for the implementation of deep-learning-enhanced light-field microscopy enabling 3D imaging of instantaneous biological processes. We first provide the instructions to build a light-field microscope (LFM) capable of capturing optically encoded dynamic signals. Then, we detail the data processing and model training of a view-channel-depth (VCD) neural network, which enables instant 3D image reconstruction from a single 2D light-field snapshot. Finally, we describe VCD-LFM imaging of several model organisms and demonstrate image-based quantitative studies on neural activities and cardio-hemodynamics. For complete details on the use and execution of this protocol, please refer to Wang et al. (2021).
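The view-channel-depth mapping named above can be pictured with a toy model: the angular views of a single light-field snapshot are stacked as input channels and a convolutional network predicts a stack of depth planes. The tiny network, channel counts, and depth count below are placeholders for illustration only, not the published VCD architecture.

# Toy view-channel-depth mapping: angular views in, depth planes out.
import torch
import torch.nn as nn

class ToyVCD(nn.Module):
    def __init__(self, n_views=169, n_depths=61):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_views, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, n_depths, 3, padding=1),
        )

    def forward(self, views):          # views: (N, n_views, H, W)
        return self.net(views)         # volume: (N, n_depths, H, W)

model = ToyVCD()
views = torch.rand(1, 169, 32, 32)     # e.g. 13 x 13 angular views flattened into channels
volume = model(views)
print(volume.shape)                    # torch.Size([1, 61, 32, 32])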
Affiliation(s)
- Lanxin Zhu
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China.
- Chengqiang Yi
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Peng Fei
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China.
47
Zhai J, Jin C, Kong L. Compact, Hybrid Light-Sheet and Fourier Light-Field Microscopy with a Single Objective for High-Speed Volumetric Imaging In Vivo. J Phys Chem A 2023; 127:2873-2879. [PMID: 36926932 DOI: 10.1021/acs.jpca.3c00325] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/18/2023]
Abstract
Volumetric imaging of biodynamics at high spatiotemporal resolutions in vivo is vital in biomedical studies, in which Fourier light field microscopy (FLFM) is a promising technique. However, the commonly used wide-field illumination strategy in FLFM introduces intense out of depth-of-field background, which not only degrades the image quality, but also introduces reconstruction artifacts. Employing light sheet illumination is an effective way to alleviate the background and reduce photobleaching in light-field microscopy. Unfortunately, the introduction of light-sheet illumination often requires an extra objective and precise alignment, which increases the system complexity. Here, we propose the compact, hybrid light-sheet and FLFM (CLS-FLFM), which uses only a single objective to achieve both light-sheet illumination and Fourier light-field imaging simultaneously. With a micromirror under the objective, we focus the light sheet, which ensures selective-volume-illumination, on the imaging plane of the FLFM to perform volumetric imaging. We demonstrate the superior performance of CLS-FLFM in inhibiting background in both structural and dynamical imaging of larval zebrafish in vivo. We envision that CLS-FLFM finds wide applications in high-speed, background-inhibited volumetric imaging of biodynamics in vivo.
Affiliation(s)
- Jiazhen Zhai
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Cheng Jin
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Lingjie Kong
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
48
Siu DMD, Lee KCM, Chung BMF, Wong JSJ, Zheng G, Tsia KK. Optofluidic imaging meets deep learning: from merging to emerging. LAB ON A CHIP 2023; 23:1011-1033. [PMID: 36601812 DOI: 10.1039/d2lc00813k] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Propelled by the striking advances in optical microscopy and deep learning (DL), the role of imaging in lab-on-a-chip has dramatically been transformed from a silo inspection tool to a quantitative "smart" engine. A suite of advanced optical microscopes now enables imaging over a range of spatial scales (from molecules to organisms) and temporal window (from microseconds to hours). On the other hand, the staggering diversity of DL algorithms has revolutionized image processing and analysis at the scale and complexity that were once inconceivable. Recognizing these exciting but overwhelming developments, we provide a timely review of their latest trends in the context of lab-on-a-chip imaging, or coined optofluidic imaging. More importantly, here we discuss the strengths and caveats of how to adopt, reinvent, and integrate these imaging techniques and DL algorithms in order to tailor different lab-on-a-chip applications. In particular, we highlight three areas where the latest advances in lab-on-a-chip imaging and DL can form unique synergisms: image formation, image analytics and intelligent image-guided autonomous lab-on-a-chip. Despite the on-going challenges, we anticipate that they will represent the next frontiers in lab-on-a-chip imaging that will spearhead new capabilities in advancing analytical chemistry research, accelerating biological discovery, and empowering new intelligent clinical applications.
Affiliation(s)
- Dickson M D Siu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong.
- Kelvin C M Lee
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong.
- Bob M F Chung
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Justin S J Wong
- Conzeb Limited, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Guoan Zheng
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT, USA
- Kevin K Tsia
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong.
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
49
Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat Biotechnol 2023; 41:282-292. [PMID: 36163547 PMCID: PMC9931589 DOI: 10.1038/s41587-022-01450-8] [Citation(s) in RCA: 52] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 07/29/2022] [Indexed: 11/09/2022]
Abstract
A fundamental challenge in fluorescence microscopy is the photon shot noise arising from the inevitable stochasticity of photon detection. Noise increases measurement uncertainty and limits imaging resolution, speed and sensitivity. To achieve high-sensitivity fluorescence imaging beyond the shot-noise limit, we present DeepCAD-RT, a self-supervised deep learning method for real-time noise suppression. Based on our previous framework DeepCAD, we reduced the number of network parameters by 94%, memory consumption by 27-fold and processing time by a factor of 20, allowing real-time processing on a two-photon microscope. A high imaging signal-to-noise ratio can be acquired with tenfold fewer photons than in standard imaging approaches. We demonstrate the utility of DeepCAD-RT in a series of photon-limited experiments, including in vivo calcium imaging of mice, zebrafish larva and fruit flies, recording of three-dimensional (3D) migration of neutrophils after acute brain injury and imaging of 3D dynamics of cortical ATP release. DeepCAD-RT will facilitate the morphological and functional interrogation of biological dynamics with a minimal photon budget.
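The self-supervised idea behind this class of denoisers can be sketched simply: because shot noise is independent between consecutive frames while the underlying signal is strongly correlated, one noisy frame can serve as the training target for its neighbor. The 2D toy loop below is an assumed illustration of that pairing strategy, not the DeepCAD-RT implementation, and the network, data, and hyperparameters are placeholders.

# Toy adjacent-frame self-supervised denoising loop.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
)
optim = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

movie = torch.rand(100, 1, 64, 64)          # placeholder noisy time-lapse (T, 1, H, W)
for epoch in range(2):
    for t in range(0, movie.shape[0] - 1, 2):
        inp, tgt = movie[t:t + 1], movie[t + 1:t + 2]   # adjacent noisy frames as input/target
        loss = nn.functional.mse_loss(denoiser(inp), tgt)
        optim.zero_grad()
        loss.backward()
        optim.step()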
50
Morales-Curiel LF, Gonzalez AC, Castro-Olvera G, Lin LCL, El-Quessny M, Porta-de-la-Riva M, Severino J, Morera LB, Venturini V, Ruprecht V, Ramallo D, Loza-Alvarez P, Krieg M. Volumetric imaging of fast cellular dynamics with deep learning enhanced bioluminescence microscopy. Commun Biol 2022; 5:1330. [PMID: 36463346 PMCID: PMC9719505 DOI: 10.1038/s42003-022-04292-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 11/23/2022] [Indexed: 12/05/2022] Open
Abstract
Bioluminescence microscopy is an appealing alternative to fluorescence microscopy because it does not depend on external illumination and consequently neither produces spurious background autofluorescence nor perturbs intrinsically photosensitive processes in living cells and animals. The low photon emission of known luciferases, however, demands long exposure times that are prohibitive for imaging fast biological dynamics. To increase the versatility of bioluminescence microscopy, we present an improved low-light microscope combined with deep-learning methods to image extremely photon-starved samples, enabling subsecond exposures for time-lapse and volumetric imaging. We apply our method to image subcellular dynamics in mouse embryonic stem cells, epithelial morphology during zebrafish development, and DAF-16 FoxO transcription factor shuttling from the cytoplasm to the nucleus under external stress. Finally, we concatenate neural networks for denoising and light-field deconvolution to resolve intracellular calcium dynamics in three dimensions in freely moving Caenorhabditis elegans.
Affiliation(s)
- Jacqueline Severino
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Laura Battle Morera
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Valeria Venturini
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
- Verena Ruprecht
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
- ICREA, Pg. Lluis Companys 23, 08010, Barcelona, Spain
- Diego Ramallo
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
- Michael Krieg
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain.