1
Kikuchi S, Kotaka T, Hanaki Y, Ueda M, Higaki T. Distinct actin microfilament localization during early cell plate formation through deep learning-based image restoration. Plant Cell Reports 2025;44:115. PMID: 40335746; PMCID: PMC12058911; DOI: 10.1007/s00299-025-03498-7. Received: 02/16/2025; Accepted: 04/08/2025.
Abstract
KEY MESSAGE Using deep learning-based image restoration, we achieved high-resolution 4D imaging with minimal photodamage, revealing distinct localization and suggesting Lifeact-RFP-labeled actin microfilaments play a role in initiating cell plate formation. Phragmoplasts are plant-specific intracellular structures composed of microtubules, actin microfilaments (AFs), membranes, and associated proteins. Importantly, they are involved in the formation and expansion of cell plates that partition daughter cells during cell division. While previous studies have revealed the important role of cytoskeletal dynamics in the proper functioning of the phragmoplast, the localization and role of AFs in the initial phase of cell plate formation remain controversial. Here, we used deep learning-based image restoration to achieve high-resolution 4D imaging with minimal laser-induced damage, enabling us to investigate the dynamics of AFs during the initial phase of cell plate formation in transgenic tobacco BY-2 cells labeled with Lifeact-RFP or RFP-ABD2 (actin-binding domain 2). This computational approach overcame the limitations of conventional imaging, namely laser-induced photobleaching and phototoxicity. The restored images indicated that RFP-ABD2-labeled AFs were predominantly localized near the daughter nucleus, whereas Lifeact-RFP-labeled AFs were found not only near the daughter nucleus but also around the initial cell plate. These findings, validated by imaging with a long exposure time, highlight distinct localization patterns between the two AF probes and suggest that Lifeact-RFP-labeled AFs play a role in initiating cell plate formation.
Affiliation(s)
- Suzuka Kikuchi
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, Japan
- Takumi Kotaka
- Faculty of Science, Kumamoto University, Kumamoto, Japan
- Yuga Hanaki
- Graduate School of Life Sciences, Tohoku University, Sendai, Japan
- Minako Ueda
- Graduate School of Life Sciences, Tohoku University, Sendai, Japan
- Takumi Higaki
- Faculty of Science, Kumamoto University, Kumamoto, Japan
- Graduate School of Science and Technology, Kumamoto University, Kumamoto, Japan
- International Research Center for Agricultural and Environmental Biology, Kumamoto University, Kumamoto, Japan
2
Fu L, Li L, Lu B, Guo X, Shi X, Tian J, Hu Z. Deep Equilibrium Unfolding Learning for Noise Estimation and Removal in Optical Molecular Imaging. Comput Med Imaging Graph 2025;120:102492. PMID: 39823663; DOI: 10.1016/j.compmedimag.2025.102492. Received: 09/18/2024; Revised: 01/03/2025; Accepted: 01/03/2025.
Abstract
In clinical optical molecular imaging, the need for real-time high frame rates and low excitation doses to ensure patient safety inherently increases susceptibility to detection noise. Faced with the challenge of image degradation caused by severe noise, image denoising is essential for mitigating the trade-off between acquisition cost and image quality. However, prevailing deep learning methods exhibit uncontrollable and suboptimal performance with limited interpretability, primarily because they neglect the underlying physical model and frequency information. In this work, we introduce an end-to-end model-driven Deep Equilibrium Unfolding Mamba (DEQ-UMamba) that integrates the proximal gradient descent technique and learnt spatial-frequency characteristics to decouple complex noise structures into statistical distributions, enabling effective noise estimation and suppression in fluorescent images. Moreover, to address the computational limitations of unfolding networks, DEQ-UMamba trains an implicit mapping by directly differentiating the equilibrium point of the convergent solution, thereby ensuring stability and avoiding non-convergent behavior. With each network module aligned to a corresponding operation in the iterative optimization process, the proposed method achieves clear structural interpretability and strong performance. Comprehensive experiments conducted on both clinical and in vivo datasets demonstrate that DEQ-UMamba outperforms current state-of-the-art alternatives while using fewer parameters, facilitating the advancement of cost-effective and high-quality clinical molecular imaging.
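The equilibrium idea at the heart of this abstract — computing a layer's output as the fixed point of an iteration and differentiating through that fixed point rather than through every unrolled step — can be illustrated with a toy sketch. This is plain NumPy with a generic contractive map, not the paper's DEQ-UMamba architecture:

```python
import numpy as np

def forward_equilibrium(f, x, z0, tol=1e-8, max_iter=500):
    """Find z* satisfying z* = f(z*, x) by plain fixed-point iteration.
    Training would then differentiate through z* via the implicit
    function theorem instead of backpropagating through every step."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive "layer": f(z, x) = tanh(W @ z + x) with small ||W||
rng = np.random.default_rng(0)
W = 0.15 * rng.standard_normal((4, 4))
x = rng.standard_normal(4)
f = lambda z, x: np.tanh(W @ z + x)

z_star = forward_equilibrium(f, x, np.zeros(4))
residual = np.linalg.norm(z_star - f(z_star, x))  # numerically ~0 at equilibrium
```

Because memory no longer scales with the number of iterations, this is one way such unfolding networks sidestep the cost of deep unrolling.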
Affiliation(s)
- Lidan Fu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Lingbing Li
- Interventional Radiology Department, Chinese PLA General Hospital, Beijing 100039, China
- Binchun Lu
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Xiaoyong Guo
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Gastrointestinal Cancer Center, Ward I, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Xiaojing Shi
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Key Laboratory of Big Data-Based Precision Medicine of Ministry of Industry and Information Technology, School of Engineering Medicine, Beihang University, Beijing 100191, China; Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an 710071, China; National Key Laboratory of Kidney Diseases, Beijing 100853, China
- Zhenhua Hu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; National Key Laboratory of Kidney Diseases, Beijing 100853, China
3
Baiz CR, Kanevche K, Kozuch J, Heberle J. Data-driven signal-to-noise enhancement in scattering near-field infrared microscopy. J Chem Phys 2025;162:054201. PMID: 39898567; DOI: 10.1063/5.0247251. Received: 11/06/2024; Accepted: 01/06/2025. Open access.
Abstract
This study introduces a machine-learning approach to enhance signal-to-noise ratios in scattering-type scanning near-field optical microscopy (s-SNOM). While s-SNOM offers high spatial resolution, its effectiveness is often hindered by low signal levels, particularly in weakly absorbing samples. To address these challenges, we utilize a data-driven "patch-based" machine learning reconstruction method, incorporating modern generative adversarial neural networks (CycleGANs) for denoising s-SNOM images. This method allows for flexible reconstruction of images of arbitrary sizes, a critical capability given the variable nature of scanned sample areas in point-scanning probe-based microscopies. The CycleGAN model is trained on unpaired sets of images captured at both rapid and extended acquisition times, thereby modeling instrument noise while preserving essential topographical and molecular information. The results show significant improvements in image quality, as indicated by higher structural similarity index and peak signal-to-noise ratio values, comparable to those obtained from images captured with four times the integration time. This method not only enhances image quality but also has the potential to reduce the overall data acquisition time, making high-resolution s-SNOM imaging more feasible for a wide range of biological and materials science applications.
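The "patch-based" reconstruction of arbitrary-size scans mentioned in the abstract amounts to tiling the image, running a fixed-size model on each tile, and averaging the overlapping outputs. A minimal NumPy sketch, where `model` is a stand-in for the trained CycleGAN and the patch/stride sizes are illustrative:

```python
import numpy as np

def process_in_patches(img, model, patch=64, stride=48):
    """Run a fixed-size patch model over an arbitrary-size image
    (assumes both image sides >= patch), averaging overlapping outputs."""
    H, W = img.shape
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    ys = list(range(0, H - patch + 1, stride))
    xs = list(range(0, W - patch + 1, stride))
    if ys[-1] != H - patch:      # make sure tiles reach the borders
        ys.append(H - patch)
    if xs[-1] != W - patch:
        xs.append(W - patch)
    for y in ys:
        for x in xs:
            out[y:y + patch, x:x + patch] += model(img[y:y + patch, x:x + patch])
            weight[y:y + patch, x:x + patch] += 1.0
    return out / weight

# Sanity check with an identity "model": reconstruction equals the input
img = np.random.default_rng(1).random((100, 130))
rec = process_in_patches(img, model=lambda t: t)
```

Averaging the overlaps suppresses tile-boundary seams, which is why an overlapping stride (here 48 < 64) is typically chosen over edge-to-edge tiling.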
Affiliation(s)
- Carlos R Baiz
- Department of Chemistry, University of Texas at Austin, 105 E 24th St. A5300, Austin, Texas 78712, USA
- Fachbereich Physik, Experimentelle Molekulare Biophysik, Freie Universität Berlin, Berlin 14195, Germany
- Katerina Kanevche
- Fachbereich Physik, Experimentelle Molekulare Biophysik, Freie Universität Berlin, Berlin 14195, Germany
- Department of Chemistry, Princeton University, Princeton, New Jersey 08544, USA
- Jacek Kozuch
- Fachbereich Physik, Experimentelle Molekulare Biophysik, Freie Universität Berlin, Berlin 14195, Germany
- Joachim Heberle
- Fachbereich Physik, Experimentelle Molekulare Biophysik, Freie Universität Berlin, Berlin 14195, Germany
4
Li S, Omer AM, Duan Y, Fang Q, Hamad KO, Fernandez M, Lin R, Wen J, Wang Y, Cai J, Guo G, Wu Y, Yi F, Meng J, Mao Z, Duan Y. Deep-Optimal Leucorrhea Detection Through Fluorescent Benchmark Data Analysis. J Imaging Inform Med 2025. PMID: 39904942; DOI: 10.1007/s10278-025-01428-3. Received: 11/01/2024; Revised: 01/16/2025; Accepted: 01/23/2025.
Abstract
Vaginitis is a common condition in women, described medically as irritation and/or inflammation of the vagina; it poses a significant health risk for women, necessitating precise diagnostic methods. Conventional techniques for examining vaginal discharge rely on wet mounts and Gram staining to identify vaginal diseases. In this research, we utilized fluorescent staining, which enables distinct visualization of cellular and pathogenic components, each exhibiting unique color characteristics when exposed to the same light source. We established a large, challenging multiple-fluorescence leucorrhea dataset benchmark comprising 8 categories with a total of 343 K high-quality labels. We also present a robust lightweight deep-learning network, LRNet. It includes a lightweight feature extraction network that employs Ghost modules, a feature pyramid network that incorporates deformable convolution in the neck, and a single detection head. The evaluation results indicate that this detection network surpasses conventional networks, reducing model parameters by up to 91.4% and floating-point operations (FLOPs) by 74%. The deep-optimal leucorrhea detection capability of LRNet significantly enhances its ability to detect various crucial indicators related to vaginal health.
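Why Ghost modules shrink a backbone can be sanity-checked with simple parameter counting. The sketch below uses the standard GhostNet formulation (a primary convolution produces a fraction of the output maps, and cheap depthwise ops derive the rest); the channel sizes, kernel sizes, and ratio s=2 are illustrative assumptions, not LRNet's actual configuration:

```python
def conv_params(c_in, c_out, k):
    # weights of a standard k x k convolution (bias ignored)
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, d=3, s=2):
    # GhostNet-style module: a primary conv makes c_out/s intrinsic maps,
    # then cheap d x d depthwise ops derive the remaining (s-1) * c_out/s maps
    m = c_out // s
    primary = c_in * m * k * k
    cheap = (s - 1) * m * d * d
    return primary + cheap

c_in, c_out, k = 128, 128, 3
std = conv_params(c_in, c_out, k)      # 128 * 128 * 9 = 147456
ghost = ghost_params(c_in, c_out, k)   # 73728 + 576 = 74304
saving = 1 - ghost / std               # roughly half the parameters per layer
```

Layer-level savings of this kind, compounded across the backbone and combined with a slimmed neck and head, are how overall reductions like those reported become plausible.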
Affiliation(s)
- Shuang Li
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Akam M Omer
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Yuping Duan
- Qingdao Central Hospital, University of Health and Rehabilitation Sciences, 127 Siliu South Road, Qingdao, Shandong, China
- Qiang Fang
- School of Marine Engineering Equipment, Zhejiang Ocean University, Zhoushan, 316022, China
- Kamyar Othman Hamad
- School of Automation, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Mauricio Fernandez
- School of Computer Science and Engineering, Central South University, 932 Lushan South Road, Changsha, 410083, China
- Ruiqing Lin
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Jianghua Wen
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Yanping Wang
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Jingang Cai
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Guangchao Guo
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Yingying Wu
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Fang Yi
- Department of Geriatric Neurology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Jianqiao Meng
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Zhiqun Mao
- Department of PET Imaging Center, Hunan Provincial People's Hospital, Changsha, Hunan, China
- Yuxia Duan
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
5
Choudhury P, Boruah BR. Neural network-assisted localization of clustered point spread functions in single-molecule localization microscopy. J Microsc 2025;297:153-164. PMID: 39367610; DOI: 10.1111/jmi.13362. Received: 04/29/2024; Revised: 08/16/2024; Accepted: 09/19/2024.
Abstract
Single-molecule localization microscopy (SMLM), which has revolutionized nanoscale imaging, faces challenges in densely labelled samples due to fluorophore clustering, leading to compromised localization accuracy. In this paper, we propose a convolutional neural network (CNN)-assisted approach to address the challenge of localizing clustered fluorophores. Our CNN is trained on a diverse dataset of simulated SMLM images, where it learns to predict point spread function (PSF) locations by generating Gaussian blobs as output. Through rigorous evaluation, we demonstrate significant improvements in PSF localization accuracy, especially in densely labelled samples where traditional methods struggle. In addition, we employ blob detection as a post-processing technique to refine the predicted PSF locations and enhance localization precision. Our study underscores the efficacy of CNNs in addressing clustering challenges in SMLM, thereby advancing spatial resolution and enabling deeper insights into complex biological structures.
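The two halves of this pipeline — a network that outputs a Gaussian blob per PSF location, and blob detection that turns the heatmap back into coordinates — can be mimicked end to end without the network. The sketch below renders the Gaussian target map and recovers the locations with a naive local-maximum detector; the blob width and threshold are illustrative, not the paper's values:

```python
import numpy as np

def render_blobs(coords, shape, sigma=1.5):
    """Target map the network learns to predict: one Gaussian blob per PSF."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    out = np.zeros(shape)
    for (cy, cx) in coords:
        out += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return out

def detect_blobs(heatmap, thresh=0.5):
    """Post-processing: keep pixels that are 3x3 local maxima above a threshold."""
    peaks = []
    H, W = heatmap.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = heatmap[y, x]
            if v > thresh and v == heatmap[y - 1:y + 2, x - 1:x + 2].max():
                peaks.append((y, x))
    return peaks

truth = [(10, 12), (20, 5)]
heat = render_blobs(truth, (32, 32))
found = detect_blobs(heat)  # recovers the ground-truth pixel coordinates
```

In practice, sub-pixel precision would come from fitting or centroiding around each detected peak rather than taking the integer pixel location.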
Affiliation(s)
- Pranjal Choudhury
- Department of Physics, Indian Institute of Technology Guwahati, Guwahati, Assam, India
- Bosanta R Boruah
- Department of Physics, Indian Institute of Technology Guwahati, Guwahati, Assam, India
6
Li Y, Xu X, Zhang C, Sun X, Zhou S, Li X, Guo J, Hu R, Qu J, Liu L. In Vivo Neurodynamics Mapping via High-Speed Two-Photon Fluorescence Lifetime Volumetric Projection Microscopy. Adv Sci (Weinh) 2025;12:e2410605. PMID: 39716869; PMCID: PMC11831470; DOI: 10.1002/advs.202410605. Received: 09/01/2024; Revised: 12/01/2024.
Abstract
Monitoring the morphological and biochemical information of neurons and glial cells at high temporal resolution in three-dimensional (3D) volumes in vivo is pivotal for understanding their structure and function and for quantifying the brain microenvironment. Conventional two-photon fluorescence lifetime volumetric imaging faces the challenges of slow serial focal tomographic scanning, complex post-processing procedures for lifetime images, and inherent trade-offs among contrast, signal-to-noise ratio, and speed. This study presents a two-photon fluorescence lifetime volumetric projection microscopy method that uses an axially elongated Bessel focus and an instant frequency-domain fluorescence lifetime technique, integrated with a convolutional network, to enhance the imaging speed for in vivo neurodynamics mapping. The proposed method is validated by monitoring intracellular Ca2+ concentration throughout whole volumes, tracking microglia movement and microenvironmental changes following thermal injury in the zebrafish brain, analyzing structural and functional variations of gap junctions in astrocyte networks, and measuring the Ca2+ concentration in neurons in mouse brains. This innovative methodology enables quantitative in vivo visualization of neurodynamics and of cellular processes and interactions in the brain.
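For a single-exponential decay, the frequency-domain lifetime readout this abstract relies on reduces to a closed-form relation between the measured phase shift and the lifetime, tan(phi) = 2*pi*f*tau, which is what makes "instant" per-pixel lifetimes possible. A minimal sketch of the round trip; the 80 MHz modulation frequency and 2.5 ns lifetime are illustrative values, not taken from the paper:

```python
import math

def phase_lifetime(phi, f_mod):
    """Single-exponential lifetime from the measured phase shift phi
    at modulation frequency f_mod, using tan(phi) = 2*pi*f_mod*tau."""
    return math.tan(phi) / (2 * math.pi * f_mod)

f_mod = 80e6       # 80 MHz modulation (typical of Ti:sapphire repetition rates)
tau_true = 2.5e-9  # 2.5 ns fluorescence lifetime
phi = math.atan(2 * math.pi * f_mod * tau_true)  # simulated phase reading
tau_est = phase_lifetime(phi, f_mod)             # recovers tau_true
```

Multi-exponential decays break this one-to-one mapping, which is why phase lifetimes are usually reported as apparent lifetimes or analyzed on the phasor plot.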
Affiliation(s)
- Yanping Li
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Xiangcong Xu
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Chao Zhang
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Xuefeng Sun
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Sisi Zhou
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Xuan Li
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Jiaqing Guo
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Rui Hu
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Junle Qu
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Liwei Liu
- State Key Laboratory of Radio Frequency Heterogeneous Integration & Key Laboratory of Optoelectronic Devices and Systems, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
7
Qiao C, Liu S, Wang Y, Xu W, Geng X, Jiang T, Zhang J, Meng Q, Qiao H, Li D, Dai Q. A neural network for long-term super-resolution imaging of live cells with reliable confidence quantification. Nat Biotechnol 2025. PMID: 39881027; DOI: 10.1038/s41587-025-02553-8. Received: 06/21/2024; Accepted: 01/03/2025.
Abstract
Super-resolution (SR) neural networks transform low-resolution optical microscopy images into SR images. Application of single-image SR (SISR) methods to long-term imaging has not exploited the temporal dependencies between neighboring frames and has been subject to inference uncertainty that is difficult to quantify. Here, by building a large-scale fluorescence microscopy dataset and evaluating the propagation and alignment components of neural network models, we devise a deformable phase-space alignment (DPA) time-lapse image SR (TISR) neural network. DPA-TISR adaptively enhances the cross-frame alignment in the phase domain and outperforms existing state-of-the-art SISR and TISR models. We also develop Bayesian DPA-TISR and design an expected calibration error minimization framework that reliably infers inference confidence. We demonstrate multicolor live-cell SR imaging for more than 10,000 time points of various biological specimens with high fidelity, temporal consistency and accurate confidence quantification.
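The expected calibration error that the Bayesian variant minimizes is straightforward to compute from per-prediction confidences and outcomes: bin by confidence, then take the coverage-weighted gap between each bin's mean confidence and its accuracy. A generic sketch; the binning scheme and toy data are illustrative, not the paper's implementation:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Weighted average, over confidence bins, of |mean confidence - accuracy|."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        # bins are (lo, hi], except the first which also includes lo
        mask = (conf > lo) & (conf <= hi) if i > 0 else (conf >= lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

# Well calibrated: 0.8-confidence predictions that are right 80% of the time
ece_good = expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2)
# Overconfident: 0.9-confidence predictions that are right only 50% of the time
ece_bad = expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5)
```

A calibration framework of this kind then adjusts the model (or its posterior) so that reported confidence tracks empirical accuracy, driving the ECE toward zero.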
Affiliation(s)
- Chang Qiao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Shuran Liu
- Department of Automation, Tsinghua University, Beijing, China
- Yuwang Wang
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Wencong Xu
- Department of Automation, Tsinghua University, Beijing, China
- Xiaohan Geng
- National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Tao Jiang
- National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Jingyu Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Quan Meng
- National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Hui Qiao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
- Dong Li
- National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua-Peking Center for Life Sciences, Beijing Frontier Research Center for Biological Structure, State Key Laboratory of Membrane Biology, New Cornerstone Science Laboratory, School of Life Sciences, Tsinghua University, Beijing, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography, Tsinghua University, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China
8
Wang WJ, Xin ZY, Su X, Hao L, Qiu Z, Li K, Luo Y, Cai XM, Zhang J, Alam P, Feng J, Wang S, Zhao Z, Tang BZ. Aggregation-Induced Emission Luminogens Realizing High-Contrast Bioimaging. ACS Nano 2025;19:281-306. PMID: 39745533; DOI: 10.1021/acsnano.4c14887.
Abstract
A revolutionary transformation in biomedical imaging is unfolding with the advent of aggregation-induced emission luminogens (AIEgens). These cutting-edge molecules not only overcome the limitations of traditional fluorescent probes but also push the boundaries of high-contrast imaging. Unlike conventional fluorophores, which suffer from aggregation-caused quenching, AIEgens exhibit enhanced luminescence when aggregated, enabling superior imaging performance. This review delves into the molecular mechanisms of aggregation-induced emission (AIE), demonstrating how strategic molecular design unlocks exceptional luminescence and superior imaging contrast, which is crucial for distinguishing healthy and diseased tissues. This review also highlights key applications of AIEgens, such as time-resolved imaging, imaging in the second near-infrared window (NIR-II), and imaging responsive to physical and biochemical cues. The development of AIE technology promises to transform healthcare from early disease detection to targeted therapies, potentially reshaping personalized medicine. This paradigm shift in biophotonics offers efficient tools to decode the complexities of biological systems at the molecular level, bringing us closer to a future where the invisible becomes visible and the incurable becomes treatable.
Affiliation(s)
- Wen-Jin Wang
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Zhuo-Yang Xin
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Xuxian Su
- Department of Chemistry, The Hong Kong Branch of Chinese National Engineering Research Center for Tissue Restoration and Reconstruction, Division of Life Science, State Key Laboratory of Molecular Neuroscience, and Department of Biological and Chemical Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR 999077, China
- Liang Hao
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Zijie Qiu
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Kang Li
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Yumei Luo
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Xu-Min Cai
- Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, International Innovation Center for Forest Chemicals and Materials, College of Chemical Engineering, Nanjing Forestry University, Nanjing, Jiangsu 210037, China
- Jianquan Zhang
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Parvej Alam
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Jing Feng
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Shaojuan Wang
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Zheng Zhao
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Ben Zhong Tang
- Clinical Translational Research Center of Aggregation-Induced Emission, The Second Affiliated Hospital, School of Medicine, School of Science and Engineering, Shenzhen Institute of Aggregate Science and Technology, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Shenzhen, Guangdong 518172, China
- Department of Chemistry, The Hong Kong Branch of Chinese National Engineering Research Center for Tissue Restoration and Reconstruction, Division of Life Science, State Key Laboratory of Molecular Neuroscience, and Department of Biological and Chemical Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR 999077, China
9
Ward EN, Scheeder A, Barysevich M, Kaminski CF. Self-Driving Microscopes: AI Meets Super-Resolution Microscopy. Small Methods 2025:e2401757. PMID: 39797467; DOI: 10.1002/smtd.202401757. Received: 10/17/2024; Revised: 12/01/2024.
Abstract
The integration of Machine Learning (ML) with super-resolution microscopy represents a transformative advancement in biomedical research. Recent advances in ML, particularly deep learning (DL), have significantly enhanced image processing tasks such as denoising and reconstruction. This review explores the growing potential of automation in super-resolution microscopy, focusing on how DL can enable autonomous imaging tasks. Overcoming the challenges of automation, particularly in adapting to dynamic biological processes and minimizing manual intervention, is crucial for the future of microscopy. Whilst still in its infancy, automation in super-resolution microscopy could revolutionize drug discovery and disease phenotyping, leading to breakthroughs like those recognized in this year's Nobel Prizes for Physics and Chemistry.
Affiliation(s)
- Edward N Ward
- Dept. Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, CB3 0AS, UK
- Anna Scheeder
- Dept. Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, CB3 0AS, UK
- Max Barysevich
- Dept. Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, CB3 0AS, UK
- Clemens F Kaminski
- Dept. Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, CB3 0AS, UK
|
10
|
Zhou Y, Zhao J, Wen J, Wu Z, Dong Y, Chen Y. Unsupervised Learning-Assisted Acoustic-Driven Nano-Lens Holography for the Ultrasensitive and Amplification-Free Detection of Viable Bacteria. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2025; 12:e2406912. [PMID: 39575510 PMCID: PMC11727406 DOI: 10.1002/advs.202406912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2024] [Revised: 11/05/2024] [Indexed: 01/14/2025]
Abstract
Bacterial infection is a crucial factor in public health issues worldwide, often triggering epidemics and even fatalities. The accurate, rapid, and convenient detection of viable bacteria is an effective means of reducing infections and illness outbreaks. Here, an unsupervised learning-assisted, surface acoustic wave-interdigital transducer-driven nano-lens holography biosensing platform is developed for the ultrasensitive and amplification-free detection of viable bacteria. The monitoring device, integrated with the nano-lens effect, achieves holographic imaging of polystyrene microsphere probes over an ultra-wide field of view (~28.28 mm²), with a sensitivity limit as low as 99 nm. A lightweight unsupervised hologram processing algorithm considerably reduces training time and computing hardware requirements, without requiring manually labeled datasets. By combining phage-mediated viable bacterial DNA extraction with an enhanced CRISPR-Cas12a system, this strategy achieves the ultrasensitive detection of viable Salmonella in various real samples, with accuracy validated against the benchmark qPCR method. The approach is low cost (~$500), rapid (~1 h), and highly sensitive (~38 CFU mL⁻¹), allowing for the amplification-free detection of viable bacteria and emerging as a powerful tool for food safety inspection and clinical diagnosis.
Affiliation(s)
- Yang Zhou
- State Key Laboratory of Marine Food Processing and Safety Control, Dalian Polytechnic University, Dalian, Liaoning 116034, China
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China
- College of Engineering, Huazhong Agricultural University, Wuhan, Hubei 430070, China
- Junpeng Zhao
- College of Food Science and Technology, Huazhong Agricultural University, Wuhan, Hubei 430070, China
- Junping Wen
- College of Food Science and Technology, Huazhong Agricultural University, Wuhan, Hubei 430070, China
- Ziyan Wu
- College of Food Science and Technology, Huazhong Agricultural University, Wuhan, Hubei 430070, China
- Yongzhen Dong
- State Key Laboratory of Marine Food Processing and Safety Control, Dalian Polytechnic University, Dalian, Liaoning 116034, China
- Yiping Chen
- State Key Laboratory of Marine Food Processing and Safety Control, Dalian Polytechnic University, Dalian, Liaoning 116034, China
|
11
|
Zhu L, Chen Y, Liu L, Xing L, Yu L. Multi-Sensor Learning Enables Information Transfer Across Different Sensory Data and Augments Multi-Modality Imaging. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2025; 47:288-304. [PMID: 39302777 PMCID: PMC11875987 DOI: 10.1109/tpami.2024.3465649] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/22/2024]
Abstract
Multi-modality imaging is widely used in clinical practice and biomedical research to gain a comprehensive understanding of an imaging subject. Currently, multi-modality imaging is accomplished by post hoc fusion of independently reconstructed images under the guidance of mutual information or spatially registered hardware, which limits the accuracy and utility of multi-modality imaging. Here, we investigate a data-driven multi-modality imaging (DMI) strategy for synergetic imaging of CT and MRI. We reveal two distinct types of features in multi-modality imaging, namely intra- and inter-modality features, and present a multi-sensor learning (MSL) framework to utilize the crossover inter-modality features for augmented multi-modality imaging. The MSL imaging approach breaks down the boundaries of traditional imaging modalities and allows for optimal hybridization of CT and MRI, which maximizes the use of sensory data. We showcase the effectiveness of our DMI strategy through synergetic CT-MRI brain imaging. The principle of DMI is quite general and holds enormous potential for various DMI applications across disciplines.
|
12
|
Liu X, Duan C, Cai W, Shao X. Unmixing Autoencoder for Image Reconstruction from Hyperspectral Data. Anal Chem 2024; 96:20354-20361. [PMID: 39690477 DOI: 10.1021/acs.analchem.4c02720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2024]
Abstract
Due to the complexity of samples and limitations in spatial resolution, the spectra in hyperspectral imaging (HSI) generally contain contributions from multiple components, making univariate analysis ineffective. Although feature extraction methods have been applied, the chemical meaning of the compressed variables is difficult to interpret, limiting their further application. An unmixing autoencoder (UAE) was developed in this work to separate the mixed spectra in HSI. The proposed model is composed of an encoder and a fully connected (FC) layer: the former compresses the input spectrum into several variables, and the latter reconstructs the spectrum. By combining reconstruction loss with sparse regularization, the component weights are encoded in the compressed variables and the spectral profiles of the components in the connection weights of the FC layer. One simulated and three experimental HSI data sets were used to investigate the performance of the UAE model. The spectral components were successfully obtained: handwriting concealed under paper was revealed from near-infrared (NIR) diffuse reflectance images, and images of lipids, proteins, and nucleic acids were reconstructed from Raman and stimulated Raman scattering (SRS) images.
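The linear core of this unmixing idea — expressing each mixed spectrum as a nonnegative combination of a few component spectra, with a reconstruction loss driving the factorization — can be sketched with plain NMF multiplicative updates. This is a deliberately simplified stand-in for the paper's encoder/FC architecture; all variable names and dimensions below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated HSI data: 200 mixed spectra over 50 wavelength bands,
# each a nonnegative mixture of 3 "pure" component spectra.
n_pix, n_bands, k = 200, 50, 3
S_true = rng.random((k, n_bands))   # component spectral profiles
W_true = rng.random((n_pix, k))     # per-pixel abundances (weights)
X = W_true @ S_true                 # observed mixed spectra

# NMF multiplicative updates: minimize ||X - W S||^2 with W, S >= 0.
# W plays the role of the compressed variables (abundances), S the
# role of the decoder weights (spectral profiles).
W = rng.random((n_pix, k)) + 0.1
S = rng.random((k, n_bands)) + 0.1
eps = 1e-9
for _ in range(500):
    S *= (W.T @ X) / (W.T @ W @ S + eps)
    W *= (X @ S.T) / (W @ S @ S.T + eps)

rel_err = np.linalg.norm(X - W @ S) / np.linalg.norm(X)
```

Because the simulated data are exactly low-rank and nonnegative, the factorization recovers the mixed spectra almost perfectly; the UAE adds a nonlinear encoder and a sparsity penalty on top of this basic decomposition.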
Affiliation(s)
- Xuyang Liu
- Research Center for Analytical Sciences, Tianjin Key Laboratory of Biosensing and Molecular Recognition, State Key Laboratory of Medicinal Chemical Biology, College of Chemistry, Nankai University, Tianjin 300071, China
- Chaoshu Duan
- Research Center for Analytical Sciences, Tianjin Key Laboratory of Biosensing and Molecular Recognition, State Key Laboratory of Medicinal Chemical Biology, College of Chemistry, Nankai University, Tianjin 300071, China
- Wensheng Cai
- Research Center for Analytical Sciences, Tianjin Key Laboratory of Biosensing and Molecular Recognition, State Key Laboratory of Medicinal Chemical Biology, College of Chemistry, Nankai University, Tianjin 300071, China
- Xueguang Shao
- Research Center for Analytical Sciences, Tianjin Key Laboratory of Biosensing and Molecular Recognition, State Key Laboratory of Medicinal Chemical Biology, College of Chemistry, Nankai University, Tianjin 300071, China
- Haihe Laboratory of Sustainable Chemical Transformations, Tianjin 300192, China
|
13
|
Park E, Misra S, Hwang DG, Yoon C, Ahn J, Kim D, Jang J, Kim C. Unsupervised inter-domain transformation for virtually stained high-resolution mid-infrared photoacoustic microscopy using explainable deep learning. Nat Commun 2024; 15:10892. [PMID: 39738110 PMCID: PMC11685655 DOI: 10.1038/s41467-024-55262-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2024] [Accepted: 12/04/2024] [Indexed: 01/01/2025] Open
Abstract
Mid-infrared photoacoustic microscopy can capture biochemical information without staining. However, the long mid-infrared optical wavelengths make the spatial resolution of photoacoustic microscopy significantly poorer than that of conventional confocal fluorescence microscopy (CFM). Here, we demonstrate an explainable deep learning (XDL)-based unsupervised inter-domain transformation of low-resolution, unlabeled mid-infrared photoacoustic microscopy (MIR-PAM) images into confocal-like, virtually fluorescence-stained high-resolution images. In the proposed framework, an unsupervised generative adversarial network is primarily employed, and a saliency constraint is then added for better explainability. We validate the performance of XDL-based MIR-PAM by identifying cell nuclei and filamentous actins in cultured human cardiac fibroblasts and matching them with the corresponding CFM images. The XDL ensures similar saliency between the two domains, making the transformation process more stable and more reliable than existing networks. Our XDL-MIR-PAM enables label-free, high-resolution duplexed cellular imaging, which can significantly benefit many research avenues in cell biology.
Affiliation(s)
- Eunwoo Park
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Sampa Misra
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Dong Gyu Hwang
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Center for 3D Organ Printing and Stem Cells, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Chiho Yoon
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Electrical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Joongho Ahn
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Electrical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Opticho Inc, Pohang, Republic of Korea
- Donggyu Kim
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Jinah Jang
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Center for 3D Organ Printing and Stem Cells, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Medical Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Institute for Convergence Research and Education in Advanced Technology, Yonsei University, Seoul, Republic of Korea
- Chulhong Kim
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Electrical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Opticho Inc, Pohang, Republic of Korea
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Medical Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
|
14
|
Wanner J, Kuhn Cuellar L, Rausch L, W. Berendzen K, Wanke F, Gabernet G, Harter K, Nahnsen S. Nf-Root: A Best-Practice Pipeline for Deep-Learning-Based Analysis of Apoplastic pH in Microscopy Images of Developmental Zones in Plant Root Tissue. QUANTITATIVE PLANT BIOLOGY 2024; 5:e12. [PMID: 39777028 PMCID: PMC11706687 DOI: 10.1017/qpb.2024.11] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 06/03/2024] [Accepted: 06/04/2024] [Indexed: 01/11/2025]
Abstract
Hormonal mechanisms associated with cell elongation play a vital role in the development and growth of plants. Here, we report Nextflow-root (nf-root), a novel best-practice pipeline for deep-learning-based analysis of fluorescence microscopy images of plant root tissue from A. thaliana. This bioinformatics pipeline automatically identifies developmental zones in root tissue images and also performs apoplastic pH measurements, which are useful for modeling hormone signaling and cell-physiological responses. We show that this nf-core standard-based pipeline successfully automates tissue zone segmentation and is both high-throughput and highly reproducible. In short, a deep-learning module deploys deterministically trained convolutional neural network models and augments the segmentation predictions with measures of prediction uncertainty and model interpretability, aiming to facilitate result interpretation and verification by experienced plant biologists. We observed a high statistical similarity between the manually generated results and the output of nf-root.
Affiliation(s)
- Julian Wanner
- Quantitative Biology Center (QBiC), University of Tübingen, Tübingen, Germany
- Hasso Plattner Institute, University of Potsdam, Germany
- Finnish Institute for Molecular Medicine (FIMM), University of Helsinki, Helsinki, Finland
- Luis Kuhn Cuellar
- Quantitative Biology Center (QBiC), University of Tübingen, Tübingen, Germany
- Luiselotte Rausch
- Center for Plant Molecular Biology (ZMBP), University of Tübingen, Tübingen, Germany
- Kenneth W. Berendzen
- Center for Plant Molecular Biology (ZMBP), University of Tübingen, Tübingen, Germany
- Friederike Wanke
- Center for Plant Molecular Biology (ZMBP), University of Tübingen, Tübingen, Germany
- Gisela Gabernet
- Quantitative Biology Center (QBiC), University of Tübingen, Tübingen, Germany
- Klaus Harter
- Center for Plant Molecular Biology (ZMBP), University of Tübingen, Tübingen, Germany
- Sven Nahnsen
- Quantitative Biology Center (QBiC), University of Tübingen, Tübingen, Germany
|
15
|
Cao R, Divekar NS, Nuñez JK, Upadhyayula S, Waller L. Neural space-time model for dynamic multi-shot imaging. Nat Methods 2024; 21:2336-2341. [PMID: 39317729 PMCID: PMC11621023 DOI: 10.1038/s41592-024-02417-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2023] [Accepted: 08/15/2024] [Indexed: 09/26/2024]
Abstract
Computational imaging reconstructions from multiple measurements that are captured sequentially often suffer from motion artifacts if the scene is dynamic. We propose a neural space-time model (NSTM) that jointly estimates the scene and its motion dynamics, without data priors or pre-training. Hence, we can both remove motion artifacts and resolve sample dynamics from the same set of raw measurements used for the conventional reconstruction. We demonstrate NSTM in three computational imaging systems: differential phase-contrast microscopy, three-dimensional structured illumination microscopy and rolling-shutter DiffuserCam. We show that NSTM can recover subcellular motion dynamics and thus reduce the misinterpretation of living systems caused by motion artifacts.
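The core idea — explaining all sequentially captured measurements with one scene plus per-shot motion, rather than naively averaging them — can be illustrated with a toy 1D registration example. This is a deliberately simplified stand-in for the paper's neural space-time model (shift-only motion, correlation-based estimation); all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Static 1D "scene" and sequential measurements, each circularly
# shifted (sample motion between shots) and noisy.
n = 128
scene = np.zeros(n)
scene[30:40] = 1.0                 # a bright feature
true_shifts = [0, 5, 11, 17]
meas = [np.roll(scene, s) + 0.05 * rng.standard_normal(n)
        for s in true_shifts]

# Naive multi-shot reconstruction: average the raw shots.
# The moving feature smears out -> motion artifact.
naive = np.mean(meas, axis=0)

# Joint scene+motion estimate: infer each shot's shift relative to
# the first via circular cross-correlation, undo it, then average.
ref = meas[0]
est_shifts = []
for m in meas:
    xcorr = np.fft.ifft(np.fft.fft(m) * np.conj(np.fft.fft(ref))).real
    est_shifts.append(int(np.argmax(xcorr)))
aligned = np.mean([np.roll(m, -s) for m, s in zip(meas, est_shifts)],
                  axis=0)

err_naive = np.linalg.norm(naive - scene)
err_aligned = np.linalg.norm(aligned - scene)
```

Once the motion is modeled and removed, averaging the shots reduces noise instead of blurring the feature, which is the artifact-removal behavior the abstract describes.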
Affiliation(s)
- Ruiming Cao
- Department of Bioengineering, UC Berkeley, Berkeley, CA, USA
- Nikita S Divekar
- Department of Molecular and Cell Biology, UC Berkeley, Berkeley, CA, USA
- James K Nuñez
- Department of Molecular and Cell Biology, UC Berkeley, Berkeley, CA, USA
- Laura Waller
- Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA, USA
|
16
|
Hsieh YT, Jhan KC, Lee JC, Huang GJ, Chung CL, Chen WC, Chang TC, Chen BC, Pan MK, Wu SC, Chu SW. TAG-SPARK: Empowering High-Speed Volumetric Imaging With Deep Learning and Spatial Redundancy. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2024; 11:e2405293. [PMID: 39283040 DOI: 10.1002/advs.202405293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2024] [Revised: 08/16/2024] [Indexed: 11/07/2024]
Abstract
Two-photon high-speed fluorescence calcium imaging is a mainstream technique in neuroscience for capturing neural activity with high spatiotemporal resolution. However, the inherent tradeoff between acquisition speed and image quality leads to a low signal-to-noise ratio (SNR) due to limited signal photon flux. Here, a contrast-enhanced, video-rate volumetric system is demonstrated, integrating tunable acoustic gradient (TAG) lens-based high-speed microscopy with the TAG-SPARK denoising algorithm. The former enables high-speed, dense z-sampling at sub-micrometer-scale intervals, allowing the latter to exploit the spatial redundancy of z-slices for self-supervised model training. This spatial redundancy-based approach, tailored for 4D (xyzt) datasets, not only achieves >700% SNR enhancement but also retains the fast-spiking functional profiles of neuronal activity. The combination of high speed and high image quality is exemplified by in vivo calcium observation of Purkinje cells, revealing an intriguing dendritic-to-somatic signal convolution, i.e., similar dendritic signals leading to reverse somatic responses. This tailored technique captures neuronal activity with high SNR, advancing the fundamental understanding of neuronal transduction pathways within 3D neuronal architectures.
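The spatial-redundancy premise — densely sampled neighboring z-slices see nearly the same structure but carry independent noise, so one slice can supervise another Noise2Noise-style — can be demonstrated in its simplest possible form by pairing adjacent slices. This toy numpy sketch illustrates only the pairing principle, not the TAG-SPARK network itself; the synthetic volume and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# A synthetic volume whose structure varies slowly along z, so
# neighboring z-slices are nearly identical ("spatial redundancy").
z, h, w = 16, 32, 32
zz, yy, xx = np.meshgrid(np.arange(z), np.arange(h), np.arange(w),
                         indexing="ij")
clean = np.sin(yy / 5.0) * np.cos(xx / 5.0) * (1 + 0.02 * zz)
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

# Noise2Noise pairing: each slice's z-neighbor is an independent-noise
# view of (almost) the same signal. The simplest "denoiser" exploiting
# this redundancy is averaging each slice with its neighbor; a trained
# network exploits the same pairing without the averaging blur.
denoised = 0.5 * (noisy[:-1] + noisy[1:])

mse_noisy = np.mean((noisy[:-1] - clean[:-1]) ** 2)
mse_denoised = np.mean((denoised - clean[:-1]) ** 2)
```

Because the noise in the two slices is independent, averaging roughly halves the noise variance while the slowly varying structure is almost unchanged — the redundancy a self-supervised model can learn from.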
Affiliation(s)
- Yin-Tzu Hsieh
- Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, 10617, Taiwan
- Kai-Chun Jhan
- Department of Engineering and System Science, National Tsing Hua University, Hsinchu, 30013, Taiwan
- Jye-Chang Lee
- Molecular Imaging Center, National Taiwan University, Taipei, 10617, Taiwan
- Guan-Jie Huang
- Department of Physics, National Taiwan University, Taipei, 10617, Taiwan
- Chang-Ling Chung
- Department of Physics, National Taiwan University, Taipei, 10617, Taiwan
- Wun-Ci Chen
- Department of Engineering and System Science, National Tsing Hua University, Hsinchu, 30013, Taiwan
- Ting-Chen Chang
- Department of Physics, National Taiwan University, Taipei, 10617, Taiwan
- Bi-Chang Chen
- Research Center for Applied Sciences (RCAS), Academia Sinica, Taipei, 115, Taiwan
- Ming-Kai Pan
- Molecular Imaging Center, National Taiwan University, Taipei, 10617, Taiwan
- Department of Medical Research, National Taiwan University Hospital, Taipei, 10002, Taiwan
- Department and Graduate Institute of Pharmacology, National Taiwan University College of Medicine, Taipei, 10002, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan
- Institute of Biomedical Sciences, Academia Sinica, Taipei, 11529, Taiwan
- Cerebellar Research Center, National Taiwan University Hospital, Yun-Lin Branch, Yun-Lin, 64041, Taiwan
- Shun-Chi Wu
- Department of Engineering and System Science, National Tsing Hua University, Hsinchu, 30013, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan
- Shi-Wei Chu
- Molecular Imaging Center, National Taiwan University, Taipei, 10617, Taiwan
- Department of Physics, National Taiwan University, Taipei, 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan
|
17
|
Zhou S, Miao Y, Qiu H, Yao Y, Wang W, Chen C. Deep learning based local feature classification to automatically identify single molecule fluorescence events. Commun Biol 2024; 7:1404. [PMID: 39468368 PMCID: PMC11519536 DOI: 10.1038/s42003-024-07122-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2024] [Accepted: 10/22/2024] [Indexed: 10/30/2024] Open
Abstract
Long-term single-molecule fluorescence measurements are widely used, powerful tools for elucidating the conformational dynamics of biomolecules in real time. Typically, thousands or more single-molecule traces are analyzed to provide statistically meaningful information, which is labor-intensive and can introduce user bias. Recently, several deep-learning models have been developed to automatically classify single-molecule traces. In this study, we introduce DEBRIS (Deep lEarning Based fRagmentatIon approach for Single-molecule fluorescence event identification), a deep-learning model that focuses on classifying local features and can automatically identify steady fluorescence signals as well as dynamically emerging signals of different patterns. DEBRIS efficiently and accurately identifies both one-color and two-color single-molecule events, including their start and end points. By adjusting user-defined criteria, DEBRIS is the first deep-learning model to accurately classify four different types of single-molecule fluorescence events using the same trained model, demonstrating its universality and enriching the current toolbox.
Affiliation(s)
- Shuqi Zhou
- State Key Laboratory of Membrane Biology, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, 100084, Beijing, China
- Yu Miao
- State Key Laboratory of Membrane Biology, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, 100084, Beijing, China
- Haoren Qiu
- State Key Laboratory of Membrane Biology, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, 100084, Beijing, China
- Yuan Yao
- Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
- Wenjuan Wang
- Technology Center for Protein Sciences, School of Life Sciences, Tsinghua University, 100084, Beijing, China
- Chunlai Chen
- State Key Laboratory of Membrane Biology, Beijing Frontier Research Center for Biological Structure, School of Life Sciences, Tsinghua University, 100084, Beijing, China
|
18
|
Molani A, Pennati F, Ravazzani S, Scarpellini A, Storti FM, Vegetali G, Paganelli C, Aliverti A. Advances in Portable Optical Microscopy Using Cloud Technologies and Artificial Intelligence for Medical Applications. SENSORS (BASEL, SWITZERLAND) 2024; 24:6682. [PMID: 39460161 PMCID: PMC11510803 DOI: 10.3390/s24206682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2024] [Revised: 10/11/2024] [Accepted: 10/15/2024] [Indexed: 10/28/2024]
Abstract
The need for faster and more accessible alternatives to laboratory microscopy is driving many innovations throughout the image and data acquisition chain in the biomedical field. Benchtop microscopes are bulky, lack communications capabilities, and require trained personnel for analysis. New technologies, such as compact 3D-printed devices integrated with the Internet of Things (IoT) for data sharing and cloud computing, as well as automated image processing using deep learning algorithms, can address these limitations and enhance the conventional imaging workflow. This review reports on recent advancements in microscope miniaturization, with a focus on emerging technologies such as photoacoustic microscopy and more established approaches like smartphone-based microscopy. The potential applications of IoT in microscopy are examined in detail. Furthermore, this review discusses the evolution of image processing in microscopy, transitioning from traditional to deep learning methods that facilitate image enhancement and data interpretation. Despite numerous advancements in the field, there is a noticeable lack of studies that holistically address the entire microscopy acquisition chain. This review aims to highlight the potential of IoT and artificial intelligence (AI) in combination with portable microscopy, emphasizing the importance of a comprehensive approach to the microscopy acquisition chain, from portability to image analysis.
|
19
|
Qu L, Zhao S, Huang Y, Ye X, Wang K, Liu Y, Liu X, Mao H, Hu G, Chen W, Guo C, He J, Tan J, Li H, Chen L, Zhao W. Self-inspired learning for denoising live-cell super-resolution microscopy. Nat Methods 2024; 21:1895-1908. [PMID: 39261639 DOI: 10.1038/s41592-024-02400-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2024] [Accepted: 07/31/2024] [Indexed: 09/13/2024]
Abstract
Every collected photon is precious in live-cell super-resolution (SR) microscopy. Here, we describe a data-efficient, deep learning-based denoising solution to improve diverse SR imaging modalities. The method, SN2N, is a Self-inspired Noise2Noise module with self-supervised data generation and a self-constrained learning process. SN2N is fully competitive with supervised learning methods and circumvents the need for a large training set and clean ground truth, requiring only a single noisy frame for training. We show that SN2N improves photon efficiency by one to two orders of magnitude and is compatible with multiple imaging modalities for volumetric, multicolor, time-lapse SR microscopy. We further integrated SN2N into different SR reconstruction algorithms to effectively mitigate image artifacts. We anticipate SN2N will enable improved live-cell SR imaging and inspire further advances.
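The key trick behind self-supervised denoising from a single frame — generating a Noise2Noise training pair whose two members share the signal but carry independent noise — can be approximated by splitting one noisy frame into two spatially interleaved sub-images. This is a minimal checkerboard-style sketch of the data-generation step only; the actual SN2N sampling scheme and network are more elaborate, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# One noisy frame of a smooth structure.
h, w = 64, 64
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
clean = np.sin(yy / 6.0) + np.cos(xx / 6.0)
frame = clean + 0.4 * rng.standard_normal((h, w))


def diagonal_views(img):
    """Average the two diagonals of every 2x2 block, yielding two
    half-resolution views built from disjoint pixel sets."""
    b = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2)
    view_a = 0.5 * (b[:, 0, :, 0] + b[:, 1, :, 1])
    view_b = 0.5 * (b[:, 0, :, 1] + b[:, 1, :, 0])
    return view_a, view_b


view_a, view_b = diagonal_views(frame)
clean_a, clean_b = diagonal_views(clean)

# The two views share the underlying signal, while their noise
# components are uncorrelated (they come from disjoint pixels), so
# view_b can serve as the training target for a denoiser fed view_a.
noise_corr = np.corrcoef((view_a - clean_a).ravel(),
                         (view_b - clean_b).ravel())[0, 1]
```

Training a network to map `view_a` to `view_b` (and vice versa) then converges toward the clean signal, since the independent noise in the target averages out — the standard Noise2Noise argument, here bootstrapped from a single frame.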
Affiliation(s)
- Liying Qu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Shiqun Zhao
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuanyuan Huang
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianxin Ye
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Kunhao Wang
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuzhen Liu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianming Liu
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Heng Mao
- School of Mathematical Sciences, Peking University, Beijing, China
- Guangwei Hu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Wei Chen
- School of Mechanical Science and Engineering, Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, China
- Changliang Guo
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Jiaye He
- National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiubin Tan
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Haoyu Li
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China
- Liangyi Chen
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Beijing, China
- Beijing Academy of Artificial Intelligence, Beijing, China
- Weisong Zhao
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China
|
20
|
Rudinskiy M, Morone D, Molinari M. Fluorescent Reporters, Imaging, and Artificial Intelligence Toolkits to Monitor and Quantify Autophagy, Heterophagy, and Lysosomal Trafficking Fluxes. Traffic 2024; 25:e12957. [PMID: 39450581 DOI: 10.1111/tra.12957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2024] [Revised: 08/21/2024] [Accepted: 10/03/2024] [Indexed: 10/26/2024]
Abstract
Lysosomal compartments control the clearance of cells' own material (autophagy) or of material that cells endocytose from the external environment (heterophagy) to ensure nutrient supply and to eliminate macromolecules or parts of organelles that are present in excess, aged, or contain toxic material. Inherited or sporadic mutations in lysosomal proteins and enzymes may hamper their folding in the endoplasmic reticulum (ER) and their lysosomal transport via the Golgi compartment, resulting in lysosomal dysfunction and storage disorders. Defective cargo delivery to lysosomal compartments is harmful to cells and organs since it causes accumulation of toxic compounds and defective organellar homeostasis. Assessment of resident proteins and of cargo fluxes to the lysosomal compartments is crucial for the mechanistic dissection of intracellular transport and catabolic events. It can be combined with high-throughput screenings to identify cellular, chemical, or pharmacological modulators of these events that may find therapeutic use for autophagy-related and lysosomal storage disorders. Here, we discuss qualitative, quantitative, and chronologic monitoring of autophagic, heterophagic, and lysosomal protein trafficking in fixed and live cells, which relies on fluorescent single and tandem reporters used in combination with biochemical, flow cytometry, and light and electron microscopy approaches complemented by artificial intelligence-based technology.
Affiliation(s)
- Mikhail Rudinskiy
- Università della Svizzera italiana, Lugano, Switzerland
- Institute for Research in Biomedicine, Bellinzona, Switzerland
- Department of Biology, Swiss Federal Institute of Technology, Zurich, Switzerland

- Diego Morone
- Università della Svizzera italiana, Lugano, Switzerland
- Institute for Research in Biomedicine, Bellinzona, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland

- Maurizio Molinari
- Università della Svizzera italiana, Lugano, Switzerland
- Institute for Research in Biomedicine, Bellinzona, Switzerland
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
21
Zhu E, Li YR, Margolis S, Wang J, Wang K, Zhang Y, Wang S, Park J, Zheng C, Yang L, Chu A, Zhang Y, Gao L, Hsiai TK. Frontiers in artificial intelligence-directed light-sheet microscopy for uncovering biological phenomena and multi-organ imaging. VIEW 2024; 5:20230087. [PMID: 39478956] [PMCID: PMC11521201] [DOI: 10.1002/viw.20230087]
Abstract
Light-sheet fluorescence microscopy (LSFM) introduces fast scanning of biological phenomena with deep photon penetration and minimal phototoxicity. This advancement represents a significant shift in 3-D imaging of large-scale biological tissues and 4-D (space + time) imaging of small live animals. The large data volumes associated with LSFM require efficient image acquisition and analysis with the use of artificial intelligence (AI)/machine learning (ML) algorithms. To this end, AI/ML-directed LSFM is an emerging area for multi-organ imaging and tumor diagnostics. This review will present the development of LSFM and highlight various LSFM configurations and designs for multi-scale imaging. Optical clearing techniques will be compared for effective reduction in light scattering and optimal deep-tissue imaging. This review will further depict a diverse range of research and translational applications, from small live organisms to multi-organ imaging to tumor diagnosis. In addition, this review will address AI/ML-directed image reconstruction, including the application of convolutional neural networks (CNNs) and generative adversarial networks (GANs). In summary, the advancements of LSFM have enabled effective and efficient post-imaging reconstruction and data analyses, underscoring LSFM's contribution to advancing fundamental and translational research.
Affiliation(s)
- Enbo Zhu
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA

- Yan-Ruide Li
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA

- Samuel Margolis
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA

- Jing Wang
- Department of Bioengineering, UCLA, California, 90095, USA

- Kaidong Wang
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA

- Yaran Zhang
- Department of Bioengineering, UCLA, California, 90095, USA

- Shaolei Wang
- Department of Bioengineering, UCLA, California, 90095, USA

- Jongchan Park
- Department of Bioengineering, UCLA, California, 90095, USA

- Charlie Zheng
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA

- Lili Yang
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research, UCLA, California, 90095, USA
- Jonsson Comprehensive Cancer Center, David Geffen School of Medicine, UCLA, California, 90095, USA
- Molecular Biology Institute, UCLA, California, 90095, USA

- Alison Chu
- Division of Neonatology and Developmental Biology, Department of Pediatrics, David Geffen School of Medicine, UCLA, California, 90095, USA

- Yuhua Zhang
- Doheny Eye Institute, Department of Ophthalmology, UCLA, California, 90095, USA

- Liang Gao
- Department of Bioengineering, UCLA, California, 90095, USA

- Tzung K. Hsiai
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
22
Roos J, Bancelin S, Delaire T, Wilhelmi A, Levet F, Engelhardt M, Viasnoff V, Galland R, Nägerl UV, Sibarita JB. Arkitekt: streaming analysis and real-time workflows for microscopy. Nat Methods 2024; 21:1884-1894. [PMID: 39294366] [DOI: 10.1038/s41592-024-02404-5]
Abstract
Quantitative microscopy workflows have evolved dramatically over the past years, progressively becoming more complex with the emergence of deep learning. Long-standing challenges such as three-dimensional segmentation of complex microscopy data can finally be addressed, and new imaging modalities are breaking records in both resolution and acquisition speed, generating gigabytes if not terabytes of data per day. With this shift in bioimage workflows comes an increasing need for efficient orchestration and data management, necessitating multitool interoperability and the ability to span dedicated computing resources. However, existing solutions are still limited in their flexibility and scalability and are usually restricted to offline analysis. Here we introduce Arkitekt, an open-source middleman between users and bioimage apps that enables complex quantitative microscopy workflows in real time. It allows the orchestration of popular bioimage software locally or remotely in a reliable and efficient manner. It includes visualization and analysis modules, but also mechanisms to execute source code and pilot acquisition software, making 'smart microscopy' a reality.
Affiliation(s)
- Johannes Roos
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- Institute of Anatomy and Cell Biology, Medical Faculty, Johannes Kepler University, Linz, Austria

- Stéphane Bancelin
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France

- Tom Delaire
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France

- Florian Levet
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- Bordeaux Imaging Center, University of Bordeaux, CNRS, INSERM, Bordeaux, France

- Maren Engelhardt
- Institute of Anatomy and Cell Biology, Medical Faculty, Johannes Kepler University, Linz, Austria
- Clinical Research Institute for Neurosciences, Johannes Kepler University, Linz, Austria

- Virgile Viasnoff
- Mechanobiology Institute, National University of Singapore, Singapore, Singapore

- Rémi Galland
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France

- U Valentin Nägerl
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France

- Jean-Baptiste Sibarita
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
23
Kaderuppan SS, Sharma A, Saifuddin MR, Wong WLE, Woo WL. Θ-Net: A Deep Neural Network Architecture for the Resolution Enhancement of Phase-Modulated Optical Micrographs In Silico. Sensors (Basel) 2024; 24:6248. [PMID: 39409287] [PMCID: PMC11478931] [DOI: 10.3390/s24196248]
Abstract
Optical microscopy is widely regarded as an indispensable tool in healthcare and manufacturing quality-control processes, although its inability to resolve structures separated by a lateral distance under ~200 nm has led to the emergence of a new field named fluorescence nanoscopy, which in turn is prone to several caveats (namely phototoxicity, interference caused by exogenous probes, and cost). In this regard, we present a triplet string of concatenated O-Net ('bead') architectures (termed 'Θ-Net' in the present study) as a cost-efficient and non-invasive approach to enhancing the resolution of non-fluorescent phase-modulated optical microscopy images in silico. The quality of the aforementioned enhanced-resolution (ER) images was compared with that obtained via other popular frameworks (such as ANNA-PALM, BSRGAN and 3D RCAN), with the Θ-Net-generated ER images depicting an increased level of detail (unlike previous DNNs). In addition, the use of cross-domain (transfer) learning to enhance the capabilities of models trained on differential interference contrast (DIC) datasets [where phasic variations are not as prominently manifested as amplitude/intensity differences in the individual pixels, unlike phase-contrast microscopy (PCM)] has resulted in the Θ-Net-generated images closely approximating the expected (ground truth) images for both the DIC and PCM datasets. This demonstrates the viability of our current Θ-Net architecture for attaining highly resolved images under poor signal-to-noise ratios while eliminating the need for a priori PSF and OTF information, thereby potentially impacting several engineering fronts (particularly biomedical imaging and sensing, precision engineering and optical metrology).
Affiliation(s)
- Shiraz S. Kaderuppan
- Faculty of Science, Agriculture & Engineering (SAgE), Newcastle University, Newcastle upon Tyne NE1 7RU, UK

- Anurag Sharma
- Faculty of Science, Agriculture & Engineering (SAgE), Newcastle University, Newcastle upon Tyne NE1 7RU, UK

- Muhammad Ramadan Saifuddin
- Faculty of Science, Agriculture & Engineering (SAgE), Newcastle University, Newcastle upon Tyne NE1 7RU, UK

- Wai Leong Eugene Wong
- Engineering Cluster, Singapore Institute of Technology, 10 Dover Drive, Singapore 138683, Singapore

- Wai Lok Woo
- Computer and Information Sciences, Sutherland Building, Northumbria University, Northumberland Road, Newcastle upon Tyne NE1 8ST, UK
24
Shah ZH, Müller M, Hübner W, Ortkrass H, Hammer B, Huser T, Schenck W. Image restoration in frequency space using complex-valued CNNs. Front Artif Intell 2024; 7:1353873. [PMID: 39376505] [PMCID: PMC11456741] [DOI: 10.3389/frai.2024.1353873]
Abstract
Real-valued convolutional neural networks (RV-CNNs) in the spatial domain have outperformed classical approaches in many image restoration tasks such as image denoising and super-resolution. Fourier analysis of the results produced by these spatial domain models reveals their limitations in properly processing the full frequency spectrum. This lack of complete spectral information can result in missing textural and structural elements. To address this limitation, we explore the potential of complex-valued convolutional neural networks (CV-CNNs) for image restoration tasks. CV-CNNs have shown remarkable performance in tasks such as image classification and segmentation. However, CV-CNNs for image restoration problems in the frequency domain have not been fully investigated to address the aforementioned issues. Here, we propose several novel CV-CNN-based models equipped with complex-valued attention gates for image denoising and super-resolution in the frequency domain. We also show that our CV-CNN-based models outperform their real-valued counterparts for denoising super-resolution structured illumination microscopy (SR-SIM) and conventional image datasets. Furthermore, the experimental results show that our proposed CV-CNN-based models preserve the frequency spectrum better than their real-valued counterparts in the denoising task. Based on these findings, we conclude that CV-CNN-based methods provide a plausible and beneficial deep learning approach for image restoration in the frequency domain.
Affiliation(s)
- Zafran Hussain Shah
- Center for Applied Data Science, Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, Bielefeld, Germany

- Marcel Müller
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany

- Wolfgang Hübner
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany

- Henning Ortkrass
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany

- Barbara Hammer
- CITEC—Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany

- Thomas Huser
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany

- Wolfram Schenck
- Center for Applied Data Science, Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, Bielefeld, Germany
25
Liu ML, Liu YP, Guo XX, Wu ZY, Zhang XT, Roe AW, Hu JM. Orientation selectivity mapping in the visual cortex. Prog Neurobiol 2024; 240:102656. [PMID: 39009108] [DOI: 10.1016/j.pneurobio.2024.102656]
Abstract
The orientation map is one of the most well-studied functional maps of the visual cortex. However, results in the literature vary in quality: some studies show clear boundaries between orientation domains, while others show blurred, uncertain distinctions. Unclear imaging results lead to inaccurate depictions of cortical structures, and a lack of consideration in experimental design can likewise bias depictions of cortical features. How orientation domains are defined will therefore impact the entire field. In this study, we test how spatial frequency (SF), stimulus size, location, chromaticity, and data processing methods affect orientation functional maps (including a large area of dorsal V4 and parts of dorsal V1) acquired by intrinsic signal optical imaging. Our results indicate that, for large imaging fields, large grating stimuli with mixed SF components should be considered when acquiring the orientation map. A diffusion-model image enhancement based on the difference map could further improve the map quality. In addition, the similar outcomes of achromatic and chromatic gratings indicate two alternative types of afferents from the LGN, pooling in V1 to generate cue-invariant orientation selectivity.
Affiliation(s)
- Mei-Lan Liu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China

- Yi-Peng Liu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China

- Xin-Xia Guo
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China

- Zhi-Yi Wu
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310010, China

- Xiao-Tong Zhang
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China; College of Electrical Engineering, Zhejiang University, Hangzhou 310000, China

- Anna Wang Roe
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China; The State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310058, China

- Jia-Ming Hu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China
26
Rotem O, Schwartz T, Maor R, Tauber Y, Shapiro MT, Meseguer M, Gilboa D, Seidman DS, Zaritsky A. Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization. Nat Commun 2024; 15:7390. [PMID: 39191720] [DOI: 10.1038/s41467-024-51136-9]
Abstract
The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a "black box" lacking human-meaningful explanations for the model's decisions. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables "human-in-the-loop" interpretation by generating disentangled, exaggerated counterfactual explanations. We apply DISCOVER to interpret classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties without previous explicit measurements, and quantitatively determine and empirically verify the classification decision of specific embryo instances. We show that DISCOVER provides human-interpretable understanding of "black box" classification models, proposes hypotheses to decipher underlying biomedical mechanisms, and provides transparency for the classification of individual predictions.
Affiliation(s)
- Oded Rotem
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel

- Ron Maor
- AIVF Ltd., Tel Aviv, 69271, Israel

- Marcos Meseguer
- IVI Foundation, Instituto de Investigación Sanitaria La Fe, Valencia, 46026, Spain
- Department of Reproductive Medicine, IVIRMA Valencia, 46015, Valencia, Spain

- Daniel S Seidman
- AIVF Ltd., Tel Aviv, 69271, Israel
- The Faculty of Medicine, Tel Aviv University, Tel-Aviv, 69978, Israel

- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel
27
Carnevali D, Zhong L, González-Almela E, Viana C, Rotkevich M, Wang A, Franco-Barranco D, Gonzalez-Marfil A, Neguembor MV, Castells-Garcia A, Arganda-Carreras I, Cosma MP. A deep learning method that identifies cellular heterogeneity using nanoscale nuclear features. Nat Mach Intell 2024; 6:1021-1033. [PMID: 39309215] [PMCID: PMC11415298] [DOI: 10.1038/s42256-024-00883-x]
Abstract
Cellular phenotypic heterogeneity is an important hallmark of many biological processes, and understanding its origins remains a substantial challenge. This heterogeneity often reflects variations in chromatin structure, influenced by factors such as viral infections and cancer, which dramatically reshape the cellular landscape. To address the challenge of identifying distinct cell states, we developed artificial intelligence of the nucleus (AINU), a deep learning method that can identify specific nuclear signatures at nanoscale resolution. AINU can distinguish different cell states based on the spatial arrangement of core histone H3, RNA polymerase II or DNA from super-resolution microscopy images. With only a small number of images as training data, AINU correctly identifies human somatic cells, human-induced pluripotent stem cells, very early stage infected cells transduced with DNA herpes simplex virus type 1 and even cancer cells after appropriate retraining. Finally, using AI interpretability methods, we find that the RNA polymerase II localizations in the nucleoli aid in distinguishing human-induced pluripotent stem cells from their somatic counterparts. Overall, AINU coupled with super-resolution microscopy of nuclear structures provides a robust tool for the precise detection of cellular heterogeneity, with considerable potential for advancing diagnostics and therapies in regenerative medicine, virology and cancer biology.
Affiliation(s)
- Davide Carnevali
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain

- Limei Zhong
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China

- Esther González-Almela
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China

- Carlotta Viana
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain

- Mikhail Rotkevich
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain

- Aiping Wang
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China

- Daniel Franco-Barranco
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain

- Aitor Gonzalez-Marfil
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain

- Maria Victoria Neguembor
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain

- Alvaro Castells-Garcia
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China

- Ignacio Arganda-Carreras
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Biofisika Institute, Barrio Sarrena s/n, Leioa, Spain

- Maria Pia Cosma
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- ICREA, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
28
Mihelic SA, Engelmann SA, Sadr M, Jafari CZ, Zhou A, Woods AL, Williamson MR, Jones TA, Dunn AK. Microvascular plasticity in mouse stroke model recovery: Anatomy statistics, dynamics measured by longitudinal in vivo two-photon angiography, network vectorization. J Cereb Blood Flow Metab 2024; 271678X241270465. [PMID: 39113424] [PMCID: PMC11572002] [DOI: 10.1177/0271678x241270465]
Abstract
This manuscript quantitatively investigates remodeling dynamics of the cortical microvascular network (thousands of connected capillaries) following photothrombotic ischemia (cubic millimeter volume, imaged weekly) using a novel in vivo two-photon angiography and high-throughput vascular vectorization method. The results suggest distinct temporal patterns of cerebrovascular plasticity, with acute remodeling peaking at one week post-stroke. The network architecture then gradually stabilizes, returning to a new steady state after four weeks. These findings align with previous literature on neuronal plasticity, highlighting the correlation between neuronal and neurovascular remodeling. Quantitative analysis of neurovascular networks using length- and strand-based statistical measures reveals intricate changes in network anatomy and topology. The distance and strand-length statistics show significant alterations, with a peak of plasticity observed at one week post-stroke, followed by a gradual return to baseline. The orientation statistic plasticity peaks at two weeks, gradually approaching the (conserved across subjects) stroke signature. The underlying mechanism of the vascular response (angiogenesis vs. tissue deformation), however, remains unexplored. Overall, the combination of chronic two-photon angiography, vascular vectorization, reconstruction/visualization, and statistical analysis enables both qualitative and quantitative assessments of neurovascular remodeling dynamics, demonstrating a method for investigating cortical microvascular network disorders and the therapeutic modes of action thereof.
Affiliation(s)
- Samuel A Mihelic
- Biomedical Engineering Department, University of Texas at Austin, Austin, TX, USA

- Shaun A Engelmann
- Biomedical Engineering Department, University of Texas at Austin, Austin, TX, USA

- Mahdi Sadr
- Biomedical Engineering Department, University of Texas at Austin, Austin, TX, USA

- Chakameh Z Jafari
- Biomedical Engineering Department, University of Texas at Austin, Austin, TX, USA

- Annie Zhou
- Biomedical Engineering Department, University of Texas at Austin, Austin, TX, USA

- Aaron L Woods
- Biomedical Engineering Department, University of Texas at Austin, Austin, TX, USA

- Theresa A Jones
- Institute for Neuroscience, University of Texas at Austin, Austin, TX, USA

- Andrew K Dunn
- Biomedical Engineering Department, University of Texas at Austin, Austin, TX, USA
29
Rehman A, Zhovmer A, Sato R, Mukouyama YS, Chen J, Rissone A, Puertollano R, Liu J, Vishwasrao HD, Shroff H, Combs CA, Xue H. Convolutional neural network transformer (CNNT) for fluorescence microscopy image denoising with improved generalization and fast adaptation. Sci Rep 2024; 14:18184. [PMID: 39107416] [PMCID: PMC11303381] [DOI: 10.1038/s41598-024-68918-2]
Abstract
Deep neural networks can improve the quality of fluorescence microscopy images. Previous methods, based on Convolutional Neural Networks (CNNs), require time-consuming training of individual models for each experiment, impairing their applicability and generalization. In this study, we propose a novel imaging-transformer based model, Convolutional Neural Network Transformer (CNNT), that outperforms CNN based networks for image denoising. We train a general CNNT based backbone model from pairwise high-low Signal-to-Noise Ratio (SNR) image volumes, gathered from a single type of fluorescence microscope, an instant Structured Illumination Microscope. Fast adaptation to new microscopes is achieved by fine-tuning the backbone on only 5-10 image volume pairs per new experiment. Results show that the CNNT backbone and fine-tuning scheme significantly reduces training time and improves image quality, outperforming models trained using only CNNs such as 3D-RCAN and Noise2Fast. We show three examples of efficacy of this approach in wide-field, two-photon, and confocal fluorescence microscopy.
Affiliation(s)
- Azaan Rehman
- Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA

- Alexander Zhovmer
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration (FDA), Silver Spring, MD, 20903, USA

- Ryo Sato
- Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA

- Yoh-Suke Mukouyama
- Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA

- Jiji Chen
- Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA

- Alberto Rissone
- Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA

- Rosa Puertollano
- Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA

- Jiamin Liu
- Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA

- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA

- Christian A Combs
- Light Microscopy Core, National Heart, Lung, and Blood Institute, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD, 20892, USA

- Hui Xue
- Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA
- Health Futures, Microsoft Research, Redmond, Washington, 98052, USA
30. Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. [PMID: 38838549 DOI: 10.1016/j.ceb.2024.102378]
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, the in silico localization of organelles from transmitted-light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained without any demonstration of in silico labeling being used to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
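The cross-modality translation task can be sketched at toy scale: learn a per-pixel mapping from an input channel to a "marker" channel from paired training images, then apply it to an unseen image. Real in silico labeling uses deep convolutional networks on transmitted-light stacks; here both the data and the hidden input-to-marker mapping are invented, and the model is a hand-rolled least-squares regression on simple per-pixel features.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    """Per-pixel features: intensity, its square, a local 3x3 mean, and a bias."""
    p = np.pad(img, 1, mode="reflect")
    local = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    return np.stack([img.ravel(), (img ** 2).ravel(), local.ravel(),
                     np.ones(img.size)], axis=1)

def make_pair():
    """Synthetic input image and a 'marker' channel derived from it."""
    x = rng.random((48, 48))
    y = 0.7 * x ** 2 + 0.3 * x  # hidden mapping the model must learn
    return x, y

# Fit the translation model on a few paired images.
train = [make_pair() for _ in range(4)]
A = np.vstack([features(x) for x, _ in train])
b = np.concatenate([y.ravel() for _, y in train])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Predict the marker channel for an image the model never saw.
x_test, y_test = make_pair()
y_pred = (features(x_test) @ coef).reshape(x_test.shape)
err = float(np.mean((y_pred - y_test) ** 2))
```

Because the invented mapping lies in the feature span, the fit recovers it almost exactly; the practical difficulty the review discusses is precisely that real marker signals are not simple functions of local input intensity.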
Affiliation(s)
- Nitsan Elmalam
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
31. Ma C, Tan W, He R, Yan B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat Methods 2024; 21:1558-1567. [PMID: 38609490 DOI: 10.1038/s41592-024-02244-3]
Abstract
Fluorescence microscopy-based image restoration has received widespread attention in the life sciences and, benefiting from deep learning technology, has led to significant progress. However, most current task-specific methods have limited generalizability across different fluorescence microscopy-based image restoration problems. Here, we seek to improve generalizability and explore the potential of applying a pretrained foundation model to fluorescence microscopy-based image restoration. We provide a universal fluorescence microscopy-based image restoration (UniFMIR) model to address different restoration problems, and show that UniFMIR offers higher image restoration precision, better generalization, and increased versatility. Demonstrations on five tasks and 14 datasets covering a wide range of microscopy imaging modalities and biological samples show that the pretrained UniFMIR can effectively transfer knowledge to a specific situation via fine-tuning, uncover clear nanoscale biomolecular structures, and facilitate high-quality imaging. This work has the potential to inspire new research directions in fluorescence microscopy-based image restoration.
Affiliation(s)
- Chenxi Ma
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Weimin Tan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Ruian He
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Bo Yan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
32. Cam RM, Villa U, Anastasio MA. Learning a stable approximation of an existing but unknown inverse mapping: application to the half-time circular Radon transform. Inverse Problems 2024; 40:085002. [PMID: 38933410 PMCID: PMC11197394 DOI: 10.1088/1361-6420/ad4f0a]
Abstract
Supervised deep learning-based methods have inspired a new wave of image reconstruction methods that implicitly learn effective regularization strategies from a set of training data. While they hold potential for improving image quality, they have also raised concerns regarding their robustness. Instabilities can manifest when learned methods are applied to find approximate solutions to ill-posed image reconstruction problems for which a unique and stable inverse mapping does not exist, which is a typical use case. In this study, we investigate the performance of supervised deep learning-based image reconstruction in an alternate use case in which a stable inverse mapping is known to exist but is not yet analytically available in closed form. For such problems, a deep learning-based method can learn a stable approximation of the unknown inverse mapping that generalizes well to data that differ significantly from the training set. The learned approximation of the inverse mapping eliminates the need to employ an implicit (optimization-based) reconstruction method and can potentially yield insights into the unknown analytic inverse formula. The specific problem addressed is image reconstruction from a particular case of radially truncated circular Radon transform (CRT) data, referred to as 'half-time' measurement data. For the half-time image reconstruction problem, we develop and investigate a learned filtered backprojection method that employs a convolutional neural network to approximate the unknown filtering operation. We demonstrate that this method behaves stably and readily generalizes to data that differ significantly from training data. The developed method may find application to wave-based imaging modalities that include photoacoustic computed tomography.
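The idea of learning a stable approximation to an inverse map that exists but is not available in closed form can be sketched in one dimension, well away from the paper's circular Radon transform. In this invented toy, the forward map is a known circular blur whose inverse filter we pretend not to know analytically; a per-frequency least-squares fit learns it from training pairs, and we check generalization on a signal class absent from training.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# Known forward map: a circular binomial blur (frequency response cos^4).
kernel = np.zeros(n)
kernel[[0, 1, 2, -2, -1]] = np.array([6, 4, 1, 1, 4]) / 16.0
kf = np.fft.fft(kernel)

def forward(x, noise=0.01):
    y = np.real(np.fft.ifft(np.fft.fft(x) * kf))
    return y + noise * rng.standard_normal(n)

# Training pairs: smooth random-walk signals and their blurred measurements.
X = np.stack([np.cumsum(rng.standard_normal(n)) for _ in range(200)])
X -= X.mean(axis=1, keepdims=True)
Y = np.stack([forward(x) for x in X])

# Per-frequency least-squares fit of an inverse filter G mapping y -> x.
# Noise in the training data automatically regularizes G near the kernel's
# zeros, which is what makes the learned inverse stable.
Xf, Yf = np.fft.fft(X, axis=1), np.fft.fft(Y, axis=1)
G = (np.conj(Yf) * Xf).sum(axis=0) / (np.abs(Yf) ** 2).sum(axis=0)

def reconstruct(y):
    return np.real(np.fft.ifft(np.fft.fft(y) * G))

# Generalization check: a square wave, unlike anything in the training set.
x_test = np.sign(np.sin(np.linspace(0, 4 * np.pi, n, endpoint=False)))
y_test = forward(x_test)
err_measured = float(np.mean((y_test - x_test) ** 2))
err_learned = float(np.mean((reconstruct(y_test) - x_test) ** 2))
```

The learned filter plays the role of the paper's CNN-approximated filtering operation in a learned filtered backprojection; the square-wave test mirrors the paper's point that such a map can generalize beyond the training distribution.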
Affiliation(s)
- Refik Mert Cam
- Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
- Umberto Villa
- Oden Institute for Computational Engineering & Sciences, The University of Texas at Austin, Austin, TX 78712, United States of America
- Mark A Anastasio
- Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
- Department of Bioengineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
33. Liu J, Gao F, Zhang L, Yang H. A Saturation Artifacts Inpainting Method Based on Two-Stage GAN for Fluorescence Microscope Images. Micromachines 2024; 15:928. [PMID: 39064439 PMCID: PMC11279111 DOI: 10.3390/mi15070928]
Abstract
Fluorescence microscopy images of cells contain a large number of morphological features that serve as an unbiased source of quantitative information about cell status; by extracting this information, researchers can study cellular phenomena through statistical analysis. Because images are the primary object of phenotypic analysis, their quality strongly influences research results. Saturation artifacts in an image cause a loss of grayscale information, so the recorded values no longer reflect the true fluorescence intensity. From the perspective of data post-processing, we propose a two-stage cell image recovery model based on a generative adversarial network to solve the problem of phenotypic feature loss caused by saturation artifacts. The model is capable of restoring large areas of missing phenotypic features. In the experiments, we adopt a progressive restoration strategy to improve training robustness and add a contextual attention structure to enhance the stability of the restoration. We hope that deep learning methods can mitigate the effects of saturation artifacts and help reveal how chemical, genetic, and environmental factors affect cell state, providing an effective tool for studying biological variability and improving image quality for analysis.
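What saturation destroys, and why restoring it requires a model rather than interpolation, can be shown with a classical stand-in for the paper's GAN: fit a parametric intensity model to the reliable (unsaturated) pixels and extrapolate it into the clipped region. The Gaussian spot, the saturation level, and the log-quadratic model are all invented for this sketch; smoothing-based inpainting cannot exceed the saturation value, whereas a fitted model can.

```python
import numpy as np

# Smooth synthetic "fluorescence" spot whose bright core saturates at 1.0.
yy, xx = np.mgrid[0:64, 0:64]
truth = 1.3 * np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 200.0)
observed = np.clip(truth, 0.0, 1.0)   # saturation clips the bright core
mask = observed >= 1.0                # saturated pixels to restore

def design(x, y):
    """Design matrix for log(I) = a*(x^2 + y^2) + b*x + c*y + d."""
    return np.stack([x * x + y * y, x, y, np.ones_like(x)], axis=1)

# Fit the model only on reliable (unsaturated) pixels.
A = design(xx[~mask].astype(float), yy[~mask].astype(float))
coef, *_ = np.linalg.lstsq(A, np.log(observed[~mask]), rcond=None)

# Extrapolate into the saturated region, recovering values above 1.0.
restored = observed.copy()
restored[mask] = np.exp(
    design(xx[mask].astype(float), yy[mask].astype(float)) @ coef)

err_clipped = float(np.mean((observed[mask] - truth[mask]) ** 2))
err_restored = float(np.mean((restored[mask] - truth[mask]) ** 2))
```

The GAN in the paper serves the same role as the parametric model here, but learns the "shape prior" for cell phenotypes from data instead of assuming a Gaussian spot.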
Affiliation(s)
- Jihong Liu
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
- Fei Gao
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
- Lvheng Zhang
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
- Haixu Yang
- Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, China
34. Ertürk A. Deep 3D histology powered by tissue clearing, omics and AI. Nat Methods 2024; 21:1153-1165. [PMID: 38997593 DOI: 10.1038/s41592-024-02327-1]
Abstract
To comprehensively understand tissue and organism physiology and pathophysiology, it is essential to create complete three-dimensional (3D) cellular maps. These maps require structural data, such as the 3D configuration and positioning of tissues and cells, and molecular data on the constitution of each cell, spanning from the DNA sequence to protein expression. While single-cell transcriptomics is illuminating the cellular and molecular diversity across species and tissues, the 3D spatial context of these molecular data is often overlooked. Here, I discuss emerging 3D tissue histology techniques that add the missing third spatial dimension to biomedical research. Through innovations in tissue-clearing chemistry, labeling and volumetric imaging that enhance 3D reconstructions and their synergy with molecular techniques, these technologies will provide detailed blueprints of entire organs or organisms at the cellular level. Machine learning, especially deep learning, will be essential for extracting meaningful insights from the vast data. Further development of integrated structural, molecular and computational methods will unlock the full potential of next-generation 3D histology.
Affiliation(s)
- Ali Ertürk
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians University, Munich, Germany
- School of Medicine, Koç University, İstanbul, Turkey
- Deep Piction GmbH, Munich, Germany
35. Cao Y, Xu B, Li B, Fu H. Advanced Design of Soft Robots with Artificial Intelligence. Nano-Micro Letters 2024; 16:214. [PMID: 38869734 PMCID: PMC11176285 DOI: 10.1007/s40820-024-01423-3]
Abstract
A comprehensive review of whole soft robotic systems with artificial intelligence (AI), that is, systems that can feel, think, react, and interact with humans, is presented. Design strategies concerning various aspects of soft robotics, such as component materials, device structures, fabrication technologies, integration methods, and potential applications, are summarized, and a broad outlook on future considerations for soft robots is proposed. In recent years, breakthroughs in AI have revolutionized the robotics industry. Soft robots, featuring high safety, low weight, and low power consumption, have long been a research hotspot. Recently, multifunctional sensors for the perception of soft robots have developed rapidly, while more machine learning algorithms and models with high accuracy have been optimized and proposed. Designs of soft robots with AI have also advanced, ranging from multimodal sensing and human–machine interaction to effective actuation in robotic systems. Nonetheless, comprehensive reviews of new developments and strategies for the ingenious design of soft robotic systems equipped with AI are rare. Here, new developments in the field of soft robots with AI are systematically reviewed. First, the background and mechanisms of soft robotic systems are briefly introduced, after which developments in endowing soft robots with AI, covering feeling, thought, and reaction, are illustrated. Next, applications of soft robots with AI are systematically summarized and discussed, together with advanced strategies proposed for performance enhancement. Design considerations for future intelligent soft robotics are pointed out, and some perspectives are put forward.
Affiliation(s)
- Ying Cao
- Nanotechnology Center, School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong, 999077, People's Republic of China
- Bingang Xu
- Nanotechnology Center, School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong, 999077, People's Republic of China
- Bin Li
- Bioinspired Engineering and Biomechanics Center, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Hong Fu
- Department of Mathematics and Information Technology, The Education University of Hong Kong, Hong Kong, 999077, People's Republic of China
36. Perez-Lopez R, Ghaffari Laleh N, Mahmood F, Kather JN. A guide to artificial intelligence for cancer researchers. Nat Rev Cancer 2024; 24:427-441. [PMID: 38755439 DOI: 10.1038/s41568-024-00694-7]
Abstract
Artificial intelligence (AI) has been commoditized. It has evolved from a specialty resource to a readily accessible tool for cancer researchers. AI-based tools can boost research productivity in daily workflows, but can also extract hidden information from existing data, thereby enabling new scientific discoveries. Building a basic literacy in these tools is useful for every cancer researcher. Researchers with a traditional biological science focus can use AI-based tools through off-the-shelf software, whereas those who are more computationally inclined can develop their own AI-based software pipelines. In this article, we provide a practical guide for non-computational cancer researchers to understand how AI-based tools can benefit them. We convey general principles of AI for applications in image analysis, natural language processing and drug discovery. In addition, we give examples of how non-computational researchers can get started on the journey to productively use AI in their own work.
Affiliation(s)
- Raquel Perez-Lopez
- Radiomics Group, Vall d'Hebron Institute of Oncology, Vall d'Hebron Barcelona Hospital Campus, Barcelona, Spain
- Narmin Ghaffari Laleh
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Department of Medicine I, University Hospital Dresden, Dresden, Germany
- Medical Oncology, National Center for Tumour Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
37. Aghigh A, Jargot G, Zaouter C, Preston SEJ, Mohammadi MS, Ibrahim H, Del Rincón SV, Patten K, Légaré F. A comparative study of CARE 2D and N2V 2D for tissue-specific denoising in second harmonic generation imaging. Journal of Biophotonics 2024; 17:e202300565. [PMID: 38566461 DOI: 10.1002/jbio.202300565]
Abstract
This study explored the application of deep learning in second harmonic generation (SHG) microscopy, a rapidly growing area. It focuses on the impact of glycerol concentration on image noise in SHG microscopy and compares two image restoration techniques: Noise2Void 2D (N2V 2D, no-reference image restoration) and content-aware image restoration (CARE 2D, full-reference image restoration). We demonstrate that N2V 2D effectively restored images affected by high glycerol concentrations. To reduce sample exposure and damage, the study further addresses low-power SHG imaging, using deep learning techniques to compensate for a 70% reduction in laser power. CARE 2D excels at preserving detailed structures, whereas N2V 2D maintains natural muscle structure. This study highlights the strengths and limitations of these models in specific SHG microscopy applications, offering valuable insights and potential advancements in the field.
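Comparisons like CARE versus N2V are typically scored against a reference image with a full-reference metric such as peak signal-to-noise ratio (PSNR). A minimal implementation follows; the "restored" arrays are synthetic stand-ins, not outputs of either model.

```python
import numpy as np

def psnr(reference, restored, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a restoration."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(restored, float)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))

rng = np.random.default_rng(3)
clean = rng.random((64, 64))                                    # reference
noisy = np.clip(clean + 0.10 * rng.standard_normal(clean.shape), 0, 1)
denoised = np.clip(clean + 0.02 * rng.standard_normal(clean.shape), 0, 1)

p_noisy = psnr(clean, noisy)        # baseline quality of the raw acquisition
p_denoised = psnr(clean, denoised)  # quality after (simulated) restoration
```

A higher PSNR for the restored image than for the raw one is the quantitative form of "effectively restored"; no-reference methods like N2V still need such a reference at evaluation time, even though they train without one.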
Affiliation(s)
- Arash Aghigh
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Gaëtan Jargot
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Charlotte Zaouter
- Armand-Frappier Santé Biotechnologie Research Centre, Laval, Québec, Canada
- Samuel E J Preston
- Department of Experimental Medicine, Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Gerald Bronfman Department of Oncology, Segal Cancer Centre, Lady Davis Institute and Jewish General Hospital, McGill University, Montréal, Québec, Canada
- Melika Saadat Mohammadi
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Heide Ibrahim
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Sonia V Del Rincón
- Department of Experimental Medicine, Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Gerald Bronfman Department of Oncology, Segal Cancer Centre, Lady Davis Institute and Jewish General Hospital, McGill University, Montréal, Québec, Canada
- Kessen Patten
- Armand-Frappier Santé Biotechnologie Research Centre, Laval, Québec, Canada
- François Légaré
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
38. Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. [PMID: 38378991 DOI: 10.1038/s41580-024-00702-6]
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intra-cellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what is possible to capture in living systems. In this Review, we discuss these computational methods focusing on artificial intelligence-based approaches that can be layered on top of commonly used existing microscopies as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Affiliation(s)
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Ilaria Testa
- Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
- Florian Jug
- Fondazione Human Technopole (HT), Milan, Italy
- Suliana Manley
- Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
39. Qiao C, Zeng Y, Meng Q, Chen X, Chen H, Jiang T, Wei R, Guo J, Fu W, Lu H, Li D, Wang Y, Qiao H, Wu J, Li D, Dai Q. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat Commun 2024; 15:4180. [PMID: 38755148 PMCID: PMC11099110 DOI: 10.1038/s41467-024-48575-9]
Abstract
Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance; however, they demand abundant high-quality training data, which are laborious and sometimes impractical to acquire owing to the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit with 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner without the need for either ground truths or additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet on multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, which enables multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
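The core trick behind such ground-truth-free training, learning from the noisy data alone, can be shown in linear form with the blind-spot principle: predict each pixel from its neighbors only. Because pixel-independent noise cannot be predicted from other pixels, the fit converges toward the underlying signal. This sketch uses an invented smooth scene and a single learned 8-weight filter in place of ZS-DeconvNet's network, and it illustrates only the self-supervised denoising idea, not the deconvolution part.

```python
import numpy as np

rng = np.random.default_rng(4)

def ring_patches(img):
    """The 8 neighbors of every pixel (reflect-padded), center excluded."""
    h, w = img.shape
    p = np.pad(img, 1, mode="reflect")
    offsets = [(i, j) for i in range(3) for j in range(3) if (i, j) != (1, 1)]
    return np.stack([p[i:i + h, j:j + w].ravel() for i, j in offsets], axis=1)

# Smooth ground truth (a 5x5 box average of random values) plus
# pixel-independent Gaussian noise.
base = rng.random((68, 68))
clean = sum(np.roll(np.roll(base, i, 0), j, 1)
            for i in range(-2, 3) for j in range(-2, 3))[2:-2, 2:-2] / 25.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# Blind-spot self-supervision: fit neighbors -> center on the noisy image
# itself. No clean target is ever used.
A = ring_patches(noisy)
w, *_ = np.linalg.lstsq(A, noisy.ravel(), rcond=None)
denoised = (A @ w).reshape(noisy.shape)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

The clean image here is only used to score the result; the fit itself saw nothing but the single noisy frame, which is the "zero-shot" regime the paper targets.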
Affiliation(s)
- Chang Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Yunmin Zeng
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Quan Meng
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Xingye Chen
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Research Institute for Frontier Science, Beihang University, 100191, Beijing, China
- Haoyu Chen
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Tao Jiang
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Rongfei Wei
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Jiabao Guo
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Wenfeng Fu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Huaide Lu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Di Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Yuwang Wang
- Beijing National Research Center for Information Science and Technology, Tsinghua University, 100084, Beijing, China
- Hui Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Dong Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Qionghai Dai
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
40. Hauser SL, Brosig J, Murthy B, Attardo A, Kist AM. Implicit neural representations in light microscopy. Biomedical Optics Express 2024; 15:2175-2186. [PMID: 38633078 PMCID: PMC11019677 DOI: 10.1364/boe.515517]
Abstract
Three-dimensional stacks acquired with confocal or two-photon microscopy are crucial for studying neuroanatomy. However, high-resolution image stacks acquired at multiple depths are time-consuming and susceptible to photobleaching, and in vivo microscopy is further prone to motion artifacts. In this work, we suggest that deep neural networks with sine activation functions encoding implicit neural representations (SIRENs) are suitable for predicting intermediate planes and correcting motion artifacts, addressing the aforementioned shortcomings. We show that we can accurately estimate intermediate planes across multiple micrometers, and that we can estimate a motion-corrected, denoised image fully automatically and without supervision. We show that noise statistics can be altered by SIRENs but recovered by a downstream denoising neural network, demonstrated exemplarily by the recovery of dendritic spines. We believe that applying these technologies will enable more efficient acquisition and superior post-processing in the future.
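The idea of a continuous, coordinate-based representation that can be queried between acquired planes can be sketched with a drastically simplified, fixed-frequency stand-in for a SIREN: sine features of the coordinate with only a linear readout fitted by least squares. A real SIREN trains all of its sine-layer weights by gradient descent; the 1-D "depth signal" and the frequency set below are invented for illustration.

```python
import numpy as np

# Fixed sine-layer frequencies (harmonics); a SIREN would learn these.
freqs = 2.0 * np.pi * np.arange(1, 9)

def sine_features(t):
    """Sine-activated features of a coordinate, plus a bias column."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.sin(f * t), np.sin(f * t + np.pi / 2)]
    return np.stack(cols, axis=1)

def signal(t):
    """Invented smooth 'intensity vs depth' profile to be represented."""
    return np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)

# "Acquired planes": the signal sampled at 33 coarse depths.
t_train = np.linspace(0.0, 1.0, 33)
coef, *_ = np.linalg.lstsq(sine_features(t_train), signal(t_train), rcond=None)

# Query the continuous representation at intermediate depths never acquired.
t_mid = (t_train[:-1] + t_train[1:]) / 2.0
pred = sine_features(t_mid) @ coef
err = float(np.max(np.abs(pred - signal(t_mid))))
```

Because the representation is a continuous function of the coordinate, evaluating it between training depths is exactly the "predict intermediate planes" use case; in the paper the input coordinates are 3-D and the readout is a full sine-activated network.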
Affiliation(s)
- Sophie Louise Hauser
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
- Andreas M. Kist
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
41. Chen R, Xu J, Wang B, Ding Y, Abdulla A, Li Y, Jiang L, Ding X. SpiDe-Sr: blind super-resolution network for precise cell segmentation and clustering in spatial proteomics imaging. Nat Commun 2024; 15:2708. [PMID: 38548720 PMCID: PMC10978886 DOI: 10.1038/s41467-024-46989-z]
Abstract
Spatial proteomics elucidates cellular biochemical changes at an unprecedented topological level. Imaging mass cytometry (IMC) is a high-dimensional, single-cell-resolution platform for targeted spatial proteomics. However, the precision of subsequent clinical analysis is constrained by imaging noise and resolution. Here, we propose SpiDe-Sr, a super-resolution network embedded with a denoising module for enhancing IMC spatial resolution. SpiDe-Sr effectively resists noise and improves resolution by a factor of four. We demonstrate SpiDe-Sr on cells, mouse tissues and human tissues, obtaining respective increases of 18.95%/27.27%/21.16% in peak signal-to-noise ratio and 15.95%/31.63%/15.52% in cell extraction accuracy. We further apply SpiDe-Sr to study the tumor microenvironment of a 20-patient clinical breast cancer cohort with 269,556 single cells, and discover that the invasion of Gram-negative bacteria is positively correlated with carcinogenesis markers and negatively correlated with immunological markers. Additionally, SpiDe-Sr is compatible with fluorescence microscopy imaging, suggesting SpiDe-Sr as an alternative tool for microscopy image super-resolution.
Grants
- This work was supported by the National Key R&D Program of China (2022YFC2601700, 2022YFF0710202), NSFC Projects (T2122002, 22077079, 81871448), Shanghai Municipal Science and Technology Project (22Z510202478), Shanghai Municipal Education Commission Project (21SG10), Shanghai Jiao Tong University Projects (YG2021ZD19, Agri-X20200101, 2020 SJTU-HUJI), and Shanghai Municipal Health Commission Project (2019CXJQ03). We thank AEMD SJTU and the Shanghai Jiao Tong University Laboratory Animal Center for their support.
Affiliation(s)
- Rui Chen
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jiasu Xu
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Boqian Wang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yi Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Aynur Abdulla
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiyang Li
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Lai Jiang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xianting Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
42
Tabata K, Kawagoe H, Taylor JN, Mochizuki K, Kubo T, Clement JE, Kumamoto Y, Harada Y, Nakamura A, Fujita K, Komatsuzaki T. On-the-fly Raman microscopy guaranteeing the accuracy of discrimination. Proc Natl Acad Sci U S A 2024; 121:e2304866121. [PMID: 38483992 PMCID: PMC10962959 DOI: 10.1073/pnas.2304866121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Accepted: 12/15/2023] [Indexed: 03/19/2024] Open
Abstract
Accelerating measurements for the discrimination of samples, such as classification of cell phenotypes, is crucial when faced with significant time and cost constraints. Spontaneous Raman microscopy offers label-free, rich chemical information but suffers from long acquisition times due to extremely small scattering cross-sections. One approach to accelerating the measurement is to measure only the necessary parts with a suitable number of illumination points; however, how to design these points during the measurement remains a challenge. To address this, we developed an imaging technique based on reinforcement learning, a branch of machine learning (ML). This ML approach adaptively feeds back an "optimal" illumination pattern during the measurement to detect the existence of specific characteristics of interest, allowing faster measurements while guaranteeing discrimination accuracy. Using a set of Raman images of human follicular thyroid and follicular thyroid carcinoma cells, we showed that our technique requires 3,333 to 31,683 times fewer illuminations for discriminating the phenotypes than raster scanning. To quantitatively evaluate the number of illuminations required as a function of the requisite discrimination accuracy, we prepared a set of polymer bead mixture samples to model anomalous and normal tissues. We then applied a home-built programmable-illumination microscope equipped with our algorithm and confirmed that the system can discriminate the sample conditions with 104 to 4,350 times fewer illuminations than standard point-illumination Raman microscopy. The proposed algorithm can be applied to other types of microscopy that can control measurement conditions on the fly, offering an approach for accelerating accurate measurements in various applications, including medical diagnosis.
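The paper's reinforcement-learning controller is not reproduced here, but the underlying idea — keep illuminating a spot only until a statistical guarantee is met — can be illustrated with Wald's sequential probability ratio test on toy Poisson photon counts. The rates, error targets, and 200-trial evaluation below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def sprt_poisson(draw, rate0, rate1, alpha=0.01, beta=0.01, max_n=10_000):
    """Wald's sequential probability ratio test between two Poisson photon rates.

    Takes one measurement at a time from `draw()` and stops as soon as the
    accumulated log-likelihood ratio crosses a threshold, so easy cases need
    far fewer illuminations than a fixed-size raster scan while the error
    rates stay bounded by (alpha, beta).
    """
    upper = np.log((1 - beta) / alpha)   # cross upward -> decide rate1 ("anomalous")
    lower = np.log(beta / (1 - alpha))   # cross downward -> decide rate0 ("normal")
    llr = 0.0
    for n in range(1, max_n + 1):
        k = draw()
        # Per-observation Poisson log-likelihood ratio log f1(k) / f0(k).
        llr += k * np.log(rate1 / rate0) - (rate1 - rate0)
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return (1 if llr > 0 else 0), max_n

rng = np.random.default_rng(0)
# Toy scene: an "anomalous" pixel emits ~8 photons per illumination, a "normal" one ~2.
trials = [sprt_poisson(lambda: rng.poisson(8.0), 2.0, 8.0) for _ in range(200)]
accuracy = np.mean([d == 1 for d, n in trials])
mean_n = np.mean([n for d, n in trials])
print(accuracy, mean_n)
```

The separation between the stopping boundaries, not a fixed sample size, is what controls accuracy — the same principle as the paper's accuracy-guaranteed early stopping, though its controller additionally chooses *where* to illuminate.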
Affiliation(s)
- Koji Tabata
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
- Hiroyuki Kawagoe
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- J. Nicholas Taylor
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Kentaro Mochizuki
- Department of Pathology and Cell Regulation, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Kyoto, Japan
- Toshiki Kubo
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Jean-Emmanuel Clement
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
- Yasuaki Kumamoto
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Yoshinori Harada
- Department of Pathology and Cell Regulation, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Kyoto, Japan
- Atsuyoshi Nakamura
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Hokkaido, Japan
- Katsumasa Fujita
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Advanced Photonics and Biosensing Open Innovation Laboratory, AIST-Osaka University, Suita 565-0871, Osaka, Japan
- Tamiki Komatsuzaki
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Graduate School of Chemical Sciences and Engineering, Materials Chemistry and Engineering Course, Hokkaido University, Sapporo 060-0812, Hokkaido, Japan
- The Institute of Scientific and Industrial Research, Osaka University, Ibaraki 567-0047, Osaka, Japan
43
Shen B, Li Z, Pan Y, Guo Y, Yin Z, Hu R, Qu J, Liu L. Noninvasive Nonlinear Optical Computational Histology. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2024; 11:e2308630. [PMID: 38095543 PMCID: PMC10916666 DOI: 10.1002/advs.202308630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Revised: 11/28/2023] [Indexed: 03/07/2024]
Abstract
Cancer remains a global health challenge, demanding early detection and accurate diagnosis for improved patient outcomes. An intelligent paradigm is introduced that elevates label-free nonlinear optical imaging with contrastive patch-wise learning, yielding stain-free nonlinear optical computational histology (NOCH). NOCH enables swift, precise diagnostic analysis of fresh tissues, reducing patient anxiety and healthcare costs. Nonlinear modalities are evaluated, including stimulated Raman scattering and multiphoton imaging, for their ability to enhance tumor microenvironment sensitivity, pathological analysis, and cancer examination. Quantitative analysis confirmed that NOCH images accurately reproduce nuclear morphometric features across different cancer stages. Key diagnostic features, such as nuclear morphology, size, and nuclear-cytoplasmic contrast, are well preserved. NOCH models also demonstrate promising generalization when applied to other pathological tissues. The study unites label-free nonlinear optical imaging with histopathology using contrastive learning to establish stain-free computational histology. NOCH provides a rapid, non-invasive, and precise approach to surgical pathology, holding immense potential for revolutionizing cancer diagnosis and surgical interventions.
Affiliation(s)
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Zhenglin Li
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Ying Pan
- China–Japan Union Hospital of Jilin University, Changchun 130033, China
- Yuan Guo
- Shaanxi Provincial Cancer Hospital, Xi'an 710065, China
- Zongyi Yin
- Shenzhen University General Hospital, Shenzhen 518055, China
- Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
44
Luo C, Pang W, Shen B, Zhao Z, Wang S, Hu R, Qu J, Gu B, Liu L. Data-driven coordinated attention deep learning for high-fidelity brain imaging denoising and inpainting. JOURNAL OF BIOPHOTONICS 2024; 17:e202300390. [PMID: 38168132 DOI: 10.1002/jbio.202300390] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2023] [Revised: 12/07/2023] [Accepted: 12/07/2023] [Indexed: 01/05/2024]
Abstract
Deep learning offers promise in enhancing low-quality images by addressing weak fluorescence signals, especially in deep in vivo mouse brain imaging. However, current methods struggle with photon scarcity and noise deep within in vivo mouse brains, and they often neglect tissue preservation. In this study, we propose an innovative in vivo cortical fluorescence image restoration approach combining signal enhancement, denoising, and inpainting. We curated a deep brain cortical image dataset and developed a novel deep brain coordinate attention restoration network (DeepCAR), integrating coordinate attention with optimized residual networks. Our method swiftly and accurately restores deep cortex images at depths exceeding 800 μm while preserving small-scale tissue structures. It boosts the peak signal-to-noise ratio (PSNR) by 6.94 dB for weak signals and by 11.22 dB for large noisy images. Crucially, we validate its effectiveness on external datasets whose noise distributions and structural features differ from those in our training data, showcasing real-time, high-performance image restoration.
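The PSNR gains quoted above follow the standard definition, 10·log10(peak²/MSE). A minimal helper is sketched below, evaluated on synthetic images rather than the paper's data; halving the noise standard deviation raises PSNR by roughly 6 dB.

```python
import numpy as np

def psnr(reference, restored, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, (64, 64))                               # synthetic "ground truth"
noisy = np.clip(clean + rng.normal(0.0, 0.10, clean.shape), 0.0, 1.0)  # raw acquisition
denoised = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)  # stand-in "restored" image

print(psnr(clean, noisy))
print(psnr(clean, denoised))
```

A restoration network's gain would be reported as exactly this difference between the restored-image PSNR and the raw-image PSNR, computed against a high-quality reference.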
Affiliation(s)
- Chenggui Luo
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Wen Pang
- Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Zewei Zhao
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Shiqi Wang
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Bobo Gu
- Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
45
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in role for AIs is needed - AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Affiliation(s)
- Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
- Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
46
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 11/17/2023] [Indexed: 02/15/2024]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, therefore improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
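A toy 1-D example shows why the naive baseline against which such networks are compared fails on moving structures: pixel-wise averaging of two frames produces ghosting, whereas even a crude motion estimate places the structure correctly at the intermediate time. The single-particle scene below is an illustrative assumption, not one of the paper's benchmarks.

```python
import numpy as np

# A particle sits at x=2 in frame t and at x=4 in frame t+2;
# the ground-truth intermediate frame (t+1) has it at x=3.
frame_a = np.zeros(8); frame_a[2] = 1.0
frame_b = np.zeros(8); frame_b[4] = 1.0
truth   = np.zeros(8); truth[3]   = 1.0

# Naive temporal interpolation: pixel-wise average of the two frames.
# This leaves two half-intensity "ghosts" at x=2 and x=4, none at x=3.
naive = 0.5 * (frame_a + frame_b)

# Motion-aware interpolation: estimate the displacement, then shift halfway.
shift = np.argmax(frame_b) - np.argmax(frame_a)  # crude motion estimate: +2 px
motion_aware = np.roll(frame_a, shift // 2)      # particle lands at x=3

print(float(np.abs(naive - truth).sum()))         # 2.0 -> ghosting error
print(float(np.abs(motion_aware - truth).sum()))  # 0.0 -> correct placement
```

Content-aware interpolation networks generalize this idea: instead of a single global shift, they learn per-pixel motion context from the image pair.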
Affiliation(s)
- Martin Priessner
- Department of Chemistry, Imperial College London, London, UK
- Centre of Excellence in Neurotechnology, Imperial College London, London, UK
- David C A Gaboriau
- Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
- Arlo Sheridan
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Tchern Lenn
- CRUK City of London Centre, UCL Cancer Institute, London, UK
- Carlos Garzon-Coral
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
- Alexander R Dunn
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Jonathan R Chubb
- Laboratory for Molecular Cell Biology, University College London, London, UK
- Aidan M Tousley
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
- Robbie G Majzner
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
- Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
- Ramon Vilar
- Department of Chemistry, Imperial College London, London, UK
- Romain F Laine
- Micrographia Bio, Translation and Innovation Hub, London, UK
47
Chang GH, Wu MY, Yen LH, Huang DY, Lin YH, Luo YR, Liu YD, Xu B, Leong KW, Lai WS, Chiang AS, Wang KC, Lin CH, Wang SL, Chu LA. Isotropic multi-scale neuronal reconstruction from high-ratio expansion microscopy with contrastive unsupervised deep generative models. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 244:107991. [PMID: 38185040 DOI: 10.1016/j.cmpb.2023.107991] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2023] [Revised: 12/10/2023] [Accepted: 12/19/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND AND OBJECTIVE Current methods for imaging reconstruction from high-ratio expansion microscopy (ExM) data are limited by anisotropic optical resolution and the requirement for extensive manual annotation, creating a significant bottleneck in the analysis of complex neuronal structures. METHODS We devised an innovative approach called the IsoGAN model, which uses a contrastive unsupervised generative adversarial network to sidestep these constraints. The model leverages multi-scale and isotropic neuron/protein/blood vessel morphology data to generate high-fidelity 3D representations of these structures, eliminating the need for rigorous manual annotation and supervision. IsoGAN introduces simplified structures with idealized morphologies as shape priors to ensure high consistency of the generated neuronal profiles across all points in space and scalability to arbitrarily large volumes. RESULTS The IsoGAN model accurately reconstructed complex neuronal structures, as evidenced quantitatively by the consistency between axial and lateral views and by a reduction in erroneous imaging artifacts, and it can be further applied to various biological samples. CONCLUSION With its ability to generate detailed 3D neuron/protein/blood vessel structures from significantly fewer axial-view images, IsoGAN can streamline imaging reconstruction while maintaining the necessary detail, offering a solution to existing limitations in high-throughput morphology analysis across different structures.
Affiliation(s)
- Gary Han Chang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC; Graduate School of Advanced Technology, National Taiwan University, Taipei, Taiwan, ROC
- Meng-Yun Wu
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
- Ling-Hui Yen
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Da-Yu Huang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
- Ya-Hui Lin
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Yi-Ru Luo
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Ya-Ding Liu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Bin Xu
- Department of Psychiatry, Columbia University, New York, NY 10032, USA
- Kam W Leong
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
- Wen-Sung Lai
- Department of Psychology, National Taiwan University, Taipei, Taiwan, ROC
- Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC; Institute of System Neuroscience, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Kuo-Chuan Wang
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
- Chin-Hsien Lin
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
- Shih-Luen Wang
- Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
- Li-An Chu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
48
Gritti N, Power RM, Graves A, Huisken J. Image restoration of degraded time-lapse microscopy data mediated by near-infrared imaging. Nat Methods 2024; 21:311-321. [PMID: 38177507 PMCID: PMC10864180 DOI: 10.1038/s41592-023-02127-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Accepted: 11/10/2023] [Indexed: 01/06/2024]
Abstract
Time-lapse fluorescence microscopy is key to unraveling biological development and function; however, living systems, by their nature, permit only limited interrogation and contain untapped information that can only be captured by more invasive methods. Deep-tissue live imaging presents a particular challenge owing to the spectral range of live-cell imaging probes/fluorescent proteins, which offer only modest optical penetration into scattering tissues. Herein, we employ convolutional neural networks to augment live-imaging data with deep-tissue images taken on fixed samples. We demonstrate that convolutional neural networks may be used to restore deep-tissue contrast in GFP-based time-lapse imaging using paired final-state datasets acquired using near-infrared dyes, an approach termed InfraRed-mediated Image Restoration (IR2). Notably, the networks are remarkably robust over a wide range of developmental times. We employ IR2 to enhance the information content of green fluorescent protein time-lapse images of zebrafish and Drosophila embryo/larval development and demonstrate its quantitative potential in increasing the fidelity of cell tracking/lineaging in developing pescoids. Thus, IR2 is poised to extend live imaging to depths otherwise inaccessible.
Affiliation(s)
- Nicola Gritti
- Morgridge Institute for Research, Madison, WI, USA
- Mesoscopic Imaging Facility, European Molecular Biology Laboratory Barcelona, Barcelona, Spain
- Rory M Power
- Morgridge Institute for Research, Madison, WI, USA
- EMBL Imaging Center, European Molecular Biology Laboratory Heidelberg, Heidelberg, Germany
- Jan Huisken
- Morgridge Institute for Research, Madison, WI, USA
- Department of Integrative Biology, University of Wisconsin Madison, Madison, WI, USA
- Department of Biology and Psychology, Georg-August-University Göttingen, Göttingen, Germany
- Cluster of Excellence 'Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells' (MBExC), University of Göttingen, Göttingen, Germany
49
Wang Q, Li Z, Zhang S, Chi N, Dai Q. A versatile Wavelet-Enhanced CNN-Transformer for improved fluorescence microscopy image restoration. Neural Netw 2024; 170:227-241. [PMID: 37992510 DOI: 10.1016/j.neunet.2023.11.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 11/06/2023] [Accepted: 11/17/2023] [Indexed: 11/24/2023]
Abstract
Fluorescence microscopes are indispensable tools for the life science research community. Nevertheless, the presence of optical component limitations, coupled with the maximum photon budget that the specimen can tolerate, inevitably leads to a decline in imaging quality and a lack of useful signals. Therefore, image restoration becomes essential for ensuring high-quality and accurate analyses. This paper presents the Wavelet-Enhanced Convolutional-Transformer (WECT), a novel deep learning technique developed specifically for the purpose of reducing noise in microscopy images and attaining super-resolution. Unlike traditional approaches, WECT integrates wavelet transform and inverse-transform for multi-resolution image decomposition and reconstruction, resulting in an expanded receptive field for the network without compromising information integrity. Subsequently, multiple consecutive parallel CNN-Transformer modules are utilized to collaboratively model local and global dependencies, thus facilitating the extraction of more comprehensive and diversified deep features. In addition, the incorporation of generative adversarial networks (GANs) into WECT enhances its capacity to generate high perceptual quality microscopic images. Extensive experiments have demonstrated that the WECT framework outperforms current state-of-the-art restoration methods on real fluorescence microscopy data under various imaging modalities and conditions, in terms of quantitative and qualitative analysis.
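WECT's wavelet stage rests on the classic decompose–threshold–reconstruct idea. A minimal single-level Haar version in NumPy is sketched below; the 1-D signal, noise level, and threshold are illustrative assumptions, whereas the paper's network operates on 2-D images with learned CNN-Transformer components in the wavelet domain.

```python
import numpy as np

def haar_dwt(signal):
    """Single-level 1-D Haar transform: approximation and detail coefficients."""
    even, odd = signal[0::2], signal[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_idwt(approx, detail):
    """Invert the single-level Haar transform (exact reconstruction)."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

def soft_threshold(coeffs, thresh):
    """Shrink coefficients toward zero; small, noise-dominated ones vanish."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * 3 * t)                 # smooth "structure"
noisy = clean + rng.normal(0.0, 0.3, t.size)      # simulated low-photon acquisition

# Denoise: keep the approximation band, shrink the noise-dominated detail band.
approx, detail = haar_dwt(noisy)
denoised = haar_idwt(approx, soft_threshold(detail, 0.3))

print(float(np.mean((noisy - clean) ** 2)))     # MSE before denoising
print(float(np.mean((denoised - clean) ** 2)))  # MSE after: smaller
```

Because the transform is orthonormal, no information is lost in the decomposition itself — exactly the property the abstract credits for expanding the receptive field "without compromising information integrity".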
Affiliation(s)
- Qinghua Wang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- Ziwei Li
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Pujiang Laboratory, Shanghai, China
- Shuqi Zhang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- Nan Chi
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Shanghai Collaborative Innovation Center of Low-Earth-Orbit Satellite Communication Technology, Shanghai, 200433, China
- Qionghai Dai
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Department of Automation, Tsinghua University, Beijing, 100084, China
50
Jahangiri L. Predicting Neuroblastoma Patient Risk Groups, Outcomes, and Treatment Response Using Machine Learning Methods: A Review. Med Sci (Basel) 2024; 12:5. [PMID: 38249081 PMCID: PMC10801560 DOI: 10.3390/medsci12010005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Revised: 12/28/2023] [Accepted: 01/03/2024] [Indexed: 01/23/2024] Open
Abstract
Neuroblastoma (NB) is a paediatric malignancy with high rates of cancer-related morbidity and mortality. High-risk NB tumours are usually metastatic and result in survival rates of less than 50%. Machine learning approaches have been applied to various neuroblastoma patient data to retrieve relevant clinical and biological information and to develop predictive models. Against this background, this study catalogues and summarises the literature that has used machine learning and statistical methods to analyse data such as multi-omics, histological sections, and medical images to make clinical predictions. It likewise summarises the use of machine learning to stratify NB patients accurately by risk group and to predict outcomes, including survival and treatment response. Overall, this catalogue of work on expression-based predictor models and machine learning in neuroblastoma may assist and direct future diagnostic and therapeutic efforts.
Affiliation(s)
- Leila Jahangiri
- School of Science and Technology, Nottingham Trent University, Clifton Site, Nottingham NG11 8NS, UK
- Division of Cellular and Molecular Pathology, Addenbrookes Hospital, University of Cambridge, Cambridge CB2 0QQ, UK