1
García Rodríguez B, Olsén E, Skärberg F, Volpe G, Höök F, Midtvedt DS. Optical label-free microscopy characterization of dielectric nanoparticles. Nanoscale 2025; 17:8336-8362. PMID: 40079204; PMCID: PMC11904879; DOI: 10.1039/d4nr03860f.
Abstract
To relate nanoparticle properties to function, fast and detailed particle characterization is needed. The ability to characterize nanoparticle samples using optical microscopy techniques has drastically improved over the past few decades; consequently, there are now numerous microscopy methods available for detailed characterization of particles with nanometric size. However, there is currently no "one size fits all" solution to the problem of nanoparticle characterization. Instead, since the available techniques have different detection limits and deliver related but different quantitative information, the measurement and analysis approaches need to be selected and adapted for the sample at hand. In this tutorial, we review the optical theory of single-particle scattering and how it relates to the differences and similarities in the quantitative particle information obtained from commonly used label-free microscopy techniques, with an emphasis on nanometric (submicron) dielectric particles. Particular emphasis is placed on how the optical signal relates to the mass, size, structure, and material properties of the detected particles and on its combination with diffusivity-based particle sizing. We also discuss emerging opportunities in the wake of new technology development, including examples of adaptable Python notebooks for deep-learning image analysis, with the ambition to guide the choice of measurement strategy based on the various challenges posed by different types of nanoparticle samples and their associated analytical demands.
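The diffusivity-based sizing mentioned above typically proceeds from a measured mean-squared displacement (MSD) through the Stokes-Einstein relation; a minimal sketch, where the particle size, temperature, and viscosity numbers are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def stokes_einstein_diameter(msd_um2, lag_s, temp_k=293.15, viscosity_pa_s=1.0e-3):
    """Hydrodynamic diameter (m) from a 2D mean-squared displacement.

    For 2D Brownian motion MSD = 4*D*t, and the Stokes-Einstein relation
    gives d = k_B*T / (3*pi*eta*D). Defaults assume water at 20 C.
    """
    k_b = 1.380649e-23                         # Boltzmann constant, J/K
    diff = (msd_um2 * 1e-12) / (4.0 * lag_s)   # diffusion coefficient, m^2/s
    return k_b * temp_k / (3.0 * np.pi * viscosity_pa_s * diff)

# Illustrative numbers: a ~100 nm sphere in water at 20 C has D ~ 4.3e-12 m^2/s,
# so its MSD over a 0.1 s lag is roughly 4*D*t ~ 1.72 um^2.
d = stokes_einstein_diameter(msd_um2=1.72, lag_s=0.1)
```

With tracked trajectories, `msd_um2` would come from a linear fit of MSD versus lag time rather than a single lag.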
Affiliation(s)
- Erik Olsén
- Department of Physics, Chalmers University of Technology, Gothenburg, Sweden.
- Fredrik Skärberg
- Department of Physics, University of Gothenburg, Gothenburg, Sweden.
- Giovanni Volpe
- Department of Physics, University of Gothenburg, Gothenburg, Sweden.
- Fredrik Höök
- Department of Physics, Chalmers University of Technology, Gothenburg, Sweden.
2
Broad Z, Robinson AW, Wells J, Nicholls D, Moshtaghpour A, Kirkland AI, Browning ND. Compressive electron backscatter diffraction imaging. J Microsc 2025; 298:44-57. PMID: 39797608; DOI: 10.1111/jmi.13379.
Abstract
Electron backscatter diffraction (EBSD) has developed over the last few decades into a valuable crystallographic characterisation method for a wide range of sample types. Despite these advances, issues such as the complexity of sample preparation, relatively slow acquisition, and damage to beam-sensitive samples still limit the quantity and quality of interpretable data that can be obtained. To mitigate these issues, we propose a method based on subsampling of probe positions and subsequent reconstruction of the incomplete dataset. The missing probe locations (or pixels in the image) are recovered via an inpainting process using a dictionary-learning-based method called beta-process factor analysis (BPFA). To investigate the robustness of both our inpainting method and Hough-based indexing, we simulate subsampled and noisy EBSD datasets from a real, fully sampled Ni-superalloy dataset at different subsampling ratios of probe positions, using both Gaussian and Poisson noise models. We find that zero-solution pixel detection (inpainting un-indexed pixels) enables higher-quality reconstructions. Numerical tests confirm high-quality reconstruction of band contrast and inverse pole figure maps from only 10% of the probe positions, with the potential to reduce this to 5% if only inverse pole figure maps are needed. These results show the potential of this method in EBSD, allowing for faster analysis and extending the technique to beam-sensitive materials.
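The subsample-then-inpaint workflow can be illustrated in miniature, with plain linear interpolation standing in for the authors' BPFA dictionary-learning reconstruction and a synthetic smooth map standing in for a real EBSD scan:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic smooth "band contrast" map standing in for a fully sampled scan.
yy, xx = np.mgrid[0:64, 0:64]
full = np.sin(xx / 10.0) + np.cos(yy / 12.0)

# Visit only 10% of probe positions, chosen at random.
mask = rng.random(full.shape) < 0.10
known = np.argwhere(mask)          # (row, col) of visited probe positions
values = full[mask]

# Inpaint the missing probe positions (stand-in for BPFA).
grid = np.argwhere(np.ones_like(full, dtype=bool))
recon = griddata(known, values, grid, method="linear").reshape(full.shape)
# Pixels outside the convex hull of samples come back NaN; fill with nearest.
nearest = griddata(known, values, grid, method="nearest").reshape(full.shape)
recon = np.where(np.isnan(recon), nearest, recon)

err = np.mean(np.abs(recon - full))   # small for a smooth underlying map
```

The point of the sketch is the bookkeeping (mask of visited probe positions, reconstruction over the full grid), not the reconstruction quality; a dictionary-learning method like BPFA is what makes 10% sampling viable on textured, noisy data.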
Affiliation(s)
- Zoë Broad
- Department of Mechanical, Materials and Aerospace Engineering, University of Liverpool, Liverpool, UK
- Amirafshar Moshtaghpour
- Correlated Imaging Group, Rosalind Franklin Institute, Harwell Science and Innovation Campus, Didcot, UK
- Angus I Kirkland
- Correlated Imaging Group, Rosalind Franklin Institute, Harwell Science and Innovation Campus, Didcot, UK
- Department of Materials, University of Oxford, Oxford, UK
- Nigel D Browning
- Department of Mechanical, Materials and Aerospace Engineering, University of Liverpool, Liverpool, UK
- SenseAI Innovations Ltd., Liverpool, UK
3
Stringer C, Pachitariu M. Cellpose3: one-click image restoration for improved cellular segmentation. Nat Methods 2025; 22:592-599. PMID: 39939718; PMCID: PMC11903308; DOI: 10.1038/s41592-025-02595-5.
Abstract
Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as 'one-click' buttons inside the graphical interface of Cellpose as well as in the Cellpose API.
4
Fu L, Li L, Lu B, Guo X, Shi X, Tian J, Hu Z. Deep Equilibrium Unfolding Learning for Noise Estimation and Removal in Optical Molecular Imaging. Comput Med Imaging Graph 2025; 120:102492. PMID: 39823663; DOI: 10.1016/j.compmedimag.2025.102492.
Abstract
In clinical optical molecular imaging, the need for real-time high frame rates and low excitation doses to ensure patient safety inherently increases susceptibility to detection noise. Faced with image degradation caused by severe noise, denoising is essential for mitigating the trade-off between acquisition cost and image quality. However, prevailing deep learning methods exhibit uncontrollable and suboptimal performance with limited interpretability, primarily because they neglect the underlying physical model and frequency information. In this work, we introduce an end-to-end, model-driven Deep Equilibrium Unfolding Mamba (DEQ-UMamba) that integrates the proximal gradient descent technique with learnt spatial-frequency characteristics to decouple complex noise structures into statistical distributions, enabling effective noise estimation and suppression in fluorescence images. Moreover, to address the computational limitations of unfolding networks, DEQ-UMamba trains an implicit mapping by directly differentiating the equilibrium point of the convergent solution, thereby ensuring stability and avoiding non-convergent behavior. With each network module aligned to a corresponding operation in the iterative optimization process, the proposed method achieves clear structural interpretability and strong performance. Comprehensive experiments on both clinical and in vivo datasets demonstrate that DEQ-UMamba outperforms current state-of-the-art alternatives while using fewer parameters, facilitating cost-effective, high-quality clinical molecular imaging.
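The deep-equilibrium idea of defining a layer's output as a fixed point, rather than unrolling many iterations, can be sketched with a toy contraction (illustrative only; not the DEQ-UMamba architecture):

```python
import numpy as np

def fixed_point(f, x, z0, tol=1e-10, max_iter=500):
    """Solve z = f(z, x) by forward iteration.

    Deep equilibrium models define their output as such a fixed point and
    backpropagate through it implicitly instead of unrolling the solver.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy "layer": z = tanh(W z + x), a contraction for small enough ||W||.
rng = np.random.default_rng(1)
W = 0.4 * rng.standard_normal((8, 8)) / np.sqrt(8)
x = rng.standard_normal(8)
layer = lambda z, x: np.tanh(W @ z + x)

z_star = fixed_point(layer, x, np.zeros(8))
# At equilibrium, z_star satisfies the layer equation to within tolerance.
residual = np.linalg.norm(layer(z_star, x) - z_star)
```

In a real DEQ, gradients are obtained by differentiating the equilibrium condition (implicit function theorem), so memory cost does not grow with the number of solver iterations.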
Affiliation(s)
- Lidan Fu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Lingbing Li
- Interventional Radiology Department, Chinese PLA General Hospital, Beijing 100039, China
- Binchun Lu
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Xiaoyong Guo
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Gastrointestinal Cancer Center, Ward I, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Xiaojing Shi
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Key Laboratory of Big Data-Based Precision Medicine of Ministry of Industry and Information Technology, School of Engineering Medicine, Beihang University, Beijing 100191, China; Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an 710071, China; National Key Laboratory of Kidney Diseases, Beijing 100853, China.
- Zhenhua Hu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; National Key Laboratory of Kidney Diseases, Beijing 100853, China.
5
Zhang M, Li R, Fu S, Kumar S, Mcginty J, Qin Y, Chen L. Deep learning enhanced light sheet fluorescence microscopy for in vivo 4D imaging of zebrafish heart beating. Light Sci Appl 2025; 14:92. PMID: 39994185; PMCID: PMC11850918; DOI: 10.1038/s41377-024-01710-z.
Abstract
Time-resolved volumetric fluorescence imaging over an extended duration with high spatial and temporal resolution is a key driving force in biomedical research for investigating spatio-temporal dynamics at the organism level, yet it remains a major challenge due to the trade-off among imaging speed, light exposure, illumination power, and image quality. Here, we present a deep-learning-enhanced light sheet fluorescence microscopy (LSFM) approach that restores rapid volumetric time-lapse imaging with less than 0.03% of the light exposure and 3.3% of the acquisition time of a typical standard acquisition. We demonstrate that the convolutional neural network (CNN)-transformer network developed here, the U-net integrated transformer (UI-Trans), successfully mitigates complex, coupled noise-scattering degradation and outperforms state-of-the-art deep learning networks, owing to its capability of faithfully learning fine details while comprehending complex global features. With the fast generation of appropriate training data via flexible switching between confocal line-scanning LSFM (LS-LSFM) and conventional LSFM, this method achieves a three- to five-fold signal-to-noise ratio (SNR) improvement and ~1.8-fold contrast improvement in ex vivo zebrafish heart imaging, as well as long-term in vivo 4D (3D morphology + time) imaging of heartbeat dynamics at different developmental stages with ultra-economical acquisitions in terms of light dosage and acquisition time.
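The SNR and contrast figures of merit quoted here can be computed as follows; the synthetic gradient image and noise levels are placeholder assumptions (a three- to five-fold amplitude SNR gain corresponds to roughly 10-14 dB):

```python
import numpy as np

def snr_db(signal, noisy):
    """SNR in dB of an image against a clean reference."""
    noise = noisy - signal
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

def michelson_contrast(img):
    """(Imax - Imin) / (Imax + Imin), a common contrast figure of merit."""
    return (img.max() - img.min()) / (img.max() + img.min())

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.2, 1.0, 64), (64, 1))        # reference image
noisy = clean + 0.10 * rng.standard_normal(clean.shape)    # raw acquisition
denoised = clean + 0.02 * rng.standard_normal(clean.shape) # restored image

# Restoration gain in dB: here the residual noise std drops 5x, ~14 dB.
gain = snr_db(clean, denoised) - snr_db(clean, noisy)
```

In practice the clean reference would be a high-dose LS-LSFM acquisition, and the "restored" image the network output from a low-dose input.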
Affiliation(s)
- Meng Zhang
- School of Electronic and Information Engineering, Beihang University, Beijing, 100191, China
- Renjian Li
- School of Electronic and Information Engineering, Beihang University, Beijing, 100191, China
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518118, China
- Songnian Fu
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 51006, China
- Sunil Kumar
- Photonics Group, Department of Physics, Imperial College London, London, SW7 2AZ, UK
- James Mcginty
- Photonics Group, Department of Physics, Imperial College London, London, SW7 2AZ, UK
- Yuwen Qin
- Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou, 51006, China.
- Lingling Chen
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518118, China.
6
Reichenbach M, Richter S, Galli R, Meinhardt M, Kirsche K, Temme A, Emmanouilidis D, Polanski W, Prilop I, Krex D, Sobottka SB, Juratli TA, Eyüpoglu IY, Uckermann O. Clinical confocal laser endomicroscopy for imaging of autofluorescence signals of human brain tumors and non-tumor brain. J Cancer Res Clin Oncol 2024; 151:19. PMID: 39724474; PMCID: PMC11671560; DOI: 10.1007/s00432-024-06052-2.
Abstract
PURPOSE Analysis of autofluorescence holds promise for brain tumor delineation and diagnosis. We therefore investigated the potential of a commercial confocal laser scanning endomicroscopy (CLE) system for clinical imaging of brain tumors. METHODS A clinical CLE system with a fiber probe and 488 nm laser excitation was used to acquire images of tissue autofluorescence. Fresh samples were obtained from routine surgeries (glioblastoma n = 6, meningioma n = 6, brain metastases n = 10, pituitary adenoma n = 2, non-tumor tissue from surgery for the treatment of pharmacoresistant epilepsy n = 2). Additionally, in situ intraoperative label-free CLE was performed in three cases. The autofluorescence images were visually inspected for feature identification and quantification. For reference, tissue cryosections were prepared and further analyzed by label-free multiphoton microscopy and HE histology. RESULTS Label-free CLE enabled the acquisition of autofluorescence images in all cases. Autofluorescent structures were assigned to the cytoplasmic compartment of cells, elastin fibers, psammoma bodies, and blood vessels by comparison to the references. Sparse punctuated autofluorescence was identified in most images across all cases, while dense punctuated autofluorescence was most frequent in glioblastomas. Autofluorescent cells were observed at higher abundance in images of non-tumor samples. Diffuse autofluorescence, fibers, and round fluorescent structures were predominantly found in tumor tissues. CONCLUSION Label-free CLE imaging with an approved clinical device visualized the characteristic autofluorescence patterns of human brain tumors and non-tumor brain tissue ex vivo and in situ. This approach therefore offers the possibility of obtaining intraoperative diagnostic information before resection, importantly independent of any marker or label.
Affiliation(s)
- Marlen Reichenbach
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Else Kröner Fresenius Center for Digital Health, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Sven Richter
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Else Kröner Fresenius Center for Digital Health, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Roberta Galli
- Medical Physics and Biomedical Engineering, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Matthias Meinhardt
- Department of Pathology (Neuropathology), Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Katrin Kirsche
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Achim Temme
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Dimitrios Emmanouilidis
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Witold Polanski
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Insa Prilop
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Dietmar Krex
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Stephan B Sobottka
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Tareq A Juratli
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Ilker Y Eyüpoglu
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Ortrud Uckermann
- Department of Neurosurgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany.
- Division of Medical Biology, Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany.
7
Chai B, Efstathiou C, Yue H, Draviam VM. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol 2024; 34:955-967. PMID: 38030542; DOI: 10.1016/j.tcb.2023.10.010.
Abstract
The growth of artificial intelligence (AI) has led to an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in quantitative analysis of dynamic cell biological processes but has also started to support advances in drug development, precision medicine, and genome-phenome mapping. We survey existing AI-based techniques and tools, as well as open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications for DL-guided automation in cell dynamics research.
Affiliation(s)
- Binghao Chai
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Christoforos Efstathiou
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK; The Alan Turing Institute, London NW1 2DB, UK.
8
Zhong L, Li L, Yang G. Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images. BMC Bioinformatics 2024; 25:269. PMID: 39164632; PMCID: PMC11334404; DOI: 10.1186/s12859-024-05894-4.
Abstract
BACKGROUND Fluorescence microscopy (FM) is an important and widely adopted biological imaging technique. Segmentation is often the first step in quantitative analysis of FM images. Deep neural networks (DNNs) have become the state-of-the-art tools for image segmentation. However, their performance on natural images may collapse under certain image corruptions or adversarial attacks. This poses real risks to their deployment in real-world applications. Although the robustness of DNN models in segmenting natural images has been studied extensively, their robustness in segmenting FM images remains poorly understood. RESULTS To address this deficiency, we have developed an assay that benchmarks the robustness of DNN segmentation models using datasets of realistic synthetic 2D FM images with precisely controlled corruptions or adversarial attacks. Using this assay, we have benchmarked the robustness of ten representative models, such as DeepLab and Vision Transformer. We find that models with good robustness on natural images may perform poorly on FM images. We also find new robustness properties of DNN models and new connections between their corruption robustness and adversarial robustness. To further assess the robustness of the selected models, we have also benchmarked them on real microscopy images of different modalities without simulated degradation. The results are consistent with those obtained on the realistic synthetic images, confirming the fidelity and reliability of our image synthesis method as well as the effectiveness of our assay. CONCLUSIONS Based on comprehensive benchmarking experiments, we have found distinct robustness properties of deep neural networks in semantic segmentation of FM images. Based on these findings, we make specific recommendations on the selection and design of robust models for FM image segmentation.
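The corruption-robustness benchmarking described here can be miniaturized as follows, with a fixed-threshold "segmenter" standing in for a DNN and a synthetic bright disc standing in for a fluorescence image (illustrative only, not the study's assay):

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def threshold_segmenter(img, thr=0.5):
    """Stand-in 'model': foreground = pixels above a fixed threshold."""
    return img > thr

rng = np.random.default_rng(0)
# Synthetic fluorescence image: bright disc on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
truth = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2
clean = np.where(truth, 0.9, 0.1)

# Benchmark: segmentation IoU as a function of Gaussian corruption severity.
severities = [0.0, 0.1, 0.3]
scores = []
for sigma in severities:
    noisy = clean + sigma * rng.standard_normal(clean.shape)
    scores.append(iou(threshold_segmenter(noisy), truth))
```

A real benchmark would sweep many corruption types (noise, blur, undersampling) and severities, and report the degradation curve per model rather than a single score.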
Affiliation(s)
- Liqun Zhong
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Science, Beijing, 100190, China
- Lingrui Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Science, Beijing, 100190, China
- Ge Yang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Science, Beijing, 100190, China.
9
Tekle E, Dese K, Girma S, Adissu W, Krishnamoorthy J, Kwa T. DeepLeish: a deep learning based support system for the detection of Leishmaniasis parasite from Giemsa-stained microscope images. BMC Med Imaging 2024; 24:152. PMID: 38890604; PMCID: PMC11186139; DOI: 10.1186/s12880-024-01333-1.
Abstract
BACKGROUND Leishmaniasis is a vector-borne neglected parasitic disease caused by protozoa of the genus Leishmania. Of the 30 Leishmania species, 21 cause human infection, affecting the skin and the internal organs. Around 700,000 to 1,000,000 new cases and 26,000 to 65,000 deaths are reported worldwide annually. The disease exhibits three clinical presentations, namely cutaneous, muco-cutaneous, and visceral Leishmaniasis, which affect the skin, mucosal membranes, and the internal organs, respectively. The relapsing behavior of the disease limits the efficiency of its diagnosis and treatment. The common diagnostic approaches follow subjective, error-prone, repetitive processes. Despite an ever-pressing need for accurate detection of Leishmaniasis, research conducted so far is scarce. The main aim of the current research is therefore to develop an artificial intelligence-based tool for detecting Leishmaniasis in Giemsa-stained microscopic images using deep learning. METHODS Stained microscopic images were acquired locally and labeled by experts. The images were augmented using different methods to prevent overfitting and improve the generalizability of the system. Fine-tuned Faster RCNN, SSD, and YOLOv5 models were used for object detection. Mean average precision (MAP), precision, and recall were calculated to evaluate and compare the performance of the models. RESULTS The fine-tuned YOLOv5 outperformed the other models, Faster RCNN and SSD, with MAP scores of 73%, 54%, and 57%, respectively. CONCLUSION The developed YOLOv5 model can be tested in clinics to assist laboratorists in diagnosing Leishmaniasis from microscopic images. Particularly in low-resourced healthcare facilities with fewer qualified medical professionals or hematologists, our AI support system can help reduce diagnosis time, workload, and misdiagnosis.
Furthermore, the dataset we collected will be shared with other researchers who seek to improve the detection of the parasite. The current model detects the parasites even in the presence of monocytes, but the accuracy sometimes decreases due to differences in the sizes of the parasite cells relative to the blood cells. The incorporation of cascaded networks and the quantification of parasite load in future work shall overcome the limitations of the current system.
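The MAP/precision/recall evaluation above rests on IoU-based matching of predicted boxes to ground truth; a minimal version of the true-positive criterion behind mAP@0.5 (the boxes below are made-up examples, not the study's data):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, truths, thr=0.5):
    """Greedy matching: a prediction is a true positive if it overlaps an
    as-yet-unmatched ground-truth box with IoU >= thr.

    preds are assumed sorted by descending confidence.
    """
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, t in enumerate(truths):
            if i not in matched:
                v = box_iou(p, t)
                if v >= best_iou:
                    best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall

# One hit (shifted by 1 px, IoU ~0.68) and one clear miss.
precision, recall = match_detections(
    [(1, 1, 11, 11), (40, 40, 50, 50)],
    [(0, 0, 10, 10), (20, 20, 30, 30)])
```

Full mAP additionally sweeps the confidence threshold to trace a precision-recall curve per class and averages the area under it.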
Affiliation(s)
- Eden Tekle
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia.
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, 26505, USA.
- Selfu Girma
- Pathology Unit, Armauer Hansen Research Institute, Addis Ababa, Ethiopia
- Wondimagegn Adissu
- School of Medical Laboratory Sciences, Institute of Health, Jimma University, Jimma, Ethiopia
- Clinical Trial Unit, Jimma University, Jimma, Ethiopia
- Timothy Kwa
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia.
- Medtronic MiniMed, 18000 Devonshire St. Northridge, Los Angeles, CA, USA.
10
Choi YK, Feng L, Jeong WK, Kim J. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inform 2024; 11:15. PMID: 38833195; DOI: 10.1186/s40708-024-00228-9.
Abstract
Mapping neural connections within the brain has been a fundamental goal in neuroscience, to better understand its functions and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging. With this, image processing and analysis have become more crucial. However, despite the wealth of neural images generated, access to an integrated image processing and analysis pipeline is challenging due to scattered information on available tools and methods. To map neural connections, registration to atlases and feature extraction through segmentation and signal detection are necessary. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will facilitate researchers' efforts to map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for better comprehension of brain connectivity and its implications.
Affiliation(s)
- Yoon Kyoung Choi
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Won-Ki Jeong
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Jinhyun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea.
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea.
- KIST-SKKU Brain Research Center, SKKU Institute for Convergence, Sungkyunkwan University, Suwon, South Korea.
11
Dai W, Wong IHM, Wong TTW. Exceeding the limit for microscopic image translation with a deep learning-based unified framework. PNAS Nexus 2024; 3:pgae133. PMID: 38601859; PMCID: PMC11004937; DOI: 10.1093/pnasnexus/pgae133.
Abstract
Deep learning algorithms have been widely used in microscopic image translation. The corresponding data-driven models can be trained by supervised or unsupervised learning, depending on the availability of paired data. In the general case, however, the data are only roughly paired, such that supervised learning can fail due to data misalignment, while unsupervised learning is less than ideal because the rough pairing information goes unused. In this work, we propose a unified framework (U-Frame) that unifies supervised and unsupervised learning by introducing a tolerance size that is adjusted automatically according to the degree of data misalignment. Together with the implementation of a global sampling rule, we demonstrate that U-Frame consistently outperforms both supervised and unsupervised learning at all levels of data misalignment (even for perfectly aligned image pairs) in a myriad of image translation applications, including pseudo-optical sectioning, virtual histological staining (with clinical evaluations for cancer diagnosis), improvement of signal-to-noise ratio or resolution, and prediction of fluorescent labels, potentially serving as a new standard for image translation.
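The tolerance idea can be illustrated with a hand-rolled loss that forgives small misalignments by minimizing over integer shifts within a tolerance window (a simplified analogue, not the U-Frame implementation):

```python
import numpy as np

def tolerant_mse(pred, target, tol=2):
    """MSE minimized over integer shifts of the target within +/- tol pixels.

    With tol=0 this reduces to ordinary supervised MSE; a larger tol
    forgives correspondingly larger misalignments between the pair.
    """
    best = np.inf
    for dy in range(-tol, tol + 1):
        for dx in range(-tol, tol + 1):
            shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            best = min(best, float(np.mean((pred - shifted) ** 2)))
    return best

rng = np.random.default_rng(0)
img = rng.random((32, 32))
misaligned = np.roll(img, 2, axis=1)             # "target" shifted by 2 px
strict = tolerant_mse(img, misaligned, tol=0)    # penalizes the shift
tolerant = tolerant_mse(img, misaligned, tol=2)  # recovers the alignment
```

U-Frame's contribution is choosing that tolerance automatically from the data and combining it with a global sampling rule, so the same training objective covers perfectly paired through roughly paired regimes.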
Affiliation(s)
- Weixing Dai
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
- Ivy H M Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
- Terence T W Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
12. Wüstner D, Egebjerg JM, Lauritsen L. Dynamic Mode Decomposition of Multiphoton and Stimulated Emission Depletion Microscopy Data for Analysis of Fluorescent Probes in Cellular Membranes. Sensors (Basel) 2024; 24:2096. [PMID: 38610307; PMCID: PMC11013970; DOI: 10.3390/s24072096] [Received: 02/18/2024; Revised: 03/14/2024; Accepted: 03/21/2024; Indexed: 04/14/2024]
Abstract
An analysis of the membrane organization and intracellular trafficking of lipids often relies on multiphoton (MP) and super-resolution microscopy of fluorescent lipid probes. A disadvantage, particularly of intrinsically fluorescent lipid probes such as the cholesterol and ergosterol analogue dehydroergosterol (DHE), is their low MP absorption cross-section, resulting in a low signal-to-noise ratio (SNR) in live-cell imaging. Stimulated emission depletion (STED) microscopy of membrane probes like Nile Red enables one to resolve membrane features beyond the diffraction limit but exposes the sample to substantial excitation light and suffers from a low SNR and photobleaching. Here, dynamic mode decomposition (DMD) and its variant, higher-order DMD (HoDMD), are applied to efficiently reconstruct and denoise the MP and STED microscopy data of lipid probes, allowing for an improved visualization of the membranes in cells. HoDMD also allows us to decompose and reconstruct two-photon polarimetry images of TopFluor-cholesterol in model and cellular membranes. Finally, DMD is shown not only to reconstruct and denoise 3D-STED image stacks of Nile Red-labeled cells but also to predict unseen image frames, thereby allowing for interpolation of images along the optical axis. This important feature of DMD can be used to reduce the number of image acquisitions, thereby minimizing the light exposure of biological samples without compromising image quality. Thus, DMD as a computational tool enables gentler live-cell imaging of fluorescent probes in cellular membranes by MP and STED microscopy.
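The core of DMD-based denoising can be made concrete with a short, generic sketch (illustrative only, not the authors' implementation; all function names here are hypothetical): frames are stacked as snapshot columns, a low-rank linear propagation operator is fitted between consecutive snapshots, and the sequence is rebuilt from the leading modes only, discarding the noise-dominated remainder.

```python
import numpy as np

def dmd_reconstruct(frames, rank):
    """Denoise an image sequence by exact DMD truncated to `rank` modes.

    frames: array of shape (T, H, W); returns an array of the same shape
    rebuilt from the leading DMD modes only.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, -1).T                 # pixels x time snapshot matrix
    X1, X2 = X[:, :-1], X[:, 1:]                # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = min(rank, len(s))
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Low-rank approximation of the one-step propagation operator A
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s  # divides columns by s
    evals, evecs = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T / s @ evecs          # exact DMD modes (pixels x r)
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]   # mode amplitudes
    time_dynamics = b[:, None] * evals[:, None] ** np.arange(T)
    return (Phi @ time_dynamics).T.real.reshape(T, H, W)
```

Truncating to a small `rank` is what removes noise: uncorrelated fluctuations do not fit any low-rank linear dynamics and are left out of the reconstruction.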
Affiliation(s)
- Daniel Wüstner
- Department of Biochemistry and Molecular Biology, University of Southern Denmark, DK-5230 Odense M, Denmark; (J.M.E.); (L.L.)
13. Oh K, Bianco PR. Facile Conversion and Optimization of Structured Illumination Image Reconstruction Code into the GPU Environment. Int J Biomed Imaging 2024; 2024:8862387. [PMID: 38449563; PMCID: PMC10917484; DOI: 10.1155/2024/8862387] [Received: 06/27/2023; Revised: 01/22/2024; Accepted: 01/30/2024; Indexed: 03/08/2024]
Abstract
Superresolution structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the reconstruction speed of the algorithm used to form a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations needed to produce the final image. Adding to this burden, the code, even within the MATLAB environment, is often written inefficiently by microscopists who are not computer scientists and does not take advantage of the processing power of the computer's graphics processing unit (GPU). To address these issues, we present simple but efficient approaches to first revise the MATLAB code and then convert it to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed, as shown for the image-denoising Hessian-SIM algorithm. Importantly, the improved algorithm produces images identical in quality to the original.
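The kind of code revision described in this abstract can be sketched generically (this is not the authors' Hessian-SIM code; the thresholding operation is a made-up stand-in): an element-by-element loop is replaced by a single whole-array expression, which is both faster on the CPU and the form that ports most directly to the GPU, since libraries such as CuPy mirror the NumPy API.

```python
import numpy as np

def threshold_loop(img, t):
    # Pixel-by-pixel loop: the style often found in quickly written
    # reconstruction code, slow in interpreted languages and hard to
    # hand off to a GPU.
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = img[i, j] if img[i, j] > t else 0.0
    return out

def threshold_vectorized(img, t):
    # Whole-array expression: one dispatch instead of H*W of them.
    # Because CuPy mirrors the NumPy API, essentially the same line can
    # run on a GPU by importing cupy in place of numpy.
    return np.where(img > t, img, 0.0)
```

Both functions compute the same result; the vectorized form is the one worth moving to a GPU-enabled array library.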
Affiliation(s)
- Kwangsung Oh
- Department of Computer Science, College of Information Science & Technology, University of Nebraska Omaha, Omaha, NE 68182, USA
- Piero R. Bianco
- Department of Pharmaceutical Sciences, College of Pharmacy, University of Nebraska Medical Center, Omaha, NE 68198-6025, USA
14. Hu Y, Wang P, Zhao F, Liu J. Low-frequency background estimation and noise separation from high-frequency for background and noise subtraction. Applied Optics 2024; 63:283-289. [PMID: 38175031; DOI: 10.1364/ao.507735] [Received: 10/10/2023; Accepted: 11/30/2023; Indexed: 01/05/2024]
Abstract
In fluorescence microscopy, background blur and noise are two main factors preventing the achievement of high-signal-to-noise ratio (SNR) imaging. Background blur primarily emanates from inherent factors including the spontaneous fluorescence of biological samples and out-of-focus backgrounds, while noise encompasses Gaussian and Poisson noise components. To achieve background blur subtraction and denoising simultaneously, a pioneering algorithm based on low-frequency background estimation and noise separation from high-frequency (LBNH-BNS) is presented, which effectively disentangles noise from the desired signal. Furthermore, it seamlessly integrates low-frequency features derived from background blur estimation, leading to the effective elimination of noise and background blur in wide-field fluorescence images. In comparisons with other state-of-the-art background removal algorithms, LBNH-BNS demonstrates significant advantages in key quantitative metrics such as peak signal-to-noise ratio (PSNR) and manifests substantial visual enhancements. LBNH-BNS holds immense potential for advancing the overall performance and quality of wide-field fluorescence imaging techniques.
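The low-frequency background estimation step admits a compact generic sketch (illustrative only, not the LBNH-BNS algorithm; function names and the choice of a Gaussian low-pass are assumptions): a wide Gaussian blur approximates the out-of-focus background, which is then subtracted from the raw frame.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def estimate_background(img, sigma=8.0):
    """Low-frequency background estimate via a separable Gaussian blur."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    pad = len(k) // 2
    padded = np.pad(img, pad, mode='reflect')
    # Blur rows, then columns (separability of the Gaussian).
    rows = np.apply_along_axis(np.convolve, 1, padded, k, mode='same')
    cols = np.apply_along_axis(np.convolve, 0, rows, k, mode='same')
    return cols[pad:-pad, pad:-pad]

def subtract_background(img, sigma=8.0):
    # Clip at zero: fluorescence counts cannot be negative.
    return np.clip(img - estimate_background(img, sigma), 0.0, None)
```

A large `sigma` is essential: the blur must be wide enough that point-like signal survives subtraction while the slowly varying background is removed.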
15. Li X, Hu X, Chen X, Fan J, Zhao Z, Wu J, Wang H, Dai Q. Spatial redundancy transformer for self-supervised fluorescence image denoising. Nature Computational Science 2023; 3:1067-1080. [PMID: 38177722; PMCID: PMC10766531; DOI: 10.1038/s43588-023-00568-2] [Received: 06/14/2023; Accepted: 11/07/2023; Indexed: 01/06/2024]
Abstract
Fluorescence imaging with high signal-to-noise ratios has become the foundation of accurate visualization and analysis of biological phenomena. However, the inevitable noise poses a formidable challenge to imaging sensitivity. Here we provide the spatial redundancy denoising transformer (SRDTrans) to remove noise from fluorescence images in a self-supervised manner. First, a sampling strategy based on spatial redundancy is proposed to extract adjacent orthogonal training pairs, which eliminates the dependence on high imaging speed. Second, we designed a lightweight spatiotemporal transformer architecture to capture long-range dependencies and high-resolution features at low computational cost. SRDTrans can restore high-frequency information without producing oversmoothed structures and distorted fluorescence traces. Finally, we demonstrate the state-of-the-art denoising performance of SRDTrans on single-molecule localization microscopy and two-photon volumetric calcium imaging. SRDTrans makes no assumptions about the imaging process or the sample and can thus be easily extended to various imaging modalities and biological applications.
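The spatial-redundancy sampling idea, stripped to its essence, can be illustrated as follows (a simplified sketch in the spirit of the abstract, not the SRDTrans code): adjacent pixels see nearly the same underlying signal but carry independent noise, so two sub-images interleaved from neighbouring columns of a single noisy frame can serve as a self-supervised input/target pair.

```python
import numpy as np

def adjacent_pixel_pairs(img):
    """Split one noisy frame into two half-width sub-images drawn from
    horizontally adjacent pixels. The signal in the two sub-images is
    nearly identical (spatial redundancy) while the noise is independent
    pixel to pixel, so (left, right) can be used as a training pair
    without any clean reference data."""
    w = img.shape[1] - img.shape[1] % 2   # drop an odd trailing column
    return img[:, 0:w:2], img[:, 1:w:2]
```

A network trained to map `left` to `right` cannot learn to reproduce the noise (which is uncorrelated between the two) and so converges toward predicting the shared signal.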
Affiliation(s)
- Xinyang Li
- Department of Automation, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Xiaowan Hu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Xingye Chen
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Research Institute for Frontier Science, Beihang University, Beijing, China
- Jiaqi Fan
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Department of Electronic Engineering, Tsinghua University, Beijing, China
- Zhifeng Zhao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Haoqian Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China.
- The Shenzhen Institute of Future Media Technology, Shenzhen, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
16. Cheng T, Wang Y. Edge effect of wide spectrum denoising in super-resolution microscopy. Microscopy (Oxf) 2023; 72:418-424. [PMID: 36744613; DOI: 10.1093/jmicro/dfad012] [Received: 12/17/2022; Revised: 01/17/2023; Accepted: 02/03/2023; Indexed: 02/07/2023]
Abstract
During raw image acquisition with the stochastic optical reconstruction microscope (STORM) in super-resolution microscopy, noise is inevitable. Noise not only reduces the temporal and spatial resolution of the super-resolution image but can also cause super-resolution image reconstruction to fail. Wide spectrum denoising (WSD) can effectively remove various random noises (such as Poisson noise and Gaussian noise) from the STORM raw image to improve super-resolution image reconstruction. We found that WSD exhibits an obvious edge effect, and we studied its influence on STORM raw image denoising and super-resolution image reconstruction. We then propose a method for restraining the edge effect. Simulation and real-experiment results show that edge trimming can effectively suppress the edge effect, leading to better super-resolution image reconstruction.
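The reported edge effect and its remedy are easy to demonstrate generically (illustrative code, not the WSD implementation; the zero-padded box blur is a stand-in for whatever filter extends the boundary): any filter that pads the image border corrupts a rim of predictable width, and trimming that rim removes the artifact.

```python
import numpy as np

def box_blur(img, r=1):
    # Zero-padded box filter: the padding darkens a border of width r,
    # the same kind of boundary artifact observed after denoising.
    k = 2 * r + 1
    padded = np.pad(img, r)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k ** 2

def trim_edges(img, margin):
    """Crop the artifact-bearing border before further reconstruction."""
    return img[margin:-margin, margin:-margin]
```

For a filter of radius `r`, trimming `margin = r` pixels is sufficient, since only pixels within `r` of the border ever see the padded values.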
Affiliation(s)
- Tao Cheng
- School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, No. 268 Avenue Donghuan, Chengzhong District, Liuzhou, Guangxi 545006, People's Republic of China
- Yingshan Wang
- School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, No. 268 Avenue Donghuan, Chengzhong District, Liuzhou, Guangxi 545006, People's Republic of China
17. Mandracchia B, Liu W, Hua X, Forghani P, Lee S, Hou J, Nie S, Xu C, Jia S. Optimal sparsity allows reliable system-aware restoration of fluorescence microscopy images. Science Advances 2023; 9:eadg9245. [PMID: 37647399; PMCID: PMC10468132; DOI: 10.1126/sciadv.adg9245] [Received: 02/13/2023; Accepted: 07/31/2023; Indexed: 09/01/2023]
Abstract
Fluorescence microscopy is one of the most indispensable and informative driving forces for biological research, but the extent of observable biological phenomena is essentially determined by the content and quality of the acquired images. To address the different noise sources that can degrade these images, we introduce an algorithm for multiscale image restoration through optimally sparse representation (MIRO). MIRO is a deterministic framework that models the acquisition process and uses pixelwise noise correction to improve image quality. Our study demonstrates that this approach yields a remarkable restoration of the fluorescence signal for a wide range of microscopy systems, regardless of the detector used (e.g., electron-multiplying charge-coupled device, scientific complementary metal-oxide semiconductor, or photomultiplier tube). MIRO improves current imaging capabilities, enabling fast, low-light optical microscopy, accurate image analysis, and robust machine intelligence when integrated with deep neural networks. This expands the range of biological knowledge that can be obtained from fluorescence microscopy.
Affiliation(s)
- Biagio Mandracchia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Scientific-Technical Central Units, Instituto de Salud Carlos III (ISCIII), Majadahonda, Spain
- ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
- Wenhao Liu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Xuanwen Hua
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parvin Forghani
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Soojung Lee
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Jessica Hou
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Shuyi Nie
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Chunhui Xu
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Shu Jia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
18. Sardella D, Kristensen AM, Bordoni L, Kidmose H, Shahrokhtash A, Sutherland DS, Frische S, Schiessl IM. Serial intravital 2-photon microscopy and analysis of the kidney using upright microscopes. Front Physiol 2023; 14:1176409. [PMID: 37168225; PMCID: PMC10164931; DOI: 10.3389/fphys.2023.1176409] [Received: 02/28/2023; Accepted: 04/03/2023; Indexed: 05/13/2023]
Abstract
Serial intravital 2-photon microscopy of the kidney and other abdominal organs is a powerful technique to assess tissue function and structure simultaneously and over time. Thus, serial intravital microscopy can capture dynamic tissue changes during health and disease and holds great potential to characterize (patho-) physiological processes with subcellular resolution. However, successful image acquisition and analysis require significant expertise and impose multiple potential challenges. Abdominal organs are rhythmically displaced by breathing movements which hamper high-resolution imaging. Traditionally, kidney intravital imaging is performed on inverted microscopes where breathing movements are partly compensated by the weight of the animal pressing down. Here, we present a custom and easy-to-implement setup for intravital imaging of the kidney and other abdominal organs on upright microscopes. Furthermore, we provide image processing protocols and a new plugin for the free image analysis software FIJI to process multichannel fluorescence microscopy data. The proposed image processing pipelines cover multiple image denoising algorithms, sample drift correction using 2D registration, and alignment of serial imaging data collected over several weeks using landmark-based 3D registration. The provided tools aim to lower the barrier of entry to intravital microscopy of the kidney and are readily applicable by biomedical practitioners.
Affiliation(s)
- Donato Sardella
- Department of Biomedicine, Aarhus University, Aarhus, Denmark
- Luca Bordoni
- Department of Biomedicine, Aarhus University, Aarhus, Denmark
- Hanne Kidmose
- Department of Biomedicine, Aarhus University, Aarhus, Denmark
- Ali Shahrokhtash
- Interdisciplinary Nanoscience Center, Aarhus University, Aarhus, Denmark
19. Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat Biotechnol 2023; 41:282-292. [PMID: 36163547; PMCID: PMC9931589; DOI: 10.1038/s41587-022-01450-8] [Received: 03/14/2022; Accepted: 07/29/2022; Indexed: 11/09/2022]
Abstract
A fundamental challenge in fluorescence microscopy is the photon shot noise arising from the inevitable stochasticity of photon detection. Noise increases measurement uncertainty and limits imaging resolution, speed and sensitivity. To achieve high-sensitivity fluorescence imaging beyond the shot-noise limit, we present DeepCAD-RT, a self-supervised deep learning method for real-time noise suppression. Based on our previous framework DeepCAD, we reduced the number of network parameters by 94%, memory consumption by 27-fold and processing time by a factor of 20, allowing real-time processing on a two-photon microscope. A high imaging signal-to-noise ratio can be acquired with tenfold fewer photons than in standard imaging approaches. We demonstrate the utility of DeepCAD-RT in a series of photon-limited experiments, including in vivo calcium imaging of mice, zebrafish larva and fruit flies, recording of three-dimensional (3D) migration of neutrophils after acute brain injury and imaging of 3D dynamics of cortical ATP release. DeepCAD-RT will facilitate the morphological and functional interrogation of biological dynamics with a minimal photon budget.
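The shot-noise limit referred to in this abstract follows directly from Poisson statistics and is easy to verify numerically (a generic illustration, unrelated to the DeepCAD-RT code): the SNR of a photon count with expectation N is N/sqrt(N) = sqrt(N), so imaging with tenfold fewer photons costs roughly a factor of 3.2 in raw SNR, which is the gap a denoiser must then close.

```python
import numpy as np

def empirical_snr(expected_photons, n=200_000, seed=0):
    """Mean/std of simulated Poisson photon counts; for an expectation of
    N photons per pixel this converges to sqrt(N), the shot-noise-limited
    signal-to-noise ratio of raw fluorescence data."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(expected_photons, size=n)
    return counts.mean() / counts.std()
```

No amount of detector engineering removes this floor, since it arises from the stochasticity of photon arrival itself; only computational denoising or a larger photon budget improves on it.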
20. Cheng X, Wang J, Li Q, Duan Y, Chen Y, Teng J, Chu S, Yang H, Wang S, Gong Q. Enhancing Weak-Signal Extraction for Single-Molecule Localization Microscopy. J Phys Chem A 2023; 127:329-338. [PMID: 36541035; DOI: 10.1021/acs.jpca.2c05164] [Indexed: 12/24/2022]
Abstract
Single-molecule localization microscopy (SMLM) has been widely used in biological imaging due to its ultrahigh spatial resolution. However, because of the strategy of reducing photodamage to living cells, the fluorescence signals of emitters are usually weak and the detector noises become non-negligible, which leads to localization misalignments and signal losses, deteriorating the imaging capability of SMLM. Here, we propose an active modulation method to control the fluorescence of the probe emitters. It marks the emitters with an artificial blinking character, which directly distinguishes weak signals from multiple detector noises. We demonstrated through simulations and experiments that this method improves the signal-to-noise ratio by about 10 dB over the non-modulated method and boosts the sensitivity of single-molecule localization down to -4 dB, which significantly reduces localization misalignments and signal losses in SMLM. This signal-noise decoupling strategy is generally applicable to super-resolution systems with versatile labeled probes to improve their imaging capability. We also applied it to a densely labeled sample, demonstrating its flexibility in super-resolution nanoscopy.
Affiliation(s)
- Xue Cheng
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China
- Ju Wang
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China
- Qi Li
- Key Laboratory of Cell Proliferation and Differentiation of the Ministry of Education and State Key Laboratory of Membrane Biology, College of Life Sciences, Peking University, Beijing 100871, China
- Yiqun Duan
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China
- Yan Chen
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China
- Junlin Teng
- Key Laboratory of Cell Proliferation and Differentiation of the Ministry of Education and State Key Laboratory of Membrane Biology, College of Life Sciences, Peking University, Beijing 100871, China
- Saisai Chu
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China; Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China; Peking University Yangtze Delta Institute of Optoelectronics, Nantong, Jiangsu 226010, China
- Hong Yang
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China; Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China; Peking University Yangtze Delta Institute of Optoelectronics, Nantong, Jiangsu 226010, China
- Shufeng Wang
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China; Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China; Peking University Yangtze Delta Institute of Optoelectronics, Nantong, Jiangsu 226010, China
- Qihuang Gong
- State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Department of Physics, Peking University, Beijing 100871, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China; Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China; Peking University Yangtze Delta Institute of Optoelectronics, Nantong, Jiangsu 226010, China
21. Banharnsakun A. Aerial Image Denoising Using a Best-So-Far ABC-based Adaptive Filter Method. International Journal of Computational Intelligence and Applications 2022. [DOI: 10.1142/s1469026822500249] [Indexed: 01/05/2023]
Abstract
Nowadays, digital images play an increasingly important role in explaining phenomena and attracting people's attention through various types of media, rather than the use of text. However, the quality of digital images may be degraded by noise introduced either during recording or during transmission over a network. Removal of image noise, known as "image denoising", is therefore one of the primary tasks in digital image processing. Various methods have been developed in earlier studies to remove the noise found in images; for example, the use of metric filters to eliminate noise has received much attention in the recent literature. However, the convergence speed of these algorithms when searching for the optimal filter coefficient is quite low. Research in the past few years has found that biologically inspired approaches are among the more promising metaheuristic methods for finding optimal solutions. In this work, an image denoising approach based on the best-so-far (BSF) ABC algorithm combined with an adaptive filter is proposed to enhance the search for the optimal filter coefficient in the denoising process. Experimental results indicate that denoising images with the proposed BSF ABC technique yields good quality and removes noise without losing image features in the process. The denoised image quality obtained by the proposed method achieves a 20% increase compared with other recently developed techniques in the field of biologically inspired approaches.
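The "best-so-far" idea can be conveyed with a drastically simplified stochastic search (this is not the BSF ABC algorithm of the paper, which uses a population of bee agents; the single-coefficient exponential filter and all names below are illustrative assumptions): a single best-so-far candidate filter coefficient is perturbed at random and replaced only when the fitness, here MSE against a clean reference, improves.

```python
import numpy as np

def smooth(signal, alpha):
    # Single-coefficient exponential filter: out[i] = a*x[i] + (1-a)*out[i-1]
    out = np.empty_like(signal)
    out[0] = signal[0]
    for i in range(1, len(signal)):
        out[i] = alpha * signal[i] + (1 - alpha) * out[i - 1]
    return out

def best_so_far_search(noisy, reference, iters=200, seed=0):
    """Keep one best-so-far coefficient and move to a random perturbation
    only when it improves the fitness (MSE against a clean reference)."""
    rng = np.random.default_rng(seed)
    best_a = 0.5
    best_fit = np.mean((smooth(noisy, best_a) - reference) ** 2)
    for _ in range(iters):
        cand = np.clip(best_a + rng.normal(0.0, 0.1), 0.01, 1.0)
        fit = np.mean((smooth(noisy, cand) - reference) ** 2)
        if fit < best_fit:
            best_a, best_fit = cand, fit
    return best_a, best_fit
```

The full ABC algorithm replaces the single candidate with employed, onlooker, and scout bees sharing the best-so-far solution, but the accept-only-improvements structure is the same.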
Affiliation(s)
- Anan Banharnsakun
- Computational Intelligence Research Laboratory (CIRLab), Department of Computer Engineering, Kasetsart University Sriracha Campus, Chonburi 20230, Thailand
22. Deep learning-based noise filtering toward millisecond order imaging by using scanning transmission electron microscopy. Sci Rep 2022; 12:13462. [PMID: 35931705; PMCID: PMC9356044; DOI: 10.1038/s41598-022-17360-3] [Received: 03/23/2022; Accepted: 07/25/2022; Indexed: 11/09/2022]
Abstract
Application of scanning transmission electron microscopy (STEM) to in situ observation will be essential in current and emerging data-driven materials science, given STEM's high affinity with various analytical options. As is well known, STEM's image acquisition time needs to be shortened further to capture a targeted phenomenon in real time, as its current temporal resolution is far below that of conventional TEM. However, rapid image acquisition at millisecond-per-frame rates or faster generally causes image distortion, poor electron signals, and unidirectional blurring, which are obstacles to realizing video-rate STEM observation. Here we show an image correction framework integrating deep learning (DL)-based denoising and image distortion correction schemes optimized for rapid STEM image acquisition. By comparing a series of distortion-corrected rapid-scan images with corresponding regular-scan-speed images, the trained DL network is shown to remove not only the statistical noise but also the unidirectional blurring. This result demonstrates that rapid as well as high-quality image acquisition by STEM can be established with DL and without hardware modification. The DL-based noise filter could be applied to in situ observation, such as dislocation activities under external stimuli, with high spatio-temporal resolution.
23. Rasal T, Veerakumar T, Subudhi BN, Esakkirajan S. A novel approach for reduction of the noise from microscopy images using Fourier decomposition. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.05.001] [Indexed: 11/02/2022]
24. Watson ER, Taherian Fard A, Mar JC. Computational Methods for Single-Cell Imaging and Omics Data Integration. Front Mol Biosci 2022; 8:768106. [PMID: 35111809; PMCID: PMC8801747; DOI: 10.3389/fmolb.2021.768106] [Received: 09/14/2021; Accepted: 11/29/2021; Indexed: 12/12/2022]
Abstract
Integrating single-cell omics and single-cell imaging allows for a more effective characterisation of the underlying mechanisms that drive a phenotype at the tissue level, creating a comprehensive profile at the cellular level. Although the use of imaging data is well established in biomedical research, its primary application has been to observe phenotypes at the tissue or organ level, often using medical imaging techniques such as MRI, CT, and PET. These imaging technologies complement omics-based data in biomedical research because they are helpful for identifying associations between genotype and phenotype, along with functional changes occurring at the tissue level. Single-cell imaging can act as an intermediary between these levels. Meanwhile, new technologies continue to arrive that can be used to interrogate the genome of single cells and related omics datasets. As these two areas, single-cell imaging and single-cell omics, each advance independently with the development of novel techniques, the opportunity to integrate these data types becomes more and more attractive. This review outlines some of the technologies and methods currently available for generating, processing, and analysing single-cell omics and imaging data, and how they could be integrated to further our understanding of complex biological phenomena like ageing. We place an emphasis on machine learning algorithms because of their ability to identify complex patterns in large multidimensional data.
Affiliation(s)
- Atefeh Taherian Fard
- Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
- Jessica Cara Mar
- Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
25. Laine RF, Jacquemet G, Krull A. Imaging in focus: An introduction to denoising bioimages in the era of deep learning. Int J Biochem Cell Biol 2021; 140:106077. [PMID: 34547502; PMCID: PMC8552122; DOI: 10.1016/j.biocel.2021.106077] [Received: 06/27/2021; Revised: 08/24/2021; Accepted: 09/09/2021; Indexed: 11/17/2022]
Abstract
Fluorescence microscopy enables the direct observation of previously hidden dynamic processes of life, allowing profound insights into mechanisms of health and disease. However, imaging of live samples is fundamentally limited by the toxicity of the illuminating light and images are often acquired using low light conditions. As a consequence, images can become very noisy which severely complicates their interpretation. In recent years, deep learning (DL) has emerged as a very successful approach to remove this noise while retaining the useful signal. Unlike classical algorithms which use well-defined mathematical functions to remove noise, DL methods learn to denoise from example data, providing a powerful content-aware approach. In this review, we first describe the different types of noise that typically corrupt fluorescence microscopy images and introduce the denoising task. We then present the main DL-based denoising methods and their relative advantages and disadvantages. We aim to provide insights into how DL-based denoising methods operate and help users choose the most appropriate tools for their applications.
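The noise types this review describes are commonly summarised in a mixed Poisson-Gaussian model, which the sketch below simulates (an illustrative model, not code from the review; the parameter names are assumptions): signal-dependent shot noise plus signal-independent Gaussian read noise, so the measured variance at a pixel grows with its brightness.

```python
import numpy as np

def add_microscopy_noise(clean, read_sigma=2.0, seed=0):
    """Corrupt a clean image with shot noise (Poisson, signal-dependent)
    and detector read noise (Gaussian, signal-independent). The variance
    of the result is approximately clean + read_sigma**2 per pixel."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(np.clip(clean, 0.0, None)).astype(float)
    read = rng.normal(0.0, read_sigma, size=np.shape(clean))
    return shot + read
```

Pairs generated this way from clean images are exactly the kind of training data used by supervised content-aware denoisers, while self-supervised methods learn from the noisy output alone.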
Affiliation(s)
- Romain F Laine
- MRC Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK; The Francis Crick Institute, London NW1 1AT, UK
- Guillaume Jacquemet
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Åbo Akademi University, Faculty of Science and Engineering, Biosciences, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, 20520 Turku, Finland
- Alexander Krull
- School of Computer Science, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
26
Abstract
Live imaging is critical to determining the dynamics and spatial interactions of cells within the tissue environment. In the lung, this has proven to be difficult due to the motion brought about by ventilation and cardiac contractions. A previous version of this Current Protocols in Cytometry article reported protocols for imaging ex vivo live lung slices and the intact mouse lung. Here, we update those protocols by adding new methodologies, new approaches for quantitative image analysis, and new areas of potential application. © 2020 Wiley Periodicals LLC. Basic Protocol 1: Live imaging of lung slices Support Protocol 1: Staining lung sections with fluorescent antibodies Basic Protocol 2: Live imaging in the mouse lung Support Protocol 2: Intratracheal instillations Support Protocol 3: Intravascular instillations Support Protocol 4: Monitoring vital signs of the mouse during live lung imaging Support Protocol 5: Antibodies Support Protocol 6: Fluorescent reporter mice Basic Protocol 3: Quantification of neutrophil-platelet aggregation in pulmonary vasculature Basic Protocol 4: Quantification of platelet-dependent pulmonary thrombosis Basic Protocol 5: Quantification of pulmonary vascular permeability.
Affiliation(s)
- Tomasz Brzoska
- Pittsburgh Heart, Lung and Blood Vascular Medicine Institute, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; Division of Hematology/Oncology, Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Tomasz W Kaminski
- Pittsburgh Heart, Lung and Blood Vascular Medicine Institute, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Margaret F Bennewitz
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, West Virginia
- Prithu Sundd
- Pittsburgh Heart, Lung and Blood Vascular Medicine Institute, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Division of Pulmonary Allergy and Critical Care Medicine, Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
27
Blind Deconvolution Based on Compressed Sensing with bi-l0-l2-norm Regularization in Light Microscopy Image. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18041789. [PMID: 33673166 PMCID: PMC7917747 DOI: 10.3390/ijerph18041789] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Revised: 02/04/2021] [Accepted: 02/09/2021] [Indexed: 11/22/2022]
Abstract
Blind deconvolution of light microscopy images can improve the ability to distinguish cell-level substances. In this study, we investigated a blind deconvolution framework for light microscope images that combines the benefits of bi-l0-l2-norm regularization with compressed sensing and conjugate gradient algorithms. Several existing regularization approaches are limited by staircase (cartoon) artifacts and noise amplification; we implemented the proposed bi-l0-l2-norm regularization to overcome these problems. The method was investigated through simulations and experiments on optical microscopy images containing background noise. Sharpness was improved through successful image restoration while noise amplification was minimized. In addition, quantitative metrics of the restored images, including the intensity profile, root-mean-square error (RMSE), edge preservation index (EPI), structural similarity index measure (SSIM), and normalized noise power spectrum, improved compared with those of existing or comparative methods. In particular, the proposed method yielded RMSE, EPI, and SSIM values of approximately 0.12, 0.81, and 0.88 relative to the reference, and the RMSE, EPI, and SSIM of the restored image improved by about 5.97, 1.26, and 1.61 times compared with the degraded image. Consequently, the proposed method is expected to be effective for image restoration and to reduce the cost of a high-performance light microscope.
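The paper's bi-l0-l2 compressed-sensing scheme is involved, but the underlying idea of regularized deconvolution can be illustrated with a far simpler l2-only (Tikhonov/Wiener-style) baseline in the Fourier domain. This sketch is not the authors' algorithm, and the PSF width, noise level, and regularization weight below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_psf(shape, sigma):
    """Centred Gaussian point-spread function, normalised to unit sum."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    psf = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def deconv_l2(blurred, psf, lam=1e-2):
    """Tikhonov (l2) regularised deconvolution in the Fourier domain:
    X_hat = conj(H) * Y / (|H|^2 + lam). The lam term tames the noise
    amplification that a naive inverse filter would suffer."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Sharp test scene, blurred by the PSF plus a little noise.
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0
psf = gaussian_psf(truth.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
blurred += rng.normal(0, 0.01, truth.shape)

restored = deconv_l2(blurred, psf)
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(blurred, truth), rmse(restored, truth))  # restoration should lower RMSE
```

Replacing the single l2 penalty with the paper's bi-l0-l2 term is what suppresses the staircase artifacts this simple baseline can still exhibit.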
28
A Low-Cost Automated Digital Microscopy Platform for Automatic Identification of Diatoms. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10176033] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Currently, microalgae (i.e., diatoms) constitute a generally accepted bioindicator of water quality and therefore provide an index of the status of biological ecosystems. Diatom detection for specimen counting and sample classification are two difficult, time-consuming tasks for the few existing expert diatomists. To mitigate this challenge, in this work we propose a fully operative, low-cost automated microscope integrating algorithms for: (1) stage and focus control, (2) image acquisition (slide scanning, stitching, contrast enhancement), and (3) diatom detection and prospective specimen classification (among 80 taxa). Deep learning algorithms were applied to overcome the difficult selection of image descriptors imposed by classical machine learning strategies. Compared with those strategies, the best results were obtained by deep neural networks, with a maximum precision of 86% for detection (with the YOLO network) and 99.51% for classification among 80 different species (with the AlexNet network). All the developed operational modules are integrated and controlled by the user from the developed graphical user interface running on the main controller. The resulting operative platform provides a useful toolbox for phycologists in their daily, challenging tasks of identifying and classifying diatoms.
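The detection precision quoted above follows the standard definition precision = TP / (TP + FP) under intersection-over-union (IoU) matching, as is typical when scoring detectors such as YOLO. A minimal sketch with toy boxes (not the diatom data or the actual YOLO matcher):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(predictions, ground_truth, thr=0.5):
    """Greedy one-to-one matching at an IoU threshold: each prediction may
    claim at most one ground-truth box; unclaimed predictions are FPs,
    unclaimed ground truths are FNs."""
    unmatched = list(ground_truth)
    tp = 0
    for p in predictions:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= thr:
            unmatched.remove(best)
            tp += 1
    fp = len(predictions) - tp
    fn = len(unmatched)
    return tp / (tp + fp), tp / (tp + fn)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
pred = [(1, 1, 10, 10), (50, 50, 60, 60)]   # one good hit, one false alarm
p, r = precision_recall(pred, gt)
print(p, r)  # 0.5 precision (1 TP, 1 FP), 0.5 recall (1 of 2 found)
```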
29
Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890 PMCID: PMC7494605 DOI: 10.1016/j.csbj.2020.08.003] [Citation(s) in RCA: 64] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/26/2020] [Accepted: 08/01/2020] [Indexed: 02/07/2023] Open
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. Also in biology and medicine, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view of the past, present, and future developments of deep learning, starting from science at large, through biomedical imaging, to bioimage analysis in particular.
Affiliation(s)
- Erik Meijering
- School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
30
HISTOBREAST, a collection of brightfield microscopy images of Haematoxylin and Eosin stained breast tissue. Sci Data 2020; 7:169. [PMID: 32503988 PMCID: PMC7275059 DOI: 10.1038/s41597-020-0500-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Accepted: 04/21/2020] [Indexed: 11/09/2022] Open
Abstract
Modern histopathology workflows rely on the digitization of histology slides. The quality of the resulting digital representations, in the form of histology slide image mosaics, depends on various specific acquisition conditions and on the image processing steps that underlie the generation of the final mosaic, e.g. registration and blending of the contained image tiles. We introduce HISTOBREAST, an extensive collection of brightfield microscopy images that we collected in a principled manner under different acquisition conditions on Haematoxylin - Eosin (H&E) stained breast tissue. HISTOBREAST is comprised of neighbour image tiles and ensemble of mosaics composed from different combinations of the available image tiles, exhibiting progressively degraded quality levels. HISTOBREAST can be used to benchmark image processing and computer vision techniques with respect to their robustness to image modifications specific to brightfield microscopy of H&E stained tissues. Furthermore, HISTOBREAST can serve in the development of new image processing methods, with the purpose of ensuring robustness to typical image artefacts that raise interpretation problems for expert histopathologists and affect the results of computerized image analysis.
31
32
Lee S, Negishi M, Urakubo H, Kasai H, Ishii S. Mu-net: Multi-scale U-net for two-photon microscopy image denoising and restoration. Neural Netw 2020; 125:92-103. [PMID: 32078964 DOI: 10.1016/j.neunet.2020.01.026] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 12/18/2019] [Accepted: 01/22/2020] [Indexed: 12/31/2022]
Abstract
Advances in two-photon microscopy (2PM) have made three-dimensional (3D) neural imaging of deep cortical regions possible. However, 2PM often suffers from poor image quality because of various noise factors, including blur, white noise, and photobleaching. In addition, the effectiveness of existing image processing methods is limited by the special features of 2PM images, such as deeper tissue penetration but higher image noise owing to rapid laser scanning. To address the denoising problems in 2PM 3D images, we present a new algorithm based on deep convolutional neural networks (CNNs). The proposed model consists of multiple U-nets in which each individual U-net removes noise at a different scale, yielding a performance improvement based on a coarse-to-fine strategy. Moreover, the constituent CNNs employ fully 3D convolution operations. Such an architecture enables the proposed model to facilitate end-to-end learning without any pre/post processing. In experiments on 2PM image denoising, we observed that our new algorithm demonstrates substantial performance improvements over other baseline methods.
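Mu-net's coarse-to-fine strategy can be caricatured without any neural network: denoise a downsampled copy first, then blend the upsampled coarse estimate with a finer-scale one. The sketch below uses a plain mean filter as a stand-in for each per-scale "denoiser" and a fixed 50/50 blend; it only illustrates the multi-scale idea, not the 3D CNN itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def box_filter(img, k=3):
    """Simple k x k mean filter via padded sliding sums (per-scale 'denoiser')."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def down2(img):
    return img[::2, ::2]

def up2(img, shape):
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def coarse_to_fine_denoise(img, levels=3):
    """Recursively denoise the coarsest scale first, then at each finer
    level blend the upsampled coarse estimate with a locally filtered image."""
    if levels == 1:
        return box_filter(img)
    coarse = coarse_to_fine_denoise(down2(img), levels - 1)
    return 0.5 * up2(coarse, img.shape) + 0.5 * box_filter(img)

# Smooth synthetic signal plus white noise.
yy, xx = np.mgrid[0:64, 0:64]
truth = np.sin(xx / 10.0) + np.cos(yy / 12.0)
noisy = truth + rng.normal(0, 0.5, truth.shape)

mse = lambda a: np.mean((a - truth) ** 2)
print(mse(noisy), mse(coarse_to_fine_denoise(noisy)))
```

In Mu-net the mean filters are replaced by trained 3D U-nets and the blend is learned end to end, but the scale recursion has the same shape.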
Affiliation(s)
- Sehyung Lee
- Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Japan
- Makiko Negishi
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Japan
- Hidetoshi Urakubo
- Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Japan
- Haruo Kasai
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Japan
- Shin Ishii
- Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Japan; International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Japan; Advanced Telecommunications Research Institute International (ATR), Japan
33
Fast and accurate sCMOS noise correction for fluorescence microscopy. Nat Commun 2020; 11:94. [PMID: 31901080 PMCID: PMC6941997 DOI: 10.1038/s41467-019-13841-8] [Citation(s) in RCA: 72] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Accepted: 11/29/2019] [Indexed: 12/12/2022] Open
Abstract
The rapid development of scientific CMOS (sCMOS) technology has greatly advanced optical microscopy for biomedical research with superior sensitivity, resolution, field-of-view, and frame rates. However, in sCMOS sensors the parallel charge-voltage conversion and the different responsivity of each pixel induce extra readout and pattern noise compared to charge-coupled devices (CCD) and electron-multiplying CCD (EM-CCD) sensors. This can produce artifacts, deteriorate imaging capability, and hinder quantification of fluorescent signals, thereby compromising strategies to reduce photo-damage to live samples. Here, we propose a content-adaptive algorithm for the automatic correction of sCMOS-related noise (ACsN) for fluorescence microscopy. ACsN combines camera physics and layered sparse filtering to significantly reduce the most relevant noise sources in an sCMOS sensor while preserving the fine details of the signal. The method improves camera performance, enabling fast, low-light, and quantitative optical microscopy with video-rate denoising for a broad range of imaging conditions and modalities. Scientific complementary metal-oxide semiconductor (sCMOS) cameras have advanced the imaging field, but they often suffer from additional noise compared to CCD sensors. Here the authors present a content-adaptive algorithm for the automatic correction of sCMOS-related noise for fluorescence microscopy.
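The fixed-pattern component of sCMOS noise comes from each pixel having its own offset (dark level) and gain (responsivity); the camera-physics part of any correction scheme amounts to the pixel-wise inversion corrected = (raw - offset) / gain. A noise-free numpy sketch with made-up calibration maps (this is only the fixed-pattern step, not the layered sparse filtering of ACsN):

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-pixel sensor maps, as would be measured in an sCMOS calibration:
shape = (32, 32)
offset = rng.normal(100.0, 2.0, shape)   # dark level, in ADU
gain = rng.normal(2.0, 0.1, shape)       # ADU per photoelectron

def camera(photons):
    """Forward model: raw ADU = gain * photons + offset. Shot and read
    noise are omitted so the pixel-wise correction is easy to verify."""
    return gain * photons + offset

def correct(raw):
    """Invert the per-pixel response to recover photoelectron counts."""
    return (raw - offset) / gain

flat = np.full(shape, 50.0)              # uniform illumination
raw = camera(flat)
recovered = correct(raw)

# The fixed-pattern structure visible in the raw frame vanishes after
# correction: the recovered frame is flat again.
print(raw.std(), recovered.std())
```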
34
Chao Z, Kim HJ. Removal of computed tomography ring artifacts via radial basis function artificial neural networks. Phys Med Biol 2019; 64:235015. [PMID: 31639777 DOI: 10.1088/1361-6560/ab5035] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Ring artifacts in computed tomography (CT) images are caused by the undesirable response of detector pixels, which leads to the degradation of CT images. Accordingly, they affect image interpretation, post-processing, and quantitative analysis. In this study, a radial basis function neural network (RBFNN) was used to remove ring artifacts. The proposed method employs polar coordinate transformation. First, ring artifacts were transformed into linear artifacts by polar coordinate transformation. Then, smoothing operators were applied to locate these artifacts exactly. Subsequently, the RBFNN was operated on each linear artifact. The neuron numbers of the input, hidden, and output layers of the neural network were 8, 40, and 1, respectively. Neurons in the input layer were selected according to the characteristics of the artifact itself and its relationship with the surrounding normal pixels. For training the neural network, a hybrid of the adaptive gradient descent algorithm (AGDA) and the gravitational search algorithm (GSA) was adopted. After the corrected image was obtained using the updated neural network, the inverse coordinate transformation was applied. The experimental data were divided into simulated and real ring artifacts, based on brain and abdomen CT images. Compared with current artifact removal methods, the proposed method removed ring artifacts more effectively and retained the maximum detail of normal tissues. In addition, in the quantitative index analysis, the performance of the proposed method was superior to that of the other methods.
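The key preprocessing trick described above is the polar coordinate transformation: in (theta, radius) coordinates a ring becomes a straight line, which is far easier to locate and smooth. A nearest-neighbour resampling sketch (illustrative grid sizes, not the authors' implementation):

```python
import numpy as np

# A ring artifact: pixels at a fixed radius from the rotation centre are biased.
n = 65
yy, xx = np.mgrid[0:n, 0:n]
cy = cx = n // 2
r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
image = np.zeros((n, n))
image[np.abs(r - 12) < 1.0] = 1.0          # bright ring at radius 12

def to_polar(img, n_r=30, n_theta=90):
    """Nearest-neighbour resampling onto a (theta, radius) grid, so rings
    become (near-)vertical lines."""
    out = np.zeros((n_theta, n_r))
    for ti, theta in enumerate(np.linspace(0, 2 * np.pi, n_theta, endpoint=False)):
        for ri in range(n_r):
            y = int(round(cy + ri * np.sin(theta)))
            x = int(round(cx + ri * np.cos(theta)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                out[ti, ri] = img[y, x]
    return out

polar = to_polar(image)
column_energy = polar.sum(axis=0)   # the ring's energy piles up in one column
print(int(np.argmax(column_energy)))
```

In the paper the per-column artifact is then corrected by the RBFNN before the inverse polar transform; here the transform alone is shown.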
Affiliation(s)
- Zhen Chao
- Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 220-710, Republic of Korea
35
Bobrow TL, Mahmood F, Inserni M, Durr NJ. DeepLSR: a deep learning approach for laser speckle reduction. BIOMEDICAL OPTICS EXPRESS 2019; 10:2869-2882. [PMID: 31259057 PMCID: PMC6583356 DOI: 10.1364/boe.10.002869] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Revised: 05/08/2019] [Accepted: 05/08/2019] [Indexed: 05/06/2023]
Abstract
Speckle artifacts degrade image quality in virtually all modalities that utilize coherent energy, including optical coherence tomography, reflectance confocal microscopy, ultrasound, and widefield imaging with laser illumination. We present an adversarial deep learning framework for laser speckle reduction, called DeepLSR (https://durr.jhu.edu/DeepLSR), that transforms images from a source domain of coherent illumination to a target domain of speckle-free, incoherent illumination. We apply this method to widefield images of objects and tissues illuminated with a multi-wavelength laser, using light emitting diode-illuminated images as ground truth. In images of gastrointestinal tissues, DeepLSR reduces laser speckle noise by 6.4 dB, compared to a 2.9 dB reduction from optimized non-local means processing, a 3.0 dB reduction from BM3D, and a 3.7 dB reduction from an optical speckle reducer utilizing an oscillating diffuser. Further, DeepLSR can be combined with optical speckle reduction to reduce speckle noise by 9.4 dB. This dramatic reduction in speckle noise may enable the use of coherent light sources in applications that require small illumination sources and high-quality imaging, including medical endoscopy.
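The dB figures quoted above (e.g. "reduces laser speckle noise by 6.4 dB") are residual-noise power ratios. The sketch below computes one for the textbook case of fully developed multiplicative speckle (exponentially distributed intensity), suppressed by averaging independent speckle patterns; the scene and pattern count are arbitrary, and this is a physics baseline, not DeepLSR:

```python
import numpy as np

rng = np.random.default_rng(4)

def noise_reduction_db(noisy, denoised, clean):
    """Residual-noise power ratio in decibels, the figure of merit used
    for speckle-reduction methods."""
    before = np.mean((noisy - clean) ** 2)
    after = np.mean((denoised - clean) ** 2)
    return 10.0 * np.log10(before / after)

# Fully developed speckle is multiplicative: I = clean * s, with s ~ Exp(1)
# for coherent intensity imaging (speckle contrast sigma/mean = 1).
clean = np.full((128, 128), 10.0)
speckled = clean * rng.exponential(1.0, clean.shape)

# Averaging k independent speckle patterns (as an oscillating diffuser does
# over time) cuts the noise power by roughly k, i.e. 10*log10(k) dB.
k = 4
averaged = clean * rng.exponential(1.0, (k,) + clean.shape).mean(axis=0)

print(round(noise_reduction_db(speckled, averaged, clean), 1))
```

This makes the quoted numbers concrete: a 6.4 dB reduction corresponds to cutting the residual noise power by a factor of about 4.4.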