1
Mohsin ASM, Choudhury SH. Label-free quantification of gold nanoparticles at the single-cell level using a multi-column convolutional neural network (MC-CNN). Analyst 2024; 149:2412-2419. PMID: 38487894. DOI: 10.1039/d3an01982a.
Abstract
Gold nanoparticles (AuNPs) are extensively used in cellular imaging, single-particle tracking, disease diagnosis, the study of membrane protein interactions, and drug delivery. Understanding the dynamics of AuNP uptake in live cells is crucial for optimizing their efficacy and safety. Traditional manual methods for quantifying AuNP uptake are time-consuming and subjective, limiting their scalability and accuracy. Fluorescence-based techniques suffer from photobleaching and photoblinking, optical microscopy is constrained by the diffraction limit, and electron microscopy-based imaging is destructive and unsuitable for live-cell imaging. Furthermore, the resulting images may contain hundreds of particles with varied intensities, blurring, and substantial occlusion, making manual quantification of AuNP uptake difficult. To overcome these issues and measure AuNP uptake by live cells, we annotated a dataset of dark-field images of 50 nm-radius AuNPs at different incubation durations. We then created a customized multi-column convolutional neural network (MC-CNN) to count the particles present in a cell. The customized MC-CNN outperformed typical particle-counting architectures when benchmarked against spectroscopy-based counting. This will allow researchers to gain a better understanding of AuNP behavior and interactions with cells, paving the way for advances in nanomedicine, drug delivery, and biomedical research. The code for this paper is available at the following link: https://github.com/Namerlight/LabelFree_AuNP_Quantification.
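The abstract does not spell out the MC-CNN's training target, but multi-column counting networks in this lineage are typically supervised with density maps whose integral equals the object count. A minimal sketch of that idea, assuming Gaussian-kernel density targets (the kernel width, image size, and particle positions below are illustrative, not from the paper):

```python
import numpy as np

def density_map(centers, shape, sigma=2.0):
    """Build a ground-truth density map from annotated particle centers.

    Each particle contributes a normalized Gaussian, so the map's
    integral equals the number of particles - the quantity a counting
    CNN is trained to regress.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dmap = np.zeros(shape, dtype=float)
    for cy, cx in centers:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # normalize so each particle sums to exactly 1
    return dmap

# Three annotated particles in a 64x64 dark-field crop.
centers = [(10, 12), (30, 40), (50, 20)]
dmap = density_map(centers, (64, 64))
print(round(dmap.sum()))  # count recovered as the integral of the map: 3
```

At inference time the network predicts such a map from the raw dark-field image, and summing the prediction gives the per-cell particle count.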
Affiliation(s)
- Abu S M Mohsin
- Nanotechnology, IoT and Applied Machine Learning Research Group, Brac University, Dhaka, Bangladesh.
- Shadab H Choudhury
- Nanotechnology, IoT and Applied Machine Learning Research Group, Brac University, Dhaka, Bangladesh.
2
Lei M, Zhao J, Zhou J, Lee H, Wu Q, Burns Z, Chen G, Liu Z. Super resolution label-free dark-field microscopy by deep learning. Nanoscale 2024; 16:4703-4709. PMID: 38268454. DOI: 10.1039/d3nr04294d.
Abstract
Dark-field microscopy (DFM) is a powerful label-free, high-contrast imaging technique thanks to its ability to reveal features of transparent specimens with inhomogeneities. However, owing to Abbe's diffraction limit, fine structures at the sub-wavelength scale are difficult to resolve. In this work, we report a single-image super-resolution DFM scheme using a convolutional neural network (CNN). A U-Net-based CNN is trained on a dataset numerically simulated from the forward physical model of the DFM. The forward physical model, described by the parameters of the imaging setup, connects the object ground truths and the dark-field images. With the trained network, we demonstrate super-resolution dark-field imaging of various test samples with a twofold resolution improvement. Our technique illustrates a promising deep learning approach to doubling the resolution of DFM without any hardware modification.
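The training strategy here hinges on the forward physical model: ground-truth objects are numerically degraded into synthetic dark-field images, giving the network paired training examples without experimental acquisition. A minimal sketch of such pair generation, assuming a Gaussian stand-in for the system PSF (the paper's actual forward model is parameterized by the real imaging setup):

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Gaussian approximation of the dark-field system PSF, normalized to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def simulate_pair(obj, sigma=3.0):
    """Return a (ground truth, simulated low-resolution image) training pair."""
    psf = gaussian_psf(obj.shape[0], sigma)
    # Circular convolution via FFT; ifftshift puts the PSF peak at the origin.
    blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    return obj, blurred

rng = np.random.default_rng(0)
obj = (rng.random((64, 64)) > 0.99).astype(float)  # sparse point scatterers
gt, lowres = simulate_pair(obj)
# Normalized PSF preserves total intensity, so the pair is photometrically consistent.
print(lowres.shape, abs(lowres.sum() - gt.sum()) < 1e-6)
```

A super-resolution network is then trained to invert this mapping, from `lowres` back to `gt`, which is why the fidelity of the forward model bounds the fidelity of the reconstruction.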
Affiliation(s)
- Ming Lei
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Junxiao Zhou
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Hongki Lee
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Qianyi Wu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Guanghao Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Materials Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
3
Si L, Li N, Huang T, Du S, Dong Y, Yao Y, Ma H. Computational image translation from Mueller matrix polarimetry to bright-field microscopy. Journal of Biophotonics 2022; 15:e202100242. PMID: 34775685. DOI: 10.1002/jbio.202100242.
Abstract
Mueller matrix (MM) polarimetry provides comprehensive information about polarization properties that are closely related to microstructural features, and it has demonstrated its potential in biomedical studies and clinical practice; bright-field microscopy, meanwhile, is widely used in pathological diagnosis as the gold standard. In this work, we improve the throughput of MM microscopy by learning a statistical transformation between these two imaging systems using deep learning. With this approach, the MM microscope can generate an image equivalent to a bright-field microscope image of the matching field of view, adding a new transformative capability to the existing MM imaging system without requiring extra hardware. The translation model is based on a conditional generative adversarial network with customized loss functions. We demonstrate the effectiveness of our approach on liver and breast tissues and evaluate its performance with four quantitative similarity assessment methods at the pixel, image, and distribution levels, respectively.
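The customized loss functions are not detailed in the abstract, but conditional GAN translators of the pix2pix type commonly combine an adversarial term with a pixel-wise L1 term. A hedged sketch of such a composite generator objective (the weight `lambda_l1` and the binary cross-entropy form are assumptions, not the paper's exact losses):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy on discriminator output probabilities."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_loss(fake_img, real_img, disc_on_fake, lambda_l1=100.0):
    """pix2pix-style objective: fool the discriminator + stay close in L1."""
    adversarial = bce(disc_on_fake, np.ones_like(disc_on_fake))  # want D(fake)=1
    pixel_l1 = np.mean(np.abs(fake_img - real_img))
    return adversarial + lambda_l1 * pixel_l1

# Toy 8x8 translated patch vs. its bright-field target.
rng = np.random.default_rng(1)
fake = rng.random((8, 8))
real = fake.copy()  # a perfect translation, so the L1 term vanishes
loss = generator_loss(fake, real, disc_on_fake=np.array([0.5]))
print(round(loss, 4))  # only the adversarial term remains: -log(0.5) = 0.6931
```

The large L1 weight is the standard pix2pix choice: it anchors the translation to the paired target so the adversarial term only has to sharpen texture, not invent content.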
Affiliation(s)
- Lu Si
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Naiqi Li
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Tongyu Huang
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Shan Du
- Department of Pathology, University of Chinese Academy of Sciences Shenzhen Hospital, Shenzhen, China
- Yang Dong
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Yue Yao
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Hui Ma
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Department of Physics, Tsinghua University, Beijing, China
4
Ding H, Li F, Meng Z, Feng S, Ma J, Nie S, Yuan C. Auto-focusing and quantitative phase imaging using deep learning for the incoherent illumination microscopy system. Optics Express 2021; 29:26385-26403. PMID: 34615075. DOI: 10.1364/oe.434014.
Abstract
It is well known that the quantitative phase information vital to biomedical studies is hard to obtain directly with bright-field microscopy under incoherent illumination. In addition, it is difficult to keep a living sample in focus over long-term observation, so autofocusing and quantitative phase imaging must be solved simultaneously in microscopy. Here, we propose a lightweight deep learning-based framework, built on a residual structure and constrained by a novel loss function model, that realizes both autofocusing and quantitative phase imaging. It outputs the corresponding in-focus amplitude and phase information at high speed (10 fps) from a single-shot out-of-focus bright-field image. The training data were captured with a purpose-designed system under hybrid incoherent and coherent illumination. The experimental results verify that focused and quantitative phase images of both non-biological and biological samples can be reconstructed by the framework. It provides a versatile quantitative technique for continuous, long-term, label-free monitoring of living cells using a traditional incoherent-illumination microscopy system.
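The residual structure mentioned above is the standard identity-skip pattern: each block learns only a correction added to its input, which keeps a lightweight network easy to train. A toy fully-connected residual block in NumPy (the layer shapes and initialization scale are illustrative; the paper's network is convolutional):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """y = x + W2 @ relu(W1 @ x): the block regresses a residual correction."""
    return x + w2 @ relu(w1 @ x)

rng = np.random.default_rng(2)
d = 16
x = rng.standard_normal(d)
w1 = rng.standard_normal((d, d)) * 0.01  # small weights: near-identity start
w2 = rng.standard_normal((d, d)) * 0.01

y = residual_block(x, w1, w2)
# At initialization the block is close to the identity, so signal and
# gradients pass through the skip connection essentially unchanged -
# the property that makes deep residual stacks trainable.
print(np.allclose(y, x, atol=0.05))
```

Stacking such blocks lets the network refine an out-of-focus input toward the in-focus amplitude and phase step by step, rather than regenerating the whole image at every layer.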
5
Meng Z, Pedrini G, Lv X, Ma J, Nie S, Yuan C. DL-SI-DHM: a deep network generating the high-resolution phase and amplitude images from wide-field images. Optics Express 2021; 29:19247-19261. PMID: 34266038. DOI: 10.1364/oe.424718.
Abstract
Structured illumination digital holographic microscopy (SI-DHM) is a high-resolution, label-free technique that enables imaging of unstained biological samples. However, SI-DHM places high demands on the stability of the experimental setup and needs long exposure times; furthermore, image synthesis and phase correction during reconstruction are both challenging tasks. We propose a deep-learning-based method, DL-SI-DHM, to improve the recording and reconstruction efficiency and the accuracy of SI-DHM and to provide high-resolution phase imaging. During training, high-resolution amplitude and phase images obtained by phase-shifting SI-DHM, together with wide-field amplitudes, are used as inputs to DL-SI-DHM. The well-trained network can reconstruct both high-resolution amplitude and phase images from a single wide-field amplitude image. Compared with traditional SI-DHM, this method significantly shortens the recording time, simplifies the reconstruction process, and no longer requires complex phase correction or frequency synthesis. Compared with other learning-based reconstruction schemes, the proposed network has a better response to high frequencies. The possibility of using the proposed method to investigate different biological samples has been experimentally verified, and its low-noise characteristics were also demonstrated.
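The claimed better response to high frequencies is the kind of property one can check with a radially averaged power spectrum: a reconstruction that preserves fine detail keeps more energy at large spatial frequencies. A sketch of that diagnostic (purely illustrative; the paper does not specify its evaluation code, and the crude three-tap blur below merely stands in for a reconstruction that loses detail):

```python
import numpy as np

def radial_power_spectrum(img, n_bins=16):
    """Radially averaged power spectrum: mean spectral energy per frequency ring."""
    f = np.fft.fftshift(np.fft.fft2(img))  # DC at the array center
    power = np.abs(f) ** 2
    cy, cx = np.array(img.shape) // 2
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx)  # distance from DC = spatial frequency
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    total = np.bincount(which, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    return total / counts

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))
# Blurring suppresses high frequencies, so the spectrum tail should drop.
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3
ps_sharp = radial_power_spectrum(sharp)
ps_blur = radial_power_spectrum(blurred)
print(ps_blur[-1] < ps_sharp[-1])  # less energy in the highest-frequency ring
```

Comparing such curves for two reconstructions of the same scene makes the "response to high frequencies" claim quantitative rather than visual.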