1
Lo MCK, Siu DMD, Lee KCM, Wong JSJ, Yeung MCF, Hsin MKY, Ho JCM, Tsia KK. Information-Distilled Generative Label-Free Morphological Profiling Encodes Cellular Heterogeneity. Adv Sci (Weinh) 2024:e2307591. PMID: 38864546; DOI: 10.1002/advs.202307591.
Abstract
Image-based cytometry faces challenges due to technical variations arising from different experimental batches and conditions, such as differences in instrument configurations or image acquisition protocols, impeding genuine biological interpretation of cell morphology. Existing solutions, which often require extensive prior knowledge of the data or control samples shared across batches, have proved limited, especially with complex cell image data. To overcome this, "Cyto-Morphology Adversarial Distillation" (CytoMAD), a self-supervised multi-task learning strategy that distills biologically relevant cellular morphological information from batch variations, is introduced to enable integrated analysis across multiple data batches without complex data assumptions or extensive manual annotation. Unique to CytoMAD is its "morphology distillation", symbiotically paired with deep-learning image-contrast translation, offering additional interpretable insights into label-free cell morphology. The versatile efficacy of CytoMAD is demonstrated in augmenting the power of biophysical imaging cytometry. It allows integrated label-free classification of human lung cancer cell types and accurately recapitulates their progressive drug responses, even when trained without drug-concentration information. CytoMAD also allows joint analysis of tumor biophysical cellular heterogeneity, linked to epithelial-mesenchymal plasticity, that standard fluorescence markers overlook. CytoMAD can substantiate the wide adoption of biophysical cytometry for cost-effective diagnosis and screening.
Affiliation(s)
- Michelle C K Lo
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
  - Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, New Territories, Hong Kong
- Dickson M D Siu
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
  - Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, New Territories, Hong Kong
- Kelvin C M Lee
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
  - Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, New Territories, Hong Kong
- Justin S J Wong
  - Conzeb Limited, Hong Kong Science Park, New Territories, Hong Kong
- Maximus C F Yeung
  - Department of Pathology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam Road, Hong Kong
- Michael K Y Hsin
  - Department of Surgery, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam Road, Hong Kong
- James C M Ho
  - Department of Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam Road, Hong Kong
- Kevin K Tsia
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
  - Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, New Territories, Hong Kong
2
Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024;89:102378. PMID: 38838549; DOI: 10.1016/j.ceb.2024.102378.
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted-light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of using in silico labeling to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
Affiliation(s)
- Nitsan Elmalam
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
3
Shi Q, Song F, Zhou X, Chen X, Cao J, Na J, Fan Y, Zhang G, Zheng L. Early Predicting Osteogenic Differentiation of Mesenchymal Stem Cells Based on Deep Learning Within One Day. Ann Biomed Eng 2024;52:1706-1718. PMID: 38488988; DOI: 10.1007/s10439-024-03483-3.
Abstract
Osteogenic differentiation of mesenchymal stem cells (MSCs) is proposed to be critical for bone tissue engineering and regenerative medicine. However, the current approach for evaluating osteogenic differentiation mainly involves immunohistochemical staining of specific markers, which can often be detected only on days 5-7 of osteogenic induction. Deep learning (DL) is a key technology for realizing artificial intelligence (AI). Computer vision, a branch of AI, has been shown to achieve high-precision image recognition using convolutional neural networks (CNNs). Our goal was to train CNNs to quantitatively measure the osteogenic differentiation of MSCs. To this end, bright-field images of MSCs during early osteogenic differentiation (days 0, 1, 3, 5, and 7) were captured using a simple optical phase-contrast microscope to train the CNNs. The results showed that the CNNs could be trained to recognize undifferentiated cells and differentiating cells with an accuracy of 0.961 on an independent test set. In addition, we found that the CNNs successfully distinguished differentiated cells at a very early stage (only 1 day). Further analysis showed that overall morphological features of the MSCs were the main basis for the CNN classification. In conclusion, MSC differentiation can be detected early and accurately from simple bright-field images and DL networks, which may also provide a potential and novel method for the field of cell detection in the near future.
Affiliation(s)
- Qiusheng Shi
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Fan Song
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Xiaocheng Zhou
  - Department of Statistics, The Chinese University of Hong Kong, Sha Tin, Hong Kong SAR, China
- Xinyuan Chen
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jingqi Cao
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jing Na
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Yubo Fan
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Guanglei Zhang
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Lisha Zheng
  - Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
4
Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024;25:443-463. PMID: 38378991; DOI: 10.1038/s41580-024-00702-6.
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intracellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what is possible to capture in living systems. In this Review, we discuss these computational methods, focusing on artificial intelligence-based approaches that can be layered on top of commonly used existing microscopies, as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Affiliation(s)
- Hari Shroff
  - Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Ilaria Testa
  - Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
- Florian Jug
  - Fondazione Human Technopole (HT), Milan, Italy
- Suliana Manley
  - Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
5
Kuhn TM, Paulsen M, Cuylen-Haering S. Accessible high-speed image-activated cell sorting. Trends Cell Biol 2024:S0962-8924(24)00094-1. PMID: 38789300; DOI: 10.1016/j.tcb.2024.04.007.
Abstract
Over the past six decades, fluorescence-activated cell sorting (FACS) has become an essential technology for basic and clinical research by enabling the isolation of cells of interest in high throughput. Recent technological advancements have started a new era of flow cytometry. By combining the spatial resolution of microscopy with high-speed cell sorting, new instruments allow cell sorting based on simple image-derived parameters or sophisticated image analysis algorithms, thereby greatly expanding the scope of applications. In this review, we discuss the systems that are commercially available or have been described in enough methodological and engineering detail to allow their replication. We summarize their strengths and limitations and highlight applications that have the potential to transform various fields in basic life science research and clinical settings.
Affiliation(s)
- Terra M Kuhn
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Malte Paulsen
  - Novo Nordisk Foundation Center for Stem Cell Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Sara Cuylen-Haering
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
6
Fredin Haslum J, Lardeau CH, Karlsson J, Turkki R, Leuchowius KJ, Smith K, Müllers E. Cell Painting-based bioactivity prediction boosts high-throughput screening hit-rates and compound diversity. Nat Commun 2024;15:3470. PMID: 38658534; PMCID: PMC11043326; DOI: 10.1038/s41467-024-47171-1.
Abstract
Identifying active compounds for a target is a time- and resource-intensive task in early drug discovery. Accurate bioactivity prediction using morphological profiles could streamline the process, enabling smaller, more focused compound screens. We investigate the potential of deep learning on unrefined single-concentration activity readouts and Cell Painting data, to predict compound activity across 140 diverse assays. We observe an average ROC-AUC of 0.744 ± 0.108 with 62% of assays achieving ≥0.7, 30% ≥0.8, and 7% ≥0.9. In many cases, the high prediction performance can be achieved using only brightfield images instead of multichannel fluorescence images. A comprehensive analysis shows that Cell Painting-based bioactivity prediction is robust across assay types, technologies, and target classes, with cell-based assays and kinase targets being particularly well-suited for prediction. Experimental validation confirms the enrichment of active compounds. Our findings indicate that models trained on Cell Painting data, combined with a small set of single-concentration data points, can reliably predict the activity of a compound library across diverse targets and assays while maintaining high hit rates and scaffold diversity. This approach has the potential to reduce the size of screening campaigns, saving time and resources, and enabling primary screening with more complex assays.
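The per-assay performance figures above are ROC-AUC values. As an illustrative aside (not code from the paper), ROC-AUC reduces to the probability that a randomly chosen active compound is scored above a randomly chosen inactive one; a few lines of Python make that concrete (the labels and scores below are hypothetical):

```python
# ROC-AUC via the rank-sum (Mann-Whitney U) formulation:
# the fraction of (active, inactive) pairs the model orders correctly,
# counting ties as half a win.

def roc_auc(labels, scores):
    """P(score of a random active > score of a random inactive)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both active and inactive compounds")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical assay readout: 1 = active compound, 0 = inactive.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(round(roc_auc(labels, scores), 3))  # → 0.889
```

The quadratic pairwise form is shown only for clarity; rank-based implementations (e.g., scikit-learn's `roc_auc_score`) compute the same quantity efficiently.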
Affiliation(s)
- Johan Fredin Haslum
  - KTH Royal Institute of Technology, Stockholm, Sweden
  - Science for Life Laboratory, Stockholm, Sweden
  - Research and Early Development, Cardiovascular, Renal and Metabolism (CVRM), BioPharmaceuticals R&D, AstraZeneca, Gothenburg, Sweden
- Johan Karlsson
  - Discovery Sciences, R&D, AstraZeneca, Gothenburg, Sweden
- Riku Turkki
  - Discovery Sciences, R&D, AstraZeneca, Gothenburg, Sweden
- Kevin Smith
  - KTH Royal Institute of Technology, Stockholm, Sweden
  - Science for Life Laboratory, Stockholm, Sweden
- Erik Müllers
  - Research and Early Development, Cardiovascular, Renal and Metabolism (CVRM), BioPharmaceuticals R&D, AstraZeneca, Gothenburg, Sweden
7
Rosenberg CA, Rodrigues MA, Bill M, Ludvigsen M. Comparative analysis of feature-based ML and CNN for binucleated erythroblast quantification in myelodysplastic syndrome patients using imaging flow cytometry data. Sci Rep 2024;14:9349. PMID: 38654058; PMCID: PMC11039460; DOI: 10.1038/s41598-024-59875-x.
Abstract
Myelodysplastic syndrome is primarily characterized by dysplasia in the bone marrow (BM), presenting a challenge in consistent morphology interpretation. Accurate diagnosis through traditional slide-based analysis is difficult, necessitating a standardized objective technique. Over the past two decades, imaging flow cytometry (IFC) has proven effective in combining image-based morphometric analyses with high-parameter phenotyping. We have previously demonstrated the effectiveness of combining IFC with a feature-based machine learning algorithm to accurately identify and quantify rare binucleated erythroblasts (BNEs) in dyserythropoietic BM cells. However, a feature-based workflow poses challenges requiring software-specific expertise. Here we employ a Convolutional Neural Network (CNN) algorithm for BNE identification and differentiation from doublets and cells with irregular nuclear morphology in IFC data. We demonstrate that this simplified AI workflow, coupled with a powerful CNN algorithm, achieves comparable BNE quantification accuracy to manual and feature-based analysis with substantial time savings, eliminating workflow complexity. This streamlined approach holds significant clinical value, enhancing IFC accessibility for routine diagnostic purposes.
Affiliation(s)
- Carina A Rosenberg
  - Department of Hematology, Aarhus University Hospital, Palle Juul-Jensens Boulevard 35, C115, 8200 Aarhus C, Denmark
- Marie Bill
  - Department of Hematology, Aarhus University Hospital, Palle Juul-Jensens Boulevard 35, C115, 8200 Aarhus C, Denmark
  - Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Maja Ludvigsen
  - Department of Hematology, Aarhus University Hospital, Palle Juul-Jensens Boulevard 35, C115, 8200 Aarhus C, Denmark
  - Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
8
Ibrahim KA, Naidu AS, Miljkovic H, Radenovic A, Yang W. Label-Free Techniques for Probing Biomolecular Condensates. ACS Nano 2024;18:10738-10757. PMID: 38609349; DOI: 10.1021/acsnano.4c01534.
Abstract
Biomolecular condensates play important roles in a wide array of fundamental biological processes, such as cellular compartmentalization, cellular regulation, and other biochemical reactions. Since their discovery and first observations, an extensive and expansive library of tools has been developed to investigate various aspects and properties, encompassing structural and compositional information, material properties, and their evolution throughout the life cycle from formation to eventual dissolution. This Review presents an overview of the expanded set of tools and methods that researchers use to probe the properties of biomolecular condensates across diverse scales of length, concentration, stiffness, and time. In particular, we review the exciting development of label-free techniques and methodologies in recent years. We broadly organize the set of tools into three categories: (1) imaging-based techniques, such as transmitted-light microscopy (TLM) and Brillouin microscopy (BM); (2) force spectroscopy techniques, such as atomic force microscopy (AFM) and optical tweezers (OT); and (3) microfluidic platforms and emerging technologies. We point out the tools' key opportunities, challenges, and future perspectives and analyze their correlative potential as well as compatibility with other techniques. Additionally, we review emerging techniques, namely, differential dynamic microscopy (DDM) and interferometric scattering microscopy (iSCAT), that have great potential for future applications in studying biomolecular condensates. Finally, we highlight how some of these techniques can be translated for diagnostic and therapeutic purposes. We hope this Review serves as a useful guide for new researchers in this field and aids in advancing the development of new biophysical tools to study biomolecular condensates.
9
Renner JA, Riley PC. Using machine learning for chemical-free histological tissue staining. J Histotechnol 2024:1-4. PMID: 38648120; DOI: 10.1080/01478885.2024.2338585.
Abstract
Hematoxylin and eosin staining can be hazardous, expensive, and prone to error and variability. To circumvent these issues, artificial intelligence/machine learning models, such as generative adversarial networks (GANs), are being used to 'virtually' stain images of unstained tissue so that they are indistinguishable from chemically stained tissue. Frameworks such as deep convolutional GANs (DCGANs) and conditional GANs (CGANs) have successfully generated highly reproducible 'stained' images. However, their utility may be limited by the requirement for registered, paired images, which can be difficult to obtain. To avoid these dataset requirements, we attempted to use an unsupervised CycleGAN pix2pix model (5,6) to turn unpaired, unstained bright-field images into pathologist-approved digitally 'stained' images. Using formalin-fixed, paraffin-embedded liver samples, 5 µm section images (20x) were obtained before and after staining to create 'stained' and 'unstained' datasets. Model implementation was conducted using Ubuntu 20.04.4 LTS, 32 GB RAM, an Intel Core i7-9750 CPU @ 2.6 GHz, an Nvidia GeForce RTX 2070 Mobile, Python 3.7.11, and TensorFlow 2.9.1. The CycleGAN framework utilized a U-Net-based generator and discriminator from pix2pix, a CGAN. The CycleGAN used a modified loss function, cycle-consistency loss, which assumes unpaired images, so the loss was measured twice. To our knowledge, this is the first documented application of this architecture using unpaired bright-field images. Results and suggested improvements are discussed.
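The "measured twice" remark refers to CycleGAN's two reconstruction cycles: each generator's output must be mapped back to its input by the other generator. A minimal sketch of that cycle-consistency term, with toy stand-in "generators" operating on flat pixel lists (all names, values, and the weight `lam` are illustrative assumptions, not the authors' code):

```python
# Cycle-consistency loss: L1 reconstruction error computed over BOTH
# translation directions, G_BA(G_AB(a)) ≈ a and G_AB(G_BA(b)) ≈ b,
# then weighted by lam relative to the (omitted) adversarial terms.

def l1(u, v):
    """Mean absolute difference between two equal-length pixel vectors."""
    return sum(abs(x - y) for x, y in zip(u, v)) / len(u)

def cycle_loss(g_ab, g_ba, batch_a, batch_b, lam=10.0):
    """lam-weighted L1 reconstruction error, measured once per cycle."""
    forward = sum(l1(g_ba(g_ab(a)), a) for a in batch_a) / len(batch_a)
    backward = sum(l1(g_ab(g_ba(b)), b) for b in batch_b) / len(batch_b)
    return lam * (forward + backward)

# Toy translators that are exact inverses, so the cycle loss vanishes.
g_ab = lambda img: [p + 1.0 for p in img]   # stand-in "stain" direction
g_ba = lambda img: [p - 1.0 for p in img]   # stand-in "unstain" direction
batch_a = [[0.0, 0.5], [1.0, 0.25]]         # unpaired "unstained" images
batch_b = [[0.75, 0.5]]                     # unpaired "stained" images
print(cycle_loss(g_ab, g_ba, batch_a, batch_b))  # → 0.0
```

In a real CycleGAN the generators are convolutional networks and this term is added to two adversarial losses; the key point the sketch shows is that no pairing between `batch_a` and `batch_b` is ever needed.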
Affiliation(s)
- Julie A Renner
  - US Army DEVCOM Chemical Biological Center, Aberdeen Proving Ground, MD, USA
- Patrick C Riley
  - US Army DEVCOM Chemical Biological Center, Aberdeen Proving Ground, MD, USA
10
Chechekhina E, Voloshin N, Kulebyakin K, Tyurin-Kuzmin P. Code-Free Machine Learning Solutions for Microscopy Image Processing: Deep Learning. Tissue Eng Part A 2024. PMID: 38556835; DOI: 10.1089/ten.tea.2024.0014.
Abstract
In recent years, the realm of microscopy image processing has expanded significantly thanks to the advent of machine learning techniques, which offer diverse applications for image processing. Currently, numerous methods are used for processing microscopy images in biology, ranging from conventional machine learning algorithms to sophisticated deep-learning artificial neural networks with millions of parameters. However, a comprehensive grasp of the intricacies of these methods usually necessitates proficiency in programming and advanced mathematics. In our review, we explore various widely used deep learning approaches tailored to the processing of microscopy images. Our emphasis is on algorithms that have gained popularity in biology and have been adapted for users lacking programming expertise. In essence, our target audience comprises biologists interested in exploring the potential of deep learning algorithms, even without programming skills. Throughout the review, we elucidate each algorithm's fundamental concepts and capabilities without delving into mathematical and programming complexities. Crucially, all the highlighted algorithms are accessible on open platforms without requiring code, and we provide detailed descriptions and links within our review. It is essential to recognize that each specific problem demands an individualized approach; consequently, our focus is not on comparing algorithms but on delineating the problems they are adept at solving. In practical scenarios, researchers typically select multiple algorithms suited to their tasks and experimentally determine the most effective one. It is worth noting that microscopy extends beyond biology; its applications span diverse fields such as geology and materials science. Although our review predominantly centers on biomedical applications, the algorithms and principles outlined here are equally applicable to other scientific domains, and a number of the proposed solutions can be modified for use in entirely distinct computer vision cases.
Affiliation(s)
- Elizaveta Chechekhina
  - Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
- Nikita Voloshin
  - Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
- Konstantin Kulebyakin
  - Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
- Pyotr Tyurin-Kuzmin
  - Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
11
Winetraub Y, Van Vleck A, Yuan E, Terem I, Zhao J, Yu C, Chan W, Do H, Shevidi S, Mao M, Yu J, Hong M, Blankenberg E, Rieger KE, Chu S, Aasi S, Sarin KY, de la Zerda A. Noninvasive virtual biopsy using micro-registered optical coherence tomography (OCT) in human subjects. Sci Adv 2024;10:eadi5794. PMID: 38598626; PMCID: PMC11006228; DOI: 10.1126/sciadv.adi5794.
Abstract
Histological hematoxylin and eosin-stained (H&E) tissue sections are used as the gold standard for pathologic detection of cancer, tumor margin detection, and disease diagnosis. Producing H&E sections, however, is invasive and time-consuming. While deep learning has shown promise in virtual staining of unstained tissue slides, true virtual biopsy requires staining of images taken from intact tissue. In this work, we developed a micron-accuracy coregistration method [micro-registered optical coherence tomography (OCT)] that can take a two-dimensional (2D) H&E slide and find the exact corresponding section in a 3D OCT image taken from the original fresh tissue. We trained a conditional generative adversarial network using the paired dataset and showed high-fidelity conversion of noninvasive OCT images to virtually stained H&E slices in both 2D and 3D. Applying these trained neural networks to in vivo OCT images should enable physicians to readily incorporate OCT imaging into their clinical practice, reducing the number of unnecessary biopsy procedures.
Affiliation(s)
- Yonatan Winetraub
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
  - The Bio-X Program, Stanford, CA 94305, USA
  - Biophysics Program at Stanford, Stanford, CA 94305, USA
- Aidan Van Vleck
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Edwin Yuan
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
  - Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Itamar Terem
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
  - Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Jinjing Zhao
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
- Caroline Yu
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Warren Chan
  - Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Hanh Do
  - Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Saba Shevidi
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Maiya Mao
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Jacqueline Yu
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Megan Hong
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Erick Blankenberg
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
- Kerri E. Rieger
  - Department of Pathology, Stanford University School of Medicine and Stanford Cancer Institute, Stanford, CA 94305, USA
- Steven Chu
  - The Bio-X Program, Stanford, CA 94305, USA
  - Biophysics Program at Stanford, Stanford, CA 94305, USA
  - Departments of Physics and Molecular and Cellular Physiology, Energy, Science and Engineering, Stanford University, Stanford, CA 94305, USA
- Sumaira Aasi
  - Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Kavita Y. Sarin
  - Department of Dermatology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Adam de la Zerda
  - Department of Structural Biology, Stanford University, Stanford, CA 94305, USA
  - Molecular Imaging Program at Stanford, Stanford, CA 94305, USA
  - The Bio-X Program, Stanford, CA 94305, USA
  - Biophysics Program at Stanford, Stanford, CA 94305, USA
  - Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
  - The Chan Zuckerberg Biohub, San Francisco, CA 94158, USA
12
Brückner DB, Broedersz CP. Learning dynamical models of single and collective cell migration: a review. Rep Prog Phys 2024;87:056601. PMID: 38518358; DOI: 10.1088/1361-6633/ad36d2.
Abstract
Single and collective cell migration are fundamental processes critical for physiological phenomena ranging from embryonic development and immune response to wound healing and cancer metastasis. To understand cell migration from a physical perspective, a broad variety of models for the underlying physical mechanisms that govern cell motility have been developed. A key challenge in the development of such models is how to connect them to experimental observations, which often exhibit complex stochastic behaviours. In this review, we discuss recent advances in data-driven theoretical approaches that directly connect with experimental data to infer dynamical models of stochastic cell migration. Leveraging advances in nanofabrication, image analysis, and tracking technology, experimental studies now provide unprecedented large datasets on cellular dynamics. In parallel, theoretical efforts have been directed towards integrating such datasets into physical models from the single cell to the tissue scale with the aim of conceptualising the emergent behaviour of cells. We first review how this inference problem has been addressed in both freely migrating and confined cells. Next, we discuss why these dynamics typically take the form of underdamped stochastic equations of motion, and how such equations can be inferred from data. We then review applications of data-driven inference and machine learning approaches to heterogeneity in cell behaviour, subcellular degrees of freedom, and to the collective dynamics of multicellular systems. Across these applications, we emphasise how data-driven methods can be integrated with physical active matter models of migrating cells, and help reveal how underlying molecular mechanisms control cell behaviour. Together, these data-driven approaches are a promising avenue for building physical models of cell migration directly from experimental data, and for providing conceptual links between different length-scales of description.
Affiliation(s)
- David B Brückner
- Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria
- Chase P Broedersz
- Department of Physics and Astronomy, Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
- Arnold Sommerfeld Center for Theoretical Physics and Center for NanoScience, Department of Physics, Ludwig-Maximilian-University Munich, Theresienstr. 37, D-80333 Munich, Germany
13
Ma J, Chen H. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images. IEEE Trans Med Imaging 2024; 43:1388-1399. [PMID: 38010933] [DOI: 10.1109/tmi.2023.3337253]
Abstract
Fluorescence staining is an important technique in the life sciences for labeling cellular constituents. However, it is time-consuming and makes simultaneous labeling of multiple constituents difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks, but their performance relies on large-scale pretraining, hindering their adoption in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-Transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens; the pretraining time of our method is only 1/16 of that of the original MAE. We also design a supervised proxy task that predicts stained images in multiple styles instead of masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated with different metrics, making fair comparison difficult. We therefore develop a standard benchmark based on three public datasets and build a baseline for the convenience of future researchers. Extensive experiments on the three benchmark datasets show that the proposed method achieves the best performance both quantitatively and qualitatively, and ablation studies confirm the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
14
Dai W, Wong IHM, Wong TTW. Exceeding the limit for microscopic image translation with a deep learning-based unified framework. PNAS Nexus 2024; 3:pgae133. [PMID: 38601859] [PMCID: PMC11004937] [DOI: 10.1093/pnasnexus/pgae133]
Abstract
Deep learning algorithms have been widely used in microscopic image translation. The corresponding data-driven models can be trained by supervised or unsupervised learning, depending on the availability of paired data. In the general case, however, the data are only roughly paired: supervised learning can fail because of the misalignment, while unsupervised learning is less than ideal because the rough pairing information goes unused. In this work, we propose a unified framework (U-Frame) that unifies supervised and unsupervised learning by introducing a tolerance size that is adjusted automatically according to the degree of data misalignment. Together with a global sampling rule, we demonstrate that U-Frame consistently outperforms both supervised and unsupervised learning at all levels of data misalignment (even for perfectly aligned image pairs) across a myriad of image translation applications, including pseudo-optical sectioning, virtual histological staining (with clinical evaluations for cancer diagnosis), improvement of signal-to-noise ratio or resolution, and prediction of fluorescent labels, potentially serving as a new standard for image translation.
Affiliation(s)
- Weixing Dai
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
- Ivy H M Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
- Terence T W Wong
- Department of Chemical and Biological Engineering, Translational and Advanced Bioimaging Laboratory, Hong Kong University of Science and Technology, Hong Kong 999077, China
15
Tsubouchi A, An Y, Kawamura Y, Yanagihashi Y, Nakayama H, Murata Y, Teranishi K, Ishiguro S, Aburatani H, Yachie N, Ota S. Pooled CRISPR screening of high-content cellular phenotypes using ghost cytometry. Cell Rep Methods 2024; 4:100737. [PMID: 38531306] [PMCID: PMC10985231] [DOI: 10.1016/j.crmeth.2024.100737]
Abstract
Recent advancements in image-based pooled CRISPR screening have facilitated the mapping of diverse genotype-phenotype associations within mammalian cells. However, the rapid enrichment of cells based on morphological information continues to pose a challenge, constraining the capacity for large-scale gene perturbation screening across diverse high-content cellular phenotypes. In this study, we demonstrate the applicability of multimodal ghost cytometry-based cell sorting, including both fluorescent and label-free high-content phenotypes, for rapid pooled CRISPR screening within vast cell populations. Using the high-content cell sorter operating in fluorescence mode, we successfully executed kinase-specific CRISPR screening targeting genes influencing the nuclear translocation of RelA. Furthermore, using the multiparametric, label-free mode, we performed large-scale screening to identify genes involved in macrophage polarization. Notably, the label-free platform can enrich target phenotypes without requiring invasive staining, preserving untouched cells for downstream assays and expanding the potential for screening cellular phenotypes even when suitable markers are absent.
Affiliation(s)
- Yuri An
- ThinkCyte Inc., Tokyo 113-8654, Japan
- Soh Ishiguro
- School of Biomedical Engineering, Faculty of Medicine and Faculty of Applied Science, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Hiroyuki Aburatani
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo 153-8904, Japan
- Nozomu Yachie
- School of Biomedical Engineering, Faculty of Medicine and Faculty of Applied Science, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo 153-8904, Japan
- Sadao Ota
- ThinkCyte Inc., Tokyo 113-8654, Japan
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo 153-8904, Japan
16
Trettner KJ, Hsieh J, Xiao W, Lee JSH, Armani AM. Nondestructive, quantitative viability analysis of 3D tissue cultures using machine learning image segmentation. APL Bioeng 2024; 8:016121. [PMID: 38566822] [PMCID: PMC10985731] [DOI: 10.1063/5.0189222]
Abstract
Ascertaining the collective viability of cells in different cell culture conditions has typically relied on averaging colorimetric indicators and is often reported as a simple binary readout. Recent research has combined viability assessment techniques with image-based deep-learning models to automate the characterization of cellular properties. However, further development of viability measurements is needed to assess the continuity of possible cellular states and responses to perturbation across cell culture conditions. In this work, we demonstrate an image processing algorithm for quantifying features associated with cellular viability in 3D cultures without the need for assay-based indicators. We show that our algorithm performs similarly to a pair of human experts on whole-well images over a range of days and culture matrix compositions. To demonstrate potential utility, we perform a longitudinal study investigating the impact of a known therapeutic on pancreatic cancer spheroids. Using images taken with a high-content imaging system, the algorithm successfully tracks viability at the individual-spheroid and whole-well levels. The method we propose reduces analysis time by 97% in comparison with the experts. Because the method is independent of the microscope or imaging system used, this approach lays the foundation for accelerating progress in, and improving the robustness and reproducibility of, 3D culture analysis across biological and clinical research.
Affiliation(s)
- Jeremy Hsieh
- Pasadena Polytechnic High School, Pasadena, California 91106, USA
- Weikun Xiao
- Ellison Institute of Technology, Los Angeles, California 90064, USA
17
Maramraju S, Kowalczewski A, Kaza A, Liu X, Singaraju JP, Albert MV, Ma Z, Yang H. AI-organoid integrated systems for biomedical studies and applications. Bioeng Transl Med 2024; 9:e10641. [PMID: 38435826] [PMCID: PMC10905559] [DOI: 10.1002/btm2.10641]
Abstract
In this review, we explore the growing role of artificial intelligence (AI) in advancing the biomedical applications of human pluripotent stem cell (hPSC)-derived organoids. These stem cell-derived miniature organ replicas have become essential tools for disease modeling, drug discovery, and regenerative medicine. However, analyzing the vast and intricate datasets generated from organoids can be inefficient and error-prone. AI techniques offer a promising solution for efficiently extracting insights and making predictions from the diverse data types generated by microscopy imaging, transcriptomics, metabolomics, and proteomics. This review offers a brief overview of organoid characterization and fundamental concepts in AI, while focusing on a comprehensive exploration of AI applications in organoid-based disease modeling and drug evaluation. It provides insights into the future possibilities of AI in enhancing the quality control of organoid fabrication, label-free organoid recognition, and three-dimensional image reconstruction of complex organoid structures. Finally, it presents the challenges and potential solutions in AI-organoid integration, focusing on establishing reliable AI model decision-making processes and standardizing organoid research.
Affiliation(s)
- Sudhiksha Maramraju
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Texas Academy of Mathematics and Science, University of North Texas, Denton, Texas, USA
- Andrew Kowalczewski
- Department of Biomedical & Chemical Engineering, Syracuse University, Syracuse, New York, USA
- BioInspired Institute for Material and Living Systems, Syracuse University, Syracuse, New York, USA
- Anirudh Kaza
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Texas Academy of Mathematics and Science, University of North Texas, Denton, Texas, USA
- Xiyuan Liu
- Department of Mechanical & Aerospace Engineering, Syracuse University, Syracuse, New York, USA
- Jathin Pranav Singaraju
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Texas Academy of Mathematics and Science, University of North Texas, Denton, Texas, USA
- Mark V. Albert
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
- Department of Computer Science and Engineering, University of North Texas, Denton, Texas, USA
- Zhen Ma
- Department of Biomedical & Chemical Engineering, Syracuse University, Syracuse, New York, USA
- BioInspired Institute for Material and Living Systems, Syracuse University, Syracuse, New York, USA
- Huaxiao Yang
- Department of Biomedical Engineering, University of North Texas, Denton, Texas, USA
18
Witmer A, Bhanu B. Iterative pseudo balancing for stem cell microscopy image classification. Sci Rep 2024; 14:4489. [PMID: 38396157] [PMCID: PMC10891062] [DOI: 10.1038/s41598-024-54993-y]
Abstract
Many critical issues arise when training deep neural networks using limited biological datasets. These include overfitting, exploding or vanishing gradients, and other inefficiencies, which are exacerbated by class imbalances and can affect the overall accuracy of a model. There is a need for semi-supervised models that reduce the need for large, balanced, manually annotated datasets so that researchers can easily employ neural networks for experimental analysis. In this work, Iterative Pseudo Balancing (IPB) is introduced to classify stem cell microscopy images while performing on-the-fly dataset balancing using a student-teacher meta-pseudo-label framework. In addition, multi-scale patches of multi-label images are incorporated into the network training to provide previously inaccessible image features, with both local and global information, for effective and efficient learning. The combination of these inputs is shown to increase the classification accuracy of the proposed deep neural network by 3[Formula: see text] over baseline, a statistically significant improvement. This work represents a novel use of pseudo-labeling in data-limited settings, which are common for biological image datasets, and highlights the importance of exhaustive use of available image features for improving the performance of semi-supervised networks. The proposed methods can reduce the need for expensive manual dataset annotation and in turn accelerate the pace of scientific research involving non-invasive cellular imaging.
Affiliation(s)
- Adam Witmer
- Department of Bioengineering, University of California, Riverside, CA, 92521, USA
- Bir Bhanu
- Department of Bioengineering, University of California, Riverside, CA, 92521, USA
- Department of Electrical and Computer Engineering, University of California, Riverside, CA, 92521, USA
19
Kim S, Lee J, Ko J, Park S, Lee SR, Kim Y, Lee T, Choi S, Kim J, Kim W, Chung Y, Kwon OH, Jeon NL. Angio-Net: deep learning-based label-free detection and morphometric analysis of in vitro angiogenesis. Lab Chip 2024; 24:751-763. [PMID: 38193617] [DOI: 10.1039/d3lc00935a]
Abstract
Despite significant advancements in three-dimensional (3D) cell culture technology and the acquisition of extensive data, there is an ongoing need for more effective and dependable data analysis methods. These concerns arise from the continued reliance on manual quantification techniques. In this study, we introduce a microphysiological system (MPS) that seamlessly integrates 3D cell culture to acquire large-scale imaging data and employs deep learning-based virtual staining for quantitative angiogenesis analysis. We utilize a standardized microfluidic device to obtain comprehensive angiogenesis data. Introducing Angio-Net, a novel solution that replaces conventional immunocytochemistry, we convert brightfield images into label-free virtual fluorescence images through the fusion of SegNet and cGAN. Moreover, we develop a tool capable of extracting morphological blood vessel features and automating their measurement, facilitating precise quantitative analysis. This integrated system proves to be invaluable for evaluating drug efficacy, including the assessment of anticancer drugs on targets such as the tumor microenvironment. Additionally, its unique ability to enable live cell imaging without the need for cell fixation promises to broaden the horizons of pharmaceutical and biological research. Our study pioneers a powerful approach to high-throughput angiogenesis analysis, marking a significant advancement in MPS.
Affiliation(s)
- Suryong Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Jungseub Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Jihoon Ko
- Department of BioNano Technology, Gachon University, Gyeonggi, 13120, Republic of Korea
- Seonghyuk Park
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Seung-Ryeol Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Youngtaek Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Taeseung Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Sunbeen Choi
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Jiho Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Wonbae Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Yoojin Chung
- Division of Computer Engineering, Hankuk University of Foreign Studies, Yongin, 17035, Republic of Korea
- Oh-Heum Kwon
- Department of IT Convergence and Applications Engineering, Pukyong National University, Busan, 48513, Republic of Korea
- Noo Li Jeon
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Institute of Advanced Machines and Design, Seoul National University, Seoul, 08826, Republic of Korea
20
Burguete-Lopez A, Makarenko M, Bonifazi M, Menezes de Oliveira BN, Getman F, Tian Y, Mazzone V, Li N, Giammona A, Liberale C, Fratalocchi A. Real-time simultaneous refractive index and thickness mapping of sub-cellular biology at the diffraction limit. Commun Biol 2024; 7:154. [PMID: 38321111] [PMCID: PMC10847501] [DOI: 10.1038/s42003-024-05839-w]
Abstract
Mapping the cellular refractive index (RI) is a central task for research involving the composition of microorganisms and for the development of models providing automated medical screenings with accuracy beyond 95%. These models require significantly enhancing the state-of-the-art RI mapping capabilities to provide large amounts of accurate RI data at high throughput. Here, we present a machine-learning-based technique that obtains a biological specimen's real-time RI and thickness maps from a single image acquired with a conventional color camera. This technology leverages a suitably engineered nanostructured membrane that stretches a biological analyte over its surface and absorbs transmitted light, generating complex reflection spectra from each sample point. The technique does not need pre-existing sample knowledge. It achieves 10⁻⁴ RI sensitivity and sub-nanometer thickness resolution on diffraction-limited spatial areas. We illustrate a practical application by performing sub-cellular segmentation of HCT-116 colorectal cancer cells, obtaining a complete three-dimensional reconstruction of the cellular regions with a characteristic length of 30 μm. These results can facilitate the development of real-time label-free technologies for biomedical studies on microscopic multicellular dynamics.
Affiliation(s)
- Arturo Burguete-Lopez
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Maksim Makarenko
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Marcella Bonifazi
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Physik-Institut, University of Zurich, Winterthurerstrasse 190, Zurich, 8057, Switzerland
- Barbara Nicoly Menezes de Oliveira
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Fedor Getman
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Yi Tian
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Valerio Mazzone
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Physik-Institut, University of Zurich, Winterthurerstrasse 190, Zurich, 8057, Switzerland
- Ning Li
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Alessandro Giammona
- Biological and Environmental Science and Engineering Division (BESE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Institute of Molecular Bioimaging and Physiology (IBFM), National Research Council (CNR), Segrate, Italy
- Carlo Liberale
- Biological and Environmental Science and Engineering Division (BESE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Andrea Fratalocchi
- PRIMALIGHT, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
21
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353] [PMCID: PMC10912813] [DOI: 10.1242/jcs.261545]
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in role for AIs is needed - AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Affiliation(s)
- Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
- Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
22
Jan M, Spangaro A, Lenartowicz M, Mattiazzi Usaj M. From pixels to insights: Machine learning and deep learning for bioimage analysis. Bioessays 2024; 46:e2300114. [PMID: 38058114] [DOI: 10.1002/bies.202300114]
Abstract
Bioimage analysis plays a critical role in extracting information from biological images, enabling deeper insights into cellular structures and processes. The integration of machine learning and deep learning techniques has revolutionized the field, enabling the automated, reproducible, and accurate analysis of biological images. Here, we provide an overview of the history and principles of machine learning and deep learning in the context of bioimage analysis. We discuss the essential steps of the bioimage analysis workflow, emphasizing how machine learning and deep learning have improved preprocessing, segmentation, feature extraction, object tracking, and classification. We provide examples that showcase the application of machine learning and deep learning in bioimage analysis. We examine user-friendly software and tools that enable biologists to leverage these techniques without extensive computational expertise. This review is a resource for researchers seeking to incorporate machine learning and deep learning in their bioimage analysis workflows and enhance their research in this rapidly evolving field.
Affiliation(s)
- Mahta Jan
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Allie Spangaro
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Michelle Lenartowicz
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Mojca Mattiazzi Usaj
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
23
Sun H, Li J, Murphy RF. Expanding the coverage of spatial proteomics: a machine learning approach. Bioinformatics 2024; 40:btae062. [PMID: 38310340] [PMCID: PMC10873576] [DOI: 10.1093/bioinformatics/btae062]
Abstract
Motivation: Multiplexed protein imaging methods use a chosen set of markers and provide valuable information about complex tissue structure and cellular heterogeneity. However, the number of markers that can be measured in the same tissue sample is inherently limited.
Results: In this paper, we present an efficient method to choose a minimal predictive subset of markers that, for the first time, allows the prediction of full images for a much larger set of markers. We demonstrate that our approach also outperforms previous methods for predicting cell-level protein composition. Most importantly, we demonstrate that our approach can be used to select a marker set that enables prediction of a much larger set than could be measured concurrently.
Availability and implementation: All code and intermediate results are available in a Reproducible Research Archive at https://github.com/murphygroup/CODEXPanelOptimization.
Affiliation(s)
- Huangqingbo Sun
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Jiayi Li
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Robert F Murphy
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
24
Opstad IS, Birgisdottir ÅB, Agarwal K. Fluorescence microscopy and correlative brightfield videos of mitochondria and vesicles in H9c2 cardiomyoblasts. Sci Data 2024; 11:125. [PMID: 38272930] [PMCID: PMC10810863] [DOI: 10.1038/s41597-024-02970-5]
Abstract
This paper presents data acquired to study the dynamics and interactions of mitochondria and subcellular vesicles in living cardiomyoblasts. The study was motivated by the importance of mitochondrial quality control and turnover in cardiovascular health. Although fluorescence microscopy is an invaluable tool, it presents several limitations. Correlative fluorescence and brightfield (label-free) images were therefore acquired with the purpose of achieving virtual labelling via machine learning. In comparison with the fluorescence images of mitochondria, the brightfield images also reveal vesicles and other subcellular components, providing additional structural context. A large part of the data contains correlative fluorescence images of lysosomes and/or endosomes over up to 400 timepoints (>30 min). The data can be reused for biological inferences about mitochondrial and vesicular morphology, dynamics, and interactions. Furthermore, virtual labelling of mitochondria or subcellular vesicles can be achieved using these datasets. Finally, the data can inspire new imaging experiments for cellular investigations or computational developments. The data are available through two large, open datasets on DataverseNO.
Affiliation(s)
- Ida S Opstad, Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
- Åsa B Birgisdottir, Department of Clinical Medicine, UiT The Arctic University of Norway, Tromsø, Norway; Division of Cardiothoracic and Respiratory Medicine, University Hospital of North Norway, Tromsø, Norway
- Krishna Agarwal, Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway

25
Waliman M, Johnson RL, Natesan G, Tan S, Santella A, Hong RL, Shah PK. Automated Cell Lineage Reconstruction using Label-Free 4D Microscopy. bioRxiv 2024:2024.01.20.576449. [PMID: 38328064 PMCID: PMC10849476 DOI: 10.1101/2024.01.20.576449]
Abstract
Here we describe embGAN, a deep learning pipeline that addresses the challenge of automated cell detection and tracking in label-free 3D time-lapse imaging. embGAN requires no manual data annotation for training, learns robust detections that exhibit a high degree of scale invariance, and generalizes well to images acquired in multiple labs on multiple instruments.
Affiliation(s)
- Matthew Waliman, Department of Electrical and Computer Engineering, University of California, Los Angeles, California, United States of America
- Ryan L Johnson, Department of Molecular, Cell and Developmental Biology, University of California, Los Angeles, California, United States of America
- Gunalan Natesan, Department of Molecular, Cell and Developmental Biology, University of California, Los Angeles, California, United States of America
- Shiqin Tan, Department of Computational and Systems Biology, University of California, Los Angeles, California, United States of America
- Anthony Santella, Molecular Cytology Core, Memorial Sloan Kettering Cancer Center, New York, New York, United States of America
- Ray L Hong, Department of Biology, California State University, Northridge, California, United States of America
- Pavak K Shah, Department of Molecular, Cell and Developmental Biology, University of California, Los Angeles, California, United States of America; Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, California, United States of America

26
Park R, Kang MS, Heo G, Shin YC, Han DW, Hong SW. Regulated Behavior in Living Cells with Highly Aligned Configurations on Nanowrinkled Graphene Oxide Substrates: Deep Learning Based on Interplay of Cellular Contact Guidance. ACS Nano 2024; 18:1325-1344. [PMID: 38099607 DOI: 10.1021/acsnano.2c09815]
Abstract
Micro-/nanotopographical cues have emerged as a practical and promising strategy for controlling cell fate and reprogramming, playing a key role as biophysical regulators in diverse cellular processes and behaviors. Extracellular biophysical factors can trigger intracellular physiological signaling via mechanotransduction and promote cellular responses such as cell adhesion, migration, proliferation, gene/protein expression, and differentiation. Here, we engineered a highly ordered nanowrinkled graphene oxide (GO) surface via the mechanical deformation of an ultrathin GO film on an elastomeric substrate to observe specific cellular responses to surface-mediated topographical cues. The ultrathin GO film, self-assembled on a uniaxially prestrained elastomeric substrate and then subjected to compressive force, produced GO nanowrinkles with periodic amplitude. To examine acute cellular behaviors on the GO-based cell interface with nanostructured arrays of wrinkles, we cultured L929 fibroblasts and HT22 hippocampal neuronal cells. The developed cell-culture substrate provided a clear directional guidance effect. In addition, we adopted a deep learning (DL)-based data processing technique to precisely interpret cell behaviors on the nanowrinkled GO surfaces. Following the learning/transfer learning protocol of the DL network, we detected cell boundaries, elongation, and orientation and quantitatively evaluated cell velocity, traveling distance, displacement, and orientation. These results suggest that the nanotopographical microenvironment can steer the morphological polarization of living cells, pointing toward their assembly into tissue chips consisting of multiple cell types.
Affiliation(s)
- Rowoon Park, Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Moon Sung Kang, Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Gyeonghwa Heo, Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Yong Cheol Shin, Department of Inflammation and Immunity, Lerner Research Institute, Cleveland Clinic, Ohio 44195, United States
- Dong-Wook Han, Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Suck Won Hong, Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea; Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Pusan National University, Busan 46241, Republic of Korea

27
Yang X, Yang Y, Zhang Z, Li M. Deep Learning Image Recognition-Assisted Atomic Force Microscopy for Single-Cell Efficient Mechanics in Co-culture Environments. Langmuir 2024; 40:837-852. [PMID: 38154137 DOI: 10.1021/acs.langmuir.3c03046]
Abstract
Atomic force microscopy (AFM)-based force spectroscopy has become an important method for characterizing the mechanical properties of single living cells under aqueous conditions, but a disadvantage is its reliance on manual operation and experience, as well as the resulting low throughput. In particular, the capacity to accurately identify the type of a cell grown in a co-culture environment without fluorescent labeling would further facilitate the applications of AFM in the life sciences. Here, we present a study of deep learning image recognition-assisted AFM, which not only enables fluorescence-independent recognition of the identity of single co-cultured cells but also allows efficient downstream AFM force measurements of the identified cells. Using the deep learning-based image recognition model, the viability and type of individual cells grown in co-culture environments were identified directly from optical bright-field images and confirmed by subsequent cell growth and fluorescent labeling results. Based on the image recognition results, the positional relationship between the AFM probe and the targeted cell was determined automatically, allowing precise movement of the AFM probe to the target cell for force measurements. The experimental results show that the presented method is applicable not only to the conventional (microsphere-modified) AFM probe used in AFM indentation assays for measuring the Young's modulus of single co-cultured cells but also to the single-cell probe used in AFM-based single-cell force spectroscopy (SCFS) assays for measuring the adhesion forces of single co-cultured cells. The study illustrates deep learning image recognition-assisted AFM as a promising approach for label-free, high-throughput measurement of single-cell mechanics under co-culture conditions, which will help unravel the mechanical cues involved in cell-cell interactions in their native states at the single-cell level and benefit the field of mechanobiology.
Affiliation(s)
- Xuliang Yang, School of Artificial Intelligence, Shenyang University of Technology, Shenyang 110870, China; State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Yanqi Yang, State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Zhihui Zhang, School of Artificial Intelligence, Shenyang University of Technology, Shenyang 110870, China
- Mi Li, State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China

28
Chen C, Smith ZJ, Fang J, Chu K. Organelle-specific phase contrast microscopy (OS-PCM) enables facile correlation study of organelles and proteins. Biomedical Optics Express 2024; 15:199-211. [PMID: 38223195 PMCID: PMC10783919 DOI: 10.1364/boe.510243]
Abstract
Current methods for studying organelle and protein interactions and correlations depend on multiplex fluorescent labeling, which is experimentally complex and harmful to cells. Here we propose to solve this challenge via OS-PCM, where organelles are imaged and segmented without labels, and combined with standard fluorescence microscopy of protein distributions. In this work, we develop new neural networks to obtain unlabeled organelle, nucleus and membrane predictions from a single 2D image. Automated analysis is also implemented to obtain quantitative information regarding the spatial distribution and co-localization of both protein and organelle, as well as their relationship to the landmark structures of nucleus and membrane. Using mitochondria and DRP1 protein as a proof-of-concept, we conducted a correlation study where only DRP1 is labeled, with results consistent with prior reports utilizing multiplex labeling. Thus our work demonstrates that OS-PCM simplifies the correlation study of organelles and proteins.
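As a sketch of the kind of automated analysis described, the snippet below computes two standard co-localization measures from a (predicted) organelle mask and a measured protein channel. The function name and the choice of measures (a Manders-style fraction and Pearson correlation) are illustrative assumptions, not the paper's exact metrics.

```python
import numpy as np

def colocalization_stats(organelle_mask, protein_img):
    """Quantify protein/organelle co-localization from a label-free
    organelle segmentation and a fluorescence protein channel.
    Returns a Manders-style fraction and a Pearson coefficient."""
    mask = organelle_mask.astype(bool)
    protein = protein_img.astype(float)
    # Manders-style: fraction of total protein signal inside the organelle mask
    m1 = protein[mask].sum() / protein.sum()
    # Pearson correlation between the mask and the intensity image
    r = np.corrcoef(mask.ravel().astype(float), protein.ravel())[0, 1]
    return m1, r
```

The same two inputs, mask and channel, can be compared against nucleus or membrane landmarks to obtain the spatial-distribution statistics mentioned above.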
Affiliation(s)
- Chen Chen, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China
- Zachary J Smith, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China; Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230027, China
- Jingde Fang, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China
- Kaiqin Chu, Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230027, China; Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, Jiangsu 215123, China

29
Seifert R, Markert SM, Britz S, Perschin V, Erbacher C, Stigloher C, Kollmannsberger P. DeepCLEM: automated registration for correlative light and electron microscopy using deep learning. F1000Res 2023; 9:1275. [PMID: 37397873 PMCID: PMC10311120 DOI: 10.12688/f1000research.27158.2]
Abstract
In correlative light and electron microscopy (CLEM), the fluorescent images must be registered to the EM images with high precision. Due to the different contrast of EM and fluorescence images, automated correlation-based alignment is not directly possible, and registration is often done by hand using a fluorescent stain, or semi-automatically with fiducial markers. We introduce "DeepCLEM", a fully automated CLEM registration workflow. A convolutional neural network predicts the fluorescent signal from the EM images, which is then automatically registered to the experimentally measured chromatin signal from the sample using correlation-based alignment. The complete workflow is available as a Fiji plugin and could in principle be adapted for other imaging modalities as well as for 3D stacks.
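The correlation-based alignment step can be sketched as a cross-correlation between the network-predicted fluorescence image and the measured chromatin channel. This minimal version (integer-pixel translation only, no rotation or scaling) is an illustrative assumption rather than the plugin's actual code.

```python
import numpy as np

def register_translation(fixed, moving):
    """Find the integer (dy, dx) such that np.roll(moving, (dy, dx),
    axis=(0, 1)) best matches `fixed`, via FFT cross-correlation."""
    f = np.fft.fft2(fixed - fixed.mean())
    m = np.fft.fft2(moving - moving.mean())
    cross = np.fft.ifft2(f * np.conj(m))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # map peak indices into the symmetric shift range [-N/2, N/2)
    shift = [p - n if p > n // 2 else p for p, n in zip(peak, fixed.shape)]
    return tuple(int(s) for s in shift)
```

In the workflow above, `fixed` would be the experimentally measured chromatin signal and `moving` the fluorescence image predicted from EM, so the recovered shift transfers directly to the EM frame.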
Affiliation(s)
- Rick Seifert, Center for Computational and Theoretical Biology, University of Würzburg, Würzburg, 97074, Germany; Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
- Sebastian M. Markert, Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
- Sebastian Britz, Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
- Veronika Perschin, Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
- Christoph Erbacher, Department of Neurology, University of Würzburg, Würzburg, 97074, Germany
- Christian Stigloher, Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
- Philip Kollmannsberger, Center for Computational and Theoretical Biology, University of Würzburg, Würzburg, 97074, Germany

30
Sun G, Liu S, Shi C, Liu X, Guo Q. 3DCNAS: A universal method for predicting the location of fluorescent organelles in living cells in three-dimensional space. Exp Cell Res 2023; 433:113807. [PMID: 37852350 DOI: 10.1016/j.yexcr.2023.113807]
Abstract
Cellular biology research relies on microscopic imaging techniques for studying the complex structures and dynamic processes within cells. Fluorescence microscopy provides high sensitivity and subcellular resolution but has limitations such as photobleaching and sample preparation challenges. Transmission light microscopy offers a label-free alternative but lacks contrast for detailed interpretation. Deep learning methods have shown promise in analyzing cell images and extracting meaningful information. However, accurately learning and simulating diverse subcellular structures remain challenging. In this study, we propose a method named three-dimensional cell neural architecture search (3DCNAS) to predict subcellular structures of fluorescence using unlabeled transmitted light microscope images. By leveraging the automated search capability of differentiable neural architecture search (NAS), our method partially mitigates the issues of overfitting and underfitting caused by the distinct details of various subcellular structures. Furthermore, we apply our method to analyze cell dynamics in genome-edited human induced pluripotent stem cells during mitotic events. This allows us to study the functional roles of organelles and their involvement in cellular processes, contributing to a comprehensive understanding of cell biology and offering insights into disease pathogenesis.
Affiliation(s)
- Guocheng Sun, Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Shitou Liu, Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Chaojing Shi, Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Xi Liu, Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Qianjin Guo, Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China

31
Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. Biophysical Reports 2023; 3:100133. [PMID: 38026685 PMCID: PMC10663640 DOI: 10.1016/j.bpr.2023.100133]
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
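The core idea, ensemble disagreement as an uncertainty estimate, can be sketched on a toy regression problem. The linear "translation", the synthetic data, and the bootstrap ensemble below are illustrative stand-ins for the paper's image-to-image networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image translation": learn y = 2x + 1 from noisy data with an
# ensemble of bootstrap-trained linear models (illustrative sketch only).
x = rng.uniform(0, 1, 200)
y = 2 * x + 1 + rng.normal(0, 0.05, 200)

coeffs = []
for _ in range(10):
    idx = rng.integers(0, len(x), len(x))      # bootstrap resample
    coeffs.append(np.polyfit(x[idx], y[idx], 1))

def ensemble_predict(x_new):
    """Mean across ensemble members is the prediction; the standard
    deviation across members is the per-point uncertainty estimate."""
    preds = np.stack([np.polyval(c, x_new) for c in coeffs])
    return preds.mean(axis=0), preds.std(axis=0)

# In-distribution input -> small disagreement; far outside the training
# range the members diverge, flagging out-of-distribution input.
mean_in, std_in = ensemble_predict(np.array([0.5]))
mean_out, std_out = ensemble_predict(np.array([10.0]))
```

The same mean/std aggregation applies per pixel when the ensemble members are image-translation networks, giving an uncertainty map alongside the predicted image.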
Affiliation(s)
- Sara Imboden, Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Xuanqing Liu, Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Marie C. Payne, Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Cho-Jui Hsieh, Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Neil Y.C. Lin, Department of Mechanical and Aerospace Engineering; Department of Bioengineering; Institute for Quantitative and Computational Biosciences; California NanoSystems Institute; Jonsson Comprehensive Cancer Center; Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California

32
Xu X, Xiao Z, Zhang F, Wang C, Wei B, Wang Y, Cheng B, Jia Y, Li Y, Li B, Guo H, Xu F. CellVisioner: A Generalizable Cell Virtual Staining Toolbox based on Few-Shot Transfer Learning for Mechanobiological Analysis. Research (Washington, D.C.) 2023; 6:0285. [PMID: 38434246 PMCID: PMC10907024 DOI: 10.34133/research.0285]
Abstract
Visualizing cellular structures, especially the cytoskeleton and the nucleus, is crucial for understanding mechanobiology, but traditional fluorescence staining has inherent limitations such as phototoxicity and photobleaching. Virtual staining techniques provide an alternative approach to addressing these issues but often require a substantial amount of user training data. In this study, we develop a generalizable cell virtual staining toolbox (termed CellVisioner) based on few-shot transfer learning that requires substantially less user training data. CellVisioner can virtually stain F-actin and nuclei for various types of cells and extract single-cell parameters relevant to mechanobiology research. Taking label-free single-cell images as input, CellVisioner can predict cell mechanobiological status (e.g., Yes-associated protein nuclear/cytoplasmic ratio) and perform long-term monitoring of living cells. We envision that CellVisioner will be a powerful tool for facilitating on-site mechanobiological research.
Affiliation(s)
- Xiayu Xu, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Zhanfeng Xiao, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Fan Zhang, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Changxiang Wang, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Bo Wei, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Yaohui Wang, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Bo Cheng, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Yuanbo Jia, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Yuan Li, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Bin Li, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Hui Guo, Department of Medical Oncology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, P.R. China
- Feng Xu, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China

33
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (i.e., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, is changing how we perform live imaging. Here we briefly cover important computational methods aiding live imaging and carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
Affiliation(s)
- Joanna W Pylvänäinen, Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
- Ricardo Henriques, Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
- Guillaume Jacquemet, Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland

34
Ibrahim KA, Grußmayer KS, Riguet N, Feletti L, Lashuel HA, Radenovic A. Label-free identification of protein aggregates using deep learning. Nat Commun 2023; 14:7816. [PMID: 38016971 PMCID: PMC10684545 DOI: 10.1038/s41467-023-43440-7]
Abstract
Protein misfolding and aggregation play central roles in the pathogenesis of various neurodegenerative diseases (NDDs), including Huntington's disease, which is caused by a genetic mutation in exon 1 of the Huntingtin protein (Httex1). The fluorescent labels commonly used to visualize and monitor the dynamics of protein expression have been shown to alter the biophysical properties of proteins and the final ultrastructure, composition, and toxic properties of the formed aggregates. To overcome this limitation, we present a method for label-free identification of NDD-associated aggregates (LINA). Our approach utilizes deep learning to detect unlabeled and unaltered Httex1 aggregates in living cells from transmitted-light images, without the need for fluorescent labeling. Our models are robust across imaging conditions and on aggregates formed by different constructs of Httex1. LINA enables the dynamic identification of label-free aggregates and measurement of their dry mass and area changes during their growth process, offering high speed, specificity, and simplicity to analyze protein aggregation dynamics and obtain high-fidelity information.
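For context, the dry-mass readout mentioned above rests on the standard quantitative-phase relation m = (λ / 2πα) ∫ φ dA, where α is the specific refractive increment (≈0.19 µm³/pg for protein). The sketch below applies that formula to a phase map; the constant and function name are assumptions for illustration, and LINA itself infers such quantities from transmitted-light images via its trained models.

```python
import numpy as np

# Specific refractive increment of protein, ~0.19 um^3/pg (assumed value)
ALPHA = 0.19

def dry_mass_pg(phase_rad, pixel_area_um2, wavelength_um=0.532):
    """Integrate a phase image (radians) over an aggregate's footprint to
    estimate dry mass in picograms: m = (lambda / (2*pi*alpha)) * sum(phi) * dA."""
    return wavelength_um / (2 * np.pi * ALPHA) * phase_rad.sum() * pixel_area_um2
```

Tracking this quantity and the segmented area frame by frame yields the growth curves of individual aggregates.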
Affiliation(s)
- Khalid A Ibrahim, Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Kristin S Grußmayer, Department of Bionanoscience and Kavli Institute of Nanoscience Delft, Delft University of Technology, Delft, Netherlands
- Nathan Riguet, Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Lely Feletti, Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Hilal A Lashuel, Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Aleksandra Radenovic, Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

35
Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations. Int J Mol Sci 2023; 24:16028. [PMID: 38003217 PMCID: PMC10670924 DOI: 10.3390/ijms242216028]
Abstract
The automatic detection of cells in microscopy image sequences is an important task in biomedical research. However, cells in routine microscopy images, which divide and differentiate continuously during acquisition, are notoriously difficult to detect because their appearance and number keep changing. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking. However, these approaches require large amounts of manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To alleviate such tiresome and labor-intensive costs, we propose a novel weakly supervised cell detection and tracking framework that trains a deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescence images for initial training on the induced pluripotent stem (iPS) cell dataset, which has rarely been studied for cell detection and tracking. During training, the incomplete initial labels are updated iteratively by combining detection and tracking results, yielding a more robust model. Our method was evaluated on two fields of view of the iPS cell dataset using the cell detection accuracy (DET) metric from the Cell Tracking Challenge (CTC) initiative, achieving DET scores of 0.862 and 0.924, respectively. The transferability of the developed model was tested on the public CTC dataset Fluo-N2DH-GOWT1, which contains two sequences with reference annotations. We randomly removed parts of the annotations in each labeled sequence to simulate incomplete initial annotations. After training on the two sequences with labels comprising only 10% of the cell markers, DET improved from 0.130 to 0.903 and from 0.116 to 0.877. When trained with labels comprising 60% of the cell markers, the model outperformed its fully supervised counterpart. This outcome indicates that the model's performance improved as the quality of the training labels increased.
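The iterative label-update step described above can be illustrated with a toy sketch (not the authors' implementation): incomplete seed markers are grown by promoting detections that are both high-confidence and consistent across consecutive frames. The function name, thresholds, and point-based representation here are illustrative assumptions.

```python
# Toy sketch of iterative pseudo-label growth for weakly supervised training.
# A detection is promoted to a label only if it is confident AND "tracked",
# i.e. a confident detection appears nearby in the previous frame.

def update_labels(seed_labels, detections_per_frame, conf_thresh=0.9, match_dist=2.0):
    """seed_labels: set of (x, y) cell markers from incomplete annotations.
    detections_per_frame: list of frames, each a list of (x, y, confidence)."""
    labels = set(seed_labels)
    for prev, curr in zip(detections_per_frame, detections_per_frame[1:]):
        for (x, y, c) in curr:
            if c < conf_thresh:
                continue
            # tracking consistency: a confident detection at roughly the same
            # position must exist in the previous frame
            tracked = any(
                c0 >= conf_thresh and (x - x0) ** 2 + (y - y0) ** 2 <= match_dist ** 2
                for (x0, y0, c0) in prev
            )
            if tracked:
                labels.add((round(x), round(y)))
    return labels

seeds = {(5, 5)}
frames = [
    [(5, 5, 0.99), (20, 20, 0.95), (40, 40, 0.5)],
    [(5, 6, 0.98), (20, 21, 0.96), (40, 40, 0.6)],
]
print(sorted(update_labels(seeds, frames)))  # the track near (20, 20) is promoted; the low-confidence (40, 40) one is not
```

In the real framework the detector is retrained on the updated labels and the loop repeats, so label quality and model robustness improve together.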
Affiliation(s)
- Hao Wu: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jovial Niyogisubizo: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Keliang Zhao: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jintao Meng: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenhui Xi: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hongchang Li: Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yi Pan: College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yanjie Wei: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
36
Yilmaz A, Aydin T, Varol R. Virtual staining for pixel-wise and quantitative analysis of single cell images. Sci Rep 2023; 13:19178. [PMID: 37932315] [PMCID: PMC10628122] [DOI: 10.1038/s41598-023-45150-y]
Abstract
Immunocytochemical staining of microorganisms and cells has long been a popular method for examining their specific subcellular structures in greater detail. Recently, generative networks have emerged as an alternative to traditional immunostaining techniques. These networks infer fluorescence signatures from various imaging modalities and then virtually apply staining to the images in a digital environment. In numerous studies, virtual staining models have been trained on histopathology slides or intricate subcellular structures to enhance their accuracy and applicability. Despite the advancements in virtual staining technology, utilizing this method for quantitative analysis of microscopic images still poses a significant challenge. To address this issue, we propose a straightforward and automated approach for pixel-wise image-to-image translation. Our primary objective in this research is to leverage advanced virtual staining techniques to accurately measure the DNA fragmentation index in unstained sperm images. This not only offers a non-invasive approach to gauging sperm quality, but also paves the way for streamlined and efficient analyses without the constraints and potential biases introduced by traditional staining processes. This novel approach takes into account the limitations of conventional techniques and incorporates improvements to bolster the reliability of the virtual staining process. To further refine the results, we discuss various denoising techniques that can be employed to reduce the impact of background noise on the digital images. Additionally, we present a pixel-wise image matching algorithm designed to minimize the error caused by background noise and to prevent the introduction of bias into the analysis. By combining these approaches, we aim to develop a more effective and reliable method for quantitative analysis of virtually stained microscopic images, ultimately enhancing the study of microorganisms and cells at the subcellular level.
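As a hedged illustration of pixel-wise quantification on a virtually stained image (the function, background model, and threshold are hypothetical, not the authors' pipeline), one can subtract an estimated background level and report the fraction of pixels whose corrected intensity exceeds a positivity cutoff:

```python
# Toy pixel-wise quantification sketch: after background subtraction,
# report the fraction of "positive" pixels, a simple proxy for indices
# such as a DNA fragmentation score computed from staining signal.

def pixelwise_positive_fraction(image, background, threshold):
    """image: 2D list of intensities; background: scalar background estimate;
    threshold: positivity cutoff applied to background-corrected values."""
    total, positive = 0, 0
    for row in image:
        for v in row:
            total += 1
            if v - background > threshold:
                positive += 1
    return positive / total

img = [
    [10, 10, 80],
    [10, 90, 85],
    [10, 10, 10],
]
print(pixelwise_positive_fraction(img, background=10, threshold=50))  # 3 of 9 pixels exceed the cutoff
```

A real pipeline would replace the scalar background with the denoising and pixel-matching steps the paper describes, but the final readout reduces to a per-pixel ratio like this one.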
Affiliation(s)
- Abdurrahim Yilmaz: Universität der Bundeswehr München, 85579, Neubiberg, Germany; Imperial College London, London, SW7 2BX, United Kingdom
- Tuelay Aydin: Universität der Bundeswehr München, 85579, Neubiberg, Germany
37
Alieva M, Wezenaar AKL, Wehrens EJ, Rios AC. Bridging live-cell imaging and next-generation cancer treatment. Nat Rev Cancer 2023; 23:731-745. [PMID: 37704740] [DOI: 10.1038/s41568-023-00610-5]
Abstract
By providing spatial, molecular and morphological data over time, live-cell imaging can provide a deeper understanding of the cellular and signalling events that determine cancer response to treatment. Understanding this dynamic response has the potential to enhance clinical outcome by identifying biomarkers or actionable targets to improve therapeutic efficacy. Here, we review recent applications of live-cell imaging for uncovering both tumour heterogeneity in treatment response and the mode of action of cancer-targeting drugs. Given the increasing uses of T cell therapies, we discuss the unique opportunity of time-lapse imaging for capturing the interactivity and motility of immunotherapies. Although traditionally limited in the number of molecular features captured, novel developments in multidimensional imaging and multi-omics data integration offer strategies to connect single-cell dynamics to molecular phenotypes. We review the effect of these recent technological advances on our understanding of the cellular dynamics of tumour targeting and discuss their implication for next-generation precision medicine.
Affiliation(s)
- Maria Alieva: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Instituto de Investigaciones Biomedicas Sols-Morreale (IIBM), CSIC-UAM, Madrid, Spain
- Amber K L Wezenaar: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Oncode Institute, Utrecht, The Netherlands
- Ellen J Wehrens: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Oncode Institute, Utrecht, The Netherlands
- Anne C Rios: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Oncode Institute, Utrecht, The Netherlands
38
Timonen VA, Kerkelä E, Impola U, Penna L, Partanen J, Kilpivaara O, Arvas M, Pitkänen E. DeepIFC: Virtual fluorescent labeling of blood cells in imaging flow cytometry data with deep learning. Cytometry A 2023; 103:807-817. [PMID: 37276178] [DOI: 10.1002/cyto.a.24770]
Abstract
Imaging flow cytometry (IFC) combines flow cytometry with microscopy, allowing rapid characterization of cellular and molecular properties via high-throughput single-cell fluorescent imaging. However, fluorescent labeling is costly and time-consuming. We present a computational method called DeepIFC based on the Inception U-Net neural network architecture, able to generate fluorescent marker images and learn morphological features from IFC brightfield and darkfield images. Furthermore, the DeepIFC workflow identifies cell types from the generated fluorescent images and visualizes the single-cell features generated in a 2D space. We demonstrate that rarer cell types are predicted well when a balanced data set is used to train the model, and the model is able to recognize red blood cells not seen during model training as a distinct entity. In summary, DeepIFC allows accurate cell reconstruction, typing and recognition of unseen cell types from brightfield and darkfield images via virtual fluorescent labeling.
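To illustrate how cell types might be identified from generated fluorescent images, here is a minimal threshold-gating sketch; the marker names (CD3, CD19), cutoffs, and type rules are hypothetical examples and not DeepIFC's actual typing logic, which operates on the generated images themselves.

```python
# Hypothetical gating sketch: once virtual marker intensities have been
# generated per cell, assign a cell type by thresholding those intensities.

def gate_cell(markers, thresholds):
    """markers: dict marker -> predicted mean intensity for one cell.
    thresholds: dict marker -> positivity cutoff.
    Returns the set of markers the cell is positive for."""
    return {m for m, v in markers.items() if v >= thresholds.get(m, float("inf"))}

def assign_type(markers, thresholds):
    # Example rules only: CD3+ -> T cell, CD19+ -> B cell, else "other".
    pos = gate_cell(markers, thresholds)
    if "CD3" in pos:
        return "T cell"
    if "CD19" in pos:
        return "B cell"
    return "other"

th = {"CD3": 0.5, "CD19": 0.5}
print(assign_type({"CD3": 0.8, "CD19": 0.1}, th))  # T cell
```

The appeal of virtual labeling is that such gating can be run on label-free brightfield/darkfield acquisitions, with the network supplying the marker channels.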
Affiliation(s)
- Veera A Timonen: Institute for Molecular Medicine Finland (FIMM), Helsinki Institute of Life Science (HiLIFE), University of Helsinki, Helsinki, Finland; Applied Tumor Genomics Research Program, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Erja Kerkelä: Advanced Cell Therapy Centre, Finnish Red Cross Blood Service, Vantaa, Finland
- Ulla Impola: Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
- Leena Penna: Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
- Jukka Partanen: Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
- Outi Kilpivaara: Applied Tumor Genomics Research Program, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Medical and Clinical Genetics, Medicum, Faculty of Medicine, University of Helsinki, Helsinki, Finland; HUSLAB Laboratory of Genetics, HUS Diagnostic Center, Helsinki University Hospital, Helsinki, Finland; iCAN Digital Precision Cancer Medicine Flagship, Helsinki, Finland
- Mikko Arvas: Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
- Esa Pitkänen: Institute for Molecular Medicine Finland (FIMM), Helsinki Institute of Life Science (HiLIFE), University of Helsinki, Helsinki, Finland; Applied Tumor Genomics Research Program, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; iCAN Digital Precision Cancer Medicine Flagship, Helsinki, Finland
39
Feng X, Yu Z, Fang H, Jiang H, Yang G, Chen L, Zhou X, Hu B, Qin C, Hu G, Xing G, Zhao B, Shi Y, Guo J, Liu F, Han B, Zechmann B, He Y, Liu F. Plantorganelle Hunter is an effective deep-learning-based method for plant organelle phenotyping in electron microscopy. Nat Plants 2023; 9:1760-1775. [PMID: 37749240] [DOI: 10.1038/s41477-023-01527-5]
Abstract
Accurate delineation of plant cell organelles from electron microscope images is essential for understanding subcellular behaviour and function. Here we develop a deep-learning pipeline, called the organelle segmentation network (OrgSegNet), for pixel-wise segmentation to identify chloroplasts, mitochondria, nuclei and vacuoles. OrgSegNet was evaluated on a large manually annotated dataset collected from 19 plant species and achieved state-of-the-art segmentation performance. We defined three digital traits (shape complexity, electron density and cross-sectional area) to track the quantitative features of individual organelles in 2D images and released an open-source web tool called Plantorganelle Hunter for quantitatively profiling subcellular morphology. In addition, the automatic segmentation method was successfully applied to a serial-sectioning scanning microscope technique to create a 3D cell model that offers unique views of the morphology and distribution of these organelles. Plantorganelle Hunter is simple to operate, which should increase efficiency and productivity in the plant science community and deepen understanding of subcellular biology.
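Of the three digital traits, cross-sectional area is the simplest to illustrate. A minimal sketch (assuming a binary segmentation mask and a known pixel size; this is not OrgSegNet's code) is:

```python
# Cross-sectional area from a per-organelle binary mask: count segmented
# pixels and scale by the physical area of one pixel.

def cross_sectional_area(mask, pixel_area):
    """mask: 2D list of 0/1 segmentation labels for one organelle.
    pixel_area: physical area covered by one pixel (e.g. in um^2)."""
    return sum(sum(row) for row in mask) * pixel_area

mask = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
print(cross_sectional_area(mask, pixel_area=0.25))  # 8 pixels * 0.25
```

Shape complexity and electron density would similarly reduce to functions of the mask (perimeter-to-area statistics) and of the grey values under the mask, respectively.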
Affiliation(s)
- Xuping Feng: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China; College of Life Sciences, Nanjing Agricultural University, Nanjing, China; The Rural Development Academy & Agricultural Experiment Station, Zhejiang University, Huzhou, China
- Zeyu Yu: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China; The Rural Development Academy & Agricultural Experiment Station, Zhejiang University, Huzhou, China
- Hui Fang: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China; Huzhou Institute of Zhejiang University, Hangzhou, China
- Hangjin Jiang: Center for Data Science, Zhejiang University, Hangzhou, China
- Guofeng Yang: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China; The Rural Development Academy & Agricultural Experiment Station, Zhejiang University, Huzhou, China
- Liting Chen: College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Xinran Zhou: College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Bing Hu: College of Life Sciences, Nanjing Agricultural University, Nanjing, China; Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Chun Qin: College of Life Sciences, Nanjing Agricultural University, Nanjing, China; Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Gang Hu: College of Life Sciences, Nanjing Agricultural University, Nanjing, China; Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Guipei Xing: College of Life Sciences, Nanjing Agricultural University, Nanjing, China; Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Boxi Zhao: College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Yongqiang Shi: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Jiansheng Guo: Center of Cryo-Electron Microscopy, Zhejiang University School of Medicine, Hangzhou, China
- Feng Liu: School of Mathematics and Statistics, University of Melbourne, Parkville, Australia
- Bo Han: Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
- Bernd Zechmann: Center for Microscopy and Imaging, Baylor University, Waco, TX, USA
- Yong He: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Feng Liu: College of Life Sciences, Nanjing Agricultural University, Nanjing, China
40
Huang K, Li Q, Xue Y, Wang Q, Chen Z, Gu Z. Application of colloidal photonic crystals in study of organoids. Adv Drug Deliv Rev 2023; 201:115075. [PMID: 37625595] [DOI: 10.1016/j.addr.2023.115075]
Abstract
Compared with 2D cell lines and patient-derived xenografts, organoids are alternative disease models with greater in vivo physiological relevance. However, both endogenous and exogenous limitations impede the development and clinical translation of organoids. Fortunately, colloidal photonic crystals (PCs), which benefit from favorable biocompatibility, brilliant optical manipulation, and facile chemical decoration, have been applied to the engineering of organoids and have achieved the desirable recapitulation of the ECM niche, well-defined geometrical onsets for initial culture, in situ multiphysiological parameter monitoring, single-cell biomechanical sensing, and high-throughput drug screening with versatile functional readouts. Herein, we review the latest progress in engineering organoids fabricated from colloidal PCs and provide inputs for future research.
Affiliation(s)
- Kai Huang: State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Qiwei Li: State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yufei Xue: State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Qiong Wang: State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Zaozao Chen: State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China; Institute of Biomaterials and Medical Devices, Southeast University, Suzhou, Jiangsu 215163, China
- Zhongze Gu: State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
41
Butler DJ, Keim AP, Ray S, Azim E. Large-scale capture of hidden fluorescent labels for training generalizable markerless motion capture models. Nat Commun 2023; 14:5866. [PMID: 37752123] [PMCID: PMC10522643] [DOI: 10.1038/s41467-023-41565-3]
Abstract
Deep learning-based markerless tracking has revolutionized studies of animal behavior. Yet the generalizability of trained models tends to be limited, as new training data typically needs to be generated manually for each setup or visual environment. With each model trained from scratch, researchers track distinct landmarks and analyze the resulting kinematic data in idiosyncratic ways. Moreover, due to inherent limitations in manual annotation, only a sparse set of landmarks are typically labeled. To address these issues, we developed an approach, which we term GlowTrack, for generating orders of magnitude more training data, enabling models that generalize across experimental contexts. We describe: a) a high-throughput approach for producing hidden labels using fluorescent markers; b) a multi-camera, multi-light setup for simulating diverse visual conditions; and c) a technique for labeling many landmarks in parallel, enabling dense tracking. These advances lay a foundation for standardized behavioral pipelines and more complete scrutiny of movement.
Affiliation(s)
- Daniel J Butler: Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA
- Alexander P Keim: Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA
- Shantanu Ray: Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA
- Eiman Azim: Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA
42
Johnson GT, Agmon E, Akamatsu M, Lundberg E, Lyons B, Ouyang W, Quintero-Carmona OA, Riel-Mehan M, Rafelski S, Horwitz R. Building the next generation of virtual cells to understand cellular biology. Biophys J 2023; 122:3560-3569. [PMID: 37050874] [PMCID: PMC10541477] [DOI: 10.1016/j.bpj.2023.04.006]
Abstract
Cell science has made significant progress by focusing on understanding individual cellular processes through reductionist approaches. However, the sheer volume of knowledge collected presents challenges in integrating this information across different scales of space and time to comprehend cellular behaviors, as well as making the data and methods more accessible for the community to tackle complex biological questions. This perspective proposes the creation of next-generation virtual cells, which are dynamic 3D models that integrate information from diverse sources, including simulations, biophysical models, image-based models, and evidence-based knowledge graphs. These virtual cells would provide statistically accurate and holistic views of real cells, bridging the gap between theoretical concepts and experimental data, and facilitating productive new collaborations among researchers across related fields.
Affiliation(s)
- Eran Agmon: Center for Cell Analysis and Modeling, University of Connecticut Health, Farmington, Connecticut
- Matthew Akamatsu: Department of Biology, University of Washington, Seattle, Washington
- Emma Lundberg: Department of Applied Physics, Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden; Department of Bioengineering, Stanford University, Stanford, California; Department of Pathology, Stanford University, Stanford, California; Chan Zuckerberg Biohub, San Francisco, California
- Blair Lyons: Allen Institute for Cell Science, Seattle, Washington
- Wei Ouyang: Department of Applied Physics, Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
- Rick Horwitz: Allen Institute for Cell Science, Seattle, Washington
43
Strawbridge SE, Kurowski A, Corujo-Simon E, Fletcher AN, Nichols J, Fletcher AG. insideOutside: an accessible algorithm for classifying interior and exterior points, with applications in embryology. Biol Open 2023; 12:bio060055. [PMID: 37623821] [PMCID: PMC10461464] [DOI: 10.1242/bio.060055]
Abstract
A crucial aspect of embryology is relating the position of individual cells to the broader geometry of the embryo. A classic example of this is the first cell-fate decision of the mouse embryo, where interior cells become inner cell mass and exterior cells become trophectoderm. Fluorescent labelling, imaging, and quantification of tissue-specific proteins have advanced our understanding of this dynamic process. However, instances arise where these markers are either not available or not reliable, and we are left only with the cells' spatial locations. Therefore, a simple, robust method for classifying interior and exterior cells of an embryo using spatial information is required. Here, we describe a simple mathematical framework and an unsupervised machine learning approach, termed insideOutside, for classifying interior and exterior points of a three-dimensional point cloud, a common output from imaged cells within the early mouse embryo. We benchmark our method against other published methods to demonstrate that it yields greater accuracy in classifying nuclei from pre-implantation mouse embryos and greater accuracy when challenged with local surface concavities. We have made MATLAB and Python implementations of the method freely available. This method should prove useful for embryology, with broader applications to similar data arising in the life sciences.
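As a crude stand-in for the idea of classifying interior versus exterior points (this is not the insideOutside algorithm itself, which is designed to be far more robust), one can threshold each point's distance from the centroid against a fraction of the cloud's maximum radius:

```python
# Naive interior/exterior classification of a 3D point cloud by radial
# distance from the centroid. Fails on concave shapes, which is exactly
# the case insideOutside is built to handle.
import math

def classify_points(points, frac=0.7):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    radii = [math.dist(p, (cx, cy, cz)) for p in points]
    cutoff = frac * max(radii)
    return ["exterior" if r > cutoff else "interior" for r in radii]

# A centre point surrounded by six surface points:
pts = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(classify_points(pts))  # centre point interior, the six surface points exterior
```

The benchmark in the paper shows why such radial heuristics break down on local surface concavities, motivating the nearest-neighbour-based statistics insideOutside uses instead.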
Affiliation(s)
- Stanley E. Strawbridge: Wellcome-MRC Cambridge Stem Cell Institute, University of Cambridge, Cambridge, UK; Department of Physiology, Neuroscience and Development, University of Cambridge, Cambridge, UK
- Agata Kurowski: Department of Pharmacological Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Elena Corujo-Simon: Wellcome-MRC Cambridge Stem Cell Institute, University of Cambridge, Cambridge, UK; Department of Physiology, Neuroscience and Development, University of Cambridge, Cambridge, UK; MRC Human Genetics Unit, University of Edinburgh, Edinburgh, UK
- Alastair N. Fletcher: Department of Mathematical Sciences, Northern Illinois University, DeKalb, IL, USA
- Jennifer Nichols: Wellcome-MRC Cambridge Stem Cell Institute, University of Cambridge, Cambridge, UK; Department of Physiology, Neuroscience and Development, University of Cambridge, Cambridge, UK; MRC Human Genetics Unit, University of Edinburgh, Edinburgh, UK; Centre for Trophoblast Research, University of Cambridge, Cambridge, UK
- Alexander G. Fletcher: School of Mathematics and Statistics, University of Sheffield, Sheffield, UK; The Bateson Centre, University of Sheffield, Sheffield, UK
44
Garcia Valencia OA, Thongprayoon C, Jadlowiec CC, Mao SA, Miao J, Cheungpasitporn W. Enhancing Kidney Transplant Care through the Integration of Chatbot. Healthcare (Basel) 2023; 11:2518. [PMID: 37761715] [PMCID: PMC10530762] [DOI: 10.3390/healthcare11182518]
Abstract
Kidney transplantation is a critical treatment option for end-stage kidney disease patients, offering improved quality of life and increased survival rates. However, the complexities of kidney transplant care necessitate continuous advancements in decision making, patient communication, and operational efficiency. This article explores the potential integration of a sophisticated chatbot, an AI-powered conversational agent, to enhance kidney transplant practice and potentially improve patient outcomes. Chatbots and generative AI have shown promising applications in various domains, including healthcare, by simulating human-like interactions and generating contextually appropriate responses. Noteworthy AI models like ChatGPT by OpenAI, BingChat by Microsoft, and Bard AI by Google exhibit significant potential in supporting evidence-based research and healthcare decision making. The integration of chatbots in kidney transplant care may offer transformative possibilities. As a clinical decision support tool, it could provide healthcare professionals with real-time access to medical literature and guidelines, potentially enabling informed decision making and improved knowledge dissemination. Additionally, the chatbot has the potential to facilitate patient education by offering personalized and understandable information, addressing queries, and providing guidance on post-transplant care. Furthermore, under clinician or transplant pharmacist supervision, it has the potential to support post-transplant care and medication management by analyzing patient data, which may lead to tailored recommendations on dosages, monitoring schedules, and potential drug interactions. However, to fully ascertain its effectiveness and safety in these roles, further studies and validation are required. 
Its integration with existing clinical decision support systems may enhance risk stratification and treatment planning, contributing to more informed and efficient decision making in kidney transplant care. Given the importance of ethical considerations and bias mitigation in AI integration, future studies may evaluate long-term patient outcomes, cost-effectiveness, user experience, and the generalizability of chatbot recommendations. By addressing these factors and potentially leveraging AI capabilities, the integration of chatbots in kidney transplant care holds promise for potentially improving patient outcomes, enhancing decision making, and fostering the equitable and responsible use of AI in healthcare.
Affiliation(s)
- Oscar A. Garcia Valencia: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Caroline C. Jadlowiec: Division of Transplant Surgery, Department of Surgery, Mayo Clinic, Phoenix, AZ 85054, USA
- Shennen A. Mao: Division of Transplant Surgery, Department of Transplantation, Mayo Clinic, Jacksonville, FL 32224, USA
- Jing Miao: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
45
Lv S, Wang X, Wang G, Yang W, Cheng K. Efficient evaluation of photodynamic therapy on tumor based on deep learning. Photodiagnosis Photodyn Ther 2023; 43:103658. [PMID: 37339692] [DOI: 10.1016/j.pdpdt.2023.103658]
Abstract
Photodynamic therapy (PDT) is a non-invasive method for treating tumors. Under laser irradiation, photosensitizers in tumor tissue generate biotoxic reactive oxygen species that kill tumor cells. The traditional live/dead staining method for evaluating the cell mortality caused by PDT relies on manual counting, which is time-consuming and depends on dye quality. In this paper, we construct a dataset of cells after PDT treatment and train the cell detection model YOLOv3, a real-time object detection algorithm, to count both dead and live cells. The results demonstrate that the proposed method performs well in cell detection, with a mean average precision (mAP) of 94% for live cells and 71.3% for dead cells. This approach can efficiently evaluate the effectiveness of PDT treatment, thus speeding up treatment development.
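The reported mAP is the mean over classes (live, dead) of a per-class average precision (AP). A minimal sketch of AP for one class, assuming detections have already been matched to ground truth (the IoU-matching step is omitted here), is:

```python
# Average precision for one detection class: rank detections by confidence,
# walk down the ranking accumulating true positives, and average the
# precision observed at each true positive over the ground-truth count.

def average_precision(detections, n_ground_truth):
    """detections: list of (confidence, is_true_positive) pairs.
    n_ground_truth: number of annotated objects of this class."""
    ranked = sorted(detections, key=lambda d: -d[0])
    tp, precisions = 0, []
    for i, (_, is_tp) in enumerate(ranked, start=1):
        if is_tp:
            tp += 1
            precisions.append(tp / i)  # precision at this recall level
    return sum(precisions) / n_ground_truth if n_ground_truth else 0.0

dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, True)]
print(average_precision(dets, n_ground_truth=3))
```

Averaging this quantity over the live and dead classes gives the mAP figure quoted in the abstract.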
Affiliation(s)
- Shuangshuang Lv: College of Electronic Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Haidian Dist, Beijing 100876, China
- Xiaohui Wang: College of Electronic Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Haidian Dist, Beijing 100876, China
- Guisheng Wang: Department of Radiology, the Third Medical Centre, Chinese PLA General Hospital, No. 69, Yongding Road, Haidian Dist, Beijing 100039, China
- Wei Yang: College of Electronic Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Haidian Dist, Beijing 100876, China
- Kun Cheng: College of Electronic Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Haidian Dist, Beijing 100876, China
46
Piansaddhayanon C, Koracharkornradt C, Laosaengpha N, Tao Q, Ingrungruanglert P, Israsena N, Chuangsuwanich E, Sriswasdi S. Label-free tumor cells classification using deep learning and high-content imaging. Sci Data 2023; 10:570. [PMID: 37634014] [PMCID: PMC10460430] [DOI: 10.1038/s41597-023-02482-8]
Abstract
Many studies have shown that cellular morphology can be used to distinguish spiked-in tumor cells against a blood-sample background. However, most validation experiments included only homogeneous cell lines and inadequately captured the broad morphological heterogeneity of cancer cells. Furthermore, normal, non-blood cells could be erroneously classified as cancer because their morphology differs from that of blood cells. Here, we constructed a dataset of microscopic images of organoid-derived cancer and normal cells with diverse morphology and developed a proof-of-concept deep learning model that can distinguish cancer cells from normal cells within an unlabeled microscopy image. In total, more than 75,000 organoid-derived cells from 3 cholangiocarcinoma patients were collected. The model achieved an area under the receiver operating characteristic curve (AUROC) of 0.78 and can generalize to cell images from an unseen patient. These resources serve as a foundation for an automated, robust platform for circulating tumor cell detection.
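The reported AUROC of 0.78 is the probability that a randomly chosen cancer cell is scored above a randomly chosen normal cell. A minimal sketch of that rank-based computation (the labels and scores in the test are illustrative, not the paper's data):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity.

    labels: 1 = cancer, 0 = normal; scores: classifier outputs.
    AUROC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The pairwise loop is O(n²); production code would use a rank-based O(n log n) variant (e.g. scikit-learn's `roc_auc_score`), but the value computed is the same.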
Affiliation(s)
- Chawan Piansaddhayanon
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Chula Intelligent and Complex Systems, Faculty of Science, Chulalongkorn University, Bangkok, 10330, Thailand
- Chonnuttida Koracharkornradt
- Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Napat Laosaengpha
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Qingyi Tao
- NVIDIA AI Technology Center, Singapore, Singapore
- Praewphan Ingrungruanglert
- Center of Excellence for Stem Cell and Cell Therapy, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Nipan Israsena
- Center of Excellence for Stem Cell and Cell Therapy, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Department of Pharmacology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Ekapol Chuangsuwanich
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Sira Sriswasdi
- Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Center for Artificial Intelligence in Medicine, Research Affairs, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
47
Yu M, Shi H, Shen H, Chen X, Zhang L, Zhu J, Qian G, Feng B, Yu S. Simple and Rapid Discrimination of Methicillin-Resistant Staphylococcus aureus Based on Gram Staining and Machine Vision. Microbiol Spectr 2023; 11:e0528222. [PMID: 37395643 PMCID: PMC10433844 DOI: 10.1128/spectrum.05282-22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Accepted: 05/24/2023] [Indexed: 07/04/2023] Open
Abstract
Methicillin-resistant Staphylococcus aureus (MRSA) is a clinical threat with high morbidity and mortality. Here, we describe a new simple, rapid identification method for MRSA using oxacillin sodium salt, a cell wall synthesis inhibitor, combined with Gram staining and machine vision (MV) analysis. Gram staining classifies bacteria as positive (purple) or negative (pink) according to cell wall structure and chemical composition. In the presence of oxacillin, the cell wall of methicillin-susceptible S. aureus (MSSA) was destroyed immediately, and the cells appeared Gram negative. In contrast, MRSA was relatively stable and appeared Gram positive. This color change can be detected by MV. The feasibility of this method was demonstrated on 150 images of the staining results for 50 clinical S. aureus strains. Based on effective feature extraction and machine learning, the accuracies of the linear discriminant analysis (LDA) model and the nonlinear artificial neural network (ANN) model for MRSA identification were 96.7% and 97.3%, respectively. Combined with MV analysis, this simple strategy improved detection efficiency and significantly shortened the time needed to detect antibiotic resistance. The whole process can be completed within 1 h. Unlike the traditional antibiotic susceptibility test, overnight incubation is avoided. This new strategy could be extended to other bacteria and represents a new rapid method for detection of clinical antibiotic resistance. IMPORTANCE Oxacillin sodium salt destroys the integrity of the cell wall of MSSA immediately, so it appears Gram negative, whereas MRSA is relatively stable and still appears Gram positive. This color change can be detected by microscopic examination and MV analysis. This new strategy significantly reduces the time to detect resistance. The results show that oxacillin sodium salt combined with Gram staining and MV analysis is a new, simple, and rapid method for identification of MRSA.
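For one color feature and two classes, the linear-discriminant idea behind the paper's LDA classifier reduces, under equal-variance and equal-prior assumptions, to a threshold midway between the class means. A toy sketch (the feature values are invented, and the paper's model uses richer extracted features than a single hue):

```python
import statistics

def fit_lda_1d(x_mrsa, x_mssa):
    """Fit a 1-D two-class discriminant on one color feature.

    Assumes equal class variances and priors, in which case the decision
    boundary is the midpoint of the class means. Returns (threshold,
    mrsa_is_high), where mrsa_is_high says which side MRSA falls on.
    """
    m_mrsa = statistics.mean(x_mrsa)  # e.g. mean "purpleness" of MRSA cells
    m_mssa = statistics.mean(x_mssa)  # e.g. mean "purpleness" of MSSA cells
    return (m_mrsa + m_mssa) / 2, m_mrsa > m_mssa

def predict(threshold, mrsa_is_high, x):
    """Classify a new feature value against the fitted threshold."""
    return "MRSA" if (x > threshold) == mrsa_is_high else "MSSA"
```

With the full multi-feature data, the same construction generalizes to Fisher's LDA (a projection direction plus a threshold), which is what libraries like scikit-learn implement.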
Affiliation(s)
- Menghuan Yu
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Haimei Shi
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Hao Shen
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Xueqin Chen
- Department of Intensive Care Unit, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, China
- Li Zhang
- Department of Clinical Lab, Peking Union Medical College Hospital, Peking Union Medical College & Chinese Academy Medical Science, Beijing, China
- Jianhua Zhu
- Department of Intensive Care Unit, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, China
- Guoqing Qian
- Department of Intensive Care Unit, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, China
- Bin Feng
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Shaoning Yu
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
48
D’Sa K, Evans JR, Virdi GS, Vecchi G, Adam A, Bertolli O, Fleming J, Chang H, Leighton C, Horrocks MH, Athauda D, Choi ML, Gandhi S. Prediction of mechanistic subtypes of Parkinson's using patient-derived stem cell models. NAT MACH INTELL 2023; 5:933-946. [PMID: 37615030 PMCID: PMC10442231 DOI: 10.1038/s42256-023-00702-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Accepted: 07/06/2023] [Indexed: 08/25/2023]
Abstract
Parkinson's disease is a common, incurable neurodegenerative disorder that is clinically heterogeneous: it is likely that different cellular mechanisms drive the pathology in different individuals. So far, it has not been possible to define the cellular mechanism underlying the neurodegeneration in a living patient. We generated a machine learning-based model that can simultaneously predict the presence of disease and its primary mechanistic subtype in human neurons. We used stem cell technology to derive control or patient-derived neurons, and generated different disease subtypes through chemical induction or the presence of a mutation. Multidimensional fluorescent labelling of organelles was performed in healthy control neurons and in four different disease subtypes, and both the quantitative single-cell fluorescence features and the images were used to independently train a series of classifiers, including deep neural networks. Quantitative cellular profile-based classifiers achieve an accuracy of 82%, whereas image-based deep neural networks predict control and four distinct disease subtypes with an accuracy of 95%. The machine learning-trained classifiers achieve their accuracy across all subtypes, drawing chiefly on the organellar features of the mitochondria with an additional contribution from the lysosomes, confirming the biological importance of these pathways in Parkinson's. Altogether, we show that machine learning approaches applied to patient-derived cells are highly accurate at predicting disease subtypes, providing proof of concept that this approach may enable mechanistic stratification and precision medicine approaches in the future.
Affiliation(s)
- Karishma D’Sa
- Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK
- The Francis Crick Institute, King’s Cross, London, UK
- James R. Evans
- Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK
- The Francis Crick Institute, King’s Cross, London, UK
- Gurvir S. Virdi
- Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK
- The Francis Crick Institute, King’s Cross, London, UK
- James Fleming
- The Francis Crick Institute, King’s Cross, London, UK
- Hojong Chang
- Institute for IT Convergence, KAIST, Daejeon, Republic of Korea
- Craig Leighton
- EaStCHEM School of Chemistry, The University of Edinburgh, Edinburgh, UK
- IRR Chemistry Hub, Institute for Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Mathew H. Horrocks
- EaStCHEM School of Chemistry, The University of Edinburgh, Edinburgh, UK
- IRR Chemistry Hub, Institute for Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Dilan Athauda
- Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK
- The Francis Crick Institute, King’s Cross, London, UK
- Minee L. Choi
- Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK
- The Francis Crick Institute, King’s Cross, London, UK
- Department of Brain & Cognitive Sciences, KAIST, Daejeon, Republic of Korea
- Sonia Gandhi
- Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK
- The Francis Crick Institute, King’s Cross, London, UK
49
Liu X, Li B, Liu C, Ta D. Virtual Fluorescence Translation for Biological Tissue by Conditional Generative Adversarial Network. PHENOMICS (CHAM, SWITZERLAND) 2023; 3:408-420. [PMID: 37589024 PMCID: PMC10425324 DOI: 10.1007/s43657-023-00094-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 01/03/2023] [Accepted: 01/05/2023] [Indexed: 08/18/2023]
Abstract
Fluorescence labeling and imaging provide an opportunity to observe the structure of biological tissues, playing a crucial role in the field of histopathology. However, labeling and imaging biological tissues still pose challenges, e.g., time-consuming tissue preparation steps, expensive reagents, and signal bias due to photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, achieved by a conditional generative adversarial network (cGAN). Experimental results from mouse kidney tissues demonstrate that the proposed method can predict other types of fluorescence images from one raw fluorescence image and achieve virtual multi-label fluorescent staining by merging the generated fluorescence images. Moreover, the proposed method reduces the time-consuming and laborious preparation in the imaging process, further saving cost and time. Supplementary Information The online version contains supplementary material available at 10.1007/s43657-023-00094-1.
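The abstract does not state the training objective; a common pix2pix-style formulation for conditional-GAN image-to-image translation, given here only as a representative example, combines an adversarial term with an L1 reconstruction term (here $x$ is the input fluorescence channel and $y$ the target channel):

```latex
\mathcal{L}_{\mathrm{cGAN}}(G, D) =
  \mathbb{E}_{x,y}\!\left[\log D(x, y)\right] +
  \mathbb{E}_{x}\!\left[\log\!\left(1 - D\!\left(x, G(x)\right)\right)\right],
\qquad
G^{*} = \arg\min_{G}\max_{D}\;
  \mathcal{L}_{\mathrm{cGAN}}(G, D) +
  \lambda\,\mathbb{E}_{x,y}\!\left[\lVert y - G(x)\rVert_{1}\right]
```

The L1 term keeps the generated channel pixel-wise close to the ground truth, while the adversarial term pushes it toward the distribution of real stained images; the paper's exact loss may differ.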
Affiliation(s)
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
- State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, 200433 China
- Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
- Chengcheng Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
- Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China
- Center for Biomedical Engineering, Fudan University, Shanghai, 200433 China
50
Atwell S, Waibel DJE, Boushehri SS, Wiedenmann S, Marr C, Meier M. Label-free imaging of 3D pluripotent stem cell differentiation dynamics on chip. CELL REPORTS METHODS 2023; 3:100523. [PMID: 37533640 PMCID: PMC10391578 DOI: 10.1016/j.crmeth.2023.100523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Revised: 05/09/2023] [Accepted: 06/15/2023] [Indexed: 08/04/2023]
Abstract
Massive, parallelized 3D stem cell cultures for engineering in vitro human cell types require imaging methods with high temporal and spatial resolution to fully exploit technological advances in cell culture. Here, we introduce a large-scale integrated microfluidic chip platform for automated 3D stem cell differentiation. To fully enable dynamic high-content imaging on the chip platform, we developed a label-free deep learning method called Bright2Nuc to predict in silico nuclear staining in 3D from confocal microscopy bright-field images. Bright2Nuc was trained and applied to hundreds of 3D human induced pluripotent stem cell cultures differentiating toward definitive endoderm on a microfluidic platform. Combined with existing image analysis tools, Bright2Nuc segmented individual nuclei from bright-field images, quantified their morphological properties, predicted stem cell differentiation state, and tracked the cells over time. Our methods are available in an open-source pipeline, enabling researchers to upscale image acquisition and phenotyping of 3D cell culture.
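Once nuclei are segmented from the predicted staining, quantifying "morphological properties" per nucleus amounts to simple per-mask measurements. A toy sketch on a hand-made 2D binary mask (Bright2Nuc's actual pipeline lives in its open-source release and works in 3D; the function below is only illustrative):

```python
def nucleus_properties(mask):
    """Basic morphology of one segmented nucleus.

    mask: 2D list of 0/1 values marking nucleus pixels.
    Returns area (pixel count), centroid (row, col), and bounding box
    (min_row, min_col, max_row, max_col).
    """
    pixels = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    area = len(pixels)
    centroid = (sum(r for r, _ in pixels) / area,
                sum(c for _, c in pixels) / area)
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    bbox = (min(rows), min(cols), max(rows), max(cols))
    return {"area": area, "centroid": centroid, "bbox": bbox}
```

Tracking over time can then be done by linking centroids between frames (e.g. nearest-neighbour matching), and the per-nucleus features feed the downstream differentiation-state classifier.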
Affiliation(s)
- Scott Atwell
- Helmholtz Pioneer Campus, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- Dominik Jens Elias Waibel
- Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- TUM School of Life Sciences, Technical University of Munich, Weihenstephan, Germany
- Sayedali Shetab Boushehri
- Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- Department of Mathematics, Technical University of Munich, Munich, Germany
- Data & Analytics, Pharmaceutical Research and Early Development, Roche Innovation Center Munich (RICM), Penzberg, Germany
- Sandra Wiedenmann
- Helmholtz Pioneer Campus, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- Carsten Marr
- Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- Matthias Meier
- Helmholtz Pioneer Campus, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- Center for Biotechnology and Biomedicine, University of Leipzig, Leipzig, Germany