1
Kuhn TM, Paulsen M, Cuylen-Haering S. Accessible high-speed image-activated cell sorting. Trends Cell Biol 2024; 34:657-670. PMID: 38789300. DOI: 10.1016/j.tcb.2024.04.007.
Abstract
Over the past six decades, fluorescence-activated cell sorting (FACS) has become an essential technology for basic and clinical research by enabling the isolation of cells of interest in high throughput. Recent technological advancements have started a new era of flow cytometry. By combining the spatial resolution of microscopy with high-speed cell sorting, new instruments allow cell sorting based on simple image-derived parameters or sophisticated image analysis algorithms, thereby greatly expanding the scope of applications. In this review, we discuss the systems that are commercially available or have been described in enough methodological and engineering detail to allow their replication. We summarize their strengths and limitations and highlight applications that have the potential to transform various fields in basic life science research and clinical settings.
Affiliation(s)
- Terra M Kuhn
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Malte Paulsen
- Novo Nordisk Foundation Center for Stem Cell Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
- Sara Cuylen-Haering
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany.
2
Daniel J, Rose JTA, Vinnarasi FSF, Rajinikanth V. VGG-UNet/VGG-SegNet Supported Automatic Segmentation of Endoplasmic Reticulum Network in Fluorescence Microscopy Images. Scanning 2022; 2022:7733860. PMID: 35800206. PMCID: PMC9200602. DOI: 10.1155/2022/7733860.
Abstract
This research work aims to implement an automated segmentation process to extract the endoplasmic reticulum (ER) network in fluorescence microscopy images (FMI) using pretrained convolutional neural networks (CNNs). The threshold level of the raw FMI is complex, and extraction of the ER network is a challenging task; hence, an image conversion procedure is initially employed to reduce this complexity. This work employed the pretrained CNN schemes VGG-UNet and VGG-SegNet to mine the ER network from the chosen FMI test images. The proposed ER segmentation pipeline consists of the following phases: (i) clinical image collection, 16-bit to 8-bit conversion, and resizing; (ii) implementation of pretrained VGG-UNet and VGG-SegNet; (iii) extraction of the binary form of the ER network; (iv) comparison of the mined ER with the ground truth; and (v) computation of image measures and validation. The considered FMI dataset consists of 223 test images, and image augmentation is then implemented to increase the number of images. The results of this scheme are then compared against other CNN methods, such as U-Net, SegNet, and Res-UNet. The experimental outcome confirms a segmentation accuracy of >98% with VGG-UNet and VGG-SegNet. These results confirm that the proposed pipeline can be used to examine clinical-grade FMI.
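Phase (i) of the pipeline above includes a 16-bit to 8-bit conversion. A minimal sketch of that step as a linear rescale follows; the function name and the exact mapping are assumptions, since the abstract does not specify the conversion used.

```python
def to_uint8(pixels, max_val=65535):
    """Linearly rescale 16-bit intensities (0..max_val) to the 8-bit range 0..255."""
    return [min(255, max(0, round(p * 255 / max_val))) for p in pixels]
```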
Affiliation(s)
- Jesline Daniel
- Department of Computer Science and Engineering, St. Joseph's College of Engineering, OMR, Chennai, 600 119 Tamil Nadu, India
- J. T. Anita Rose
- Department of Computer Science and Engineering, St. Joseph's College of Engineering, OMR, Chennai, 600 119 Tamil Nadu, India
- Venkatesan Rajinikanth
- Department of Electronics and Instrumentation Engineering, St. Joseph's College of Engineering, OMR, Chennai, 600 119 Tamil Nadu, India
3
Lamoureux ES, Islamzada E, Wiens MVJ, Matthews K, Duffy SP, Ma H. Assessing red blood cell deformability from microscopy images using deep learning. Lab Chip 2021; 22:26-39. PMID: 34874395. DOI: 10.1039/d1lc01006a.
Abstract
Red blood cells (RBCs) must be highly deformable to transit through the microvasculature to deliver oxygen to tissues. The loss of RBC deformability resulting from pathology, natural aging, or storage in blood bags can impede the proper function of these cells. A variety of methods have been developed to measure RBC deformability, but these methods require specialized equipment, long measurement times, and highly skilled personnel. To address this challenge, we investigated whether a machine learning approach could be used to predict donor RBC deformability based on morphological features from single-cell microscope images. We used the microfluidic ratchet device to sort RBCs based on deformability. Sorted cells were then imaged and used to train a deep learning model to classify RBCs based on image features related to cell deformability. This model correctly predicted the deformability of individual RBCs with 81 ± 11% accuracy, averaged across ten donors. Using this model to score the deformability of RBC samples was accurate to within 10.4 ± 6.8% of the value obtained using the microfluidic ratchet device. While machine learning methods are frequently developed to automate human image analysis, our study is notable in showing that deep learning of single-cell microscopy images can be used to assess RBC deformability, a property not normally measurable by imaging. Measuring RBC deformability by imaging is also desirable because it can be performed rapidly using a standard microscopy system, potentially enabling RBC deformability studies to be performed as part of routine clinical assessments.
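One simple way to turn per-cell classifier outputs into a sample-level deformability score, as sketched below, is to report the percentage of cells labeled rigid; this aggregation is an illustrative assumption, not the paper's exact scoring procedure.

```python
def sample_score(cell_labels):
    """Percentage of cells predicted rigid (1 = rigid, 0 = deformable) in a sample."""
    return 100.0 * sum(cell_labels) / len(cell_labels)
```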
Affiliation(s)
- Erik S Lamoureux
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada.
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Emel Islamzada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Matthew V J Wiens
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Kerryn Matthews
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada.
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Simon P Duffy
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada.
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- British Columbia Institute of Technology, Burnaby, BC, Canada
- Hongshen Ma
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada.
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Vancouver Prostate Centre, Vancouver General Hospital, Vancouver, BC, Canada
4
Liu Y, Yin M, Sun S. DetexNet: Accurately Diagnosing Frequent and Challenging Pediatric Malignant Tumors. IEEE Trans Med Imaging 2021; 40:395-404. PMID: 32991280. DOI: 10.1109/tmi.2020.3027547.
Abstract
The most frequent extracranial solid tumors of childhood, peripheral neuroblastic tumors (pNTs), are very challenging to diagnose due to their diverse categories and varying forms. Auxiliary diagnostic methods for such pediatric malignancies are highly needed to assist pathologists and reduce the risk of misdiagnosis before treatment. In this paper, inspired by the particularities of microscopic pathology images, we integrate neural networks with the texture energy measure (TEM) and propose a novel network architecture named DetexNet (deep texture network). This method makes the low-level representation pattern clearer by embedding expert knowledge as a prior, so that the network can extract the key information from a relatively small pathological dataset more easily. By applying and fine-tuning TEM filters in the bottom layer of the network, we greatly improve the performance of the baseline. We further pre-train the model on unlabeled data with an auto-encoder architecture and apply a color space conversion to the input images. Two kinds of experiments under different assumptions in the limited-training-data setting are performed, and in both the proposed method achieves the best performance compared with other state-of-the-art models and physician diagnosis.
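The TEM filters referenced in the abstract are classically built from Laws' 1D texture kernels, with 2D masks formed as outer products. A minimal sketch of that construction follows; which specific masks DetexNet initializes and fine-tunes is not stated here, so the kernel choice is illustrative.

```python
# Laws' classic 1D kernels: L5 responds to intensity level, E5 to edges
L5 = [1, 4, 6, 4, 1]
E5 = [-1, -2, 0, 2, 1]

def outer(u, v):
    """Build a 2D texture energy mask as the outer product of two 1D kernels."""
    return [[a * b for b in v] for a in u]

L5E5 = outer(L5, E5)  # level-edge mask: smooths vertically, detects edges horizontally
```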
5
Berryman S, Matthews K, Lee JH, Duffy SP, Ma H. Image-based phenotyping of disaggregated cells using deep learning. Commun Biol 2020; 3:674. PMID: 33188302. PMCID: PMC7666170. DOI: 10.1038/s42003-020-01399-x.
Abstract
The ability to phenotype cells is fundamentally important in biological research and medicine. Current methods rely primarily on fluorescence labeling of specific markers, but there are many situations where this approach is unavailable or undesirable. Machine learning has been used for image cytometry but has been limited by cell agglomeration, and it is currently unclear whether this approach can reliably phenotype cells that are difficult to distinguish by the human eye. Here, we show that disaggregated single cells can be phenotyped with a high degree of accuracy using low-resolution bright-field and non-specific fluorescence images of the nucleus, cytoplasm, and cytoskeleton. Specifically, we trained a convolutional neural network using automatically segmented images of cells from eight standard cancer cell lines. These cells could be identified with an average F1-score of 95.3%, tested using separately acquired images. Our results demonstrate the potential to develop an "electronic eye" to phenotype cells directly from microscopy images.
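The F1-score reported above is the harmonic mean of precision and recall; a minimal computation from raw true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from per-class counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```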
Affiliation(s)
- Samuel Berryman
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Kerryn Matthews
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Jeong Hyun Lee
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Simon P Duffy
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- British Columbia Institute of Technology, Burnaby, BC, Canada
- Hongshen Ma
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada.
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada.
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada.
- Vancouver Prostate Centre, Vancouver General Hospital, Vancouver, BC, Canada.
6
Hung J, Goodman A, Ravel D, Lopes SCP, Rangel GW, Nery OA, Malleret B, Nosten F, Lacerda MVG, Ferreira MU, Rénia L, Duraisingh MT, Costa FTM, Marti M, Carpenter AE. Keras R-CNN: library for cell detection in biological images using deep neural networks. BMC Bioinformatics 2020; 21:300. PMID: 32652926. PMCID: PMC7353739. DOI: 10.1186/s12859-020-03635-x.
Abstract
BACKGROUND: A common yet still manual task in basic biology research, high-throughput drug screening, and digital pathology is identifying the number, location, and type of individual cells in images. Object detection methods can be useful for identifying individual cells as well as their phenotype in one step. State-of-the-art deep learning for object detection is poised to improve the accuracy and efficiency of biological image analysis. RESULTS: We created Keras R-CNN to bring leading computational research to the everyday practice of bioimage analysts. Keras R-CNN implements deep learning object detection techniques using Keras and TensorFlow (https://github.com/broadinstitute/keras-rcnn). We demonstrate the command line tool's simplified Application Programming Interface on two important biological problems, nucleus detection and malaria stage classification, and show its potential for identifying and classifying a large number of cells. For malaria stage classification, we compare results with expert human annotators and find comparable performance. CONCLUSIONS: Keras R-CNN is a Python package that performs automated cell identification for both brightfield and fluorescence images and can process large image sets. Both the package and image datasets are freely available on GitHub and the Broad Bioimage Benchmark Collection.
Affiliation(s)
- Jane Hung
- Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- The Broad Institute, Cambridge, MA, USA
- Deepali Ravel
- Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Stefanie C P Lopes
- Instituto Leônidas e Maria Deane, Fundação Oswaldo Cruz (FIOCRUZ), Manaus, Amazonas, Brazil
- Fundação de Medicina Tropical Dr. Heitor Vieira Dourado, Gerência de Malária, Manaus, Amazonas, Brazil
- Benoit Malleret
- Department of Microbiology & Immunology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 119077, Singapore
- Singapore Immunology Network (SIgN), Agency for Science Research & Technology, Singapore, 138632, Singapore
- Francois Nosten
- Shoklo Malaria Research Unit, Mahidol-Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Mae Sot, Thailand
- Centre for Tropical Medicine and Global Health, Nuffield, Oxford, UK
- Marcus V G Lacerda
- Instituto Leônidas e Maria Deane, Fundação Oswaldo Cruz (FIOCRUZ), Manaus, Amazonas, Brazil
- Fundação de Medicina Tropical Dr. Heitor Vieira Dourado, Gerência de Malária, Manaus, Amazonas, Brazil
- Laurent Rénia
- Singapore Immunology Network (SIgN), Agency for Science Research & Technology, Singapore, 138632, Singapore
- Fabio T M Costa
- Department of Genetics, Evolution, Microbiology and Immunology, University of Campinas, Campinas, SP, Brazil
- Matthias Marti
- Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Wellcome Centre for Integrative Parasitology Institute of Infection, Immunity and Inflammation, College of Medical Veterinary & Life Sciences, University of Glasgow, Glasgow, UK
7
Cutiongco MFA, Jensen BS, Reynolds PM, Gadegaard N. Predicting gene expression using morphological cell responses to nanotopography. Nat Commun 2020; 11:1384. PMID: 32170111. PMCID: PMC7070086. DOI: 10.1038/s41467-020-15114-1.
Abstract
Cells respond in complex ways to their environment, making it challenging to predict a direct relationship between the two. A key problem is the lack of informative representations of parameters that translate directly into biological function. Here we present a platform to relate the effects of cell morphology to gene expression induced by nanotopography. This platform utilizes the ‘morphome’, a multivariate dataset of cell morphology parameters. We create a Bayesian linear regression model that uses the morphome to robustly predict changes in bone, cartilage, muscle, and fibrous gene expression induced by nanotopography. Furthermore, through this model we effectively predict nanotopography-induced gene expression from a complex co-culture microenvironment. The information from the morphome uncovers previously unknown effects of nanotopography on cell–cell interaction and osteogenic gene expression at the single-cell level. The predictive relationship between morphology and gene expression arising from cell–material interaction shows promise for the exploration of new topographies. The surface nanotopography of biomaterials directs cell behavior, but screening for desired effects is inefficient. Here, the authors introduce a platform that enables prediction of nanotopography-induced gene expression changes from changes in cell morphology, including in co-culture environments.
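A Bayesian linear regression of the kind described maps morphome features to expression changes via a Gaussian posterior over the regression weights. The conjugate-prior sketch below is a minimal illustration; the prior precision `alpha` and noise variance `sigma2` are assumed hyperparameters, not values from the paper.

```python
import numpy as np

def posterior(X, y, alpha=1.0, sigma2=1.0):
    """Posterior mean and covariance of weights under a N(0, alpha^-1 I) prior
    and Gaussian observation noise with variance sigma2."""
    d = X.shape[1]
    S = np.linalg.inv(alpha * np.eye(d) + X.T @ X / sigma2)  # posterior covariance
    m = S @ X.T @ y / sigma2                                 # posterior mean
    return m, S
```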
Affiliation(s)
- Marie F A Cutiongco
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Glasgow, UK
- Paul M Reynolds
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Glasgow, UK
- Nikolaj Gadegaard
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Glasgow, UK
8
Yang SJ, Lipnick SL, Makhortova NR, Venugopalan S, Fan M, Armstrong Z, Schlaeger TM, Deng L, Chung WK, O'Callaghan L, Geraschenko A, Whye D, Berndl M, Hazard J, Williams B, Narayanaswamy A, Ando DM, Nelson P, Rubin LL. Applying Deep Neural Network Analysis to High-Content Image-Based Assays. SLAS Discov 2019; 24:829-841. PMID: 31284814. PMCID: PMC6710615. DOI: 10.1177/2472555219857715.
Abstract
The etiological underpinnings of many CNS disorders are not well understood. This is likely due to the fact that individual diseases aggregate numerous pathological subtypes, each associated with a complex landscape of genetic risk factors. To overcome these challenges, researchers are integrating novel data types from numerous patients, including imaging studies capturing broadly applicable features from patient-derived materials. These datasets, when combined with machine learning, potentially hold the power to elucidate the subtle patterns that stratify patients by shared pathology. In this study, we interrogated whether high-content imaging of primary skin fibroblasts, using the Cell Painting method, could reveal disease-relevant information among patients. First, we showed that technical features such as batch/plate type, plate, and location within a plate lead to detectable nuisance signals, as revealed by a pre-trained deep neural network and analysis with deep image embeddings. Using a plate design and image acquisition strategy that accounts for these variables, we performed a pilot study with 12 healthy controls and 12 subjects affected by the severe genetic neurological disorder spinal muscular atrophy (SMA), and evaluated whether a convolutional neural network (CNN) generated using a subset of the cells could distinguish disease states on cells from the remaining unseen control–SMA pair. Our results indicate that these two populations could effectively be differentiated from one another and that model selectivity is insensitive to batch/plate type. One caveat is that the samples were also largely separated by source. These findings lay a foundation for how to conduct future studies exploring diseases with more complex genetic contributions and unknown subtypes.
Affiliation(s)
- Scott L Lipnick
- Department of Stem Cell and Regenerative Biology, Harvard University, Cambridge, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Center for Assessment Technology & Continuous Health (CATCH), Massachusetts General Hospital, Boston, MA, USA
- Nina R Makhortova
- Department of Stem Cell and Regenerative Biology, Harvard University, Cambridge, MA, USA
- Stem Cell Program, Boston Children's Hospital, Boston, MA, USA
- Liyong Deng
- Departments of Pediatrics and Medicine, Columbia University Medical Center, New York, NY, USA
- Wendy K Chung
- Departments of Pediatrics and Medicine, Columbia University Medical Center, New York, NY, USA
- Dosh Whye
- Department of Stem Cell and Regenerative Biology, Harvard University, Cambridge, MA, USA
- Lee L Rubin
- Department of Stem Cell and Regenerative Biology, Harvard University, Cambridge, MA, USA
- Harvard Stem Cell Institute, Cambridge, MA, USA
9
Nguyen D, Uhlmann V, Planchette AL, Marchand PJ, Van De Ville D, Lasser T, Radenovic A. Supervised learning to quantify amyloidosis in whole brains of an Alzheimer's disease mouse model acquired with optical projection tomography. Biomed Opt Express 2019; 10:3041-3060. PMID: 31259073. PMCID: PMC6583328. DOI: 10.1364/boe.10.003041.
Abstract
Alzheimer's disease (AD) is characterized by amyloidosis of brain tissues. This phenomenon is studied with genetically modified mouse models. We propose a method to quantify amyloidosis in whole 5xFAD mouse brains, a model of AD. We use optical projection tomography (OPT) and a random forest voxel classifier to segment and measure amyloid plaques. We validate our method in a preliminary cross-sectional study, where we measure 6136 ± 1637, 8477 ± 3438, and 17267 ± 4241 plaques (mean ± SD) at 11, 17, and 31 weeks. Overall, this method can be used in the evaluation of new treatments against AD.
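Once the voxel classifier has produced a binary mask, counting plaques reduces to counting connected components. A small flood-fill sketch in 2D is shown below; the paper works on 3D voxel volumes, so this is a simplified illustration of the counting step only.

```python
def count_components(mask):
    """Count 4-connected foreground components in a binary 2D mask (lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill one component
                while stack:
                    i, j = stack.pop()
                    if 0 <= i < h and 0 <= j < w and mask[i][j] and not seen[i][j]:
                        seen[i][j] = True
                        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return count
```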
Affiliation(s)
- David Nguyen
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
- Medical Image Processing Lab, École Polytechnique Fédérale de Lausanne, Genève, Genève, Switzerland
- Laboratoire d’Optique Biomédicale, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
- Virginie Uhlmann
- Biomedical Imaging Group, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
- European Bioinformatics Institute, EMBL-EBI, Cambridge, United Kingdom
- Arielle L. Planchette
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
- Laboratoire d’Optique Biomédicale, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
- Paul J. Marchand
- Laboratoire d’Optique Biomédicale, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
- Dimitri Van De Ville
- Medical Image Processing Lab, École Polytechnique Fédérale de Lausanne, Genève, Genève, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Genève, Genève, Switzerland
- Theo Lasser
- Laboratoire d’Optique Biomédicale, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
- Aleksandra Radenovic
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland
10
Janosch A, Kaffka C, Bickle M. Unbiased Phenotype Detection Using Negative Controls. SLAS Discov 2019; 24:234-241. PMID: 30616488. PMCID: PMC6484531. DOI: 10.1177/2472555218818053.
Abstract
Phenotypic screens using automated microscopy allow comprehensive measurement of the effects of compounds on cells due to the number of markers that can be scored and the richness of the parameters that can be extracted. The high dimensionality of the data is both a rich source of information and a source of noise that can hide information. Many methods have been proposed to deal with this complex data in order to reduce the complexity and identify interesting phenotypes. Nevertheless, the majority of laboratories still use only one or two parameters in their analysis, likely due to the computational challenges of carrying out a more sophisticated analysis. Here, we present a novel method that allows new, previously unknown phenotypes to be discovered based on negative controls only. The method is compared with L1-norm regularization, a standard method for obtaining a sparse matrix. The analytical pipeline is implemented in the open-source software KNIME, allowing the method to be adopted by many laboratories, even ones without advanced computing knowledge.
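The L1-norm regularization used as the comparison method obtains sparsity by driving small coefficients to exactly zero via soft-thresholding; a minimal sketch of that operator (the proximal operator of the L1 norm):

```python
def soft_threshold(w, lam):
    """Proximal operator of lam*|w|: shrink a coefficient toward zero,
    clipping values with magnitude <= lam to exactly 0."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0
```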
Affiliation(s)
- Antje Janosch
- Technology Development Studio, Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Carolin Kaffka
- Fraunhofer-Institut für Verkehrs- und Infrastruktursysteme, Dresden, Germany
- Marc Bickle
- Technology Development Studio, Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
11
Godinez WJ, Hossain I, Lazic SE, Davies JW, Zhang X. A multi-scale convolutional neural network for phenotyping high-content cellular images. Bioinformatics 2018; 33:2010-2019. PMID: 28203779. DOI: 10.1093/bioinformatics/btx069.
Abstract
Motivation: Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Results: Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. Availability and implementation: The network specifications and solver definitions are provided in Supplementary Software 1. Contact: william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary information: Supplementary data are available at Bioinformatics online.
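The multi-scale inputs of an M-CNN can be produced by repeatedly downsampling the image into a pyramid. The sketch below shows one 2×2 average-pooling step; the paper's exact scale set and pooling scheme are not reproduced here, so this is an assumed construction for illustration.

```python
def avg_pool_2x2(img):
    """Halve each spatial dimension of a 2D image by averaging 2x2 blocks."""
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4
             for c in range(0, len(img[0]) - 1, 2)]
            for r in range(0, len(img) - 1, 2)]
```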
Affiliation(s)
- William J Godinez
- Novartis Institutes for BioMedical Research Inc., Basel, Switzerland
- Imtiaz Hossain
- Novartis Institutes for BioMedical Research Inc., Basel, Switzerland
- Stanley E Lazic
- Novartis Institutes for BioMedical Research Inc., Basel, Switzerland
- John W Davies
- Novartis Institutes for BioMedical Research Inc., Cambridge, MA, USA
- Xian Zhang
- Novartis Institutes for BioMedical Research Inc., Basel, Switzerland
12
Marée R. The Need for Careful Data Collection for Pattern Recognition in Digital Pathology. J Pathol Inform 2017; 8:19. PMID: 28480122. PMCID: PMC5404354. DOI: 10.4103/jpi.jpi_94_16.
Abstract
Effective pattern recognition requires carefully designed ground-truth datasets. In this technical note, we first summarize potential data collection issues in digital pathology and then propose guidelines to build more realistic ground-truth datasets and to control their quality. We hope our comments will foster the effective application of pattern recognition approaches in digital pathology.
Affiliation(s)
- Raphaël Marée
- Department of Electrical Engineering and Computer Science, Montefiore Institute, University of Liège, 4000 Liège, Belgium
13
Protein subcellular localization of fluorescence microscopy images: Employing new statistical and Texton based image features and SVM based ensemble classification. Inf Sci (N Y) 2016. DOI: 10.1016/j.ins.2016.01.064.
14
CP-CHARM: segmentation-free image classification made accessible. BMC Bioinformatics 2016; 17:51. PMID: 26817459. PMCID: PMC4729047. DOI: 10.1186/s12859-016-0895-y.
Abstract
Background: Automated classification using machine learning often relies on features derived from segmenting individual objects, which can be difficult to automate. WND-CHARM is a previously developed classification algorithm in which features are computed on the whole image, thereby avoiding the need for segmentation. The algorithm obtained encouraging results but requires considerable computational expertise to execute. Furthermore, some benchmark sets have been shown to be subject to confounding artifacts that overestimate classification accuracy. Results: We developed CP-CHARM, a user-friendly image-based classification algorithm inspired by WND-CHARM in (i) its ability to capture a wide variety of morphological aspects of the image, and (ii) the absence of a requirement for segmentation. In order to make such an image-based classification method easily accessible to the biological research community, CP-CHARM relies on the widely used open-source image analysis software CellProfiler for feature extraction. To validate our method, we reproduced WND-CHARM’s results and ensured that CP-CHARM obtained comparable performance. We then successfully applied our approach on cell-based assay data and on tissue images. We designed these new training and test sets to reduce the effect of batch-related artifacts. Conclusions: The proposed method preserves the strengths of WND-CHARM: it extracts a wide variety of morphological features directly from whole images, thereby avoiding the need for cell segmentation, and additionally makes these methods easily accessible to researchers without computational expertise by implementing them as a CellProfiler pipeline. It has been demonstrated to perform well on a wide range of bioimage classification problems, including on new datasets that have been carefully selected and annotated to minimize batch effects. This provides for the first time a realistic and reliable assessment of the whole-image classification strategy.
Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-0895-y) contains supplementary material, which is available to authorized users.
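The core idea of segmentation-free classification — compute features on the whole image, then feed them to a simple classifier — can be illustrated with a minimal sketch. This is not the CP-CHARM/CellProfiler pipeline itself; the three features and the nearest-centroid classifier are simplified stand-ins chosen for illustration.

```python
import numpy as np

def whole_image_features(img):
    """Segmentation-free features computed on the full image:
    intensity statistics plus a coarse gradient-energy texture measure."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([
        img.mean(),              # overall brightness
        img.std(),               # contrast
        np.mean(gx**2 + gy**2),  # texture / edge energy
    ])

def nearest_centroid_fit(X, y):
    """Per-class mean feature vectors: a minimal stand-in for the
    discriminant-style classifiers used in whole-image classification."""
    classes = sorted(set(y))
    return {c: X[[i for i, lab in enumerate(y) if lab == c]].mean(axis=0)
            for c in classes}

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy example: "smooth" vs "textured" synthetic images, no segmentation anywhere.
rng = np.random.default_rng(0)
smooth = [np.full((16, 16), 0.5) + rng.normal(0, 0.01, (16, 16)) for _ in range(5)]
textured = [rng.random((16, 16)) for _ in range(5)]
X = np.array([whole_image_features(i) for i in smooth + textured])
y = ["smooth"] * 5 + ["textured"] * 5
model = nearest_centroid_fit(X, y)
print(nearest_centroid_predict(model, whole_image_features(rng.random((16, 16)))))
```

Because every feature is a global image statistic, no object boundaries ever need to be found — the property that motivates both WND-CHARM and CP-CHARM.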
Collapse
|
15
|
Kandaswamy C, Silva LM, Alexandre LA, Santos JM. High-Content Analysis of Breast Cancer Using Single-Cell Deep Transfer Learning. ACTA ACUST UNITED AC 2016; 21:252-9. [PMID: 26746583 DOI: 10.1177/1087057115623451] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2015] [Accepted: 11/30/2015] [Indexed: 01/17/2023]
Abstract
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from such assays allow precise quantitative measures, enabling the effects of small molecules on host cells to be distinguished from those on tumor cells. In this work, we are particularly interested in applying deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature-reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature-reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the intensive and computationally demanding effort of searching the huge parameter space of a DNN. Results show that with this approach we obtain a 30% speedup and a 2% accuracy improvement.
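The transfer-learning trick the abstract describes — reuse lower layers learned on a source task and retrain only the top layer on the target task — can be sketched as follows. This is not the authors' DNN: the "pretrained" layer here is a fixed random projection standing in for learned convolutional features, and only a logistic top layer is trained, purely to illustrate why freezing lower layers shrinks the search space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frozen lower layers: a fixed random projection standing in for
# weights pretrained on a source task (a real DTL setup would load trained ones).
W_frozen = rng.normal(size=(4, 8))

def features(X):
    return np.tanh(X @ W_frozen)          # frozen feature extractor

def train_head(X, y, lr=0.5, steps=500):
    """Retrain only the top (logistic) layer on the target task."""
    F = features(X)
    w = np.zeros(F.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(F @ w + b)))  # sigmoid output
        g = p - y                           # gradient of cross-entropy loss
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (features(X) @ w + b > 0).astype(int)

# Toy target task: two well-separated Gaussian blobs of "cells".
X = np.vstack([rng.normal(-1, 0.3, (20, 4)), rng.normal(1, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
w, b = train_head(X, y)
print((predict(w, b, X) == y).mean())  # training accuracy
```

Only the 8-weight head is optimized; the frozen projection is never touched, which is the source of the speedup the paper reports for DTL.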
Collapse
Affiliation(s)
- Chetak Kandaswamy
- Instituto de Engenharia Biomédica (INEB), Porto, Portugal Departamento de Engenharia Eletrotécnica e de Computadores, Faculdade de Engenharia da Universidade do Porto, Porto, Portugal
| | - Luís M Silva
- Instituto de Engenharia Biomédica (INEB), Porto, Portugal Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal Departamento de Matemática, Universidade de Aveiro, Aveiro, Portugal
| | - Luís A Alexandre
- Universidade da Beira Interior, Instituto de Telecomunicações, Covilhã, Portugal
| | - Jorge M Santos
- Instituto de Engenharia Biomédica (INEB), Porto, Portugal Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal Departamento de Matemática, Instituto Superior de Engenharia do Instituto Politécnico do Porto, Porto, Portugal
| |
Collapse
|
16
|
Jeanray N, Marée R, Pruvot B, Stern O, Geurts P, Wehenkel L, Muller M. Phenotype classification of zebrafish embryos by supervised learning. PLoS One 2015; 10:e0116989. [PMID: 25574849 PMCID: PMC4289190 DOI: 10.1371/journal.pone.0116989] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2014] [Accepted: 12/18/2014] [Indexed: 11/18/2022] Open
Abstract
The zebrafish is increasingly used to assess the biological properties of chemical substances and is thus becoming a specific tool for toxicological and pharmacological studies. The effects of chemical substances on embryo survival and development are generally evaluated manually through microscopic observation by an expert and documented by several typical photographs. Here, we present a methodology to automatically classify brightfield images of wildtype zebrafish embryos according to their defects by using an image analysis approach based on supervised machine learning. We show that, compared to manual classification, automatic classification results in 90 to 100% agreement with the consensus voting of biological experts for nine out of eleven considered defects in 3-day-old zebrafish larvae. Automating the analysis and classification of zebrafish embryo pictures reduces the workload and time required of the biological expert and increases the reproducibility and objectivity of this classification.
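The evaluation metric used here — agreement between the automatic classifier and the experts' consensus vote — is simple to make concrete. The embryo labels and expert panel below are hypothetical; only the voting and agreement logic is the point.

```python
from collections import Counter

def consensus(labels):
    """Majority vote among expert annotations for one embryo."""
    return Counter(labels).most_common(1)[0][0]

def agreement(auto_labels, expert_votes):
    """Fraction of embryos where the automatic label matches the
    experts' consensus vote."""
    matches = sum(a == consensus(v) for a, v in zip(auto_labels, expert_votes))
    return matches / len(auto_labels)

# Hypothetical example: 5 embryos scored for one defect by 3 experts.
expert_votes = [
    ["normal", "normal", "defect"],
    ["defect", "defect", "defect"],
    ["normal", "normal", "normal"],
    ["defect", "normal", "defect"],
    ["normal", "defect", "normal"],
]
auto = ["normal", "defect", "normal", "defect", "defect"]
print(agreement(auto, expert_votes))  # → 0.8
```

The classifier disagrees with the consensus only on the last embryo, so agreement is 4/5; the paper reports 90-100% agreement computed in this spirit for nine of eleven defects.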
Collapse
Affiliation(s)
- Nathalie Jeanray
- GIGA-Development, Stem Cells and Regenerative Medicine, Organogenesis and Regeneration, University of Liège, Liège, Belgium
- GIGA-Systems Biology and Chemical Biology, Dept. EE & CS, University of Liège, Liège, Belgium
| | - Raphaël Marée
- GIGA Bioinformatics Core Facility, University of Liège, Liège, Belgium
| | - Benoist Pruvot
- GIGA-Development, Stem Cells and Regenerative Medicine, Organogenesis and Regeneration, University of Liège, Liège, Belgium
| | - Olivier Stern
- GIGA-Systems Biology and Chemical Biology, Dept. EE & CS, University of Liège, Liège, Belgium
| | - Pierre Geurts
- GIGA-Systems Biology and Chemical Biology, Dept. EE & CS, University of Liège, Liège, Belgium
| | - Louis Wehenkel
- GIGA-Systems Biology and Chemical Biology, Dept. EE & CS, University of Liège, Liège, Belgium
- GIGA Bioinformatics Core Facility, University of Liège, Liège, Belgium
| | - Marc Muller
- GIGA-Development, Stem Cells and Regenerative Medicine, Organogenesis and Regeneration, University of Liège, Liège, Belgium
- * E-mail:
| |
Collapse
|
17
|
Krauß SD, Petersen D, Niedieker D, Fricke I, Freier E, El-Mashtoly SF, Gerwert K, Mosig A. Colocalization of fluorescence and Raman microscopic images for the identification of subcellular compartments: a validation study. Analyst 2015; 140:2360-8. [DOI: 10.1039/c4an02153c] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
This paper introduces algorithms for identifying overlapping observations between Raman and fluorescence microscopic images of the same sample.
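A standard quantitative measure behind such cross-modality comparisons is the Pearson correlation of pixel intensities between two registered images. The sketch below is a simplified stand-in for the paper's matching algorithms, assuming the two images are already spatially registered.

```python
import numpy as np

def pearson_colocalization(img_a, img_b):
    """Pearson correlation of pixel intensities between two registered
    images: +1 means perfectly co-varying signals, 0 means no relation."""
    a = img_a.ravel().astype(float)
    b = img_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(2)
fluor = rng.random((32, 32))                              # fluorescence channel
raman_like = 0.8 * fluor + 0.2 * rng.random((32, 32))     # correlated second modality
print(pearson_colocalization(fluor, raman_like))
```

Because the second channel is constructed to share 80% of the first channel's signal, the coefficient comes out high; in practice registration errors and modality-specific noise pull it down, which is exactly what a validation study like this one has to account for.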
Collapse
Affiliation(s)
- Sascha D. Krauß
- Department of Biophysics
- Ruhr-University Bochum
- 44780 Bochum
- Germany
| | - Dennis Petersen
- Department of Biophysics
- Ruhr-University Bochum
- 44780 Bochum
- Germany
| | - Daniel Niedieker
- Department of Biophysics
- Ruhr-University Bochum
- 44780 Bochum
- Germany
| | - Inka Fricke
- Department of Biophysics
- Ruhr-University Bochum
- 44780 Bochum
- Germany
| | - Erik Freier
- Department of Biophysics
- Ruhr-University Bochum
- 44780 Bochum
- Germany
| | | | - Klaus Gerwert
- Department of Biophysics
- Ruhr-University Bochum
- 44780 Bochum
- Germany
| | - Axel Mosig
- Department of Biophysics
- Ruhr-University Bochum
- 44780 Bochum
- Germany
| |
Collapse
|
18
|
Sommer C, Gerlich DW. Machine learning in cell biology - teaching computers to recognize phenotypes. J Cell Sci 2013; 126:5529-39. [PMID: 24259662 DOI: 10.1242/jcs.123604] [Citation(s) in RCA: 169] [Impact Index Per Article: 14.1] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. For highly complex image analysis tasks, implementing static, predefined processing rules is often cumbersome. Machine-learning methods instead seek to use the intrinsic structure of the data, as well as the expert annotations of biologists, to infer models that can solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays, and we therefore include extensive discussion of how to optimize both the experimental workflow and the data analysis pipeline.
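The workflow this Commentary walks through — convert each image to a fixed-length feature vector, train a classifier, and evaluate on held-out data — can be sketched minimally. The features and the k-nearest-neighbour classifier are illustrative choices, not the review's recommendation of any particular algorithm; the held-out split is the one point the review stresses that the sketch preserves.

```python
import numpy as np

def to_feature_vector(img):
    """One common representation: summary statistics per image, so every
    image becomes a fixed-length numeric vector."""
    return np.array([img.mean(), img.std(), np.median(img)])

def knn_predict(X_train, y_train, x, k=3):
    """Minimal k-nearest-neighbour classifier."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(3)
bright = [rng.normal(0.8, 0.05, (8, 8)) for _ in range(10)]
dim = [rng.normal(0.2, 0.05, (8, 8)) for _ in range(10)]
X = np.array([to_feature_vector(i) for i in bright + dim])
y = ["bright"] * 10 + ["dim"] * 10

# Held-out validation: never evaluate on the images used for training.
train, test = np.arange(0, 20, 2), np.arange(1, 20, 2)
acc = np.mean([knn_predict(X[train], [y[i] for i in train], X[t]) == y[t]
               for t in test])
print(acc)
```

Skipping the train/test separation is the classic pitfall such guides warn against: a classifier evaluated on its own training images reports optimistically inflated accuracy.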
Collapse
Affiliation(s)
- Christoph Sommer
- Institute of Molecular Biotechnology of the Austrian Academy of Sciences (IMBA), 1030 Vienna, Austria
| | | |
Collapse
|
19
|
Zhou J, Lamichhane S, Sterne G, Ye B, Peng H. BIOCAT: a pattern recognition platform for customizable biological image classification and annotation. BMC Bioinformatics 2013; 14:291. [PMID: 24090164 PMCID: PMC3854450 DOI: 10.1186/1471-2105-14-291] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2013] [Accepted: 09/11/2013] [Indexed: 11/29/2022] Open
Abstract
Background Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expression, and classifying phenotypes. To provide effective and efficient image classification and annotation for ever-increasing numbers of microscopic images, it is desirable to have tools that can combine and compare various algorithms and build customizable solutions for different biological problems. However, current tools are often limited as user-friendly, extensible platforms for annotating higher-dimensional images that correspond to multiple complicated categories. Results We developed the BIOimage Classification and Annotation Tool (BIOCAT). It can apply pattern recognition algorithms to two- and three-dimensional biological image sets, as well as to regions of interest (ROIs) in individual images, for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of roughly 20 built-in feature extractors, feature selectors, and classifiers in BIOCAT. The algorithms are modularized so that they can be “chained” in a customizable way to form adaptive solutions for various problems, and the plugin-based extensibility gives the tool an open architecture for incorporating future algorithms. We have applied BIOCAT to the classification and annotation of images and ROIs of different properties, with applications in cell biology and neuroscience. Conclusions BIOCAT provides a user-friendly, portable platform for pattern-recognition-based biological image classification of two- and three-dimensional images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological problems involving image classification and annotation. We also demonstrate the effectiveness of the 3D anisotropic wavelet in classifying both 3D image sets and ROIs.
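The "chaining" of modular extractors, selectors, and classifiers the abstract describes is a composition pattern that can be sketched in a few lines. This is not BIOCAT's actual plugin API; the three steps below are hypothetical stand-ins that only illustrate how interchangeable modules form one adaptive pipeline.

```python
class Pipeline:
    """Minimal sketch of 'chaining' modular steps: each step consumes the
    previous step's output, so modules can be swapped independently."""
    def __init__(self, *steps):
        self.steps = steps

    def run(self, data):
        for step in self.steps:
            data = step(data)
        return data

# Hypothetical stand-ins for a feature extractor, selector, and classifier.
extract = lambda img: [sum(row) / len(row) for row in img]         # row means
select = lambda feats: feats[:2]                                   # keep top-2
classify = lambda feats: "bright" if sum(feats) / len(feats) > 0.5 else "dim"

pipeline = Pipeline(extract, select, classify)
print(pipeline.run([[0.9, 0.8], [0.7, 0.9], [0.1, 0.2]]))  # → bright
```

Swapping in a different extractor or classifier changes only one constructor argument, which is the open, plugin-based architecture the tool's description implies.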
Collapse
Affiliation(s)
- Jie Zhou
- Department of Computer Science, Northern Illinois University, DeKalb, IL 60115, USA.
| | | | | | | | | |
Collapse
|
20
|
Ljosa V, Caie PD, Ter Horst R, Sokolnicki KL, Jenkins EL, Daya S, Roberts ME, Jones TR, Singh S, Genovesio A, Clemons PA, Carragher NO, Carpenter AE. Comparison of methods for image-based profiling of cellular morphological responses to small-molecule treatment. ACTA ACUST UNITED AC 2013; 18:1321-9. [PMID: 24045582 DOI: 10.1177/1087057113503553] [Citation(s) in RCA: 115] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
Quantitative microscopy has proven a versatile and powerful phenotypic screening technique. Recently, image-based profiling has shown promise as a means for broadly characterizing molecules' effects on cells in several drug-discovery applications, including target-agnostic screening and predicting a compound's mechanism of action (MOA). Several profiling methods have been proposed, but little is known about their comparative performance, impeding the wider adoption and further development of image-based profiling. We compared these methods by applying them to a widely applicable assay of cultured cells and measuring the ability of each method to predict the MOA of a compendium of drugs. A very simple method that is based on population means performed as well as methods designed to take advantage of the measurements of individual cells. This is surprising because many treatments induced a heterogeneous phenotypic response across the cell population in each sample. Another simple method, which performs factor analysis on the cellular measurements before averaging them, provided substantial improvement and was able to predict MOA correctly for 94% of the treatments in our ground-truth set. To facilitate the ready application and future development of image-based phenotypic profiling methods, we provide our complete ground-truth and test data sets, as well as open-source implementations of the various methods in a common software framework.
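The population-mean profiling method the comparison found surprisingly effective — average each feature over all cells in a sample, then predict MOA by the nearest other treatment's profile — can be sketched directly. The MOA names, feature counts, and synthetic cell measurements below are hypothetical; only the mean-profile and nearest-neighbour logic mirrors the method described.

```python
import numpy as np

def mean_profile(cell_measurements):
    """Population-mean profile: average each feature over all cells in a
    treated sample (the simple method that performed well in the comparison)."""
    return np.asarray(cell_measurements).mean(axis=0)

def predict_moa(profiles, moas, query_idx):
    """Nearest-profile MOA prediction, excluding the query treatment itself."""
    q = profiles[query_idx]
    best, best_d = None, np.inf
    for i, p in enumerate(profiles):
        if i == query_idx:
            continue
        d = np.linalg.norm(q - p)
        if d < best_d:
            best, best_d = moas[i], d
    return best

rng = np.random.default_rng(4)
# Hypothetical compendium: two MOAs, two treatments each,
# 50 cells x 3 features per treated sample.
centers = {"MOA-A": np.array([0.0, 0.0, 0.0]), "MOA-B": np.array([2.0, 2.0, 2.0])}
moas = ["MOA-A", "MOA-A", "MOA-B", "MOA-B"]
profiles = [mean_profile(centers[m] + rng.normal(0, 0.5, (50, 3))) for m in moas]
print(predict_moa(profiles, moas, 0))  # should recover "MOA-A"
```

Averaging discards single-cell heterogeneity, which is why the paper calls its strong performance surprising; the factor-analysis variant it favors inserts a decorrelation step on the per-cell measurements before this same averaging.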
Collapse
Affiliation(s)
- Vebjorn Ljosa
- 1Broad Institute of MIT and Harvard, Cambridge, MA, USA
| | | | | | | | | | | | | | | | | | | | | | | | | |
Collapse
|
21
|
Mikut R, Dickmeis T, Driever W, Geurts P, Hamprecht FA, Kausler BX, Ledesma-Carbayo MJ, Marée R, Mikula K, Pantazis P, Ronneberger O, Santos A, Stotzka R, Strähle U, Peyriéras N. Automated processing of zebrafish imaging data: a survey. Zebrafish 2013; 10:401-21. [PMID: 23758125 PMCID: PMC3760023 DOI: 10.1089/zeb.2013.0886] [Citation(s) in RCA: 56] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023] Open
Abstract
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate terabytes of image data, the handling and analysis of which become a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
Collapse
Affiliation(s)
- Ralf Mikut
- Karlsruhe Institute of Technology, Karlsruhe, Germany.
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
Collapse
|
22
|
Swamidoss IN, Kårsnäs A, Uhlmann V, Ponnusamy P, Kampf C, Simonsson M, Wählby C, Strand R. Automated classification of immunostaining patterns in breast tissue from the human protein atlas. J Pathol Inform 2013; 4:S14. [PMID: 23766936 PMCID: PMC3678740 DOI: 10.4103/2153-3539.109881] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2013] [Accepted: 01/21/2013] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND The Human Protein Atlas (HPA) is an effort to map the location of all human proteins (http://www.proteinatlas.org/). It contains a large number of histological images of sections from human tissue. Tissue microarrays (TMAs) are imaged by a slide-scanning microscope, and each image represents a thin slice of a tissue core with a dark-brown antibody-specific stain and a blue counterstain. When generating antibodies for protein profiling of the human proteome, an important step in quality control is to compare the staining patterns of different antibodies directed towards the same protein. This comparison is an ultimate control that the antibody recognizes the right protein. In this paper, we propose and evaluate different approaches for classifying sub-cellular antibody staining patterns in breast tissue samples. MATERIALS AND METHODS The proposed methods include the computation of various features, including gray-level co-occurrence matrix (GLCM) features, complex wavelet co-occurrence matrix (CWCM) features, and weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM)-inspired features. The extracted features are fed into two different multivariate classifiers (a support vector machine (SVM) and a linear discriminant analysis (LDA) classifier). Before extracting features, we use color deconvolution to separate different tissue components, such as the brown-stained positive regions and the blue cellular regions, in the immunostained TMA images of breast tissue. RESULTS We present classification results based on combinations of feature measurements. The proposed complex wavelet features and the WND-CHARM features have accuracy similar to that of a human expert. CONCLUSIONS Both human experts and the proposed automated methods have difficulty discriminating between nuclear and cytoplasmic staining patterns. This is to a large extent due to mixed staining of nucleus and cytoplasm. Methods for quantification of staining patterns in histopathology have many applications, ranging from antibody quality control to tumor grading.
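The GLCM features mentioned above start from a gray-level co-occurrence matrix: a table of how often pairs of quantized intensity levels occur in adjacent pixels, from which Haralick-style statistics such as contrast are derived. The sketch below uses a single horizontal offset and four gray levels; real GLCM pipelines aggregate several offsets, angles, and many more statistics.

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-level co-occurrence matrix for horizontally adjacent pixels,
    normalized so entries are joint probabilities."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize to levels
    M = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        M[i, j] += 1
    return M / M.sum()

def glcm_contrast(M):
    """Haralick-style contrast: large when neighbouring pixels differ,
    zero for a perfectly flat patch."""
    i, j = np.indices(M.shape)
    return float(((i - j) ** 2 * M).sum())

rng = np.random.default_rng(5)
uniform_patch = np.full((16, 16), 0.5)   # flat staining
noisy_patch = rng.random((16, 16))       # heterogeneous staining
print(glcm_contrast(glcm(uniform_patch)), glcm_contrast(glcm(noisy_patch)))
```

The flat patch yields zero contrast while the heterogeneous one does not, which is what lets such texture features separate, for example, smooth cytoplasmic staining from speckled patterns before the SVM/LDA step.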
Collapse
Affiliation(s)
- Issac Niwas Swamidoss
- Department of Electronics and Communication Engineering, National Institute of Technology (NIT), Tiruchirappalli, Tamil Nadu, India ; Centre for Image Analysis (CBA) and SciLife Lab, Uppsala University, Uppsala, Sweden
| | | | | | | | | | | | | | | |
Collapse
|