1. Liu R, Dai W, Wu C, Wu T, Wang M, Zhou J, Zhang X, Li WJ, Liu J. Deep Learning-Based Microscopic Cell Detection Using Inverse Distance Transform and Auxiliary Counting. IEEE J Biomed Health Inform 2024; 28:6092-6104. [PMID: 38900626] [DOI: 10.1109/jbhi.2024.3417229]
Abstract
Microscopic cell detection is a challenging task due to significant inter-cell occlusions in dense clusters and diverse cell morphologies. This paper introduces a novel framework designed to enhance automated cell detection. The proposed approach integrates a deep learning model that produces an inverse distance transform-based detection map from the given image, accompanied by a secondary network designed to regress a cell density map from the same input. The inverse distance transform-based map effectively highlights each cell instance in the densely populated areas, while the density map accurately estimates the total cell count in the image. Then, a custom counting-aided cell center extraction strategy leverages the cell count obtained by integrating over the density map to refine the detection process, significantly reducing false responses and thereby boosting overall accuracy. The proposed framework demonstrated superior performance with F-scores of 96.93%, 91.21%, and 92.00% on the VGG, MBM, and ADI datasets, respectively, surpassing existing state-of-the-art methods. It also achieved the lowest distance error, further validating the effectiveness of the proposed approach. These results demonstrate significant potential for automated cell analysis in biomedical applications.
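The counting-aided center extraction described above can be illustrated with a toy sketch (all functions, grid sizes, and parameters here are hypothetical stand-ins, not the authors' code): build an inverse-distance detection map that peaks at cell centers, then keep only as many local maxima as the density-map integral predicts, discarding weaker false responses.

```python
import math

def inverse_distance_map(shape, centers):
    """Toy inverse distance transform: response is highest (1.0) at a cell
    center and decays with distance, mimicking the paper's detection target."""
    h, w = shape
    dmap = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = min(math.hypot(y - cy, x - cx) for cy, cx in centers)
            dmap[y][x] = 1.0 / (1.0 + d)
    return dmap

def counting_aided_peaks(dmap, estimated_count):
    """Keep only the `estimated_count` strongest strict local maxima,
    mimicking counting-aided suppression of false detections."""
    h, w = len(dmap), len(dmap[0])
    maxima = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = dmap[y][x]
            neigh = [dmap[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            if all(v > n for n in neigh):
                maxima.append((v, (y, x)))
    maxima.sort(reverse=True)
    return [pos for _, pos in maxima[:estimated_count]]

centers = [(3, 3), (3, 10), (10, 6)]
dmap = inverse_distance_map((14, 14), centers)
# In the paper the count comes from integrating the density map; here we
# simply pass the true count of 3.
peaks = counting_aided_peaks(dmap, estimated_count=3)
```

In the real pipeline the detection map is regressed by one network and `estimated_count` by the other; the extraction step only ties the two together.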
2. Wang R, Yang S, Li Q, Zhong D. CytoGAN: Unpaired staining transfer by structure preservation for cytopathology image analysis. Comput Biol Med 2024; 180:108942. [PMID: 39096614] [DOI: 10.1016/j.compbiomed.2024.108942]
Abstract
With the development of digital pathology, deep learning is increasingly being applied to endometrial cell morphology analysis for cancer screening. However, cytology images with different staining can degrade the performance of these analysis algorithms. To address the impact of staining patterns, many strategies have been proposed, and hematoxylin and eosin (H&E) images have been transferred to other staining styles. However, none of the existing methods can generate realistic cytological images with preserved cellular layout, so much clinically important structural information is lost. To address these issues, we propose a stain transformation model, CytoGAN, which can quickly and realistically generate images with different staining styles. It includes a novel structure preservation module that preserves the cell structure well, even if the resolution or cell size between the source and target domains do not match. Meanwhile, a stain adaptive module is designed to help the model generate realistic, high-quality endometrial cytology images. We compared our model with ten state-of-the-art stain transformation models, with results evaluated by two pathologists. Furthermore, in a downstream endometrial cancer classification task, our algorithm improves the robustness of the classification model on multimodal datasets, with more than 20% improvement in accuracy. We found that generating specific stains from existing H&E images improves the diagnosis of endometrial cancer. Our code will be available on GitHub.
Affiliation(s)
- Ruijie Wang
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, PR China.
- Sicheng Yang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, PR China.
- Qiling Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, PR China.
- Dexing Zhong
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, PR China; Pazhou Laboratory, Guangzhou, 510335, PR China; Research Institute of Xi'an Jiaotong University, Zhejiang, 311215, PR China.
3. Lboukili I, Stamatas G, Descombes X. DermoGAN: multi-task cycle generative adversarial networks for unsupervised automatic cell identification on in-vivo reflectance confocal microscopy images of the human epidermis. J Biomed Opt 2024; 29:086003. [PMID: 39099678] [PMCID: PMC11294601] [DOI: 10.1117/1.jbo.29.8.086003]
Abstract
Significance: Accurate identification of epidermal cells on reflectance confocal microscopy (RCM) images is important in the study of epidermal architecture and topology of both healthy and diseased skin. However, analysis of these images is currently done manually and is therefore time-consuming, subject to human error and inter-expert interpretation, and hindered by low image quality due to noise and heterogeneity.
Aim: We aimed to design an automated pipeline for the analysis of the epidermal structure from RCM images.
Approach: Two prior attempts have been made at automatically localizing epidermal cells, called keratinocytes, on RCM images: the first is based on a rotationally symmetric error function mask, the second on cell morphological features. Here, we propose a dual-task network to automatically identify keratinocytes on RCM images. Each task consists of a cycle generative adversarial network. The first task translates real RCM images into binary images, thus learning the noise and texture model of RCM images, whereas the second task maps Gabor-filtered RCM images into binary images, learning the epidermal structure visible on RCM images. The combination of the two tasks allows one task to constrain the solution space of the other, improving overall results. We refine the cell identification by applying the pre-trained StarDist algorithm to detect star-convex shapes, thus closing any incomplete membranes and separating neighboring cells.
Results: The results are evaluated on both simulated data and manually annotated real RCM data. Accuracy is measured using recall and precision, summarized as the F1-score.
Conclusions: We demonstrate that the proposed fully unsupervised method successfully identifies keratinocytes on RCM images of the epidermis, with an accuracy on par with experts' cell identification; it is not constrained by limited available annotated data and can be extended to images acquired using various imaging techniques without retraining.
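The Gabor filtering that feeds the second task can be sketched as follows. This is a generic real-valued Gabor kernel (Gaussian envelope times an oriented cosine carrier) with illustrative parameters, not the paper's implementation:

```python
import math

def gabor_kernel(size, theta, lam, sigma, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor kernel: a Gaussian envelope modulating an
    oriented cosine, which emphasizes membrane-like texture at angle theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)    # rotated coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                           / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel

# Illustrative parameters: 7x7 kernel, horizontal carrier, wavelength 4 px.
k = gabor_kernel(size=7, theta=0.0, lam=4.0, sigma=2.0)
center = k[3][3]  # envelope = 1 and cos(0) = 1 at the origin
```

Convolving an RCM image with a bank of such kernels at several orientations is the standard way to produce the "Gabor-filtered" input the abstract refers to.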
Affiliation(s)
- Imane Lboukili
- Johnson & Johnson Santé Beauté France, Paris, France
- UCA, INRIA, I3S/CNRS, Sophia Antipolis, France
4. Zhang W, Wang Z. An approach of separating the overlapped cells or nuclei based on the outer Canny edges and morphological erosion. Cytometry A 2024; 105:266-275. [PMID: 38111162] [DOI: 10.1002/cyto.a.24819]
Abstract
In biomedicine, automatic processing of medical microscope images plays a key role in subsequent analysis and diagnosis. Cell or nucleus segmentation is one of the most challenging tasks in microscope image processing: because cells and nuclei frequently overlap, few segmentation methods achieve satisfactory accuracy. In this paper, we propose an approach to separate overlapped cells or nuclei based on the outer Canny edges and morphological erosion. Threshold selection is first used to segment the foreground and background of cell or nucleus images. For each binary connected domain in the segmented image, an intersection-based edge selection method is proposed to choose the outer Canny edges of the overlapped cells or nuclei. These outer edges are used to generate a binary cell or nucleus image, from which cell or nucleus seeds are computed by the proposed morphological erosion method. Nuclei of human U2OS cells, mouse NIH3T3 cells, and synthetic cells are used to evaluate the approach. Segmentation accuracy, quantified by the Dice score, reaches 95.53%. Both quantitative and qualitative comparisons show that the proposed approach separates cells or nuclei in three publicly available datasets more accurately than the area constrained morphological erosion (ACME), iterative erosion (IE), morphology and watershed (MW), generalized Laplacian of Gaussian filters (GLGF), and ellipse fitting (EF) methods.
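The seed-extraction idea, iteratively eroding a binary blob until touching objects separate, can be sketched in plain Python. This is a simplified stand-in for the proposed erosion step (the two-disk geometry and the stopping rule are illustrative, not the paper's code):

```python
def erode(mask):
    """One step of binary erosion with a 3x3 structuring element."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def count_components(mask):
    """Count 4-connected foreground components via flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                n += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] \
                            and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return n

# Two overlapping disks (radius 6, centers 10 px apart) form a single blob.
h, w = 21, 31
blob = [[int(min((y - 10) ** 2 + (x - 10) ** 2,
                 (y - 10) ** 2 + (x - 20) ** 2) <= 36)
         for x in range(w)] for y in range(h)]

# Erode until the blob splits into seeds (bounded, in case it never splits).
mask = blob
for _ in range(10):
    mask = erode(mask)
    if count_components(mask) != 1:
        break
seeds = mask  # two separated seed regions, one per cell
```

In the paper the erosion runs on the image reconstructed from the outer Canny edges rather than on a raw threshold mask, which is what makes the seeds robust to concavities at the overlap.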
Affiliation(s)
- Wenfei Zhang
- College of Electrical and Electronic Engineering, Shandong University of Technology, Zibo, China
- Zhenzhou Wang
- School of Computer Science and Technology, Huaibei Normal University, Huaibei, China
5. Wiesner D, Suk J, Dummer S, Nečasová T, Ulman V, Svoboda D, Wolterink JM. Generative modeling of living cells with SO(3)-equivariant implicit neural representations. Med Image Anal 2024; 91:102991. [PMID: 37839341] [DOI: 10.1016/j.media.2023.102991]
Abstract
Data-driven cell tracking and segmentation methods in biomedical imaging require diverse and information-rich training data. In cases where the number of training samples is limited, synthetic computer-generated data sets can be used to improve these methods. This requires the synthesis of cell shapes as well as corresponding microscopy images using generative models. To synthesize realistic living cell shapes, the shape representation used by the generative model should be able to accurately represent fine details and changes in topology, which are common in cells. These requirements are not met by 3D voxel masks, which are restricted in resolution, and polygon meshes, which do not easily model processes like cell growth and mitosis. In this work, we propose to represent living cell shapes as level sets of signed distance functions (SDFs) which are estimated by neural networks. We optimize a fully-connected neural network to provide an implicit representation of the SDF value at any point in a 3D+time domain, conditioned on a learned latent code that is disentangled from the rotation of the cell shape. We demonstrate the effectiveness of this approach on cells that exhibit rapid deformations (Platynereis dumerilii), cells that grow and divide (C. elegans), and cells that have growing and branching filopodial protrusions (A549 human lung carcinoma cells). A quantitative evaluation using shape features and Dice similarity coefficients of real and synthetic cell shapes shows that our model can generate topologically plausible complex cell shapes in 3D+time with high similarity to real living cell shapes. Finally, we show how microscopy images of living cells that correspond to our generated cell shapes can be synthesized using an image-to-image model.
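The core representation, a shape as the zero level set of a time-conditioned SDF, can be illustrated with a hand-written SDF in place of the learned network. The growing-sphere model below is a hypothetical stand-in for the trained MLP, not the authors' model:

```python
import math

def cell_sdf(p, t):
    """Stand-in for the learned network: signed distance from point p to a
    sphere whose radius grows over time (a crude model of cell growth).
    Negative inside, positive outside, zero on the membrane."""
    radius = 1.0 + 0.5 * t
    return math.sqrt(sum(c * c for c in p)) - radius

def voxelize(t, n=16, extent=2.0):
    """Sample the implicit shape on an n^3 voxel grid: a voxel is inside the
    cell where the SDF is negative (the zero level set is the boundary).
    Unlike a fixed voxel mask, the SDF can be sampled at any resolution."""
    step = 2 * extent / n
    inside = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                p = (-extent + (i + 0.5) * step,
                     -extent + (j + 0.5) * step,
                     -extent + (k + 0.5) * step)
                inside.append(cell_sdf(p, t) < 0)
    return inside

v0, v1 = voxelize(0.0), voxelize(1.0)
grown = sum(v1) - sum(v0)  # the later shape occupies more voxels
```

In the paper, `cell_sdf` is a fully connected network conditioned on a latent code disentangled from rotation, so the same machinery also covers topology changes such as mitosis.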
Affiliation(s)
- David Wiesner
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic.
- Julian Suk
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Sven Dummer
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Tereza Nečasová
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Vladimír Ulman
- IT4Innovations, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- David Svoboda
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Jelmer M Wolterink
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands.
6. Du G, Zhang P, Guo J, Pang X, Kan G, Zeng B, Chen X, Liang J, Zhan Y. MF-Net: Automated Muscle Fiber Segmentation From Immunofluorescence Images Using a Local-Global Feature Fusion Network. J Digit Imaging 2023; 36:2411-2426. [PMID: 37714969] [PMCID: PMC10584774] [DOI: 10.1007/s10278-023-00890-1]
Abstract
Histological assessment of skeletal muscle slices is very important for the accurate evaluation of weightlessness-induced muscle atrophy, and accurate identification and segmentation of muscle fiber boundaries is an important prerequisite for evaluating skeletal muscle fiber atrophy. However, segmenting muscle fibers from immunofluorescence images poses many challenges, including the low contrast of fiber boundaries and the influence of background noise. Traditional convolutional neural network-based segmentation methods are limited in capturing global information and therefore cannot achieve ideal segmentation results. In this paper, we propose a muscle fiber segmentation network (MF-Net) for effective segmentation of macaque muscle fibers in immunofluorescence images. The network adopts a dual encoder branch composed of a convolutional neural network and a transformer to capture local and global feature information in the immunofluorescence image, highlight foreground features, and suppress irrelevant background noise. In addition, a low-level feature decoder module is proposed to capture more global context by combining different image scales to supplement missing detail pixels. In this study, a comprehensive experiment was carried out on immunofluorescence datasets from six macaque weightlessness models and compared with state-of-the-art deep learning models. Five segmentation metrics demonstrate that the proposed automatic method can be accurately and effectively applied to muscle fiber segmentation in shank immunofluorescence images.
Affiliation(s)
- Peng Zhang
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Jianzhong Guo
- Institute of Applied Acoustics, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, 710062, China
- Xiangsheng Pang
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Guanghan Kan
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Bin Zeng
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Xiaoping Chen
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China.
- Jimin Liang
- School of Electronic Engineering, Xidian University, Xi'an, Shaanxi, 710071, China.
- Yonghua Zhan
- School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi'an, Shaanxi, 710126, China.
7. Ding Y, Zheng Y, Han Z, Yang X. Using optimal transport theory to optimize a deep convolutional neural network microscopic cell counting method. Med Biol Eng Comput 2023; 61:2939-2950. [PMID: 37532907] [DOI: 10.1007/s11517-023-02862-7]
Abstract
Medical image processing has become increasingly important in recent years, particularly in the field of microscopic cell imaging. However, accurately counting the number of cells in an image can be challenging due to significant variations in cell size and shape. To tackle this problem, many existing methods rely on deep learning techniques, such as convolutional neural networks (CNNs), to count cells directly, or use regression-based counting to learn the mapping between an input image and a predicted cell density map. In this paper, we propose a novel approach that optimizes the loss function using optimal transport, a rigorous measure of the difference between the count map predicted by the CNN and the dot annotation map. We evaluated our algorithm on three publicly available cell counting benchmarks: the synthetic fluorescence microscopy (VGG) dataset, the modified bone marrow (MBM) dataset, and the human subcutaneous adipose tissue (ADI) dataset. Our method outperforms other state-of-the-art methods, achieving a mean absolute error (MAE) of 2.3, 4.8, and 13.1 on the VGG, MBM, and ADI datasets, respectively, with smaller standard deviations. By using optimal transport, our approach provides a more accurate and reliable cell counting method for medical image processing.
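Why an optimal transport loss suits counting can be seen in one dimension, where the Wasserstein-1 distance has a closed form: the L1 distance between cumulative sums. The example below is illustrative only, not the paper's loss implementation; it shows that a pixelwise loss scores a near miss and a far miss identically, whereas OT grows with displacement:

```python
def wasserstein1_1d(p, q):
    """W1 distance between two 1D histograms of equal total mass:
    the sum of absolute differences of their cumulative sums."""
    assert abs(sum(p) - sum(q)) < 1e-9, "OT requires equal total mass"
    cdf_p = cdf_q = 0.0
    total = 0.0
    for a, b in zip(p, q):
        cdf_p += a
        cdf_q += b
        total += abs(cdf_p - cdf_q)
    return total

# A dot annotation at position 2 vs. predictions one and two pixels off.
annotation = [0, 0, 1, 0, 0]
near_miss  = [0, 0, 0, 1, 0]
far_miss   = [0, 0, 0, 0, 1]

# A pixelwise L1 loss is 2.0 for both misses; OT distinguishes them.
w_near = wasserstein1_1d(annotation, near_miss)  # 1.0
w_far = wasserstein1_1d(annotation, far_miss)    # 2.0
```

In 2D the closed form no longer applies and OT is typically computed with entropic (Sinkhorn) approximations, but the property demonstrated here, a cost proportional to how far predicted mass must move, is exactly what makes it a useful supervision signal for dot-annotated count maps.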
Affiliation(s)
- Yuanyuan Ding
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China.
- Zeyu Han
- School of Mathematics and Statistics, Shandong University (Weihai), Weihai, 264209, Shandong, China
- Xinbo Yang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China
8. Tasnadi E, Sliz-Nagy A, Horvath P. Structure preserving adversarial generation of labeled training samples for single-cell segmentation. Cell Rep Methods 2023; 3:100592. [PMID: 37725984] [PMCID: PMC10545934] [DOI: 10.1016/j.crmeth.2023.100592]
Abstract
We introduce a generative data augmentation strategy to improve the accuracy of instance segmentation of microscopy data for complex tissue structures. Our pipeline uses regular and conditional generative adversarial networks (GANs) for image-to-image translation to construct synthetic microscopy images along with their corresponding masks, simulating the distribution, shape, and appearance of the objects. The synthetic samples are then used for training an instance segmentation network (for example, StarDist or Cellpose). We show on two single-cell-resolution tissue datasets that our method improves the accuracy of downstream instance segmentation tasks compared with traditional training strategies using either the raw data or basic augmentations. We also compare the quality of the object masks with those generated by a traditional cell population simulation method, finding that our synthesized masks are closer to the ground truth as measured by Fréchet inception distance.
Affiliation(s)
- Ervin Tasnadi
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary; Doctoral School of Computer Science, University of Szeged, 6720 Szeged, Hungary; Single-Cell Technologies, Ltd, 6726 Szeged, Hungary.
- Alex Sliz-Nagy
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary
- Peter Horvath
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary; Single-Cell Technologies, Ltd, 6726 Szeged, Hungary; Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014 Helsinki, Finland.
9. Gao G, Walter NG. Critical Assessment of Condensate Boundaries in Dual-Color Single Particle Tracking. J Phys Chem B 2023; 127:7694-7707. [PMID: 37669232] [DOI: 10.1021/acs.jpcb.3c03776]
Abstract
Biomolecular condensates are membraneless cellular compartments generated by phase separation that regulate a broad variety of cellular functions by enriching some biomolecules while excluding others. Live-cell single particle tracking of individual fluorophore-labeled condensate components has provided insights into a condensate's mesoscopic organization and biological functions, such as revealing the recruitment, translation, and decay of RNAs within ribonucleoprotein (RNP) granules. Specifically, during dual-color tracking, one imaging channel provides a time series of individual biomolecule locations, while the other channel monitors the location of the condensate relative to these molecules. Therefore, an accurate assessment of a condensate's boundary is critical for combined live-cell single particle-condensate tracking. Despite its importance, a quantitative benchmarking and objective comparison of the various available boundary detection methods is missing due to the lack of an absolute ground truth for condensate images. Here, we use synthetic data of defined ground truth to generate noise-overlaid images of condensates with realistic phase separation parameters to benchmark the most commonly used methods for condensate boundary detection, including an emerging machine-learning method. We find that it is critical to carefully choose an optimal boundary detection method for a given dataset to obtain accurate measurements of single particle-condensate interactions. The criteria proposed in this study to guide the selection of an optimal boundary detection method can be broadly applied to imaging-based studies of condensates.
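Among the commonly benchmarked boundary-detection approaches is global intensity thresholding. A minimal Otsu's-method sketch on a toy intensity histogram is shown below (illustrative only, not one of the paper's benchmarked implementations; the pixel values are fabricated for the example):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold that maximizes the between-class
    variance of the intensity histogram (dilute phase vs. condensate)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0.0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy image: dim background (~20) and a bright condensate (~200).
pixels = [20] * 900 + [200] * 100
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]  # True inside the condensate
```

As the abstract notes, no single such method wins on all datasets; on noisy images with soft condensate edges, a global threshold like this can substantially misplace the boundary, which is precisely what the synthetic ground-truth benchmark quantifies.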
Affiliation(s)
- Guoming Gao
- Biophysics Graduate Program, University of Michigan, Ann Arbor, Michigan 48109, United States
- Center for RNA Biomedicine, University of Michigan, Ann Arbor, Michigan 48109, United States
- Nils G Walter
- Center for RNA Biomedicine, University of Michigan, Ann Arbor, Michigan 48109, United States
- Department of Chemistry, University of Michigan, Ann Arbor, Michigan 48109, United States
10. Shim C, Kim W, Nguyen TTD, Kim DY, Choi YS, Chung YD. CellTrackVis: interactive browser-based visualization for analyzing cell trajectories and lineages. BMC Bioinformatics 2023; 24:124. [PMID: 36991341] [DOI: 10.1186/s12859-023-05218-y]
Abstract
Background: Automatic cell tracking methods enable practitioners to analyze cell behaviors efficiently. Notwithstanding the continuous development of relevant software, user-friendly visualization tools have room for improvement. Typical visualizations ship with the main cell tracking tools as simple plug-ins, or rely on specific software or platforms. Although some tools are standalone, they provide limited visual interactivity or visualize cell tracking outputs only partially.
Results: This paper proposes a self-reliant visualization system, CellTrackVis, to support quick and easy analysis of cell behaviors. Interconnected views help users discover meaningful patterns of cell motions and divisions in common web browsers. Specifically, cell trajectories, lineages, and quantified information are visualized in a coordinated interface. Immediate interactions among modules make the study of cell tracking outputs more effective, and each component is highly customizable for various biological tasks.
Conclusions: CellTrackVis is a standalone browser-based visualization tool. Source code and data sets are freely available at http://github.com/scbeom/celltrackvis with the tutorial at http://scbeom.github.io/ctv_tutorial
Affiliation(s)
- Changbeom Shim
- School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Perth, Australia
- Wooil Kim
- Data Intelligence Team, Samsung Research, Seoul, South Korea
- Tran Thien Dat Nguyen
- School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Perth, Australia
- Du Yong Kim
- School of Engineering, RMIT University, Melbourne, Australia
- Yu Suk Choi
- School of Human Sciences, University of Western Australia, Perth, Australia
- Yon Dohn Chung
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea.
11. Homeyer A, Geißler C, Schwen LO, Zakrzewski F, Evans T, Strohmenger K, Westphal M, Bülow RD, Kargl M, Karjauv A, Munné-Bertran I, Retzlaff CO, Romero-López A, Sołtysiński T, Plass M, Carvalho R, Steinbach P, Lan YC, Bouteldja N, Haber D, Rojas-Carulla M, Vafaei Sadr A, Kraft M, Krüger D, Fick R, Lang T, Boor P, Müller H, Hufnagl P, Zerbe N. Recommendations on compiling test datasets for evaluating artificial intelligence solutions in pathology. Mod Pathol 2022; 35:1759-1769. [PMID: 36088478] [PMCID: PMC9708586] [DOI: 10.1038/s41379-022-01147-y]
Abstract
Artificial intelligence (AI) solutions that automatically extract information from digital histology images have shown great promise for improving pathological diagnosis. Prior to routine use, it is important to evaluate their predictive performance and obtain regulatory approval. This assessment requires appropriate test datasets. However, compiling such datasets is challenging and specific recommendations are missing. A committee of various stakeholders, including commercial AI developers, pathologists, and researchers, discussed key aspects and conducted extensive literature reviews on test datasets in pathology. Here, we summarize the results and derive general recommendations on compiling test datasets. We address several questions: Which and how many images are needed? How to deal with low-prevalence subsets? How can potential bias be detected? How should datasets be reported? What are the regulatory requirements in different countries? The recommendations are intended to help AI developers demonstrate the utility of their products and to help pathologists and regulatory agencies verify reported performance measures. Further research is needed to formulate criteria for sufficiently representative test datasets so that AI solutions can operate with less user intervention and better support diagnostic workflows in the future.
Affiliation(s)
- André Homeyer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359, Bremen, Germany.
- Christian Geißler
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Lars Ole Schwen
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359, Bremen, Germany
- Falk Zakrzewski
- Institute of Pathology, Carl Gustav Carus University Hospital Dresden (UKD), TU Dresden (TUD), Fetscherstrasse 74, 01307, Dresden, Germany
- Theodore Evans
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Klaus Strohmenger
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
- Max Westphal
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359, Bremen, Germany
- Roman David Bülow
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Michaela Kargl
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010, Graz, Austria
- Aray Karjauv
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Isidre Munné-Bertran
- MoticEurope, S.L.U., C. Les Corts, 12 Poligono Industrial, 08349, Barcelona, Spain
- Carl Orge Retzlaff
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Markus Plass
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010, Graz, Austria
- Rita Carvalho
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
- Peter Steinbach
- Helmholtz-Zentrum Dresden Rossendorf, Bautzner Landstraße 400, 01328, Dresden, Germany
- Yu-Chia Lan
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Nassim Bouteldja
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- David Haber
- Lakera AI AG, Zelgstrasse 7, 8003, Zürich, Switzerland
- Alireza Vafaei Sadr
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Daniel Krüger
- Olympus Soft Imaging Solutions GmbH, Johann-Krane-Weg 39, 48149, Münster, Germany
- Rutger Fick
- Tribun Health, 2 Rue du Capitaine Scott, 75015, Paris, France
- Tobias Lang
- Mindpeak GmbH, Zirkusweg 2, 20359, Hamburg, Germany
- Peter Boor
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Heimo Müller
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010, Graz, Austria
- Peter Hufnagl
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
- Norman Zerbe
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
12. ELMGAN: A GAN-based efficient lightweight multi-scale-feature-fusion multi-task model. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109434]
|
13
|
Sachs CC, Ruzaeva K, Seiffarth J, Wiechert W, Berkels B, Nöh K. CellSium: versatile cell simulator for microcolony ground truth generation. Bioinform Adv 2022; 2:vbac053. [PMID: 36699390 PMCID: PMC9710621 DOI: 10.1093/bioadv/vbac053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Revised: 07/25/2022] [Accepted: 07/28/2022] [Indexed: 01/28/2023]
Abstract
Summary: To train deep learning-based segmentation models, large ground truth datasets are needed. To address this need in microfluidic live-cell imaging, we present CellSium, a flexibly configurable cell simulator built to synthesize realistic image sequences of bacterial microcolonies growing in monolayers. We illustrate that the simulated images are suitable for training neural networks. Synthetic time-lapse videos with and without fluorescence, using programmable cell growth models, and simulation-ready 3D colony geometries for computational fluid dynamics are also supported. Availability and implementation: CellSium is free and open source software under the BSD license, implemented in Python, available at github.com/modsim/cellsium (DOI: 10.5281/zenodo.6193033), along with documentation, usage examples and Docker images. Supplementary information: Supplementary data are available at Bioinformatics Advances online.
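As an illustration of the idea behind simulators like CellSium (my own minimal sketch, not CellSium's API, and the shapes and names are invented): synthetic ground truth comes for free by rendering known cell shapes into an image while simultaneously recording a pixel-accurate instance label map.

```python
import numpy as np

# Hypothetical sketch: render circular "cells" into a synthetic image
# and record an instance label mask at the same time, so no manual
# annotation is ever needed.

def render_colony(shape=(64, 64), cells=((20, 20, 6), (40, 35, 8))):
    img = np.zeros(shape)            # synthetic "microscopy" image
    labels = np.zeros(shape, int)    # instance ground truth, produced for free
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for i, (cy, cx, r) in enumerate(cells, start=1):
        disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        img[disk] = 1.0              # draw the cell body
        labels[disk] = i             # label each instance with its own id
    return img, labels

img, labels = render_colony()
```

A real simulator would add growth models, noise, and point-spread-function blur on top of this skeleton; the labels stay exact throughout.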
Collapse
Affiliation(s)
- Christian Carsten Sachs
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Wolfgang Wiechert
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany; Computational Systems Biotechnology (AVT.CSB), RWTH Aachen University, 52074 Aachen, Germany
- Benjamin Berkels
- Aachen Institute for Advanced Study in Computational Engineering Science (AICES), RWTH Aachen University, 52062 Aachen, Germany
Collapse
|
14
|
Guo Y, Krupa O, Stein J, Wu G, Krishnamurthy A. SAU-Net: A Unified Network for Cell Counting in 2D and 3D Microscopy Images. IEEE/ACM Trans Comput Biol Bioinform 2022; 19:1920-1932. [PMID: 34133284 PMCID: PMC8924707 DOI: 10.1109/tcbb.2021.3089608] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Image-based cell counting is a fundamental yet challenging task with wide applications in biological research. In this paper, we propose a novel unified deep network framework designed to solve this problem for various cell types in both 2D and 3D images. Specifically, we first propose SAU-Net for cell counting by extending the segmentation network U-Net with a Self-Attention module. Second, we design an extension of Batch Normalization (BN) to facilitate the training process for small datasets. In addition, a new 3D benchmark dataset based on the existing mouse blastocyst (MBC) dataset is developed and released to the community. Our SAU-Net achieves state-of-the-art results on four benchmark 2D datasets - synthetic fluorescence microscopy (VGG) dataset, Modified Bone Marrow (MBM) dataset, human subcutaneous adipose tissue (ADI) dataset, and Dublin Cell Counting (DCC) dataset, and the new 3D dataset, MBC. The BN extension is validated using extensive experiments on the 2D datasets, since GPU memory constraints preclude use of 3D datasets. The source code is available at https://github.com/mzlr/sau-net.
Collapse
|
15
|
Generation of microbial colonies dataset with deep learning style transfer. Sci Rep 2022; 12:5212. [PMID: 35338253 PMCID: PMC8956727 DOI: 10.1038/s41598-022-09264-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 03/21/2022] [Indexed: 01/02/2023] Open
Abstract
We introduce an effective strategy to generate an annotated synthetic dataset of microbiological images of Petri dishes that can be used to train deep learning models in a fully supervised fashion. The developed generator employs traditional computer vision algorithms together with a neural style transfer method for data augmentation. We show that the method is able to synthesize a dataset of realistic looking images that can be used to train a neural network model capable of localising, segmenting, and classifying five different microbial species. Our method requires significantly fewer resources to obtain a useful dataset than collecting and labeling a whole large set of real images with annotations. We show that starting with only 100 real images, we can generate data to train a detector that achieves comparable results (detection mAP = 0.416, counting MAE = 4.49) to the same detector trained on a real, several dozen times bigger dataset (mAP = 0.520, MAE = 4.31), containing over 7 k images. We prove the usefulness of the method in microbe detection and segmentation, but we expect that it is general and flexible and can also be applied in other domains of science and industry to detect various objects.
Collapse
|
16
|
Rahali R, Dridi N, Ben Salem Y, Descombes X, Debreuve E, De Graeve F, Dahman H. Biological image segmentation using Region-Scalable Fitting Energy with B-spline level set implementation and Watershed. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.02.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
17
|
Litrico M, Battiato S, Tsaftaris SA, Giuffrida MV. Semi-Supervised Domain Adaptation for Holistic Counting under Label Gap. J Imaging 2021; 7:jimaging7100198. [PMID: 34677284 PMCID: PMC8541592 DOI: 10.3390/jimaging7100198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Revised: 09/20/2021] [Accepted: 09/21/2021] [Indexed: 11/16/2022] Open
Abstract
This paper proposes a novel approach for semi-supervised domain adaptation for holistic regression tasks, where a DNN predicts a continuous value y∈R given an input image x. The current literature generally lacks specific domain adaptation approaches for this task, as most of them focus on classification. In the context of holistic regression, most real-world datasets not only exhibit a covariate (or domain) shift, but also a label gap: the target dataset may contain labels not included in the source dataset (and vice versa). We propose an approach tackling both covariate shift and label gap in a unified training framework. Specifically, a Generative Adversarial Network (GAN) is used to reduce covariate shift, and label gap is mitigated via label normalisation. To avoid overfitting, we propose a stopping criterion that simultaneously takes advantage of the Maximum Mean Discrepancy and the GAN Global Optimality condition. To restore the original label range, which was previously normalised, a handful of annotated images from the target domain are used. Our experimental results, run on 3 different datasets, demonstrate that our approach drastically outperforms the state-of-the-art across the board. Specifically, for the cell counting problem, the mean squared error (MSE) is reduced from 759 to 5.62; in the case of the pedestrian dataset, our approach lowered the MSE from 131 to 1.47. For the last experimental setup, we borrowed a task from plant biology, i.e., counting the number of leaves in a plant, and we ran two series of experiments, showing the MSE is reduced from 2.36 to 0.88 (intra-species), and from 1.48 to 0.6 (inter-species).
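The label-normalisation step can be sketched as follows (my own illustration, not the authors' code; the counts and ranges are invented): labels are mapped to [0, 1] for training across domains, and the original target range is later restored from a handful of annotated target-domain values.

```python
import numpy as np

# Hypothetical sketch of label normalisation for a label gap between
# a source and a target dataset with different count ranges.

def normalise(y, lo, hi):
    # map labels into [0, 1] so source and target share a common range
    return (y - lo) / (hi - lo)

def denormalise(y_norm, lo, hi):
    # map normalised predictions back into the target label range
    return y_norm * (hi - lo) + lo

source_counts = np.array([10.0, 20.0, 30.0])   # invented source labels
target_anchor = np.array([100.0, 300.0])       # few annotated target images
y_norm = normalise(source_counts, source_counts.min(), source_counts.max())
restored = denormalise(y_norm, target_anchor.min(), target_anchor.max())
# restored -> [100., 200., 300.]
```

The handful of annotated target images plays exactly the role of `target_anchor` here: it fixes the endpoints needed to undo the normalisation.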
Collapse
Affiliation(s)
- Mattia Litrico
- Department of Mathematics and Computer Science, University of Catania, 95125 Catania, Italy; (M.L.); (S.B.)
- Sebastiano Battiato
- Department of Mathematics and Computer Science, University of Catania, 95125 Catania, Italy; (M.L.); (S.B.)
- Mario Valerio Giuffrida
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
- Correspondence: ; Tel.: +44-131-455-2744
Collapse
|
18
|
Depto DS, Rahman S, Hosen MM, Akter MS, Reme TR, Rahman A, Zunair H, Rahman MS, Mahdy MRC. Automatic segmentation of blood cells from microscopic slides: A comparative analysis. Tissue Cell 2021; 73:101653. [PMID: 34555777 DOI: 10.1016/j.tice.2021.101653] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 09/02/2021] [Accepted: 09/15/2021] [Indexed: 11/15/2022]
Abstract
With the recent developments in deep learning, automatic cell segmentation from images of microscopic examination slides seems to be a solved problem, as recent methods have achieved comparable results on existing benchmark datasets. However, most existing cell segmentation benchmark datasets either contain a single cell type, include only a few instances of the cells, or are not publicly available. Therefore, it is unclear whether the performance improvements can generalize to more diverse datasets. In this paper, we present a large and diverse cell segmentation dataset, BBBC041Seg, which consists of both uninfected cells (i.e., red blood cells/RBCs, leukocytes) and infected cells (i.e., gametocytes, rings, trophozoites, and schizonts). Additionally, not all cell types have equal numbers of instances, which encourages researchers to develop algorithms for learning from imbalanced classes in a few-shot learning paradigm. Furthermore, we conduct a comparative study using both classical rule-based and recent deep learning state-of-the-art (SOTA) methods for automatic cell segmentation and provide them as strong baselines. We believe the introduction of BBBC041Seg will promote future research towards clinically applicable cell segmentation methods from microscopic examinations, which can later be used for downstream tasks such as detecting hematological diseases (i.e., malaria).
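Comparative studies of segmentation baselines like the one above are commonly scored with an overlap metric such as the Dice coefficient; the following is a generic sketch of that metric (my illustration, not the paper's evaluation code):

```python
import numpy as np

# Generic Dice coefficient between a predicted binary mask and the
# ground-truth mask: 2*|A∩B| / (|A| + |B|), in [0, 1].

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

truth = np.zeros((8, 8), int); truth[2:6, 2:6] = 1   # 16-pixel "cell"
pred  = np.zeros((8, 8), int); pred[3:7, 3:7]  = 1   # shifted prediction
score = dice(pred, truth)   # 9 overlapping pixels -> 2*9/(16+16) = 0.5625
```

Averaging this score over all cells and images gives a single number per baseline, which is how rule-based and deep learning methods can be ranked side by side.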
Collapse
Affiliation(s)
- Deponker Sarker Depto
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
- Shazidur Rahman
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
- Md Mekayel Hosen
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
- Mst Shapna Akter
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
- Tamanna Rahman Reme
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
- Aimon Rahman
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
- M Sohel Rahman
- Department of Computer Science & Engineering, Bangladesh University of Engineering and Technology, ECE Building, West Palasi, Dhaka, 1205, Bangladesh.
- M R C Mahdy
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
Collapse
|
19
|
Szkalisity A, Piccinini F, Beleon A, Balassa T, Varga IG, Migh E, Molnar C, Paavolainen L, Timonen S, Banerjee I, Ikonen E, Yamauchi Y, Ando I, Peltonen J, Pietiäinen V, Honti V, Horvath P. Regression plane concept for analysing continuous cellular processes with machine learning. Nat Commun 2021; 12:2532. [PMID: 33953203 PMCID: PMC8100172 DOI: 10.1038/s41467-021-22866-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Accepted: 03/30/2021] [Indexed: 01/16/2023] Open
Abstract
Biological processes are inherently continuous, and the chance of phenotypic discovery is significantly restricted by discretising them. Using multi-parametric active regression we introduce the Regression Plane (RP), a user-friendly discovery tool enabling class-free phenotypic supervised machine learning, to describe and explore biological data in a continuous manner. First, we compare traditional classification with regression in a simulated experimental setup. Second, we use our framework to identify genes involved in regulating triglyceride levels in human cells. Subsequently, we analyse a time-lapse dataset on mitosis to demonstrate that the proposed methodology is capable of modelling complex processes at infinite resolution. Finally, we show that hemocyte differentiation in Drosophila melanogaster has continuous characteristics.
Collapse
Affiliation(s)
- Abel Szkalisity
- Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Szeged, Hungary
- Department of Anatomy and Stem Cells and Metabolism Research Program, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Filippo Piccinini
- Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori (IRST) IRCCS, Meldola, FC, Italy
- Attila Beleon
- Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Szeged, Hungary
- Tamas Balassa
- Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Szeged, Hungary
- Ede Migh
- Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Szeged, Hungary
- Csaba Molnar
- Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Szeged, Hungary
- Lassi Paavolainen
- Institute for Molecular Medicine Finland-FIMM, Helsinki Institute of Life Science-HiLIFE, University of Helsinki, Helsinki, Finland
- Sanna Timonen
- Institute for Molecular Medicine Finland-FIMM, Helsinki Institute of Life Science-HiLIFE, University of Helsinki, Helsinki, Finland
- Indranil Banerjee
- Indian Institute of Science Education and Research (IISER), Mohali, India
- Elina Ikonen
- Department of Anatomy and Stem Cells and Metabolism Research Program, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Yohei Yamauchi
- School of Cellular and Molecular Medicine, University of Bristol, BS8 1TD University Walk, Bristol, UK
- Istvan Ando
- Institute of Genetics, Biological Research Center (BRC), Szeged, Hungary
- Jaakko Peltonen
- Faculty of Information Technology and Communication Sciences, Tampere University, FI-33014 Tampere University, Tampere, Finland
- Department of Computer Science, Aalto University, Aalto, Finland
- Vilja Pietiäinen
- Institute for Molecular Medicine Finland-FIMM, Helsinki Institute of Life Science-HiLIFE, University of Helsinki, Helsinki, Finland
- Viktor Honti
- Institute of Genetics, Biological Research Center (BRC), Szeged, Hungary
- Peter Horvath
- Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Szeged, Hungary.
- Institute for Molecular Medicine Finland-FIMM, Helsinki Institute of Life Science-HiLIFE, University of Helsinki, Helsinki, Finland.
- Single-Cell Technologies Ltd., Szeged, Hungary.
Collapse
|
20
|
He S, Minn KT, Solnica-Krezel L, Anastasio MA, Li H. Deeply-supervised density regression for automatic cell counting in microscopy images. Med Image Anal 2021; 68:101892. [PMID: 33285481 PMCID: PMC7856299 DOI: 10.1016/j.media.2020.101892] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 11/06/2020] [Accepted: 11/07/2020] [Indexed: 12/21/2022]
Abstract
Accurately counting the number of cells in microscopy images is required in many medical diagnoses and biological studies. This task is tedious, time-consuming, and prone to subjective errors. However, designing automatic counting methods remains challenging due to low image contrast, complex backgrounds, large variance in cell shapes and counts, and significant cell occlusions in two-dimensional microscopy images. In this study, we propose a new density regression-based method for automatically counting cells in microscopy images. The proposed method introduces two innovations compared to other state-of-the-art density regression-based methods. First, the density regression model (DRM) is designed as a concatenated fully convolutional regression network (C-FCRN) to employ multi-scale image features for the estimation of cell density maps from given images. Second, auxiliary convolutional neural networks (AuxCNNs) are employed to assist in the training of intermediate layers of the designed C-FCRN to improve the DRM performance on unseen datasets. Experimental studies evaluated on four datasets demonstrate the superior performance of the proposed method.
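The density-regression idea above rests on a simple property: if each cell contributes a kernel that integrates to one, summing the predicted density map yields the cell count. A minimal sketch of that counting principle (my illustration, not the authors' C-FCRN; here the "prediction" is faked by placing normalized Gaussians at known centers):

```python
import numpy as np

# Hypothetical sketch: build a density map from cell centers using
# Gaussian kernels that each sum to 1, then recover the count by
# integrating (summing) the map.

def gaussian_kernel(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()          # each kernel integrates to 1 -> one cell

def density_map(shape, centers, size=15, sigma=3.0):
    dmap = np.zeros(shape)
    k, r = gaussian_kernel(size, sigma), size // 2
    for y, x in centers:        # centers chosen away from the borders
        dmap[y - r:y + r + 1, x - r:x + r + 1] += k
    return dmap

dmap = density_map((64, 64), [(20, 20), (30, 40), (45, 25)])
count = dmap.sum()              # integrate the density map -> 3.0
```

In a trained model the network regresses `dmap` directly from the image; the integral-equals-count property is what makes density maps usable as training targets.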
Collapse
Affiliation(s)
- Shenghua He
- Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63110 USA
- Kyaw Thu Minn
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110 USA; Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110 USA
- Lilianna Solnica-Krezel
- Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110 USA; Center of Regenerative Medicine, Washington University School of Medicine in St. Louis, St. Louis, MO 63110 USA
- Mark A Anastasio
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA.
- Hua Li
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA; Carle Cancer Center, Carle Foundation Hospital, Urbana, IL 61801 USA.
Collapse
|
21
|
Ahmad A, Frindel C, Rousseau D. Detecting Differences of Fluorescent Markers Distribution in Single Cell Microscopy: Textural or Pointillist Feature Space? Front Robot AI 2021; 7:39. [PMID: 33501207 PMCID: PMC7805927 DOI: 10.3389/frobt.2020.00039] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Accepted: 03/09/2020] [Indexed: 12/22/2022] Open
Abstract
We consider the detection of change in the spatial distribution of fluorescent markers inside cells imaged by single cell microscopy. Such problems are important in bioimaging since the density of these markers can reflect the healthy or pathological state of cells, the spatial organization of DNA, or cell cycle stage. With the new super-resolved microscopes and associated microfluidic devices, bio-markers can be detected in single cells individually or collectively as a texture depending on the quality of the microscope impulse response. In this work, we propose, via numerical simulations, to address detection of changes in spatial density or in spatial clustering with an individual (pointillist) or collective (textural) approach by comparing their performances according to the size of the impulse response of the microscope. Pointillist approaches show good performances for small impulse response sizes only, while all textural approaches are found to overcome pointillist approaches with small as well as with large impulse response sizes. These results are validated with real fluorescence microscopy images with conventional resolution. This a priori non-intuitive result, viewed from the perspective of the quest for super-resolution, demonstrates that, for difference detection tasks in single cell microscopy, super-resolved microscopes may not be mandatory and that lower-cost, sub-resolved microscopes can be sufficient.
Collapse
Affiliation(s)
- Ali Ahmad
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes, UMR INRAE IRHS, Université d'Angers, Angers, France
- Centre de Recherche en Acquisition et Traitement de l'Image pour la Santé, CNRS UMR 5220-INSERM U1206, Université Lyon 1, INSA de Lyon, Lyon, France
- Carole Frindel
- Centre de Recherche en Acquisition et Traitement de l'Image pour la Santé, CNRS UMR 5220-INSERM U1206, Université Lyon 1, INSA de Lyon, Lyon, France
- David Rousseau
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes, UMR INRAE IRHS, Université d'Angers, Angers, France
Collapse
|
22
|
Ding X, Zhang Q, Welch WJ. Classification Beats Regression: Counting of Cells from Greyscale Microscopic Images Based on Annotation-Free Training Samples. ARTIF INTELL 2021. [DOI: 10.1007/978-3-030-93046-2_56] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
23
|
Połap D. An adaptive genetic algorithm as a supporting mechanism for microscopy image analysis in a cascade of convolution neural networks. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106824] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
|
24
|
Cheng KS, Pan R, Pan H, Li B, Meena SS, Xing H, Ng YJ, Qin K, Liao X, Kosgei BK, Wang Z, Han RP. ALICE: a hybrid AI paradigm with enhanced connectivity and cybersecurity for a serendipitous encounter with circulating hybrid cells. Am J Cancer Res 2020; 10:11026-11048. [PMID: 33042268 PMCID: PMC7532685 DOI: 10.7150/thno.44053] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 05/11/2020] [Indexed: 12/12/2022] Open
Abstract
A fully automated and accurate assay of rare cell phenotypes in densely-packed fluorescently-labeled liquid biopsy images remains elusive. Methods: Employing a hybrid artificial intelligence (AI) paradigm that combines traditional rule-based morphological manipulations with modern statistical machine learning, we deployed a next generation software, ALICE (Automated Liquid Biopsy Cell Enumerator) to identify and enumerate minute amounts of tumor cell phenotypes bestrewed in massive populations of leukocytes. As a code designed for futurity, ALICE is armed with internet of things (IOT) connectivity to promote pedagogy and continuing education and also, an advanced cybersecurity system to safeguard against digital attacks from malicious data tampering. Results: By combining robust principal component analysis, random forest classifier and cubic support vector machine, ALICE was able to detect synthetic, anomalous and tampered input images with an average recall and precision of 0.840 and 0.752, respectively. In terms of phenotyping enumeration, ALICE was able to enumerate various circulating tumor cell (CTC) phenotypes with a reliability ranging from 0.725 (substantial agreement) to 0.961 (almost perfect) as compared to human analysts. Further, two subpopulations of circulating hybrid cells (CHCs) were serendipitously discovered and labeled as CHC-1 (DAPI+/CD45+/E-cadherin+/vimentin-) and CHC-2 (DAPI+ /CD45+/E-cadherin+/vimentin+) in the peripheral blood of pancreatic cancer patients. CHC-1 was found to correlate with nodal staging and was able to classify lymph node metastasis with a sensitivity of 0.615 (95% CI: 0.374-0.898) and specificity of 1.000 (95% CI: 1.000-1.000). Conclusion: This study presented a machine-learning-augmented rule-based hybrid AI algorithm with enhanced cybersecurity and connectivity for the automatic and flexibly-adapting enumeration of cellular liquid biopsies. 
ALICE has the potential to be used in a clinical setting for an accurate and reliable enumeration of CTC phenotypes.
Collapse
|
25
|
Gao Q, Xu Y, Amason J, Loksztejn A, Cousins S, Pajic M, Hadziahmetovic M. Automated Recognition of Retinal Pigment Epithelium Cells on Limited Training Samples Using Neural Networks. Transl Vis Sci Technol 2020; 9:31. [PMID: 32832204 PMCID: PMC7414692 DOI: 10.1167/tvst.9.2.31] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Accepted: 04/07/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose: To develop a neural network (NN)-based approach, with limited training resources, that identifies and counts the number of retinal pigment epithelium (RPE) cells in confocal microscopy images obtained from cell culture or mouse RPE/choroid flat-mounts. Methods: Training and testing datasets contained two image types: wild-type mouse RPE/choroid flat-mounts and ARPE19 cells, stained for Rhodamine-phalloidin, and imaged with confocal microscopy. After image preprocessing for denoising and contrast adjustment, scale-invariant feature transform descriptors were used for feature extraction. Training labels were derived from cells in the original training images, annotated and converted to Gaussian density maps. NNs were trained using the set of training input features, such that the obtained NN models accurately predict the corresponding Gaussian density maps and thus accurately identify/count the cells in any such image. Results: Training and testing datasets contained 229 images from ARPE19 and 85 images from RPE/choroid flat-mounts. Within the two datasets, 30% and 10% of the images were selected for validation. We achieved 96.48% ± 6.56% and 96.88% ± 3.68% accuracy (95% CI) on ARPE19 and RPE/choroid flat-mounts. Conclusions: We developed an NN-based approach that can accurately estimate the number of RPE cells contained in confocal images. Our method achieved high accuracy with limited training images, proved that it can be effectively used on images with unclear and curvy boundaries, and outperformed existing relevant methods by decreasing prediction error and variance. Translational Relevance: This approach allows efficient and effective characterization of RPE pathology and furthermore allows the assessment of novel therapeutics.
Collapse
Affiliation(s)
- Qitong Gao
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Ying Xu
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Joshua Amason
- Department of Ophthalmology, Duke University, Durham, NC, USA
- Anna Loksztejn
- Department of Ophthalmology, Duke University, Durham, NC, USA
- Scott Cousins
- Department of Ophthalmology, Duke University, Durham, NC, USA
- Miroslav Pajic
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Department of Computer Science, Duke University, Durham, NC, USA
Collapse
|
26
|
Rubens U, Mormont R, Paavolainen L, Bäcker V, Pavie B, Scholz LA, Michiels G, Maška M, Ünay D, Ball G, Hoyoux R, Vandaele R, Golani O, Stanciu SG, Sladoje N, Paul-Gilloteaux P, Marée R, Tosi S. BIAFLOWS: A Collaborative Framework to Reproducibly Deploy and Benchmark Bioimage Analysis Workflows. Patterns (N Y) 2020; 1:100040. [PMID: 33205108 PMCID: PMC7660398 DOI: 10.1016/j.patter.2020.100040] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Revised: 04/04/2020] [Accepted: 04/27/2020] [Indexed: 01/26/2023]
Abstract
Image analysis is key to extracting quantitative information from scientific microscopy images, but the methods involved are now often so refined that they can no longer be unambiguously described by written protocols. We introduce BIAFLOWS, an open-source web tool that enables reproducible deployment and benchmarking of bioimage analysis workflows from any software ecosystem. A curated instance of BIAFLOWS populated with 34 image analysis workflows and 15 microscopy image datasets recapitulating common bioimage analysis problems is available online. The workflows can be launched and assessed remotely by comparing their performance visually and according to standard benchmark metrics. We illustrated these features by comparing seven nuclei segmentation workflows, including deep-learning methods. BIAFLOWS makes it possible to benchmark and share bioimage analysis workflows, thereby safeguarding research results and promoting high-quality standards in image analysis. The platform is thoroughly documented and ready to gather annotated microscopy datasets and workflows contributed by the bioimaging community.
Collapse
Affiliation(s)
- Ulysse Rubens
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Romain Mormont
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Volker Bäcker
- MRI, BioCampus Montpellier, Montpellier 34094, France
- Devrim Ünay
- Faculty of Engineering İzmir, Demokrasi University, 35330 Balçova, Turkey
- Graeme Ball
- Dundee Imaging Facility, School of Life Sciences, University of Dundee, Dundee DD1 5EH, UK
- Rémy Vandaele
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Ofra Golani
- Life Sciences Core Facilities, Weizmann Institute of Science, Rehovot 7610001, Israel
- Natasa Sladoje
- Uppsala University, P.O. Box 256, 751 05 Uppsala, Sweden
- Perrine Paul-Gilloteaux
- Structure Fédérative de Recherche François Bonamy, Université de Nantes, CNRS, INSERM, Nantes Cedex 1 13522 44035, France
- Raphaël Marée
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Sébastien Tosi
- Institute for Research in Biomedicine, IRB Barcelona, Barcelona Institute of Science and Technology, BIST, 08028 Barcelona, Spain
Collapse
|
27
|
nucleAIzer: A Parameter-free Deep Learning Framework for Nucleus Segmentation Using Image Style Transfer. Cell Syst 2020; 10:453-458.e6. [PMID: 34222682 PMCID: PMC8247631 DOI: 10.1016/j.cels.2020.04.003] [Citation(s) in RCA: 123] [Impact Index Per Article: 24.6] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Single-cell segmentation is typically a crucial task of image-based cellular analysis. We present nucleAIzer, a deep-learning approach aiming toward a truly general method for localizing 2D cell nuclei across a diverse range of assays and light microscopy modalities. We outperform the 739 methods submitted to the 2018 Data Science Bowl on images representing a variety of realistic conditions, some of which were not represented in the training data. The key to our approach is that during training nucleAIzer automatically adapts its nucleus-style model to unseen and unlabeled data using image style transfer to automatically generate augmented training samples. This allows the model to recognize nuclei in new and different experiments efficiently without requiring expert annotations, making deep learning for nucleus segmentation fairly simple and labor free for most biological light microscopy experiments. It can also be used online, integrated into CellProfiler and freely downloaded at www.nucleaizer.org. A record of this paper's transparent peer review process is included in the Supplemental Information. Microscopy image analysis of single cells can be challenging, but it can also be eased and improved. We developed a deep learning method to segment cell nuclei. Our strategy adapts to unexpected circumstances automatically by synthesizing artificial microscopy images in the relevant domain as training samples.
28
Kowal M, Korbicz J. Refinement of Convolutional Neural Network Based Cell Nuclei Detection Using Bayesian Inference. Annu Int Conf IEEE Eng Med Biol Soc 2019:7216-7222. [PMID: 31947499 DOI: 10.1109/embc.2019.8857950] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Cytological samples provide useful data for cancer diagnostics, but their visual analysis under a microscope is tedious and time-consuming. Moreover, some scientific tests indicate that different pathologists can classify the same sample differently, or that the same pathologist can classify a sample differently if there is a long interval between subsequent examinations. We can help pathologists by providing tools for automatic analysis of cellular structures. Unfortunately, cytological samples usually consist of clumped structures, so it is difficult to extract single cells to measure their morphometric parameters. To deal with this problem, we propose a nuclei detection approach that combines a convolutional neural network with Bayesian inference. The input image is preprocessed by a stain separation procedure to extract the blue dye (hematoxylin), which is mainly absorbed by nuclei. Next, a convolutional neural network is trained to provide a semantic segmentation of the image. Finally, the segmentation results are post-processed in order to detect nuclei. To do that, we model the nuclei distribution on a plane using a marked point process and apply Besag's iterated conditional modes to find the configuration of ellipses that fits the nuclei distribution. Thanks to this, we can represent clusters of occluded cell nuclei as a set of overlapping ellipses. The accuracy of the proposed method was tested on 50 cytological images of breast cancer. Reference data were generated by manually labeling cell nuclei in the images. The effectiveness of the proposed method was compared with the marker-controlled watershed: we applied our method and the marker-controlled watershed to detect nuclei in the semantic segmentation maps generated by the convolutional neural network. The accuracy of nuclei detection is measured as the number of true positive (TP) detections and false positive (FP) detections. The method detected 93.5% of nuclei correctly (TP) while generating only 6.1% FP. The proposed approach led to better results than the marker-controlled watershed, both in the number of correctly detected nuclei and in the number of false detections.
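The TP/FP accounting used in this evaluation is commonly summarized as precision, recall, and an F-score after matching detections to reference points. The sketch below is not the authors' code: the greedy matching strategy and the `max_dist` radius are illustrative assumptions.

```python
import math

def match_detections(detections, ground_truth, max_dist=10.0):
    """Greedily match detected nuclei centers to ground-truth centers.

    A detection counts as a true positive (TP) if it lies within
    `max_dist` pixels of a still-unmatched ground-truth point; otherwise
    it is a false positive (FP). Unmatched ground-truth points are
    false negatives (FN).
    """
    unmatched = list(ground_truth)
    tp = fp = 0
    for d in detections:
        best, best_dist = None, max_dist
        for g in unmatched:
            dist = math.hypot(d[0] - g[0], d[1] - g[1])
            if dist <= best_dist:
                best, best_dist = g, dist
        if best is not None:
            unmatched.remove(best)
            tp += 1
        else:
            fp += 1
    fn = len(unmatched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return tp, fp, fn, precision, recall, f1
```

Greedy matching is order-dependent; benchmark implementations often use an optimal (Hungarian) assignment instead.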
29
Kozubek M. When Deep Learning Meets Cell Image Synthesis. Cytometry A 2019; 97:222-225. [PMID: 31889406 DOI: 10.1002/cyto.a.23957] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2019] [Accepted: 12/03/2019] [Indexed: 02/03/2023]
Affiliation(s)
- Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Czech Republic
30
Consistent validation of gray-level thresholding image segmentation algorithms based on machine learning classifiers. Stat Pap (Berl) 2019. [DOI: 10.1007/s00362-019-01138-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
31
Scalbert M, Couzinie-Devy F, Fezzani R. Generic Isolated Cell Image Generator. Cytometry A 2019; 95:1198-1206. [PMID: 31593370 PMCID: PMC6899488 DOI: 10.1002/cyto.a.23899] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Revised: 08/30/2019] [Accepted: 09/10/2019] [Indexed: 11/24/2022]
Abstract
Building automated cancer screening systems based on image analysis is currently a hot topic in the computer vision and medical imaging community. One of the biggest challenges of such systems, especially those using state-of-the-art deep learning techniques, is that they usually require a large amount of training data to be accurate. However, in the medical field, the confidentiality of the data and the need for medical expertise to label them significantly reduce the amount of training data available. A common practice to overcome this problem is to apply dataset augmentation techniques to artificially increase the size of the training dataset. Classical dataset augmentation methods such as geometrical or color transformations are efficient but still produce a limited amount of new data. Hence, there has been interest in dataset augmentation methods using generative models able to synthesize a wider variety of new data. VitaDX is currently developing an automated bladder cancer screening system based on the analysis of cell images contained in urinary cytology digital slides. At present, the number of available labeled cell images is limited, and therefore exploitation of the full potential of deep learning techniques is not possible. In an attempt to increase the number of labeled cell images, a new generic generator for 2D cell images has been developed and is described in this article. This framework combines previous works on cell image generation with a recent style transfer method referred to here as doodle-style transfer. To the best of our knowledge, we are the first to use a doodle-style transfer method for synthetic cell image generation. The framework is quite modular and could be applied to other cell image generation problems. A statistical evaluation showed that features of real and synthetic cell images followed roughly the same distribution. Finally, the realism of the synthetic cell images was assessed through a visual evaluation performed with the help of medical experts. © 2019 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.
Affiliation(s)
- Marin Scalbert
- Department of Research & Development, VitaDX, Paris, France
- Riadh Fezzani
- Department of Research & Development, VitaDX, Paris, France

32
Liu Q, Junker A, Murakami K, Hu P. Automated Counting of Cancer Cells by Ensembling Deep Features. Cells 2019; 8:1019. [PMID: 31480740 PMCID: PMC6770845 DOI: 10.3390/cells8091019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2019] [Revised: 08/12/2019] [Accepted: 08/29/2019] [Indexed: 01/03/2023] Open
Abstract
High-content and high-throughput digital microscopes have generated large image sets in biological experiments and clinical practice. Automatic image analysis techniques, such as cell counting, are in high demand. Here, cell counting was treated as a regression problem using image features (phenotypes) extracted by deep learning models. Three deep convolutional neural network models were developed to regress image features to their cell counts in an end-to-end way. Theoretically, ensembling imaging phenotypes should have better representative ability than a single type of imaging phenotype. We implemented this idea by integrating two types of imaging phenotypes (dot density map and foreground mask) extracted by two autoencoders and regressing the ensembled imaging phenotypes to cell counts afterwards. Two publicly available datasets with synthetic microscopic images were used to train and test the proposed models. Root mean square error, mean absolute error, mean absolute percent error, and Pearson correlation were applied to evaluate the models’ performance. The well-trained models were also applied to predict the cancer cell counts of real microscopic images acquired in a biological experiment to evaluate the roles of two colorectal-cancer-related genes. The proposed model by ensembling deep imaging features showed better performance in terms of smaller errors and larger correlations than those based on a single type of imaging feature. Overall, all models’ predictions showed a high correlation with the true cell counts. The ensembling-based model integrated high-level imaging phenotypes to improve the estimation of cell counts from high-content and high-throughput microscopic images.
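The four evaluation measures named in the abstract are standard regression metrics. A dependency-free sketch (not the authors' implementation) of how they are computed from true and predicted cell counts:

```python
import math

def count_regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (%), and Pearson correlation for predicted cell counts."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mape = 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / n
    # Pearson correlation between true and predicted counts
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    pearson = cov / (st * sp)
    return rmse, mae, mape, pearson
```

Note that MAPE is undefined when a true count is zero, which is why count benchmarks usually report it alongside the absolute errors.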
Affiliation(s)
- Qian Liu
- Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, MB R3E 0J9, Canada
- Anna Junker
- European Institute of Molecular Imaging, University of Münster, D-48149 Münster, Germany
- Kazuhiro Murakami
- Cancer Research Institute, Kanazawa University, Kanazawa 920-1192, Japan
- Pingzhao Hu
- Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, MB R3E 0J9, Canada
- Research Institute in Oncology and Hematology, CancerCare Manitoba, Winnipeg, MB R3E 0V9, Canada

33
Guo Y, Wu G, Stein J, Krishnamurthy A. SAU-Net: A Universal Deep Network for Cell Counting. ACM BCB 2019:299-306. [PMID: 34046647 PMCID: PMC8153189 DOI: 10.1145/3307339.3342153] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Image-based cell counting is a fundamental yet challenging task with wide applications in biological research. In this paper, we propose a novel Deep Network designed to universally solve this problem for various cell types. Specifically, we first extend the segmentation network, U-Net with a Self-Attention module, named SAU-Net, for cell counting. Second, we design an online version of Batch Normalization to mitigate the generalization gap caused by data augmentation in small datasets. We evaluate the proposed method on four public cell counting benchmarks - synthetic fluorescence microscopy (VGG) dataset, Modified Bone Marrow (MBM) dataset, human subcutaneous adipose tissue (ADI) dataset, and Dublin Cell Counting (DCC) dataset. Our method surpasses the current state-of-the-art performance in the three real datasets (MBM, ADI and DCC) and achieves competitive results in the synthetic dataset (VGG). The source code is available at https://github.com/mzlr/sau-net.
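At the core of any self-attention module is scaled dot-product attention. The sketch below is a deliberately stripped-down, dependency-free illustration that uses identity query/key/value projections (a real module, including SAU-Net's, learns these projections; see the linked repository for the actual code):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    Each output vector is a softmax-weighted average of all input vectors,
    weighted by the scaled dot product of the query with every key.
    """
    d = len(x[0])
    out = []
    for q in x:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in x]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, x)) for j in range(d)])
    return out
```

With one-hot inputs, each token attends most strongly to itself, and every output row is a convex combination of the inputs.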
Affiliation(s)
- Yue Guo
- University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Guorong Wu
- University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jason Stein
- University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

34
A Feature based Reconstruction Model for Fluorescence Microscopy Image Denoising. Sci Rep 2019; 9:7725. [PMID: 31118450 PMCID: PMC6531475 DOI: 10.1038/s41598-019-43973-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Accepted: 05/03/2019] [Indexed: 11/08/2022] Open
Abstract
The advent of fluorescence microscopy over the last few years has dramatically improved the visualization and tracking of specific cellular objects for biological inference. But like any other imaging system, fluorescence microscopy has its own limitations. The resulting images suffer from noise due to both signal-dependent and signal-independent factors, thereby limiting the possibility of biological inference. Denoising is a class of image processing algorithms that aim to remove noise from acquired images, and it has gained wide attention in the field of fluorescence microscopy image restoration. In this paper, we propose an image denoising algorithm based on the concept of feature extraction through multifractal decomposition; a noise-free image is then estimated from the gradients restricted to these features. Experimental results on simulated and real fluorescence microscopy data prove the merit of the proposed approach, both visually and quantitatively.
35
Chai X, Ba Q, Yang G. Characterizing robustness and sensitivity of convolutional neural networks for quantitative analysis of mitochondrial morphology. Quant Biol 2018. [DOI: 10.1007/s40484-018-0156-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
36
Glotsos D, Kostopoulos S, Ravazoula P, Cavouras D. Image quilting and wavelet fusion for creation of synthetic microscopy nuclei images. Comput Methods Programs Biomed 2018; 162:177-186. [PMID: 29903484 DOI: 10.1016/j.cmpb.2018.05.023] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/28/2018] [Revised: 05/09/2018] [Accepted: 05/16/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE In this study, a texture simulation methodology is proposed for composing synthetic tissue microscopy images that could serve as a quantitative gold standard for evaluating the reliability, accuracy, and performance of segmentation algorithms in computer-aided diagnosis. METHODS A library of background and nuclei regions was generated using pre-segmented Haematoxylin and Eosin images of brain tumours. Background image samples were used as input to an image quilting algorithm that produced the synthetic background image. Randomly selected pre-segmented nuclei were then fused at random positions onto the synthetic background using a wavelet-based fusion approach. To investigate whether the produced synthetic images are meaningful and similar to real-world images, two different tests were performed: one qualitative, by an experienced histopathologist, and one quantitative, using normalized mutual information and the Kullback-Leibler divergence. To illustrate the challenges that synthetic images may pose to object recognition algorithms, two segmentation methodologies were utilized for nuclei detection, one based on Otsu thresholding and another based on seeded region growing. RESULTS Results showed a satisfactory to good resemblance of the synthetic images to the real-world images according to both qualitative and quantitative tests. The segmentation accuracy was slightly higher for the seeded region growing algorithm (87.2%) than for Otsu's algorithm (86.3%). CONCLUSIONS Since the exact coordinates of the regions of interest within the synthesised images are known, these images could serve as a 'gold standard' for evaluating segmentation algorithms in computer-aided diagnosis in tissue microscopy.
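The Otsu baseline used in this study selects a global threshold by maximizing the between-class variance of the gray-level histogram; a minimal histogram-based sketch of the method:

```python
def otsu_threshold(hist):
    """Otsu's method: pick the gray level maximizing between-class variance.

    `hist` is a histogram over gray levels 0..len(hist)-1. Pixels with
    level <= returned threshold form one class, the rest the other.
    """
    total = sum(hist)                              # total pixel count
    total_sum = sum(i * h for i, h in enumerate(hist))  # total intensity mass
    best_t, best_var = 0, -1.0
    w0 = 0        # class-0 pixel count so far
    sum0 = 0.0    # class-0 intensity mass so far
    for t, h in enumerate(hist):
        w0 += h
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * h
        m0 = sum0 / w0                 # class-0 mean
        m1 = (total_sum - sum0) / w1   # class-1 mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal histogram the chosen threshold falls between the two modes, which is why Otsu is a common baseline for nuclei/background separation.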
Affiliation(s)
- Dimitris Glotsos
- Medical Image and Signal Processing (medisp) Lab, Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo, 122 10 Athens, Greece
- Spiros Kostopoulos
- Medical Image and Signal Processing (medisp) Lab, Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo, 122 10 Athens, Greece
- Dionisis Cavouras
- Medical Image and Signal Processing (medisp) Lab, Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo, 122 10 Athens, Greece

37
Beheshti M, Ashapure A, Rahnemoonfar M, Faichney J. Fluorescence microscopy image segmentation based on graph and fuzzy methods: A comparison with ensemble method. J Intell Fuzzy Syst 2018. [DOI: 10.3233/jifs-17466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Maedeh Beheshti
- School of Information and Communication Technology, Griffith University, Australia
- Akash Ashapure
- College of Science and Engineering, Texas A&M University-Corpus Christi, USA
- Maryam Rahnemoonfar
- College of Science and Engineering, Texas A&M University-Corpus Christi, USA
- Jolon Faichney
- School of Information and Communication Technology, Griffith University, Australia

38
Lee J, Kolb I, Forest CR, Rozell CJ. Cell Membrane Tracking in Living Brain Tissue Using Differential Interference Contrast Microscopy. IEEE Trans Image Process 2018; 27:1847-1861. [PMID: 29346099 PMCID: PMC5839128 DOI: 10.1109/tip.2017.2787625] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Differential interference contrast (DIC) microscopy is widely used for observing unstained biological samples that are otherwise optically transparent. Combining this optical technique with machine vision could enable the automation of many life science experiments; however, identifying relevant features under DIC is challenging. In particular, precise tracking of cell boundaries in a thick slice of tissue has not previously been accomplished. We present a novel deconvolution algorithm that achieves state-of-the-art performance at identifying and tracking these membrane locations. Our proposed algorithm is formulated as a regularized least squares optimization that incorporates a filtering mechanism to handle organic tissue interference and a robust edge-sparsity regularizer that integrates dynamic edge tracking capabilities. As a secondary contribution, this paper also describes new community infrastructure in the form of a MATLAB toolbox for accurately simulating DIC microscopy images of in vitro brain slices. Building on existing DIC optics modeling, our simulation framework additionally contributes an accurate representation of interference from organic tissue, neuronal cell shapes, and tissue motion due to the action of the pipette. This simulator allows us to better understand the image statistics (to improve algorithms), as well as to quantitatively test cell segmentation and tracking algorithms in scenarios where ground-truth data are fully known.
39
Riccio D, Brancati N, Frucci M, Gragnaniello D. A New Unsupervised Approach for Segmenting and Counting Cells in High-Throughput Microscopy Image Sets. IEEE J Biomed Health Inform 2018; 23:437-448. [PMID: 29994162 DOI: 10.1109/jbhi.2018.2817485] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
New technological advances in automated microscopy have given rise to large volumes of data, which have made human-based analysis infeasible, heightening the need for automatic systems for high-throughput microscopy applications. In particular, in the field of fluorescence microscopy, automatic tools for image analysis are making an essential contribution in order to increase the statistical power of the cell analysis process. The development of these automatic systems is a difficult task due to both the diversification of the staining patterns and the local variability of the images. In this paper, we present an unsupervised approach for automatic cell segmentation and counting, namely CSC, in high-throughput microscopy images. The segmentation is performed by dividing the whole image into square patches that undergo a gray level clustering followed by an adaptive thresholding. Subsequently, the cell labeling is obtained by detecting the centers of the cells, using both distance transform and curvature analysis, and by applying a region growing process. The advantages of CSC are manifold. The foreground detection process works on gray levels rather than on individual pixels, so it proves to be very efficient. Moreover, the combination of distance transform and curvature analysis makes the counting process very robust to clustered cells. A further strength of the CSC method is the limited number of parameters that must be tuned. Indeed, two different versions of the method have been considered, CSC-7 and CSC-3, depending on the number of parameters to be tuned. The CSC method has been tested on several publicly available image datasets of real and synthetic images. Results in terms of standard metrics and spatially aware measures show that CSC outperforms the current state-of-the-art techniques.
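The distance transform that CSC combines with curvature analysis can be illustrated with the classic two-pass city-block sweep over a binary foreground mask. This is a simplified stand-in for the paper's actual implementation, and it assumes the mask contains at least one background pixel:

```python
def distance_transform(mask):
    """City-block distance to the nearest background pixel (two-pass sweep).

    `mask` is a 2D list of 0/1; returns per-pixel distance, 0 on background.
    Local maxima of this map are natural seed candidates for cell centers.
    """
    h, w = len(mask), len(mask[0])
    INF = h + w  # upper bound on any city-block distance inside the image
    d = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate distances from top-left neighbors
    for y in range(h):
        for x in range(w):
            if d[y][x]:
                if y > 0:
                    d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x > 0:
                    d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # backward pass: propagate distances from bottom-right neighbors
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

For a 3×3 foreground blob surrounded by background, the central pixel receives the largest distance, which is exactly the property that makes peaks of the map robust seeds for separating clustered cells.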
40
Piccinini F, Balassa T, Szkalisity A, Molnar C, Paavolainen L, Kujala K, Buzas K, Sarazova M, Pietiainen V, Kutay U, Smith K, Horvath P. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data. Cell Syst 2017. [DOI: 10.1016/j.cels.2017.05.012] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
41
Song J, Xiao L, Lian Z. Boundary-to-Marker Evidence-Controlled Segmentation and MDL-Based Contour Inference for Overlapping Nuclei. IEEE J Biomed Health Inform 2017; 21:451-464. [DOI: 10.1109/jbhi.2015.2504422] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
42
Abdellah M, Bilgili A, Eilemann S, Shillcock J, Markram H, Schürmann F. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation. BMC Bioinformatics 2017; 18:62. [PMID: 28251871 PMCID: PMC5333179 DOI: 10.1186/s12859-016-1444-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND We present a visualization pipeline capable of accurate rendering of highly scattering fluorescent neocortical neuronal models. The pipeline is mainly developed to serve the computational neurobiology community: it allows scientists to visualize the results of virtual experiments performed in computer simulations, or in silico. The presented pipeline opens novel avenues for assisting neuroscientists in building biologically accurate models of the brain. These models result from computer simulations of physical experiments that use fluorescence imaging to understand the structural and functional aspects of the brain. Due to the limited capabilities of current visualization workflows to handle fluorescent volumetric datasets, we propose a physically-based optical model that can accurately simulate light interaction with fluorescent-tagged scattering media based on the basic principles of geometric optics and Monte Carlo path tracing. We also develop an automated and efficient framework for generating dense fluorescent tissue blocks from a neocortical column model that is composed of approximately 31000 neurons. RESULTS Our pipeline is used to visualize a virtual fluorescent tissue block of 50 μm³ reconstructed from the somatosensory cortex of juvenile rat. The fluorescence optical model is qualitatively analyzed and validated against experimental emission spectra of different fluorescent dyes from the Alexa Fluor family. CONCLUSION We discussed a scientific visualization pipeline for creating images of synthetic neocortical neuronal models that are tagged virtually with fluorescent labels on a physically-plausible basis. The pipeline is applied to analyze and validate simulation data generated from neuroscientific in silico experiments.
Affiliation(s)
- Marwan Abdellah
- Blue Brain Project (BBP), École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Chemin des Mines 9, Geneva, 1202, Switzerland
- Ahmet Bilgili
- Blue Brain Project (BBP), École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Chemin des Mines 9, Geneva, 1202, Switzerland
- Stefan Eilemann
- Blue Brain Project (BBP), École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Chemin des Mines 9, Geneva, 1202, Switzerland
- Julian Shillcock
- Blue Brain Project (BBP), École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Chemin des Mines 9, Geneva, 1202, Switzerland
- Henry Markram
- Blue Brain Project (BBP), École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Chemin des Mines 9, Geneva, 1202, Switzerland
- Felix Schürmann
- Blue Brain Project (BBP), École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Chemin des Mines 9, Geneva, 1202, Switzerland

43
Descombes X. Multiple objects detection in biological images using a marked point process framework. Methods 2017; 115:2-8. [DOI: 10.1016/j.ymeth.2016.09.009] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2016] [Revised: 09/14/2016] [Accepted: 09/19/2016] [Indexed: 11/26/2022] Open
44
Arganda-Carreras I, Andrey P. Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach. Methods Mol Biol 2017; 1563:185-207. [PMID: 28324610 DOI: 10.1007/978-1-4939-6810-7_13] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to interact efficiently with image processing specialists.
Affiliation(s)
- Ignacio Arganda-Carreras
- Ikerbasque, Basque Foundation for Science, 48013, Bilbao, Spain
- Computer Science and Artificial Intelligence Department, Basque Country University (UPV/EHU), 20018, Donostia-San Sebastian, Spain
- Donostia International Physics Center (DIPC), 20018, Donostia-San Sebastian, Spain
- Philippe Andrey
- Institut Jean-Pierre Bourgin, INRA, AgroParisTech, CNRS, Université Paris-Saclay, RD10, 78000, Versailles, France
- Sorbonne Universités, UPMC Univ Paris 06, UFR 927, Paris, France

45
Svoboda D, Ulman V. MitoGen: A Framework for Generating 3D Synthetic Time-Lapse Sequences of Cell Populations in Fluorescence Microscopy. IEEE Trans Med Imaging 2017; 36:310-321. [PMID: 27623575 DOI: 10.1109/tmi.2016.2606545] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The proper analysis of biological microscopy images is an important and complex task. Therefore, it requires verification of all steps involved in the process, including image segmentation and tracking algorithms. It is generally better to verify algorithms with computer-generated ground truth datasets, which, compared to manually annotated data, nowadays have reached high quality and can be produced in large quantities even for 3D time-lapse image sequences. Here, we propose a novel framework, called MitoGen, which is capable of generating ground truth datasets with fully 3D time-lapse sequences of synthetic fluorescence-stained cell populations. MitoGen shows biologically justified cell motility, shape and texture changes as well as cell divisions. Standard fluorescence microscopy phenomena such as photobleaching, blur with real point spread function (PSF), and several types of noise, are simulated to obtain realistic images. The MitoGen framework is scalable in both space and time. MitoGen generates visually plausible data that shows good agreement with real data in terms of image descriptors and mean square displacement (MSD) trajectory analysis. Additionally, it is also shown in this paper that four publicly available segmentation and tracking algorithms exhibit similar performance on both real and MitoGen-generated data. The implementation of MitoGen is freely available.
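The mean square displacement (MSD) trajectory analysis used to compare real and MitoGen-generated motion averages squared displacements over all start times for each time lag; a minimal sketch for 2D tracks sampled at uniform time steps (an illustration of the measure, not MitoGen's code):

```python
def mean_square_displacement(track, max_lag=None):
    """MSD(tau) = mean of |r(t+tau) - r(t)|^2 over all start times t.

    `track` is a list of (x, y) positions sampled at uniform time steps;
    returns one MSD value per lag tau = 1..max_lag.
    """
    n = len(track)
    if max_lag is None:
        max_lag = n - 1
    msd = []
    for lag in range(1, max_lag + 1):
        disps = [
            (track[t + lag][0] - track[t][0]) ** 2
            + (track[t + lag][1] - track[t][1]) ** 2
            for t in range(n - lag)
        ]
        msd.append(sum(disps) / len(disps))
    return msd
```

The shape of the resulting curve is diagnostic: MSD grows roughly linearly in the lag for diffusive motion and quadratically for directed (ballistic) motion, so matching MSD curves is a compact check that synthetic cell motility resembles the real data.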
46
Liu L, Kan A, Leckie C, Hodgkin PD. Comparative evaluation of performance measures for shading correction in time-lapse fluorescence microscopy. J Microsc 2016; 266:15-27. [PMID: 28000921 DOI: 10.1111/jmi.12512] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2016] [Accepted: 11/10/2016] [Indexed: 01/10/2023]
Abstract
Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. These artefacts lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should generally be applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we propose a novel shading correction method that performs better than well-established methods for a range of real data tested.
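The CJV measure favored by this study is commonly defined, for two intensity populations such as foreground and background, as (σ₁ + σ₂)/|μ₁ − μ₂|, with lower values indicating less residual shading; a minimal sketch under that common definition (not necessarily the exact variant used in the paper):

```python
def cjv(class_a, class_b):
    """Coefficient of joint variation between two intensity populations.

    CJV = (sigma_a + sigma_b) / |mu_a - mu_b|; lower values mean the two
    classes are better separated, i.e. less residual intensity inhomogeneity.
    """
    def mean_std(xs):
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, var ** 0.5

    m_a, s_a = mean_std(class_a)
    m_b, s_b = mean_std(class_b)
    return (s_a + s_b) / abs(m_a - m_b)
```

Because CJV needs only the two class statistics and no reference image, it can be evaluated on real data where a ground-truth flat-field is unavailable, which is exactly the setting this comparison addresses.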
Affiliation(s)
- L Liu
- Department of Computing and Information Systems, The University of Melbourne, Parkville, Australia
- A Kan
- Division of Immunology, Walter and Eliza Hall Institute of Medical Research, Parkville, Australia
- C Leckie
- Department of Computing and Information Systems, The University of Melbourne, Parkville, Australia
- P D Hodgkin
- Division of Immunology, Walter and Eliza Hall Institute of Medical Research, Parkville, Australia

47
Ulman V, Svoboda D, Nykter M, Kozubek M, Ruusuvuori P. Virtual cell imaging: A review on simulation methods employed in image cytometry. Cytometry A 2016; 89:1057-1072. [PMID: 27922735 DOI: 10.1002/cyto.a.23031] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2016] [Revised: 07/20/2016] [Accepted: 11/14/2016] [Indexed: 02/03/2023]
Abstract
The simulations of cells and microscope images thereof have been used to facilitate the development, selection, and validation of image analysis algorithms employed in cytometry as well as for modeling and understanding cell structure and dynamics beyond what is visible in the eyepiece. The simulation approaches vary from simple parametric models of specific cell components-especially shapes of cells and cell nuclei-to learning-based synthesis and multi-stage simulation models for complex scenes that simultaneously visualize multiple object types and incorporate various properties of the imaged objects and laws of image formation. This review covers advances in artificial digital cell generation at scales ranging from particles up to tissue synthesis and microscope image simulation methods, provides examples of the use of simulated images for various purposes ranging from subcellular object detection to cell tracking, and discusses how such simulators have been validated. Finally, the future possibilities and limitations of simulation-based validation are considered.
Collapse
Affiliation(s)
- Vladimír Ulman
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - David Svoboda
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Matti Nykter
- Institute of Biosciences and Medical Technology - BioMediTech, University of Tampere, Tampere, Finland
| | - Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Pekka Ruusuvuori
- Institute of Biosciences and Medical Technology - BioMediTech, University of Tampere, Tampere, Finland.,Pori Campus, Tampere University of Technology, Pori, Finland
| |
Collapse
|
48
|
Kovacheva VN, Rajpoot NM. Subcellular protein expression models for microsatellite instability in colorectal adenocarcinoma tissue images. BMC Bioinformatics 2016; 17:430. [PMID: 27770786 PMCID: PMC5075203 DOI: 10.1186/s12859-016-1243-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2016] [Accepted: 09/08/2016] [Indexed: 11/12/2022] Open
Abstract
Background: New bioimaging techniques capable of visualising the co-location of numerous proteins within individual cells have been proposed to study tumour heterogeneity of neighbouring cells within the same tissue specimen. These techniques have highlighted the need to better understand the interplay between proteins in terms of their colocalisation. Results: We recently proposed a cellular-level model of the healthy and cancerous colonic crypt microenvironments. Here, we extend the model to include detailed models of protein expression to generate synthetic multiplex fluorescence data. As a first step, we present models for various cell organelles learned from real immunofluorescence data from the Human Protein Atlas. Comparison between the distribution of various features obtained from the real and synthetic organelles has shown very good agreement. This has included both features that have been used as part of the model input and ones that have not been explicitly considered. We then develop models for six proteins which are important colorectal cancer biomarkers and are associated with microsatellite instability, namely MLH1, PMS2, MSH2, MSH6, P53 and PTEN. The protein models include their complex expression patterns and which cell phenotypes express them. The models have been validated by comparing distributions of real and synthesised parameters and by application of frameworks for analysing multiplex immunofluorescence image data. Conclusions: The six proteins have been chosen as a case study to illustrate how the model can be used to generate synthetic multiplex immunofluorescence data. Further proteins could be included within the model in a similar manner to enable the study of a larger set of proteins of interest and their interactions. To the best of our knowledge, this is the first model for expression of multiple proteins in anatomically intact tissue, rather than within cells in culture.
Collapse
Affiliation(s)
- Violeta N Kovacheva
- Department of Systems Biology, University of Warwick, Coventry, CV4 7AL, UK.,Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.,Centre for Molecular Pathology, Institute of Cancer Research, London, SM2 5NG, UK.,Centre for Evolution and Cancer, Institute of Cancer Research, London, SM2 5NG, UK.,Division of Molecular Pathology, The Institute of Cancer Research, London, SM2 5NG, UK
| | - Nasir M Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.,Department of Computer Science and Engineering, Qatar University, Doha, Qatar.,Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, CV2 2DX, UK
| |
Collapse
|
49
|
Molnar C, Jermyn IH, Kato Z, Rahkama V, Östling P, Mikkonen P, Pietiäinen V, Horvath P. Accurate Morphology Preserving Segmentation of Overlapping Cells based on Active Contours. Sci Rep 2016; 6:32412. [PMID: 27561654 PMCID: PMC5001623 DOI: 10.1038/srep32412] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2016] [Accepted: 08/01/2016] [Indexed: 11/09/2022] Open
Abstract
The identification of fluorescently stained cell nuclei is the basis of cell detection, segmentation, and feature extraction in high content microscopy experiments. The nuclear morphology of single cells is also one of the essential indicators of phenotypic variation. However, the cells used in experiments can lose their contact inhibition, and can therefore pile up on top of each other, making the detection of single cells extremely challenging using current segmentation methods. The model we present here can detect cell nuclei and their morphology even in high-confluency cell cultures with many overlapping cell nuclei. We combine the "gas of near circles" active contour model, which favors circular shapes but allows slight variations around them, with a new data model. This captures a common property of many microscopic imaging techniques: the intensities from superposed nuclei are additive, so that two overlapping nuclei, for example, have a total intensity that is approximately double the intensity of a single nucleus. We demonstrate the power of our method on microscopic images of cells, comparing the results with those obtained from a widely used approach, and with manual image segmentations by experts.
Collapse
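The additive data model described above rests on a simple property of fluorescence: intensities from superposed nuclei add, so two overlapping nuclei produce roughly double the signal of one. A minimal NumPy sketch of this idea, assuming Gaussian-blob nuclei (the function and its parameters are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_nucleus(shape, center, sigma=4.0, peak=100.0):
    """Render one synthetic nucleus as an isotropic Gaussian blob."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return peak * np.exp(-d2 / (2 * sigma ** 2))

# Two nuclei rendered at the same position: fluorescence is additive,
# so the superposed peak is about twice the single-nucleus peak.
img = gaussian_nucleus((32, 32), (16, 16)) + gaussian_nucleus((32, 32), (16, 16))
```

This additivity is what lets the authors' data model distinguish a region covered by two stacked nuclei from a single, brighter nucleus, something a thresholding-based segmenter cannot do.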
Affiliation(s)
- Csaba Molnar
- Synthetic and System Biology Unit, Biological Research Centre of the Hungarian Academy of Sciences, Szeged, Hungary
| | - Ian H Jermyn
- Department of Mathematical Sciences, Durham University, Durham, UK
| | - Zoltan Kato
- Department of Mathematics and Informatics, J. Selye University, Komarno, Slovakia
| | - Vesa Rahkama
- Institute for Molecular Medicine Finland, University of Helsinki, Helsinki, Finland
| | - Päivi Östling
- Institute for Molecular Medicine Finland, University of Helsinki, Helsinki, Finland
| | - Piia Mikkonen
- Institute for Molecular Medicine Finland, University of Helsinki, Helsinki, Finland
| | - Vilja Pietiäinen
- Institute for Molecular Medicine Finland, University of Helsinki, Helsinki, Finland
| | - Peter Horvath
- Synthetic and System Biology Unit, Biological Research Centre of the Hungarian Academy of Sciences, Szeged, Hungary.,Institute for Molecular Medicine Finland, University of Helsinki, Helsinki, Finland
| |
Collapse
|
50
|
Kovacheva VN, Snead D, Rajpoot NM. A model of the spatial tumour heterogeneity in colorectal adenocarcinoma tissue. BMC Bioinformatics 2016; 17:255. [PMID: 27342072 PMCID: PMC4919876 DOI: 10.1186/s12859-016-1126-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2016] [Accepted: 06/07/2016] [Indexed: 01/27/2023] Open
Abstract
Background: There have been great advancements in the field of digital pathology. The surge in development of analytical methods for such data makes it crucial to develop benchmark synthetic datasets for objectively validating and comparing these methods. In addition, developing a spatial model of the tumour microenvironment can aid our understanding of the underpinning laws of tumour heterogeneity. Results: We propose a model of the healthy and cancerous colonic crypt microenvironment. Our model is designed to generate synthetic histology image data with parameters that allow control over cancer grade, cellularity, cell overlap ratio, image resolution, and objective level. Conclusions: To the best of our knowledge, ours is the first model to simulate histology image data at sub-cellular level for healthy and cancerous colon tissue, where the cells have different compartments and are organised to mimic the microenvironment of tissue in situ rather than dispersed cells in a cultured environment. Qualitative and quantitative validation has been performed on the model results, demonstrating good similarity to the real data. The simulated data could be used to validate techniques such as image restoration, cell and crypt segmentation, and cancer grading.
Collapse
Affiliation(s)
- Violeta N Kovacheva
- Department of Systems Biology, University of Warwick, Coventry, CV4 7AL, UK.
| | - David Snead
- Department of Histopathology, University Hospitals Coventry and Warwickshire, Coventry, CV2 2DX, UK
| | - Nasir M Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.,Department of Computer Science and Engineering, Qatar University, Doha, Qatar
| |
Collapse
|