1
Riendeau JM, Gillette AA, Guzman EC, Cruz MC, Kralovec A, Udgata S, Schmitz A, Deming DA, Cimini BA, Skala MC. Cellpose as a reliable method for single-cell segmentation of autofluorescence microscopy images. Sci Rep 2025; 15:5548. PMID: 39952935; PMCID: PMC11828867; DOI: 10.1038/s41598-024-82639-6.
Abstract
Autofluorescence microscopy uses intrinsic sources of molecular contrast to provide cellular-level information without extrinsic labels. However, traditional cell segmentation tools are often optimized for high signal-to-noise ratio (SNR) images, such as fluorescently labeled cells, and unsurprisingly perform poorly on low SNR autofluorescence images. Therefore, new cell segmentation tools are needed for autofluorescence microscopy. Cellpose is a deep learning network that is generalizable across diverse cell microscopy images and automatically segments single cells to improve throughput and reduce inter-human biases. This study aims to validate Cellpose for autofluorescence imaging, specifically using multiphoton intensity images of NAD(P)H. Manually segmented nuclear masks of NAD(P)H images were used to train a new autofluorescence-trained model (ATM) in Cellpose for nuclear segmentation of NAD(P)H intensity images. These models were applied to PANC-1 cells treated with metabolic inhibitors and patient-derived cancer organoids (9 patients) treated with chemotherapies. These datasets include co-registered fluorescence lifetime imaging microscopy (FLIM) of NAD(P)H and FAD, so fluorescence decay parameters and the optical redox ratio (ORR) were compared between masks generated by the new ATM and manual segmentation. The Dice score between repeated manually segmented masks was significantly lower than that of repeated ATM masks (p < 0.0001), indicating greater reproducibility between ATM masks. There was also a high correlation (R² > 0.9) between ATM and manually segmented masks for the ORR, mean NAD(P)H lifetime, and mean FAD lifetime across 2D and 3D cell culture treatment conditions. Masks generated from ATM and manual segmentation also maintain similar means, variances, and effect sizes between treatments for the ORR and FLIM parameters. Overall, the Cellpose ATM provides a fast, reliable, reproducible, and accurate method to segment single cells in autofluorescence microscopy images such that functional changes in cells are accurately captured in both 2D and 3D culture.
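A minimal sketch of the quantities compared above, assuming a Cellpose 2.x/3.x-style Python API, placeholder NAD(P)H and FAD intensity arrays, and one common definition of the optical redox ratio, NAD(P)H / (NAD(P)H + FAD). The built-in 'nuclei' model stands in for the paper's autofluorescence-trained model, which would instead be loaded from its own weights file.

```python
import numpy as np
from cellpose import models  # Cellpose 2.x/3.x-style API; details vary by version

# Placeholder single-channel NAD(P)H and FAD intensity images (replace with real data).
rng = np.random.default_rng(0)
nadph_img = rng.random((256, 256)).astype(np.float32)
fad_img = rng.random((256, 256)).astype(np.float32)

# The paper trains its own autofluorescence model (ATM); here the built-in 'nuclei'
# model is used instead. A custom model would be loaded via pretrained_model=<path>.
model = models.CellposeModel(gpu=False, model_type="nuclei")
masks, flows, styles = model.eval(nadph_img, channels=[0, 0], diameter=None)

def dice_score(a, b):
    """Dice coefficient between two binary masks (1 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def redox_ratio(nadph, fad, cell_mask):
    """Mean optical redox ratio over one cell, here defined as NAD(P)H / (NAD(P)H + FAD)."""
    px = cell_mask > 0
    return float(np.mean(nadph[px] / (nadph[px] + fad[px] + 1e-12)))

# Per-nucleus ORR for every label in the Cellpose output (label 0 is background).
orr_per_cell = {int(lab): redox_ratio(nadph_img, fad_img, masks == lab)
                for lab in np.unique(masks) if lab != 0}
```

A Dice score near 1 between masks of the same field (e.g., dice_score(masks_run1 > 0, masks_run2 > 0)) indicates reproducible segmentation, which is the comparison made between repeated manual and repeated ATM masks above.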
Affiliation(s)
- Jeremiah M Riendeau
- Department of Biomedical Engineering, University of Wisconsin, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
- Mario Costa Cruz
- Broad Institute of Harvard and MIT, Imaging Platform, Cambridge, MA, USA
- Shirsa Udgata
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Alexa Schmitz
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Dustin A Deming
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- McArdle Laboratory for Cancer Research, Department of Oncology, University of Wisconsin, Madison, WI, USA
- University of Wisconsin Carbone Cancer Center, Madison, WI, USA
- Beth A Cimini
- Broad Institute of Harvard and MIT, Imaging Platform, Cambridge, MA, USA
- Melissa C Skala
- Department of Biomedical Engineering, University of Wisconsin, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
2
Todhunter ME, Jubair S, Verma R, Saqe R, Shen K, Duffy B. Artificial intelligence and machine learning applications for cultured meat. Front Artif Intell 2024; 7:1424012. PMID: 39381621; PMCID: PMC11460582; DOI: 10.3389/frai.2024.1424012.
Abstract
Cultured meat has the potential to provide a complementary meat industry with reduced environmental, ethical, and health impacts. However, major technological challenges remain which require time- and resource-intensive research and development efforts. Machine learning has the potential to accelerate cultured meat technology by streamlining experiments, predicting optimal results, and reducing experimentation time and resources. However, the use of machine learning in cultured meat is in its infancy. This review covers the work available to date on the use of machine learning in cultured meat and explores future possibilities. We address four major areas of cultured meat research and development: establishing cell lines, cell culture media design, microscopy and image analysis, and bioprocessing and food processing optimization. In addition, we have included a survey of datasets relevant to cultured meat research. This review aims to provide the foundation necessary for both cultured meat and machine learning scientists to identify research opportunities at the intersection between cultured meat and machine learning.
Affiliation(s)
- Sheikh Jubair
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
- Ruchika Verma
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
- Rikard Saqe
- Department of Biology, University of Waterloo, Waterloo, ON, Canada
- Kevin Shen
- Department of Mathematics, University of Waterloo, Waterloo, ON, Canada
3
Riendeau JM, Gillette AA, Guzman EC, Cruz MC, Kralovec A, Udgata S, Schmitz A, Deming DA, Cimini BA, Skala MC. Cellpose as a reliable method for single-cell segmentation of autofluorescence microscopy images. bioRxiv [Preprint] 2024:2024.06.07.597994. PMID: 38915614; PMCID: PMC11195115; DOI: 10.1101/2024.06.07.597994.
Abstract
Autofluorescence microscopy uses intrinsic sources of molecular contrast to provide cellular-level information without extrinsic labels. However, traditional cell segmentation tools are often optimized for high signal-to-noise ratio (SNR) images, such as fluorescently labeled cells, and unsurprisingly perform poorly on low SNR autofluorescence images. Therefore, new cell segmentation tools are needed for autofluorescence microscopy. Cellpose is a deep learning network that is generalizable across diverse cell microscopy images and automatically segments single cells to improve throughput and reduce inter-human biases. This study aims to validate Cellpose for autofluorescence imaging, specifically from multiphoton intensity images of NAD(P)H. Manually segmented nuclear masks of NAD(P)H images were used to train new Cellpose models. These models were applied to PANC-1 cells treated with metabolic inhibitors and patient-derived cancer organoids (across 9 patients) treated with chemotherapies. These datasets include co-registered fluorescence lifetime imaging microscopy (FLIM) of NAD(P)H and FAD, so fluorescence decay parameters and the optical redox ratio (ORR) were compared between masks generated by the new Cellpose model and manual segmentation. The Dice score between repeated manually segmented masks was significantly lower than that of repeated Cellpose masks (p < 0.0001), indicating greater reproducibility between Cellpose masks. There was also a high correlation (R² > 0.9) between Cellpose and manually segmented masks for the ORR, mean NAD(P)H lifetime, and mean FAD lifetime across 2D and 3D cell culture treatment conditions. Masks generated from Cellpose and manual segmentation also maintain similar means, variances, and effect sizes between treatments for the ORR and FLIM parameters. Overall, Cellpose provides a fast, reliable, reproducible, and accurate method to segment single cells in autofluorescence microscopy images such that functional changes in cells are accurately captured in both 2D and 3D culture.
Affiliation(s)
- Jeremiah M Riendeau
- University of Wisconsin, Madison, Department of Biomedical Imaging, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
- Mario Costa Cruz
- Broad Institute of Harvard and MIT, Imaging Platform, Cambridge, Massachusetts
- Shirsa Udgata
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, University of Wisconsin School of Medicine and Public Health, University of Wisconsin, Madison, WI
- Alexa Schmitz
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, University of Wisconsin School of Medicine and Public Health, University of Wisconsin, Madison, WI
- Dustin A Deming
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, University of Wisconsin School of Medicine and Public Health, University of Wisconsin, Madison, WI
- McArdle Laboratory for Cancer Research, Department of Oncology, University of Wisconsin, Madison, WI
- University of Wisconsin Carbone Cancer Center, Madison, WI
- Beth A Cimini
- Broad Institute of Harvard and MIT, Imaging Platform, Cambridge, Massachusetts
- Melissa C Skala
- University of Wisconsin, Madison, Department of Biomedical Imaging, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
4
Pohl C, Kunzmann M, Brandt N, Koppe C, Waletzko-Hellwig J, Bader R, Kalle F, Kersting S, Behrendt D, Schlosser M, Hoene A. Quantitative analysis of trabecular bone tissue cryosections via a fully automated neural network-based approach. PLoS One 2024; 19:e0298830. PMID: 38625969; PMCID: PMC11020490; DOI: 10.1371/journal.pone.0298830.
Abstract
Cryosectioning is a common and well-established histological method due to its easy accessibility, speed, and cost efficiency. However, the creation of bone cryosections is especially difficult. In this study, a cryosectioning protocol for trabecular bone was developed that offers a relatively cheap and undemanding alternative to paraffin- or resin-embedded sectioning. Sections are stainable with common histological staining methods while maintaining sufficient quality to answer a variety of scientific questions. Furthermore, this study introduces an automated protocol for analysing such sections, enabling users to rapidly assess a wide range of different stains. To this end, an automated 'QuPath' neural network-based image analysis protocol for histochemical analysis of trabecular bone samples was established and compared with other automated approaches, as well as manual analysis, with respect to scatter, quality, and reliability. This highly automated protocol can handle enormous amounts of image data with no significant differences in its results when compared with a manual method. Even though this method was applied specifically to bone tissue, it works for a wide variety of different tissues and scientific questions.
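The agreement claim above ("no significant differences ... when compared with a manual method") could be checked with a paired test on per-section measurements, as in the sketch below; the numbers and the choice of a Wilcoxon signed-rank test are illustrative assumptions rather than the statistics reported in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical paired per-section measurements (e.g., stained-area fraction in %).
automated = np.array([12.1, 15.4, 9.8, 20.3, 17.6, 11.2, 14.9, 18.0])
manual    = np.array([11.8, 15.9, 10.2, 19.7, 17.1, 11.5, 15.3, 18.4])

# Paired non-parametric comparison of the two quantification methods.
stat, p = stats.wilcoxon(automated, manual)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")  # large p: no detectable difference
```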
Affiliation(s)
- Christopher Pohl
- Department of General Surgery, Visceral, Thoracic and Vascular Surgery, University Medical Center Greifswald, Greifswald, Germany
- Moritz Kunzmann
- University of Heidelberg, BioQuant Center, Heidelberg, Germany
- Nico Brandt
- Department of General Surgery, Visceral, Thoracic and Vascular Surgery, University Medical Center Greifswald, Greifswald, Germany
- Charlotte Koppe
- Department of General Surgery, Visceral, Thoracic and Vascular Surgery, University Medical Center Greifswald, Greifswald, Germany
- Janine Waletzko-Hellwig
- Department of Oral, Maxillofacial and Plastic Surgery, Rostock University Medical Center, Rostock, Germany
- Department of Orthopaedics, Research Laboratory of Biomechanics and Implant Technology, Rostock University Medical Center, Rostock, Germany
- Rainer Bader
- Department of Orthopaedics, Research Laboratory of Biomechanics and Implant Technology, Rostock University Medical Center, Rostock, Germany
- Friederike Kalle
- Department of Oto-Rhino-Laryngology, Head and Neck Surgery, Rostock University Medical Center, Rostock, Germany
- Stephan Kersting
- Department of General Surgery, Visceral, Thoracic and Vascular Surgery, University Medical Center Greifswald, Greifswald, Germany
- Daniel Behrendt
- Department of General Surgery, Visceral, Thoracic and Vascular Surgery, University Medical Center Greifswald, Greifswald, Germany
- Michael Schlosser
- Department of General Surgery, Visceral, Thoracic and Vascular Surgery, University Medical Center Greifswald, Greifswald, Germany
- Andreas Hoene
- Department of General Surgery, Visceral, Thoracic and Vascular Surgery, University Medical Center Greifswald, Greifswald, Germany
5
Liu P, Li J, Chang J, Hu P, Sun Y, Jiang Y, Zhang F, Shao H. Software Tools for 2D Cell Segmentation. Cells 2024; 13:352. PMID: 38391965; PMCID: PMC10886800; DOI: 10.3390/cells13040352.
Abstract
Cell segmentation is an important task in the field of image processing, widely used in the life sciences and medical fields. Traditional methods are mainly based on pixel intensity and spatial relationships, but have limitations. In recent years, machine learning and deep learning methods have been widely used, providing more accurate and efficient solutions for cell segmentation. The effort to develop efficient and accurate segmentation software tools has been one of the major focal points in the field of cell segmentation for years. However, each software tool has unique characteristics and adaptations, and no universal cell segmentation software can achieve perfect results. In this review, we used three publicly available datasets containing multiple 2D cell-imaging modalities. Common segmentation metrics were used to evaluate the performance of eight segmentation tools to compare their generality and, thus, find the best-performing tool.
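One example of the kind of object-level metric such comparisons rely on is an IoU-matched F1 score; the NumPy sketch below uses greedy matching and a 0.5 IoU threshold as illustrative assumptions, not the exact benchmark protocol of this review.

```python
import numpy as np

def matched_f1(gt, pred, iou_thresh=0.5):
    """Object-level F1: a predicted cell is a true positive if its best IoU with
    any unmatched ground-truth cell reaches the threshold (greedy matching)."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    matched_gt, tp = set(), 0
    for j in pred_ids:
        pj = pred == j
        best_iou, best_i = 0.0, None
        for i in gt_ids:
            if i in matched_gt:
                continue
            gi = gt == i
            inter = np.logical_and(pj, gi).sum()
            union = np.logical_or(pj, gi).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_i = iou, i
        if best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_i)
    fp = len(pred_ids) - tp
    fn = len(gt_ids) - tp
    return 2 * tp / max(2 * tp + fp + fn, 1)
```

Called as matched_f1(gt_labels, tool_labels) on two label images of the same field, it returns 1.0 only when every cell is matched one-to-one at the chosen IoU threshold.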
Affiliation(s)
- Ping Liu
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Jun Li
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Jiaxing Chang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Pinli Hu
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Yue Sun
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Yanan Jiang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Fan Zhang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Haojing Shao
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
6
Wu L, Chen A, Salama P, Winfree S, Dunn KW, Delp EJ. NISNet3D: three-dimensional nuclear synthesis and instance segmentation for fluorescence microscopy images. Sci Rep 2023; 13:9533. PMID: 37308499; PMCID: PMC10261124; DOI: 10.1038/s41598-023-36243-9.
Abstract
The primary step in tissue cytometry is the automated distinction of individual cells (segmentation). Since cell borders are seldom labeled, cells are generally segmented by their nuclei. While tools have been developed for segmenting nuclei in two dimensions, segmentation of nuclei in three-dimensional volumes remains a challenging task. The lack of effective methods for three-dimensional segmentation represents a bottleneck in the realization of the potential of tissue cytometry, particularly as methods of tissue clearing present the opportunity to characterize entire organs. Methods based on deep learning have shown enormous promise, but their implementation is hampered by the need for large amounts of manually annotated training data. In this paper, we describe the 3D Nuclei Instance Segmentation Network (NISNet3D), which directly segments 3D volumes through the use of a modified 3D U-Net, a 3D marker-controlled watershed transform, and a nuclei instance segmentation system for separating touching nuclei. NISNet3D is unique in that it provides accurate segmentation of even challenging image volumes using a network trained on large amounts of synthetic nuclei derived from relatively few annotated volumes, or on synthetic data obtained without annotated volumes. We present a quantitative comparison of results obtained from NISNet3D with results obtained from a variety of existing nuclei segmentation techniques. We also examine the performance of the methods when no ground truth is available and only synthetic volumes are used for training.
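NISNet3D pairs a modified 3D U-Net with a marker-controlled watershed; the snippet below illustrates only the generic watershed half of that idea with scikit-image, splitting touching nuclei in a 3D binary volume via distance-transform markers. The synthetic volume and the min_distance value are assumptions, and this is not the NISNet3D implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Placeholder 3D binary volume of nuclei (replace with a thresholded or CNN-predicted mask).
rng = np.random.default_rng(0)
volume = rng.random((32, 128, 128)) > 0.995
volume = ndi.binary_dilation(volume, iterations=4)

# Peaks of the Euclidean distance transform act as one marker per nucleus.
distance = ndi.distance_transform_edt(volume)
coords = peak_local_max(distance, labels=volume, min_distance=5)
markers = np.zeros(volume.shape, dtype=np.int32)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Marker-controlled watershed on the inverted distance map separates touching nuclei.
labels = watershed(-distance, markers, mask=volume)
print("nuclei found:", labels.max())
```

With real data, the binary volume would typically come from a thresholded or network-predicted nuclear probability map rather than synthetic noise.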
Affiliation(s)
- Liming Wu
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Alain Chen
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Paul Salama
- Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis, Indianapolis, IN, 46202, USA
- Seth Winfree
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Kenneth W Dunn
- School of Medicine, Indiana University, Indianapolis, IN, 46202, USA
- Edward J Delp
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA
7
McElliott MC, Al-Suraimi A, Telang AC, Ference-Salo JT, Chowdhury M, Soofi A, Dressler GR, Beamish JA. High-throughput image analysis with deep learning captures heterogeneity and spatial relationships after kidney injury. Sci Rep 2023; 13:6361. PMID: 37076596; PMCID: PMC10115810; DOI: 10.1038/s41598-023-33433-3.
Abstract
Recovery from acute kidney injury can vary widely in patients and in animal models. Immunofluorescence staining can provide spatial information about heterogeneous injury responses, but often only a fraction of stained tissue is analyzed. Deep learning can expand analysis to larger areas and sample numbers by substituting for time-intensive manual or semi-automated quantification techniques. Here we report one approach to leverage deep learning tools to quantify heterogeneous responses to kidney injury that can be deployed without specialized equipment or programming expertise. We first demonstrated that deep learning models generated from small training sets accurately identified a range of stains and structures with performance similar to that of trained human observers. We then showed that this approach accurately tracks the evolution of folic acid-induced kidney injury in mice and highlights spatially clustered tubules that fail to repair. We then demonstrated that this approach captures the variation in recovery across a robust sample of kidneys after ischemic injury. Finally, we showed that markers of failed repair after ischemic injury were correlated both spatially within and between animals and that failed repair was inversely correlated with peritubular capillary density. Combined, we demonstrate the utility and versatility of our approach to capture spatially heterogeneous responses to kidney injury.
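As a small illustration of the correlation analyses summarized above (failed-repair markers versus peritubular capillary density), the sketch below computes a Pearson correlation over hypothetical per-region measurements with SciPy; the arrays are placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-region measurements from one kidney section:
# fraction of tubules positive for a failed-repair marker, and capillary density.
rng = np.random.default_rng(0)
failed_repair_fraction = rng.uniform(0.0, 0.6, size=40)
capillary_density = 1.0 - 0.8 * failed_repair_fraction + rng.normal(0, 0.05, size=40)

r, p = stats.pearsonr(failed_repair_fraction, capillary_density)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # an inverse correlation gives r < 0
```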
Affiliation(s)
- Madison C McElliott
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Anas Al-Suraimi
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Asha C Telang
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Jenna T Ference-Salo
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Mahboob Chowdhury
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Abdul Soofi
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Jeffrey A Beamish
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
8
Scuiller Y, Hemon P, Le Rochais M, Pers JO, Jamin C, Foulquier N. YOUPI: Your powerful and intelligent tool for segmenting cells from imaging mass cytometry data. Front Immunol 2023; 14:1072118. PMID: 36936977; PMCID: PMC10019895; DOI: 10.3389/fimmu.2023.1072118.
Abstract
The recent emergence of imaging mass cytometry technology has led to the generation of an increasing amount of high-dimensional data and, with it, the need for suitable, performant bioinformatics tools dedicated to specific multiparametric studies. The first and most important step in processing the acquired images is the ability to perform highly efficient cell segmentation for subsequent analyses. In this context, we developed the YOUPI (Your Powerful and Intelligent tool) software. It combines advanced segmentation techniques based on deep learning algorithms with a user-friendly graphical user interface for non-bioinformatics users. In this article, we present the segmentation algorithm developed for YOUPI. We benchmarked it against mathematics-based segmentation approaches to estimate its robustness in segmenting different tissue biopsies.
Affiliation(s)
- Christophe Jamin
- LBAI, UMR 1227, Univ Brest, Inserm, Brest, France
- CHU de Brest, Brest, France
- Correspondence: Christophe Jamin