1. Kurnia KA, Lin YT, Farhan A, Malhotra N, Luong CT, Hung CH, Roldan MJM, Tsao CC, Cheng TS, Hsiao CD. Deep Learning-Based Automatic Duckweed Counting Using StarDist and Its Application on Measuring Growth Inhibition Potential of Rare Earth Elements as Contaminants of Emerging Concerns. Toxics 2023; 11:680. PMID: 37624185; PMCID: PMC10457735; DOI: 10.3390/toxics11080680.
Abstract
In recent years, there have been efforts to utilize surface water as a source of power, materials, and food. These efforts are impeded by the vast amounts of contaminants and emerging contaminants introduced by anthropogenic activities. Herbicides such as glyphosate and glufosinate commonly reach surface water from agricultural industries, while emerging contaminants such as rare earth elements have begun to enter it from the production and disposal of electronic products. Duckweeds are angiosperms of the family Lemnaceae that are widely used for aquatic toxicity testing; species of the genus Lemna in particular are approved for this purpose by the OECD. In this study, we used duckweed of the genus Wolffia, which is smaller and considered a good indicator of metal pollutants in aquatic environments. Growth rate is the most common endpoint for observing pollutant toxicity in duckweed. To detect and count fronds automatically, we used StarDist, a deep learning-based tool available as an ImageJ plugin, which simplifies and assists the counting process. Python scripts were used to organize the data and calculate inhibition percentages after the duckweeds were exposed to contaminants. The toxicity tests showed dysprosium to be the most toxic element tested, with an IC50 of 14.6 ppm, and samarium the least toxic, with an IC50 of 279.4 ppm. In summary, we provide a workflow for automatic frond counting that integrates StarDist with ImageJ and Python to simplify detection, counting, data management, and calculation.
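The workflow's final step, turning frond counts into inhibition percentages and an IC50, can be sketched in a few lines of Python. The counts, doses, and the linear-interpolation IC50 below are illustrative assumptions, not the paper's actual data or curve-fitting method (which the abstract does not specify):

```python
import numpy as np

def inhibition_percent(treated_counts, control_counts):
    """Growth inhibition (%) relative to the mean control frond count."""
    control_mean = np.mean(control_counts)
    return (1.0 - np.asarray(treated_counts, dtype=float) / control_mean) * 100.0

def ic50(concentrations, inhibitions):
    """Concentration giving 50% inhibition, by linear interpolation between
    the two doses that bracket the 50% level. Assumes inhibition increases
    monotonically with dose."""
    c = np.asarray(concentrations, dtype=float)
    i = np.asarray(inhibitions, dtype=float)
    order = np.argsort(c)
    return float(np.interp(50.0, i[order], c[order]))

# Hypothetical frond counts after exposure at four doses
control = [40, 42, 38]          # untreated replicates
doses = [5, 10, 20, 40]         # ppm
counts = [36, 28, 12, 4]        # mean frond count per dose

inh = inhibition_percent(counts, control)   # [10, 30, 70, 90] %
print(ic50(doses, inh))                     # → 15.0 (ppm)
```

A real analysis would more likely fit a dose-response model (e.g., four-parameter logistic) rather than interpolate, but the bookkeeping is the same.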
Affiliation(s)
- Kevin Adi Kurnia
- Department of Chemistry, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Department of Bioscience Technology, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Ying-Ting Lin
- Department of Biotechnology, College of Life Science, Kaohsiung Medical University, Kaohsiung City 80708, Taiwan
- Drug Development & Value Creation Research Center, Kaohsiung Medical University, Kaohsiung City 80708, Taiwan
- Ali Farhan
- Department of Chemistry, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Department of Bioscience Technology, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Nemi Malhotra
- Department of Bioscience Technology, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Cao Thang Luong
- Department of Chemical Engineering & Institute of Biotechnology and Chemical Engineering, I-Shou University, Da-Shu, Kaohsiung City 84001, Taiwan
- Chih-Hsin Hung
- Department of Chemical Engineering & Institute of Biotechnology and Chemical Engineering, I-Shou University, Da-Shu, Kaohsiung City 84001, Taiwan
- Marri Jmelou M. Roldan
- Faculty of Pharmacy, The Graduate School, University of Santo Tomas, Manila 1008, Philippines
- Che-Chia Tsao
- Department of Biological Sciences and Technology, National University of Tainan, Tainan 70005, Taiwan
- Tai-Sheng Cheng
- Department of Biological Sciences and Technology, National University of Tainan, Tainan 70005, Taiwan
- Chung-Der Hsiao
- Department of Chemistry, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Department of Bioscience Technology, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Center for Nanotechnology, Chung Yuan Christian University, Chung-Li 32023, Taiwan
- Research Center for Aquatic Toxicology and Pharmacology, Chung Yuan Christian University, Chung-Li 32023, Taiwan
2. Shovon MSH, Islam MJ, Nabil MNAK, Molla MM, Jony AI, Mridha MF. Strategies for Enhancing the Multi-Stage Classification Performances of HER2 Breast Cancer from Hematoxylin and Eosin Images. Diagnostics (Basel) 2022; 12:2825. PMID: 36428885; DOI: 10.3390/diagnostics12112825.
Abstract
Breast cancer is a significant health concern among women. Prompt diagnosis can reduce the mortality rate and direct patients toward treatment. Recently, deep learning has been employed to diagnose breast cancer in digital pathology. To help in this area, a transfer learning-based model called HE-HER2Net is proposed to diagnose multiple stages of HER2 breast cancer (HER2-0, HER2-1+, HER2-2+, HER2-3+) from H&E (hematoxylin and eosin) images of the BCI dataset. HE-HER2Net is a modified version of the Xception model, extended with global average pooling, several batch normalization layers, dropout layers, and dense layers with the swish activation function. The proposed model outperforms the existing models compared against in accuracy (0.87), precision (0.88), recall (0.86), and AUC (0.98). In addition, the model's predictions are explained through Grad-CAM, a class-discriminative localization technique, to build trust and make the model more transparent. Finally, nuclei segmentation is performed with the StarDist method.
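Of the architectural additions listed, the swish activation is the least standard; it is simple to state. A minimal NumPy sketch for illustration, not the authors' TensorFlow code:

```python
import numpy as np

def swish(x):
    """Swish activation: x * sigmoid(x). Smooth and non-monotonic,
    unlike ReLU, which is one reason it is favored in dense heads."""
    x = np.asarray(x, dtype=float)
    return x / (1.0 + np.exp(-x))

print(float(swish(0.0)))  # → 0.0
```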
3. Stevens M, Nanou A, Terstappen LWMM, Driemel C, Stoecklein NH, Coumans FAW. StarDist Image Segmentation Improves Circulating Tumor Cell Detection. Cancers (Basel) 2022; 14:2916. PMID: 35740582; PMCID: PMC9221404; DOI: 10.3390/cancers14122916.
Abstract
Simple Summary
Automated enumeration of circulating tumor cells (CTC) from immunofluorescence images starts with the selection of areas containing potential CTC. The CellSearch system has a built-in selection algorithm that has been observed to fail in samples with high cell density, thereby underestimating the true CTC load. We evaluated the deep learning method StarDist for the selection of possible CTC. In whole blood sample images, StarDist recovered 99.95% of the CTC detected by CellSearch and segmented 10% additional CTC. In diagnostic leukapheresis (DLA) samples, StarDist segmented 20% additional CTC and performed well, whereas CellSearch failed seriously in 9% of samples.
Abstract
After a CellSearch-processed circulating tumor cell (CTC) sample is imaged, a segmentation algorithm selects nucleic-acid-positive (DAPI+), cytokeratin-phycoerythrin-expressing (CK-PE+) events for further review by an operator. Failures in this segmentation can result in missed CTCs. The CellSearch segmentation algorithm was not designed to handle samples with high cell density, such as diagnostic leukapheresis (DLA) samples. Here, we evaluate the deep-learning-based segmentation method StarDist as an alternative to CellSearch segmentation. CellSearch image archives from 533 whole blood samples and 601 DLA samples were segmented using CellSearch and StarDist and inspected visually. In 442 blood samples from cancer patients, StarDist segmented 99.95% of the CTC segmented by CellSearch, produced good outlines for 98.3% of these CTC, and segmented 10% more CTC than CellSearch. Visual inspection of the DLA segmentations showed that StarDist continues to perform well at very high cell density, whereas CellSearch failed and generated extremely large segmentations (up to 52% of the sample surface). Moreover, in a detailed examination of seven DLA samples, StarDist segmented 20% more CTC than CellSearch. Segmentation is a critical first step for CTC enumeration in dense samples, and StarDist segmentation convincingly outperformed CellSearch segmentation.
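The selection step the abstract describes, keeping DAPI+, CK-PE+ events for operator review, amounts to gating segmented events on channel intensities. A toy sketch of that idea; the threshold values and the CD45 exclusion shown here are hypothetical illustrations, not CellSearch's calibrated criteria:

```python
# Hypothetical intensity thresholds; a real CellSearch workflow applies
# calibrated, channel-specific criteria plus morphology checks.
def select_ctc_candidates(events, dapi_min=100, ck_min=80, cd45_max=50):
    """Keep segmented events that are nucleic-acid positive (DAPI+),
    cytokeratin positive (CK-PE+), and leukocyte-marker negative (CD45-)."""
    return [e for e in events
            if e["dapi"] >= dapi_min
            and e["ck_pe"] >= ck_min
            and e["cd45"] <= cd45_max]

events = [
    {"dapi": 150, "ck_pe": 120, "cd45": 10},   # candidate CTC
    {"dapi": 140, "ck_pe": 5,   "cd45": 200},  # leukocyte: CK-, CD45+
    {"dapi": 20,  "ck_pe": 90,  "cd45": 5},    # debris: DAPI-
]
print(len(select_ctc_candidates(events)))  # → 1
```

The paper's contribution sits upstream of this gate: a better segmenter yields more (and better outlined) events to gate in the first place.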
Affiliation(s)
- Michiel Stevens
- Medical Cell Biophysics Group, Techmed Center, Faculty of Science and Technology, University of Twente, 7500 AE Enschede, The Netherlands
- Afroditi Nanou
- Medical Cell Biophysics Group, Techmed Center, Faculty of Science and Technology, University of Twente, 7500 AE Enschede, The Netherlands
- Leon W. M. M. Terstappen
- Medical Cell Biophysics Group, Techmed Center, Faculty of Science and Technology, University of Twente, 7500 AE Enschede, The Netherlands
- Christiane Driemel
- General, Visceral and Pediatric Surgery, University Hospital and Medical Faculty, Heinrich-Heine University Düsseldorf, 40225 Düsseldorf, Germany
- Nikolas H. Stoecklein
- General, Visceral and Pediatric Surgery, University Hospital and Medical Faculty, Heinrich-Heine University Düsseldorf, 40225 Düsseldorf, Germany
- Frank A. W. Coumans
- Medical Cell Biophysics Group, Techmed Center, Faculty of Science and Technology, University of Twente, 7500 AE Enschede, The Netherlands
- Correspondence:
4. Cortada M, Sauteur L, Lanz M, Levano S, Bodmer D. A deep learning approach to quantify auditory hair cells. Hear Res 2021; 409:108317. PMID: 34343849; DOI: 10.1016/j.heares.2021.108317.
Abstract
Hearing loss affects millions of people worldwide, yet there are still no curative therapies for sensorineural hearing loss. Frequent causes of sensorineural hearing loss are damage to or loss of the sensory hair cells, the spiral ganglion neurons, or the synapses between them. Culturing the organ of Corti allows all of these structures to be studied in an experimental model that is easy to manipulate. The in vitro culture of the neonatal mammalian organ of Corti therefore remains a frequently used experimental system in which hair cell survival is routinely assessed. However, the surviving hair cells are commonly analyzed by manual counting, a time-consuming process whose inter-rater reliability can be an issue. Here, we describe a deep learning approach to quantify hair cell survival in murine organ of Corti explants. We used StarDist, a publicly available platform and plugin for Fiji (Fiji Is Just ImageJ), to train and apply our own custom deep learning model. We successfully validated the model in untreated, cisplatin-treated, and gentamicin-treated organ of Corti explants. Deep learning is thus a valuable approach for quantifying hair cell survival in organ of Corti explants, and we demonstrate how the publicly available Fiji plugin StarDist can be used efficiently for this purpose.
Affiliation(s)
- Maurizio Cortada
- Department of Biomedicine, University of Basel, Hebelstrasse 20, Basel 4031, Switzerland
- Loïc Sauteur
- Department of Biomedicine, University of Basel, Hebelstrasse 20, Basel 4031, Switzerland
- Michael Lanz
- Department of Biomedicine, University of Basel, Hebelstrasse 20, Basel 4031, Switzerland
- Soledad Levano
- Department of Biomedicine, University of Basel, Hebelstrasse 20, Basel 4031, Switzerland
- Daniel Bodmer
- Department of Biomedicine, University of Basel, Hebelstrasse 20, Basel 4031, Switzerland; Clinic for Otorhinolaryngology, Head and Neck Surgery, University of Basel Hospital, Petersgraben 4, Basel CH-4031, Switzerland
5. Mela CA, Liu Y. Application of convolutional neural networks towards nuclei segmentation in localization-based super-resolution fluorescence microscopy images. BMC Bioinformatics 2021; 22:325. PMID: 34130628; PMCID: PMC8204587; DOI: 10.1186/s12859-021-04245-x.
Abstract
BACKGROUND: Automated segmentation of nuclei in microscopic images has been conducted to enhance throughput in pathological diagnostics and biological research. Segmentation accuracy and speed have been significantly enhanced with the advent of convolutional neural networks. A barrier to the broad application of neural networks to nuclei segmentation is the necessity of training the network on a set of application-specific images and image labels. Previous works have attempted to create broadly trained networks for universal nuclei segmentation; however, such networks do not work on all imaging modalities, and the best results are still commonly obtained when the network is retrained on user-specific data. Stochastic optical reconstruction microscopy (STORM)-based super-resolution fluorescence microscopy has opened a new avenue to imaging nuclear architecture at nanoscale resolution. Due to the large size and discontinuous features typical of super-resolution images, automatic nuclei segmentation can be difficult. In this study, we apply commonly used networks (Mask R-CNN and U-Net architectures) to the task of segmenting super-resolution images of nuclei. First, we assess whether networks broadly trained on conventional fluorescence microscopy datasets can accurately segment super-resolution images. Then, we compare the resulting segmentations with results obtained using networks trained directly on our super-resolution data. Finally, we optimize and compare segmentation accuracy using three different neural network architectures.
RESULTS: Super-resolution images are not broadly compatible with neural networks trained on conventional bright-field or fluorescence microscopy images. When the networks were trained on super-resolution data, however, we attained nuclei segmentation accuracies (F1-Score) in excess of 0.8, comparable to past results for nuclei segmentation on conventional fluorescence microscopy images. Overall, we achieved the best results with the Mask R-CNN architecture.
CONCLUSIONS: Convolutional neural networks are powerful tools capable of accurately and quickly segmenting localization-based super-resolution microscopy images of nuclei. While broadly trained, widely applicable segmentation algorithms are desirable for quick use with minimal input, optimal results are still obtained when the network is both trained and tested on visually similar images. We provide a set of Colab notebooks to disseminate the software to the broad scientific community ( https://github.com/YangLiuLab/Super-Resolution-Nuclei-Segmentation ).
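The F1-Score used here for instance segmentation is typically computed by matching predicted masks to ground-truth masks at an IoU threshold. A minimal sketch of that metric; the greedy matching and the 0.5 threshold are common conventions, assumed rather than taken from the paper:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def f1_score(pred_masks, gt_masks, iou_thresh=0.5):
    """Instance-level F1: a prediction is a true positive if it overlaps
    an as-yet-unmatched ground-truth mask with IoU above the threshold
    (greedy first-match assignment)."""
    matched, tp = set(), 0
    for p in pred_masks:
        for j, g in enumerate(gt_masks):
            if j not in matched and iou(p, g) >= iou_thresh:
                matched.add(j)
                tp += 1
                break
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```

With one perfect prediction against two ground-truth nuclei, this gives tp=1, fp=0, fn=1, so F1 = 2/3; an optimal (Hungarian) assignment rather than greedy matching changes the score only in crowded, ambiguous cases.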
Affiliation(s)
- Christopher A Mela
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Biomedical Optical Imaging Laboratory, Departments of Medicine and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Yang Liu
- Biomedical Optical Imaging Laboratory, Departments of Medicine and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
6. Fazeli E, Roy NH, Follain G, Laine RF, von Chamier L, Hänninen PE, Eriksson JE, Henriques R, Jacquemet G. Automated cell tracking using StarDist and TrackMate. F1000Res 2020; 9:1279.
Abstract
The ability of cells to migrate is a fundamental physiological process involved in embryonic development, tissue homeostasis, immune surveillance, and wound healing. The mechanisms governing cellular locomotion have therefore been under intense scrutiny over the last 50 years. One of the main tools of this scrutiny is live-cell quantitative imaging, where researchers image cells over time, study their migration, and quantitatively analyze their dynamics by tracking them in the recorded images. Despite the availability of computational tools, manual tracking remains widely used among researchers due to the difficulty of setting up robust automated cell tracking and large-scale analysis. Here we provide a detailed analysis pipeline illustrating how the deep learning network StarDist can be combined with the popular tracking software TrackMate to perform 2D automated cell tracking with fully quantitative readouts. Our proposed protocol is compatible with both fluorescence and widefield images, requires only freely available, open-source software (ZeroCostDL4Mic and Fiji), and demands no coding knowledge from users, making it a versatile and powerful tool for the field. We demonstrate the pipeline's usability by automatically tracking cancer cells and T cells in fluorescence and brightfield images. Importantly, we provide, as supplementary information, a detailed step-by-step protocol to allow researchers to implement it with their own images.
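Conceptually, the tracking half of this pipeline links StarDist detections from frame to frame. TrackMate does this with a LAP (linear assignment problem) tracker; the greedy nearest-neighbour sketch below conveys the idea but is a deliberate simplification, not TrackMate's algorithm, and the distance threshold is an arbitrary illustration:

```python
import math

def link_frames(detections_t0, detections_t1, max_dist=15.0):
    """Greedily link (x, y) centroids between two consecutive frames:
    each detection in frame t0 claims its nearest unclaimed detection
    in frame t1, provided it lies within max_dist pixels."""
    links = []
    unused = set(range(len(detections_t1)))
    for i, (x0, y0) in enumerate(detections_t0):
        best, best_d = None, max_dist
        for j in sorted(unused):
            x1, y1 = detections_t1[j]
            d = math.hypot(x1 - x0, y1 - y0)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            unused.discard(best)
    return links

# Two cells drifting slightly to the right between frames
print(link_frames([(0, 0), (10, 10)], [(1, 0), (11, 10)]))  # → [(0, 0), (1, 1)]
```

A LAP tracker instead solves the assignment globally (and handles gap closing, splits, and merges), which is why the protocol recommends TrackMate rather than hand-rolled linking.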
Affiliation(s)
- Elnaz Fazeli
- Laboratory of Biophysics, Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
- Nathan H. Roy
- Department of Pathology and Laboratory Medicine, Children's Hospital of Philadelphia Research Institute, Philadelphia, PA 19104, USA
- Gautier Follain
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Cell Biology, Faculty of Science and Engineering, Åbo Akademi University, Turku, Finland
- Romain F. Laine
- MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- The Francis Crick Institute, London, UK
- Lucas von Chamier
- MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- Pekka E. Hänninen
- Laboratory of Biophysics, Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
- John E. Eriksson
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Cell Biology, Faculty of Science and Engineering, Åbo Akademi University, Turku, Finland
- Guillaume Jacquemet
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Cell Biology, Faculty of Science and Engineering, Åbo Akademi University, Turku, Finland
7. Rasse TM, Hollandi R, Horvath P. OpSeF: Open Source Python Framework for Collaborative Instance Segmentation of Bioimages. Front Bioeng Biotechnol 2020; 8:558880. PMID: 33117778; PMCID: PMC7576117; DOI: 10.3389/fbioe.2020.558880.
Abstract
Various pre-trained deep learning models for the segmentation of bioimages have been made available as developer-to-end-user solutions. They are optimized for ease of use and usually require neither knowledge of machine learning nor coding skills. However, individually testing these tools is tedious and success is uncertain. Here, we present the Open Segmentation Framework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts' knowledge of Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and postprocessing, and it facilitates benchmarking of multiple models in parallel. It streamlines the optimization of pre- and postprocessing parameters such that an available model can often be used without retraining. Even when sufficiently good results cannot be achieved this way, intermediate results can inform the analysts' selection of the most promising CNN architecture, in which the biomedical user might then invest the effort of manually labeling training data. We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful for illustrating common segmentation challenges as they prepare advanced users to gradually take over some of their tasks and complete their projects independently. The notebooks may also be used to explore the analysis options available within OpSeF interactively and to document and share final workflows. Currently, three mechanistically distinct CNN-based segmentation methods have been integrated within OpSeF: the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose. Adding new networks requires little coding; adding new models requires none. Thus, OpSeF might soon become an interactive model repository in which pre-trained models can be shared, evaluated, and reused with ease.
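The standard-inputs/standard-outputs design the abstract describes is essentially a pipeline of interchangeable stages. A toy sketch of that pattern; the function names are illustrative stand-ins, not OpSeF's actual API:

```python
def run_pipeline(image, preprocess, segment, postprocess):
    """Chain interchangeable stages over one image. Because each stage is
    a plain callable with standard inputs and outputs, segmentation
    backends (StarDist, Cellpose, a U-Net) can be benchmarked side by
    side by swapping only the `segment` step."""
    return postprocess(segment(preprocess(image)))

# Toy stages standing in for real implementations
def normalize(img):
    return [p / 255 for p in img]

def threshold(img, t=0.5):
    return [1 if p > t else 0 for p in img]

def count_objects(mask):
    return sum(mask)

print(run_pipeline([0, 200, 255, 10], normalize, threshold, count_objects))  # → 2
```

The point of the pattern is the fixed interface between stages, not any one stage's implementation: it is what lets an analyst tune pre- and postprocessing while trying several pre-trained models unchanged.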
Affiliation(s)
- Tobias M. Rasse
- Scientific Service Group Microscopy, Max Planck Institute for Heart and Lung Research, Bad Nauheim, Germany
- Réka Hollandi
- Synthetic and Systems Biology Unit, Biological Research Center (BRC), Szeged, Hungary
- Peter Horvath
- Synthetic and Systems Biology Unit, Biological Research Center (BRC), Szeged, Hungary
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Helsinki, Finland