1. Ma X, Huang J, Long M, Li X, Ye Z, Hu W, Yalikun Y, Wang D, Hu T, Mei L, Lei C. CellSAM: Advancing Pathologic Image Cell Segmentation via Asymmetric Large-Scale Vision Model Feature Distillation Aggregation Network. Microsc Res Tech 2025; 88:501-515. PMID: 39440549. DOI: 10.1002/jemt.24716.
Abstract
The Segment Anything Model (SAM) has attracted extensive interest as a potent large-scale image segmentation model, with prior efforts adapting it for use in medical imaging. However, the precise segmentation of cell nucleus instances remains a formidable challenge in computational pathology, given substantial morphological variations and the dense clustering of nuclei with unclear boundaries. This study presents an innovative cell segmentation algorithm named CellSAM, which has the potential to improve the effectiveness and precision of disease identification and therapy planning. As a variant of SAM, CellSAM integrates dual image encoders and employs techniques such as knowledge distillation and mask fusion. This model exhibits promising capabilities in capturing intricate cell structures and ensuring adaptability in resource-constrained scenarios. The experimental results indicate that this structure effectively enhances the quality and precision of cell segmentation. Remarkably, CellSAM demonstrates outstanding results even with minimal training data. In the evaluation of particular cell segmentation tasks, extensive comparative analyses show that CellSAM outperforms both general foundation models and state-of-the-art (SOTA) task-specific models. Comprehensive evaluation metrics yield scores of 0.884, 0.876, and 0.768 for mean accuracy, recall, and precision, respectively. Extensive experiments show that CellSAM excels in capturing subtle details and complex structures and is capable of segmenting cells in images accurately. Additionally, CellSAM demonstrates excellent performance on clinical data, indicating its potential for robust applications in treatment planning and disease diagnosis, thereby further improving the efficiency of computer-aided medicine.
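The distillation mechanism is only described at a high level in the abstract; as a hedged illustration (not the authors' implementation), the sketch below shows feature-level knowledge distillation from a frozen teacher image encoder to a lightweight student encoder in PyTorch. The encoder architecture, feature dimensions, and loss are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightEncoder(nn.Module):
    """Hypothetical small CNN standing in for a student image encoder."""
    def __init__(self, out_channels: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_channels, 3, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def feature_distillation_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Penalize the mismatch between student and teacher feature maps (MSE),
    resizing the teacher features to the student's spatial resolution."""
    teacher_feat = F.interpolate(teacher_feat, size=student_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
    return F.mse_loss(student_feat, teacher_feat)

if __name__ == "__main__":
    student = LightweightEncoder()
    teacher = LightweightEncoder()   # placeholder for a frozen large-scale encoder
    teacher.requires_grad_(False)
    images = torch.randn(2, 3, 256, 256)
    loss = feature_distillation_loss(student(images), teacher(images))
    loss.backward()                  # gradients flow only into the student
```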
Affiliation(s)
- Xiao Ma
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Jin Huang
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Mengping Long
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
  - Department of Pathology, Peking University Cancer Hospital, Beijing, China
- Xiaoxiao Li
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Zhaoyi Ye
  - School of Computer Science, Hubei University of Technology, Wuhan, China
- Wanting Hu
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Yaxiaer Yalikun
  - Division of Materials Science, Nara Institute of Science and Technology, Nara, Japan
- Du Wang
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Taobo Hu
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
  - Department of Breast Surgery, Peking University People's Hospital, Beijing, China
- Liye Mei
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
  - School of Computer Science, Hubei University of Technology, Wuhan, China
- Cheng Lei
  - The Institute of Technological Sciences, Wuhan University, Wuhan, China
  - Suzhou Institute of Wuhan University, Suzhou, China
  - Shenzhen Institute of Wuhan University, Shenzhen, China
2. Ramakanth S, Kennedy T, Yalcinkaya B, Neupane S, Tadic N, Buchler NE, Argüello-Miranda O. Deep learning-driven imaging of cell division and cell growth across an entire eukaryotic life cycle. bioRxiv [Preprint] 2024:2024.04.25.591211. PMID: 38712227. PMCID: PMC11071524. DOI: 10.1101/2024.04.25.591211.
Abstract
The life cycle of biomedical and agriculturally relevant eukaryotic microorganisms involves complex transitions between proliferative and non-proliferative states such as dormancy, mating, meiosis, and cell division. New drugs, pesticides, and vaccines can be created by targeting specific life cycle stages of parasites and pathogens. However, defining the structure of a microbial life cycle often relies on partial observations that are theoretically assembled into an idealized life cycle path. To create a more quantitative approach to studying complete eukaryotic life cycles, we generated a deep learning-driven imaging framework to track microorganisms across sexually reproducing generations. Our approach combines microfluidic culturing, life cycle stage-specific segmentation of microscopy images using convolutional neural networks, and a novel cell tracking algorithm, FIEST, based on enhancing the overlap of single-cell masks in consecutive images through deep learning video frame interpolation. As proof of principle, we used this approach to quantitatively image and compare cell growth and cell cycle regulation across the sexual life cycle of Saccharomyces cerevisiae. We developed a fluorescent reporter system based on a fluorescently labeled Whi5 protein, the yeast analog of mammalian Rb, and a new High-Cdk1 activity sensor, LiCHI, designed to report during DNA replication, mitosis, meiotic homologous recombination, meiosis I, and meiosis II. We found that cell growth preceded the exit from non-proliferative states such as mitotic G1, pre-meiotic G1, and the G0 spore state during germination. A decrease in the total cell concentration of Whi5 characterized the exit from non-proliferative states, consistent with a Whi5 dilution model. The nuclear accumulation of Whi5 was developmentally regulated, peaking during meiotic exit and spore formation. The temporal coordination of cell division and growth was not significantly different across three sexually reproducing generations. Our framework could be used to quantitatively characterize other single-cell eukaryotic life cycles that remain incompletely described. An off-the-shelf user interface, Yeastvision, provides free access to our image processing and single-cell tracking algorithms.
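FIEST itself (including its frame-interpolation stage) is not reproduced here; as a hedged sketch of the overlap-based linking idea the abstract describes, the snippet below greedily matches labeled cell masks between consecutive frames by intersection-over-union. The function name and IoU threshold are illustrative assumptions, not part of the published tool.

```python
import numpy as np

def greedy_iou_tracking(labels_t, labels_t1, iou_min=0.3):
    """Link labeled cell masks in frame t to frame t+1 by greedy best-IoU matching."""
    ids_t = [i for i in np.unique(labels_t) if i != 0]
    ids_t1 = [j for j in np.unique(labels_t1) if j != 0]
    # IoU matrix between every pair of masks in the two frames.
    iou = np.zeros((len(ids_t), len(ids_t1)))
    for a, i in enumerate(ids_t):
        m_i = labels_t == i
        for b, j in enumerate(ids_t1):
            m_j = labels_t1 == j
            inter = np.logical_and(m_i, m_j).sum()
            union = np.logical_or(m_i, m_j).sum()
            iou[a, b] = inter / union if union else 0.0
    matches = {}
    while iou.size and iou.max() >= iou_min:
        a, b = np.unravel_index(np.argmax(iou), iou.shape)
        matches[ids_t[a]] = ids_t1[b]
        iou[a, :] = -1  # exclude the matched pair from further consideration
        iou[:, b] = -1
    return matches  # {label in frame t: label in frame t+1}
```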
Affiliation(s)
- Shreya Ramakanth
  - Department of Plant and Microbial Biology, North Carolina State University
- Taylor Kennedy
  - Department of Plant and Microbial Biology, North Carolina State University
- Berk Yalcinkaya
  - Department of Plant and Microbial Biology, North Carolina State University
- Sandhya Neupane
  - Department of Plant and Microbial Biology, North Carolina State University
- Nika Tadic
  - Department of Plant and Microbial Biology, North Carolina State University
- Nicolas E Buchler
  - Department of Molecular Biomedical Sciences, North Carolina State University
3. Toma TT, Wang Y, Gahlmann A, Acton ST. DeepSeeded: Volumetric Segmentation of Dense Cell Populations with a Cascade of Deep Neural Networks in Bacterial Biofilm Applications. Expert Syst Appl 2024; 238:122094. PMID: 38646063. PMCID: PMC11027476. DOI: 10.1016/j.eswa.2023.122094.
Abstract
Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step for quantifying cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite their success, many of these techniques fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose a novel 3D cell segmentation approach, DeepSeeded, a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances the cell interior and border information using Euclidean distance transforms and detects the cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here allows segmenting touching cell instances from a dense, intensity-inhomogeneous microscopy image volume. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population, bacterial biofilms. Experimental results on synthetic and two real biofilm datasets suggest that the proposed method yields superior segmentation results compared to state-of-the-art deep learning methods and a classical method.
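The deep seed-prediction cascade is not shown here; as a minimal sketch of the final step the abstract names, the snippet below runs a classical seeded watershed in 3D with scikit-image, assuming the binary cell mask and seed map are already given (in the published method they would come from the networks). The variable names and toy data are illustrative only.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def seeded_watershed(cell_mask, seed_mask):
    """Split touching cells: label the predicted seeds, then grow them inside the mask."""
    markers, _ = ndi.label(seed_mask)                 # one integer label per predicted seed
    distance = ndi.distance_transform_edt(cell_mask)  # landscape for the watershed flooding
    return watershed(-distance, markers, mask=cell_mask)

# Toy 3D example with random blobs standing in for network outputs.
rng = np.random.default_rng(0)
mask = rng.random((32, 64, 64)) > 0.6
seeds = np.logical_and(mask, rng.random(mask.shape) > 0.98)
labels = seeded_watershed(mask, seeds)
```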
Affiliation(s)
- Tanjin Taher Toma
  - Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22904, USA
- Yibo Wang
  - Department of Chemistry, University of Virginia, Charlottesville, VA 22904, USA
- Andreas Gahlmann
  - Department of Chemistry, University of Virginia, Charlottesville, VA 22904, USA
  - Department of Molecular Physiology and Biological Physics, University of Virginia, Charlottesville, VA 22903, USA
- Scott T. Acton
  - Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22904, USA
4. Blöbaum L, Torello Pianale L, Olsson L, Grünberger A. Quantifying microbial robustness in dynamic environments using microfluidic single-cell cultivation. Microb Cell Fact 2024; 23:44. PMID: 38336674. PMCID: PMC10854032. DOI: 10.1186/s12934-024-02318-z.
Abstract
Background: Microorganisms must respond to changes in their environment. Analysing the robustness of functions (i.e. performance stability) under such dynamic perturbations is of great interest in both laboratory and industrial settings. Recently, a quantification method capable of assessing the robustness of various functions, such as specific growth rate or product yield, across different conditions, time frames, and populations has been developed for microorganisms grown in a 96-well plate. In microtiter plates, environmental change is slow and poorly defined. Dynamic microfluidic single-cell cultivation (dMSCC) enables the precise maintenance and manipulation of microenvironments while tracking single cells over time using live-cell imaging. Here, we combined dMSCC and a robustness quantification method into a pipeline for assessing performance stability under changes occurring within seconds or minutes.
Results: Saccharomyces cerevisiae CEN.PK113-7D, harbouring a biosensor for intracellular ATP levels, was exposed to glucose feast-starvation cycles, with each condition lasting from 1.5 to 48 min over a 20 h period. A semi-automated image and data analysis pipeline was developed and applied to assess the performance and robustness of various functions at population, subpopulation, and single-cell resolution. We observed a decrease in specific growth rate but an increase in intracellular ATP levels with longer oscillation intervals. Cells subjected to 48 min oscillations exhibited the highest average ATP content, but the lowest stability over time and the highest heterogeneity within the population.
Conclusion: The proposed pipeline enabled the investigation of function stability in dynamic environments, both over time and within populations. The strategy allows for parallelisation and automation, and is easily adaptable to new organisms, biosensors, cultivation conditions, and oscillation frequencies. Insights into the microbial response to changing environments will guide strain development and bioprocess optimisation.
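The exact robustness metric is not given in the abstract; as an illustrative stand-in (an assumption, not necessarily the formula used in the cited pipeline), the snippet below scores a function as more robust the less it varies relative to its mean across conditions, time points, or cells.

```python
import numpy as np

def robustness(values):
    """Illustrative robustness score: closer to 1 when a function (e.g. specific
    growth rate) varies little relative to its mean. Not necessarily the exact
    metric used in the cited pipeline."""
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    if mean == 0:
        return np.nan
    return 1.0 - values.std(ddof=1) / abs(mean)  # 1 = perfectly stable

growth_rates = [0.41, 0.39, 0.42, 0.38, 0.40]  # hypothetical per-condition values (1/h)
print(robustness(growth_rates))
```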
Affiliation(s)
- Luisa Blöbaum
  - Multiscale Bioengineering, Technical Faculty, Bielefeld University, Bielefeld, Germany
  - CeBiTec, Bielefeld University, Bielefeld, Germany
- Luca Torello Pianale
  - Industrial Biotechnology Division, Department of Life Sciences, Chalmers University of Technology, Gothenburg, Sweden
- Lisbeth Olsson
  - Industrial Biotechnology Division, Department of Life Sciences, Chalmers University of Technology, Gothenburg, Sweden
- Alexander Grünberger
  - Multiscale Bioengineering, Technical Faculty, Bielefeld University, Bielefeld, Germany
  - Microsystems in Bioprocess Engineering, Institute of Process Engineering in Life Sciences, Karlsruhe Institute of Technology, Karlsruhe, Germany
5. Gao Z, Li Y. Enhancing single-cell biology through advanced AI-powered microfluidics. Biomicrofluidics 2023; 17:051301. PMID: 37799809. PMCID: PMC10550334. DOI: 10.1063/5.0170050.
Abstract
Microfluidic technology has greatly benefited both fundamental biological research and translational clinical diagnosis through its high throughput, single-cell resolution, high integration, and wide accessibility. Despite the merits microfluidics has delivered over the last two decades, the current demand for intelligence in biomedicine urges microfluidic technology to process biological big data more efficiently and intelligently, and readout technologies based on the direct detection of optical or electrical signals cannot meet this requirement. The implementation of artificial intelligence (AI) in microfluidic technology is well matched to the large-scale data typically generated in high-throughput microfluidic assays. At the same time, AI is able to process the multimodal datasets obtained from versatile microfluidic devices, including images, videos, electrical signals, and sequences. Moreover, AI gives microfluidic technology the capability to understand and decipher the acquired datasets rather than simply collect them, which ultimately facilitates fundamental and translational research in many areas, including cell type discovery, cell signaling, single-cell genetics, and diagnosis. In this Perspective, we highlight recent advances in employing AI for single-cell biology and present an outlook on future directions with more advanced AI algorithms.
Affiliation(s)
- Zhaolong Gao
  - The Key Laboratory for Biomedical Photonics of MOE at Wuhan National Laboratory for Optoelectronics—Hubei Bioinformatics and Molecular Imaging Key Laboratory, Department of Biomedical Engineering, Systems Biology Theme, College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Yiwei Li
  - The Key Laboratory for Biomedical Photonics of MOE at Wuhan National Laboratory for Optoelectronics—Hubei Bioinformatics and Molecular Imaging Key Laboratory, Department of Biomedical Engineering, Systems Biology Theme, College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
6. Reich C, Prangemeier T, Francani AO, Koeppl H. An Instance Segmentation Dataset of Yeast Cells in Microstructures. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083295. DOI: 10.1109/embc40787.2023.10340268.
Abstract
Extracting single-cell information from microscopy data requires accurate instance-wise segmentations. Obtaining pixel-wise segmentations from microscopy imagery remains a challenging task, especially with the added complexity of microstructured environments. This paper presents a novel dataset for segmenting yeast cells in microstructures. We offer pixel-wise instance segmentation labels for both cells and trap microstructures. In total, we release 493 densely annotated microscopy images. To facilitate a unified comparison between novel segmentation algorithms, we propose a standardized evaluation strategy for our dataset. The aim of the dataset and evaluation strategy is to facilitate the development of new cell segmentation approaches. The dataset is publicly available at https://christophreich1996.github.io/yeast_in_microstructures_dataset/.
7. Siu DMD, Lee KCM, Chung BMF, Wong JSJ, Zheng G, Tsia KK. Optofluidic imaging meets deep learning: from merging to emerging. Lab Chip 2023; 23:1011-1033. PMID: 36601812. DOI: 10.1039/d2lc00813k.
Abstract
Propelled by the striking advances in optical microscopy and deep learning (DL), the role of imaging in lab-on-a-chip systems has been dramatically transformed from a siloed inspection tool into a quantitative "smart" engine. A suite of advanced optical microscopes now enables imaging over a range of spatial scales (from molecules to organisms) and temporal windows (from microseconds to hours). On the other hand, the staggering diversity of DL algorithms has revolutionized image processing and analysis at a scale and complexity that were once inconceivable. Recognizing these exciting but overwhelming developments, we provide a timely review of their latest trends in the context of lab-on-a-chip imaging, or optofluidic imaging as we term it here. More importantly, we discuss the strengths and caveats of adopting, reinventing, and integrating these imaging techniques and DL algorithms to tailor different lab-on-a-chip applications. In particular, we highlight three areas where the latest advances in lab-on-a-chip imaging and DL can form unique synergies: image formation, image analytics, and intelligent image-guided autonomous lab-on-a-chip systems. Despite the ongoing challenges, we anticipate that these will represent the next frontiers in lab-on-a-chip imaging and spearhead new capabilities in advancing analytical chemistry research, accelerating biological discovery, and empowering new intelligent clinical applications.
Affiliation(s)
- Dickson M D Siu
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
- Kelvin C M Lee
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
- Bob M F Chung
  - Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Justin S J Wong
  - Conzeb Limited, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Guoan Zheng
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT, USA
- Kevin K Tsia
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
  - Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
8. Du X, Chen Z, Li Q, Yang S, Jiang L, Yang Y, Li Y, Gu Z. Organoids revealed: morphological analysis of the profound next generation in-vitro model with artificial intelligence. Biodes Manuf 2023; 6:319-339. PMID: 36713614. PMCID: PMC9867835. DOI: 10.1007/s42242-022-00226-y.
Abstract
In modern terminology, "organoids" refer to cells that grow in a specific three-dimensional (3D) environment in vitro, sharing similar structures with their source organs or tissues. Observing the morphology or growth characteristics of organoids through a microscope is a commonly used method of organoid analysis. However, screening and analyzing organoids purely by hand is difficult, time-consuming, and inaccurate, a problem that cannot easily be solved with traditional technology. Artificial intelligence (AI) has proven effective in many biological and medical research fields, especially in the analysis of single-cell data or hematoxylin/eosin-stained tissue slices. When used to analyze organoids, AI should likewise provide more efficient, quantitative, accurate, and fast solutions. In this review, we first briefly outline the application areas of organoids and then discuss the shortcomings of traditional organoid measurement and analysis methods. Second, we summarize the development from machine learning to deep learning and the advantages of the latter, and then describe how a convolutional neural network can be used to address the challenges in organoid observation and analysis. Finally, we discuss the limitations of current AI used in organoid research, as well as opportunities and future research directions.
Affiliation(s)
- Xuan Du
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Zaozao Chen
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Qiwei Li
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Sheng Yang
  - Key Laboratory of Environmental Medicine Engineering, Ministry of Education, School of Public Health, Southeast University, Nanjing 210009, China
- Lincao Jiang
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yi Yang
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yanhui Li
  - State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210008, China
- Zhongze Gu
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
9. Eldem H, Ülker E, Yaşar Işıklı O. Encoder–decoder semantic segmentation models for pressure wound images. Imaging Sci J 2023. DOI: 10.1080/13682199.2022.2163531.
Affiliation(s)
- Hüseyin Eldem
  - Vocational School of Technical Sciences, Computer Technologies Department, Karamanoğlu Mehmetbey University, Karaman, Turkey
- Erkan Ülker
  - Faculty of Engineering and Natural Sciences, Department of Computer Engineering, Konya Technical University, Konya, Turkey
- Osman Yaşar Işıklı
  - Karaman Education and Research Hospital, Vascular Surgery Department, Karaman, Turkey
10. Aspert T, Hentsch D, Charvin G. DetecDiv, a generalist deep-learning platform for automated cell division tracking and survival analysis. eLife 2022; 11:79519. PMID: 35976090. PMCID: PMC9444243. DOI: 10.7554/elife.79519.
Abstract
Automating the extraction of meaningful temporal information from sequences of microscopy images represents a major challenge in characterizing dynamic biological processes. So far, strong limitations in the ability to quantitatively analyze single-cell trajectories have prevented large-scale investigations of the dynamics of entry into replicative senescence in yeast. Here, we have developed DetecDiv, a microfluidic-based image acquisition platform combined with deep learning-based software for high-throughput single-cell division tracking. We show that DetecDiv can automatically reconstruct cellular replicative lifespans with high accuracy and performs similarly across various imaging platforms and microfluidic trap geometries. In addition, this methodology provides comprehensive temporal cellular metrics using time-series classification and image semantic segmentation. Last, we show that this method can be further applied to automatically quantify the dynamics of cellular adaptation and real-time cell survival upon exposure to environmental stress. Hence, this methodology provides an all-in-one toolbox for high-throughput phenotyping of cell cycle, stress response, and replicative lifespan assays.
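The classifiers themselves are not sketched here; as a hedged illustration of one downstream step, the snippet below counts division events from a per-frame class sequence (such as a time-series classifier might output) to derive a replicative lifespan. The label convention and function name are assumptions, not part of the published software.

```python
import numpy as np

def count_divisions(frame_classes, division_label=1):
    """Count division events from a per-frame class sequence; the label
    convention here is assumed for illustration."""
    frame_classes = np.asarray(frame_classes)
    is_div = frame_classes == division_label
    # A new event starts wherever a division frame follows a non-division frame.
    starts = np.logical_and(is_div[1:], ~is_div[:-1])
    return int(starts.sum() + (1 if is_div[0] else 0))

print(count_divisions([0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]))  # -> 3
```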
Affiliation(s)
- Théo Aspert
  - Department of Developmental Biology and Stem Cells, Institute of Genetics and Molecular and Cellular Biology, Illkirch, France
- Didier Hentsch
  - Department of Developmental Biology and Stem Cells, Institute of Genetics and Molecular and Cellular Biology, Illkirch, France
- Gilles Charvin
  - Department of Developmental Biology and Stem Cells, Institute of Genetics and Molecular and Cellular Biology, Illkirch, France
11. Ruz GA, Ashlock D, Allmendinger R, Fogel GB. Editorial: 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (IEEE CIBCB 2020). Biosystems 2022; 218:104698. PMID: 35568273. DOI: 10.1016/j.biosystems.2022.104698.
Affiliation(s)
- Gonzalo A Ruz
  - Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile
  - Center of Applied Ecology and Sustainability (CAPES), Santiago, Chile
- Daniel Ashlock
  - Department of Mathematics and Statistics, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Gary B Fogel
  - Natural Selection, Inc., 6480 Weathers Place, Suite 350, San Diego, CA 92121, USA