1
Bao R, Weiss RJ, Bates SV, Song Y, He S, Li J, Bjornerud A, Hirschtick RL, Grant PE, Ou Y. PARADISE: Personalized and regional adaptation for HIE disease identification and segmentation. Med Image Anal 2025;102:103419. PMID: 40147073. DOI: 10.1016/j.media.2024.103419.
Abstract
Hypoxic ischemic encephalopathy (HIE) is a brain dysfunction occurring in approximately 1-5 per 1000 term-born neonates. Accurate segmentation of HIE lesions in brain MRI is crucial for diagnosis and prognosis but presents a unique challenge due to the diffuse and small nature of these abnormalities, which has resulted in a substantial gap between the performance of machine learning-based segmentation methods and clinical expert annotations for HIE. To address this challenge, we introduce ParadiseNet, an algorithm specifically designed for HIE lesion segmentation. ParadiseNet incorporates global-local learning, progressive uncertainty learning, and self-evolution learning modules, all inspired by clinical interpretation of neonatal brain MRIs. These modules target unbalanced data distribution, boundary uncertainty, and imprecise lesion detection, respectively. Extensive experiments demonstrate that ParadiseNet significantly enhances the detection accuracy of small (<1%) lesions in HIE, achieving improvements of over 4% in Dice and 6% in NSD compared with U-Net and other general medical image segmentation algorithms.
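For readers unfamiliar with the reported metrics, the Dice coefficient measures the volumetric overlap between a predicted mask and a reference annotation. A minimal NumPy sketch (not the authors' implementation; the function name, `eps` smoothing term, and toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|).
    Returns 1.0 for perfect agreement, 0.0 for no overlap."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

An improvement of "over 4% in Dice" on small lesions is substantial, since a lesion occupying a tiny fraction of the volume leaves very little margin for boundary errors in this ratio.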
Affiliation(s)
- Rina Bao
- Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Sheng He
- Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Jingpeng Li
- Boston Children's Hospital, Boston, MA, USA; Oslo University Hospital and University of Oslo, Norway
- Randy L Hirschtick
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA
- P Ellen Grant
- Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Yangming Ou
- Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
2
Annasamudram N, Zhao J, Oluwadare O, Prashanth A, Makrogiannis S. Scale selection and machine learning based cell segmentation and tracking in time lapse microscopy. Sci Rep 2025;15:11717. PMID: 40188205. PMCID: PMC11972337. DOI: 10.1038/s41598-025-95993-w.
Abstract
Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is very time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by variability of cell region intensity distributions and by resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated the cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC), and compared our technique to top-performing techniques from CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques, and that it generalizes very well to diverse cell types and sizes and to multiple imaging techniques. The code of our method is publicly available at https://github.com/smakrogi/CSTQ_Pub/ (release v.3.2).
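The paper's graph-based track generation is considerably more elaborate, but the core operation of linking cell detections across consecutive frames can be sketched as a greedy nearest-neighbour assignment whose accepted pairs become edges of a track graph. This is an illustrative simplification, not the authors' algorithm; the function name and `max_dist` gating threshold are assumptions:

```python
import numpy as np

def link_frames(centroids_t, centroids_t1, max_dist=20.0):
    """Greedily link cell centroids in frame t to centroids in frame t+1.
    Returns (i, j) index pairs: detection i continues as detection j.
    Unmatched detections model cell appearance/disappearance events."""
    edges = []
    used = set()  # centroids in frame t+1 already claimed by a track
    for i, c in enumerate(centroids_t):
        dists = [np.linalg.norm(np.subtract(c, c1)) for c1 in centroids_t1]
        for j in np.argsort(dists):  # try closest candidates first
            j = int(j)
            if dists[j] <= max_dist and j not in used:
                edges.append((i, j))
                used.add(j)
                break
    return edges
```

In a full tracker these per-frame edges are accumulated into a directed graph over the whole sequence, where branch points correspond to division events.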
Affiliation(s)
- Nagasoujanya Annasamudram
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Jian Zhao
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Olaitan Oluwadare
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Aashish Prashanth
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Sokratis Makrogiannis
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
3
Toubal IE, Al-Shakarji N, Cornelison DDW, Palaniappan K. Ensemble Deep Learning Object Detection Fusion for Cell Tracking, Mitosis, and Lineage. IEEE Open Journal of Engineering in Medicine and Biology 2023;5:443-458. PMID: 39906165. PMCID: PMC11793856. DOI: 10.1109/ojemb.2023.3288470.
Abstract
Cell tracking and motility analysis are essential for understanding multicellular processes, automated quantification in biomedical experiments, and medical diagnosis and treatment. However, manual tracking is labor-intensive, tedious, and prone to selection bias and errors. Building upon our previous work, we propose a new deep learning-based method, EDNet, for cell detection, tracking, and motility analysis that is more robust to shape variation across different cell lines and that models cell lineage and proliferation. EDNet uses an ensemble approach for 2D cell detection that is deep-architecture-agnostic and achieves state-of-the-art performance, surpassing single-model YOLO and Faster R-CNN convolutional neural networks. EDNet detections are used in our M2Track multi-object tracking algorithm for tracking cells, detecting cell mitosis (cell division) events, and building cell lineage graphs. Our methods produce state-of-the-art performance on the Cell Tracking and Mitosis (CTMCv1) dataset, with a Multiple Object Tracking Accuracy (MOTA) score of 50.6% and a lineage graph edit tracking (TRA) score of 52.5%. Additionally, we compare our detection and tracking methods to human performance on external data in studying the motility of muscle stem cells under different physiological and molecular stimuli. We believe that our method has the potential to improve the accuracy and efficiency of cell tracking and motility analysis, which could lead to significant advances in biomedical research and medical diagnosis. Our code is publicly available on GitHub.
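EDNet's fusion strategy is specific to its detector ensemble, but the general pattern behind ensemble object-detection fusion can be sketched as IoU-based grouping of overlapping boxes from multiple detectors followed by coordinate averaging. This is a deliberately simplified stand-in (the function names, threshold, and unweighted averaging are my assumptions, not the authors' method):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(box_lists, iou_thr=0.5):
    """Merge box lists from several detectors: boxes overlapping above
    iou_thr are grouped and replaced by their coordinate-wise mean."""
    boxes = [b for lst in box_lists for b in lst]
    fused, used = [], [False] * len(boxes)
    for i, b in enumerate(boxes):
        if used[i]:
            continue
        group, used[i] = [b], True
        for j in range(i + 1, len(boxes)):
            if not used[j] and iou(b, boxes[j]) >= iou_thr:
                group.append(boxes[j])
                used[j] = True
        fused.append(np.mean(group, axis=0).tolist())
    return fused
```

Production systems typically weight the average by detector confidence rather than merging uniformly; the grouping step is the architecture-agnostic part.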
Affiliation(s)
- Imad Eddine Toubal
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Noor Al-Shakarji
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- D. D. W. Cornelison
- Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO 65211, USA
- Kannappan Palaniappan
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
4
Wu L, Chen A, Salama P, Winfree S, Dunn KW, Delp EJ. NISNet3D: three-dimensional nuclear synthesis and instance segmentation for fluorescence microscopy images. Sci Rep 2023;13:9533. PMID: 37308499. PMCID: PMC10261124. DOI: 10.1038/s41598-023-36243-9.
Abstract
The primary step in tissue cytometry is the automated distinction of individual cells (segmentation). Since cell borders are seldom labeled, cells are generally segmented by their nuclei. While tools have been developed for segmenting nuclei in two dimensions, segmentation of nuclei in three-dimensional volumes remains a challenging task. The lack of effective methods for three-dimensional segmentation represents a bottleneck in realizing the potential of tissue cytometry, particularly as methods of tissue clearing present the opportunity to characterize entire organs. Methods based on deep learning have shown enormous promise, but their implementation is hampered by the need for large amounts of manually annotated training data. In this paper, we describe the 3D Nuclei Instance Segmentation Network (NISNet3D), which directly segments 3D volumes through the use of a modified 3D U-Net, a 3D marker-controlled watershed transform, and a nuclei instance segmentation system for separating touching nuclei. NISNet3D is unique in that it provides accurate segmentation of even challenging image volumes using a network trained on large amounts of synthetic nuclei derived from relatively few annotated volumes, or on synthetic data obtained without annotated volumes. We present a quantitative comparison of results obtained from NISNet3D with results obtained from a variety of existing nuclei segmentation techniques. We also examine the performance of the methods when no ground truth is available and only synthetic volumes are used for training.
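The marker-controlled watershed mentioned here separates touching nuclei by letting each marker claim the foreground voxels nearest to it. As a simplified 2D stand-in for that idea (not NISNet3D code, and omitting the intensity-ordered flooding of a true watershed), seeded breadth-first region growing shows how competing seeds split a merged foreground mask:

```python
from collections import deque
import numpy as np

def seeded_region_grow(mask, seeds):
    """Grow integer labels outward from seed pixels over a boolean
    foreground mask (4-connected BFS). Touching objects are split along
    the boundary where waves from different seeds meet."""
    labels = np.zeros(mask.shape, dtype=int)
    q = deque()
    for lab, s in enumerate(seeds, start=1):
        labels[s] = lab
        q.append(s)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and labels[ny, nx] == 0):
                labels[ny, nx] = labels[y, x]
                q.append((ny, nx))
    return labels
```

In the marker-controlled variant the flood order follows image intensity (or a distance transform) rather than plain BFS distance, which is what places the split at the true valley between nuclei.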
Affiliation(s)
- Liming Wu
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
- Alain Chen
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
- Paul Salama
- Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA
- Seth Winfree
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE 68198, USA
- Kenneth W Dunn
- School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Edward J Delp
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
5
Maška M, Ulman V, Delgado-Rodriguez P, Gómez-de-Mariscal E, Nečasová T, Guerrero Peña FA, Ren TI, Meyerowitz EM, Scherr T, Löffler K, Mikut R, Guo T, Wang Y, Allebach JP, Bao R, Al-Shakarji NM, Rahmon G, Toubal IE, Palaniappan K, Lux F, Matula P, Sugawara K, Magnusson KEG, Aho L, Cohen AR, Arbelle A, Ben-Haim T, Raviv TR, Isensee F, Jäger PF, Maier-Hein KH, Zhu Y, Ederra C, Urbiola A, Meijering E, Cunha A, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solórzano C. The Cell Tracking Challenge: 10 years of objective benchmarking. Nat Methods 2023. PMID: 37202537. PMCID: PMC10333123. DOI: 10.1038/s41592-023-01879-y.
Abstract
The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
Affiliation(s)
- Martin Maška
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Vladimír Ulman
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Pablo Delgado-Rodriguez
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Estibaliz Gómez-de-Mariscal
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Tereza Nečasová
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Fidel A Guerrero Peña
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Tsang Ing Ren
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Elliot M Meyerowitz
- Division of Biology and Biological Engineering and Howard Hughes Medical Institute, California Institute of Technology, Pasadena, CA, USA
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Tianqi Guo
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Yin Wang
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Jan P Allebach
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Rina Bao
- Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Noor M Al-Shakarji
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Gani Rahmon
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Imad Eddine Toubal
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Kannappan Palaniappan
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Filip Lux
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Petr Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Layton Aho
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Andrew R Cohen
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Assaf Arbelle
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tal Ben-Haim
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tammy Riklin Raviv
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Paul F Jäger
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Griffith University, Nathan, Queensland, Australia
- Cristina Ederra
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Ainhoa Urbiola
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Alexandre Cunha
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Arrate Muñoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Carlos Ortiz-de-Solórzano
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
6
Zhu Y, Yin X, Meijering E. A Compound Loss Function With Shape Aware Weight Map for Microscopy Cell Segmentation. IEEE Trans Med Imaging 2023;42:1278-1288. PMID: 36455082. DOI: 10.1109/tmi.2022.3226226.
Abstract
Microscopy cell segmentation is a crucial yet challenging step in biological image analysis. In recent years, deep learning has been widely used to tackle this task, with promising results. A critical aspect of training complex neural networks for this purpose is the selection of the loss function, as it affects the learning process. In the field of cell segmentation, most recent research on improving the loss function focuses on addressing the problem of inter-class imbalance. Despite promising achievements, more work is needed, as the challenge of cell segmentation involves not only inter-class imbalance but also intra-class imbalance (the cost imbalance between the false positives and false negatives of the inference model), the segmentation of cell minutiae, and missing annotations. To deal with these challenges, in this paper we propose a new compound loss function employing a shape-aware weight map. The proposed loss function is inspired by Youden's J index to handle the problem of inter-class imbalance and uses a focal cross-entropy term to penalize the intra-class imbalance and weight easy/hard samples. The proposed shape-aware weight map can handle the problem of missing annotations and facilitate valid segmentation of cell minutiae. Results of evaluations on all ten 2D+time datasets from the public Cell Tracking Challenge demonstrate 1) the superiority of the proposed loss function with the shape-aware weight map, and 2) that the performance of recent deep learning-based cell segmentation methods can be improved by using the proposed compound loss function.
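The focal cross-entropy term mentioned here down-weights well-classified pixels by a factor (1 - p_t)^gamma so the loss concentrates on hard examples. A minimal NumPy version of the binary case (the paper's full compound loss also includes the Youden-inspired term and the shape-aware weight map, which are not reproduced here; the function name and defaults are mine):

```python
import numpy as np

def focal_bce(p, y, gamma=2.0, eps=1e-7):
    """Focal binary cross-entropy over predicted probabilities p and
    binary labels y. gamma=0 reduces to plain cross-entropy; larger
    gamma suppresses the contribution of easy (high-p_t) pixels."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    pt = np.where(np.asarray(y) == 1, p, 1 - p)  # prob. of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))
```

In a segmentation setting `p` and `y` are per-pixel maps, and the shape-aware weight map would multiply the per-pixel terms before averaging.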
7
Human Monkeypox Classification from Skin Lesion Images with Deep Pre-trained Network using Mobile Application. J Med Syst 2022;46:79. PMID: 36210365. PMCID: PMC9548428. DOI: 10.1007/s10916-022-01863-7.
Abstract
Recently, human monkeypox outbreaks have been reported in many countries. According to reports and studies, quick identification and isolation of infected people are essential to reduce the spread rate. This study presents an Android mobile application that uses deep learning to assist in this task. The application has been developed with Android Studio using the Java programming language and Android SDK 12. Video images gathered through the mobile device's camera are dispatched to a deep convolutional neural network that runs on the same device. The Camera2 API of the Android platform has been used for camera access and operations. The network then classifies images as positive or negative for monkeypox detection. The network's training has been carried out using skin lesion images of monkeypox-infected people and other skin lesion images. For this purpose, a publicly available dataset and a deep transfer learning approach have been used. All training and testing steps have been applied in Matlab using different pre-trained networks. Then, the network with the best accuracy has been recreated and trained using TensorFlow. The TensorFlow model has been adapted to mobile devices by converting it to a TensorFlow Lite model, which has then been embedded into the mobile application together with the TensorFlow Lite library for monkeypox detection. The application has been run successfully on three devices, with average inference times of 197 ms, 91 ms, and 138 ms observed at run time. The presented system allows people with body lesions to quickly make a preliminary diagnosis, so monkeypox-infected people can be encouraged to act rapidly to see an expert for a definitive diagnosis. According to the test results, the system can classify the images with 91.11% accuracy. In addition, the proposed mobile application can be trained for the preliminary diagnosis of other skin diseases.
8
Ufuktepe DK, Yang F, Kassim YM, Yu H, Maude RJ, Palaniappan K, Jaeger S. Deep Learning-Based Cell Detection and Extraction in Thin Blood Smears for Malaria Diagnosis. Proceedings of the IEEE Applied Imagery Pattern Recognition Workshop (AIPR) 2021:9762109. PMID: 36483328. PMCID: PMC7613898. DOI: 10.1109/aipr52630.2021.9762109.
Abstract
Malaria is a major health threat caused by Plasmodium parasites that infect the red blood cells. Two predominant types of Plasmodium parasites are Plasmodium vivax (P. vivax) and Plasmodium falciparum (P. falciparum). Diagnosis of malaria typically involves visual microscopy examination of blood smears for malaria parasites. This is a tedious, error-prone visual inspection task requiring microscopy expertise, which is often lacking in resource-poor settings. To address these problems, attempts have been made in recent years to automate malaria diagnosis using machine learning approaches. Several challenges need to be met for a machine learning approach to be successful in malaria diagnosis. Microscopy images acquired at different sites often vary in color, contrast, and consistency because of different smear preparation and staining methods. Moreover, touching and overlapping cells complicate the red blood cell detection process, which can lead to inaccurate blood cell counts and thus incorrect parasitemia calculations. In this work, we propose a red blood cell detection and extraction framework to enable processing and analysis of single cells for follow-up processes such as counting infected cells or identifying parasite species in thin blood smears. This framework consists of two modules: a cell detection module and a cell extraction module. The cell detection module trains a modified Channel-wise Feature Pyramid Network for Medicine (CFPNet-M) deep learning network that takes the green channel of the image and the color-deconvolution processed image as inputs and learns a truncated distance transform image of the cell annotations. CFPNet-M is chosen for its low resource requirements, while the distance transform allows more accurate cell counts for dense cells. Once the cells are detected by the network, the cell extraction module is used to extract single cells from the original image and count the number of cells. Our preliminary results, based on 193 patients (148 infected with P. falciparum and 45 uninfected), show that our framework achieves a cell count accuracy of 92.2%.
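The truncated distance transform used as the CFPNet-M regression target replaces each binary cell annotation with the capped distance to the nearest background pixel, so the peaks of adjacent cells remain distinct even when the cells touch. A brute-force sketch for small masks (illustrative only; the authors' implementation is not given in the abstract, and the `cap` value is an assumption):

```python
import numpy as np

def truncated_distance_map(mask, cap=5.0):
    """For each foreground pixel, the Euclidean distance to the nearest
    background pixel, truncated at `cap`. Background pixels stay 0.
    O(fg * bg) brute force: fine for toy inputs, not production use."""
    mask = np.asarray(mask, dtype=bool)
    out = np.zeros(mask.shape, dtype=float)
    bg = np.argwhere(~mask)
    if bg.size == 0:          # no background anywhere: everything capped
        out[:] = cap
        return out
    for p in np.argwhere(mask):
        d = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()
        out[tuple(p)] = min(d, cap)
    return out
```

Counting local maxima of such a map recovers one peak per cell, which is how a distance-transform target yields more accurate counts for dense, touching cells than a plain binary mask.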
Affiliation(s)
- Feng Yang
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Yasmin M. Kassim
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Hang Yu
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Richard J. Maude
- Mahidol Oxford Tropical Medicine Research Unit, Mahidol University, Bangkok, Thailand
- Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK
- Harvard TH Chan School of Public Health, Harvard University, Boston, MA, USA
- Stefan Jaeger
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA