1. Nguyen KC, Jameson CD, Baldwin SA, Nardini JT, Smith RC, Haugh JM, Flores KB. Quantifying collective motion patterns in mesenchymal cell populations using topological data analysis and agent-based modeling. Math Biosci 2024; 370:109158. PMID: 38373479. DOI: 10.1016/j.mbs.2024.109158.
Abstract
Fibroblasts in a confluent monolayer are known to adopt elongated morphologies in which cells are oriented parallel to their neighbors. We collected and analyzed new microscopy movies to show that confluent fibroblasts are motile and that neighboring cells often move in anti-parallel directions in a collective motion phenomenon we refer to as "fluidization" of the cell population. We used machine learning to perform cell tracking for each movie and then leveraged topological data analysis (TDA) to show that time-varying point-clouds generated by the tracks contain significant topological information content that is driven by fluidization, i.e., the anti-parallel movement of individual neighboring cells and neighboring groups of cells over long distances. We then utilized the TDA summaries extracted from each movie to perform Bayesian parameter estimation for the D'Orsogna model, an agent-based model (ABM) known to produce a wide array of different patterns, including patterns that are qualitatively similar to fluidization. Although the D'Orsogna ABM is a phenomenological model that only describes inter-cellular attraction and repulsion, the estimated region of D'Orsogna model parameter space was consistent across all movies, suggesting that a specific level of inter-cellular repulsion force at close range may be a mechanism that helps drive fluidization patterns in confluent mesenchymal cell populations.
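The abstract does not spell out the TDA computation itself. As a generic illustration of the simplest topological summary involved (not the paper's actual pipeline; function name and scope are ours), the sketch below computes the 0-dimensional persistence "death" scales of a point cloud, which coincide with the edge lengths of a Euclidean minimum spanning tree built by Prim's algorithm:

```python
import math

def h0_persistence(points):
    """0-dimensional persistence of a point cloud: every connected component
    is born at scale 0 and dies when it merges with another component; those
    merge scales are exactly the edge lengths of a Euclidean minimum
    spanning tree, computed here with Prim's algorithm."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest connection from the tree to each point
    best[0] = 0.0
    deaths = []
    for step in range(n):
        # pick the point closest to the current tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if step > 0:
            deaths.append(best[u])  # scale at which u's component merged
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return sorted(deaths)
```

Two tight clusters yield several small merge scales and one large one; persistence-based summaries of time-varying point clouds, such as those used in the paper, build on this kind of barcode information.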
Affiliation(s)
- Kyle C Nguyen
- Biomathematics Graduate Program, North Carolina State University, Raleigh, NC 27607, USA; Center for Research in Scientific Computation, North Carolina State University, Raleigh, NC 27607, USA.
- Scott A Baldwin
- Department of Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, NC 27695, USA
- John T Nardini
- Department of Mathematics and Statistics, The College of New Jersey, Ewing, NJ 08628, USA
- Ralph C Smith
- Department of Mathematics, North Carolina State University, Raleigh, NC 27607, USA
- Jason M Haugh
- Department of Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Kevin B Flores
- Center for Research in Scientific Computation, North Carolina State University, Raleigh, NC 27607, USA; Department of Mathematics, North Carolina State University, Raleigh, NC 27607, USA
2. Liu Z, Cai Y, Tang Q. Nuclei detection in breast histopathology images with iterative correction. Med Biol Eng Comput 2024; 62:465-478. PMID: 37914958. DOI: 10.1007/s11517-023-02947-3.
Abstract
This work presents a deep network architecture to improve nuclei detection performance and achieve high localization accuracy of nuclei in breast cancer histopathology images. The proposed model consists of two parts: a nuclear candidate generation module and a nuclear localization refinement module. We first design a novel patch learning method to obtain high-quality nuclear candidates, in which location representations are added to the patch information alongside categories to implement multi-task learning of nuclear classification and localization; meanwhile, a deep supervision mechanism is introduced to obtain coherent contributions from each scale layer. To refine nuclear localization, we propose an iterative correction strategy that moves the prediction progressively closer to the ground truth, which significantly improves the accuracy of nuclear localization and facilitates neighborhood size selection in the non-maximum suppression step. Experimental results demonstrate the superior performance of our method for nuclei detection on the H&E-stained histopathological image dataset as compared to previous state-of-the-art methods, especially for the detection of multiple cluttered nuclei.
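The abstract describes iterative correction only at a high level. The toy sketch below illustrates the general idea of repeatedly adding a predicted offset to a location estimate; `toy_refiner`, the 0.5 step fraction, and the target location are all hypothetical stand-ins for the paper's learned correction network:

```python
def iterative_correction(x0, y0, refiner, n_iters=5):
    """Refine a predicted nucleus location by repeatedly adding a predicted
    correction offset; `refiner` stands in for a learned offset regressor."""
    x, y = x0, y0
    for _ in range(n_iters):
        dx, dy = refiner(x, y)
        x, y = x + dx, y + dy
    return x, y

def toy_refiner(x, y):
    """Hypothetical 'learned' refiner: predicts half the remaining offset to
    a fictitious ground-truth location (100, 40)."""
    return 0.5 * (100 - x), 0.5 * (40 - y)
```

Each pass halves the remaining localization error, so a handful of iterations brings the estimate close to the target; in the paper the offsets come from the network rather than from a known ground truth.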
Affiliation(s)
- Ziyi Liu
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China
- Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, 264001, People's Republic of China
- Yu Cai
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China
- Qiling Tang
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China
3. Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. PMID: 37928897. PMCID: PMC10622844. DOI: 10.1016/j.jpi.2023.100335.
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology, such as automated image analysis, to extract diagnostic information from WSI and improve pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features support several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
4. Miranda Ruiz F, Lahrmann B, Bartels L, Krauthoff A, Keil A, Härtel S, Tao AS, Ströbel P, Clarke MA, Wentzensen N, Grabe N. CNN stability training improves robustness to scanner and IHC-based image variability for epithelium segmentation in cervical histology. Front Med (Lausanne) 2023; 10:1173616. PMID: 37476610. PMCID: PMC10354251. DOI: 10.3389/fmed.2023.1173616.
Abstract
Background: In digital pathology, image properties such as color, brightness, contrast and blurriness may vary based on the scanner and sample preparation. Convolutional Neural Networks (CNNs) are sensitive to these variations and may underperform on images from a different domain than the one used for training. Robustness to these image property variations is required to enable the use of deep learning in clinical practice and large-scale clinical research. Aims: CNN Stability Training (CST) is proposed and evaluated as a method to increase CNN robustness to scanner and Immunohistochemistry (IHC)-based image variability. Methods: CST was applied to segment epithelium in immunohistological cervical Whole Slide Images (WSIs). CST randomly distorts input tiles and factors the difference between the CNN predictions for the original and distorted inputs into the loss function. CNNs were trained using 114 p16-stained WSIs from the same scanner, and evaluated on 6 WSI test sets, each with 23 to 24 WSIs of the same tissue but different scanner/IHC combinations. Relative robustness (rAUC) was measured as the difference between the AUC on the training domain test set (i.e., baseline test set) and the remaining test sets. Results: Across all test sets, CST models outperformed "No CST" models (AUC: 0.940-0.989 vs. 0.905-0.986, p < 1e-8) and showed improved robustness (rAUC: [-0.038, -0.003] vs. [-0.081, -0.002]). At a WSI level, CST models showed an increase in performance in 124 of the 142 WSIs. CST models also outperformed models trained with random on-the-fly data augmentation (DA) in all test sets ([0.002, 0.021], p < 1e-6). Conclusion: CST offers a path to improve CNN performance without the need for more data and allows customizing distortions to specific use cases. A Python implementation of CST is publicly available at https://github.com/TIGACenter/CST_v1.
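The abstract describes the CST loss only verbally. A minimal numpy sketch of the general stability-training idea (a task loss on the clean prediction plus a penalty on the divergence between clean and distorted predictions) might look like the following; the squared-error terms and the weight `alpha` are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def stability_training_loss(pred_clean, pred_distorted, target, alpha=1.0):
    """Combined loss: task loss on the clean prediction plus a stability
    penalty tying the prediction on a distorted copy of the input to the
    clean prediction (both terms are stand-in mean squared errors)."""
    task = float(np.mean((pred_clean - target) ** 2))
    stability = float(np.mean((pred_clean - pred_distorted) ** 2))
    return task + alpha * stability
```

When the distorted-input prediction matches the clean one, the penalty vanishes and only the task loss remains, which is the mechanism that encourages robustness to scanner- and stain-induced distortions.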
Affiliation(s)
- Felipe Miranda Ruiz
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Bernd Lahrmann
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Liam Bartels
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Alexandra Krauthoff
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Andreas Keil
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Steffen Härtel
- Medical Faculty, Center of Medical Informatics and Telemedicine (CIMT), University of Chile, Santiago, Chile
- Amy S. Tao
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
- Philipp Ströbel
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Megan A. Clarke
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
- Nicolas Wentzensen
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
- Niels Grabe
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
5. Abousamra S, Gupta R, Kurc T, Samaras D, Saltz J, Chen C. Topology-Guided Multi-Class Cell Context Generation for Digital Pathology. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2023; 2023:3323-3333. PMID: 38741683. PMCID: PMC11090253. DOI: 10.1109/cvpr52729.2023.00324.
Abstract
In digital pathology, the spatial context of cells is important for cell classification, cancer diagnosis and prognosis. To model such complex cell context, however, is challenging. Cells form different mixtures, lineages, clusters and holes. To model such structural patterns in a learnable fashion, we introduce several mathematical tools from spatial statistics and topological data analysis. We incorporate such structural descriptors into a deep generative model as both conditional inputs and a differentiable loss. This way, we are able to generate high quality multi-class cell layouts for the first time. We show that the topology-rich cell layouts can be used for data augmentation and improve the performance of downstream tasks such as cell classification.
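Among the spatial-statistics descriptors the abstract alludes to, Ripley's K function is a classic way to summarize cell clustering. A naive estimate without edge correction (function name ours, not from the paper) can be sketched as:

```python
import math

def ripley_k(points, r, area):
    """Naive Ripley's K estimate for a 2D point pattern in a window of the
    given area: (area / (n*(n-1))) times the number of ordered pairs of
    distinct points within distance r. No edge correction is applied."""
    n = len(points)
    pairs = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and math.dist(points[i], points[j]) <= r
    )
    return area * pairs / (n * (n - 1))
```

For a completely random pattern K(r) is close to the area of a disk of radius r; values above that indicate clustering, values below indicate dispersion, which is the kind of structural signal a conditional generative model can be asked to reproduce.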
Affiliation(s)
- Rajarsi Gupta
- Stony Brook University, Department of Biomedical Informatics, USA
- Tahsin Kurc
- Stony Brook University, Department of Biomedical Informatics, USA
- Joel Saltz
- Stony Brook University, Department of Biomedical Informatics, USA
- Chao Chen
- Stony Brook University, Department of Biomedical Informatics, USA
6. Du X, Chen Z, Li Q, Yang S, Jiang L, Yang Y, Li Y, Gu Z. Organoids revealed: morphological analysis of the profound next generation in-vitro model with artificial intelligence. Biodes Manuf 2023; 6:319-339. PMID: 36713614. PMCID: PMC9867835. DOI: 10.1007/s42242-022-00226-y.
Abstract
In modern terminology, "organoids" refer to cells that grow in a specific three-dimensional (3D) environment in vitro, sharing similar structures with their source organs or tissues. Observing the morphology or growth characteristics of organoids through a microscope is a commonly used method of organoid analysis. However, it is difficult, time-consuming, and inaccurate to screen and analyze organoids only manually, a problem which cannot be easily solved with traditional technology. Artificial intelligence (AI) technology has proven to be effective in many biological and medical research fields, especially in the analysis of single cells or hematoxylin/eosin-stained tissue slices. When used to analyze organoids, AI should also provide more efficient, quantitative, accurate, and fast solutions. In this review, we will first briefly outline the application areas of organoids and then discuss the shortcomings of traditional organoid measurement and analysis methods. Second, we will summarize the development from machine learning to deep learning and the advantages of the latter, and then describe how to utilize a convolutional neural network to solve the challenges in organoid observation and analysis. Finally, we will discuss the limitations of current AI used in organoid research, as well as opportunities and future research directions.
Affiliation(s)
- Xuan Du
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096 China
- Zaozao Chen
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096 China
- Qiwei Li
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096 China
- Sheng Yang
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, School of Public Health, Southeast University, Nanjing, 210009 China
- Lincao Jiang
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096 China
- Yi Yang
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096 China
- Yanhui Li
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210008 China
- Zhongze Gu
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096 China
7. Liu K, Li B, Wu W, May C, Chang O, Knezevich S, Reisch L, Elmore J, Shapiro L. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. IEEE Winter Conf Appl Comput Vis 2023; 2023:1918-1927. PMID: 36865487. PMCID: PMC9977454. DOI: 10.1109/wacv56688.2023.00196.
Abstract
Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stains. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
Affiliation(s)
- Beibin Li
- University of Washington; Microsoft Research
8. Malin-Mayor C, Hirsch P, Guignard L, McDole K, Wan Y, Lemon WC, Kainmueller D, Keller PJ, Preibisch S, Funke J. Automated reconstruction of whole-embryo cell lineages by learning from sparse annotations. Nat Biotechnol 2023; 41:44-49. PMID: 36065022. PMCID: PMC7614077. DOI: 10.1038/s41587-022-01427-7.
Abstract
We present a method to automatically identify and track nuclei in time-lapse microscopy recordings of entire developing embryos. The method combines deep learning and global optimization. On a mouse dataset, it reconstructs 75.8% of cell lineages spanning 1 h, as compared to 31.8% for the competing method. Our approach improves understanding of where and when cell fate decisions are made in developing embryos, tissues, and organs.
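The abstract says only that deep learning is combined with global optimization. Linking detections across frames is commonly posed as an assignment problem; the generic sketch below (a toy stand-in, not the paper's actual optimization) links nuclei in consecutive frames by minimizing total Euclidean movement with scipy's Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(frame_a, frame_b):
    """Globally link detections in consecutive frames: build the pairwise
    Euclidean cost matrix and solve the minimum-cost assignment, returning
    (index_in_a, index_in_b) pairs."""
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```

Unlike greedy nearest-neighbor matching, the assignment is globally optimal over the frame pair, which is the flavor of guarantee that makes global formulations attractive for lineage reconstruction.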
Affiliation(s)
- Peter Hirsch
- Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
- Faculty of Mathematics and Natural Sciences, Humboldt-Universität zu Berlin, Berlin, Germany
- Leo Guignard
- HHMI Janelia, Ashburn, VA, USA
- CNRS, UTLN, LIS 7020, Turing Centre for Living Systems, Aix Marseille University, Marseille, France
- Katie McDole
- HHMI Janelia, Ashburn, VA, USA
- MRC Laboratory of Molecular Biology, Cambridge, UK
- Yinan Wan
- HHMI Janelia, Ashburn, VA, USA
- Biozentrum, University of Basel, Basel, Switzerland
- Dagmar Kainmueller
- Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
- Faculty of Mathematics and Natural Sciences, Humboldt-Universität zu Berlin, Berlin, Germany
9. Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10372-5.
10. Altini N, Brunetti A, Puro E, Taccogna MG, Saponaro C, Zito FA, De Summa S, Bevilacqua V. NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering (Basel) 2022; 9:475. PMID: 36135021. PMCID: PMC9495364. DOI: 10.3390/bioengineering9090475.
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach for segmenting nuclei, but accuracy is closely linked to the amount of histological ground truth available for training. In addition, most hematoxylin and eosin (H&E)-stained microscopy nuclei images contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to separate distinct overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution comprises two steps. The first is semantic segmentation with a CNN; the detection step is then based on calculating local maxima of the Grad-CAM analysis evaluated on the nucleus class, allowing us to determine the positions of the nuclei centroids. This approach, which we denote NDG-CAM, performs in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized to different organs and tissues. Experimental results demonstrated a precision of 0.833, recall of 0.815 and a Dice coefficient of 0.824 on the publicly available validation set. When combined with instance segmentation architectures such as Mask R-CNN, the method surpasses state-of-the-art approaches, with precision of 0.838, recall of 0.934 and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which detects nuclei not only in tumor or normal epithelium but also in other cytotypes.
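The detection step (local maxima of a saliency map evaluated on the nucleus class) can be illustrated with a generic local-maximum finder; the 3x3 neighborhood and the threshold value below are our assumptions, not parameters reported by the paper:

```python
import numpy as np

def local_maxima(saliency, threshold=0.5):
    """Return (row, col) positions of pixels that exceed `threshold` and are
    the unique maximum of their 3x3 neighborhood; a toy stand-in for
    extracting nuclei centroids from a Grad-CAM saliency map."""
    s = np.pad(saliency.astype(float), 1, constant_values=-np.inf)
    peaks = []
    for i in range(saliency.shape[0]):
        for j in range(saliency.shape[1]):
            window = s[i:i + 3, j:j + 3]  # centered on saliency[i, j]
            if (saliency[i, j] >= threshold
                    and saliency[i, j] == window.max()
                    and (window == saliency[i, j]).sum() == 1):
                peaks.append((i, j))
    return peaks
```

Each surviving peak would then seed the neighbor-size selection in the non-maximum suppression stage described by the authors.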
11. Davri A, Birbas E, Kanavos T, Ntritsos G, Giannakeas N, Tzallas AT, Batistatou A. Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022; 12:837. PMID: 35453885. DOI: 10.3390/diagnostics12040837.
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step for personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods to be incorporated into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated to metastasis, and assess the specific components of the tumor microenvironment.
12. Fu F, Guenther A, Sakhdari A, McKee TD, Xia D. Deep Learning Accurately Quantifies Plasma Cell Percentages on CD138-Stained Bone Marrow Samples. J Pathol Inform 2022; 13:100011. PMID: 35242448. PMCID: PMC8873946. DOI: 10.1016/j.jpi.2022.100011.
Abstract
The diagnosis of plasma cell neoplasms requires accurate, and ideally precise, plasma cell percentages. This percentage is often determined by visual estimation of CD138-stained bone marrow biopsies and clot sections. While not necessarily inaccurate, estimates are by definition imprecise. For this study, we hypothesized that deep learning can be used to improve precision. We trained a semantic segmentation-based convolutional neural network (CNN) using annotations of CD138+ and CD138- cells provided by one pathologist on small image patches of bone marrow and validated the CNN on an independent test set of image patches using annotations from two pathologists and a non-deep learning commercial software. On validation, we found that the intraclass correlation coefficients for plasma cell percentages between the CNN and pathologist #1, the commercial software and pathologist #1, and pathologists #1 and #2 were 0.975, 0.892, and 0.994, respectively. The overall results show that CNN labels were almost as accurate as pathologist labels at a cell-by-cell level. Once satisfied with performance, we scaled up the CNN to evaluate whole slide images (WSIs) and deployed the system as a workflow-friendly web application to measure plasma cell percentages using snapshots taken from microscope cameras.
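The agreement metric reported above is the intraclass correlation coefficient. As an illustration (the paper does not state which ICC form was used), a one-way random-effects ICC(1,1) over an n-subjects-by-k-raters table of plasma cell percentages can be sketched as:

```python
def icc_1_1(ratings):
    """One-way random-effects ICC(1,1) for per-subject rating rows (one
    column per rater): (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB/MSW
    are the between- and within-subject mean squares."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect rater agreement gives an ICC of 1.0, and disagreement within subjects pulls the value down, matching the intuition behind the 0.975/0.892/0.994 figures reported above.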
Affiliation(s)
- Fred Fu
- STTARR Innovation Centre, University Health Network, Toronto, ON, Canada
- Angela Guenther
- Division of Hematopathology and Transfusion Medicine, University Health Network, Toronto, ON, Canada; Scarborough Health Network, Toronto, ON, Canada
- Ali Sakhdari
- Division of Hematopathology and Transfusion Medicine, University Health Network, Toronto, ON, Canada
- Trevor D McKee
- STTARR Innovation Centre, University Health Network, Toronto, ON, Canada; HistoWiz Inc., Brooklyn, NY, USA
- Daniel Xia
- Division of Hematopathology and Transfusion Medicine, University Health Network, Toronto, ON, Canada
13. Homeyer A, Geißler C, Schwen LO, Zakrzewski F, Evans T, Strohmenger K, Westphal M, Bülow RD, Kargl M, Karjauv A, Munné-Bertran I, Retzlaff CO, Romero-López A, Sołtysiński T, Plass M, Carvalho R, Steinbach P, Lan YC, Bouteldja N, Haber D, Rojas-Carulla M, Vafaei Sadr A, Kraft M, Krüger D, Fick R, Lang T, Boor P, Müller H, Hufnagl P, Zerbe N. Recommendations on compiling test datasets for evaluating artificial intelligence solutions in pathology. Mod Pathol 2022; 35:1759-1769. PMID: 36088478. PMCID: PMC9708586. DOI: 10.1038/s41379-022-01147-y.
Abstract
Artificial intelligence (AI) solutions that automatically extract information from digital histology images have shown great promise for improving pathological diagnosis. Prior to routine use, it is important to evaluate their predictive performance and obtain regulatory approval. This assessment requires appropriate test datasets. However, compiling such datasets is challenging and specific recommendations are missing. A committee of various stakeholders, including commercial AI developers, pathologists, and researchers, discussed key aspects and conducted extensive literature reviews on test datasets in pathology. Here, we summarize the results and derive general recommendations on compiling test datasets. We address several questions: Which and how many images are needed? How to deal with low-prevalence subsets? How can potential bias be detected? How should datasets be reported? What are the regulatory requirements in different countries? The recommendations are intended to help AI developers demonstrate the utility of their products and to help pathologists and regulatory agencies verify reported performance measures. Further research is needed to formulate criteria for sufficiently representative test datasets so that AI solutions can operate with less user intervention and better support diagnostic workflows in the future.
Affiliation(s)
- André Homeyer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Christian Geißler
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Lars Ole Schwen
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Falk Zakrzewski
- Institute of Pathology, Carl Gustav Carus University Hospital Dresden (UKD), TU Dresden (TUD), Fetscherstrasse 74, 01307 Dresden, Germany
- Theodore Evans
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Klaus Strohmenger
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
- Max Westphal
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Roman David Bülow
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Michaela Kargl
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010 Graz, Austria
- Aray Karjauv
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Isidre Munné-Bertran
- MoticEurope, S.L.U., C. Les Corts, 12 Poligono Industrial, 08349 Barcelona, Spain
- Carl Orge Retzlaff
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Markus Plass
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010 Graz, Austria
- Rita Carvalho
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
- Peter Steinbach
- Helmholtz-Zentrum Dresden Rossendorf, Bautzner Landstraße 400, 01328 Dresden, Germany
- Yu-Chia Lan
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Nassim Bouteldja
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- David Haber
- Lakera AI AG, Zelgstrasse 7, 8003 Zürich, Switzerland
- Alireza Vafaei Sadr
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Daniel Krüger
- Olympus Soft Imaging Solutions GmbH, Johann-Krane-Weg 39, 48149 Münster, Germany
- Rutger Fick
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France
- Tobias Lang
- Mindpeak GmbH, Zirkusweg 2, 20359 Hamburg, Germany
- Peter Boor
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Heimo Müller
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010 Graz, Austria
- Peter Hufnagl
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
- Norman Zerbe
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
14
Jose L, Liu S, Russo C, Nadort A, Ieva AD. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021;12:43. PMID: 34881098; PMCID: PMC8609288; DOI: 10.4103/jpi.jpi_103_20.
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have generated considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle many challenging histopathological image processing problems such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives related to the use of such techniques. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on the subject of H&E-stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology with the objective of triggering new research on the application of generative models in digital pathology informatics.
Affiliation(s)
- Laya Jose
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
15
Lee SMW, Shaw A, Simpson JL, Uminsky D, Garratt LW. Differential cell counts using center-point networks achieves human-level accuracy and efficiency over segmentation. Sci Rep 2021;11:16917. PMID: 34413367; PMCID: PMC8377024; DOI: 10.1038/s41598-021-96067-3.
Abstract
Differential cell counting is a challenging task when applying computer vision algorithms to pathology. Existing approaches to training cell recognition require high availability of multi-class segmentation and/or bounding-box annotations and suffer in performance when objects are tightly clustered. We present the differential count network ("DCNet"), an annotation-efficient approach that utilises keypoint detection to locate, in brightfield images, the centre points of cells (not nuclei) and their cell class. The single centre-point annotation for DCNet lowered the burden on experts to generate ground-truth data by 77.1% compared to bounding-box labeling. Yet centre-point annotation still enabled high accuracy when training DCNet as a multi-class algorithm on whole-cell features, matching human experts in average precision across all 5 object classes and outperforming humans in consistency. The efficacy and efficiency of the DCNet end-to-end system represent significant progress toward an open-source, fully computational approach to differential cell count-based diagnosis that can be adapted to any pathology need.
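DCNet locates cells from single centre-point annotations via keypoint detection. A common way to turn such point annotations into a trainable regression target is to render a Gaussian bump at each annotated centre; the sketch below shows that generic step (the Gaussian target and the `sigma` value are illustrative assumptions, not DCNet's published implementation):

```python
import numpy as np

def centre_point_heatmap(shape, centres, sigma=3.0):
    """Render one training target: a 2D map with a Gaussian bump at each
    annotated cell centre (overlapping bumps resolved by the maximum)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.zeros(shape, dtype=np.float64)
    for cy, cx in centres:
        bump = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
        target = np.maximum(target, bump)
    return target

# two annotated centres in a 64x64 field of view
hm = centre_point_heatmap((64, 64), [(10, 20), (40, 40)])
```

A network trained to regress such maps, one channel per cell class, can then report both location and class from a single click per cell.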
Affiliation(s)
- Sarada M W Lee
- Perth Machine Learning Group, Perth, WA, 6000, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, NSW, 2308, Australia
- Andrew Shaw
- Data Institute, University of San Francisco, San Francisco, CA, 94117, USA
- Jodie L Simpson
- School of Medicine and Public Health, University of Newcastle, Callaghan, NSW, 2308, Australia
- Priority Research Centre for Healthy Lungs, University of Newcastle, Callaghan, NSW, 2308, Australia
- David Uminsky
- Department of Computer Science, University of Chicago, Chicago, IL, 60637, USA
- Luke W Garratt
- Wal-yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Nedlands, WA, 6009, Australia
16
Zhang S, Yuan Z, Wang Y, Bai Y, Chen B, Wang H. REUR: A unified deep framework for signet ring cell detection in low-resolution pathological images. Comput Biol Med 2021;136:104711. PMID: 34388466; DOI: 10.1016/j.compbiomed.2021.104711.
Abstract
Detecting signet ring cells (SRCs) in pathological images is essential for carcinoma diagnosis. However, it is time-consuming for pathologists to detect SRCs manually in pathological images, and detection accuracy is also relatively low because of their small size. Recently, deep learning methods for pathology analysis have been widely investigated. Nevertheless, the automatic detection of SRCs from real pathological images faces two problems. One is that labeled pathological images are insufficient and usually incomplete. The other is that the training data and the real clinical data differ greatly in resolution; hence, adopting transfer learning affects the performance of deep learning methods. To address these two problems, we present a unified framework named REUR (RetinaNet combining USRNet, an unfolding super-resolution network, with the RGHMC, revised gradient harmonizing mechanism classification, loss) that can accurately detect SRCs in low-resolution (LR) pathological images. First, the framework's super-resolution (SR) module addresses the difference in resolution between the training data and the real clinical data. Second, the framework's label correction module obtains revised ground-truth labels from noisy examples, which are embedded into the gradient harmonizing mechanism to yield the RGHMC loss. Numerical experiments showed that the framework outperforms other one-stage detectors based on the RetinaNet architecture on the high-resolution (HR) noisy dataset. It achieved a kappa value of 0.74 and an accuracy of 0.89 in a test with 27 randomly selected whole-slide images (WSIs) and can thus assist pathologists in better analyzing WSIs. The framework provides an essential method in computer-aided diagnosis for medical applications.
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yang Bai
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China
17
Sarker MMK, Makhlouf Y, Craig SG, Humphries MP, Loughrey M, James JA, Salto-Tellez M, O’Reilly P, Maxwell P. A Means of Assessing Deep Learning-Based Detection of ICOS Protein Expression in Colon Cancer. Cancers (Basel) 2021;13:3825. PMID: 34359723; PMCID: PMC8345140; DOI: 10.3390/cancers13153825.
Abstract
Biomarkers identify patient response to therapy. The potential immune-checkpoint biomarker Inducible T-cell COStimulator (ICOS), which regulates T-cell activation and is involved in adaptive immune responses, is of great interest. We have previously shown that open-source software for digital pathology image analysis can be used to detect and quantify ICOS using cell detection algorithms based on traditional image processing techniques. Currently, artificial intelligence (AI) based on deep learning methods is significantly impacting the domain of digital pathology, including the quantification of biomarkers. In this study, we propose a general AI-based workflow for applying deep learning to the problem of cell segmentation/detection in IHC slides as a basis for quantifying nuclear staining biomarkers, such as ICOS. It consists of two main parts: a simplified but robust annotation process, and cell segmentation/detection models. This results in an optimised annotation process with a new user-friendly tool that can interact with other open-source software and assists pathologists and scientists in creating and exporting data for deep learning. We present a set of architectures for cell-based segmentation/detection, quantifying and analysing the trade-offs between them; these prove more accurate and less time-consuming than traditional methods. This approach can identify the best tool to deliver the prognostic significance of ICOS protein expression.
Affiliation(s)
- Md Mostafa Kamal Sarker
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Yasmine Makhlouf
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Stephanie G. Craig
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Matthew P. Humphries
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Maurice Loughrey
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, UK
- Jacqueline A. James
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, UK
- Northern Ireland Biobank, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Manuel Salto-Tellez
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, UK
- Division of Molecular Pathology, The Institute of Cancer Research, Sutton SM2 5NG, UK
- Paul O’Reilly
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Sonrai Analytics LTD, Lisburn Road, Belfast BT9 7BL, UK
- Perry Maxwell
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
18
Javed S, Mahmood A, Dias J, Werghi N, Rajpoot N. Spatially Constrained Context-Aware Hierarchical Deep Correlation Filters for Nucleus Detection in Histology Images. Med Image Anal 2021;72:102104. PMID: 34242872; DOI: 10.1016/j.media.2021.102104.
Abstract
Nucleus detection in histology images is a fundamental step for cellular-level analysis in computational pathology. In clinical practice, quantitative nuclear morphology can be used for diagnostic decision making, prognostic stratification, and treatment outcome prediction. Nucleus detection is a challenging task because of large variations in nuclear shape across different nucleus types, as well as nuclear clutter, heterogeneous chromatin distribution, and irregular and fuzzy boundaries. To address these challenges, we aim to accurately detect nuclei using spatially constrained context-aware correlation filters with hierarchical deep features extracted from multiple layers of a pre-trained network. During training, we extract contextual patches around each nucleus, which are used as negative examples, while the actual nucleus patch is used as a positive example. In order to spatially constrain the correlation filters, we propose to construct a spatial structural graph across different nucleus components encoding pairwise similarities. The correlation filters are constrained to act as eigenvectors of the Laplacian of the spatial graphs, enforcing them to capture the nucleus structure. A novel objective function is proposed by embedding graph-based structural information as well as contextual information within the discriminative correlation filter framework. The learned filters are constrained to be orthogonal to both the contextual patches and the spatial graph-Laplacian basis to improve localization and discriminative performance. The proposed objective function trains a hierarchy of correlation filters on different deep feature layers to capture the heterogeneity in nuclear shape and texture. The proposed algorithm is evaluated on three publicly available datasets and compared with 15 current state-of-the-art methods, demonstrating competitive performance in terms of accuracy, speed, and generalization.
Affiliation(s)
- Sajid Javed
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan
- Jorge Dias
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Naoufel Werghi
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Department of Pathology, University Hospitals Coventry and Warwickshire, Walsgrave, Coventry, CV2 2DX, UK
- The Alan Turing Institute, London, NW1 2DB, UK
19
Lapierre-Landry M, Liu Z, Ling S, Bayat M, Wilson DL, Jenkins MW. Nuclei Detection for 3D Microscopy With a Fully Convolutional Regression Network. IEEE Access 2021;9:60396-60408. PMID: 35024261; PMCID: PMC8751907; DOI: 10.1109/access.2021.3073894.
Abstract
Advances in three-dimensional microscopy and tissue clearing are enabling whole-organ imaging with single-cell resolution. Fast and reliable image processing tools are needed to analyze the resulting image volumes, including automated cell detection, cell counting, and cell analytics. Deep learning approaches have shown promising results in two- and three-dimensional nuclei detection tasks; however, detecting overlapping or non-spherical nuclei of different sizes and shapes in the presence of a blurring point spread function remains challenging and often leads to incorrect nuclei merging and splitting. Here we present a new regression-based fully convolutional network that, when combined with V-net, a popular three-dimensional semantic-segmentation architecture, located a thousand nuclei centroids with high accuracy in under a minute. High nuclei detection F1-scores of 95.3% and 92.5% were obtained in two different whole quail embryonic hearts, a tissue type difficult to segment because of its high cell density and heterogeneous, elliptical nuclei. Similar high scores were obtained in the mouse brain stem, demonstrating that this approach is highly transferable to nuclei of different shapes and intensities. Finally, spatial statistics were performed on the resulting centroids. The spatial distribution of nuclei obtained by our approach most resembles the spatial distribution of manually identified nuclei, indicating that this approach could serve in future spatial analyses of cell organization.
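A regression network of this kind predicts a centroid-proximity map; the centroids themselves are then typically read off as thresholded local maxima of the prediction. A minimal numpy sketch of such post-processing (the 6-neighbour peak rule and the threshold are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def detect_centroids(prob, threshold=0.5):
    """Return (z, y, x) voxels that are local maxima of a predicted 3D map:
    above `threshold` and >= all six face-adjacent neighbours."""
    # pad with -inf so border voxels compare correctly against "outside"
    p = np.pad(prob, 1, mode="constant", constant_values=-np.inf)
    core = p[1:-1, 1:-1, 1:-1]
    is_peak = (
        (core >= p[:-2, 1:-1, 1:-1]) & (core >= p[2:, 1:-1, 1:-1])
        & (core >= p[1:-1, :-2, 1:-1]) & (core >= p[1:-1, 2:, 1:-1])
        & (core >= p[1:-1, 1:-1, :-2]) & (core >= p[1:-1, 1:-1, 2:])
        & (core > threshold)
    )
    return list(zip(*np.nonzero(is_peak)))

# toy "prediction" with two well-separated bumps
vol = np.zeros((5, 5, 5))
vol[1, 1, 1] = 0.9
vol[3, 3, 3] = 0.8
peaks = detect_centroids(vol)
```

The resulting centroid list is exactly the input needed for the spatial statistics described at the end of the abstract.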
Affiliation(s)
- Maryse Lapierre-Landry
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Zexuan Liu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Shan Ling
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Mahdi Bayat
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH 44106, USA
- David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Department of Radiology, Case Western Reserve University, Cleveland, OH 44106, USA
- Michael W Jenkins
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Department of Pediatrics, Case Western Reserve University, Cleveland, OH 44106, USA
20
Tran ST, Cheng CH, Nguyen TT, Le MH, Liu DG. TMD-Unet: Triple-Unet with Multi-Scale Input Features and Dense Skip Connection for Medical Image Segmentation. Healthcare (Basel) 2021;9:54. PMID: 33419018; PMCID: PMC7825313; DOI: 10.3390/healthcare9010054.
Abstract
Deep learning is one of the most effective approaches to medical image processing applications, and network models are increasingly being studied for medical image segmentation challenges. The encoder-decoder structure has achieved great success, in particular the Unet architecture, which serves as a baseline for medical image segmentation networks. Traditional Unet and Unet-based networks still have a limitation: they cannot fully exploit the output features of the convolutional units in each node. In this study, we propose a new network model named TMD-Unet, which has three main enhancements over Unet: (1) modifying the interconnection of the network nodes, (2) using dilated convolution instead of standard convolution, and (3) integrating multi-scale input features on the input side of the model and applying dense skip connections instead of regular skip connections. Our experiments were performed on seven datasets covering many different medical imaging modalities, including colonoscopy, electron microscopy (EM), dermoscopy, computed tomography (CT), and magnetic resonance imaging (MRI). The segmentation applications include EM, nuclei, polyp, skin lesion, left atrium, spleen, and liver segmentation. The Dice score of our proposed model reached 96.43% for liver segmentation, 95.51% for spleen segmentation, 92.65% for polyp segmentation, 94.11% for EM segmentation, 92.49% for nuclei segmentation, 91.81% for left atrium segmentation, and 87.27% for skin lesion segmentation. The experimental results showed that the proposed model was superior to popular models on all seven applications, demonstrating its high generality.
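The results above are reported as Dice scores. For reference, the Dice similarity coefficient between a predicted mask A and a ground-truth mask B is 2|A∩B|/(|A|+|B|); a small sketch of that standard computation (the function name and epsilon smoothing are our own conventions, not from the paper):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding against empty masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# overlap = 2 pixels, |A| = 3, |B| = 3, so Dice = 2*2/6 ≈ 0.667
score = dice_score(a, b)
```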
Affiliation(s)
- Song-Toan Tran
- Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan
- Department of Electrical and Electronics, Tra Vinh University, Tra Vinh 87000, Vietnam
- Ching-Hwa Cheng
- Department of Electronic Engineering, Feng Chia University, Taichung 40724, Taiwan
- Thanh-Tuan Nguyen
- Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan
- Minh-Hai Le
- Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan
- Department of Electrical and Electronics, Tra Vinh University, Tra Vinh 87000, Vietnam
- Don-Gey Liu
- Program of Electrical and Communications Engineering, Feng Chia University, Taichung 40724, Taiwan
- Department of Electronic Engineering, Feng Chia University, Taichung 40724, Taiwan
21
Feng M, Chen J, Xiang X, Deng Y, Zhou Y, Zhang Z, Zheng Z, Bao J, Bu H. An Advanced Automated Image Analysis Model for Scoring of ER, PR, HER-2 and Ki-67 in Breast Carcinoma. IEEE Access 2021;9:108441-108451. DOI: 10.1109/access.2020.3011294.
22
Hailstone M, Waithe D, Samuels TJ, Yang L, Costello I, Arava Y, Robertson E, Parton RM, Davis I. CytoCensus, mapping cell identity and division in tissues and organs using machine learning. eLife 2020;9:e51085. PMID: 32423529; PMCID: PMC7237217; DOI: 10.7554/elife.51085.
Abstract
A major challenge in cell and developmental biology is the automated identification and quantitation of cells in complex multilayered tissues. We developed CytoCensus: an easily deployed implementation of supervised machine learning that extends convenient 2D 'point-and-click' user training to 3D detection of cells in challenging datasets with ill-defined cell boundaries. In tests on such datasets, CytoCensus outperforms other freely available image analysis software in accuracy and speed of cell detection. We used CytoCensus to count stem cells and their progeny, and to quantify individual cell divisions from time-lapse movies of explanted Drosophila larval brains, comparing wild-type and mutant phenotypes. We further illustrate the general utility and future potential of CytoCensus by analysing the 3D organisation of multiple cell classes in Zebrafish retinal organoids and cell distributions in mouse embryos. CytoCensus opens the possibility of straightforward and robust automated analysis of developmental phenotypes in complex tissues.
Affiliation(s)
- Martin Hailstone
- Department of Biochemistry, University of Oxford, Oxford, United Kingdom
- Dominic Waithe
- Wolfson Imaging Center & MRC WIMM Centre for Computational Biology, MRC Weatherall Institute of Molecular Medicine, University of Oxford, Oxford, United Kingdom
- Tamsin J Samuels
- Department of Biochemistry, University of Oxford, Oxford, United Kingdom
- Lu Yang
- Department of Biochemistry, University of Oxford, Oxford, United Kingdom
- Ita Costello
- The Dunn School of Pathology, University of Oxford, Oxford, United Kingdom
- Yoav Arava
- Department of Biology, Technion - Israel Institute of Technology, Haifa, Israel
- Richard M Parton
- Department of Biochemistry, University of Oxford, Oxford, United Kingdom
- Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, Oxford, United Kingdom
- Ilan Davis
- Department of Biochemistry, University of Oxford, Oxford, United Kingdom
- Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, Oxford, United Kingdom
23
Iesmantas T, Paulauskaite-Taraseviciene A, Sutiene K. Enhancing Multi-tissue and Multi-scale Cell Nuclei Segmentation with Deep Metric Learning. Applied Sciences 2020;10:615. DOI: 10.3390/app10020615.
Abstract
(1) Background: The segmentation of cell nuclei is an essential task in a wide range of biomedical studies and clinical practices. Full automation of this process remains a challenge due to intra- and internuclear variations across a wide range of tissue morphologies, as well as differences in staining protocols and imaging procedures. (2) Methods: A deep learning model with metric embeddings, using contrastive loss or triplet loss with semi-hard negative mining, is proposed in order to accurately segment cell nuclei in a diverse set of microscopy images. The effectiveness of the proposed model was tested on a large-scale multi-tissue collection of microscopy image sets. (3) Results: The use of deep metric learning increased overall segmentation performance by 3.12% in the average Dice similarity coefficient compared to no metric learning. In particular, the largest gain was observed for segmenting cell nuclei in H&E-stained images when the deep learning network was combined with triplet loss and semi-hard negative mining. (4) Conclusion: We conclude that deep metric learning gives an additional boost to the overall learning process and consequently improves segmentation performance. Notably, the improvement ranges approximately between 0.13% and 22.31% in terms of Dice coefficients for different image types when compared to deep learning without metric learning.
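The triplet loss with semi-hard negative mining mentioned above pulls same-class embeddings together and pushes different-class embeddings apart, mining negatives that fall inside the margin band. An illustrative numpy sketch of the loss and the semi-hard selection rule (our own minimal formulation, not the paper's implementation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on squared distances: push d(anchor, negative) to exceed
    d(anchor, positive) by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def semi_hard_negatives(anchor, positive, candidates, margin=0.2):
    """Semi-hard mining: keep negatives that are farther than the positive
    but still inside the margin band, i.e. d_pos < d_neg < d_pos + margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    return [c for c in candidates
            if d_pos < np.sum((anchor - c) ** 2) < d_pos + margin]
```

Semi-hard negatives give a non-zero but bounded loss, which in practice stabilises training compared to mining the hardest negatives.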
24
Pocevičiūtė M, Eilertsen G, Lundström C. Survey of XAI in Digital Pathology. Artificial Intelligence and Machine Learning for Digital Pathology 2020. [DOI: 10.1007/978-3-030-50402-1_4]
25
Kong B, Wang X, Bai J, Lu Y, Gao F, Cao K, Xia J, Song Q, Yin Y. Learning tree-structured representation for 3D coronary artery segmentation. Comput Med Imaging Graph 2019; 80:101688. [PMID: 31926366] [DOI: 10.1016/j.compmedimag.2019.101688]
Abstract
Extensive research has been devoted to the segmentation of the coronary artery. However, owing to its complex anatomical structure, it is extremely challenging to automatically segment the coronary artery from 3D coronary computed tomography angiography (CCTA). Inspired by recent ideas to use tree-structured long short-term memory (LSTM) to model the underlying tree structures for NLP tasks, we propose a novel tree-structured convolutional gated recurrent unit (ConvGRU) model to learn the anatomical structure of the coronary artery. Unlike the tree-structured LSTM proposed for semantic relatedness and sentiment classification in natural language processing, our tree-structured ConvGRU model considers the local spatial correlations in the input data, as convolutions are used for both input-to-state and state-to-state transitions, making it more suitable for image analysis. To conduct voxel-wise segmentation, a tree-structured segmentation framework is presented. It consists of a fully convolutional network (FCN) for multi-scale discriminative feature extraction and the final prediction, and a tree-structured ConvGRU layer for anatomical structure modeling. The proposed framework is extensively evaluated on four large-scale 3D CCTA datasets (the largest to the best of our knowledge), and experiments show that our method is more accurate and efficient than other coronary artery segmentation approaches.
Affiliation(s)
- Bin Kong: Department of Computer Science, UNC Charlotte, Charlotte, NC, USA
- Xin Wang: Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
- Junjie Bai: Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
- Yi Lu: Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
- Feng Gao: Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
- Kunlin Cao: Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
- Jun Xia: Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Guangdong, China
- Qi Song: Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
- Youbing Yin: Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
26
Ibrahim A, Gamble P, Jaroensri R, Abdelsamea MM, Mermel CH, Chen PHC, Rakha EA. Artificial intelligence in digital breast pathology: Techniques and applications. Breast 2019; 49:267-273. [PMID: 31935669] [PMCID: PMC7375550] [DOI: 10.1016/j.breast.2019.12.007]
Abstract
Breast cancer is the most common cancer and second leading cause of cancer-related death worldwide. The mainstay of breast cancer workup is histopathological diagnosis - which guides therapy and prognosis. However, emerging knowledge about the complex nature of cancer and the availability of tailored therapies have exposed opportunities for improvements in diagnostic precision. In parallel, advances in artificial intelligence (AI) along with the growing digitization of pathology slides for the primary diagnosis are a promising approach to meet the demand for more accurate detection, classification and prediction of behaviour of breast tumours. In this article, we cover the current and prospective uses of AI in digital pathology for breast cancer, review the basics of digital pathology and AI, and outline outstanding challenges in the field.
Affiliation(s)
- Asmaa Ibrahim: Department of Histopathology, Division of Cancer and Stem Cells, School of Medicine, The University of Nottingham and Nottingham University Hospitals NHS Trust, Nottingham City Hospital, Nottingham, NG5 1PB, UK
- Mohammed M Abdelsamea: School of Computing and Digital Technology, Birmingham City University, Birmingham, UK
- Emad A Rakha: Department of Histopathology, Division of Cancer and Stem Cells, School of Medicine, The University of Nottingham and Nottingham University Hospitals NHS Trust, Nottingham City Hospital, Nottingham, NG5 1PB, UK
27
Fernandez K, Korinek M, Camp J, Lieske J, Holmes D. Automatic detection of calcium phosphate deposit plugs at the terminal ends of kidney tubules. Healthc Technol Lett 2019; 6:271-274. [PMID: 32038870] [PMCID: PMC6952263] [DOI: 10.1049/htl.2019.0086]
Abstract
Kidney stones are a common urologic condition with a high rate of recurrence. Recurrence depends on a multitude of factors, including the incidence of kidney stone precursors: plugs and plaques. One method of characterising the stone precursors is endoscopic assessment, though it is manual and time-consuming. Deep learning has become a popular technique for semantic segmentation because of its demonstrated high accuracy. The present Letter examined the efficacy of deep learning in segmenting the renal papilla, plaque, and plugs. A U-Net model with a ResNet-34 encoder was tested; the Letter examined dropout (to avoid overtraining) and two different loss functions (to address the class imbalance problem). The models were trained on 1666 images and tested on 185 images. The Jaccard-cross-entropy loss function was more effective than the focal loss function. The model with a dropout rate of 0.4 was found to be more effective due to its generalisability. The model was largely successful at delineating the papilla. The model was able to correctly detect the plaques and plugs; however, small plaques were challenging. Deep learning was found to be applicable to segmentation of endoscopic images of the papilla, plaque, and plug, with room for improvement.
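The Jaccard-cross-entropy loss this entry favours can be sketched as a soft Jaccard term added to binary cross-entropy. The exact weighting used in the Letter is not specified here, so this NumPy formulation is an assumption:

```python
import numpy as np

def jaccard_cross_entropy(pred, target, eps=1e-7):
    """Binary cross-entropy plus a soft Jaccard term (illustrative sketch).

    pred:   predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask
    """
    pred = np.clip(np.asarray(pred, dtype=float), eps, 1 - eps)
    target = np.asarray(target, dtype=float)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - intersection
    soft_jaccard = 1.0 - intersection / (union + eps)
    return float(bce + soft_jaccard)

good = jaccard_cross_entropy([0.9, 0.1, 0.9], [1, 0, 1])
bad = jaccard_cross_entropy([0.1, 0.9, 0.1], [1, 0, 1])  # higher loss
```

The overlap term directly penalises poor region agreement, which helps with the class imbalance that plain cross-entropy handles badly when the foreground is small.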
Affiliation(s)
- Katrina Fernandez: Biomedical Imaging Resource, Mayo Clinic, Rochester, MN, USA; University of Minnesota, Minneapolis, MN, USA
- Mark Korinek: Biomedical Imaging Resource, Mayo Clinic, Rochester, MN, USA
- Jon Camp: Biomedical Imaging Resource, Mayo Clinic, Rochester, MN, USA
- John Lieske: Department of Nephrology & Hypertension, Mayo Clinic, Rochester, MN, USA
- David Holmes: Biomedical Imaging Resource, Mayo Clinic, Rochester, MN, USA
28
Savelli B, Bria A, Molinara M, Marrocco C, Tortorella F. A multi-context CNN ensemble for small lesion detection. Artif Intell Med 2019; 103:101749. [PMID: 32143786] [DOI: 10.1016/j.artmed.2019.101749]
Abstract
In this paper, we propose a novel method for the detection of small lesions in digital medical images. Our approach is based on a multi-context ensemble of convolutional neural networks (CNNs), aiming at learning different levels of image spatial context and improving detection performance. The main innovation behind the proposed method is the use of multiple-depth CNNs, individually trained on image patches of different dimensions and then combined together. In this way, the final ensemble is able to find and locate abnormalities on the images by exploiting both the local features and the surrounding context of a lesion. Experiments were focused on two well-known medical detection problems that have been recently faced with CNNs: microcalcification detection on full-field digital mammograms and microaneurysm detection on ocular fundus images. To this end, we used two publicly available datasets, INbreast and E-ophtha. Statistically significantly better detection performance was obtained by the proposed ensemble with respect to other approaches in the literature, demonstrating its effectiveness in the detection of small abnormalities.
Affiliation(s)
- B Savelli: Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- A Bria: Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- M Molinara: Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- C Marrocco: Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- F Tortorella: Department of Electrical, Information Engineering and Applied Mathematics, University of Salerno, via Giovanni Paolo II 132, 84084 Fisciano (SA), Italy
29
Bera K, Schalper KA, Rimm DL, Velcheti V, Madabhushi A. Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology. Nat Rev Clin Oncol 2019; 16:703-715. [PMID: 31399699] [PMCID: PMC6880861] [DOI: 10.1038/s41571-019-0252-y]
Abstract
In the past decade, advances in precision oncology have resulted in an increased demand for predictive assays that enable the selection and stratification of patients for treatment. The enormous divergence of signalling and transcriptional networks mediating the crosstalk between cancer, stromal and immune cells complicates the development of functionally relevant biomarkers based on a single gene or protein. However, the result of these complex processes can be uniquely captured in the morphometric features of stained tissue specimens. The possibility of digitizing whole-slide images of tissue has led to the advent of artificial intelligence (AI) and machine learning tools in digital pathology, which enable mining of subvisual morphometric phenotypes and might, ultimately, improve patient management. In this Perspective, we critically evaluate various AI-based computational approaches for digital pathology, focusing on deep neural networks and 'hand-crafted' feature-based methodologies. We aim to provide a broad framework for incorporating AI and machine learning tools into clinical oncology, with an emphasis on biomarker development. We discuss some of the challenges relating to the use of AI, including the need for well-curated validation datasets, regulatory approval and fair reimbursement strategies. Finally, we present potential future opportunities for precision oncology.
Affiliation(s)
- Kaustav Bera: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Kurt A Schalper: Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
- David L Rimm: Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
- Vamsidhar Velcheti: Thoracic Medical Oncology, Perlmutter Cancer Center, New York University, New York, NY, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
30
Chen P, Gao L, Shi X, Allen K, Yang L. Fully automatic knee osteoarthritis severity grading using deep neural networks with a novel ordinal loss. Comput Med Imaging Graph 2019; 75:84-92. [PMID: 31238184] [DOI: 10.1016/j.compmedimag.2019.06.002]
Abstract
Knee osteoarthritis (OA) is one major cause of activity limitation and physical disability in older adults. Early detection and intervention can help slow down the OA degeneration. Physicians' grading based on visual inspection is subjective, varies across interpreters, and relies heavily on their experience. In this paper, we successively apply two deep convolutional neural networks (CNN) to automatically measure the knee OA severity, as assessed by the Kellgren-Lawrence (KL) grading system. Firstly, considering that the size of knee joints in X-ray images shows small variability, we detect knee joints using a customized one-stage YOLOv2 network. Secondly, we fine-tune the most popular CNN models, including variants of ResNet, VGG, and DenseNet as well as InceptionV3, to classify the detected knee joint images with a novel adjustable ordinal loss. To be specific, motivated by the ordinal nature of the knee KL grading task, we assign a higher penalty to misclassifications with a larger distance between the predicted KL grade and the real KL grade. The baseline X-ray images from the Osteoarthritis Initiative (OAI) dataset are used for evaluation. On the knee joint detection, we achieve a mean Jaccard index of 0.858 and recall of 92.2% under a Jaccard index threshold of 0.75. On the knee KL grading task, the fine-tuned VGG-19 model with the proposed ordinal loss obtains the best classification accuracy of 69.7% and mean absolute error (MAE) of 0.344. Both knee joint detection and knee KL grading achieve state-of-the-art performance. The code, dataset, and models are released at https://github.com/PingjunChen/KneeAnalysis.
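The distance-weighted idea behind such an ordinal loss can be sketched as cross-entropy plus an expected-ordinal-distance term. This formulation is an illustrative assumption, not the exact loss released with the paper:

```python
import numpy as np

def ordinal_loss(probs, true_grade):
    """Cross-entropy plus expected ordinal distance (illustrative sketch).

    probs:      predicted probabilities over KL grades 0..4
    true_grade: the ground-truth KL grade
    """
    probs = np.asarray(probs, dtype=float)
    grades = np.arange(len(probs))
    ce = -np.log(probs[true_grade])
    # Probability mass far from the true grade is penalised more heavily
    # than mass on neighbouring grades.
    distance_penalty = np.sum(np.abs(grades - true_grade) * probs)
    return float(ce + distance_penalty)

# Both predictions give the true grade (0) the same probability, so plain
# cross-entropy would treat them equally; the ordinal term does not.
near_miss = ordinal_loss([0.1, 0.6, 0.3, 0.0, 0.0], true_grade=0)
far_miss = ordinal_loss([0.1, 0.0, 0.0, 0.3, 0.6], true_grade=0)
```

Predicting KL grade 4 for a grade-0 knee is thus costlier than predicting grade 1, matching the clinical ordering of severities.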
Affiliation(s)
- Pingjun Chen: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States
- Linlin Gao: Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
- Xiaoshuang Shi: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States
- Kyle Allen: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States
- Lin Yang: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States
31
Höfener H, Homeyer A, Förster M, Drieschner N, Schildhaus HU, Hahn HK. Automated density-based counting of FISH amplification signals for HER2 status assessment. Comput Methods Programs Biomed 2019; 173:77-85. [PMID: 31046998] [DOI: 10.1016/j.cmpb.2019.03.006]
Abstract
BACKGROUND Automated image analysis can make quantification of FISH signals in histological sections more efficient and reproducible. Current detection-based methods, however, often fail to accurately quantify densely clustered FISH signals. METHODS We propose a novel density-based approach to quantifying FISH signals. Instead of detecting individual signals, this approach quantifies FISH signals in terms of the integral over a density map predicted by deep learning. We apply the density-based approach to the task of counting and determining ratios of ERBB2 and CEN17 signals and compare it to common detection-based and area-based approaches. RESULTS The ratios determined by our approach were strongly correlated with results obtained by manual annotation of individual FISH signals (Pearson's r = 0.907). In addition, they were highly consistent with cutoff scores determined by a pathologist (balanced concordance = 0.971). The density-based approach generally outperformed the other approaches. Its superiority was particularly evident in the presence of dense signal clusters. CONCLUSIONS The presented approach enables accurate and efficient automated quantification of FISH signals. Since signals in clusters can hardly be detected individually even by human observers, the density-based quantification performs better than detection-based approaches.
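The density-map idea can be illustrated with a small NumPy sketch: unit-mass Gaussian blobs stand in for the network's predicted density map, and the count is the integral over that map. This is a hypothetical construction, not the authors' pipeline:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()  # unit mass: one kernel = one signal

def density_map(shape, centres, size=7, sigma=1.0):
    """Stand-in for a predicted density map: one unit-mass blob per signal."""
    dmap = np.zeros(shape)
    kernel = gaussian_kernel(size, sigma)
    r = size // 2
    for y, x in centres:
        dmap[y - r:y + r + 1, x - r:x + r + 1] += kernel
    return dmap

def count_signals(dmap):
    # The count is simply the integral (sum) over the density map:
    # no per-signal detection step, so dense clusters are handled gracefully.
    return float(dmap.sum())

erbb2 = density_map((32, 32), [(8, 8), (8, 10), (9, 9)])  # tight cluster
cen17 = density_map((32, 32), [(20, 20), (20, 26)])
ratio = count_signals(erbb2) / count_signals(cen17)
```

Even though the three ERBB2 blobs overlap heavily, their integrals still sum to three, which is exactly why the integral beats individual detection on dense clusters.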
Affiliation(s)
- André Homeyer: Fraunhofer MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Hans-Ulrich Schildhaus: Institute of Pathology, University Hospital Göttingen, Robert-Koch-Straße 40, 37075 Göttingen, Germany; Institute of Pathology, University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany
- Horst K Hahn: Fraunhofer MEVIS, Am Fallturm 1, 28359 Bremen, Germany; Jacobs University, Campus Ring 1, 28759 Bremen, Germany