1
Chu WL, Chang QW, Jian BL. Unsupervised anomaly detection in the textile texture database. Microsystem Technologies 2024; 30:1609-1621. [DOI: 10.1007/s00542-024-05711-1]
2
Ameen YA, Badary DM, Abonnoor AEI, Hussain KF, Sewisy AA. Which data subset should be augmented for deep learning? A simulation study using urothelial cell carcinoma histopathology images. BMC Bioinformatics 2023; 24:75. [PMID: 36869300] [PMCID: PMC9983182] [DOI: 10.1186/s12859-023-05199-y]
Abstract
BACKGROUND Applying deep learning to digital histopathology is hindered by the scarcity of manually annotated datasets. While data augmentation can ameliorate this obstacle, its methods are far from standardized. Our aim was to systematically explore the effects of skipping data augmentation; applying data augmentation to different subsets of the whole dataset (training set, validation set, test set, two of them, or all of them); and applying data augmentation at different time points (before, during, or after dividing the dataset into three subsets). Different combinations of these possibilities resulted in 11 ways of applying augmentation, and the literature contains no comprehensive systematic comparison of them. RESULTS Non-overlapping photographs of all tissues on 90 hematoxylin-and-eosin-stained urinary bladder slides were obtained. They were then manually classified as inflammation (5948 images), urothelial cell carcinoma (5811 images), or invalid (3132 images; excluded). When applied, augmentation was eight-fold, by flipping and rotation. Four convolutional neural networks (Inception-v3, ResNet-101, GoogLeNet, and SqueezeNet), pre-trained on the ImageNet dataset, were fine-tuned to binary-classify images of our dataset; this task was the benchmark for our experiments. Model testing performance was evaluated using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve. Model validation accuracy was also estimated. The best testing performance was achieved when augmentation was applied to the remaining data after test-set separation, but before division into training and validation sets. This leaked information between the training and validation sets, as evidenced by the optimistic validation accuracy, but the leakage did not cause the validation set to malfunction. Augmentation before test-set separation led to optimistic results. Test-set augmentation yielded more accurate evaluation metrics with less uncertainty. Inception-v3 had the best overall testing performance. CONCLUSIONS In digital histopathology, augmentation should include both the test set (after its allocation) and the remaining combined training/validation set (before it is split into separate training and validation sets). Future research should try to generalize our results.
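The eight-fold augmentation described above (four 90° rotations, each with and without a mirror flip) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def eightfold_augment(image):
    """Return the eight dihedral variants of an image:
    four 90-degree rotations, each with and without a horizontal flip."""
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degrees
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # mirrored counterpart
    return variants

# A toy 2x2 "image" is enough to see that all eight variants are distinct.
patch = np.array([[1, 2],
                  [3, 4]])
augmented = eightfold_augment(patch)
print(len(augmented))  # 8 variants per input image
```

When labels are class-level (as here), the label is simply replicated for each variant; for segmentation tasks the same transform must also be applied to the mask.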
Affiliation(s)
- Yusra A Ameen
- Department of Computer Science, Faculty of Computers and Information, Assiut University, Asyut, Egypt
- Dalia M Badary
- Department of Pathology, Faculty of Medicine, Assiut University, Asyut, Egypt
- Khaled F Hussain
- Department of Computer Science, Faculty of Computers and Information, Assiut University, Asyut, Egypt
- Adel A Sewisy
- Department of Computer Science, Faculty of Computers and Information, Assiut University, Asyut, Egypt
3
Lutnick B, Lucarelli N, Sarder P. Generative Modeling of Histology Tissue Reduces Human Annotation Effort for Segmentation Model Development. Proceedings of SPIE--The International Society for Optical Engineering 2023; 12471:124711Q. [PMID: 37818351] [PMCID: PMC10563116] [DOI: 10.1117/12.2655282]
Abstract
Segmentation of histology tissue whole slide images is an important step for tissue analysis. Given enough annotated training data, modern neural networks are capable of accurate, reproducible segmentation; however, annotating training datasets is time-consuming. Techniques such as human-in-the-loop annotation attempt to reduce this annotation burden, but still require a vast initial annotation effort. Semi-supervised learning, a technique which leverages both labeled and unlabeled data to learn features, has shown promise for easing the burden of annotation. Towards this goal, we employ a recently published semi-supervised method, datasetGAN, for the segmentation of glomeruli from renal biopsy images. We compare the performance of models trained using datasetGAN and traditional annotation and show that datasetGAN significantly reduces the amount of annotation required to develop a high-performing segmentation model. We also explore the usefulness of datasetGAN for transfer learning and find that this method greatly enhances performance when a limited number of whole slide images are used for training.
Affiliation(s)
- Brendon Lutnick
- Department of Pathology and Anatomical Sciences, University at Buffalo - The State University of New York, Buffalo, NY, USA
- Nicholas Lucarelli
- Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Pinaki Sarder
- Division of Nephrology, Hypertension, and Renal Transplantation, Department of Medicine, University of Florida, Gainesville, FL, USA
4
Sheng C, Wang L, Huang Z, Wang T, Guo Y, Hou W, Xu L, Wang J, Yan X. Transformer-Based Deep Learning Network for Tooth Segmentation on Panoramic Radiographs. Journal of Systems Science and Complexity 2022; 36:257-272. [PMID: 36258771] [PMCID: PMC9561331] [DOI: 10.1007/s11424-022-2057-9]
Abstract
Panoramic radiographs can help dentists quickly evaluate a patient's overall oral health. Accurate detection and localization of tooth tissue on panoramic radiographs is the first step in identifying pathology, and it also plays a key role in automatic diagnosis systems. However, the evaluation of panoramic radiographs depends on the clinical experience and knowledge of the dentist, and their interpretation may lead to misdiagnosis. It is therefore of great significance to use artificial intelligence to segment teeth on panoramic radiographs. In this study, SWin-Unet, a transformer-based U-shaped encoder-decoder architecture with skip connections, is introduced to perform panoramic radiograph segmentation. To evaluate the tooth segmentation performance of SWin-Unet, the PLAGH-BH dataset is introduced for research purposes. Performance is evaluated by F1 score, mean intersection over union (IoU), and accuracy. Compared with the U-Net, LinkNet, and FPN baselines, SWin-Unet performs much better on the PLAGH-BH tooth segmentation dataset. These results indicate that SWin-Unet is well suited to panoramic radiograph segmentation and is valuable for potential clinical application.
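For reference, the three reported metrics all reduce to simple pixel counts over binary masks. A minimal NumPy sketch (the function and the toy masks are our illustration, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-wise F1 (Dice), IoU, and accuracy for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # predicted tooth, is tooth
    fp = np.logical_and(pred, ~target).sum()   # predicted tooth, is background
    fn = np.logical_and(~pred, target).sum()   # missed tooth pixels
    tn = np.logical_and(~pred, ~target).sum()  # correct background
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    acc = (tp + tn) / pred.size
    return f1, iou, acc

pred = np.array([[1, 1],
                 [0, 0]])
target = np.array([[1, 0],
                   [0, 0]])
f1, iou, acc = segmentation_metrics(pred, target)
print(f1, iou, acc)  # 2/3, 0.5, 0.75
```

For multi-class tooth segmentation, mean IoU is the average of per-class IoU computed this way with each class treated as the foreground in turn.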
Affiliation(s)
- Chen Sheng
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Lin Wang
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
- Zhenhuan Huang
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
- Tian Wang
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
- Yalin Guo
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
- Wenjie Hou
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
- Laiqing Xu
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
- Jiazhu Wang
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
- Xue Yan
- Medical School of Chinese PLA, Beijing 100853, China
- Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China
- Beihang University, Beijing 100191, China
5
Lutnick B, Manthey D, Becker JU, Ginley B, Moos K, Zuckerman JE, Rodrigues L, Gallan AJ, Barisoni L, Alpers CE, Wang XX, Myakala K, Jones BA, Levi M, Kopp JB, Yoshida T, Zee J, Han SS, Jain S, Rosenberg AZ, Jen KY, Sarder P. A user-friendly tool for cloud-based whole slide image segmentation with examples from renal histopathology. Communications Medicine 2022; 2:105. [PMID: 35996627] [PMCID: PMC9391340] [DOI: 10.1038/s43856-022-00138-z]
Abstract
BACKGROUND Image-based machine learning tools hold great promise for clinical applications in pathology research. However, the ideal end-users of these computational tools (e.g., pathologists and biological scientists) often lack the programming experience required to set up and use them, as they typically rely on command-line interfaces. METHODS We have developed Histo-Cloud, a tool for segmentation of whole slide images (WSIs) with an easy-to-use graphical user interface. This tool runs a state-of-the-art convolutional neural network (CNN) for segmentation of WSIs in the cloud and allows the extraction of features from segmented regions for further analysis. RESULTS By segmenting glomeruli, interstitial fibrosis and tubular atrophy, and vascular structures from renal and non-renal WSIs, we demonstrate the scalability of the tool, best practices for transfer learning, and the effects of dataset variability. Finally, we demonstrate an application for animal model research, analyzing glomerular features in three murine models. CONCLUSIONS Histo-Cloud is open source, accessible over the internet, and adaptable for segmentation of any histological structure regardless of stain.
Affiliation(s)
- Brendon Lutnick
- Department of Pathology and Anatomical Sciences, SUNY Buffalo, Buffalo, USA
- Jan U. Becker
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Brandon Ginley
- Department of Pathology and Anatomical Sciences, SUNY Buffalo, Buffalo, USA
- Katharina Moos
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Jonathan E. Zuckerman
- Department of Pathology and Laboratory Medicine, University of California at Los Angeles, Los Angeles, USA
- Luis Rodrigues
- University Clinic of Nephrology, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Laura Barisoni
- Departments of Pathology and Medicine, Duke University, Durham, USA
- Charles E. Alpers
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, USA
- Xiaoxin X. Wang
- Departments of Biochemistry and Molecular & Cellular Biology, Georgetown University, Washington, DC, USA
- Komuraiah Myakala
- Departments of Biochemistry and Molecular & Cellular Biology, Georgetown University, Washington, DC, USA
- Bryce A. Jones
- Department of Pharmacology and Physiology, Georgetown University, Washington, DC, USA
- Moshe Levi
- Departments of Biochemistry and Molecular & Cellular Biology, Georgetown University, Washington, DC, USA
- Jarcy Zee
- Department of Biostatistics, Epidemiology, & Informatics, University of Pennsylvania, Philadelphia, USA
- Seung Seok Han
- Department of Internal Medicine, Seoul National University College of Medicine, Seoul, South Korea
- Sanjay Jain
- Department of Medicine, Nephrology, Washington University School of Medicine, St. Louis, USA
- Avi Z. Rosenberg
- Department of Pathology, Johns Hopkins University, Baltimore, USA
- Kuang-Yu Jen
- Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, USA
- Pinaki Sarder
- Department of Pathology and Anatomical Sciences, SUNY Buffalo, Buffalo, USA
6
Allender F, Allègre R, Wemmert C, Dischler JM. Data augmentation based on spatial deformations for histopathology: An evaluation in the context of glomeruli segmentation. Computer Methods and Programs in Biomedicine 2022; 221:106919. [PMID: 35701252] [DOI: 10.1016/j.cmpb.2022.106919]
Abstract
BACKGROUND AND OBJECTIVE The effective application of deep learning to digital histopathology is hampered by the shortage of high-quality annotated images. In this paper we focus on the supervised segmentation of glomerular structures in patches of whole slide images of renal histopathological slides. Considering a U-Net model employed for segmentation, our goal is to evaluate the impact of augmenting training data with random spatial deformations. RESULTS We show that augmenting training data with spatially deformed images yields an improvement of up to 0.23 in average Dice score with respect to training with no augmentation. We demonstrate that deformations with relatively strong distortions yield the best performance increase, whereas previous work reports only the use of deformations with low distortions. The selected deformation models yield similar performance increases, provided that their parameters are properly adjusted. We provide bounds on the optimal parameter values, obtained through parameter sampling, which our single-parameter method achieves at lower computational cost. The paper is accompanied by a framework for evaluating the impact of random spatial deformations on the performance of any U-Net segmentation model. CONCLUSION To our knowledge, this study is the first to evaluate the impact of random spatial deformations on the segmentation of histopathological images. Our study and framework provide tools to help practitioners and researchers make better use of random spatial deformations when training deep models for segmentation.
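As a rough illustration of this kind of augmentation, the sketch below warps an image with a random per-pixel displacement field whose magnitude is controlled by a single parameter `alpha` (our naming; the deformation models evaluated in the paper are more structured, e.g. smoothed elastic fields):

```python
import numpy as np

def random_deformation(image, alpha, rng):
    """Warp an image with a random displacement field of magnitude alpha
    (in pixels), using nearest-neighbour sampling. A crude stand-in for
    the spatial deformations evaluated in the paper."""
    h, w = image.shape
    dy = rng.uniform(-alpha, alpha, size=(h, w))  # vertical displacements
    dx = rng.uniform(-alpha, alpha, size=(h, w))  # horizontal displacements
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + dy), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + dx), 0, w - 1).astype(int)
    return image[src_y, src_x]

rng = np.random.default_rng(0)
patch = np.arange(64, dtype=float).reshape(8, 8)
warped = random_deformation(patch, alpha=2.0, rng=rng)
print(warped.shape)  # same shape as the input
```

For supervised segmentation, the identical displacement field must be applied to the ground-truth mask so that image and annotation stay aligned.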
7
Explaining smartphone-based acoustic data in bipolar disorder: Semi-supervised fuzzy clustering and relative linguistic summaries. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.12.049]
8
Wang H, Gu H, Qin P, Wang J. U-shaped GAN for Semi-Supervised Learning and Unsupervised Domain Adaptation in High Resolution Chest Radiograph Segmentation. Front Med (Lausanne) 2022; 8:782664. [PMID: 35096877] [PMCID: PMC8792862] [DOI: 10.3389/fmed.2021.782664]
Abstract
Deep learning has achieved considerable success in medical image segmentation. However, applying deep learning in clinical environments often involves two problems: (1) scarcity of annotated data, as data annotation is time-consuming, and (2) varying attributes of different datasets due to domain shift. To address these problems, we propose an improved generative adversarial network (GAN) segmentation model, called U-shaped GAN, for limited-annotated chest radiograph datasets. The semi-supervised learning approach and the unsupervised domain adaptation (UDA) approach are modeled into a unified framework for effective segmentation. We improve the GAN by replacing the traditional discriminator with a U-shaped net, which predicts a label for each pixel. The proposed U-shaped net is designed for high resolution radiographs (1,024 × 1,024) for effective segmentation while taking the computational burden into account. Pointwise convolution is applied to U-shaped GAN for dimensionality reduction, which decreases the number of feature maps while retaining their salient features. Moreover, we design the U-shaped net with a pretrained ResNet-50 as an encoder to reduce the computational burden of training the encoder from scratch. A semi-supervised learning approach is proposed that learns from limited annotated data while exploiting additional unannotated data with a pixel-level loss. U-shaped GAN is extended to UDA by taking the source and target domain data as the annotated and unannotated data, respectively, in the semi-supervised learning approach. Compared to previous models that deal with the aforementioned problems separately, U-shaped GAN is compatible with varying data distributions from multiple medical centers, with efficient training and optimized performance. U-shaped GAN can be generalized to chest radiograph segmentation for clinical deployment. We evaluate U-shaped GAN with two chest radiograph datasets and show that it significantly outperforms state-of-the-art models.
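The pointwise (1 × 1) convolution used here for dimensionality reduction is just a per-pixel linear mixing of channels. A minimal NumPy sketch (shapes and names are our illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def pointwise_conv(feature_maps, weights):
    """1x1 convolution: a per-pixel linear mix of channels, reducing
    C_in feature maps to C_out without touching spatial structure.
    feature_maps: (C_in, H, W); weights: (C_out, C_in)."""
    return np.tensordot(weights, feature_maps, axes=([1], [0]))

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 16, 16))  # 64 channels over a 16x16 grid
w = rng.standard_normal((8, 64))       # compress 64 channels down to 8
y = pointwise_conv(x, w)
print(y.shape)  # (8, 16, 16): fewer channels, resolution preserved
```

Because each output pixel depends only on the channels at that same location, the operation costs O(H·W·C_in·C_out) with no spatial receptive field, which is why it is a cheap way to shrink feature maps.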
Affiliation(s)
- Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
9
Miao R, Toth R, Zhou Y, Madabhushi A, Janowczyk A. Quick Annotator: an open-source digital pathology based rapid image annotation tool. J Pathol Clin Res 2021; 7:542-547. [PMID: 34288586] [PMCID: PMC8503896] [DOI: 10.1002/cjp2.229]
Abstract
Image-based biomarker discovery typically requires accurate segmentation of histologic structures (e.g. cell nuclei, tubules, and epithelial regions) in digital pathology whole slide images (WSIs). Unfortunately, annotating each structure of interest is laborious and often intractable even in moderately sized cohorts. Here, we present an open-source tool, Quick Annotator (QA), designed to improve the annotation efficiency of histologic structures by orders of magnitude. While the user annotates regions of interest (ROIs) via an intuitive web interface, a deep learning (DL) model is concurrently optimized using these annotations and applied to the ROI. The user iteratively reviews DL results to either (1) accept accurately annotated regions or (2) correct erroneously segmented structures to improve subsequent model suggestions, before transitioning to other ROIs. We demonstrate the effectiveness of QA over comparable manual efforts via three use cases: annotating (1) 337,386 nuclei in 5 pancreatic WSIs, (2) 5,692 tubules in 10 colorectal WSIs, and (3) 14,187 regions of epithelium in 10 breast WSIs. Respective efficiency gains of 102×, 9×, and 39× in annotations per second were observed while retaining F-scores >0.95, suggesting that QA may be a valuable tool for efficiently and fully annotating WSIs employed in downstream biomarker studies.
Affiliation(s)
- Runtian Miao
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Yu Zhou
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Veterans Administration Medical Center, Cleveland, OH, USA
- Andrew Janowczyk
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Precision Oncology Center, Lausanne University Hospital, Lausanne, Switzerland
10
Modanwal G, Vellal A, Mazurowski MA. Normalization of breast MRIs using cycle-consistent generative adversarial networks. Computer Methods and Programs in Biomedicine 2021; 208:106225. [PMID: 34198016] [DOI: 10.1016/j.cmpb.2021.106225]
Abstract
OBJECTIVES Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI) is widely used to complement ultrasound examinations and x-ray mammography for early detection and diagnosis of breast cancer. However, images generated by various MRI scanners (e.g., GE Healthcare and Siemens) differ both in intensity and noise distribution, preventing algorithms trained on MRIs from one scanner from generalizing to data from other scanners. In this work, we propose a method to solve this problem by normalizing images between various scanners. METHODS MRI normalization is challenging because it requires normalizing intensity values and mapping noise distributions between scanners. We utilize a cycle-consistent generative adversarial network to learn a bidirectional mapping and perform normalization between MRIs produced by GE Healthcare and Siemens scanners in an unpaired setting. Initial experiments demonstrated that the traditional CycleGAN architecture struggles to preserve the anatomical structures of the breast during normalization. We therefore propose two technical innovations to preserve both the shape of the breast and the tissue structures within it. First, we incorporate a mutual information loss during training to ensure anatomical consistency. Second, we propose a modified discriminator architecture that utilizes a smaller field-of-view to ensure the preservation of finer details in the breast tissue. RESULTS Quantitative and qualitative evaluations show that the second innovation consistently preserves the breast shape and tissue structures while also performing the proper intensity normalization and noise distribution mapping. CONCLUSION Our results demonstrate that the proposed model can successfully learn a bidirectional mapping and perform normalization between MRIs produced by different vendors, potentially enabling improved diagnosis and detection of breast cancer. All the data used in this study are publicly available at https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70226903.
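The mutual-information idea behind the anatomical-consistency loss can be illustrated with a simple histogram estimator: an intensity remapping of the same anatomy retains high mutual information with the original, while an unrelated image does not. This is a sketch with an assumed bin count, not the authors' training loss:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information (in nats) between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
anatomy = rng.standard_normal((64, 64))
remapped = 2.0 * anatomy + 1.0           # same structure, new intensity scale
unrelated = rng.standard_normal((64, 64))
print(mutual_information(anatomy, remapped) >
      mutual_information(anatomy, unrelated))  # True
```

Maximizing a term like this between input and translated images penalizes translations that alter structure rather than just intensity, which is the consistency property the loss is meant to enforce.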
Affiliation(s)
- Adithya Vellal
- Department of Computer Science, Duke University, Durham, NC, USA
11
Bioinformatics approach to spatially resolved transcriptomics. Emerg Top Life Sci 2021; 5:669-674. [PMID: 34369559] [DOI: 10.1042/etls20210131]
Abstract
Spatially resolved transcriptomics encompasses a growing number of methods developed to enable gene expression profiling of individual cells within a tissue. Different technologies are available, and they vary with respect to the method used to define regions of interest, the method used to assess gene expression, and resolution. Since techniques based on next-generation sequencing are the most prevalent and provide single-cell resolution, many bioinformatics tools for spatially resolved data are shared with single-cell RNA-seq. The analysis pipelines diverge at the level of the quantification matrix, downstream of which spatial techniques require specific tools to answer key biological questions. Those questions include: (i) cell type classification; (ii) detection of genes with specific spatial distributions; (iii) identification of novel tissue regions based on gene expression patterns; and (iv) characterization of cell-cell interactions. On the other hand, analysis of spatially resolved data is burdened by several specific challenges. Defining regions of interest, e.g. neoplastic tissue, often calls for manual annotation of images, which then poses a bottleneck in the pipeline. Another specific issue is the third spatial dimension and the need to expand the analysis beyond a single slice. Despite these problems, it can be predicted that the popularity of spatial techniques will keep growing until they replace single-cell assays (which will remain limited to specific cases, such as blood). As soon as the computational protocols reach maturity, as those for bulk RNA-seq have, one can foresee the expansion of spatial techniques beyond basic and translational research, even into routine medical diagnostics.
12
Calderaro J, Kather JN. Artificial intelligence-based pathology for gastrointestinal and hepatobiliary cancers. Gut 2021; 70:1183-1193. [PMID: 33214163] [DOI: 10.1136/gutjnl-2020-322880]
Abstract
Artificial intelligence (AI) can extract complex information from visual data. Histopathology images of gastrointestinal (GI) and liver cancer contain a very high amount of information, which human observers can only partially make sense of. Complementing human observers, AI allows an in-depth analysis of digitised histological slides of GI and liver cancer and offers a wide range of clinically relevant applications. First, AI can automatically detect tumour tissue, easing the exponentially increasing workload on pathologists. In addition, and possibly exceeding pathologists' capacities, AI can capture prognostically relevant tissue features and thus predict clinical outcome across GI and liver cancer types. Finally, AI has demonstrated its capacity to infer molecular and genetic alterations of cancer tissues from histological digital slides. These are likely only the first of many AI applications that will have important clinical implications. Thus, pathologists and clinicians alike should be aware of the principles of AI-based pathology and its ability to solve clinically relevant problems, along with its limitations and biases.
Affiliation(s)
- Julien Calderaro
- U955, INSERM, Créteil, France
- Pathology, Hopital Henri Mondor, Creteil, Île-de-France, France
- Jakob Nikolas Kather
- Applied Tumor Immunity, Deutsches Krebsforschungszentrum, Heidelberg, BW, Germany
- Department of Medicine III, University Hospital RWTH, Aachen, Germany
13
Cornish TC. Artificial intelligence for automating the measurement of histologic image biomarkers. J Clin Invest 2021; 131:147966. [PMID: 33855974] [DOI: 10.1172/jci147966]
Abstract
Artificial intelligence has been applied to histopathology for decades, but the recent increase in interest is attributable to well-publicized successes in the application of deep-learning techniques, such as convolutional neural networks, for image analysis. Recently, generative adversarial networks (GANs) have provided a method for performing image-to-image translation tasks on histopathology images, including image segmentation. In this issue of the JCI, Koyuncu et al. applied GANs to whole-slide images of p16-positive oropharyngeal squamous cell carcinoma (OPSCC) to automate the calculation of a multinucleation index (MuNI) for prognostication in p16-positive OPSCC. Multivariable analysis showed that the MuNI was prognostic for disease-free survival, overall survival, and metastasis-free survival. These results are promising, as they present a prognostic method for p16-positive OPSCC and highlight methods for using deep learning to measure image biomarkers from histopathologic samples in an inherently explainable manner.
14
Stanitsas P, Cherian A, Morellas V, Tejpaul R, Papanikolopoulos N, Truskinovsky A. Image Descriptors for Weakly Annotated Histopathological Breast Cancer Data. Front Digit Health 2020; 2. [PMID: 33345255] [PMCID: PMC7749086] [DOI: 10.3389/fdgth.2020.572671]
Abstract
Introduction Cancerous Tissue Recognition (CTR) methodologies continuously integrate advancements at the forefront of machine learning and computer vision, providing a variety of inference schemes for histopathological data. Histopathological data, in most cases, come in the form of high-resolution images, and thus methodologies operating at the patch level are more computationally attractive. Such methodologies capitalize on pixel-level annotations (tissue delineations) from expert pathologists, which are then used to derive labels at the patch level. In this work, we envision a digital connected health system that augments the capabilities of clinicians by providing powerful feature descriptors that may describe malignant regions. Material and Methods We start with a patch-level descriptor, termed the Covariance-Kernel Descriptor (CKD), capable of compactly describing tissue architectures associated with carcinomas. To extend the recognition capability of the CKDs to larger slide regions, we resort to a multiple instance learning framework. In that direction, we derive the Weakly Annotated Image Descriptor (WAID) as the parameters of classifier decision boundaries in a multiple instance learning framework. The WAID is computed on bags of patches corresponding to larger image regions for which binary labels (malignant vs. benign) are provided, thus obviating the necessity for tissue delineations. Results The CKD was seen to outperform all the considered descriptors, reaching a classification accuracy (ACC) of 92.83% and an area under the curve (AUC) of 0.98. The CKD captures higher-order correlations between features and was shown to achieve superior performance against a large collection of computer vision features on a private breast cancer dataset. The WAID outperformed all other descriptors on the Breast Cancer Histopathological database (BreakHis), where correctly classified malignant (CCM) instances reached 91.27% and 92.00% at the patient and image levels, respectively, achieving state-of-the-art performance without resorting to a deep learning scheme. Discussion Our proposed derivations of the CKD and WAID can help medical experts accomplish their work more accurately and faster than the current state of the art.
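The core of a covariance-based patch descriptor is summarizing a region by the covariance of per-pixel feature vectors, which is compact and independent of patch size. A minimal NumPy sketch (the feature choice and dimensions are our illustrative assumptions; the paper's CKD builds a kernel on top of such covariance representations):

```python
import numpy as np

def covariance_descriptor(features):
    """Region covariance descriptor: the d x d covariance of per-pixel
    feature vectors, a fixed-size summary of a variable-size region.
    features: (n_pixels, d) array of per-pixel feature vectors."""
    centered = features - features.mean(axis=0)
    return centered.T @ centered / (features.shape[0] - 1)

rng = np.random.default_rng(0)
# e.g. intensity plus four filter responses at each of 256 pixels
patch_feats = rng.standard_normal((256, 5))
C = covariance_descriptor(patch_feats)
print(C.shape)  # (5, 5), regardless of how many pixels the patch has
```

Because every patch maps to the same d × d symmetric matrix, patches of different sizes become directly comparable, which is what makes covariance descriptors attractive for patch-level classification.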
Affiliation(s)
- Panagiotis Stanitsas
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Anoop Cherian
- Australian Center for Robotic Vision, Australian National University, Canberra, ACT, Australia
- Vassilios Morellas
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Resha Tejpaul
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Nikolaos Papanikolopoulos
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Alexander Truskinovsky
- Department of Pathology & Laboratory Medicine, Roswell Park Cancer Institute, Buffalo, NY, United States