51. Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022; 12:837. [PMID: 35453885] [PMCID: PMC9028395] [DOI: 10.3390/diagnostics12040837]
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step for personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods to be incorporated into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated with metastasis, and assess the specific components of the tumor microenvironment.
52. Hybrid high performance intelligent computing approach of CACNN and RNN for skin cancer image grading. Soft Comput 2022. [DOI: 10.1007/s00500-022-06989-x]
53. Ahmedt-Aristizabal D, Armin MA, Denman S, Fookes C, Petersson L. A survey on graph-based deep learning for computational histopathology. Comput Med Imaging Graph 2022; 95:102027. [DOI: 10.1016/j.compmedimag.2021.102027]
54. SAFRON: Stitching Across the Frontier Network for Generating Colorectal Cancer Histology Images. Med Image Anal 2021; 77:102337. [PMID: 35016078] [DOI: 10.1016/j.media.2021.102337]
Abstract
Automated synthesis of histology images has several potential applications including the development of data-efficient deep learning algorithms. In the field of computational pathology, where histology images are large in size and visual context is crucial, synthesis of large high-resolution images via generative modeling is an important but challenging task due to memory and computational constraints. To address this challenge, we propose a novel framework called SAFRON (Stitching Across the FROntier Network) to construct realistic, large high-resolution tissue images conditioned on input tissue component masks. The main novelty in the framework is integration of stitching in its loss function which enables generation of images of arbitrarily large sizes after training on relatively small image patches while preserving morphological features with minimal boundary artifacts. We have used the proposed framework for generating, to the best of our knowledge, the largest-sized synthetic histology images to date (up to 11K×8K pixels). Compared to existing approaches, our framework is efficient in terms of the memory required for training and computations needed for synthesizing large high-resolution images. The quality of generated images was assessed quantitatively using Frechet Inception Distance as well as by 7 trained pathologists, who assigned a realism score to a set of images generated by SAFRON. The average realism score across all pathologists for synthetic images was as high as that of real images. We also show that training with additional synthetic data generated by SAFRON can significantly boost prediction performance of gland segmentation and cancer detection algorithms in colorectal cancer histology images.
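The assembly step described above can be pictured with a small, generic example: overlapping patches are blended into one large canvas with linearly ramped weights so that patch borders leave no visible seams. This is only an illustrative sketch with random patches and invented sizes; it is not the SAFRON generator or its stitching loss.

```python
# Illustrative sketch (not the SAFRON implementation): assemble a large image
# from overlapping patches with linear feather-blending so that patch borders
# do not produce visible seams. Patch generation itself is mocked with noise.
import numpy as np

def blend_patches(patches, coords, canvas_shape, patch_size, overlap):
    """Accumulate overlapping patches into one canvas using ramped weights."""
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    weight = np.zeros(canvas_shape, dtype=np.float64)
    ramp = np.linspace(0.05, 1.0, overlap)        # small non-zero weight at the border
    w1d = np.ones(patch_size)
    w1d[:overlap] = ramp
    w1d[-overlap:] = ramp[::-1]
    w2d = np.outer(w1d, w1d)                      # high in the centre, low at the borders
    for patch, (y, x) in zip(patches, coords):
        canvas[y:y + patch_size, x:x + patch_size] += patch * w2d
        weight[y:y + patch_size, x:x + patch_size] += w2d
    return canvas / np.maximum(weight, 1e-8)

patch_size, overlap = 64, 16
step = patch_size - overlap
coords = [(y, x) for y in range(0, 160 - patch_size + 1, step)
                 for x in range(0, 160 - patch_size + 1, step)]
patches = [np.random.rand(patch_size, patch_size) for _ in coords]
large = blend_patches(patches, coords, (160, 160), patch_size, overlap)
print(large.shape)  # (160, 160)
```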
55. Bilal M, Raza SEA, Azam A, Graham S, Ilyas M, Cree IA, Snead D, Minhas F, Rajpoot NM. Development and validation of a weakly supervised deep learning framework to predict the status of molecular pathways and key mutations in colorectal cancer from routine histology images: a retrospective study. Lancet Digit Health 2021; 3:e763-e772. [PMID: 34686474] [PMCID: PMC8609154] [DOI: 10.1016/s2589-7500(21)00180-1]
Abstract
BACKGROUND Determining the status of molecular pathways and key mutations in colorectal cancer is crucial for optimal therapeutic decision making. We therefore aimed to develop a novel deep learning pipeline to predict the status of key molecular pathways and mutations from whole-slide images of haematoxylin and eosin-stained colorectal cancer slides as an alternative to current tests. METHODS In this retrospective study, we used 502 diagnostic slides of primary colorectal tumours from 499 patients in The Cancer Genome Atlas colon and rectal cancer (TCGA-CRC-DX) cohort and developed a weakly supervised deep learning framework involving three separate convolutional neural network models. Whole-slide images were divided into equally sized tiles and model 1 (ResNet18) separated tumour tiles from non-tumour tiles. These tumour tiles were inputted into model 2 (adapted ResNet34), trained by iterative draw and rank sampling to calculate a prediction score for each tile that represented the likelihood of a tile belonging to the molecular labels of high mutation density (vs low mutation density), microsatellite instability (vs microsatellite stability), chromosomal instability (vs genomic stability), CpG island methylator phenotype (CIMP)-high (vs CIMP-low), BRAFmut (vs BRAFWT), TP53mut (vs TP53WT), and KRASWT (vs KRASmut). These scores were used to identify the top-ranked tiles from each slide, and model 3 (HoVer-Net) segmented and classified the different types of cell nuclei in these tiles. We calculated the area under the convex hull of the receiver operating characteristic curve (AUROC) as a model performance measure and compared our results with those of previously published methods. FINDINGS Our iterative draw and rank sampling method yielded mean AUROCs for the prediction of hypermutation (0·81 [SD 0·03] vs 0·71), microsatellite instability (0·86 [0·04] vs 0·74), chromosomal instability (0·83 [0·02] vs 0·73), BRAFmut (0·79 [0·01] vs 0·66), and TP53mut (0·73 [0·02] vs 0·64) in the TCGA-CRC-DX cohort that were higher than those from previously published methods, and an AUROC for KRASmut that was similar to previously reported methods (0·60 [SD 0·04] vs 0·60). Mean AUROC for predicting CIMP-high status was 0·79 (SD 0·05). We found high proportions of tumour-infiltrating lymphocytes and necrotic tumour cells to be associated with microsatellite instability, and high proportions of tumour-infiltrating lymphocytes and a low proportion of necrotic tumour cells to be associated with hypermutation. INTERPRETATION After large-scale validation, our proposed algorithm for predicting clinically important mutations and molecular pathways, such as microsatellite instability, in colorectal cancer could be used to stratify patients for targeted therapies with potentially lower costs and quicker turnaround times than sequencing-based or immunohistochemistry-based approaches. FUNDING The UK Medical Research Council.
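As a rough, generic illustration of the tile-scoring idea in this abstract (a CNN assigns a score to every tumour tile and the top-ranked tiles drive the slide-level prediction), a minimal PyTorch sketch is shown below. The tiny backbone, the dummy tile tensors, and the plain top-k averaging are illustrative assumptions; the paper's actual pipeline uses iterative draw and rank sampling with three separate models.

```python
# Minimal sketch of weakly supervised slide-level prediction by tile scoring
# and top-k aggregation. A generic illustration only, not the iterative
# draw-and-rank pipeline described in the paper.
import torch
import torch.nn as nn

class TileScorer(nn.Module):
    """Tiny CNN standing in for the tumour-tile scoring model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.score = nn.Linear(32, 1)

    def forward(self, tiles):                      # (num_tiles, 3, H, W)
        h = self.features(tiles).flatten(1)        # (num_tiles, 32)
        return self.score(h).squeeze(1)            # (num_tiles,) raw tile scores

def slide_prediction(scores, k=10):
    """Average the k highest tile scores and map to a slide-level probability."""
    topk = torch.topk(scores, k=min(k, scores.numel())).values
    return torch.sigmoid(topk.mean())

model = TileScorer()
tiles = torch.randn(32, 3, 96, 96)                 # dummy tiles from one slide
with torch.no_grad():
    prob = slide_prediction(model(tiles))
print(float(prob))                                 # slide-level probability of the label
```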
Affiliation(s)
- Mohsin Bilal: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Shan E Ahmed Raza: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Ayesha Azam: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Simon Graham: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mohammad Ilyas: Faculty of Medicine and Health Sciences, University of Nottingham, Nottingham, UK
- Ian A Cree: International Agency for Research on Cancer, Lyon, France
- David Snead: Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Fayyaz Minhas: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Nasir M Rajpoot: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
56. Deep Learning Approaches to Colorectal Cancer Diagnosis: A Review. Applied Sciences (Basel) 2021. [DOI: 10.3390/app112210982]
Abstract
Unprecedented breakthroughs in the development of graphical processing systems have led to great potential for deep learning (DL) algorithms in analyzing visual anatomy from high-resolution medical images. Recently, in digital pathology, the use of DL technologies has drawn a substantial amount of attention for use in the effective diagnosis of various cancer types, especially colorectal cancer (CRC), which is regarded as one of the dominant causes of cancer-related deaths worldwide. This review provides an in-depth perspective on recently published research articles on DL-based CRC diagnosis and prognosis. Overall, we provide a retrospective synopsis of simple image-processing-based and machine learning (ML)-based computer-aided diagnosis (CAD) systems, followed by a comprehensive appraisal of use cases with different types of state-of-the-art DL algorithms for detecting malignancies. We first list multiple standardized and publicly available CRC datasets from two imaging types: colonoscopy and histopathology. Secondly, we categorize the studies based on the different types of CRC detected (tumor tissue, microsatellite instability, and polyps), and we assess the data preprocessing steps and the adopted DL architectures before presenting the optimum diagnostic results. CRC diagnosis with DL algorithms is still in the preclinical phase, and therefore, we point out some open issues and provide some insights into the practicability and development of robust diagnostic systems in future health care and oncology.
57. Pati P, Jaume G, Foncubierta-Rodríguez A, Feroce F, Anniciello AM, Scognamiglio G, Brancati N, Fiche M, Dubruc E, Riccio D, Di Bonito M, De Pietro G, Botti G, Thiran JP, Frucci M, Goksel O, Gabrani M. Hierarchical graph representations in digital pathology. Med Image Anal 2021; 75:102264. [PMID: 34781160] [DOI: 10.1016/j.media.2021.102264]
Abstract
Cancer diagnosis, prognosis, and therapy response predictions from tissue specimens highly depend on the phenotype and topological distribution of constituting histological entities. Thus, adequate tissue representations encoding histological entities are imperative for computer aided cancer patient care. To this end, several approaches have leveraged cell-graphs, capturing the cell-microenvironment, to depict the tissue. These allow for utilizing graph theory and machine learning to map the tissue representation to tissue functionality, and quantify their relationship. Though cellular information is crucial, it alone is insufficient to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities from fine to coarse level, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens to model the hierarchical compositions that encode histological entities as well as their intra- and inter-entity level interactions. Subsequently, a hierarchical graph neural network is proposed to operate on the hierarchical entity-graph and map the tissue structure to tissue functionality. Specifically, for input histology images, we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a message passing graph neural network, to classify the HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of Haematoxylin & Eosin stained breast tumor regions-of-interest, to evaluate and benchmark our proposed methodology against pathologists and state-of-the-art computer-aided diagnostic approaches. Through comparative assessment and ablation studies, our proposed method is demonstrated to yield superior classification results compared to alternative methods as well as individual pathologists. The code, data, and models can be accessed at https://github.com/histocartography/hact-net.
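A toy sketch of the hierarchical idea (cell-level message passing, pooling of cells into tissue regions, then region-level message passing) is given below. All tensors, layer sizes, and the dense-adjacency formulation are placeholder assumptions for illustration; this is not the HACT-Net architecture.

```python
# Toy sketch of two-level (cell -> tissue-region) message passing with dense
# adjacency matrices. It only illustrates the hierarchical idea from the
# abstract; all inputs are dummies.
import torch
import torch.nn as nn

class TwoLevelGNN(nn.Module):
    def __init__(self, d_cell, d_tissue, d_hidden):
        super().__init__()
        self.cell_mp = nn.Linear(d_cell, d_hidden)        # cell-graph message passing
        self.lift = nn.Linear(d_hidden, d_tissue)         # cell -> tissue-level lift
        self.tissue_mp = nn.Linear(2 * d_tissue, d_tissue)
        self.classifier = nn.Linear(d_tissue, 3)          # e.g. 3 tumour subtypes

    def forward(self, x_cell, a_cell, x_tissue, a_tissue, assign):
        # One round of neighbourhood aggregation on the cell graph.
        h_cell = torch.relu(self.cell_mp(a_cell @ x_cell))
        # Pool cells into their tissue regions (assign: n_tissue x n_cell, row-normalised).
        pooled = self.lift(assign @ h_cell)
        # Tissue-level message passing uses both region features and pooled cell features.
        h_tissue = torch.relu(self.tissue_mp(torch.cat([a_tissue @ x_tissue, pooled], dim=1)))
        return self.classifier(h_tissue.mean(dim=0))      # graph-level logits

n_cell, n_tissue = 50, 6
x_cell, x_tissue = torch.randn(n_cell, 16), torch.randn(n_tissue, 32)
a_cell = torch.eye(n_cell)                                # dummy (self-loop only) adjacency
a_tissue = torch.eye(n_tissue)
assign = torch.full((n_tissue, n_cell), 1.0 / n_cell)     # dummy soft assignment
logits = TwoLevelGNN(16, 32, 24)(x_cell, a_cell, x_tissue, a_tissue, assign)
print(logits.shape)                                       # torch.Size([3])
```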
Affiliation(s)
- Pushpak Pati: IBM Zurich Research Lab, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Guillaume Jaume: IBM Zurich Research Lab, Zurich, Switzerland; Signal Processing Laboratory 5, EPFL, Lausanne, Switzerland
- Florinda Feroce: National Cancer Institute - IRCCS-Fondazione Pascale, Naples, Italy
- Nadia Brancati: Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Maryse Fiche: Aurigen - Centre de Pathologie, Lausanne, Switzerland
- Daniel Riccio: Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Giuseppe De Pietro: Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Gerardo Botti: National Cancer Institute - IRCCS-Fondazione Pascale, Naples, Italy
- Maria Frucci: Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Orcun Goksel: Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
58. Shaban M, Raza SEA, Hassan M, Jamshed A, Mushtaq S, Loya A, Batis N, Brooks J, Nankivell P, Sharma N, Robinson M, Mehanna H, Khurram SA, Rajpoot N. A digital score of tumour-associated stroma infiltrating lymphocytes predicts survival in head and neck squamous cell carcinoma. J Pathol 2021; 256:174-185. [PMID: 34698394] [DOI: 10.1002/path.5819]
Abstract
The infiltration of T-lymphocytes in the stroma and tumour is an indication of an effective immune response against the tumour, resulting in better survival. In this study, our aim was to explore the prognostic significance of tumour-associated stroma infiltrating lymphocytes (TASILs) in head and neck squamous cell carcinoma (HNSCC) through an AI-based automated method. A deep learning-based automated method was employed to segment tumour, tumour-associated stroma, and lymphocytes in digitally scanned whole slide images of HNSCC tissue slides. The spatial patterns of lymphocytes and tumour-associated stroma were digitally quantified to compute the tumour-associated stroma infiltrating lymphocytes score (TASIL-score). Finally, the prognostic significance of the TASIL-score for disease-specific and disease-free survival was investigated using the Cox proportional hazard analysis. Three different cohorts of haematoxylin and eosin (H&E)-stained tissue slides of HNSCC cases (n = 537 in total) were studied, including publicly available TCGA head and neck cancer cases. The TASIL-score carries prognostic significance (p = 0.002) for disease-specific survival of HNSCC patients. The TASIL-score also shows a better separation between low- and high-risk patients compared with the manual tumour-infiltrating lymphocytes (TILs) scoring by pathologists for both disease-specific and disease-free survival. A positive correlation of TASIL-score with molecular estimates of CD8+ T cells was also found, which is in line with existing findings. To the best of our knowledge, this is the first study to automate the quantification of TASILs from routine H&E slides of head and neck cancer. Our TASIL-score-based findings are aligned with the clinical knowledge, with the added advantages of objectivity, reproducibility, and strong prognostic value. Although we validated our method on three different cohorts (n = 537 cases in total), a comprehensive evaluation on large multicentric cohorts is required before the proposed digital score can be adopted in clinical practice. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
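For readers unfamiliar with the survival analysis used here, the sketch below fits a Cox proportional-hazards model to synthetic data with the lifelines package. The column names and the simulated relationship between score and survival are invented for illustration and do not reflect the study's data.

```python
# Hedged sketch of a Cox proportional-hazards analysis of a digital score,
# using the lifelines package on synthetic data. Column names (tasil_score,
# time_months, event) are illustrative placeholders, not the study's variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
score = rng.uniform(0, 1, n)                                       # stand-in for a TASIL-like score
time_months = rng.exponential(scale=40 * (0.5 + score), size=n)    # higher score -> longer survival
event = rng.binomial(1, 0.7, n)                                    # 1 = event observed, 0 = censored

df = pd.DataFrame({"tasil_score": score,
                   "time_months": time_months,
                   "event": event})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()      # coefficient, hazard ratio and p-value for tasil_score
```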
Affiliation(s)
- Muhammad Shaban: Department of Computer Science, University of Warwick, Coventry, UK
- Mariam Hassan: Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Arif Jamshed: Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Sajid Mushtaq: Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Asif Loya: Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Nikolaos Batis: Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Jill Brooks: Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Paul Nankivell: Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Neil Sharma: Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Max Robinson: School of Dental Sciences, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
- Hisham Mehanna: Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Syed Ali Khurram: School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Coventry, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
59. Vuong TTL, Kim K, Song B, Kwak JT. Joint categorical and ordinal learning for cancer grading in pathology images. Med Image Anal 2021; 73:102206. [PMID: 34399153] [DOI: 10.1016/j.media.2021.102206]
Abstract
Cancer grading in pathology image analysis is one of the most critical tasks since it is related to patient outcomes and treatment planning. Traditionally, it has been considered a categorical problem, ignoring the natural ordering among the cancer grades, i.e., the higher the grade is, the more aggressive it is, and the worse the outcome is. Herein, we propose a joint categorical and ordinal learning framework for cancer grading in pathology images. The approach simultaneously performs both categorical classification and ordinal classification and aims to leverage the distinctive features from the two tasks. Moreover, we propose a new loss function for the ordinal classification task that offers an improved contrast between the correctly classified examples and misclassified examples. The proposed method is evaluated on multiple collections of colorectal and prostate pathology images that underwent different acquisition and processing procedures. Both quantitative and qualitative assessments of the experimental results confirm the effectiveness and robustness of the proposed method in comparison to other competing methods. The results suggest that the proposed approach could permit improved histopathologic analysis of cancer grades in pathology images.
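A generic way to combine categorical and ordinal supervision for grading is sketched below: a softmax head is trained with cross-entropy while a threshold head is trained on cumulative binary targets. This illustrates the joint idea only; the contrast-improving ordinal loss proposed in the paper is not reproduced here.

```python
# Generic sketch of combining a categorical loss with an ordinal loss for
# grading. Ordinal targets are encoded cumulatively (grade g -> the first g of
# K-1 thresholds set to 1). Not the specific loss proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def cumulative_targets(grades, num_classes):
    """grade 2 with 4 classes -> [1, 1, 0] (thresholds grade>=1, >=2, >=3)."""
    thresholds = torch.arange(1, num_classes).unsqueeze(0)      # (1, K-1)
    return (grades.unsqueeze(1) >= thresholds).float()          # (B, K-1)

class JointGradingLoss(nn.Module):
    def __init__(self, num_classes, ordinal_weight=1.0):
        super().__init__()
        self.num_classes = num_classes
        self.ordinal_weight = ordinal_weight

    def forward(self, cat_logits, ord_logits, grades):
        # cat_logits: (B, K) softmax head, ord_logits: (B, K-1) threshold head
        loss_cat = F.cross_entropy(cat_logits, grades)
        loss_ord = F.binary_cross_entropy_with_logits(
            ord_logits, cumulative_targets(grades, self.num_classes))
        return loss_cat + self.ordinal_weight * loss_ord

criterion = JointGradingLoss(num_classes=4)
cat_logits, ord_logits = torch.randn(8, 4), torch.randn(8, 3)
grades = torch.randint(0, 4, (8,))
print(float(criterion(cat_logits, ord_logits, grades)))
```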
Affiliation(s)
- Trinh Thi Le Vuong: School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea
- Kyungeun Kim: Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul 03181, Republic of Korea
- Boram Song: Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul 03181, Republic of Korea
- Jin Tae Kwak: School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea
60. Zhang XM, Liang L, Liu L, Tang MJ. Graph Neural Networks and Their Current Applications in Bioinformatics. Front Genet 2021; 12:690049. [PMID: 34394185] [PMCID: PMC8360394] [DOI: 10.3389/fgene.2021.690049]
Abstract
Graph neural networks (GNNs), as a branch of deep learning in non-Euclidean space, perform particularly well in various tasks that process graph-structured data. With the rapid accumulation of biological network data, GNNs have also become an important tool in bioinformatics. In this research, a systematic survey of GNNs and their advances in bioinformatics is presented from multiple perspectives. We first introduce some commonly used GNN models and their basic principles. Then, three representative tasks are proposed based on the three levels of structural information that can be learned by GNNs: node classification, link prediction, and graph generation. Meanwhile, according to the specific applications for various omics data, we categorize and discuss the related studies in three aspects: disease prediction, drug discovery, and biomedical imaging. Based on the analysis, we provide an outlook on the shortcomings of current studies and point out their development prospects. Although GNNs have achieved excellent results in many biological tasks at present, they still face challenges in handling low-quality data, methodology, and interpretability, and have a long road ahead. We believe that GNNs are potentially an excellent method for solving various biological problems in bioinformatics research.
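The "basic principles" mentioned above can be summarized by the standard graph-convolution update, sketched below on a toy ring graph with dense matrices. This is a didactic illustration, not code from the survey.

```python
# Minimal sketch of the basic graph-convolution update the survey refers to:
# H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), written with dense matrices for clarity.
# A toy illustration only; real GNN libraries use sparse message passing.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.weight(norm @ x))          # aggregate then transform

# 5-node toy graph: ring topology, 8-dimensional node features.
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
x = torch.randn(5, 8)
h = SimpleGCNLayer(8, 16)(x, adj)
print(h.shape)          # torch.Size([5, 16])
```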
Affiliation(s)
- Xiao-Meng Zhang: School of Information, Yunnan Normal University, Kunming, China
- Li Liang: School of Information, Yunnan Normal University, Kunming, China
- Lin Liu: School of Information, Yunnan Normal University, Kunming, China; Key Laboratory of Educational Informatization for Nationalities, Ministry of Education, Yunnan Normal University, Kunming, China
- Ming-Jing Tang: Key Laboratory of Educational Informatization for Nationalities, Ministry of Education, Yunnan Normal University, Kunming, China; School of Life Sciences, Yunnan Normal University, Kunming, China
61. Vuong TTL, Song B, Kim K, Cho YM, Kwak JT. Multi-scale binary pattern encoding network for cancer classification in pathology images. IEEE J Biomed Health Inform 2021; 26:1152-1163. [PMID: 34310334] [DOI: 10.1109/jbhi.2021.3099817]
Abstract
Multi-scale approaches have been widely studied in pathology image analysis. These offer the ability to characterize tissue in an image at various scales, at which the tissue may appear differently. Many such methods have focused on extracting multi-scale hand-crafted features and applying them to various tasks in pathology image analysis. Several deep learning methods even adopt multi-scale approaches explicitly. However, most of these methods simply merge the multi-scale features together or adopt a coarse-to-fine/fine-to-coarse strategy, which uses the features one at a time in a sequential manner. By utilizing the multi-scale features in a cooperative and discriminative fashion, the learning capability could be further improved. Herein, we propose a multi-scale approach that can identify and leverage the patterns of the multiple scales within a deep neural network and provide a superior capability for cancer classification. The patterns of the features across multiple scales are encoded as a binary pattern code and further converted to a decimal number, which can be easily embedded in current deep neural network frameworks. To evaluate the proposed method, multiple sets of pathology images are employed. Under various experimental settings, the proposed method is systematically assessed and shows an improved classification performance in comparison to other competing methods.
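The binary-to-decimal encoding idea can be illustrated with a toy example: each scale contributes one bit (here, by thresholding a pooled response), and the bits are weighted by powers of two to give a single pattern index. The pooling and thresholding rules below are placeholder assumptions, not the network's actual encoding.

```python
# Rough illustration of turning per-scale feature responses into a binary
# pattern code and then a decimal index, as sketched in the abstract.
import torch

def scale_pattern_code(features_per_scale):
    """features_per_scale: list of (B, C) pooled feature tensors, one per scale."""
    bits = []
    for feats in features_per_scale:
        activation = feats.mean(dim=1)                   # (B,) summary per scale
        bits.append((activation > activation.mean()).long())
    bits = torch.stack(bits, dim=1)                      # (B, num_scales) of 0/1
    weights = 2 ** torch.arange(bits.size(1))            # binary -> decimal
    return (bits * weights).sum(dim=1)                   # (B,) pattern index

batch = 6
features = [torch.randn(batch, 64), torch.randn(batch, 64), torch.randn(batch, 64)]
codes = scale_pattern_code(features)                     # values in [0, 7] for 3 scales
print(codes.tolist())
```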
62. Oliveira SP, Neto PC, Fraga J, Montezuma D, Monteiro A, Monteiro J, Ribeiro L, Gonçalves S, Pinto IM, Cardoso JS. CAD systems for colorectal cancer from WSI are still not ready for clinical acceptance. Sci Rep 2021; 11:14358. [PMID: 34257363] [PMCID: PMC8277780] [DOI: 10.1038/s41598-021-93746-z]
Abstract
Most oncological cases can be detected by imaging techniques, but diagnosis is based on pathological assessment of tissue samples. In recent years, the pathology field has evolved to a digital era where tissue samples are digitised and evaluated on screen. As a result, digital pathology opened up many research opportunities, allowing the development of more advanced image processing techniques, as well as artificial intelligence (AI) methodologies. Nevertheless, despite colorectal cancer (CRC) being the second deadliest cancer type worldwide, with increasing incidence rates, the application of AI for CRC diagnosis, particularly on whole-slide images (WSI), is still a young field. In this review, we analyse some relevant works published on this particular task and highlight the limitations that hinder the application of these works in clinical practice. We also empirically investigate the feasibility of using weakly annotated datasets to support the development of computer-aided diagnosis systems for CRC from WSI. Our study underscores the need for large datasets in this field and the use of an appropriate learning methodology to gain the most benefit from partially annotated datasets. The CRC WSI dataset used in this study, containing 1,133 colorectal biopsy and polypectomy samples, is available upon reasonable request.
Affiliation(s)
- Sara P Oliveira: INESCTEC, 4200-465, Porto, Portugal; Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
- Pedro C Neto: INESCTEC, 4200-465, Porto, Portugal; Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
- João Fraga: IMP Diagnostics, 4150-146, Porto, Portugal
- Diana Montezuma: IMP Diagnostics, 4150-146, Porto, Portugal; ICBAS, University of Porto, 4050-313, Porto, Portugal; Cancer Biology and Epigenetics Group, IPO-Porto, 4200-072, Porto, Portugal
- Jaime S Cardoso: INESCTEC, 4200-465, Porto, Portugal; Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
63. Senousy Z, Abdelsamea MM, Mohamed MM, Gaber MM. 3E-Net: Entropy-Based Elastic Ensemble of Deep Convolutional Neural Networks for Grading of Invasive Breast Carcinoma Histopathological Microscopic Images. Entropy (Basel) 2021; 23:620. [PMID: 34065765] [PMCID: PMC8156865] [DOI: 10.3390/e23050620]
Abstract
Automated grading systems using deep convolution neural networks (DCNNs) have proven their capability and potential to distinguish between different breast cancer grades using digitized histopathological images. In digital breast pathology, it is vital to measure how confident a DCNN is in grading using a machine-confidence metric, especially with the presence of major computer vision challenging problems such as the high visual variability of the images. Such a quantitative metric can be employed not only to improve the robustness of automated systems, but also to assist medical professionals in identifying complex cases. In this paper, we propose Entropy-based Elastic Ensemble of DCNN models (3E-Net) for grading invasive breast carcinoma microscopy images which provides an initial stage of explainability (using an uncertainty-aware mechanism adopting entropy). Our proposed model has been designed in a way to (1) exclude images that are less sensitive and highly uncertain to our ensemble model and (2) dynamically grade the non-excluded images using the certain models in the ensemble architecture. We evaluated two variations of 3E-Net on an invasive breast carcinoma dataset and we achieved grading accuracy of 96.15% and 99.50%.
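The uncertainty-aware mechanism can be illustrated generically: average the ensemble members' softmax outputs, compute the predictive entropy per image, and exclude images whose entropy exceeds a threshold. The threshold and dummy outputs below are illustrative assumptions, not the 3E-Net configuration.

```python
# Minimal sketch of entropy-based uncertainty filtering for an ensemble:
# compute the predictive entropy of each image's averaged prediction and
# exclude images above a threshold. All numbers are placeholders.
import torch
import torch.nn.functional as F

def predictive_entropy(probs, eps=1e-12):
    """probs: (B, num_classes) averaged ensemble probabilities."""
    return -(probs * (probs + eps).log()).sum(dim=1)

# Dummy ensemble of 3 members, 4 grades, 5 images.
member_logits = torch.randn(3, 5, 4)
probs = F.softmax(member_logits, dim=-1).mean(dim=0)     # (5, 4) ensemble average
entropy = predictive_entropy(probs)

threshold = 0.8 * torch.log(torch.tensor(4.0))           # fraction of the maximum entropy ln(4)
keep = entropy < threshold                               # images confident enough to grade
predictions = probs.argmax(dim=1)
print(entropy.tolist(), keep.tolist(), predictions[keep].tolist())
```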
Affiliation(s)
- Zakaria Senousy: School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Mohammed M. Abdelsamea: School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK; Faculty of Computers and Information, Assiut University, Assiut 71515, Egypt
- Mona Mostafa Mohamed: Department of Zoology, Faculty of Science, Cairo University, Giza 12613, Egypt; Faculty of Basic Sciences, Galala University, Suez 435611, Egypt
- Mohamed Medhat Gaber: School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK; Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
64. Tizhoosh HR, Diamandis P, Campbell CJV, Safarpoor A, Kalra S, Maleki D, Riasatian A, Babaie M. Searching Images for Consensus: Can AI Remove Observer Variability in Pathology? The American Journal of Pathology 2021; 191:1702-1708. [PMID: 33636179] [DOI: 10.1016/j.ajpath.2021.01.015]
Abstract
One of the major obstacles in reaching diagnostic consensus is observer variability. With the recent success of artificial intelligence, particularly the deep networks, the question emerges as to whether the fundamental challenge of diagnostic imaging can now be resolved. This article briefly reviews the problem and how eventually both supervised and unsupervised AI technologies could help to overcome it.
Affiliation(s)
- Phedias Diamandis: Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada
- Clinton J V Campbell: Department of Pathology and Molecular Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Amir Safarpoor: Kimia Laboratory, University of Waterloo, Waterloo, Canada
- Shivam Kalra: Kimia Laboratory, University of Waterloo, Waterloo, Canada
- Danial Maleki: Kimia Laboratory, University of Waterloo, Waterloo, Canada
- Morteza Babaie: Kimia Laboratory, University of Waterloo, Waterloo, Canada
65. An end-to-end breast tumour classification model using context-based patch modelling - A BiLSTM approach for image classification. Comput Med Imaging Graph 2020; 87:101838. [PMID: 33340945] [DOI: 10.1016/j.compmedimag.2020.101838]
Abstract
Researchers working on computational analysis of Whole Slide Images (WSIs) in histopathology have primarily resorted to patch-based modelling due to the large resolution of each WSI. The large resolution makes it infeasible to feed WSIs directly into machine learning models due to computational constraints. However, because of this patch-based analysis, most current methods fail to exploit the underlying spatial relationship among the patches. In our work, we have tried to integrate this relationship along with feature-based correlation among the patches extracted from a particular tumorous region. The tumour regions extracted from WSI have arbitrary dimensions, ranging from 195 to 20,570 pixels in width and from 226 to 17,290 pixels in height. For the given task of classification, we have used BiLSTMs to model both forward and backward contextual relationships. Using an RNN-based model also removes the limitation on sequence size, which allows the modelling of variable-size images within a deep learning model. We have also incorporated the effect of spatial continuity by exploring different scanning techniques used to sample patches. To establish the efficiency of our approach, we trained and tested our model on two datasets, microscopy images and WSI tumour regions. Both datasets were published by the ICIAR BACH Challenge 2018. Finally, we compared our results with the top five teams that participated in the BACH challenge and achieved the top accuracy of 90% on the microscopy image dataset. For the WSI tumour region dataset, we compared the classification results with state-of-the-art deep learning networks such as ResNet, DenseNet, and InceptionV3 using a maximum voting technique. We achieved the highest performance accuracy of 84%. We found that BiLSTMs with CNN features performed much better in modelling patches into an end-to-end image classification network. Additionally, the variable dimensions of WSI tumour regions were used for classification without the need for resizing. This suggests that our method is independent of tumour image size and can process large images without losing resolution details.
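The core modelling idea (a bidirectional LSTM consuming a variable-length sequence of CNN patch features, so tumour regions of any size can be classified without resizing) can be sketched as follows. The feature extractor is replaced by random tensors and the architecture sizes are assumptions; this is not the trained model from the paper.

```python
# Bare-bones sketch of classifying a variable-length sequence of patch
# features with a bidirectional LSTM, as described at a high level in the
# abstract. The CNN feature extractor is replaced by random features.
import torch
import torch.nn as nn

class PatchBiLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, num_classes=4):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, patch_feats):            # (1, num_patches, feat_dim), any num_patches
        outputs, _ = self.bilstm(patch_feats)  # (1, num_patches, 2*hidden)
        summary = outputs.mean(dim=1)          # average over the patch sequence
        return self.classifier(summary)        # (1, num_classes) image-level logits

model = PatchBiLSTMClassifier()
for num_patches in (12, 87):                   # different tumour-region sizes, no resizing needed
    feats = torch.randn(1, num_patches, 512)   # stand-in for CNN features of the patches
    print(model(feats).shape)                  # torch.Size([1, 4]) in both cases
```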
66. Graham S, Epstein D, Rajpoot N. Dense Steerable Filter CNNs for Exploiting Rotational Symmetry in Histology Images. IEEE Transactions on Medical Imaging 2020; 39:4124-4136. [PMID: 32746153] [DOI: 10.1109/tmi.2020.3013246]
Abstract
Histology images are inherently symmetric under rotation, where each orientation is equally as likely to appear. However, this rotational symmetry is not widely utilised as prior knowledge in modern Convolutional Neural Networks (CNNs), resulting in data hungry models that learn independent features at each orientation. Allowing CNNs to be rotation-equivariant removes the necessity to learn this set of transformations from the data and instead frees up model capacity, allowing more discriminative features to be learned. This reduction in the number of required parameters also reduces the risk of overfitting. In this paper, we propose Dense Steerable Filter CNNs (DSF-CNNs) that use group convolutions with multiple rotated copies of each filter in a densely connected framework. Each filter is defined as a linear combination of steerable basis filters, enabling exact rotation and decreasing the number of trainable parameters compared to standard filters. We also provide the first in-depth comparison of different rotation-equivariant CNNs for histology image analysis and demonstrate the advantage of encoding rotational symmetry into modern architectures. We show that DSF-CNNs achieve state-of-the-art performance, with significantly fewer parameters, when applied to three different tasks in the area of computational pathology: breast tumour classification, colon gland segmentation and multi-tissue nuclear segmentation.
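A minimal way to see the benefit of building rotational symmetry into the filters is the 90-degree weight-sharing sketch below: one filter bank is applied at four rotations and the responses are pooled over orientation. The paper's dense steerable filters generalize this to finer rotations via steerable bases; the sketch is only the basic idea.

```python
# Simple sketch of exploiting 90-degree rotational symmetry by applying four
# rotated copies of the same 3x3 filter bank and max-pooling over orientations.
# Only the basic weight-sharing idea; not the steerable-basis group convolution
# developed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Rot90Conv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):
        responses = []
        for r in range(4):                                   # 0, 90, 180, 270 degrees
            w = torch.rot90(self.weight, r, dims=(2, 3))     # rotate every filter
            responses.append(F.conv2d(x, w, padding=1))
        return torch.stack(responses, dim=0).max(dim=0).values  # orientation pooling

layer = Rot90Conv(3, 8)
x = torch.randn(2, 3, 64, 64)                                # dummy histology patches
y = layer(x)
print(y.shape)                                               # torch.Size([2, 8, 64, 64])
# Rotating the input by 90 degrees permutes which rotated copy fires, so the
# pooled feature map rotates with the input instead of changing its content.
```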