1
Graham S, Vu QD, Jahanifar M, Weigert M, Schmidt U, Zhang W, Zhang J, Yang S, Xiang J, Wang X, Rumberger JL, Baumann E, Hirsch P, Liu L, Hong C, Aviles-Rivero AI, Jain A, Ahn H, Hong Y, Azzuni H, Xu M, Yaqub M, Blache MC, Piégu B, Vernay B, Scherr T, Böhland M, Löffler K, Li J, Ying W, Wang C, Snead D, Raza SEA, Minhas F, Rajpoot NM. CoNIC Challenge: Pushing the frontiers of nuclear detection, segmentation, classification and counting. Med Image Anal 2024; 92:103047. [PMID: 38157647] [DOI: 10.1016/j.media.2023.103047]
Abstract
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
Affiliation(s)
- Simon Graham
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
- Quoc Dang Vu
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
- Mostafa Jahanifar
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Martin Weigert
- Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland
- Wenhua Zhang
- The Department of Computer Science, The University of Hong Kong, Hong Kong
- Sen Yang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Jinxi Xiang
- Department of Precision Instruments, Tsinghua University, Beijing, China
- Xiyue Wang
- College of Computer Science, Sichuan University, Chengdu, China
- Josef Lorenz Rumberger
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany; Charité University Medicine, Berlin, Germany
- Peter Hirsch
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany
- Lihao Liu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
- Chenyang Hong
- Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong
- Angelica I Aviles-Rivero
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
- Ayushi Jain
- Softsensor.ai, Bridgewater, NJ, United States of America; PRR.ai, TX, United States of America
- Heeyoung Ahn
- R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
- Yiyu Hong
- R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
- Hussam Azzuni
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Min Xu
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Mohammad Yaqub
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Benoît Piégu
- CNRS, IFCE, INRAE, Université de Tours, PRC, 3780, Nouzilly, France
- Bertrand Vernay
- Institut de Génétique et de Biologie Moléculaire et Cellulaire, Illkirch, France; Centre National de la Recherche Scientifique, UMR7104, Illkirch, France; Institut National de la Santé et de la Recherche Médicale, INSERM, U1258, Illkirch, France; Université de Strasbourg, Strasbourg, France
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Moritz Böhland
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Jiachen Li
- School of Software Engineering, South China University of Technology, Guangzhou, China
- Weiqin Ying
- School of Software Engineering, South China University of Technology, Guangzhou, China
- Chixin Wang
- School of Software Engineering, South China University of Technology, Guangzhou, China
- David Snead
- Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom; Division of Biomedical Sciences, Warwick Medical School, University of Warwick, Coventry, United Kingdom
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Fayyaz Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Nasir M Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom
2
Dawood M, Vu QD, Young LS, Branson K, Jones L, Rajpoot N, Minhas FUAA. Cancer drug sensitivity prediction from routine histology images. NPJ Precis Oncol 2024; 8:5. [PMID: 38184744] [PMCID: PMC10771481] [DOI: 10.1038/s41698-023-00491-9]
Abstract
Drug sensitivity prediction models can aid in personalising cancer therapy, biomarker discovery, and drug design. Such models require survival data from randomised controlled trials, which can be time-consuming and expensive to collect. In this proof-of-concept study, we demonstrate for the first time that deep learning can link histological patterns in whole slide images (WSIs) of Haematoxylin & Eosin (H&E) stained breast cancer sections with drug sensitivities inferred from cell lines. We employ patient-wise drug sensitivities imputed from gene expression-based mapping of drug effects on cancer cell lines to train a deep learning model that predicts patients' sensitivity to multiple drugs from WSIs. We show that it is possible to use routine WSIs to predict the drug sensitivity profile of a cancer patient for a number of approved and experimental drugs. We also show that the proposed approach can identify cellular and histological patterns associated with drug sensitivity profiles of cancer patients.
Affiliation(s)
- Muhammad Dawood
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Quoc Dang Vu
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Lawrence S Young
- Warwick Medical School, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- Kim Branson
- Artificial Intelligence & Machine Learning, GlaxoSmithKline, San Francisco, CA, USA
- Louise Jones
- Barts Cancer Institute, Queen Mary University of London, London, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
- Fayyaz Ul Amir Afsar Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
3
Vu QD, Rajpoot K, Raza SEA, Rajpoot N. Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images. Med Image Anal 2023; 85:102743. [PMID: 36702037] [DOI: 10.1016/j.media.2023.102743]
Abstract
Diagnostic, prognostic and therapeutic decision-making of cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive as they rely less on expert annotation, which is cumbersome. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use, where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal working of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared with recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
Affiliation(s)
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, UK
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK
4
Graham S, Vu QD, Jahanifar M, Raza SEA, Minhas F, Snead D, Rajpoot N. One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification. Med Image Anal 2023; 83:102685. [PMID: 36410209] [DOI: 10.1016/j.media.2022.102685]
Abstract
The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data-hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million, 900 thousand and 2.1 million nuclei, glands and lumina, respectively. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
Affiliation(s)
- Simon Graham
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- David Snead
- Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
5
Verma R, Kumar N, Patil A, Kurian NC, Rane S, Graham S, Vu QD, Zwager M, Raza SEA, Rajpoot N, Wu X, Chen H, Huang Y, Wang L, Jung H, Brown GT, Liu Y, Liu S, Jahromi SAF, Khani AA, Montahaei E, Baghshah MS, Behroozi H, Semkin P, Rassadin A, Dutande P, Lodaya R, Baid U, Baheti B, Talbar S, Mahbod A, Ecker R, Ellinger I, Luo Z, Dong B, Xu Z, Yao Y, Lv S, Feng M, Xu K, Zunair H, Hamza AB, Smiley S, Yin TK, Fang QR, Srivastava S, Mahapatra D, Trnavska L, Zhang H, Narayanan PL, Law J, Yuan Y, Tejomay A, Mitkari A, Koka D, Ramachandra V, Kini L, Sethi A. MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge. IEEE Trans Med Imaging 2021; 40:3413-3423. [PMID: 34086562] [DOI: 10.1109/tmi.2021.3085712]
Abstract
Detecting various types of cells in and around the tumor matrix holds a special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up the pathologists' time for higher-value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset has over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw wide participation from across the world, and the top methods were able to match inter-human concordance for the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by various participants. We have released the MoNuSAC2020 dataset to the public.
6
Vu QD, Fong C, von Loga K, Raza SEA, Nava Rodrigues D, Patel B, Peckitt C, Begum R, Athauda A, Starling N, Chau I, Rao S, Watkins DJ, Rebelatto M, Waddell T, Wadsley J, Roques T, Hewish M, Cunningham D, Rajpoot N. Digital histological markers based on routine H&E slides to predict benefit from maintenance immunotherapy in esophagogastric adenocarcinoma. J Clin Oncol 2021. [DOI: 10.1200/jco.2021.39.15_suppl.e16074]
Abstract
e16074 Background: Immune checkpoint inhibition (ICI) is an effective treatment for a subset of patients with inoperable esophagogastric (EG) adenocarcinoma. Robust predictive biomarkers are required to identify these patients, and a variety of strategies, including immunohistochemical staining of PD-L1 and tumor mutational burden (TMB) assessment, have been employed. Here, we explore digital histological (dHis) markers based on routine hematoxylin and eosin (H&E) slides, alone or in combination with molecular markers (PD-L1 and TMB), as predictive biomarkers of benefit from maintenance immunotherapy in patients with inoperable EG adenocarcinoma. Methods: We developed a deep learning-based algorithm to construct novel dHis markers by summarizing the statistics of all the different types of nuclei present in the tumor tissue sections, their morphological features and their colocalization across each whole slide image. The dHis markers were then input into a decision-tree-based approach to test for prognostic and predictive power, alone or in combination with molecular markers. We assessed two cohorts of patients randomized to surveillance (n=38) or maintenance durvalumab (n=35) after 18 weeks of first-line platinum-based chemotherapy in the PLATFORM trial (NCT02678182), according to the 12-week progression-free rate. We measured accuracy as the area under the receiver operating characteristic curve (AUROC) to determine the prognostic and predictive power of each marker set. We conducted a stratified 3-fold cross-validation, repeated 5 times, and report the overall AUROC results. Results: Molecular markers alone yielded an AUROC of 0.5581±0.0939 on the surveillance arm, 0.6671±0.1479 on the treatment arm, and 0.6376±0.0958 for both arms. Digital histological markers alone yielded an AUROC of 0.8952±0.0638, 0.8995±0.0719 and 0.8488±0.0700 on surveillance, immunotherapy and both arms, respectively. When using these two sets of markers together for both arms, molecular markers offered a limited improvement (around 0.02). Patients with TMB in the highest tertile were associated with a lower likelihood of having progressive disease 12 weeks after randomization. Interestingly, dHis markers from the morphology of connective and inflammatory nuclei were highly predictive for treatment benefit. Conclusions: Preliminary results suggest digital histological markers offer significant improvement over PD-L1 and TMB markers alone for predicting benefit from immunotherapy in EG adenocarcinoma, with the added advantages of scalable, rapid, low-cost and objective quantification on routine histology sections. We are further validating their effectiveness on a larger cohort. Clinical trial information: NCT02678182.
Affiliation(s)
- Caroline Fong
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Katharina von Loga
- The Royal Marsden NHS Foundation Trust, London and Sutton, United Kingdom
- Bijal Patel
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Clare Peckitt
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Ruwaida Begum
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Avani Athauda
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Naureen Starling
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Ian Chau
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Sheela Rao
- The Royal Marsden NHS Foundation Trust, London and Sutton, United Kingdom
- David J. Watkins
- The Royal Marsden Hospital NHS Foundation Trust, London and Sutton, United Kingdom
- Tom Waddell
- The Christie NHS Foundation Trust, Manchester, United Kingdom
- Tom Roques
- Norfolk and Norwich University Hospitals NHS Foundation Trust, Norwich, United Kingdom
7
Abstract
Grading of cancer, based upon the degree of cancer differentiation, plays a major role in describing the characteristics and behavior of the cancer and in determining the treatment plan for patients. The grade is determined by a subjective and qualitative assessment of tissues under the microscope, which suffers from high inter- and intra-observer variability among pathologists. Digital pathology offers an alternative means to automate the procedure as well as to improve the accuracy and robustness of cancer grading. However, most such methods tend to mimic or reproduce the cancer grade determined by human experts. Herein, we propose an alternative, quantitative means of assessing and characterizing cancers in an unsupervised manner. The proposed method utilizes conditional generative adversarial networks to characterize tissues. The proposed method is evaluated using whole slide images (WSIs) and tissue microarrays (TMAs) of colorectal cancer specimens. The results suggest that the proposed method holds potential for quantifying cancer characteristics and improving cancer pathology.
8
Kumar N, Verma R, Anand D, Zhou Y, Onder OF, Tsougenis E, Chen H, Heng PA, Li J, Hu Z, Wang Y, Koohbanani NA, Jahanifar M, Tajeddin NZ, Gooya A, Rajpoot N, Ren X, Zhou S, Wang Q, Shen D, Yang CK, Weng CH, Yu WH, Yeh CY, Yang S, Xu S, Yeung PH, Sun P, Mahbod A, Schaefer G, Ellinger I, Ecker R, Smedby O, Wang C, Chidester B, Ton TV, Tran MT, Ma J, Do MN, Graham S, Vu QD, Kwak JT, Gunda A, Chunduri R, Hu C, Zhou X, Lotfi D, Safdari R, Kascenas A, O'Neil A, Eschweiler D, Stegmaier J, Cui Y, Yin B, Chen K, Tian X, Gruening P, Barth E, Arbel E, Remer I, Ben-Dor A, Sirazitdinova E, Kohl M, Braunewell S, Li Y, Xie X, Shen L, Ma J, Baksi KD, Khan MA, Choo J, Colomer A, Naranjo V, Pei L, Iftekharuddin KM, Roy K, Bhattacharjee D, Pedraza A, Bueno MG, Devanathan S, Radhakrishnan S, Koduganty P, Wu Z, Cai G, Liu X, Wang Y, Sethi A. A Multi-Organ Nucleus Segmentation Challenge. IEEE Trans Med Imaging 2020; 39:1380-1391. [PMID: 31647422] [PMCID: PMC10439521] [DOI: 10.1109/tmi.2019.2947628]
Abstract
Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques in digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference in which 32 teams with more than 80 participants from geographically diverse institutes participated. Contestants were given a training set with 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset with 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends observed that contributed to increased accuracy were the use of color normalization and heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask-RCNN were popularly used, typically based on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
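The challenge metric described above, the aggregated Jaccard index, rewards instance-level matching rather than pixel-level overlap alone. A minimal per-image sketch is given below, assuming instances are represented as sets of pixel coordinates; this is an illustrative simplification, not the official evaluation code, which operates on label maps and handles matching bookkeeping more carefully.

```python
def aji(gt_instances, pred_instances):
    """Simplified Aggregated Jaccard Index for one image.

    gt_instances, pred_instances: lists of sets of (row, col) pixels.
    Each ground-truth nucleus is matched to the predicted instance
    with the highest IoU; unmatched objects on either side inflate
    the union, penalising both missed and spurious nuclei.
    """
    used = set()      # indices of predicted instances already matched
    inter_sum = 0
    union_sum = 0
    for g in gt_instances:
        best_iou, best_j = 0.0, None
        for j, p in enumerate(pred_instances):
            inter = len(g & p)
            if inter == 0:
                continue
            iou = inter / len(g | p)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is None:
            union_sum += len(g)           # missed nucleus: union only
        else:
            p = pred_instances[best_j]
            inter_sum += len(g & p)
            union_sum += len(g | p)
            used.add(best_j)
    for j, p in enumerate(pred_instances):
        if j not in used:
            union_sum += len(p)           # spurious prediction penalty
    return inter_sum / union_sum if union_sum else 0.0
```

A perfect prediction yields 1.0, while a semantic-segmentation output that merges touching nuclei into one blob is penalised even when its pixel overlap is high, which is exactly why the organisers preferred AJI over plain Jaccard.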
9
Graham S, Vu QD, Raza SEA, Azam A, Tsang YW, Kwak JT, Rajpoot N. Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med Image Anal 2019; 58:101563. [PMID: 31561183] [DOI: 10.1016/j.media.2019.101563]
Abstract
Nuclear segmentation and classification within Haematoxylin & Eosin stained histology images is a fundamental prerequisite in the digital pathology work-flow. The development of automated methods for nuclear segmentation and classification enables the quantitative analysis of tens of thousands of nuclei within a whole-slide pathology image, opening up possibilities of further analysis of large-scale nuclear morphometry. However, automated nuclear segmentation and classification is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intra-class variability such as the nuclei of tumour cells. Additionally, some of the nuclei are often clustered together. To address these challenges, we present a novel convolutional neural network for simultaneous nuclear segmentation and classification that leverages the instance-rich information encoded within the vertical and horizontal distances of nuclear pixels to their centres of mass. These distances are then utilised to separate clustered nuclei, resulting in an accurate segmentation, particularly in areas with overlapping instances. Then, for each segmented instance the network predicts the type of nucleus via a devoted up-sampling branch. We demonstrate state-of-the-art performance compared to other methods on multiple independent multi-tissue histology image datasets. As part of this work, we introduce a new dataset of Haematoxylin & Eosin stained colorectal adenocarcinoma image tiles, containing 24,319 exhaustively annotated nuclei with associated class labels.
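The horizontal and vertical distance encoding described above can be sketched as follows: for every pixel of a nucleus instance, store its offset from that instance's centre of mass, normalised per instance. This is a minimal illustration assuming a 2-D integer label map with 0 as background, not the authors' implementation, which also predicts these maps with a CNN and derives instance boundaries from their gradients.

```python
def hv_maps(label_map):
    """Compute per-pixel horizontal/vertical distance-to-centroid maps.

    label_map: 2-D list of ints, 0 = background, k > 0 = instance id.
    Returns (hmap, vmap), each the same shape, with offsets from the
    instance centre of mass normalised to [-1, 1] per instance.
    """
    h, w = len(label_map), len(label_map[0])
    hmap = [[0.0] * w for _ in range(h)]
    vmap = [[0.0] * w for _ in range(h)]
    # collect pixel coordinates per instance id
    pixels = {}
    for y in range(h):
        for x in range(w):
            lab = label_map[y][x]
            if lab:
                pixels.setdefault(lab, []).append((y, x))
    for pts in pixels.values():
        cy = sum(p[0] for p in pts) / len(pts)   # centre of mass
        cx = sum(p[1] for p in pts) / len(pts)
        max_dy = max(abs(p[0] - cy) for p in pts) or 1.0
        max_dx = max(abs(p[1] - cx) for p in pts) or 1.0
        for y, x in pts:
            vmap[y][x] = (y - cy) / max_dy
            hmap[y][x] = (x - cx) / max_dx
    return hmap, vmap
```

Because the maps jump sharply in sign where two instances touch, thresholding their gradients gives separating boundaries between clustered nuclei, which is the property the network exploits.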
Affiliation(s)
- Simon Graham
- Mathematics for Real World Systems Centre for Doctoral Training, University of Warwick, UK; Department of Computer Science, University of Warwick, UK
- Quoc Dang Vu
- Department of Computer Science and Engineering, Sejong University, South Korea
- Shan E Ahmed Raza
- Department of Computer Science, University of Warwick, UK; Centre for Evolution and Cancer & Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Ayesha Azam
- Department of Computer Science, University of Warwick, UK; University Hospitals Coventry and Warwickshire, Coventry, UK
- Yee Wah Tsang
- University Hospitals Coventry and Warwickshire, Coventry, UK
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, South Korea
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK
10
Aresta G, Araújo T, Kwok S, Chennamsetty SS, Safwan M, Alex V, Marami B, Prastawa M, Chan M, Donovan M, Fernandez G, Zeineh J, Kohl M, Walz C, Ludwig F, Braunewell S, Baust M, Vu QD, To MNN, Kim E, Kwak JT, Galal S, Sanchez-Freire V, Brancati N, Frucci M, Riccio D, Wang Y, Sun L, Ma K, Fang J, Kone I, Boulmane L, Campilho A, Eloy C, Polónia A, Aguiar P. BACH: Grand challenge on breast cancer histology images. Med Image Anal 2019; 56:122-139. [PMID: 31226662] [DOI: 10.1016/j.media.2019.05.010]
Abstract
Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to nonconsensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images has already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state-of-the-art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state-of-the-art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available to promote further improvements in the field of automatic classification in digital pathology.
Affiliation(s)
- Guilherme Aresta
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto 4200-465, Portugal; Faculty of Engineering of University of Porto, Porto 4200-465, Portugal.
- Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto 4200-465, Portugal; Faculty of Engineering of University of Porto, Porto 4200-465, Portugal.
- Bahram Marami
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
- Marcel Prastawa
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
- Monica Chan
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
- Michael Donovan
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
- Gerardo Fernandez
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
- Jack Zeineh
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
- Christoph Walz
- Institute of Pathology, Faculty of Medicine, LMU Munich, Munich, Germany
- Quoc Dang Vu
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
- Minh Nguyen Nhat To
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
- Eal Kim
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
- Nadia Brancati
- Institute for High Performance Computing and Networking, National Research Council of Italy (ICAR-CNR), Naples, Italy
- Maria Frucci
- Institute for High Performance Computing and Networking, National Research Council of Italy (ICAR-CNR), Naples, Italy
- Daniel Riccio
- Institute for High Performance Computing and Networking, National Research Council of Italy (ICAR-CNR), Naples, Italy; University of Naples "Federico II", Naples, Italy
- Yaqi Wang
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
- Lingling Sun
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China; Zhejiang Provincial Laboratory of Integrated Circuits Design, Hangzhou Dianzi University, Hangzhou 310018, China
- Kaiqiang Ma
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
- Jiannan Fang
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
- Ismael Kone
- 2MIA Research Group, LEM2A Lab, Faculté des Sciences, Université Moulay Ismail, Meknes, Morocco
- Lahsen Boulmane
- 2MIA Research Group, LEM2A Lab, Faculté des Sciences, Université Moulay Ismail, Meknes, Morocco
- Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto 4200-465, Portugal; Faculty of Engineering of University of Porto, Porto 4200-465, Portugal
- Catarina Eloy
- Laboratório de Anatomia Patológica, Ipatimup Diagnósticos, Rua Júlio Amaral de Carvalho, Porto 45, 4200-135, Portugal; Faculdade de Medicina, Universidade do Porto, Alameda Prof Hernâni Monteiro, Porto 4200-319, Portugal; Instituto de Investigação e Inovação em Saúde (i3S), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal
- António Polónia
- Laboratório de Anatomia Patológica, Ipatimup Diagnósticos, Rua Júlio Amaral de Carvalho, Porto 45, 4200-135, Portugal; Faculdade de Medicina, Universidade do Porto, Alameda Prof Hernâni Monteiro, Porto 4200-319, Portugal; Instituto de Investigação e Inovação em Saúde (i3S), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal.
- Paulo Aguiar
- Instituto de Investigação e Inovação em Saúde (i3S), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal; Instituto de Engenharia Biomédica (INEB), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal.
|
11
|
Vu QD, Kwak JT. A dense multi-path decoder for tissue segmentation in histopathology images. Comput Methods Programs Biomed 2019; 173:119-129. [PMID: 31046986] [DOI: 10.1016/j.cmpb.2019.03.007]
Abstract
BACKGROUND AND OBJECTIVE Segmenting different tissue components in histopathological images is of great importance for analyzing tissues and tumor environments. In recent years, the encoder-decoder family of convolutional neural networks has increasingly been adopted to develop automated segmentation tools. While the encoder has been the main focus of most investigations, the role of the decoder has so far not been well studied and understood. Herein, we propose an improved design of a decoder for the segmentation of epithelium and stroma components in histopathology images. METHODS The proposed decoder is built upon a multi-path layout with dense shortcut connections between layers to maximize the learning and inference capability. Equipped with the proposed decoder, neural networks are built using three types of encoders (VGG, ResNet and pre-activated ResNet). To assess the proposed method, breast and prostate tissue datasets are utilized, including 108 and 52 hematoxylin and eosin (H&E) stained breast tissue images and 224 H&E prostate tissue images. RESULTS Combining the pre-activated ResNet encoder and the proposed decoder, we achieved a pixel-wise accuracy (ACC) of 0.9122, a Rand index (RAND) of 0.8398, an area under the receiver operating characteristic curve (AUC) of 0.9716, a Dice coefficient for stroma (DICE_STR) of 0.9092 and a Dice coefficient for epithelium (DICE_EPI) of 0.9150 on the breast tissue dataset. The same network obtained 0.9074 ACC, 0.8320 Rand index, 0.9719 AUC, 0.9021 DICE_EPI and 0.9121 DICE_STR on the prostate dataset. CONCLUSIONS In general, the experimental results confirmed that the proposed network is superior to networks combined with a conventional decoder. Therefore, the proposed decoder could aid in improving tissue analysis in histopathology images.
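The dense multi-path decoder idea in this abstract, where each decoder stage receives shortcut connections from all earlier stages, can be sketched in a few lines. This is an illustrative NumPy reduction, not the paper's implementation: the function names, channel counts and the random 1x1-convolution stand-in for learned weights are all assumptions.

```python
import numpy as np

def upsample_to(x, h, w):
    """Nearest-neighbour upsampling of a (C, H, W) feature map to (h, w)."""
    f = h // x.shape[1]
    return x.repeat(f, axis=1).repeat(f, axis=2)

def conv1x1(x, weight):
    """Pointwise (1x1) convolution: mix channels with a (C_out, C_in) matrix."""
    c, h, w = x.shape
    return (weight @ x.reshape(c, -1)).reshape(weight.shape[0], h, w)

def dense_decoder(skips, out_channels=8, seed=0):
    """
    skips: encoder feature maps ordered deepest -> shallowest, each (C, H, W).
    Every decoder stage is fed the matching encoder skip concatenated with
    the (upsampled) outputs of *all* earlier stages -- the dense shortcuts.
    """
    rng = np.random.default_rng(seed)
    outputs = []
    for skip in skips:
        h, w = skip.shape[1:]
        # bring every earlier stage output up to the current resolution
        prev = [upsample_to(o, h, w) for o in outputs]
        x = np.concatenate([skip] + prev, axis=0)
        weight = rng.standard_normal((out_channels, x.shape[0])) * 0.1
        outputs.append(np.maximum(conv1x1(x, weight), 0.0))  # ReLU
    return outputs[-1]
```

Each stage's input channel count grows with the number of earlier stages, which is what lets late stages reuse early, coarse features directly.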
Affiliation(s)
- Quoc Dang Vu
- Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea.
|
12
|
Vu QD, Graham S, Kurc T, To MNN, Shaban M, Qaiser T, Koohbanani NA, Khurram SA, Kalpathy-Cramer J, Zhao T, Gupta R, Kwak JT, Rajpoot N, Saltz J, Farahani K. Methods for Segmentation and Classification of Digital Microscopy Tissue Images. Front Bioeng Biotechnol 2019; 7:53. [PMID: 31001524] [PMCID: PMC6454006] [DOI: 10.3389/fbioe.2019.00053]
Abstract
High-resolution microscopy images of tissue specimens provide detailed information about the morphology of normal and diseased tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Segmentation of nuclei and classification of tissue images are two common tasks in tissue image analysis. Developing accurate and efficient algorithms for these tasks is challenging because of the complexity of tissue morphology and tumor heterogeneity. In this paper we present two computer algorithms: one designed for segmentation of nuclei and the other for classification of whole-slide tissue images. The segmentation algorithm implements a multiscale deep residual aggregation network to accurately segment nuclear material and then separate clumped nuclei into individual nuclei. The classification algorithm first carries out patch-level classification via a deep learning method; patch-level statistical and morphological features are then used as input to a random forest regression model for whole-slide image classification. The segmentation and classification algorithms were evaluated in the MICCAI 2017 Digital Pathology challenge. The segmentation algorithm achieved an accuracy score of 0.78 and the classification algorithm an accuracy score of 0.81; these were the highest scores in the challenge.
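The whole-slide stage of the classification pipeline described in this abstract aggregates patch-level predictions into slide-level features before a random forest regressor. The aggregation step alone can be illustrated as follows; this is a hedged sketch in which the function name and the exact feature set (per-class mean, standard deviation and predicted-class fractions) are assumptions, and the random forest itself is omitted.

```python
import numpy as np

def wsi_features(patch_probs):
    """
    Aggregate per-patch class probabilities (N patches x K classes) into a
    single whole-slide feature vector: per-class mean, per-class standard
    deviation, and the fraction of patches predicted as each class.
    """
    probs = np.asarray(patch_probs, dtype=float)
    n, k = probs.shape
    predicted = probs.argmax(axis=1)           # hard label per patch
    fractions = np.bincount(predicted, minlength=k) / n
    return np.concatenate([probs.mean(axis=0), probs.std(axis=0), fractions])
```

A regression model (in the paper's entry, a random forest) would then map such fixed-length vectors to whole-slide labels, decoupling slide-level prediction from the patch classifier.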
Affiliation(s)
- Quoc Dang Vu
- Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Simon Graham
- Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Minh Nguyen Nhat To
- Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Muhammad Shaban
- Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Talha Qaiser
- Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Syed Ali Khurram
- School of Clinical Dentistry, The University of Sheffield, Sheffield, United Kingdom
- Jayashree Kalpathy-Cramer
- Department of Radiology, Harvard Medical School and Mass General Hospital, Boston, MA, United States
- Tianhao Zhao
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Rajarsi Gupta
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Keyvan Farahani
- Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, United States
|
13
|
Affiliation(s)
- P Lê
- Service de Chirurgie Générale, Centre Hospitalier de l'Agglomération Montargoise - Montargis.
|
14
|
Lê P, Blondon H, Vu QD. [Apropos of a "55-year-old man presenting an abdominal mass"]. J Chir (Paris) 2003; 140:68-9. [PMID: 12709658]
|