1.
Hiremath A, Corredor G, Li L, Leo P, Magi-Galluzzi C, Elliott R, Purysko A, Shiradkar R, Madabhushi A. An integrated radiology-pathology machine learning classifier for outcome prediction following radical prostatectomy: Preliminary findings. Heliyon 2024; 10:e29602. PMID: 38665576; PMCID: PMC11044050; DOI: 10.1016/j.heliyon.2024.e29602.
Abstract
Objectives: To evaluate the added benefit of integrating features from pre-treatment MRI (radiomics) and digitized post-surgical pathology slides (pathomics) in prostate cancer (PCa) patients for prognosticating outcomes after radical prostatectomy (RP), including (a) rising prostate-specific antigen (PSA) and (b) extraprostatic extension (EPE). Methods: Multi-institutional data (N = 58) from PCa patients who underwent pre-treatment 3-T MRI prior to RP were included in this retrospective study. Radiomic and pathomic features were extracted from PCa regions on MRI and RP specimens delineated by expert clinicians. On the training set (D1, N = 44), Cox proportional hazards models MR, MP, and MRaP were trained using radiomics, pathomics, and their combination, respectively, to prognosticate rising PSA (PSA > 0.03 ng/mL). Top features from MRaP were used to train a model to predict EPE on D1 and test it on an external dataset (D2, N = 14). The C-index and Kaplan-Meier curves were used for survival analysis, and the area under the ROC curve (AUC) was used for EPE. MRaP was compared with an existing post-treatment risk calculator, CAPRA (MC). Results: Patients had a median follow-up of 34 months. MRaP (c-index = 0.685 ± 0.05) significantly outperformed MR (c-index = 0.646 ± 0.05), MP (c-index = 0.631 ± 0.06), and MC (c-index = 0.601 ± 0.071) (p < 0.0001). Cross-validated Kaplan-Meier curves showed significant separation among risk groups for rising PSA for MRaP (p < 0.005, hazard ratio (HR) = 11.36) compared to MR (p = 0.64, HR = 1.33), MP (p = 0.19, HR = 2.82), and MC (p = 0.10, HR = 3.05). The integrated radio-pathomic model MRaP (AUC = 0.80) outperformed MR (AUC = 0.57) and MP (AUC = 0.76) in predicting EPE on the external dataset (D2). Conclusions: Results from this preliminary study suggest that a combination of radiomic and pathomic features can better predict post-surgical outcomes (rising PSA and EPE) than either alone, as well as an extant prognostic nomogram (CAPRA).
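The comparison of MR, MP, and MRaP above rests on the concordance index (c-index), which measures how well a risk score orders patients by time to event under censoring. A minimal pure-Python sketch of Harrell's c-index (the function and toy data are ours for illustration, not the study's code; tied event times are handled simplistically):

```python
from itertools import combinations

def harrell_c_index(risk, time, event):
    """Harrell's concordance index: the fraction of comparable patient
    pairs in which the higher-risk patient experiences the event earlier.
    `risk` are model risk scores; `event` is 1 if the outcome (e.g.
    rising PSA) was observed, 0 if censored."""
    concordant, tied, comparable = 0, 0, 0
    for i, j in combinations(range(len(time)), 2):
        # order the pair so that patient `a` has the shorter follow-up
        a, b = (i, j) if time[i] < time[j] else (j, i)
        if not event[a]:
            continue  # not comparable: the earlier time is censored
        comparable += 1
        if risk[a] > risk[b]:
            concordant += 1
        elif risk[a] == risk[b]:
            tied += 1
    return (concordant + 0.5 * tied) / comparable

# toy data: a higher score should mean earlier recurrence
risk = [0.9, 0.7, 0.3, 0.1]
time = [12, 20, 30, 36]   # months to event or censoring
event = [1, 1, 1, 0]      # 0 = censored at last follow-up
print(harrell_c_index(risk, time, event))  # → 1.0 (perfect ordering)
```

A c-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so the reported 0.685 for MRaP versus 0.601 for CAPRA is a modest but meaningful gap.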
Affiliation(s)
- Germán Corredor
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Lin Li
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Patrick Leo
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Robin Elliott
- Department of Pathology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Andrei Purysko
- Department of Radiology and Nuclear Medicine, Cleveland Clinic, Cleveland, OH, USA
- Rakesh Shiradkar
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Atlanta Veterans Administration Medical Center, Atlanta, GA, USA
2.
Wang R, Chow SSL, Serafin RB, Xie W, Han Q, Baraznenok E, Lan L, Bishop KW, Liu JTC. Direct three-dimensional segmentation of prostate glands with nnU-Net. J Biomed Opt 2024; 29:036001. PMID: 38434772; PMCID: PMC10905031; DOI: 10.1117/1.jbo.29.3.036001.
Abstract
Significance: In recent years, we and others have developed non-destructive methods to obtain three-dimensional (3D) pathology datasets of clinical biopsies and surgical specimens. For prostate cancer risk stratification (prognostication), standard-of-care Gleason grading is based on examining the morphology of prostate glands in thin 2D sections. This motivates us to perform 3D segmentation of prostate glands in our 3D pathology datasets for the purposes of computational analysis of 3D glandular features that could offer improved prognostic performance. Aim: To facilitate prostate cancer risk assessment, we developed a computationally efficient and accurate deep learning model for 3D gland segmentation based on open-top light-sheet microscopy datasets of human prostate biopsies stained with a fluorescent analog of hematoxylin and eosin (H&E). Approach: For 3D gland segmentation based on our H&E-analog 3D pathology datasets, we previously developed a hybrid deep learning and computer vision-based pipeline, called image translation-assisted segmentation in 3D (ITAS3D), which required a complex two-stage procedure and tedious manual optimization of parameters. To simplify this procedure, we use the 3D gland-segmentation masks previously generated by ITAS3D as training datasets for a direct end-to-end deep learning-based segmentation model, nnU-Net. The inputs to this model are 3D pathology datasets of prostate biopsies rapidly stained with an inexpensive fluorescent analog of H&E, and the outputs are 3D semantic segmentation masks of the gland epithelium, gland lumen, and surrounding stromal compartments within the tissue. Results: nnU-Net demonstrates remarkable accuracy in 3D gland segmentations even with limited training data. Moreover, compared with the previous ITAS3D pipeline, nnU-Net operation is simpler and faster, and it can maintain good accuracy even with lower-resolution inputs.
Conclusions: Our trained DL-based 3D segmentation model will facilitate future studies to demonstrate the value of computational 3D pathology for guiding critical treatment decisions for patients with prostate cancer.
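Segmentation models such as nnU-Net are typically scored voxel-wise against reference masks, most commonly with the Dice similarity coefficient. A small NumPy sketch for binary 3D masks (illustrative only; this is not the authors' evaluation code, and the toy volumes are ours):

```python
import numpy as np

def dice_3d(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary 3D masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# toy 3D volumes standing in for gland-lumen masks
truth = np.zeros((8, 8, 8), dtype=bool)
truth[2:6, 2:6, 2:6] = True        # reference: 64 voxels
pred = np.zeros_like(truth)
pred[3:6, 2:6, 2:6] = True         # prediction: 48 voxels, all inside truth
print(round(float(dice_3d(pred, truth)), 3))  # → 0.857
```

In practice a per-class Dice would be computed separately for the epithelium, lumen, and stroma masks the abstract describes.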
Affiliation(s)
- Rui Wang
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- Sarah S. L. Chow
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- Robert B. Serafin
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- Weisi Xie
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- Qinghua Han
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- Elena Baraznenok
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- Lydia Lan
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- University of Washington, Department of Biology, Seattle, Washington, United States
- Kevin W. Bishop
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- Jonathan T. C. Liu
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- University of Washington, Department of Laboratory Medicine and Pathology, Seattle, Washington, United States
3.
Chen C, Lu C, Viswanathan V, Maveal B, Maheshwari B, Willis J, Madabhushi A. Identifying primary tumor site of origin for liver metastases via a combination of handcrafted and deep learning features. J Pathol Clin Res 2024; 10:e344. PMID: 37822044; PMCID: PMC10766034; DOI: 10.1002/cjp2.344.
Abstract
The liver is one of the most common sites for metastases, which can arise from primary tumors at multiple sites of origin. Identifying the primary site of origin (PSO) of a metastasis can help guide therapeutic options for liver metastases. In this pilot study, we hypothesized that computer-extracted handcrafted (HC) histomorphometric features can be utilized to identify the PSO of liver metastases. Cellular features, including tumor nuclei morphological and graph features as well as cytoplasm texture features, were extracted by computer algorithms from 175 slides (114 patients). The study comprised three experiments: (1) comparing and (2) fusing a machine learning (ML) model trained with HC pathomic features and deep learning (DL)-based classifiers to predict site of origin; and (3) identifying the section of the primary tumor from which metastases were derived. For experiment 1, we divided the cohort into a training set composed of primary and matched liver metastases [60 patients, 121 whole slide images (WSIs)] and a hold-out validation set (54 patients, 54 WSIs) composed solely of liver metastases of known site of origin. Using the extracted HC features of the training set, a combination of supervised machine classifiers and unsupervised clustering was applied to identify the PSO. A random forest classifier achieved areas under the curve (AUCs) of 0.83, 0.64, 0.82, and 0.64 in classifying metastatic tumors from colon, esophagus, breast, and pancreas on the validation set. The top features were related to nuclear and peri-nuclear shape and texture attributes. We also trained a DL network to serve as a direct comparison to our method. The DL model achieved AUCs of 0.94 (colon), 0.66 (esophagus), 0.79 (breast), and 0.67 (pancreas) in identifying PSO.
A decision fusion-based strategy was deployed to fuse the trained ML and DL classifiers and achieved slightly better results than either classifier alone (colon: 0.93, esophagus: 0.68, breast: 0.81, pancreas: 0.69). For the third experiment, WSI-level attention maps were generated using a trained DL network to produce a composite feature-similarity heat map between paired primaries and their associated metastases. Our experiments revealed that epithelium-rich and moderately differentiated tumor regions of primary tumors were quantitatively similar to paired metastatic tumors. Our findings suggest that a combination of HC and DL features could help identify the PSO for liver metastases while also potentially identifying the spatial sites of origin of the metastases within primary tumors.
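Decision-level fusion of the kind described above can be as simple as averaging the two classifiers' predicted probabilities before scoring. A minimal pure-Python sketch (function names, weighting, and toy probabilities are ours, not the paper's), with AUC computed via the Mann-Whitney statistic:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fuse(p_ml, p_dl, w=0.5):
    """Late (decision-level) fusion: a weighted average of the two
    classifiers' predicted probabilities for each sample."""
    return [w * a + (1 - w) * b for a, b in zip(p_ml, p_dl)]

labels = [1, 1, 0, 0, 1, 0]
p_ml = [0.80, 0.55, 0.60, 0.30, 0.40, 0.20]  # handcrafted-feature model
p_dl = [0.70, 0.65, 0.40, 0.35, 0.75, 0.45]  # deep learning model
print(auc(fuse(p_ml, p_dl), labels))
```

Because the two models make partly uncorrelated errors, the fused score can rank cases better than either input, which is the behavior the study reports.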
Affiliation(s)
- Chuheng Chen
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Cheng Lu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Vidya Viswanathan
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Brandon Maveal
- Department of Pathology, University Hospitals Cleveland Medical Center and Case Western Reserve University, Cleveland, OH, USA
- Bhunesh Maheshwari
- Department of Pathology, University Hospitals Cleveland Medical Center and Case Western Reserve University, Cleveland, OH, USA
- Joseph Willis
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Department of Pathology, University Hospitals Cleveland Medical Center and Case Western Reserve University, Cleveland, OH, USA
- Anant Madabhushi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Radiology and Imaging Sciences, Biomedical Informatics (BMI) and Pathology, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Atlanta Veterans Administration Medical Center, Atlanta, GA, USA
4.
Shafi S, Parwani AV. Artificial intelligence in diagnostic pathology. Diagn Pathol 2023; 18:109. PMID: 37784122; PMCID: PMC10546747; DOI: 10.1186/s13000-023-01375-z.
Abstract
Digital pathology (DP) is increasingly employed in cancer diagnostics, providing additional tools for faster, higher-quality, more accurate diagnosis. The practice of diagnostic pathology has undergone a staggering transformation in which new tools such as digital imaging, advanced artificial intelligence (AI) algorithms, and computer-aided diagnostic techniques are used to assist, augment, and empower computational histopathology and AI-enabled diagnostics, paving the way for advances in precision medicine in cancer. Automated whole slide imaging (WSI) scanners now render diagnostic-quality, high-resolution images of entire glass slides, and combining these images with innovative digital pathology tools makes it possible to integrate imaging into all aspects of pathology reporting, including anatomical, clinical, and molecular pathology. The recent FDA approvals of WSI scanners for primary diagnosis, as well as the approval of a prostate AI algorithm, have paved the way for incorporating this exciting technology into primary diagnosis. AI tools can provide a unique platform for innovations and advances in anatomical and clinical pathology workflows. In this review, we describe the milestones and landmark trials in the use of AI in clinical pathology, with emphasis on future directions.
Affiliation(s)
- Saba Shafi
- Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH, 43210, USA
- Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH, 43210, USA
5.
Xie W, Reder NP, Koyuncu C, Leo P, Hawley S, Huang H, Mao C, Postupna N, Kang S, Serafin R, Gao G, Han Q, Bishop KW, Barner LA, Fu P, Wright JL, Keene CD, Vaughan JC, Janowczyk A, Glaser AK, Madabhushi A, True LD, Liu JTC. Prostate Cancer Risk Stratification via Nondestructive 3D Pathology with Deep Learning-Assisted Gland Analysis. Cancer Res 2022; 82:334-345. PMID: 34853071; PMCID: PMC8803395; DOI: 10.1158/0008-5472.can-21-2843.
Abstract
Prostate cancer treatment planning is largely dependent upon examination of core-needle biopsies. The microscopic architecture of the prostate glands forms the basis for prognostic grading by pathologists. Interpretation of these convoluted three-dimensional (3D) glandular structures via visual inspection of a limited number of two-dimensional (2D) histology sections is often unreliable, which contributes to the under- and overtreatment of patients. To improve risk assessment and treatment decisions, we have developed a workflow for nondestructive 3D pathology and computational analysis of whole prostate biopsies labeled with a rapid and inexpensive fluorescent analog of standard hematoxylin and eosin (H&E) staining. This analysis is based on interpretable glandular features and is facilitated by the development of image translation-assisted segmentation in 3D (ITAS3D). ITAS3D is a generalizable deep learning-based strategy that enables tissue microstructures to be volumetrically segmented in an annotation-free and objective (biomarker-based) manner without requiring immunolabeling. As a preliminary demonstration of the translational value of a computational 3D versus a computational 2D pathology approach, we imaged 300 ex vivo biopsies extracted from 50 archived radical prostatectomy specimens, of which 118 contained cancer. The 3D glandular features in cancer biopsies were superior to corresponding 2D features for risk stratification of patients with low- to intermediate-risk prostate cancer based on their clinical biochemical recurrence outcomes. The results of this study support the use of computational 3D pathology for guiding the clinical management of prostate cancer. SIGNIFICANCE: An end-to-end pipeline for deep learning-assisted computational 3D histology analysis of whole prostate biopsies shows that nondestructive 3D pathology has the potential to enable superior prognostic stratification of patients with prostate cancer.
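The "interpretable glandular features" mentioned above are volumetric measurements read from the segmented tissue compartments. As a hypothetical example (this particular feature and the toy masks are ours, not necessarily among the paper's feature set), a lumen-to-gland volume fraction follows directly from binary 3D masks with NumPy:

```python
import numpy as np

def lumen_volume_fraction(epith, lumen):
    """A simple interpretable 3D glandular feature: the fraction of
    total gland volume (epithelium + lumen) occupied by lumen,
    computed from binary 3D segmentation masks."""
    epith_v = int(np.count_nonzero(epith))
    lumen_v = int(np.count_nonzero(lumen))
    return lumen_v / (epith_v + lumen_v)

# toy volume: a solid gland with a lumen cavity inside it
epith = np.zeros((10, 10, 10), dtype=bool)
epith[1:9, 1:9, 1:9] = True         # gland region (512 voxels)
lumen = np.zeros_like(epith)
lumen[3:7, 3:7, 3:7] = True         # lumen cavity (64 voxels)
epith &= ~lumen                     # epithelium excludes the lumen
print(lumen_volume_fraction(epith, lumen))  # → 0.125
```

The appeal of such features over a learned black-box score is that each number has a direct histological meaning a pathologist can check.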
Affiliation(s)
- Weisi Xie
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Nicholas P Reder
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Department of Laboratory Medicine & Pathology, University of Washington, Seattle, Washington
- Can Koyuncu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Patrick Leo
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Hongyi Huang
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Chenyi Mao
- Department of Chemistry, University of Washington, Seattle, Washington
- Nadia Postupna
- Department of Laboratory Medicine & Pathology, University of Washington, Seattle, Washington
- Soyoung Kang
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Robert Serafin
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Gan Gao
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Qinghua Han
- Department of Bioengineering, University of Washington, Seattle, Washington
- Kevin W Bishop
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Department of Bioengineering, University of Washington, Seattle, Washington
- Lindsey A Barner
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, Cleveland, Ohio
- Jonathan L Wright
- Department of Urology, University of Washington, Seattle, Washington
- C Dirk Keene
- Department of Laboratory Medicine & Pathology, University of Washington, Seattle, Washington
- Joshua C Vaughan
- Department of Chemistry, University of Washington, Seattle, Washington
- Department of Physiology & Biophysics, University of Washington, Seattle, Washington
- Andrew Janowczyk
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Department of Oncology, Lausanne University Hospital and Lausanne University, Lausanne, Switzerland
- Adam K Glaser
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio
- Lawrence D True
- Department of Laboratory Medicine & Pathology, University of Washington, Seattle, Washington
- Department of Urology, University of Washington, Seattle, Washington
- Jonathan T C Liu
- Department of Mechanical Engineering, University of Washington, Seattle, Washington
- Department of Laboratory Medicine & Pathology, University of Washington, Seattle, Washington
- Department of Bioengineering, University of Washington, Seattle, Washington
6.
Doherty T, McKeever S, Al-Attar N, Murphy T, Aura C, Rahman A, O'Neill A, Finn SP, Kay E, Gallagher WM, Watson RWG, Gowen A, Jackman P. Feature fusion of Raman chemical imaging and digital histopathology using machine learning for prostate cancer detection. Analyst 2021; 146:4195-4211. PMID: 34060548; DOI: 10.1039/d1an00075f.
Abstract
The diagnosis of prostate cancer is challenging due to the heterogeneity of its presentations, leading to overdiagnosis and overtreatment of clinically insignificant disease. Accurate diagnosis can directly benefit a patient's quality of life and prognosis. Towards addressing this issue, we present a learning model for the automatic identification of prostate cancer. While many prostate cancer studies have adopted Raman spectroscopy approaches, none have utilised the combination of Raman Chemical Imaging (RCI) and other imaging modalities. This study uses multimodal images formed from stained Digital Histopathology (DP) and unstained RCI. The approach was developed and tested on a set of 178 clinical samples from 32 patients, containing a range of non-cancerous, Gleason grade 3 (G3) and grade 4 (G4) tissue microarray samples. For each histological sample, there is a pathologist-labelled DP-RCI image pair. The hypothesis tested was whether multimodal image models can outperform single-modality baseline models in terms of diagnostic accuracy. Binary non-cancer/cancer models and the more challenging G3/G4 differentiation were investigated. Regarding G3/G4 classification, the multimodal approach achieved a sensitivity of 73.8% and specificity of 88.1%, while the baseline DP model showed a sensitivity and specificity of 54.1% and 84.7% respectively. The multimodal approach demonstrated a statistically significant 12.7% AUC advantage over the baseline, with a value of 85.8% compared to 73.1%, also outperforming models based solely on RCI and mean and median Raman spectra. Feature fusion of DP and RCI does not improve the more trivial task of tumour identification but does deliver an observed advantage in G3/G4 discrimination. Building on these promising findings, future work could include the acquisition of larger datasets for enhanced model generalization.
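The sensitivity and specificity figures quoted above come from counting prediction outcomes on labelled samples. A minimal pure-Python sketch (the toy labels are ours) makes the bookkeeping explicit:

```python
def sens_spec(pred, truth):
    """Sensitivity and specificity from binary predictions.
    In truth/pred, 1 = positive class (e.g. G4), 0 = negative (e.g. G3)."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 0, 0, 0, 1]   # one miss in each class
se, sp = sens_spec(pred, truth)
print(se, sp)  # → 0.75 0.75
```

Sensitivity and specificity depend on the chosen decision threshold, which is why the study also reports the threshold-free AUC when comparing the fused and baseline models.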
Affiliation(s)
- Trevor Doherty
- Technological University Dublin, School of Computer Science, City Campus, Grangegorman Lower, Dublin 7, Ireland.
7.
Försch S, Klauschen F, Hufnagl P, Roth W. Artificial Intelligence in Pathology. Dtsch Arztebl Int 2021; 118:194-204. PMID: 34024323; DOI: 10.3238/arztebl.m2021.0011.
Abstract
BACKGROUND Increasing digitalization enables the use of artificial intelligence (AI) and machine learning in pathology. However, these technologies have only just begun to be implemented, and no randomized prospective trials have yet shown a benefit of AI-based diagnosis. In this review, we present current concepts, illustrate them with examples from representative publications, and discuss the possibilities and limitations of their use. METHODS This article is based on the results of a search in PubMed for articles published between January 1950 and January 2020 containing the search terms "artificial intelligence," "deep learning," and "digital pathology," as well as the authors' own research findings. RESULTS Current research on AI in pathology focuses on supporting routine diagnosis and on prognostication, particularly for patients with cancer. Initial data indicate that pathologists can arrive at a diagnosis faster and more accurately with the aid of a computer. In a pilot study on the diagnosis of breast cancer, involving 70 patients, sensitivity for the detection of micrometastases rose from 83.3% (by a pathologist alone) to 91.2% (by a pathologist combined with a computer algorithm). The evidence likewise suggests that AI applied to histomorphological properties of cells during microscopy may enable the inference of certain genetic properties, such as mutations in key genes and deoxyribonucleic acid (DNA) methylation profiles. CONCLUSION Initial proof-of-concept studies for AI in pathology are now available. Randomized, prospective studies are now needed so that these early findings can be confirmed or falsified.
Affiliation(s)
- Sebastian Försch
- Institute of Pathology, University Medical Center Mainz, Mainz; Institute of Pathology, Charité Universitätsmedizin Berlin, Berlin
8.
Leo P, Janowczyk A, Elliott R, Janaki N, Bera K, Shiradkar R, Farré X, Fu P, El-Fahmawi A, Shahait M, Kim J, Lee D, Yamoah K, Rebbeck TR, Khani F, Robinson BD, Eklund L, Jambor I, Merisaari H, Ettala O, Taimen P, Aronen HJ, Boström PJ, Tewari A, Magi-Galluzzi C, Klein E, Purysko A, Shih NNC, Feldman M, Gupta S, Lal P, Madabhushi A. Computer extracted gland features from H&E predicts prostate cancer recurrence comparably to a genomic companion diagnostic test: a large multi-site study. NPJ Precis Oncol 2021; 5:35. PMID: 33941830; PMCID: PMC8093226; DOI: 10.1038/s41698-021-00174-3.
Abstract
Existing tools for post-radical prostatectomy (RP) prostate cancer biochemical recurrence (BCR) prognosis rely on human pathologist-derived parameters such as tumor grade, with the resulting inter-reviewer variability. Genomic companion diagnostic tests such as Decipher tend to be tissue destructive, expensive, and not routinely available in most centers. We present a tissue non-destructive method for automated BCR prognosis, termed "Histotyping", that employs computational image analysis of morphologic patterns of prostate tissue from a single, routinely acquired hematoxylin and eosin slide. Patients from two institutions (n = 214) were used to train Histotyping for identifying high-risk patients based on six features of glandular morphology extracted from RP specimens. Histotyping was validated for post-RP BCR prognosis on a separate set of n = 675 patients from five institutions and compared against Decipher on n = 167 patients. Histotyping was prognostic of BCR in the validation set (p < 0.001, univariable hazard ratio [HR] = 2.83, 95% confidence interval [CI]: 2.03-3.93, concordance index [c-index] = 0.68, median years-to-BCR: 1.7). Histotyping was also prognostic in clinically stratified subsets, such as patients with Gleason grade group 3 (HR = 4.09) and negative surgical margins (HR = 3.26). Histotyping was prognostic independent of grade group, margin status, pathological stage, and preoperative prostate-specific antigen (PSA) (multivariable p < 0.001, HR = 2.09, 95% CI: 1.40-3.10, n = 648). The combination of Histotyping, grade group, and preoperative PSA outperformed Decipher (c-index = 0.75 vs. 0.70, n = 167). These results suggest that a prognostic classifier for prostate cancer based on digital images could serve as an alternative or complement to molecular-based companion diagnostic tests.
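Prognostic classifiers like Histotyping are typically summarized with Kaplan-Meier curves of BCR-free survival per risk group, which is where the quoted hazard ratios and median years-to-BCR come from. A compact pure-Python sketch of the Kaplan-Meier estimator (the toy data are ours, not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the event-free survival curve.
    Returns (time, S(t)) step points; events: 1 = BCR observed,
    0 = censored at last follow-up."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if deaths:
            s *= 1.0 - deaths / n_at_risk   # multiply in this step's survival
            curve.append((t, s))
        n_at_risk -= removed                # drop events and censored cases
        i += removed
    return curve

# years to BCR (event = 1) or to last follow-up (event = 0)
times = [1.0, 1.7, 2.0, 3.0, 4.0]
events = [1, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Separation between the high- and low-risk groups' curves is then tested with a log-rank test, giving p-values of the kind reported above.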
Grants
- National Cancer Institute award numbers 1U24CA199374-01, R01CA249992-01A1, R01CA202752-01A1, R01CA208236-01A1, R01CA216579-01A1, R01CA220581-01A1, 1U01CA239055-01, 1U01CA248226-01, and 1U54CA254566-01
- National Heart, Lung and Blood Institute 1R01HL15127701A1
- National Institute for Biomedical Imaging and Bioengineering 1R43EB028736-01
- National Center for Research Resources 1 C06 RR12463-01
- VA Merit Review Award IBX004121A from the United States Department of Veterans Affairs Biomedical Laboratory Research and Development Service
- Office of the Assistant Secretary of Defense for Health Affairs: Breast Cancer Research Program (W81XWH-19-1-0668), Prostate Cancer Research Program (W81XWH-15-1-0558, W81XWH-20-1-0851), Lung Cancer Research Program (W81XWH-18-1-0440, W81XWH-20-1-0595), Peer Reviewed Cancer Research Program (W81XWH-18-1-0404)
- Kidney Precision Medicine Project Glue Grant
- Ohio Third Frontier Technology Validation Fund
- Clinical and Translational Science Collaborative of Cleveland (UL1TR0002548) from the National Center for Advancing Translational Sciences (NCATS) component of the National Institutes of Health and NIH Roadmap for Medical Research
- The Wallace H. Coulter Foundation Program in the Department of Biomedical Engineering at Case Western Reserve University
- Sigrid Jusélius Foundation; The Finnish Cancer Foundation
- Department of Defense Prostate Cancer Disparity Award (W81XWH-19-1-0720)
- National Science Foundation Graduate Research Fellowship Program (CON501692)
Affiliation(s)
- Patrick Leo
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Andrew Janowczyk
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Department of Oncology, Lausanne University Hospital and Lausanne University, Lausanne, Switzerland
- Robin Elliott
- Department of Pathology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Nafiseh Janaki
- Department of Pathology, Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Xavier Farré
- Public Health Agency of Catalonia, Lleida, Catalonia, Spain
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, USA
- Ayah El-Fahmawi
- Department of Urology, Penn Presbyterian Medical Center, Philadelphia, PA, USA
- Mohammed Shahait
- Department of Urology, Penn Presbyterian Medical Center, Philadelphia, PA, USA
- Jessica Kim
- Department of Urology, Penn Presbyterian Medical Center, Philadelphia, PA, USA
- David Lee
- Department of Urology, Penn Presbyterian Medical Center, Philadelphia, PA, USA
- Kosj Yamoah
- Moffitt Cancer Center, Department of Radiation Oncology, University of South Florida, Tampa, FL, USA
- Timothy R Rebbeck
- T.H. Chan School of Public Health and Dana Farber Cancer Institute, Harvard University, Boston, MA, USA
- Francesca Khani
- Departments of Pathology and Laboratory Medicine and Urology, Weill Cornell Medicine, New York, NY, USA
- Brian D Robinson
- Departments of Pathology and Laboratory Medicine and Urology, Weill Cornell Medicine, New York, NY, USA
- Lauri Eklund
- Department of Pathology, University of Turku, Institute of Biomedicine and Turku University Hospital, Turku, Finland
- Ivan Jambor
- Department of Pathology, University of Turku, Institute of Biomedicine and Turku University Hospital, Turku, Finland
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Harri Merisaari
- Department of Pathology, University of Turku, Institute of Biomedicine and Turku University Hospital, Turku, Finland
- Otto Ettala
- Department of Urology, University of Turku, Institute of Biomedicine and Turku University Hospital, Turku, Finland
- Pekka Taimen
- Department of Pathology, University of Turku, Institute of Biomedicine and Turku University Hospital, Turku, Finland
- Hannu J Aronen
- Department of Pathology, University of Turku, Institute of Biomedicine and Turku University Hospital, Turku, Finland
- Turku University Hospital, Medical Imaging Centre of Southwest Finland, Turku, Finland
- Peter J Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Ashutosh Tewari
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Eric Klein
- Cleveland Clinic, Glickman Urological and Kidney Institute, Cleveland, OH, USA
- Andrei Purysko
- Cleveland Clinic, Imaging Institute, Section of Abdominal Imaging, Cleveland, OH, USA
- Natalie Nc Shih
- Department of Pathology, University of Pennsylvania, Philadelphia, PA, USA
- Michael Feldman
- Department of Pathology, University of Pennsylvania, Philadelphia, PA, USA
- Sanjay Gupta
- Department of Urology, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
- Priti Lal
- Department of Pathology, University of Pennsylvania, Philadelphia, PA, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
|
9
|
Machine learning-based image analysis for accelerating the diagnosis of complicated preneoplastic and neoplastic ductal lesions in breast biopsy tissues. Breast Cancer Res Treat 2021; 188:649-659. [PMID: 33934277 DOI: 10.1007/s10549-021-06243-2] [Received: 02/05/2021] [Accepted: 04/21/2021]
Abstract
PURPOSE Diagnosis of breast preneoplastic and neoplastic lesions is difficult due to their similar morphology in breast biopsy specimens. To diagnose these lesions, pathologists perform immunohistochemical analysis and consult with expert breast pathologists. These additional examinations are time-consuming and expensive. Artificial intelligence (AI)-based image analysis has recently improved, and may help in ordinal pathological diagnosis. Here, we showed the significance of machine learning-based image analysis of breast preneoplastic and neoplastic lesions for facilitating high-throughput diagnosis. METHODS Images were obtained from normal mammary glands, hyperplastic lesions, preneoplastic lesions and neoplastic lesions, such as usual ductal hyperplasia (UDH), columnar cell lesion (CCL), ductal carcinoma in situ (DCIS), and DCIS with comedo necrosis (comedo DCIS) in breast biopsy specimens. An original enhanced convolutional neural network (CNN) system was used for analyzing the pathological images. RESULTS The AI-based image analysis provided the following area under the curve (AUC) values: normal lesion versus DCIS, 0.9902; DCIS versus comedo DCIS, 0.9942; normal lesion versus CCL, 0.9786; and UDH versus DCIS, 1.000. Multiple comparison analysis showed precision and recall scores similar to those of single comparison analysis. Based on the gradient-weighted class activation mapping (Grad-CAM) used to visualize the important regions reflecting the result of CNN analysis, the ratio of stromal tissue in the whole weighted area was significantly higher in UDH and CCL than that in DCIS. CONCLUSIONS These analyses may provide a more accurate and rapid pathological diagnosis of patients. Moreover, Grad-CAM identifies uncharted important histological characteristics for newer pathological findings and targets of research for understanding diseases.
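The pairwise comparisons above are scored with the standard area under the ROC curve. As a minimal reminder of the metric (synthetic labels and scores, not the authors' data or code):

```python
# Minimal AUC illustration: how well classifier scores rank, e.g.,
# DCIS (1) patches above UDH (0) patches. All values are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])                   # ground-truth lesion class
scores = np.array([0.1, 0.2, 0.3, 0.6, 0.5, 0.7, 0.8, 0.9])   # model probabilities
auc = roc_auc_score(y_true, scores)
print(round(auc, 4))
```

AUC equals the probability that a randomly chosen positive is scored above a randomly chosen negative; here 15 of the 16 positive-negative pairs are ordered correctly, giving 0.9375.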
|
10
|
Abstract
PURPOSE OF REVIEW Pathomics, the fusion of digitalized pathology and artificial intelligence, is currently changing the landscape of medical pathology and biologic disease classification. In this review, we give an overview of Pathomics and summarize its most relevant applications in urology. RECENT FINDINGS There is a steady rise in the number of studies employing Pathomics, and especially deep learning, in urology. In prostate cancer, several algorithms have been developed for the automatic differentiation between benign and malignant lesions and to discriminate Gleason scores. Furthermore, several applications have been developed for automatic cancer cell detection in urine and for tumor assessment in renal cancer. Despite the explosion in research, Pathomics is not fully ready yet for widespread clinical application. SUMMARY In prostate cancer and other urologic pathologies, Pathomics is avidly being researched, with commercial applications on the near horizon. Pathomics is set to improve the accuracy, speed, reliability, cost-effectiveness and generalizability of pathology, especially in uro-oncology.
|
11
|
Prediction of early recurrence of hepatocellular carcinoma after resection using digital pathology images assessed by machine learning. Mod Pathol 2021; 34:417-425. [PMID: 32948835 PMCID: PMC7817520 DOI: 10.1038/s41379-020-00671-z] [Received: 06/03/2020] [Revised: 08/12/2020] [Accepted: 08/13/2020]
Abstract
Hepatocellular carcinoma (HCC) is a representative primary liver cancer caused by long-term and repetitive liver injury. Surgical resection is generally selected as the radical cure treatment. Because the early recurrence of HCC after resection is associated with low overall survival, the prediction of recurrence after resection is clinically important. However, the pathological characteristics of the early recurrence of HCC have not yet been elucidated. We attempted to predict the early recurrence of HCC after resection based on digital pathologic images of hematoxylin and eosin-stained specimens and machine learning applying a support vector machine (SVM). A total of 158 HCC patients meeting the Milan criteria who underwent surgical resection were included in this study. The patients were categorized into three groups: Group I, patients with HCC recurrence within 1 year after resection (16 for training and 23 for test); Group II, patients with HCC recurrence between 1 and 2 years after resection (22 and 28); and Group III, patients with no HCC recurrence within 4 years after resection (31 and 38). The SVM-based prediction method separated the three groups with 89.9% (80/89) accuracy. Prediction of Group I was consistent for all cases, while Group II was predicted to be Group III in one case, and Group III was predicted to be Group II in 8 cases. Digital pathology combined with machine learning could be used for highly accurate prediction of HCC recurrence after surgical resection, especially that for early recurrence. Currently, in most cases after HCC resection, regular blood tests and diagnostic imaging are used for follow-up observation; however, the use of digital pathology coupled with machine learning offers potential as a method for objective postoperative follow-up observation.
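The three-group SVM workflow described above can be sketched as follows. The feature vectors here are synthetic stand-ins for the paper's histopathological image features, so only the pipeline shape, not the reported 89.9% accuracy, is reproduced:

```python
# Hedged sketch of an SVM separating three recurrence groups
# (Group I / II / III). Feature dimensions, cluster means and the RBF
# kernel choice are illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_group, n_features = 40, 16
# Three well-separated synthetic clusters standing in for the three groups.
X = np.vstack([rng.normal(loc=mu, scale=0.5, size=(n_per_group, n_features))
               for mu in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], n_per_group)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"3-group accuracy: {acc:.3f}")
```

On real slides the hard part is the feature extraction upstream of this step; the classifier itself is standard.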
|
12
|
Lu C, Koyuncu C, Corredor G, Prasanna P, Leo P, Wang X, Janowczyk A, Bera K, Lewis J, Velcheti V, Madabhushi A. Feature-driven local cell graph (FLocK): New computational pathology-based descriptors for prognosis of lung cancer and HPV status of oropharyngeal cancers. Med Image Anal 2020; 68:101903. [PMID: 33352373 DOI: 10.1016/j.media.2020.101903] [Received: 12/31/2019] [Revised: 10/26/2020] [Accepted: 11/09/2020]
Abstract
Local spatial arrangement of nuclei in histopathology images of different cancer subtypes has been shown to have prognostic value. In order to capture localized nuclear architectural information, local cell cluster graph-based measurements have been proposed. However, conventional ways of cell graph construction only utilize nuclear spatial proximity, and do not differentiate between different cell types while constructing the graph. In this paper, we present feature-driven local cell cluster graph (FLocK), a new approach to constructing local cell graphs by simultaneously considering spatial proximity and attributes of the individual nuclei (e.g. shape, size, texture). In addition, we have designed a new set of quantitative graph-derived metrics to be extracted from FLocKs, in turn capturing the interplay between different proximally located clusters of nuclei. We have evaluated the efficacy of FLocK features extracted from H&E stained tissue images in two clinical applications: to classify short-term vs. long-term survival among patients with early-stage non-small cell lung cancer (ES-NSCLC), and also to predict human papillomavirus (HPV) status of oropharyngeal squamous cell carcinoma (OP-SCC). In the classification of long-term vs. short-term survival among patients with ES-NSCLC (training cohort, n = 434), the top 10 discriminative FLocK features related to the variation of FLocK size and intersected FLocK distance were identified, via Minimum Redundancy and Maximum Relevance (MRMR) selection, in 100 runs of 10-fold cross-validation, and in conjunction with a linear discriminant classifier yielded a mean AUC of 0.68 for predicting survival in the training cohort. This is better than other state-of-the-art histomorphometric and deep learning classifiers (cell cluster graphs (AUC = 0.62), global cell graph (AUC = 0.56), nuclear shape (AUC = 0.54), nuclear orientation (AUC = 0.61), AlexNet (AUC = 0.55), ResNet (AUC = 0.56)).
The FLocK-based classifier yielded an AUC of 0.70 in an independent testing cohort (n = 150). The patients identified as "high-risk" had significantly poorer overall survival in the testing cohort, with a hazard ratio (95% confidence interval) of 2.24 (1.24-4.05), p = 0.01144. In the classification of HPV status of OP-SCC, the top three FLocK features pertaining to the portion of intersected FLocKs were used to construct a classifier, which yielded an AUC of 0.80 in the training cohort (n = 50), and an accuracy of 0.78 in an independent testing cohort (n = 35). The combination of FLocK measurements with cell cluster graphs, nuclear orientation, and nuclear shape improved the training AUC to 0.87, 0.91 and 0.85, respectively. Deep learning approaches yielded marginally better performance than the FLocK-based classifier in this application, with AUC = 0.78 for AlexNet, AUC = 0.81 for ResNet, and AUC = 0.76 for the FLocK-based classifier in the testing cohort. However, the combination of two hand-crafted features, FLocK and nuclear orientation, yielded a better performance (AUC = 0.84). FLocK provides a unique and quantitative way to analyze histology images of solid tumors and interrogate tumor morphology from a different perspective than existing histomorphometrics. The source code can be accessed at https://github.com/hacylu/FLocK.
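FLocK's core idea, clustering nuclei on spatial position and nuclear attributes jointly rather than on proximity alone, can be illustrated very loosely as below. DBSCAN is used here as a simple stand-in for the paper's feature-driven formulation, and all coordinates and nuclear areas are synthetic:

```python
# Toy contrast between proximity-only clustering and feature-driven
# clustering: two intermixed nuclear populations (small vs. large nuclei)
# share one neighbourhood, so spatial clustering merges them while
# joint position+attribute clustering separates them.
import numpy as np
from sklearn.cluster import DBSCAN

xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1],
               [0.5, 0.5], [1.5, 0.5], [0.5, 1.5], [1.5, 1.5]], dtype=float)
area = np.array([10, 10, 10, 10, 40, 40, 40, 40], dtype=float)  # nuclear areas

spatial_only = DBSCAN(eps=1.2, min_samples=2).fit_predict(xy)
joint = np.column_stack([xy, area / 10.0])   # crude attribute scaling (assumption)
feature_driven = DBSCAN(eps=1.2, min_samples=2).fit_predict(joint)

n_spatial = len(set(spatial_only)) - (1 if -1 in spatial_only else 0)
n_joint = len(set(feature_driven)) - (1 if -1 in feature_driven else 0)
print(n_spatial, n_joint)
```

From such clusters, FLocK-style descriptors (cluster-size variation, inter-cluster distances) can then be computed per local neighbourhood.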
Affiliation(s)
- Cheng Lu
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Can Koyuncu
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- German Corredor
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Prateek Prasanna
  - Department of Biomedical Informatics, Stony Brook University, New York, NY, USA
- Patrick Leo
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Xiangxue Wang
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Andrew Janowczyk
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
  - Precision Oncology Center, Lausanne University Hospital, Switzerland
- Kaustav Bera
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- James Lewis
  - Department of Pathology, Microbiology, and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Anant Madabhushi
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
  - Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
|
13
|
Bera K, Katz I, Madabhushi A. Reimagining T Staging Through Artificial Intelligence and Machine Learning Image Processing Approaches in Digital Pathology. JCO Clin Cancer Inform 2020; 4:1039-1050. [PMID: 33166198 PMCID: PMC7713520 DOI: 10.1200/cci.20.00110] [Accepted: 09/21/2020]
Abstract
Tumor stage and grade, visually assessed by pathologists from evaluation of pathology images in conjunction with radiographic imaging techniques, have been linked to outcome, progression, and survival for a number of cancers. The gold standard of staging in oncology has been the TNM (tumor-node-metastasis) staging system. Though histopathological grading has shown prognostic significance, it is subjective and limited by interobserver variability even among experienced surgical pathologists. Recently, artificial intelligence (AI) approaches have been applied to pathology images toward diagnostic-, prognostic-, and treatment prediction-related tasks in cancer. AI approaches have the potential to overcome the limitations of conventional TNM staging and tumor grading approaches, providing a direct prognostic prediction of disease outcome independent of tumor stage and grade. Broadly speaking, these AI approaches involve extracting patterns from images that are then compared against previously defined disease signatures. These patterns are typically categorized as either (1) handcrafted, which involve domain-inspired attributes, such as nuclear shape, or (2) deep learning (DL)-based representations, which tend to be more abstract. DL approaches have particularly gained considerable popularity because of the minimal domain knowledge needed for training, mostly only requiring annotated examples corresponding to the categories of interest. In this article, we discuss AI approaches for digital pathology, especially as they relate to disease prognosis, prediction of genomic and molecular alterations in the tumor, and prediction of treatment response in oncology. We also discuss some of the potential challenges with validation, interpretability, and reimbursement that must be addressed before widespread clinical deployment. The article concludes with a brief discussion of potential future opportunities in the field of AI for digital pathology and oncology.
Affiliation(s)
- Kaustav Bera
  - Case Western Reserve University, Department of Biomedical Engineering, Cleveland, OH
  - Maimonides Medical Center, Department of Internal Medicine, Brooklyn, NY
- Ian Katz
  - Southern Sun Pathology, Sydney, Australia, and University of Queensland, Brisbane, Australia
- Anant Madabhushi
  - Case Western Reserve University, Department of Biomedical Engineering, Cleveland, OH
  - Louis Stokes Veterans Affairs Medical Center, Cleveland, OH
|
14
|
Barisoni L, Lafata KJ, Hewitt SM, Madabhushi A, Balis UGJ. Digital pathology and computational image analysis in nephropathology. Nat Rev Nephrol 2020; 16:669-685. [PMID: 32848206 PMCID: PMC7447970 DOI: 10.1038/s41581-020-0321-6] [Accepted: 06/24/2020]
Abstract
The emergence of digital pathology - an image-based environment for the acquisition, management and interpretation of pathology information supported by computational techniques for data extraction and analysis - is changing the pathology ecosystem. In particular, by virtue of our new-found ability to generate and curate digital libraries, the field of machine vision can now be effectively applied to histopathological subject matter by individuals who do not have deep expertise in machine vision techniques. Although these novel approaches have already advanced the detection, classification, and prognostication of diseases in the fields of radiology and oncology, renal pathology is just entering the digital era, with the establishment of consortia and digital pathology repositories for the collection, analysis and integration of pathology data with other domains. The development of machine-learning approaches for the extraction of information from image data allows for tissue interrogation in a way that was not previously possible. The application of these novel tools is placing pathology centre stage in the process of defining new, integrated, biologically and clinically homogeneous disease categories, identifying patients at risk of progression, and shifting current paradigms for the treatment and prevention of kidney diseases.
Affiliation(s)
- Laura Barisoni
  - Department of Pathology, Duke University, Durham, NC, USA
  - Department of Medicine, Division of Nephrology, Duke University, Durham, NC, USA
- Kyle J Lafata
  - Department of Radiology, Duke University, Durham, NC, USA
  - Department of Radiation Oncology, Duke University, Durham, NC, USA
- Stephen M Hewitt
  - Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, NIH, Bethesda, MD, USA
- Anant Madabhushi
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
  - Louis Stokes Veterans Administration Medical Center, Cleveland, OH, USA
|
15
|
Improving prostate cancer classification in H&E tissue micro arrays using Ki67 and P63 histopathology. Comput Biol Med 2020; 127:104053. [PMID: 33126125 DOI: 10.1016/j.compbiomed.2020.104053] [Received: 06/26/2020] [Revised: 10/09/2020] [Accepted: 10/09/2020]
Abstract
Histopathology of Hematoxylin and Eosin (H&E)-stained tissue obtained from biopsy is commonly used in prostate cancer (PCa) diagnosis. Automatic PCa classification of digitized H&E slides has been developed before, but no attempts have been made to classify PCa using additional tissue stains registered to H&E. In this paper, we demonstrate that using H&E, Ki67 and p63-stained (3-stain) tissue improves PCa classification relative to H&E alone. We also show that we can infer PCa-relevant Ki67 and p63 information from the H&E slides alone, and use it to achieve H&E-based PCa classification that is comparable to the 3-stain classification. Reported improvements apply to classifying benign vs. malignant tissue, and low grade (Gleason group 2) vs. high grade (Gleason groups 3,4,5) cancer. Specifically, we conducted four classification tasks using 333 tissue samples extracted from 231 radical prostatectomy patients: regression tree-based classification using either (i) 3-stain features, with a benign vs malignant area under the curve (AUC = 92.9%), or (ii) real H&E features and H&E features learned from Ki67 and p63 stains (AUC = 92.4%), as well as deep learning classification using either (iii) real 3-stain tissue patches (AUC = 94.3%) and (iv) real H&E patches and generated Ki67 and p63 patches (AUC = 93.0%) using a deep convolutional generative adversarial network. Classification performance was assessed with Monte Carlo cross validation and quantified in terms of the Area Under the Curve, Brier score, sensitivity, and specificity. Our results are interpretable and indicate that the standard H&E classification could be improved by mimicking other stain types.
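The abstract's central comparison, that adding stain-derived information to H&E features improves classification, can be sketched schematically. The features, effect sizes, and the logistic model below are illustrative assumptions, not the paper's regression-tree or GAN pipeline:

```python
# Schematic only: a weakly informative "H&E" feature vs. the same feature
# augmented with a strongly informative "Ki67-like" feature. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
y = rng.integers(0, 2, n)                      # benign (0) vs malignant (1)
f_he = 0.5 * y + rng.normal(size=n)            # weak H&E-derived feature
f_ki67 = 2.0 * y + 0.5 * rng.normal(size=n)    # strong stain-derived feature

def auc_of(X):
    """Fit a simple classifier and score held-out AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                              random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

auc_he = auc_of(f_he.reshape(-1, 1))
auc_3stain = auc_of(np.column_stack([f_he, f_ki67]))
print(f"H&E alone: {auc_he:.2f}  H&E + stain feature: {auc_3stain:.2f}")
```

The paper's more interesting result is that the extra stain information can be inferred from H&E itself; this sketch only shows why having it helps.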
|
16
|
Yan C, Nakane K, Wang X, Fu Y, Lu H, Fan X, Feldman MD, Madabhushi A, Xu J. Automated Gleason grading on prostate biopsy slides by statistical representations of homology profile. Comput Methods Programs Biomed 2020; 194:105528. [PMID: 32470903 PMCID: PMC8153074 DOI: 10.1016/j.cmpb.2020.105528] [Received: 09/23/2019] [Revised: 04/13/2020] [Accepted: 04/30/2020]
Abstract
BACKGROUND AND OBJECTIVE The Gleason grading system is currently the clinical gold standard for determining prostate cancer aggressiveness. Prostate cancer is typically classified into one of 5 different categories, with 1 representing the most indolent disease and 5 reflecting the most aggressive disease. Grades 3 and 4 are the most common patterns and the most difficult to discriminate in clinical practice. Even though the degree of gland differentiation is the strongest determinant of Gleason grade, manual grading is subjective and is hampered by substantial inter-reader disagreement, especially with regard to intermediate grade groups. METHODS To capture the topological characteristics and the degree of connectivity between nuclei around the gland, the concept of the Homology Profile (HP) for prostate cancer grading is presented in this paper. HP is an algebraic tool whereby certain algebraic invariants are computed based on the structure of a topological space. We utilized the Statistical Representation of Homology Profile (SRHP) features to quantify the extent of glandular differentiation. The quantitative characteristics which represent the image patch are fed into a supervised classifier model for discrimination of grade patterns 3 and 4. RESULTS On the basis of the novel homology profile, we evaluated 43 digitized images of prostate biopsy slides annotated for regions corresponding to Grades 3 and 4. The quantitative patch-level evaluation results showed that our approach achieved an Area Under the Curve (AUC) of 0.96 and an accuracy of 0.89 in terms of discriminating Grade 3 and 4 patches. Our approach was found to be superior to comparative methods, including handcrafted cellular features, the Stacked Sparse Autoencoder (SSAE) algorithm and an end-to-end supervised learning method (DLGg). Also, slide-level quantitative and qualitative evaluation results reflect the ability of our approach to discriminate Gleason Grade 3 from 4 patterns on H&E tissue images.
CONCLUSIONS We presented a novel Statistical Representation of Homology Profile (SRHP) approach for automated Gleason grading on prostate biopsy slides. The most discriminating topological descriptions of cancerous regions for grade 3 and 4 in prostate cancer were identified. Moreover, these characteristics of homology profile are interpretable, visually meaningful and highly consistent with the rubric employed by pathologists for the task of Gleason grading.
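The homology profile rests on topological invariants (Betti numbers) of binarized tissue. A toy illustration of Betti-0 (connected components) and Betti-1 (holes, loosely analogous to gland lumina) on a synthetic binary image, not the paper's SRHP pipeline:

```python
# Betti numbers of a synthetic "gland": a filled square with a punched
# hole (an annulus) has one component (b0 = 1) and one hole (b1 = 1).
import numpy as np
from scipy import ndimage

img = np.zeros((20, 20), dtype=bool)
img[4:16, 4:16] = True
img[8:12, 8:12] = False          # punch a lumen-like hole

_, b0 = ndimage.label(img)       # Betti-0: foreground components
_, bg = ndimage.label(~img)      # background components
b1 = bg - 1                      # Betti-1: holes = background comps minus the outside
print(b0, b1)
```

Well-formed glands tend to enclose lumina (non-zero Betti-1), while poorly differentiated tissue loses that ring structure, which is the intuition the homology profile formalizes.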
Affiliation(s)
- Chaoyang Yan
  - School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China
  - Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Kazuaki Nakane
  - Department of Molecular Pathology, Osaka University Graduate School of Medicine, Division of Health Science, Osaka 565-0871, Japan
- Xiangxue Wang
  - Department of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA
- Yao Fu
  - Department of Pathology, the affiliated Drum Tower Hospital, Nanjing University Medical School, 210008, China
- Haoda Lu
  - School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China
  - Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Xiangshan Fan
  - Department of Pathology, the affiliated Drum Tower Hospital, Nanjing University Medical School, 210008, China
- Michael D Feldman
  - Division of Surgical Pathology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Anant Madabhushi
  - Department of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA
  - Louis Stokes Cleveland Veterans Medical Center, Cleveland, OH 44106
- Jun Xu
  - School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China
  - Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
|
17
|
Computer Extracted Features from Initial H&E Tissue Biopsies Predict Disease Progression for Prostate Cancer Patients on Active Surveillance. Cancers (Basel) 2020; 12:2708. [PMID: 32967377 PMCID: PMC7563653 DOI: 10.3390/cancers12092708] [Received: 07/31/2020] [Revised: 09/13/2020] [Accepted: 09/14/2020]
Abstract
In this work, we assessed the ability of computerized features of nuclear morphology from diagnostic biopsy images to predict prostate cancer (CaP) progression in active surveillance (AS) patients. Improved risk characterization of AS patients could reduce over-testing of low-risk patients while directing high-risk patients to therapy. A total of 191 (125 progressors, 66 non-progressors) AS patients from a single site were identified using The Johns Hopkins University's (JHU) AS-eligibility criteria. Progression was determined by pathologists at JHU. 30 progressors and 30 non-progressors were randomly selected to create the training cohort D1 (n = 60). The remaining patients comprised the validation cohort D2 (n = 131). Digitized Hematoxylin & Eosin (H&E) biopsies were annotated by a pathologist for CaP regions. Nuclei within the cancer regions were segmented using a watershed method and 216 nuclear features describing position, shape, orientation, and clustering were extracted. Six features associated with disease progression were identified using D1 and then used to train a machine learning classifier. The classifier was validated on D2. The classifier was further compared on a subset of D2 (n = 47) against pro-PSA, an isoform of prostate specific antigen (PSA) more linked with CaP, in predicting progression. Performance was evaluated with area under the curve (AUC). A combination of nuclear spatial arrangement, shape, and disorder features were associated with progression. The classifier using these features yielded an AUC of 0.75 in D2. On the 47 patient subset with pro-PSA measurements, the classifier yielded an AUC of 0.79 compared to an AUC of 0.42 for pro-PSA. Nuclear morphometric features from digitized H&E biopsies predicted progression in AS patients. This may be useful for identifying AS-eligible patients who could benefit from immediate curative therapy. However, additional multi-site validation is needed.
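After nuclei are segmented (the paper uses a watershed method), per-nucleus descriptors of position, shape and clustering are computed from the labeled mask. A minimal sketch of that step with scipy, deriving nuclear areas and a nearest-neighbour spacing ("spatial disorder") statistic as stand-ins for the paper's 216-feature set; the mask and the disorder definition are illustrative assumptions:

```python
# Toy nuclear morphometry: label a binary nuclei mask, then compute
# per-nucleus area, centroid, and a simple spatial-disorder statistic.
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import pdist, squareform

mask = np.zeros((30, 30), dtype=bool)
for r, c in [(5, 5), (5, 20), (20, 8), (22, 22)]:   # four synthetic "nuclei"
    mask[r - 2:r + 3, c - 2:c + 3] = True

labeled, n = ndimage.label(mask)
idx = np.arange(1, n + 1)
areas = ndimage.sum(mask, labeled, idx)              # shape feature: area
centroids = np.array(ndimage.center_of_mass(mask, labeled, idx))

d = squareform(pdist(centroids))
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)                   # nearest-neighbour distance per nucleus
disorder = nn.std() / nn.mean()      # crude arrangement/disorder statistic
print(n, disorder)
```

Feature vectors like this, computed over annotated cancer regions, are what the downstream progression classifier consumes.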
|
18
|
Ji MY, Yuan L, Lu SM, Gao MT, Zeng Z, Zhan N, Ding YJ, Liu ZR, Huang PX, Lu C, Dong WG. Glandular orientation and shape determined by computational pathology could identify aggressive tumor for early colon carcinoma: a triple-center study. J Transl Med 2020; 18:129. [PMID: 32178690 PMCID: PMC7077008 DOI: 10.1186/s12967-020-02297-w] [Received: 11/19/2019] [Accepted: 03/11/2020]
Abstract
BACKGROUND Identifying which early-stage colon adenocarcinoma (ECA) patients have lower-risk vs. higher-risk cancer could improve disease prognosis. Our study aimed to explore whether glandular morphological features determined by computational pathology could identify high-risk cancer in ECA from digitized H&E images. METHODS A total of 532 ECA patients from 2 independent data centers, as well as 113 from The Cancer Genome Atlas (TCGA), were retrospectively enrolled in this study. Four tissue microarrays (TMAs) were constructed across ECA hematoxylin and eosin (H&E) stained slides. 797 quantitative glandular morphometric features were extracted, and the 5 most prognostic features were identified using minimum redundancy maximum relevance to construct an image classifier. The image classifier was evaluated on D2/D3 = 223, D4 = 46, D5 = 113. The expression of Ki67 and serum CEA levels were scored on D3 to explore the correlations between the image classifier, immunohistochemistry data, and serum CEA levels. The prognostic value of clinicopathological data and ECAHBC was evaluated by univariate and multivariate analyses. RESULTS The image classifier could predict ECA recurrence (accuracy of 88.1%). The ECA histomorphometric-based image classifier (ECAHBC) was an independent prognostic factor for poorer disease-specific survival (DSS; HR = 9.65, 95% CI 2.15-43.12, P = 0.003). Significant correlations were observed between ECAHBC positivity and positivity of the Ki67 labeling index (Ki67Li) and serum CEA. CONCLUSION Glandular orientation and shape could predict high-risk cancer in ECA and contribute to precision oncology. Computational pathology is emerging as a viable and objective means of identifying predictive biomarkers for cancer patients.
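The feature-selection step named above, minimum redundancy maximum relevance (mRMR), greedily picks features that correlate with the outcome while penalizing correlation with already-selected features. A simplified, correlation-based sketch (the published method typically uses mutual information; the data here are synthetic):

```python
# Simplified correlation-based mRMR: at each step pick the feature with
# the best (relevance - mean redundancy) score. Illustrative only.
import numpy as np

def mrmr(X, y, k):
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    selected = [int(rel.argmax())]                       # most relevant first
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])           # redundancy penalty
            if rel[j] - red > best_score:
                best, best_score = j, rel[j] - red
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200).astype(float)
f0 = y + 0.05 * rng.normal(size=200)      # strongly prognostic feature
f1 = f0 + 0.05 * rng.normal(size=200)     # redundant copy of f0
f2 = rng.normal(size=200)                 # uninformative noise
picks = mrmr(np.column_stack([f0, f1, f2]), y, 2)
print(picks)
```

Note the redundant copy `f1` is penalized even though it is nearly as relevant as `f0`, which is exactly why mRMR yields compact panels like the 5-of-797 features used here.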
Affiliation(s)
- Meng-Yao Ji
  - Department of Gastroenterology, Wuhan University Renmin Hospital, Wuhan, Hubei, China
  - Key Laboratory of Hubei Province for Digestive System Disease, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Lei Yuan
  - Department of Information Center, Wuhan University Renmin Hospital, Wuhan, Hubei, China
  - Key Laboratory of Hubei Province for Digestive System Disease, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Shi-Min Lu
  - Department of Gastroenterology, Wuhan University Renmin Hospital, Wuhan, Hubei, China
  - Key Laboratory of Hubei Province for Digestive System Disease, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Meng-Ting Gao
  - Department of Information Center, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Zhi Zeng
  - Department of Pathology, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Na Zhan
  - Department of Pathology, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Yi-Juan Ding
  - Department of Gastroenterology, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Zheng-Ru Liu
  - Department of Gastroenterology, Wuhan University Renmin Hospital, Wuhan, Hubei, China
- Ping-Xiao Huang
  - Department of Gastroenterology, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cheng Lu
  - College of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi, China
- Wei-Guo Dong
  - Department of Gastroenterology, Wuhan University Renmin Hospital, Wuhan, Hubei, China
|
19
|
Bera K, Schalper KA, Rimm DL, Velcheti V, Madabhushi A. Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology. Nat Rev Clin Oncol 2019; 16:703-715. [PMID: 31399699 PMCID: PMC6880861 DOI: 10.1038/s41571-019-0252-y] [Accepted: 07/04/2019]
Abstract
In the past decade, advances in precision oncology have resulted in an increased demand for predictive assays that enable the selection and stratification of patients for treatment. The enormous divergence of signalling and transcriptional networks mediating the crosstalk between cancer, stromal and immune cells complicates the development of functionally relevant biomarkers based on a single gene or protein. However, the result of these complex processes can be uniquely captured in the morphometric features of stained tissue specimens. The possibility of digitizing whole-slide images of tissue has led to the advent of artificial intelligence (AI) and machine learning tools in digital pathology, which enable mining of subvisual morphometric phenotypes and might, ultimately, improve patient management. In this Perspective, we critically evaluate various AI-based computational approaches for digital pathology, focusing on deep neural networks and 'hand-crafted' feature-based methodologies. We aim to provide a broad framework for incorporating AI and machine learning tools into clinical oncology, with an emphasis on biomarker development. We discuss some of the challenges relating to the use of AI, including the need for well-curated validation datasets, regulatory approval and fair reimbursement strategies. Finally, we present potential future opportunities for precision oncology.
Affiliation(s)
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Kurt A Schalper
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
- David L Rimm
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
- Vamsidhar Velcheti
- Thoracic Medical Oncology, Perlmutter Cancer Center, New York University, New York, NY, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
20
Nir G, Hor S, Karimi D, Fazli L, Skinnider BF, Tavassoli P, Turbin D, Villamil CF, Wang G, Wilson RS, Iczkowski KA, Lucia MS, Black PC, Abolmaesumi P, Goldenberg SL, Salcudean SE. Automatic grading of prostate cancer in digitized histopathology images: Learning from multiple experts. Med Image Anal 2018; 50:167-180. [DOI: 10.1016/j.media.2018.09.005]
21
Leo P, Elliott R, Shih NNC, Gupta S, Feldman M, Madabhushi A. Stable and discriminating features are predictive of cancer presence and Gleason grade in radical prostatectomy specimens: a multi-site study. Sci Rep 2018; 8:14918. [PMID: 30297720] [PMCID: PMC6175913] [DOI: 10.1038/s41598-018-33026-5]
Abstract
Site variation in fixation, staining, and scanning can confound automated tissue based image classifiers for disease characterization. In this study we incorporated stability into four feature selection methods for identifying the most robust and discriminating features for two prostate histopathology classification tasks. We evaluated 242 morphology features from N = 212 prostatectomy specimens from four sites for automated cancer detection and grading. We quantified instability as the rate of significant cross-site feature differences. We mapped feature stability and discriminability using 188 non-cancerous and 210 cancerous regions via 3-fold cross validation, then held one site out, creating independent training and testing sets. In training, one feature set was selected only for discriminability, another for discriminability and stability. We trained a classifier with each feature set, testing on the hold out site. Experiments were repeated with 117 Gleason grade 3 and 112 grade 4 regions. Stability was calculated across non-cancerous regions. Gland shape features yielded the best stability and area under the receiver operating curve (AUC) trade-off while co-occurrence texture features were generally unstable. Our stability-informed method produced a cancer detection AUC of 0.98 ± 0.05 and increased average Gleason grading AUC by 4.38%. Color normalization of the images tended to exacerbate feature instability.
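The stability criterion described in this abstract — instability as the rate of significant cross-site feature differences — can be sketched as below. This is an illustrative sketch on synthetic data, not the authors' code; the Kruskal-Wallis test stands in for whatever cross-site significance test was actually used, and the feature matrices and alpha level are hypothetical.

```python
import numpy as np
from scipy.stats import kruskal

def instability_rate(site_features, alpha=0.05):
    """Fraction of features differing significantly across sites.

    site_features: list of (n_samples_i, n_features) arrays, one per site.
    A feature counts as unstable if a Kruskal-Wallis test across sites
    rejects the null of identical distributions at level alpha.
    """
    n_features = site_features[0].shape[1]
    unstable = 0
    for j in range(n_features):
        groups = [s[:, j] for s in site_features]
        _, p = kruskal(*groups)
        if p < alpha:
            unstable += 1
    return unstable / n_features

rng = np.random.default_rng(0)
# Three hypothetical sites: feature 0 is shifted at site 2 (unstable),
# feature 1 is identically distributed at every site (stable).
sites = [rng.normal([0.0, 1.0], 1.0, size=(50, 2)) for _ in range(2)]
sites.append(rng.normal([3.0, 1.0], 1.0, size=(50, 2)))
print(instability_rate(sites))
```

A stability-informed selector would then keep only features whose instability and discriminability jointly meet a threshold.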
Affiliation(s)
- Patrick Leo
- Case Western Reserve University, Dept. of Biomedical Engineering, Cleveland, OH, 44106, United States
- Robin Elliott
- Case Western Reserve University, Dept. of Pathology, Cleveland, OH, 44106, United States
- Natalie N C Shih
- University of Pennsylvania, Dept. of Pathology, Philadelphia, PA, 19104, United States
- Sanjay Gupta
- Case Western Reserve University, Dept. of Urology, Cleveland, OH, 44106, United States
- Michael Feldman
- University of Pennsylvania, Dept. of Pathology, Philadelphia, PA, 19104, United States
- Anant Madabhushi
- Case Western Reserve University, Dept. of Biomedical Engineering, Cleveland, OH, 44106, United States
22
Penzias G, Singanamalli A, Elliott R, Gollamudi J, Shih N, Feldman M, Stricker PD, Delprado W, Tiwari S, Böhm M, Haynes AM, Ponsky L, Fu P, Tiwari P, Viswanath S, Madabhushi A. Identifying the morphologic basis for radiomic features in distinguishing different Gleason grades of prostate cancer on MRI: Preliminary findings. PLoS One 2018; 13:e0200730. [PMID: 30169514] [PMCID: PMC6118356] [DOI: 10.1371/journal.pone.0200730]
Abstract
Translation of radiomics into the clinic may require a more comprehensive understanding of the underlying morphologic tissue characteristics they reflect. In the context of prostate cancer (PCa), some studies have correlated gross histological measurements of gland lumen, epithelium, and nuclei with disease appearance on MRI. Quantitative histomorphometry (QH), like radiomics for radiologic images, is the computer based extraction of features for describing tumor morphology on digitized tissue images. In this work, we attempt to establish the histomorphometric basis for radiomic features for prostate cancer by (1) identifying the radiomic features from T2w MRI most discriminating of low vs. intermediate/high Gleason score, (2) identifying QH features correlated with the most discriminating radiomic features previously identified, and (3) evaluating the discriminative ability of QH features found to be correlated with spatially co-localized radiomic features. On a cohort of 36 patients (23 for training, 13 for validation), Gabor texture features were identified as being most predictive of Gleason grade on MRI (AUC of 0.69) and gland lumen shape features were identified as the most predictive QH features (AUC = 0.75). Our results suggest that the PCa grade discriminability of Gabor features is a consequence of variations in gland shape and morphology at the tissue level.
Affiliation(s)
- Gregory Penzias
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America
- Asha Singanamalli
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America
- Robin Elliott
- University Hospitals, Cleveland, OH, United States of America
- Jay Gollamudi
- University Hospitals, Cleveland, OH, United States of America
- Natalie Shih
- Department of Pathology, University of Pennsylvania, Philadelphia, PA, United States of America
- Michael Feldman
- Department of Pathology, University of Pennsylvania, Philadelphia, PA, United States of America
- Warick Delprado
- Douglass Hanly Moir Pathology, Macquarie Park, NSW, Australia
- Sarita Tiwari
- Garvan Institute of Medical Research/The Kinghorn Cancer Centre, Darlinghurst, NSW, Australia
- Maret Böhm
- Garvan Institute of Medical Research/The Kinghorn Cancer Centre, Darlinghurst, NSW, Australia
- Anne-Maree Haynes
- Garvan Institute of Medical Research/The Kinghorn Cancer Centre, Darlinghurst, NSW, Australia
- Lee Ponsky
- University Hospitals, Cleveland, OH, United States of America
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, United States of America
- Pallavi Tiwari
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America
- Satish Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America
23
Charmsaz S, Prencipe M, Kiely M, Pidgeon GP, Collins DM. Innovative Technologies Changing Cancer Treatment. Cancers (Basel) 2018; 10:208. [PMID: 29921753] [PMCID: PMC6025540] [DOI: 10.3390/cancers10060208]
Abstract
Conventional therapies for cancer such as chemotherapy and radiotherapy remain a mainstay in treatment, but in many cases a targeted approach is lacking, and patients can be vulnerable to drug resistance. In recent years, novel concepts have been emerging to improve the traditional therapeutic options in cancers with poor survival outcomes. New therapeutic strategies involving areas like energy metabolism and extracellular vesicles along with advances in immunotherapy and nanotechnology are driving the next generation of cancer treatments. The development of fields such as theranostics in nanomedicine is also opening new doors for targeted drug delivery and nano-imaging. Here we discuss the use of innovative technologies presented at the Irish Association for Cancer Research (IACR) Annual Meeting, highlighting examples of where new approaches may lead to promising new treatment options for a range of cancer types.
Affiliation(s)
- Sara Charmsaz
- RCSI Surgery, Royal College of Surgeons in Ireland, 31A York Street, Dublin 2, Ireland
- Maria Prencipe
- School of Biomolecular and Biomedical Research, UCD Conway Institute of Biomolecular and Biomedical Research, University College Dublin, Belfield, Dublin 4, Ireland
- Maeve Kiely
- Graduate Entry Medical School, University of Limerick, Limerick, Ireland
- Graham P Pidgeon
- Trinity Translational Medicine Institute (TTMI), St. James's Hospital and Trinity College Dublin, Dublin 2, Ireland
- Denis M Collins
- Cancer Biotherapeutics, National Institute for Cellular Biotechnology, Dublin City University, Glasnevin, Dublin 9, Ireland
24
Xu J, Lan Y, Yu F, Zhu S, Ran J, Zhu J, Zhang H, Li L, Cheng S, Xiao Y, Li X. Transcriptome analysis reveals a long non-coding RNA signature to improve biochemical recurrence prediction in prostate cancer. Oncotarget 2018; 9:24936-24949. [PMID: 29861844] [PMCID: PMC5982764] [DOI: 10.18632/oncotarget.25048]
Abstract
Despite highly successful treatments for localized prostate cancer (PCa), prognostic biomarkers are needed to improve patient management and prognosis. Accumulating evidence suggests that long noncoding RNAs (lncRNAs) are key regulators with biological and clinical significance. By transcriptome analysis, we identified a set of consistently dysregulated lncRNAs in PCa across different datasets and revealed an eight-lncRNA signature that significantly associated with the biochemical recurrence (BCR)-free survival. Based on the signature, patients could be classified into high- and low-risk groups with significantly different survival (HR = 2.19; 95% CI = 1.67-2.88; P < 0.0001). Validations in the validation cohorts and another independent cohort confirmed its prognostic value for recurrence prediction. Multivariable analysis showed that the signature was independent of common clinicopathological features and stratified analysis further revealed its role in elevating risk stratification of current prognostic models. Additionally, the eight-lncRNA signature was able to improve on the CAPRA-S score for the prediction of BCR as well as to reflect the metastatic potential of PCa. Functional characterization suggested that these lncRNAs which showed PCa-specific expression patterns may involve in critical processes in tumorigenesis. Overall, our results demonstrated potential application of lncRNAs as novel independent biomarkers. The eight-lncRNA signature may have clinical potential for facilitating further stratification of more aggressive patients who would benefit from adjuvant therapy.
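A multi-gene signature like the eight-lncRNA panel above is typically applied as a Cox-style linear risk score, with patients dichotomized at the median score into high- and low-risk groups. The sketch below shows that generic mechanic only; the coefficients and expression matrix are made up, not the authors' fitted model.

```python
import numpy as np

def risk_groups(expression, coefficients):
    """Linear risk score: dot product of per-gene expression with Cox
    coefficients; patients above the median score are called high risk."""
    scores = expression @ coefficients
    labels = np.where(scores > np.median(scores), "high", "low")
    return scores, labels

rng = np.random.default_rng(1)
coef = np.array([0.8, -0.5, 0.3, 0.6, -0.2, 0.4, 0.1, -0.7])  # hypothetical
expr = rng.normal(size=(10, 8))  # 10 patients x 8 lncRNAs (synthetic)
scores, groups = risk_groups(expr, coef)
print(groups)
```

The resulting groups would then be compared with a log-rank test or a Cox model on BCR-free survival, as in the abstract.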
Affiliation(s)
- Jinyuan Xu
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Yujia Lan
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Fulong Yu
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Shiwei Zhu
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Jianrong Ran
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Jiali Zhu
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Hongyi Zhang
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Lili Li
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Shujun Cheng
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China; State Key Laboratory of Molecular Oncology, Department of Etiology and Carcinogenesis, Cancer Institute and Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100021, China
- Yun Xiao
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
- Xia Li
- College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, Heilongjiang 150086, China
25
Li J, Sarma KV, Chung Ho K, Gertych A, Knudsen BS, Arnold CW. A Multi-scale U-Net for Semantic Segmentation of Histological Images from Radical Prostatectomies. AMIA Annu Symp Proc 2018; 2017:1140-1148. [PMID: 29854182] [PMCID: PMC5977596]
Abstract
Gleason grading of histological images is important in risk assessment and treatment planning for prostate cancer patients. Much research has been done in classifying small homogeneous cancer regions within histological images. However, semi-supervised methods published to date depend on pre-selected regions and cannot be easily extended to an image of heterogeneous tissue composition. In this paper, we propose a multi-scale U-Net model to classify images at the pixel-level using 224 histological image tiles from radical prostatectomies of 20 patients. Our model was evaluated by a patient-based 10-fold cross validation, and achieved a mean Jaccard index of 65.8% across 4 classes (stroma, Gleason 3, Gleason 4 and benign glands), and 75.5% for 3 classes (stroma, benign glands, prostate cancer), outperforming other methods.
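The evaluation metric quoted above, the mean Jaccard index across classes, is the per-class intersection over union averaged over the classes present. A minimal NumPy version on toy label maps (the arrays are synthetic, not from the paper's data):

```python
import numpy as np

def mean_jaccard(pred, truth, n_classes):
    """Mean per-class Jaccard index (intersection over union) between
    two integer label maps; classes absent from both maps are skipped."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both maps
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 4-class label maps (e.g. 0=stroma, 1=Gleason 3, 2=Gleason 4, 3=benign)
truth = np.array([[0, 0, 1, 1],
                  [0, 2, 2, 1],
                  [3, 3, 2, 1]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 2, 1, 1],
                  [3, 3, 2, 2]])
print(mean_jaccard(pred, truth, 4))  # → 0.775
```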
Affiliation(s)
- Jiayun Li
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Computational Integrated Diagnostics, Departments of Radiological Sciences and Pathology and Laboratory Medicine, University of California, Los Angeles, CA, USA
- Karthik V Sarma
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Computational Integrated Diagnostics, Departments of Radiological Sciences and Pathology and Laboratory Medicine, University of California, Los Angeles, CA, USA
- King Chung Ho
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Computational Integrated Diagnostics, Departments of Radiological Sciences and Pathology and Laboratory Medicine, University of California, Los Angeles, CA, USA
- Arkadiusz Gertych
- Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Beatrice S Knudsen
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Corey W Arnold
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Computational Integrated Diagnostics, Departments of Radiological Sciences and Pathology and Laboratory Medicine, University of California, Los Angeles, CA, USA
26
A deep-learning classifier identifies patients with clinical heart failure using whole-slide images of H&E tissue. PLoS One 2018; 13:e0192726. [PMID: 29614076] [PMCID: PMC5882098] [DOI: 10.1371/journal.pone.0192726]
Abstract
Over 26 million people worldwide suffer from heart failure annually. When the cause of heart failure cannot be identified, endomyocardial biopsy (EMB) represents the gold-standard for the evaluation of disease. However, manual EMB interpretation has high inter-rater variability. Deep convolutional neural networks (CNNs) have been successfully applied to detect cancer, diabetic retinopathy, and dermatologic lesions from images. In this study, we develop a CNN classifier to detect clinical heart failure from H&E stained whole-slide images from a total of 209 patients; 104 patients were used for training and the remaining 105 for independent testing. The CNN was able to identify patients with heart failure or severe pathology with a 99% sensitivity and 94% specificity on the test set, outperforming conventional feature-engineering approaches. Importantly, the CNN outperformed two expert pathologists by nearly 20%. Our results suggest that deep learning analytics of EMB can be used to predict cardiac outcome.
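The sensitivity and specificity quoted above are the true-positive rate on diseased cases and the true-negative rate on healthy cases, respectively. A small sketch on hypothetical labels (the toy data is illustrative only, not the study's results):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels (1 = heart failure); the numbers are illustrative only.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # → 0.75 and ~0.833
```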
27
Harder N, Athelogou M, Hessel H, Brieu N, Yigitsoy M, Zimmermann J, Baatz M, Buchner A, Stief CG, Kirchner T, Binnig G, Schmidt G, Huss R. Tissue Phenomics for prognostic biomarker discovery in low- and intermediate-risk prostate cancer. Sci Rep 2018. [PMID: 29535336] [PMCID: PMC5849604] [DOI: 10.1038/s41598-018-22564-7]
Abstract
Tissue Phenomics is the discipline of mining tissue images to identify patterns that are related to clinical outcome providing potential prognostic and predictive value. This involves the discovery process from assay development, image analysis, and data mining to the final interpretation and validation of the findings. Importantly, this process is not linear but allows backward steps and optimization loops over multiple sub-processes. We provide a detailed description of the Tissue Phenomics methodology while exemplifying each step on the application of prostate cancer recurrence prediction. In particular, we automatically identified tissue-based biomarkers having significant prognostic value for low- and intermediate-risk prostate cancer patients (Gleason scores 6–7b) after radical prostatectomy. We found that promising phenes were related to CD8(+) and CD68(+) cells in the microenvironment of cancerous glands in combination with the local micro-vascularization. Recurrence prediction based on the selected phenes yielded accuracies up to 83% thereby clearly outperforming prediction based on the Gleason score. Moreover, we compared different machine learning algorithms to combine the most relevant phenes resulting in increased accuracies of 88% for tumor progression prediction. These findings will be of potential use for future prognostic tests for prostate cancer patients and provide a proof-of-principle of the Tissue Phenomics approach.
Affiliation(s)
- Harald Hessel
- Institute for Pathology, Ludwig-Maximilians-University, Munich, Germany
- Mehmet Yigitsoy
- Definiens AG, Munich, Germany; Carl Zeiss Meditec AG, Munich, Germany
- Alexander Buchner
- Department of Urology, Ludwig-Maximilians-University, Munich, Germany
- Christian G Stief
- Department of Urology, Ludwig-Maximilians-University, Munich, Germany
- Thomas Kirchner
- Institute for Pathology, Ludwig-Maximilians-University, Munich, Germany
28
Bhargava R, Madabhushi A. Emerging Themes in Image Informatics and Molecular Analysis for Digital Pathology. Annu Rev Biomed Eng 2017; 18:387-412. [PMID: 27420575] [DOI: 10.1146/annurev-bioeng-112415-114722]
Abstract
Pathology is essential for research in disease and development, as well as for clinical decision making. For more than 100 years, pathology practice has involved analyzing images of stained, thin tissue sections by a trained human using an optical microscope. Technological advances are now driving major changes in this paradigm toward digital pathology (DP). The digital transformation of pathology goes beyond recording, archiving, and retrieving images, providing new computational tools to inform better decision making for precision medicine. First, we discuss some emerging innovations in both computational image analytics and imaging instrumentation in DP. Second, we discuss molecular contrast in pathology. Molecular DP has traditionally been an extension of pathology with molecularly specific dyes. Label-free, spectroscopic images are rapidly emerging as another important information source, and we describe the benefits and potential of this evolution. Third, we describe multimodal DP, which is enabled by computational algorithms and combines the best characteristics of structural and molecular pathology. Finally, we provide examples of application areas in telepathology, education, and precision medicine. We conclude by discussing challenges and emerging opportunities in this area.
Affiliation(s)
- Rohit Bhargava
- Departments of Bioengineering, Chemical and Biomolecular Engineering, Electrical and Computer Engineering, Mechanical Science and Engineering, and Chemistry, and Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
- Anant Madabhushi
- Center for Computational Imaging and Personalized Diagnostics; Departments of Biomedical Engineering, Urology, Pathology, Radiology, Radiation Oncology, General Medical Sciences, Electrical Engineering, and Computer Science; and Case Comprehensive Cancer Center, Case Western Reserve University, Cleveland, Ohio 44106
29
Discriminative Scale Learning (DiScrn): Applications to Prostate Cancer Detection from MRI and Needle Biopsies. Sci Rep 2017; 7:12375. [PMID: 28959011] [PMCID: PMC5620056] [DOI: 10.1038/s41598-017-12569-z]
Abstract
There has been recent substantial interest in extracting sub-visual features from medical images for improved disease characterization compared to what might be achievable via visual inspection alone. Features such as Haralick and Gabor can provide a multi-scale representation of the original image by extracting measurements across differently sized neighborhoods. While these multi-scale features are effective, on large-scale digital pathological images, the process of extracting these features is computationally expensive. Moreover for different problems, different scales and neighborhood sizes may be more or less important and thus a large number of features extracted might end up being redundant. In this paper, we present a Discriminative Scale learning (DiScrn) approach that attempts to automatically identify the distinctive scales at which features are able to best separate cancerous from non-cancerous regions on both radiologic and digital pathology tissue images. To evaluate the efficacy of our approach, our approach was employed to detect presence and extent of prostate cancer on a total of 60 MRI and digitized histopathology images. Compared to a multi-scale feature analysis approach invoking features across all scales, DiScrn achieved 66% computational efficiency while also achieving comparable or even better classifier performance.
30
Kumar N, Verma R, Sharma S, Bhargava S, Vahadane A, Sethi A. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology. IEEE Trans Med Imaging 2017; 36:1550-1560. [PMID: 28287963] [DOI: 10.1109/tmi.2017.2677499]
Abstract
Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
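Otsu thresholding, named above as a conventional baseline that fails on chromatin-sparse and crowded nuclei, picks the grey level that maximizes between-class variance of the intensity histogram. A minimal NumPy version on a synthetic bimodal image (the data is made up; real pipelines would use a library implementation such as scikit-image's):

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Grey level maximizing between-class variance (Otsu's criterion)."""
    counts, edges = np.histogram(image, bins=n_bins)
    p = counts / counts.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                # cumulative weight of the dark class
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)      # cumulative mean
    mu_t = mu[-1]                    # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

# Bimodal toy "image": dark background around 50, bright nuclei around 200.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
t = otsu_threshold(img)
print(t)  # lands between the two modes
```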
31
Romo-Bucheli D, Janowczyk A, Gilmore H, Romero E, Madabhushi A. A deep learning based strategy for identifying and associating mitotic activity with gene expression derived risk categories in estrogen receptor positive breast cancers. Cytometry A 2017; 91:566-573. [PMID: 28192639] [PMCID: PMC6124660] [DOI: 10.1002/cyto.a.23065]
Abstract
The treatment and management of early stage estrogen receptor positive (ER+) breast cancer is hindered by the difficulty in identifying patients who require adjuvant chemotherapy in contrast to those that will respond to hormonal therapy. To distinguish between the more and less aggressive breast tumors, which is a fundamental criterion for the selection of an appropriate treatment plan, Oncotype DX (ODX) and other gene expression tests are typically employed. While informative, these gene expression tests are expensive, tissue destructive, and require specialized facilities. Bloom-Richardson (BR) grade, the common scheme employed in breast cancer grading, has been shown to be correlated with the Oncotype DX risk score. Unfortunately, studies have also shown that BR grading suffers from notable inter-observer variability. One of the constituent categories in BR grading is the mitotic index. The goal of this study was to develop a deep learning (DL) classifier to identify mitotic figures from whole slide images of ER+ breast cancer, the hypothesis being that the number of mitoses identified by the DL classifier would correlate with the corresponding Oncotype DX risk categories. The mitosis detector yielded an average F-score of 0.556 in the AMIDA mitosis dataset using a 6-fold validation setup. For a cohort of 174 whole slide images with early stage ER+ breast cancer for which the corresponding Oncotype DX score was available, the distributions of the number of mitoses identified by the DL classifier were found to be significantly different between the high vs low Oncotype DX risk groups (P < 0.01). Comparisons of other risk groups, using both ODX score and histological grade, were also found to present significantly different automated mitoses distributions. Additionally, a support vector machine classifier trained to separate low/high Oncotype DX risk categories using the mitotic count determined by the DL classifier yielded an 83.19% classification accuracy.
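The F-score reported for the mitosis detector is the harmonic mean of precision and recall over detected objects. A minimal sketch with hypothetical detection counts (chosen only to illustrate how a score near the reported 0.556 arises; they are not the study's actual counts):

```python
def f_score(tp, fp, fn):
    """F1 score: harmonic mean of precision (tp/(tp+fp)) and
    recall (tp/(tp+fn)) over detected objects."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts on one slide: 50 true detections, 30 false alarms,
# 50 missed mitoses.
print(f_score(tp=50, fp=30, fn=50))  # → 0.555...
```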
Affiliation(s)
- David Romo-Bucheli
- Engineering Faculty, Universidad Nacional de Colombia, Bogotá, DC, Colombia
- Andrew Janowczyk
- Biomedical Engineering Department, Case Western Reserve University, Cleveland, Ohio
- Hannah Gilmore
- Department of Pathology, University Hospitals Cleveland Medical Center, Cleveland, Ohio
- Eduardo Romero
- Engineering Faculty, Universidad Nacional de Colombia, Bogotá, DC, Colombia
- Anant Madabhushi
- Biomedical Engineering Department, Case Western Reserve University, Cleveland, Ohio
|
32
|
Tennill TA, Gross ME, Frieboes HB. Automated analysis of co-localized protein expression in histologic sections of prostate cancer. PLoS One 2017; 12:e0178362. [PMID: 28552967 PMCID: PMC5446169 DOI: 10.1371/journal.pone.0178362] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2017] [Accepted: 05/11/2017] [Indexed: 12/13/2022] Open
Abstract
An automated approach based on routinely-processed, whole-slide immunohistochemistry (IHC) was implemented to study co-localized protein expression in tissue samples. Expression of two markers was chosen to represent stromal (CD31) and epithelial (Ki-67) compartments in prostate cancer. IHC was performed on whole-slide sections representing low-, intermediate-, and high-grade disease from 15 patients. The automated workflow was developed using a training set of regions-of-interest in sequential tissue sections. Protein expression was studied on digital representations of IHC images across entire slides representing formalin-fixed, paraffin-embedded blocks. Using the training set, the known association between Ki-67 and Gleason grade was confirmed. CD31 expression was more heterogeneous across samples and remained invariant with grade in this cohort. Interestingly, the Ki-67/CD31 ratio was significantly increased in high-grade (Gleason ≥ 8) versus low/intermediate-grade (Gleason ≤ 7) samples when assessed in both the training set and the whole-tissue block images. Further, the feasibility of the automated approach for processing Tissue Microarray (TMA) samples in high throughput was evaluated. This work establishes an initial framework for automated analysis of co-localized protein expression and distribution in high-resolution digital microscopy images based on standard IHC techniques. Applied to a larger sample population, the approach may help to elucidate the biologic basis for the Gleason grade, which is the strongest single factor distinguishing clinically aggressive from indolent prostate cancer.
Affiliation(s)
- Thomas A. Tennill, Department of Bioengineering, University of Louisville, Louisville, KY, United States of America
- Mitchell E. Gross, Lawrence J. Elliston Institute for Transformational Medicine, University of Southern California, Los Angeles, CA, United States of America
- Hermann B. Frieboes, Department of Bioengineering, University of Louisville, Louisville, KY, United States of America; James Graham Brown Cancer Center, University of Louisville, Louisville, KY, United States of America

33
Corredor G, Whitney J, Arias V, Madabhushi A, Romero E. Training a cell-level classifier for detecting basal-cell carcinoma by combining human visual attention maps with low-level handcrafted features. J Med Imaging (Bellingham) 2017; 4:021105. [PMID: 28382314 PMCID: PMC5363808 DOI: 10.1117/1.jmi.4.2.021105] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2016] [Accepted: 02/22/2017] [Indexed: 12/16/2022] Open
Abstract
Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size [Formula: see text] were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis, while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded a corresponding accuracy and F-measure of 73.47% and 77.97%, respectively.
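One methodological detail above is worth making concrete: patches from the same patient must not appear in both the training and test sets. A minimal stdlib-only sketch of such a patient-wise split, with hypothetical patch and patient identifiers:

```python
import random

# Hypothetical (patch_id, patient_id) pairs; identifiers are illustrative.
patches = [(i, f"P{i % 7}") for i in range(274)]

def patient_wise_split(patches, test_frac=0.35, seed=0):
    """Split patches so that no patient contributes to both sets,
    preventing patient-level leakage between training and testing."""
    patients = sorted({pid for _, pid in patches})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, round(test_frac * len(patients)))
    test_ids = set(patients[:n_test])
    train = [p for p in patches if p[1] not in test_ids]
    test = [p for p in patches if p[1] in test_ids]
    return train, test

train, test = patient_wise_split(patches)
```

Splitting at the patch level instead would let a classifier exploit patient-specific staining and texture, inflating test accuracy.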
Affiliation(s)
- Germán Corredor, Universidad Nacional de Colombia, Computer Imaging and Medical Applications Lab, Department of Medical Imaging, Bogota, Colombia; Case Western Reserve University, Center of Computational Imaging and Personalized Diagnostics, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Jon Whitney, Case Western Reserve University, Center of Computational Imaging and Personalized Diagnostics, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Viviana Arias, Universidad Nacional de Colombia, Patología Molecular Research Group, Department of Pathology, Bogota, Colombia
- Anant Madabhushi, Case Western Reserve University, Center of Computational Imaging and Personalized Diagnostics, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Eduardo Romero, Universidad Nacional de Colombia, Computer Imaging and Medical Applications Lab, Department of Medical Imaging, Bogota, Colombia

34
Wan T, Cao J, Chen J, Qin Z. Automated grading of breast cancer histopathology using cascaded ensemble with combination of multi-level image features. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.084] [Citation(s) in RCA: 72] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
35
Leo P, Lee G, Shih NNC, Elliott R, Feldman MD, Madabhushi A. Evaluating stability of histomorphometric features across scanner and staining variations: prostate cancer diagnosis from whole slide images. J Med Imaging (Bellingham) 2016; 3:047502. [PMID: 27803941 DOI: 10.1117/1.jmi.3.4.047502] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2016] [Accepted: 09/16/2016] [Indexed: 01/04/2023] Open
Abstract
Quantitative histomorphometry (QH) is the process of computerized feature extraction from digitized tissue slide images to predict disease presence, behavior, and outcome. Feature stability between sites may be compromised by laboratory-specific variables including dye batch, slice thickness, and the whole slide scanner used. We present two new measures, the preparation-induced instability score and the latent instability score, to quantify feature instability across and within datasets. In a use case involving prostate cancer, we examined QH features which may detect cancer on whole slide images. Using our method, we found that five feature families (graph, shape, co-occurring gland tensor, sub-graph, and texture) were different between datasets in 19.7% to 48.6% of comparisons, while the values expected without site variation were 4.2% to 4.6%. Color normalizing all images to a template did not reduce instability. Scanning the same 34 slides on three scanners demonstrated that Haralick features were the most substantively affected by scanner variation, being unstable in 62% of comparisons. We found that unstable feature families performed significantly worse in inter-site than intra-site classification. Our results suggest that QH features should be evaluated across sites to assess robustness, and that class discriminability alone should not be the benchmark for digital pathology feature selection.
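The paper's preparation-induced and latent instability scores are defined on cross-site feature comparisons; as a rough stdlib-only illustration of the underlying idea (not the authors' scores), one can flag features whose distributions shift between two sites, using hypothetical feature values:

```python
def mean(xs):
    return sum(xs) / len(xs)

def welch_t(a, b):
    """Two-sample Welch t statistic (stdlib-only; an illustration, not the
    paper's instability measures)."""
    ma, mb = mean(a), mean(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def unstable_fraction(site1, site2, t_cut=2.0):
    """Fraction of features whose values shift between sites beyond t_cut."""
    flags = [abs(welch_t(f1, f2)) > t_cut for f1, f2 in zip(site1, site2)]
    return sum(flags) / len(flags)

# Hypothetical per-feature measurements from two sites: the first feature
# shifts between sites (unstable), the second does not.
site1 = [[1.0, 1.1, 0.9, 1.2], [5.0, 5.1, 4.9, 5.2]]
site2 = [[2.0, 2.1, 1.9, 2.2], [5.1, 5.0, 5.2, 4.9]]
frac = unstable_fraction(site1, site2)
```

Comparing this observed fraction against the ~4–5% expected by chance alone is what lets the paper call a feature family unstable.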
Affiliation(s)
- Patrick Leo, Case Western Reserve University, Department of Biomedical Engineering, 2071 Martin Luther King Jr. Drive, Cleveland, Ohio 44106, United States
- George Lee, Case Western Reserve University, Department of Biomedical Engineering, 2071 Martin Luther King Jr. Drive, Cleveland, Ohio 44106, United States
- Natalie N C Shih, University of Pennsylvania, Department of Pathology, 3400 Spruce Street, Philadelphia, Pennsylvania 19104, United States
- Robin Elliott, Case Western Reserve University, Department of Pathology, 11100 Euclid Avenue, Cleveland, Ohio 44106, United States
- Michael D Feldman, University of Pennsylvania, Department of Pathology, 3400 Spruce Street, Philadelphia, Pennsylvania 19104, United States
- Anant Madabhushi, Case Western Reserve University, Department of Biomedical Engineering, 2071 Martin Luther King Jr. Drive, Cleveland, Ohio 44106, United States

36
Lu C, Xu H, Xu J, Gilmore H, Mandal M, Madabhushi A. Multi-Pass Adaptive Voting for Nuclei Detection in Histopathological Images. Sci Rep 2016; 6:33985. [PMID: 27694950 PMCID: PMC5046183 DOI: 10.1038/srep33985] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Accepted: 09/02/2016] [Indexed: 12/15/2022] Open
Abstract
Nuclei detection is often a critical initial step in the development of computer-aided diagnosis and prognosis schemes for digital pathology images. While a number of nuclei detection methods have been proposed over the last few years, most of these approaches make idealistic assumptions about the staining quality of the tissue. In this paper, we present a new Multi-Pass Adaptive Voting (MPAV) scheme for nuclei detection which is specifically geared towards images with poor-quality staining and noise on account of tissue preparation artifacts. MPAV exploits the symmetry of the nuclear boundary and adaptively selects gradients from edge fragments to vote for potential nucleus locations. MPAV was evaluated on three cohorts with different staining methods: Hematoxylin & Eosin, CD31 & Hematoxylin, and Ki-67, in which most of the nuclei were unevenly and imprecisely stained. Across a total of 47 images and nearly 17,700 manually labeled nuclei serving as the ground truth, MPAV achieved superior performance, with an area under the precision-recall curve (AUC) of 0.73. Additionally, MPAV outperformed three state-of-the-art nuclei detection methods: a single-pass voting method, a multi-pass voting method, and a deep learning based method.
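The central voting idea, stripped of MPAV's multi-pass refinement and adaptive gradient selection, can be sketched as follows: each boundary fragment casts a vote along its inward gradient direction, and peaks in the accumulator mark candidate nucleus centres. The circular synthetic boundary below is purely illustrative:

```python
import math
from collections import Counter

def vote_centers(edge_points, inward_dirs, vote_dist):
    """Each edge fragment votes at a fixed distance along its inward
    gradient direction; the accumulator peak is a candidate centre."""
    acc = Counter()
    for (x, y), (dx, dy) in zip(edge_points, inward_dirs):
        acc[(round(x + vote_dist * dx), round(y + vote_dist * dy))] += 1
    return acc.most_common(1)[0]

# Synthetic circular nucleus boundary of radius 4 centred at (10, 10);
# the inward normal stands in for the image gradient direction.
cx, cy, r = 10, 10, 4
pts, dirs = [], []
for k in range(16):
    a = 2 * math.pi * k / 16
    pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    dirs.append((-math.cos(a), -math.sin(a)))

centre, votes = vote_centers(pts, dirs, vote_dist=r)
```

Because votes from a symmetric boundary converge on one cell, the peak is robust to a few missing or noisy edge fragments, which is the property MPAV exploits for poorly stained nuclei.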
Affiliation(s)
- Cheng Lu, College of Computer Science, Shaanxi Normal University, Xi’an, Shaanxi Province, 710119, China; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
- Hongming Xu, Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, T6G 2V4, Canada
- Jun Xu, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Hannah Gilmore, Department of Pathology-Anatomic, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
- Mrinal Mandal, Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, T6G 2V4, Canada
- Anant Madabhushi, Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106-7207, USA

37
|
Romo-Bucheli D, Janowczyk A, Gilmore H, Romero E, Madabhushi A. Automated Tubule Nuclei Quantification and Correlation with Oncotype DX risk categories in ER+ Breast Cancer Whole Slide Images. Sci Rep 2016; 6:32706. [PMID: 27599752 PMCID: PMC5013328 DOI: 10.1038/srep32706] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2016] [Accepted: 08/15/2016] [Indexed: 12/22/2022] Open
Abstract
Early-stage estrogen receptor positive (ER+) breast cancer (BCa) treatment is based on the presumed aggressiveness and likelihood of cancer recurrence. Oncotype DX (ODX) and other gene expression tests have allowed the more aggressive ER+ BCa requiring adjuvant chemotherapy to be distinguished from the less aggressive cancers benefiting from hormonal therapy alone. However, these tests are expensive, tissue destructive, and require specialized facilities. Interestingly, BCa grade has been shown to correlate with the ODX risk score. Unfortunately, the Bloom-Richardson (BR) grade determined by pathologists can be variable. A constituent category in BR grading is tubule formation. This study aims to develop a deep learning classifier to automatically identify tubule nuclei from whole slide images (WSI) of ER+ BCa, the hypothesis being that the ratio of tubule nuclei to the overall number of nuclei (a tubule formation indicator, TFI) correlates with the corresponding ODX risk categories. This correlation was assessed in 7513 fields extracted from 174 WSI. The results suggest that low ODX/BR cases have a larger TFI than high ODX/BR cases (p < 0.01). The low ODX/BR cases also presented a larger TFI than the rest of the cases (p < 0.05). Finally, the high ODX/BR cases had a significantly smaller TFI than the rest of the cases (p < 0.01).
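The tubule formation indicator itself is a simple ratio; a sketch with hypothetical per-slide counts (the study derives the counts from its deep learning tubule-nucleus detector):

```python
def tubule_formation_indicator(n_tubule_nuclei, n_total_nuclei):
    """TFI: ratio of tubule nuclei to all detected nuclei in a slide."""
    if n_total_nuclei == 0:
        raise ValueError("no nuclei detected")
    return n_tubule_nuclei / n_total_nuclei

# Hypothetical (tubule nuclei, total nuclei) counts per slide, chosen to
# mirror the reported trend: low-risk cases show more tubule formation.
low_risk = [(620, 1500), (540, 1300)]
high_risk = [(130, 1400), (90, 1200)]

tfi_low = [tubule_formation_indicator(t, n) for t, n in low_risk]
tfi_high = [tubule_formation_indicator(t, n) for t, n in high_risk]
```

Using a ratio rather than a raw tubule-nucleus count normalizes for cellularity, so slides of different sizes and densities remain comparable.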
Affiliation(s)
- David Romo-Bucheli, Universidad Nacional de Colombia, Engineering Faculty, Bogotá D.C, Colombia
- Andrew Janowczyk, Case Western Reserve University, Biomedical Engineering Department, Cleveland, OH, USA
- Hannah Gilmore, University Hospitals, School of Medicine, Cleveland, OH, USA
- Eduardo Romero, Universidad Nacional de Colombia, Engineering Faculty, Bogotá D.C, Colombia
- Anant Madabhushi, Case Western Reserve University, Biomedical Engineering Department, Cleveland, OH, USA

38
|
Madabhushi A, Lee G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med Image Anal 2016; 33:170-175. [PMID: 27423409 DOI: 10.1016/j.media.2016.06.037] [Citation(s) in RCA: 458] [Impact Index Per Article: 57.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2016] [Revised: 06/13/2016] [Accepted: 06/30/2016] [Indexed: 02/05/2023]
Abstract
With the rise of whole slide scanner technology, large numbers of tissue slides are being scanned, represented, and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer-assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate for the data density of high-resolution digitized whole slide images. Additionally, there has been substantial recent interest in combining and fusing radiologic imaging and proteomics and genomics based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease-specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification.
We also briefly review some of the state of the art in the fusion of radiology and pathology images, and in combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology.
Affiliation(s)
- Anant Madabhushi, Department of Biomedical Engineering, Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH 44106-7207, United States
- George Lee, Department of Biomedical Engineering, Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH 44106-7207, United States

39
39
|
Lee G, Singanamalli A, Wang H, Feldman MD, Master SR, Shih NNC, Spangler E, Rebbeck T, Tomaszewski JE, Madabhushi A. Supervised multi-view canonical correlation analysis (sMVCCA): integrating histologic and proteomic features for predicting recurrent prostate cancer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:284-297. [PMID: 25203987 DOI: 10.1109/tmi.2014.2355175] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
In this work, we present a new methodology to facilitate prediction of recurrent prostate cancer (CaP) following radical prostatectomy (RP) via the integration of quantitative image features and protein expression in the excised prostate. Creating a fused predictor from high-dimensional data streams is challenging because the classifier must 1) account for the "curse of dimensionality" problem, which hinders classifier performance when the number of features exceeds the number of patient studies, and 2) balance potential mismatches in the number of features across different channels to avoid classifier bias towards channels with more features. Our new data integration methodology, supervised Multi-view Canonical Correlation Analysis (sMVCCA), aims to integrate infinite views of high-dimensional data to provide more amenable data representations for disease classification. Additionally, we demonstrate sMVCCA using Spearman's rank correlation which, unlike Pearson's correlation, can account for nonlinear correlations and outliers. Forty CaP patients with pathological Gleason scores 6-8 were considered for this study. Twenty-one of these men experienced biochemical recurrence (BCR) following RP, while 19 did not. For each patient, 189 quantitative histomorphometric attributes and 650 protein expression levels were extracted from the primary tumor nodule. The fused histomorphometric/proteomic representation via sMVCCA combined with a random forest classifier predicted BCR with a mean AUC of 0.74 and a maximum AUC of 0.9286. We found sMVCCA to perform statistically significantly better (p < 0.05) than comparative state-of-the-art data fusion strategies for predicting BCR. Furthermore, Kaplan-Meier analysis demonstrated improved BCR-free survival prediction for the sMVCCA-fused classifier as compared to histology or proteomic features alone.
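sMVCCA itself is beyond a short sketch, but the Spearman rank correlation it builds on, chosen over Pearson's for robustness to nonlinearity and outliers, is easy to illustrate with a stdlib-only implementation (ties are ignored for brevity):

```python
def ranks(xs):
    """Rank values 1..n (ties not handled; enough for this illustration)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks, so any
    monotone (even nonlinear) relation scores +/-1."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

r = spearman([1, 2, 3, 4, 5], [1, 4, 9, 16, 25])  # nonlinear but monotone
```

On this quadratic relation Pearson's correlation would fall below 1, while the rank-based version still reports a perfect monotone association, which is the property that motivates its use in the fusion step.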