1
Marra A, Morganti S, Pareja F, Campanella G, Bibeau F, Fuchs T, Loda M, Parwani A, Scarpa A, Reis-Filho JS, Curigliano G, Marchiò C, Kather JN. Artificial intelligence entering the pathology arena in oncology: current applications and future perspectives. Ann Oncol 2025:S0923-7534(25)00112-7. PMID: 40307127; DOI: 10.1016/j.annonc.2025.03.006.
Abstract
BACKGROUND Artificial intelligence (AI) is rapidly transforming the fields of pathology and oncology, offering novel opportunities for advancing diagnosis, prognosis, and treatment of cancer. METHODS Through a systematic review-based approach, the representatives from the European Society for Medical Oncology (ESMO) Precision Oncology Working Group (POWG) and international experts identified studies in pathology and oncology that applied AI-based algorithms for tumour diagnosis, molecular biomarker detection, and cancer prognosis assessment. These findings were synthesised to provide a comprehensive overview of current AI applications and future directions in cancer pathology. RESULTS The integration of AI tools in digital pathology is markedly improving the accuracy and efficiency of image analysis, allowing for automated tumour detection and classification, identification of prognostic molecular biomarkers, and prediction of treatment response and patient outcomes. Several barriers for the adoption of AI in clinical workflows, such as data availability, explainability, and regulatory considerations, still persist. There are currently no prognostic or predictive AI-based biomarkers supported by level IA or IB evidence. The ongoing advancements in AI algorithms, particularly foundation models, generalist models and transformer-based deep learning, offer immense promise for the future of cancer research and care. AI is also facilitating the integration of multi-omics data, leading to more precise patient stratification and personalised treatment strategies. CONCLUSIONS The application of AI in pathology is poised to not only enhance the accuracy and efficiency of cancer diagnosis and prognosis but also facilitate the development of personalised treatment strategies. Although barriers to implementation remain, ongoing research and development in this field coupled with addressing ethical and regulatory considerations will likely lead to a future where AI plays an integral role in cancer management and precision medicine. The continued evolution and adoption of AI in pathology and oncology are anticipated to reshape the landscape of cancer care, heralding a new era of precision medicine and improved patient outcomes.
Affiliation(s)
- A Marra: Division of Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy
- S Morganti: Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, USA; Department of Medicine, Harvard Medical School, Boston, USA; Gerstner Center for Cancer Diagnostics, Broad Institute of MIT and Harvard, Boston, USA
- F Pareja: Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, USA
- G Campanella: Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, USA
- F Bibeau: Department of Pathology, University Hospital of Besançon, Besançon, France
- T Fuchs: Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, USA
- M Loda: Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, USA; Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK; Department of Oncologic Pathology, Dana-Farber Cancer Institute and Harvard Medical School, Boston, USA
- A Parwani: Department of Pathology, Wexner Medical Center, Ohio State University, Columbus, USA
- A Scarpa: Department of Diagnostics and Public Health, Section of Pathology, University and Hospital Trust of Verona, Verona, Italy; ARC-Net Research Center, University of Verona, Verona, Italy
- J S Reis-Filho: Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, USA
- G Curigliano: Division of Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- C Marchiò: Candiolo Cancer Institute, FPO IRCCS, Candiolo, Italy; Department of Medical Sciences, University of Turin, Turin, Italy
- J N Kather: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany; Department of Medicine I, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
2
Tanabe P, Schlenk D, Forsgren KL, Pampanin DM. Using digital pathology to standardize and automate histological evaluations of environmental samples. Environ Toxicol Chem 2025; 44:306-317. PMID: 39919237; PMCID: PMC11816309; DOI: 10.1093/etojnl/vgae038.
Abstract
Histological evaluations of tissues are commonly used in environmental monitoring studies to assess the health and fitness status of populations or even whole ecosystems. Although traditional histology can be cost-effective, there is a shortage of proficient histopathologists and results can often be subjective between operators, leading to variance. Digital pathology is a powerful diagnostic tool that has already significantly transformed research in human health but has rarely been applied to environmental studies. Digital analyses of whole slide images introduce possibilities of highly standardized histopathological evaluations, as well as the use of artificial intelligence for novel analyses. Furthermore, incorporation of digital pathology into environmental monitoring studies using standardized bioindicator species or groups such as bivalves and fish can greatly improve the accuracy, reproducibility, and efficiency of the studies. This review aims to introduce readers to digital pathology and how it can be applied to environmental studies. This includes guidelines for sample preparation, potential sources of error, and comparisons to traditional histopathological analyses.
Affiliation(s)
- Philip Tanabe: National Ocean Service, National Centers for Coastal Ocean Science, National Oceanic and Atmospheric Administration, Charleston, SC, United States
- Daniel Schlenk: Department of Environmental Sciences, University of California, Riverside, Riverside, CA, United States
- Kristy L Forsgren: Department of Biological Science, California State University, Fullerton, Fullerton, CA, United States
- Daniela M Pampanin: Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway
3
Matthews GA, McGenity C, Bansal D, Treanor D. Public evidence on AI products for digital pathology. NPJ Digit Med 2024; 7:300. PMID: 39455883; PMCID: PMC11511888; DOI: 10.1038/s41746-024-01294-3.
Abstract
Novel products applying artificial intelligence (AI)-based methods to digital pathology images are touted to have many uses and benefits. However, publicly available information for products can be variable, with few sources of independent evidence. This review aimed to identify public evidence for AI-based products for digital pathology. Key features of products on the European Economic Area/Great Britain (EEA/GB) markets were examined, including their regulatory approval, intended use, and published validation studies. There were 26 AI-based products that met the inclusion criteria and, of these, 24 had received regulatory approval via the self-certification route as General in vitro diagnostic (IVD) medical devices. Only 10 of the products (38%) had peer-reviewed internal validation studies and 11 products (42%) had peer-reviewed external validation studies. To support transparency an online register was developed using identified public evidence ( https://osf.io/gb84r/ ), which we anticipate will provide an accessible resource on novel devices and support decision making.
Affiliation(s)
- Clare McGenity: Leeds Teaching Hospitals NHS Trust, Leeds, UK; University of Leeds, Leeds, UK
- Darren Treanor: Leeds Teaching Hospitals NHS Trust, Leeds, UK; University of Leeds, Leeds, UK; Department of Clinical Pathology & Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
4
Sarkar S, Wu T, Harwood M, Silva AC. A Transfer Learning-Based Framework for Classifying Lymph Node Metastasis in Prostate Cancer Patients. Biomedicines 2024; 12:2345. PMID: 39457657; PMCID: PMC11504638; DOI: 10.3390/biomedicines12102345.
Abstract
Background: Prostate cancer is the second most common new cancer diagnosis in the United States. It is usually slow-growing, and when it is low-grade and confined to the prostate gland, it can be treated either conservatively (through active surveillance) or with surgery. However, if the cancer has spread beyond the prostate, such as to the lymph nodes, then that indicates a more aggressive cancer, and surgery may not be adequate. Methods: The challenge is that it is often difficult for radiologists reading prostate-specific imaging such as magnetic resonance images (MRIs) to differentiate malignant lymph nodes from non-malignant ones. An emerging field is the development of artificial intelligence (AI) models, including machine learning and deep learning, for medical imaging to assist in diagnostic tasks. Earlier research focused on implementing texture algorithms to extract imaging features used in classification models. More recently, researchers began studying the use of deep learning for both stand-alone feature extraction and end-to-end classification tasks. In order to tackle the challenges inherent in small datasets, this study was designed as a scalable hybrid framework utilizing pre-trained ResNet-18, a deep learning model, to extract features that were subsequently fed into a machine learning classifier to automatically identify malignant lymph nodes in patients with prostate cancer. For comparison, two texture algorithms were implemented, namely the gray-level co-occurrence matrix (GLCM) and Gabor. Results: Using an institutional prostate lymph node dataset (42 positives, 84 negatives), the proposed framework achieved an accuracy of 76.19%, a sensitivity of 79.76%, and a specificity of 69.05%. Using GLCM features, the classification achieved an accuracy of 61.90%, a sensitivity of 74.07%, and a specificity of 42.86%. Using Gabor features, the classification achieved an accuracy of 65.08%, a sensitivity of 73.47%, and a specificity of 52.50%. Conclusions: Our results demonstrate that a hybrid approach, i.e., using a pre-trained deep learning model for feature extraction, followed by a machine learning classifier, is a viable solution. This hybrid approach is especially useful in medical-imaging-based applications with small datasets.
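The hybrid design summarized above (a pretrained CNN used as a fixed feature extractor feeding a conventional classifier) can be sketched in a few lines. The snippet below is an illustration only: torchvision's ResNet-18 and a scikit-learn SVM stand in for the study's pipeline, and the image paths and labels are placeholders.

```python
# Minimal sketch: frozen, ImageNet-pretrained ResNet-18 features + a classical classifier.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()          # drop the classification head, keep 512-d features
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    feats = []
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        feats.append(backbone(x).squeeze(0).cpu().numpy())
    return feats

# Hypothetical usage: 1 = malignant lymph node, 0 = benign (placeholder file names)
image_paths = ["node_001.png", "node_002.png"]
labels = [1, 0]
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(extract_features(image_paths), labels)
```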
Affiliation(s)
- Suryadipto Sarkar: Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Teresa Wu: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Matthew Harwood: Department of Radiology, Mayo Clinic, Phoenix, AZ 85259, USA
- Alvin C. Silva: Department of Radiology, Mayo Clinic, Phoenix, AZ 85259, USA
5
Hashemi Gheinani A, Kim J, You S, Adam RM. Bioinformatics in urology - molecular characterization of pathophysiology and response to treatment. Nat Rev Urol 2024; 21:214-242. PMID: 37604982; DOI: 10.1038/s41585-023-00805-3.
Abstract
The application of bioinformatics has revolutionized the practice of medicine in the past 20 years. From early studies that uncovered subtypes of cancer to broad efforts spearheaded by the Cancer Genome Atlas initiative, the use of bioinformatics strategies to analyse high-dimensional data has provided unprecedented insights into the molecular basis of disease. In addition to the identification of disease subtypes - which enables risk stratification - informatics analysis has facilitated the identification of novel risk factors and drivers of disease, biomarkers of progression and treatment response, as well as possibilities for drug repurposing or repositioning; moreover, bioinformatics has guided research towards precision and personalized medicine. Implementation of specific computational approaches such as artificial intelligence, machine learning and molecular subtyping has yet to become widespread in urology clinical practice for reasons of cost, disruption of clinical workflow and need for prospective validation of informatics approaches in independent patient cohorts. Solving these challenges might accelerate routine integration of bioinformatics into clinical settings.
Affiliation(s)
- Ali Hashemi Gheinani: Department of Urology, Boston Children's Hospital, Boston, MA, USA; Department of Surgery, Harvard Medical School, Boston, MA, USA; Broad Institute of MIT and Harvard, Cambridge, MA, USA; Department of Urology, Inselspital, Bern, Switzerland; Department for BioMedical Research, University of Bern, Bern, Switzerland
- Jina Kim: Department of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Sungyong You: Department of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Rosalyn M Adam: Department of Urology, Boston Children's Hospital, Boston, MA, USA; Department of Surgery, Harvard Medical School, Boston, MA, USA; Broad Institute of MIT and Harvard, Cambridge, MA, USA
6
Gifani P, Shalbaf A. Transfer Learning with Pretrained Convolutional Neural Network for Automated Gleason Grading of Prostate Cancer Tissue Microarrays. J Med Signals Sens 2024; 14:4. PMID: 38510670; PMCID: PMC10950311; DOI: 10.4103/jmss.jmss_42_22.
Abstract
Background The Gleason grading system has been the most effective predictor for prostate cancer patients. It makes it possible to assess the aggressiveness of prostate cancer and therefore constitutes an important factor for stratification and therapeutic decisions. However, determining the Gleason grade requires highly trained pathologists, is time-consuming and tedious, and suffers from inter-pathologist variability. To remedy these limitations, this paper introduces an automatic methodology based on transfer learning with pretrained convolutional neural networks (CNNs) for automatic Gleason grading of prostate cancer tissue microarrays (TMAs). Methods Fifteen pretrained CNNs (EfficientNets B0-B5, NASNetLarge, NASNetMobile, InceptionV3, ResNet-50, SE-ResNet-50, Xception, DenseNet121, ResNeXt50, and Inception-ResNet-v2) were fine-tuned on a dataset of prostate carcinoma TMA images. Six pathologists separately identified benign and cancerous areas for each prostate TMA image by assigning benign or Gleason grade 3, 4, or 5 for 244 patients. The dataset was labeled by these pathologists, and a majority vote was applied to the pixel-wise annotations to obtain a unified label. Results NASNetLarge was the best of these models for classifying the prostate TMA images of the 244 patients, with an accuracy of 0.93 and an area under the curve of 0.98. Conclusion Our model can act like a highly trained pathologist in categorizing prostate cancer stages, with more objective and reproducible results.
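For readers unfamiliar with the fine-tuning strategy, the sketch below shows the general recipe: an ImageNet-pretrained torchvision backbone adapted to the four classes used here (benign and Gleason grades 3, 4, and 5). ResNet-50 stands in for the fifteen architectures compared in the paper, and the TMA patch folder and training schedule are assumptions.

```python
# Illustrative fine-tuning sketch for 4-class Gleason grading (benign / 3 / 4 / 5).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("tma_patches/train", transform=tfm)  # hypothetical folder
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 4)   # benign / Gleason 3 / 4 / 5

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                          # short schedule for illustration
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```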
Affiliation(s)
- Parisa Gifani: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf: Cancer Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
7
Azadi Moghadam P, Bashashati A, Goldenberg SL. Artificial Intelligence and Pathomics: Prostate Cancer. Urol Clin North Am 2024; 51:15-26. PMID: 37945099; DOI: 10.1016/j.ucl.2023.06.001.
Abstract
Artificial intelligence (AI) has the potential to transform pathologic diagnosis and cancer patient management as a predictive and prognostic biomarker. AI-based systems can be used to examine digitally scanned histopathology slides and differentiate benign from malignant cells and low from high grade. Deep learning models can analyze patient data from individual or multimodal combinations and identify patterns to be used to predict the response to different therapeutic options, the risk of recurrence or progression, and the prognosis of the newly diagnosed patient. AI-based models will improve treatment planning for patients with prostate cancer and improve the efficiency and cost-effectiveness of the pathology laboratory.
Affiliation(s)
- Puria Azadi Moghadam: Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, British Columbia V6T 1Z4, Canada
- Ali Bashashati: School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, British Columbia V6T 1Z3, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, 2211 Wesbrook Mall, Vancouver, BC V6T 1Z7, Canada
- S Larry Goldenberg: Department of Urologic Sciences, University of British Columbia, 2775 Laurel Street, Vancouver, British Columbia V5Z 1M9, Canada
8
Shafi S, Parwani AV. Artificial intelligence in diagnostic pathology. Diagn Pathol 2023; 18:109. PMID: 37784122; PMCID: PMC10546747; DOI: 10.1186/s13000-023-01375-z.
Abstract
Digital pathology (DP) is being increasingly employed in cancer diagnostics, providing additional tools for faster, higher-quality, accurate diagnosis. The practice of diagnostic pathology has gone through a staggering transformation wherein new tools such as digital imaging, advanced artificial intelligence (AI) algorithms, and computer-aided diagnostic techniques are being used to assist, augment, and empower computational histopathology and AI-enabled diagnostics. This is paving the way for advancement in precision medicine in cancer. Automated whole slide imaging (WSI) scanners now render diagnostic-quality, high-resolution images of entire glass slides, and combining these images with innovative digital pathology tools is making it possible to integrate imaging into all aspects of pathology reporting, including anatomical, clinical, and molecular pathology. The recent FDA approvals of WSI scanners for primary diagnosis, as well as the approval of a prostate AI algorithm, have paved the way for incorporating this exciting technology into primary diagnosis. AI tools can provide a unique platform for innovations and advances in anatomical and clinical pathology workflows. In this review, we describe the milestones and landmark trials in the use of AI in clinical pathology, with emphasis on future directions.
Affiliation(s)
- Saba Shafi: Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH 43210, USA
- Anil V Parwani: Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH 43210, USA
9
Hu W, Li X, Li C, Li R, Jiang T, Sun H, Huang X, Grzegorzek M, Li X. A state-of-the-art survey of artificial neural networks for whole-slide image analysis: from popular convolutional neural networks to potential visual transformers. Comput Biol Med 2023; 161:107034. PMID: 37230019; DOI: 10.1016/j.compbiomed.2023.107034.
Abstract
In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole slide imaging (WSI), histopathological WSI has gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods have been widely applied to the segmentation, classification, and detection of histopathological WSIs. However, existing review papers focus only on equipment hardware, development status, and trends, and do not summarize in detail the state-of-the-art neural networks used for whole-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. First, the development status of WSI and ANN methods is introduced. Second, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are then divided into classical neural networks and deep neural networks (DNNs) and analyzed. Finally, the application prospects of these analytical methods are discussed, with visual transformers standing out as a particularly promising direction.
Affiliation(s)
- Weiming Hu: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xintong Li: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chen Li: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Rui Li: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Tao Jiang: School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun: Shengjing Hospital of China Medical University, Shenyang, China
- Xinyu Huang: Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
- Marcin Grzegorzek: Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li: Cancer Hospital of China Medical University, Shenyang, China
10
Su L, Wang Z, Shi Y, Li A, Wang M. Local augmentation based consistency learning for semi-supervised pathology image classification. Comput Methods Programs Biomed 2023; 232:107446. PMID: 36871546; DOI: 10.1016/j.cmpb.2023.107446.
Abstract
BACKGROUND AND OBJECTIVE Labeling pathology images is often costly and time-consuming, which is quite detrimental for supervised pathology image classification that relies heavily on sufficient labeled data during training. Exploring semi-supervised methods based on image augmentation and consistency regularization may effectively alleviate this problem. Nevertheless, traditional image-level augmentation (e.g., flipping) produces only a single enhancement of an image, whereas combining multiple image sources may mix unimportant image regions, resulting in poor performance. In addition, the regularization losses used in these augmentation approaches typically enforce consistency of image-level predictions and simply require each prediction of an augmented image to be consistent bilaterally, which may force pathology image features with better predictions to be wrongly aligned towards features with worse predictions. METHODS To tackle these problems, we propose a novel semi-supervised method called Semi-LAC for pathology image classification. Specifically, we first present a local augmentation technique that randomly applies different augmentations to each local pathology patch, which can boost the diversity of pathology images and avoid mixing unimportant regions from other images. Moreover, we propose a directional consistency loss that enforces consistency of both features and prediction results, thus improving the ability of the network to obtain robust representations and achieve accurate predictions. RESULTS The proposed method is evaluated on the Bioimaging2015 and BACH datasets, and extensive experiments show the superior performance of our Semi-LAC compared with state-of-the-art methods for pathology image classification. CONCLUSIONS We conclude that the Semi-LAC method can effectively reduce the cost of annotating pathology images and enhances the ability of classification networks to represent pathology images through local augmentation and the directional consistency loss.
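The two ingredients named above can be illustrated generically. The sketch below is an interpretation rather than the Semi-LAC implementation: each local patch of an image receives its own randomly sampled augmentation, and the consistency term is made directional with a stop-gradient so that the less confident prediction is pulled toward the more confident one.

```python
# Illustrative sketch (not the authors' code) of local augmentation and a
# directional consistency term between two augmented views.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def local_augment(img, grid=4):
    """Apply an independently sampled augmentation to each local patch of a (C, H, W) tensor."""
    aug = T.Compose([T.RandomHorizontalFlip(), T.RandomVerticalFlip(),
                     T.ColorJitter(brightness=0.2, contrast=0.2)])
    c, h, w = img.shape
    ph, pw = h // grid, w // grid
    out = img.clone()
    for i in range(grid):
        for j in range(grid):
            patch = out[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
            out[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw] = aug(patch)
    return out

def directional_consistency(logits_a, logits_b):
    """Align the less confident prediction toward the more confident one."""
    p_a, p_b = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    conf_a = p_a.max(dim=1).values.unsqueeze(1)
    conf_b = p_b.max(dim=1).values.unsqueeze(1)
    # stop-gradient on the stronger branch so only the weaker one is updated
    target = torch.where(conf_a >= conf_b, p_a.detach(), p_b.detach())
    source = torch.where(conf_a >= conf_b, p_b, p_a)
    return F.mse_loss(source, target)
```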
Affiliation(s)
- Lei Su: School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Zhi Wang: School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Yi Shi: School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Ao Li: School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Minghui Wang: School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
11
Diao S, Luo W, Hou J, Lambo R, Al-Kuhali HA, Zhao H, Tian Y, Xie Y, Zaki N, Qin W. Deep Multi-Magnification Similarity Learning for Histopathological Image Classification. IEEE J Biomed Health Inform 2023; 27:1535-1545. PMID: 37021898; DOI: 10.1109/jbhi.2023.3237137.
Abstract
Precise classification of histopathological images is crucial to computer-aided diagnosis in clinical practice. Magnification-based learning networks have attracted considerable attention for their ability to improve performance in histopathological classification. However, fusing pyramids of histopathological images at different magnifications is an under-explored area. In this paper, we propose a novel deep multi-magnification similarity learning (DSML) approach that aids interpretation of a multi-magnification learning framework and makes it easy to visualize feature representations from low dimensions (e.g., cell level) to high dimensions (e.g., tissue level), overcoming the difficulty of understanding how information propagates across magnifications. It uses a similarity cross-entropy loss function to simultaneously learn the similarity of information across magnifications. To verify the effectiveness of DSML, experiments with different network backbones and different magnification combinations were designed, and its interpretability was also investigated through visualization. Our experiments were performed on two different histopathological datasets: a clinical nasopharyngeal carcinoma dataset and the public breast cancer BCSS2021 dataset. The results show that our method achieved outstanding classification performance, with a higher area under the curve, accuracy, and F-score than comparable methods. Moreover, the reasons behind the effectiveness of multi-magnification learning are discussed.
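One way to read "similarity cross-entropy across magnifications" is sketched below: the batchwise similarity structure computed at one magnification serves as a soft target for the similarity structure at another. This is a hedged illustration of the general idea, not the paper's DSML formulation.

```python
# Hedged sketch of a cross-magnification similarity loss.
import torch
import torch.nn.functional as F

def similarity_cross_entropy(feat_low, feat_high, temperature=0.1):
    """feat_low / feat_high: (B, D) embeddings of the same patches at two magnifications."""
    z_low = F.normalize(feat_low, dim=1)
    z_high = F.normalize(feat_high, dim=1)
    sim_low = z_low @ z_low.t() / temperature      # (B, B) similarities among low-mag views
    sim_high = z_high @ z_high.t() / temperature   # (B, B) similarities among high-mag views
    # cross-entropy between the two similarity distributions (high-mag used as soft target)
    log_p = F.log_softmax(sim_low, dim=1)
    q = F.softmax(sim_high, dim=1).detach()
    return -(q * log_p).sum(dim=1).mean()
```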
12
Parwani AV, Patel A, Zhou M, Cheville JC, Tizhoosh H, Humphrey P, Reuter VE, True LD. An update on computational pathology tools for genitourinary pathology practice: a review paper from the Genitourinary Pathology Society (GUPS). J Pathol Inform 2023; 14:100177. PMID: 36654741; PMCID: PMC9841212; DOI: 10.1016/j.jpi.2022.100177.
Abstract
Machine learning has been leveraged for image analysis applications throughout a multitude of subspecialties. This position paper provides a perspective on the evolutionary trajectory of practical deep learning tools for genitourinary pathology through evaluating the most recent iterations of such algorithmic devices. Deep learning tools for genitourinary pathology demonstrate potential to enhance prognostic and predictive capacity for tumor assessment including grading, staging, and subtype identification, yet limitations in data availability, regulation, and standardization have stymied their implementation.
Affiliation(s)
- Anil V. Parwani: The Ohio State University, Columbus, Ohio, USA
- Ankush Patel: The Ohio State University, 2441 60th Ave SE, Mercer Island, Washington 98040, USA
- Ming Zhou: Tufts University, Medford, Massachusetts, USA
13
Patel AU, Mohanty SK, Parwani AV. Applications of Digital and Computational Pathology and Artificial Intelligence in Genitourinary Pathology Diagnostics. Surg Pathol Clin 2022; 15:759-785. PMID: 36344188; DOI: 10.1016/j.path.2022.08.001.
Abstract
As machine learning (ML) solutions for genitourinary pathology image analysis are fostered by a progressively digitized laboratory landscape, these integrable modalities usher in a revolution in histopathological diagnosis. As technology advances, limitations stymying clinical artificial intelligence (AI) will not be extinguished without thorough validation and interrogation of ML tools by pathologists and regulatory bodies alike. ML solutions deployed in clinical settings for applications in prostate pathology yield promising results. Recent breakthroughs in clinical artificial intelligence for genitourinary pathology demonstrate unprecedented generalizability, heralding prospects for a future in which AI-driven assistive solutions may be seen as laboratory faculty, rather than novelty.
Affiliation(s)
- Ankush Uresh Patel: Department of Laboratory Medicine and Pathology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Sambit K Mohanty: Surgical and Molecular Pathology, Advanced Medical Research Institute, Plot No. 1, Near Jayadev Vatika Park, Khandagiri, Bhubaneswar, Odisha 751019. https://twitter.com/SAMBITKMohanty1
- Anil V Parwani: Department of Pathology, The Ohio State University, Cooperative Human Tissue Network (CHTN) Midwestern Division, Polaris Innovation Centre, 2001 Polaris Parkway, Suite 1000, Columbus, OH 43240, USA
14
Terada Y, Takahashi T, Hayakawa T, Ono A, Kawata T, Isaka M, Muramatsu K, Tone K, Kodama H, Imai T, Notsu A, Mori K, Ohde Y, Nakajima T, Sugino T, Takahashi T. Artificial Intelligence-Powered Prediction of ALK Gene Rearrangement in Patients With Non-Small-Cell Lung Cancer. JCO Clin Cancer Inform 2022; 6:e2200070. PMID: 36162012; DOI: 10.1200/cci.22.00070.
Abstract
PURPOSE Several studies have reported the possibility of predicting genetic abnormalities in non-small-cell lung cancer by deep learning (DL). However, there are no data on predicting ALK gene rearrangement (ALKr) using DL. We evaluated ALKr predictability using a DL platform. MATERIALS AND METHODS We selected 66 ALKr-positive cases and 142 ALKr-negative cases, which were diagnosed by ALKr immunohistochemical staining in our institution from January 2009 to March 2019. We generated virtual slides of 300 slides (150 ALKr-positive and 150 ALKr-negative) using NanoZoomer. HALO-AI was used to analyze the whole-slide imaging data, and the DenseNet network was used to build the learning model. Of the 300 slides, we randomly assigned 172 slides to the training cohort and 128 slides to the test cohort, ensuring no duplication of cases. At four resolutions (16.0/4.0/1.0/0.25 μm/pixel), ALKr prediction models were built in the training cohort and ALKr prediction performance was evaluated in the test cohort. We evaluated the diagnostic probability of ALKr by receiver operating characteristic analysis at each ALKr probability threshold (50%, 60%, 70%, 80%, 90%, and 95%). On the basis of a previous study's model, we expected the area under the curve to be 0.64-0.85. Furthermore, on the test cohort data, an expert pathologist also evaluated the presence of ALKr by hematoxylin and eosin staining on whole-slide images. RESULTS The maximum area under the curve was 0.73 (50% threshold: 95% CI, 0.65 to 0.82) at a resolution of 1.0 μm/pixel. At this resolution, with an ALKr probability threshold of 50%, the sensitivity and specificity were both 73%. The expert pathologist's sensitivity and specificity in the same test cohort were 13% and 94%. CONCLUSION ALKr prediction by DL was feasible. Further study is needed to improve the accuracy of ALKr prediction.
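The slide-level evaluation described here (area under the curve plus sensitivity and specificity at a series of probability thresholds) is straightforward to reproduce with scikit-learn; the labels and predicted probabilities below are made-up placeholders.

```python
# Sketch of threshold-based evaluation: AUC, sensitivity, and specificity at several cut-offs.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1])                     # hypothetical slide-level labels
y_prob = np.array([0.8, 0.3, 0.6, 0.4, 0.9, 0.2, 0.55, 0.7])    # hypothetical model probabilities

print("AUC:", roc_auc_score(y_true, y_prob))
for thr in (0.5, 0.6, 0.7, 0.8, 0.9, 0.95):
    y_pred = (y_prob >= thr).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"threshold {thr:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```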
Affiliation(s)
- Yukihiro Terada: Division of Thoracic Surgery, Shizuoka Cancer Center, Shizuoka, Japan
- Akira Ono: Division of Thoracic Oncology, Shizuoka Cancer Center, Shizuoka, Japan
- Takuya Kawata: Division of Pathology, Shizuoka Cancer Center, Shizuoka, Japan
- Mitsuhiro Isaka: Division of Thoracic Surgery, Shizuoka Cancer Center, Shizuoka, Japan
- Koji Muramatsu: Division of Pathology, Shizuoka Cancer Center, Shizuoka, Japan
- Kiyoshi Tone: Division of Pathology, Shizuoka Cancer Center, Shizuoka, Japan
- Hiroaki Kodama: Division of Thoracic Oncology, Shizuoka Cancer Center, Shizuoka, Japan
- Toru Imai: Department of Biostatistics, Clinical Research Center, Shizuoka Cancer Center, Shizuoka, Japan
- Akifumi Notsu: Department of Biostatistics, Clinical Research Center, Shizuoka Cancer Center, Shizuoka, Japan
- Keita Mori: Department of Biostatistics, Clinical Research Center, Shizuoka Cancer Center, Shizuoka, Japan
- Yasuhisa Ohde: Division of Thoracic Surgery, Shizuoka Cancer Center, Shizuoka, Japan
- Takashi Sugino: Division of Pathology, Shizuoka Cancer Center, Shizuoka, Japan
15
Sheikh TS, Kim JY, Shim J, Cho M. Unsupervised Learning Based on Multiple Descriptors for WSIs Diagnosis. Diagnostics (Basel) 2022; 12:1480. PMID: 35741289; PMCID: PMC9222016; DOI: 10.3390/diagnostics12061480.
Abstract
An automatic pathological diagnosis is a challenging task because histopathological images with different cellular heterogeneity representations are sometimes limited. To overcome this, we investigated how the holistic and local appearance features with limited information can be fused to enhance the analysis performance. We propose an unsupervised deep learning model for whole-slide image diagnosis, which uses stacked autoencoders simultaneously feeding multiple-image descriptors such as the histogram of oriented gradients and local binary patterns along with the original image to fuse the heterogeneous features. The pre-trained latent vectors are extracted from each autoencoder, and these fused feature representations are utilized for classification. We observed that training with additional descriptors helps the model to overcome the limitations of multiple variants and the intricate cellular structure of histopathology data by various experiments. Our model outperforms existing state-of-the-art approaches by achieving the highest accuracies of 87.2 for ICIAR2018, 94.6 for Dartmouth, and other significant metrics for public benchmark datasets. Our model does not rely on a specific set of pre-trained features based on classifiers to achieve high performance. Unsupervised spaces are learned from the number of independent multiple descriptors and can be used with different variants of classifiers to classify cancer diseases from whole-slide images. Furthermore, we found that the proposed model classifies the types of breast and lung cancer similar to the viewpoint of pathologists by visualization. We also designed our whole-slide image processing toolbox to extract and process the patches from whole-slide images.
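The multiple-descriptor input can be sketched as follows: HOG and LBP descriptors computed from a patch are concatenated with a coarse version of the patch itself before being fed to the autoencoders. Descriptor parameters and the downscaling size are illustrative assumptions, not the paper's settings.

```python
# Sketch of descriptor fusion: holistic thumbnail + HOG + LBP histogram per patch.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern
from skimage.transform import resize

def fused_descriptor(rgb_patch):
    gray = rgb2gray(rgb_patch)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")       # values in 0..9
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    thumb = resize(gray, (32, 32)).ravel()                             # coarse holistic appearance
    return np.concatenate([thumb, hog_vec, lbp_hist])
```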
Affiliation(s)
- Jee-Yeon Kim: Department of Pathology, Pusan National University Yangsan Hospital, School of Medicine, Pusan National University, Yangsan-si 50612, Korea
- Jaesool Shim: School of Mechanical Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Migyung Cho: Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Korea
16
Fine-Tuned DenseNet-169 for Breast Cancer Metastasis Prediction Using FastAI and 1-Cycle Policy. Sensors (Basel) 2022; 22:2988. PMID: 35458972; PMCID: PMC9025766; DOI: 10.3390/s22082988.
Abstract
Lymph node metastasis in breast cancer may be accurately predicted using a DenseNet-169 model. However, the current system for identifying metastases in a lymph node is manual and tedious. A pathologist well-versed with the process of detection and characterization of lymph nodes goes through hours investigating histological slides. Furthermore, because of the massive size of most whole-slide images (WSI), it is wise to divide a slide into batches of small image patches and apply methods independently on each patch. The present work introduces a novel method for the automated diagnosis and detection of metastases from whole slide images using the Fast AI framework and the 1-cycle policy. Additionally, it compares this new approach to previous methods. The proposed model has surpassed other state-of-art methods with more than 97.4% accuracy. In addition, a mobile application is developed for prompt and quick response. It collects user information and models to diagnose metastases present in the early stages of cancer. These results indicate that the suggested model may assist general practitioners in accurately analyzing breast cancer situations, hence preventing future complications and mortality. With digital image processing, histopathologic interpretation and diagnostic accuracy have improved considerably.
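As a sketch of the training recipe (DenseNet-169 fine-tuned under the 1-cycle learning-rate policy), the snippet below uses PyTorch's OneCycleLR scheduler to approximate what FastAI's fit_one_cycle provides; the DataLoader train_dl, the class count, and the schedule lengths are assumptions for illustration.

```python
# Sketch: fine-tune DenseNet-169 with a 1-cycle learning-rate schedule.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)   # metastasis vs normal patch
model.to(device)

epochs = 3                                                       # illustrative schedule
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=1e-3, epochs=epochs, steps_per_epoch=len(train_dl))  # train_dl is assumed

loss_fn = nn.CrossEntropyLoss()
for _ in range(epochs):
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        sched.step()          # learning rate follows the 1-cycle schedule per batch
```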
17
Hirway SU, Weinberg SH. A review of computational modeling, machine learning and image analysis in cancer metastasis dynamics. Computational and Systems Oncology 2022. DOI: 10.1002/cso2.1044.
Affiliation(s)
- Shreyas U. Hirway: Department of Biomedical Engineering, The Ohio State University, Columbus, Ohio, USA
- Seth H. Weinberg: Department of Biomedical Engineering, The Ohio State University, Columbus, Ohio, USA
18
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches. Artif Intell Rev 2022. DOI: 10.1007/s10462-021-10121-0.
19
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. PMID: 36249889; PMCID: PMC9554123; DOI: 10.1177/17562872221128791.
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya: Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang: Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA; Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder: Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu: Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
20
Fathi Kazerooni A, Bagley SJ, Akbari H, Saxena S, Bagheri S, Guo J, Chawla S, Nabavizadeh A, Mohan S, Bakas S, Davatzikos C, Nasrallah MP. Applications of Radiomics and Radiogenomics in High-Grade Gliomas in the Era of Precision Medicine. Cancers (Basel) 2021; 13:5921. PMID: 34885031; PMCID: PMC8656630; DOI: 10.3390/cancers13235921.
Abstract
Simple Summary: Radiomics and radiogenomics offer new insight into high-grade glioma biology, as well as into glioma behavior in response to standard therapies. In this article, we provide neuro-oncology, neuropathology, and computational perspectives on the role of radiomics in providing more accurate diagnoses, prognostication, and surveillance of patients with high-grade glioma, and on the potential application of radiomics in clinical practice, with the overarching goal of advancing precision medicine for optimal patient care.
Abstract: Machine learning (ML) integrated with medical imaging has introduced new perspectives in precision diagnostics of high-grade gliomas, through radiomics and radiogenomics. This has raised hopes for characterizing noninvasive and in vivo biomarkers for prediction of patient survival, tumor recurrence, and genomics, thereby encouraging treatments tailored to individualized needs. Characterization of tumor infiltration based on pre-operative multi-parametric magnetic resonance imaging (MP-MRI) scans may allow prediction of the loci of future tumor recurrence and thereby aid in planning the course of treatment for the patients, such as optimizing the extent of resection and the dose and target area of radiation. Imaging signatures of tumor genomics can help in identifying the patients who benefit from certain targeted therapies. Specifying molecular properties of gliomas and predicting their changes over time and with treatment would allow optimization of treatment. In this article, we provide neuro-oncology, neuropathology, and computational perspectives on the promise of radiomics and radiogenomics for allowing personalized treatment of patients with gliomas, and we discuss the challenges and limitations of these methods in multi-institutional clinical trials, along with suggestions to mitigate these issues and future directions.
Affiliation(s)
- Anahita Fathi Kazerooni: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Stephen J. Bagley: Abramson Cancer Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Glioblastoma Translational Center of Excellence, Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA 19104, USA
- Hamed Akbari: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Sanjay Saxena: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Sina Bagheri: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jun Guo: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Sanjeev Chawla: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ali Nabavizadeh: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Suyash Mohan: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Spyridon Bakas: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Christos Davatzikos: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- MacLean P. Nasrallah: Glioblastoma Translational Center of Excellence, Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Pathology & Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
21
Miao R, Toth R, Zhou Y, Madabhushi A, Janowczyk A. Quick Annotator: an open-source digital pathology based rapid image annotation tool. J Pathol Clin Res 2021; 7:542-547. PMID: 34288586; PMCID: PMC8503896; DOI: 10.1002/cjp2.229.
Abstract
Image-based biomarker discovery typically requires accurate segmentation of histologic structures (e.g. cell nuclei, tubules, and epithelial regions) in digital pathology whole slide images (WSIs). Unfortunately, annotating each structure of interest is laborious and often intractable even in moderately sized cohorts. Here, we present an open-source tool, Quick Annotator (QA), designed to improve annotation efficiency of histologic structures by orders of magnitude. While the user annotates regions of interest (ROIs) via an intuitive web interface, a deep learning (DL) model is concurrently optimized using these annotations and applied to the ROI. The user iteratively reviews DL results to either (1) accept accurately annotated regions or (2) correct erroneously segmented structures to improve subsequent model suggestions, before transitioning to other ROIs. We demonstrate the effectiveness of QA over comparable manual efforts via three use cases. These include annotating (1) 337,386 nuclei in 5 pancreatic WSIs, (2) 5,692 tubules in 10 colorectal WSIs, and (3) 14,187 regions of epithelium in 10 breast WSIs. Efficiency gains in terms of annotations per second of 102×, 9×, and 39× were, respectively, witnessed while retaining f-scores >0.95, suggesting that QA may be a valuable tool for efficiently fully annotating WSIs employed in downstream biomarker studies.
Affiliation(s)
- Runtian Miao: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Yu Zhou: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Veterans Administration Medical Center, Cleveland, OH, USA
- Andrew Janowczyk: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Precision Oncology Center, Lausanne University Hospital, Lausanne, Switzerland
22
de P Mendes R, Yuan X, Genega EM, Xu X, da F Costa L, Comin CH. Gland context networks: a novel approach for improving prostate cancer identification. Comput Med Imaging Graph 2021; 94:101999. PMID: 34753056; DOI: 10.1016/j.compmedimag.2021.101999.
Abstract
Prostate cancer (PCa) is a pervasive condition that is manifested in a wide range of histologic patterns in biopsy samples. Given the importance of identifying abnormal prostate tissue to improve prognosis, many computerized methodologies aimed at assisting pathologists in diagnosis have been developed. It is often argued that improved diagnosis of a tissue region can be obtained by considering measurements that can take into account several properties of its surroundings, therefore providing a more robust context for the analysis. Here we propose a novel methodology that can be used for systematically defining contextual features regarding prostate glands. This is done by defining a Gland Context Network (GCN), a representation of the prostate sample containing information about the spatial relationship between glands as well as the similarity between their appearance. We show that such a network can be used for establishing contextual features at any spatial scale, therefore providing information that is not easily obtained from traditional shape and textural features. Furthermore, it is shown that even basic features derived from a GCN can lead to state-of-the-art classification performance regarding PCa. All in all, GCNs can assist in defining more effective approaches for PCa grading.
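A minimal sketch of what a gland-context graph can look like is given below: nodes carry gland centroids and appearance features, edges connect spatially close glands with weights reflecting feature similarity, and a simple contextual feature is the neighborhood average. The data structures and radius are hypothetical; the paper's exact GCN construction may differ.

```python
# Illustrative gland context graph and a simple contextual feature.
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def gland_context_graph(centroids, features, radius=200.0):
    """centroids: (N, 2) gland positions in pixels; features: (N, D) shape/texture vectors."""
    g = nx.Graph()
    for i, (c, f) in enumerate(zip(centroids, features)):
        g.add_node(i, centroid=c, feat=f)
    tree = cKDTree(centroids)
    for i, j in tree.query_pairs(r=radius):           # spatially close gland pairs
        sim = 1.0 / (1.0 + np.linalg.norm(features[i] - features[j]))
        g.add_edge(i, j, weight=sim)                  # edge weight = appearance similarity
    return g

def contextual_features(g):
    """Per-gland context: mean feature vector over graph neighbors (own feature if isolated)."""
    ctx = {}
    for n in g.nodes:
        nbrs = list(g.neighbors(n))
        feats = [g.nodes[m]["feat"] for m in nbrs] or [g.nodes[n]["feat"]]
        ctx[n] = np.mean(feats, axis=0)
    return ctx
```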
Affiliation(s)
- Rodrigo de P Mendes
- Department of Computer Science, Federal University of São Carlos, São Carlos, SP, Brazil
| | - Xin Yuan
- Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Elizabeth M Genega
- Department of Pathology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Xiaoyin Xu
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Luciano da F Costa
- São Carlos Institute of Physics, University of São Paulo, São Carlos, SP, Brazil
| | - Cesar H Comin
- Department of Computer Science, Federal University of São Carlos, São Carlos, SP, Brazil.
23
Xu H, Liu L, Lei X, Mandal M, Lu C. An unsupervised method for histological image segmentation based on tissue cluster level graph cut. Comput Med Imaging Graph 2021; 93:101974. [PMID: 34481236 DOI: 10.1016/j.compmedimag.2021.101974] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 07/11/2021] [Accepted: 08/17/2021] [Indexed: 11/16/2022]
Abstract
While deep learning models have demonstrated outstanding performance in medical image segmentation tasks, histological annotations for training deep learning models are usually challenging to obtain, owing to the effort and experience required to carefully delineate tissue structures. In this study, we propose an unsupervised method, termed tissue cluster level graph cut (TisCut), for segmenting histological images into meaningful compartments (e.g., tumor or non-tumor regions), which aims at assisting histological annotation for downstream supervised models. TisCut consists of three modules. First, histological tissue objects are clustered based on their spatial proximity and morphological features. The Voronoi diagram is then constructed based on the tissue object clustering. In the last module, morphological features computed from the Voronoi diagram are integrated into a region adjacency graph, and the image is partitioned into meaningful compartments using the graph cut algorithm. TisCut has been evaluated on three histological image sets for necrosis and melanoma detection. Experiments show that TisCut provides performance comparable to U-Net models, achieving about 70% and 85% Jaccard index coefficients in partitioning brain and skin histological images, respectively. In addition, it shows potential for generating histological annotations when training masks are difficult to collect for supervised segmentation models.
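A compact way to see the three modules working together is the sketch below, where generic scikit-learn and SciPy components stand in for the paper's exact clustering, Voronoi, and graph-cut steps; every parameter shown is an illustrative assumption.

```python
# Sketch of a TisCut-style unsupervised partition: cluster tissue objects on
# location and morphology, connect neighbouring clusters through a
# Delaunay/Voronoi adjacency, then split the region graph into compartments.
import numpy as np
from scipy.spatial import Delaunay
from sklearn.cluster import KMeans, SpectralClustering

def partition_tissue(centroids, morph_feats, n_clusters=20, n_compartments=2):
    # 1) Tissue-object clustering on spatial proximity + morphology.
    X = np.hstack([centroids, morph_feats])
    obj_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    centers = np.array([centroids[obj_labels == k].mean(axis=0)
                        for k in range(n_clusters)])
    # 2) Region adjacency from the Delaunay triangulation of cluster centres
    #    (the dual of their Voronoi diagram).
    tri = Delaunay(centers)
    A = np.zeros((n_clusters, n_clusters))
    for simplex in tri.simplices:
        for i in simplex:
            for j in simplex:
                if i != j:
                    d = np.linalg.norm(centers[i] - centers[j])
                    A[i, j] = A[j, i] = 1.0 / (1.0 + d)  # closer = stronger link
    # 3) Partition the region graph (spectral clustering stands in for the
    #    graph-cut step of the paper).
    comp_of_cluster = SpectralClustering(n_clusters=n_compartments,
                                         affinity='precomputed').fit_predict(A)
    return comp_of_cluster[obj_labels]  # compartment label per tissue object
```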
Affiliation(s)
- Hongming Xu
- School of Biomedical Engineering at Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning 116024, China
| | - Lina Liu
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
| | - Xiujuan Lei
- College of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi 710119, China
| | - Mrinal Mandal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
| | - Cheng Lu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA.
24
Su L, Liu Y, Wang M, Li A. Semi-HIC: A novel semi-supervised deep learning method for histopathological image classification. Comput Biol Med 2021; 137:104788. [PMID: 34461503 DOI: 10.1016/j.compbiomed.2021.104788] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 08/17/2021] [Accepted: 08/18/2021] [Indexed: 11/30/2022]
Abstract
Histopathological images provide a gold standard for cancer recognition and diagnosis. Existing approaches for histopathological image classification are supervised learning methods that demand a large amount of labeled data to obtain satisfactory performance, and they face the challenge of limited data annotation owing to its prohibitive time cost. To circumvent this shortage, a promising strategy is to design semi-supervised learning methods. Recently, a novel semi-supervised approach called Learning by Association (LA) was proposed, which achieves promising performance in natural image classification. However, there are still great challenges in applying it to histopathological image classification because of the wide inter-class similarity and intra-class heterogeneity of histopathological images. To address these issues, we propose a novel semi-supervised deep learning method called Semi-HIC for histopathological image classification. In particular, we introduce a new semi-supervised loss function combining an association cycle consistency (ACC) loss and a maximal conditional association (MCA) loss, which can take advantage of a large number of unlabeled patches and address the problems of inter-class similarity and intra-class variation in histopathological images, thereby remarkably improving classification performance. Besides, we employ an efficient network architecture with cascaded Inception blocks (CIBs) to learn rich and discriminative embeddings from patches. Experimental results on both the Bioimaging 2015 challenge dataset and the BACH dataset demonstrate that our Semi-HIC method compares favorably with existing deep learning methods for histopathological image classification and consistently outperforms the semi-supervised LA method.
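The association cycle-consistency idea can be sketched in a few lines: labeled-patch embeddings should "walk" to unlabeled patches and return to labeled patches of the same class. The snippet below covers only that ingredient (the MCA term and the cascaded-Inception backbone are omitted), and the tensor names and shapes are assumptions.

```python
# Sketch of an association cycle-consistency term for semi-supervised
# training. emb_labeled: (L, d) embeddings with labels (L,);
# emb_unlabeled: (U, d) embeddings of unlabeled patches.
import torch
import torch.nn.functional as F

def association_cycle_loss(emb_labeled, labels, emb_unlabeled):
    M = emb_labeled @ emb_unlabeled.t()          # similarity, shape (L, U)
    p_lu = F.softmax(M, dim=1)                   # labeled -> unlabeled walk
    p_ul = F.softmax(M.t(), dim=1)               # unlabeled -> labeled walk
    p_round = p_lu @ p_ul                        # (L, L) round-trip probabilities
    # Target: uniform over labeled samples that share the starting class.
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    target = same / same.sum(dim=1, keepdim=True)
    return F.kl_div(torch.log(p_round + 1e-8), target, reduction='batchmean')
```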
Affiliation(s)
- Lei Su
- School of Information Science and Technology, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China.
| | - Yu Liu
- School of Information Science and Technology, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China.
| | - Minghui Wang
- School of Information Science and Technology, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China; Research Centers for Biomedical Engineering, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China.
| | - Ao Li
- School of Information Science and Technology, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China; Research Centers for Biomedical Engineering, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China.
25
Vuong TTL, Song B, Kim K, Cho YM, Kwak JT. Multi-scale binary pattern encoding network for cancer classification in pathology images. IEEE J Biomed Health Inform 2021; 26:1152-1163. [PMID: 34310334 DOI: 10.1109/jbhi.2021.3099817] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Multi-scale approaches have been widely studied in pathology image analysis. These offer the ability to characterize tissue in an image at various scales, at which the tissue may appear differently. Many such methods have focused on extracting multi-scale hand-crafted features and applying them to various tasks in pathology image analysis, and several deep learning methods explicitly adopt multi-scale approaches as well. However, most of these methods simply merge the multi-scale features together or adopt a coarse-to-fine/fine-to-coarse strategy, which uses the features one at a time in a sequential manner. By utilizing the multi-scale features in a cooperative and discriminative fashion, the learning capability could be further improved. Herein, we propose a multi-scale approach that can identify and leverage the patterns of the multiple scales within a deep neural network and provide superior cancer classification. The patterns of the features across multiple scales are encoded as a binary pattern code and further converted to a decimal number, which can be easily embedded in the current framework of deep neural networks. To evaluate the proposed method, multiple sets of pathology images are employed. Under various experimental settings, the proposed method is systematically assessed and shows improved classification performance in comparison to other competing methods.
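The binary-pattern encoding itself is simple to sketch: pool a response per scale, threshold it, pack the bits into a decimal code, and embed that code. Thresholding against the batch mean and the layer sizes below are illustrative assumptions, not the paper's exact design.

```python
# Sketch of multi-scale binary pattern encoding: one bit per scale, packed
# into a decimal code and mapped to a learnable embedding.
import torch
import torch.nn as nn

class BinaryPatternEncoder(nn.Module):
    def __init__(self, n_scales=3, embed_dim=32):
        super().__init__()
        self.n_scales = n_scales
        self.embed = nn.Embedding(2 ** n_scales, embed_dim)  # one vector per code

    def forward(self, scale_feats):
        # scale_feats: list of n_scales feature maps, each (B, C_i, H_i, W_i).
        assert len(scale_feats) == self.n_scales
        code = None
        for f in scale_feats:
            pooled = f.mean(dim=(1, 2, 3))                   # (B,) scale response
            bit = (pooled > pooled.mean()).long()            # binarize per scale
            code = bit if code is None else code * 2 + bit   # pack into decimal
        return self.embed(code)                              # (B, embed_dim)
```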
26
Abstract
PURPOSE OF REVIEW Pathomics, the fusion of digitalized pathology and artificial intelligence, is currently changing the landscape of medical pathology and biologic disease classification. In this review, we give an overview of Pathomics and summarize its most relevant applications in urology. RECENT FINDINGS There is a steady rise in the number of studies employing Pathomics, and especially deep learning, in urology. In prostate cancer, several algorithms have been developed for the automatic differentiation between benign and malignant lesions and to differentiate Gleason scores. Furthermore, several applications have been developed for the automatic cancer cell detection in urine and for tumor assessment in renal cancer. Despite the explosion in research, Pathomics is not fully ready yet for widespread clinical application. SUMMARY In prostate cancer and other urologic pathologies, Pathomics is avidly being researched with commercial applications on the close horizon. Pathomics is set to improve the accuracy, speed, reliability, cost-effectiveness and generalizability of pathology, especially in uro-oncology.
27
Lew M, Wilbur DC. A novel approach to integrating artificial intelligence into routine practice. Cancer Cytopathol 2021; 129:677-678. [PMID: 33826793 DOI: 10.1002/cncy.22424] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 02/25/2021] [Indexed: 11/11/2022]
Affiliation(s)
- Madelyn Lew
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
| | - David C Wilbur
- Department of Pathology, Harvard Medical School, Boston, Massachusetts
28
Abstract
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis. The analysis of such images is time- and resource-consuming and very challenging even for experienced pathologists, resulting in inter-observer and intra-observer disagreements. One way of accelerating such an analysis is to use computer-aided diagnosis (CAD) systems. This paper presents a review of machine learning methods for histopathological image analysis, including shallow and deep learning methods. We also cover the most common tasks in HI analysis, such as segmentation and feature extraction. In addition, we present a list of publicly available and private datasets that have been used in HI research.
29
Li J, Li W, Sisk A, Ye H, Wallace WD, Speier W, Arnold CW. A multi-resolution model for histopathology image classification and localization with multiple instance learning. Comput Biol Med 2021; 131:104253. [PMID: 33601084 DOI: 10.1016/j.compbiomed.2021.104253] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 01/31/2021] [Accepted: 02/03/2021] [Indexed: 12/17/2022]
Abstract
Large numbers of histopathological images have been digitized into high resolution whole slide images, opening opportunities in developing computational image analysis tools to reduce pathologists' workload and potentially improve inter- and intra-observer agreement. Most previous work on whole slide image analysis has focused on classification or segmentation of small pre-selected regions-of-interest, which requires fine-grained annotation and is non-trivial to extend for large-scale whole slide analysis. In this paper, we proposed a multi-resolution multiple instance learning model that leverages saliency maps to detect suspicious regions for fine-grained grade prediction. Instead of relying on expensive region- or pixel-level annotations, our model can be trained end-to-end with only slide-level labels. The model is developed on a large-scale prostate biopsy dataset containing 20,229 slides from 830 patients. The model achieved 92.7% accuracy, 81.8% Cohen's Kappa for benign, low grade (i.e. Grade group 1) and high grade (i.e. Grade group ≥ 2) prediction, an area under the receiver operating characteristic curve (AUROC) of 98.2% and an average precision (AP) of 97.4% for differentiating malignant and benign slides. The model obtained an AUROC of 99.4% and an AP of 99.8% for cancer detection on an external dataset.
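A generic attention-pooled multiple instance learning head makes the slide-label-only training concrete; this is not the paper's exact multi-resolution architecture, and the attention weights merely play the role of a coarse saliency signal.

```python
# Sketch of an attention-MIL head: patch embeddings are pooled with learned
# attention so that only a slide-level label is needed for training.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim=512, hidden=128, n_classes=3):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(in_dim, n_classes)

    def forward(self, patch_embs):                       # (N_patches, in_dim)
        w = torch.softmax(self.attn(patch_embs), dim=0)  # per-patch attention
        slide_emb = (w * patch_embs).sum(dim=0)          # weighted pooling
        return self.head(slide_emb), w.squeeze(-1)       # logits, saliency proxy
```

In a multi-resolution setting, the patch embeddings would come from tiles extracted at several magnifications, with the attention scores indicating which regions deserve finer inspection.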
Affiliation(s)
- Jiayun Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA.
| | - Wenyuan Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
| | - Anthony Sisk
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
| | - Huihui Ye
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
| | - W Dean Wallace
- Department of Pathology, USC, 2011 Zonal Avenue, Los Angeles, CA, 90033, USA
| | - William Speier
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
| | - Corey W Arnold
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA.
30
Tang H, Mao L, Zeng S, Deng S, Ai Z. Discriminative dictionary learning algorithm with pairwise local constraints for histopathological image classification. Med Biol Eng Comput 2021; 59:153-164. [PMID: 33386592 DOI: 10.1007/s11517-020-02281-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2019] [Accepted: 10/22/2020] [Indexed: 10/22/2022]
Abstract
Histopathological images contain rich pathological information that is valuable for the aided diagnosis of many diseases, such as cancer. An important issue in histopathological image classification is how to learn a high-quality discriminative dictionary, given the diverse tissue patterns, varied textures, and different morphological structures involved. In this paper, we propose a discriminative dictionary learning algorithm with pairwise local constraints (PLCDDL) for histopathological image classification. Inspired by the one-to-one mapping between dictionary atom and profile, we learn a pair of discriminative graph Laplacian matrices that are less sensitive to noise or outliers to capture the locality and discriminating information of the data manifold, by utilizing the local geometry information of category-specific dictionaries rather than the input data. Furthermore, graph-based pairwise local constraints are designed and incorporated into the original dictionary learning model to effectively encode the locality consistency with intra-class samples and the locality inconsistency with inter-class samples. Specifically, we learn the discriminative localities for representations by jointly optimizing both the intra-class locality and the inter-class locality, which can significantly improve the discriminability and robustness of the dictionary. Extensive experiments on challenging datasets verify that the proposed PLCDDL algorithm achieves better classification accuracy and stronger robustness than state-of-the-art dictionary learning methods. Graphical abstract: the proposed PLCDDL algorithm. (1) A pair of graph Laplacian matrices is first learned based on the class-specific dictionaries; (2) graph-based pairwise local constraints are designed to transfer the locality to the coding coefficients; (3) the class-specific dictionaries can then be further updated.
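The flavour of the graph-based locality constraint can be conveyed with a generic manifold regularizer over dictionary atoms: build a k-nearest-neighbour affinity among atoms, form its Laplacian, and penalize coding coefficients that differ across neighbouring atoms. This is an illustration of the general idea only, not the PLCDDL objective.

```python
# Sketch of a graph-Laplacian locality penalty for dictionary coding.
# D: (n_features, n_atoms) dictionary; X: (n_atoms, n_samples) coefficients.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_locality_penalty(D, X, k=5):
    A = kneighbors_graph(D.T, n_neighbors=k, mode='distance').toarray()
    W = np.zeros_like(A)
    mask = A > 0
    W[mask] = np.exp(-(A[mask] ** 2) / (A[mask].mean() ** 2 + 1e-12))
    W = np.maximum(W, W.T)                  # symmetric affinity between atoms
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    # Equals 0.5 * sum_ij W_ij * ||X_i - X_j||^2: small when neighbouring
    # atoms receive similar coefficient rows.
    return float(np.trace(X.T @ L @ X))
```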
Affiliation(s)
- Hongzhong Tang
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China; College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China; Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, Xiangtan University, Xiangtan, Hunan, People's Republic of China
| | - Lizhen Mao
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
| | - Shuying Zeng
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
| | - Shijun Deng
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China; College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China
| | - Zhaoyang Ai
- Institute of Biophysics Linguistics, College of Foreign Languages, Hunan University, Changsha, Hunan, People's Republic of China
31
An Artificial Intelligence-based Support Tool for Automation and Standardisation of Gleason Grading in Prostate Biopsies. Eur Urol Focus 2020; 7:995-1001. [PMID: 33303404 DOI: 10.1016/j.euf.2020.11.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 10/15/2020] [Accepted: 11/14/2020] [Indexed: 11/21/2022]
Abstract
BACKGROUND Gleason grading is the standard diagnostic method for prostate cancer and is essential for determining prognosis and treatment. The dearth of expert pathologists, the inter- and intraobserver variability, as well as the labour intensity of Gleason grading all necessitate the development of a user-friendly tool for robust standardisation. OBJECTIVE To develop an artificial intelligence (AI) algorithm, based on machine learning and convolutional neural networks, as a tool for improved standardisation in Gleason grading in prostate cancer biopsies. DESIGN, SETTING, AND PARTICIPANTS A total of 698 prostate biopsy sections from 174 patients were used for training. The training sections were annotated by two senior consultant pathologists. The final algorithm was tested on 37 biopsy sections from 21 patients, with digitised slide images from two different scanners. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Correlation, sensitivity, and specificity parameters were calculated. RESULTS AND LIMITATIONS The algorithm shows high accuracy in detecting cancer areas (sensitivity: 100%, specificity: 68%). Compared with the pathologists, the algorithm also performed well in detecting cancer areas (intraclass correlation coefficient [ICC]: 0.99) and assigning the Gleason patterns correctly: Gleason patterns 3 and 4 (ICC: 0.96 and 0.94, respectively), and to a lesser extent, Gleason pattern 5 (ICC: 0.82). Similar results were obtained using two different scanners. CONCLUSIONS Our AI-based algorithm can reliably detect prostate cancer and quantify the Gleason patterns in core needle biopsies, with similar accuracy as pathologists. The results are reproducible on images from different scanners with a proven low level of intraobserver variability. We believe that this AI tool could be regarded as an efficient and interactive tool for pathologists. PATIENT SUMMARY We developed a sensitive artificial intelligence tool for prostate biopsies, which detects and grades cancer with similar accuracy to pathologists. This tool holds promise to improve the diagnosis of prostate cancer.
32
Ali T, Masood K, Irfan M, Draz U, Nagra AA, Asif M, Alshehri BM, Glowacz A, Tadeusiewicz R, Mahnashi MH, Yasin S. Multistage Segmentation of Prostate Cancer Tissues Using Sample Entropy Texture Analysis. ENTROPY 2020; 22:e22121370. [PMID: 33279915 PMCID: PMC7761953 DOI: 10.3390/e22121370] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Revised: 11/24/2020] [Accepted: 12/01/2020] [Indexed: 12/12/2022]
Abstract
In this study, a multistage segmentation technique is proposed that identifies cancerous cells in prostate tissue samples. The benign areas of the tissue are distinguished from the cancerous regions using the texture of glands, which is modeled with wavelet packet features along with sample entropy values. In the multistage segmentation process, the mean-shift algorithm is first applied to the pre-processed images to perform a coarse segmentation of the tissue. Wavelet packets are employed in the second stage to obtain fine details of the structured shape of glands. Finally, the texture of the gland is modeled by the sample entropy values, which distinguish epithelial regions from stroma patches. Although the proposed algorithm has three stages, computation is fast because wavelet packet features and sample entropy values robustly model the required regions of interest. A comparative analysis with other state-of-the-art texture segmentation techniques is presented, and Dice ratios are computed for the comparison. Our algorithm not only outperforms the other techniques but, by introducing sample entropy features, also identifies cancerous tissue regions with 90% classification accuracy, which shows the robustness of the proposed algorithm.
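Sample entropy, the texture statistic at the heart of the third stage, can be sketched directly; the sequence here would be, for example, intensities along a scan line of a patch, and the parameters m and r follow common defaults rather than the paper's settings.

```python
# Sketch of sample entropy for a 1-D sequence: the negative log ratio of
# template matches of length m+1 to matches of length m.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                       # common tolerance choice
    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
            count += int(np.sum(dist <= r)) - 1                      # drop self-match
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Higher values indicate more irregular texture, which is the kind of contrast used here to tell epithelium apart from smoother stroma.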
Affiliation(s)
- Tariq Ali
- Department of Computer Science, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan;
| | - Khalid Masood
- Department of Computer Science, Lahore Garrison University, Lahore 54792, Pakistan; (K.M.); (A.A.N.); (M.A.)
| | - Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Correspondence: (M.I.); (U.D.); (A.G.)
| | - Umar Draz
- Department of Computer Science, University of Sahiwal, Sahiwal, Punjab 57000, Pakistan
- Correspondence: (M.I.); (U.D.); (A.G.)
| | - Arfan Ali Nagra
- Department of Computer Science, Lahore Garrison University, Lahore 54792, Pakistan; (K.M.); (A.A.N.); (M.A.)
| | - Muhammad Asif
- Department of Computer Science, Lahore Garrison University, Lahore 54792, Pakistan; (K.M.); (A.A.N.); (M.A.)
| | - Bandar M. Alshehri
- Department of Clinical Laboratory, Faculty of Applied Medical Sciences, Najran University, P.O. Box 1988, Najran 61441, Saudi Arabia;
| | - Adam Glowacz
- Department of Automatic Control and Robotics, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Kraków, Poland
- Correspondence: (M.I.); (U.D.); (A.G.)
| | - Ryszard Tadeusiewicz
- Department of Biocybernetics and Biomedical Engineering, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Kraków, Poland;
| | - Mater H. Mahnashi
- Department of Medicinal Chemistry, Pharmacy School, Najran University, Najran 61441, Saudi Arabia;
| | - Sana Yasin
- Department of Computer Science, University of Okara, Okara 56130, Pakistan;
33
Yan C, Nakane K, Wang X, Fu Y, Lu H, Fan X, Feldman MD, Madabhushi A, Xu J. Automated gleason grading on prostate biopsy slides by statistical representations of homology profile. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 194:105528. [PMID: 32470903 PMCID: PMC8153074 DOI: 10.1016/j.cmpb.2020.105528] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Revised: 04/13/2020] [Accepted: 04/30/2020] [Indexed: 05/03/2023]
Abstract
BACKGROUND AND OBJECTIVE Gleason grading system is currently the clinical gold standard for determining prostate cancer aggressiveness. Prostate cancer is typically classified into one of 5 different categories with 1 representing the most indolent disease and 5 reflecting the most aggressive disease. Grades 3 and 4 are the most common and difficult patterns to be discriminated in clinical practice. Even though the degree of gland differentiation is the strongest determinant of Gleason grade, manual grading is subjective and is hampered by substantial inter-reader disagreement, especially with regard to intermediate grade groups. METHODS To capture the topological characteristics and the degree of connectivity between nuclei around the gland, the concept of Homology Profile (HP) for prostate cancer grading is presented in this paper. HP is an algebraic tool, whereby, certain algebraic invariants are computed based on the structure of a topological space. We utilized the Statistical Representation of Homology Profile (SRHP) features to quantify the extent of glandular differentiation. The quantitative characteristics which represent the image patch are fed into a supervised classifier model for discrimination of grade patterns 3 and 4. RESULTS On the basis of the novel homology profile, we evaluated 43 digitized images of prostate biopsy slides annotated for regions corresponding to Grades 3 and 4. The quantitative patch-level evaluation results showed that our approach achieved an Area Under Curve (AUC) of 0.96 and an accuracy of 0.89 in terms of discriminating Grade 3 and 4 patches. Our approach was found to be superior to comparative methods including handcrafted cellular features, Stacked Sparse Autoencoder (SSAE) algorithm and end-to-end supervised learning method (DLGg). Also, slide-level quantitative and qualitative evaluation results reflect the ability of our approach in discriminating Gleason Grade 3 from 4 patterns on H&E tissue images. CONCLUSIONS We presented a novel Statistical Representation of Homology Profile (SRHP) approach for automated Gleason grading on prostate biopsy slides. The most discriminating topological descriptions of cancerous regions for grade 3 and 4 in prostate cancer were identified. Moreover, these characteristics of homology profile are interpretable, visually meaningful and highly consistent with the rubric employed by pathologists for the task of Gleason grading.
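A rough feel for a homology profile can be given by sweeping a binarization threshold over a patch and tracking the Betti numbers b0 (connected components) and b1 (holes, recovered via the Euler characteristic); the summary statistics at the end stand in for the SRHP features and are illustrative assumptions.

```python
# Sketch of a homology-profile style feature for a grayscale patch in [0, 1].
import numpy as np
from skimage.measure import label, euler_number

def homology_profile(gray_patch, thresholds=np.linspace(0.1, 0.9, 17)):
    b0s, b1s = [], []
    for t in thresholds:
        binary = gray_patch < t                      # darker pixels ~ nuclei
        b0 = int(label(binary, connectivity=2).max())
        chi = euler_number(binary, connectivity=2)   # chi = b0 - b1 in 2-D
        b0s.append(b0)
        b1s.append(b0 - chi)
    profile = np.array([b0s, b1s], dtype=float)
    # A simple statistical representation of the two profile curves.
    return np.concatenate([profile.mean(axis=1),
                           profile.std(axis=1),
                           profile.max(axis=1)])
```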
Affiliation(s)
- Chaoyang Yan
- School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China; Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
| | - Kazuaki Nakane
- Department of Molecular Pathology, Osaka University Graduate School of Medicine, Division of Health Science, Osaka 565-0871, Japan
| | - Xiangxue Wang
- Dept. of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA
| | - Yao Fu
- Dept. of Pathology, the affiliated Drum Tower Hospital, Nanjing University Medical School, 210008, China
| | - Haoda Lu
- School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China; Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
| | - Xiangshan Fan
- Dept. of Pathology, the affiliated Drum Tower Hospital, Nanjing University Medical School, 210008, China
| | - Michael D Feldman
- Division of Surgical Pathology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
| | - Anant Madabhushi
- Dept. of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA; Louis Stokes Cleveland Veterans Medical Center, Cleveland, OH 44106
| | - Jun Xu
- School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China; Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China.
34
Rakha EA, Toss M, Shiino S, Gamble P, Jaroensri R, Mermel CH, Chen PHC. Current and future applications of artificial intelligence in pathology: a clinical perspective. J Clin Pathol 2020; 74:409-414. [PMID: 32763920 DOI: 10.1136/jclinpath-2020-206908] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 07/07/2020] [Indexed: 12/17/2022]
Abstract
During the last decade, a dramatic rise in the development and application of artificial intelligence (AI) tools for use in pathology services has occurred. This trend is often expected to continue and reshape the field of pathology in the coming years. The deployment of computational pathology and applications of AI tools can be considered as a paradigm shift that will change pathology services, making them more efficient and capable of meeting the needs of this era of precision medicine. Despite the success of AI models, the translational process from discovery to clinical applications has been slow. The gap between self-contained research and clinical environment may be too wide and has been largely neglected. In this review, we cover the current and prospective applications of AI in pathology. We examine its applications in diagnosis and prognosis, and we offer insights for considerations that could improve clinical applicability of these tools. Then, we discuss its potential to improve workflow efficiency, and its benefits in pathologist education. Finally, we review the factors that could influence adoption in clinical practices and the associated regulatory processes.
Affiliation(s)
- Emad A Rakha
- Histopathology, University of Nottingham School of Medicine, Nottingham, UK
| | - Michael Toss
- Histopathology, University of Nottingham School of Medicine, Nottingham, UK
| | - Sho Shiino
- Histopathology, University of Nottingham School of Medicine, Nottingham, UK
| | - Paul Gamble
- Google Health, Google, Palo Alto, California, USA
35
Lew M, Wilbur DC, Pantanowitz L. Computational Cytology: Lessons Learned from Pap Test Computer-Assisted Screening. Acta Cytol 2020; 65:286-300. [PMID: 32694246 DOI: 10.1159/000508629] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Accepted: 05/13/2020] [Indexed: 12/12/2022]
Abstract
BACKGROUND In the face of rapid technological advances in computational cytology including artificial intelligence (AI), optimization of its application to clinical practice would benefit from reflection on the lessons learned from the decades-long journey in the development of computer-assisted Pap test screening. SUMMARY The initial driving force for automated screening in cytology was the overwhelming number of Pap tests requiring manual screening, leading to workflow backlogs and incorrect diagnoses. Several companies invested resources to address these concerns utilizing different specimen processing techniques and imaging systems. However, not all companies were commercially prosperous. Successful implementation of this new technology required viable use cases, improved clinical outcomes, and an acceptable means of integration into the daily workflow of cytopathology laboratories. Several factors including supply and demand, Food and Drug Administration (FDA) oversight, reimbursement, overcoming learning curves and workflow changes associated with the adoption of new technology, and cytologist apprehension, played a significant role in either promoting or preventing the widespread adoption of automated screening technologies. Key Messages: Any change in health care, particularly those involving new technology that impacts clinical workflow, is bound to have its successes and failures. However, perseverance through learning curves, optimizing workflow processes, improvements in diagnostic accuracy, and regulatory and financial approval can facilitate widespread adoption of these technologies. Given their history with successfully implementing automated Pap test screening, cytologists are uniquely positioned to not only help with the development of AI technology for other areas of pathology, but also to guide how they are utilized, regulated, and managed.
Affiliation(s)
- Madelyn Lew
- Department of Pathology, University of Michigan, Ann Arbor, Michigan, USA,
| | - David C Wilbur
- Department of Pathology, Harvard Medical School, Boston, Massachusetts, USA
| | - Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
36
Han W, Johnson C, Warner A, Gaed M, Gomez JA, Moussa M, Chin J, Pautler S, Bauman G, Ward AD. Automatic cancer detection on digital histopathology images of mid-gland radical prostatectomy specimens. J Med Imaging (Bellingham) 2020; 7:047501. [PMID: 32715024 DOI: 10.1117/1.jmi.7.4.047501] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Accepted: 07/06/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: Automatic cancer detection on radical prostatectomy (RP) sections facilitates graphical and quantitative surgical pathology reporting, which can potentially benefit postsurgery follow-up care and treatment planning. It can also support imaging validation studies using a histologic reference standard and pathology research studies. This problem is challenging due to the large sizes of digital histopathology whole-mount whole-slide images (WSIs) of RP sections and staining variability across different WSIs. Approach: We proposed a calibration-free adaptive thresholding algorithm, which compensates for staining variability and yields consistent tissue component maps (TCMs) of the nuclei, lumina, and other tissues. We used and compared three machine learning methods for classifying each cancer versus noncancer region of interest (ROI) throughout each WSI: (1) conventional machine learning methods and 14 texture features extracted from TCMs, (2) transfer learning with pretrained AlexNet fine-tuned by TCM ROIs, and (3) transfer learning with pretrained AlexNet fine-tuned with raw image ROIs. Results: The three methods yielded areas under the receiver operating characteristic curve of 0.96, 0.98, and 0.98, respectively, in leave-one-patient-out cross validation using 1.3 million ROIs from 286 mid-gland whole-mount WSIs from 68 patients. Conclusion: Transfer learning with the use of TCMs demonstrated state-of-the-art overall performance and is more stable with respect to sample size across different tissue types. For the tissue types involving Gleason 5 (most aggressive) cancer, it achieved the best performance compared to the other tested methods. This tool can be translated to clinical workflow to assist graphical and quantitative pathology reporting for surgical specimens upon further multicenter validation.
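The tissue component map idea can be imitated with per-image thresholds on stain-separated channels; colour deconvolution plus Otsu thresholds below are generic stand-ins for the paper's calibration-free adaptive thresholding, so the exact rules differ.

```python
# Sketch of a three-class tissue component map (1 = nuclei, 2 = lumina,
# 0 = other tissue) from an RGB H&E patch.
import numpy as np
from skimage.color import rgb2hed, rgb2gray
from skimage.filters import threshold_otsu

def tissue_component_map(rgb_patch):
    hematoxylin = rgb2hed(rgb_patch)[..., 0]             # nuclear stain channel
    gray = rgb2gray(rgb_patch)
    nuclei = hematoxylin > threshold_otsu(hematoxylin)   # strongly stained pixels
    lumina = gray > threshold_otsu(gray)                 # bright, unstained areas
    tcm = np.zeros(rgb_patch.shape[:2], dtype=np.uint8)
    tcm[lumina] = 2
    tcm[nuclei] = 1                                      # nuclei take precedence
    return tcm
```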
Affiliation(s)
- Wenchao Han
- Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada; Lawson Health Research Institute, London, Ontario, Canada; Western University, Department of Medical Biophysics, London, Ontario, Canada
| | - Carol Johnson
- Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada
| | - Andrew Warner
- Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada
| | - Mena Gaed
- Western University, Department of Pathology and Laboratory Medicine, London, Ontario, Canada
| | - Jose A Gomez
- Western University, Department of Pathology and Laboratory Medicine, London, Ontario, Canada
| | - Madeleine Moussa
- Western University, Department of Pathology and Laboratory Medicine, London, Ontario, Canada
| | - Joseph Chin
- Western University, Department of Oncology, London, Ontario, Canada; Western University, Department of Surgery, London, Ontario, Canada
| | - Stephen Pautler
- Western University, Department of Oncology, London, Ontario, Canada; Western University, Department of Surgery, London, Ontario, Canada
| | - Glenn Bauman
- Lawson Health Research Institute, London, Ontario, Canada; Western University, Department of Medical Biophysics, London, Ontario, Canada; Western University, Department of Oncology, London, Ontario, Canada
| | - Aaron D Ward
- Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada; Lawson Health Research Institute, London, Ontario, Canada; Western University, Department of Medical Biophysics, London, Ontario, Canada; Western University, Department of Oncology, London, Ontario, Canada
37
Han W, Johnson C, Gaed M, Gómez JA, Moussa M, Chin JL, Pautler S, Bauman GS, Ward AD. Histologic tissue components provide major cues for machine learning-based prostate cancer detection and grading on prostatectomy specimens. Sci Rep 2020; 10:9911. [PMID: 32555410 PMCID: PMC7303108 DOI: 10.1038/s41598-020-66849-2] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Accepted: 05/19/2020] [Indexed: 11/10/2022] Open
Abstract
Automatically detecting and grading cancerous regions on radical prostatectomy (RP) sections facilitates graphical and quantitative pathology reporting, potentially benefitting post-surgery prognosis, recurrence prediction, and treatment planning after RP. Promising results for detecting and grading prostate cancer on digital histopathology images have been reported using machine learning techniques. However, the importance and applicability of those methods have not been fully investigated. We computed three-class tissue component maps (TCMs) from the images, where each pixel was labeled as nuclei, lumina, or other. We applied seven different machine learning approaches: three non-deep learning classifiers with features extracted from TCMs, and four deep learning, using transfer learning with the 1) TCMs, 2) nuclei maps, 3) lumina maps, and 4) raw images for cancer detection and grading on whole-mount RP tissue sections. We performed leave-one-patient-out cross-validation against expert annotations using 286 whole-slide images from 68 patients. For both cancer detection and grading, transfer learning using TCMs performed best. Transfer learning using nuclei maps yielded slightly inferior overall performance, but the best performance for classifying higher-grade cancer. This suggests that 3-class TCMs provide the major cues for cancer detection and grading primarily using nucleus features, which are the most important information for identifying higher-grade cancer.
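The transfer-learning arm of these comparisons amounts to fine-tuning a pretrained ImageNet backbone on TCM (or nuclei/lumina) patches; the snippet uses torchvision's AlexNet as an illustrative backbone with a two-class output, both assumptions rather than the exact setup.

```python
# Sketch of preparing a pretrained AlexNet for fine-tuning on tissue
# component map patches (cancer vs. non-cancer).
import torch.nn as nn
from torchvision import models

def build_patch_classifier(n_classes=2):
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Swap the final fully connected layer for the new task.
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, n_classes)
    return net
```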
Affiliation(s)
- Wenchao Han
- Baines Imaging Research Laboratory, London Regional Cancer Program, London, Ontario, Canada; Department of Medical Biophysics, University of Western Ontario, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada
| | - Carol Johnson
- Baines Imaging Research Laboratory, London Regional Cancer Program, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada
| | - Mena Gaed
- Department of Pathology and Laboratory Medicine, University of Western Ontario, London, Ontario, Canada
| | - José A Gómez
- Department of Pathology and Laboratory Medicine, University of Western Ontario, London, Ontario, Canada
| | - Madeleine Moussa
- Department of Pathology and Laboratory Medicine, University of Western Ontario, London, Ontario, Canada
| | - Joseph L Chin
- Department of Surgery, University of Western Ontario, London, Ontario, Canada; Department of Oncology, University of Western Ontario, London, Ontario, Canada
| | - Stephen Pautler
- Department of Surgery, University of Western Ontario, London, Ontario, Canada; Department of Oncology, University of Western Ontario, London, Ontario, Canada
| | - Glenn S Bauman
- Department of Medical Biophysics, University of Western Ontario, London, Ontario, Canada; Department of Oncology, University of Western Ontario, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada
| | - Aaron D Ward
- Baines Imaging Research Laboratory, London Regional Cancer Program, London, Ontario, Canada; Department of Medical Biophysics, University of Western Ontario, London, Ontario, Canada; Department of Oncology, University of Western Ontario, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada
38
Karimi D, Nir G, Fazli L, Black PC, Goldenberg L, Salcudean SE. Deep Learning-Based Gleason Grading of Prostate Cancer From Histopathology Images—Role of Multiscale Decision Aggregation and Data Augmentation. IEEE J Biomed Health Inform 2020; 24:1413-1426. [DOI: 10.1109/jbhi.2019.2944643] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
39
40
Ibrahim A, Gamble P, Jaroensri R, Abdelsamea MM, Mermel CH, Chen PHC, Rakha EA. Artificial intelligence in digital breast pathology: Techniques and applications. Breast 2020; 49:267-273. [PMID: 31935669 PMCID: PMC7375550 DOI: 10.1016/j.breast.2019.12.007] [Citation(s) in RCA: 101] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Accepted: 12/12/2019] [Indexed: 12/16/2022] Open
Abstract
Breast cancer is the most common cancer and second leading cause of cancer-related death worldwide. The mainstay of breast cancer workup is histopathological diagnosis - which guides therapy and prognosis. However, emerging knowledge about the complex nature of cancer and the availability of tailored therapies have exposed opportunities for improvements in diagnostic precision. In parallel, advances in artificial intelligence (AI) along with the growing digitization of pathology slides for the primary diagnosis are a promising approach to meet the demand for more accurate detection, classification and prediction of behaviour of breast tumours. In this article, we cover the current and prospective uses of AI in digital pathology for breast cancer, review the basics of digital pathology and AI, and outline outstanding challenges in the field.
Affiliation(s)
- Asmaa Ibrahim
- Department of Histopathology, Division of Cancer and Stem Cells, School of Medicine, The University of Nottingham and Nottingham University Hospitals NHS Trust, Nottingham City Hospital, Nottingham, NG5 1PB, UK
| | | | | | - Mohammed M Abdelsamea
- School of Computing and Digital Technology, Birmingham City University, Birmingham, UK
| | | | | | - Emad A Rakha
- Department of Histopathology, Division of Cancer and Stem Cells, School of Medicine, The University of Nottingham and Nottingham University Hospitals NHS Trust, Nottingham City Hospital, Nottingham, NG5 1PB, UK.
41
Barsoum I, Tawedrous E, Faragalla H, Yousef GM. Histo-genomics: digital pathology at the forefront of precision medicine. ACTA ACUST UNITED AC 2020; 6:203-212. [PMID: 30827078 DOI: 10.1515/dx-2018-0064] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Accepted: 09/28/2018] [Indexed: 12/26/2022]
Abstract
The toughest challenge OMICs face is that they provide extremely high molecular resolution but poor spatial information. Understanding the cellular/histological context of the overwhelming genetic data is critical for a full understanding of the clinical behavior of a malignant tumor. Digital pathology can add an extra layer of information to help visualize, in a spatial and microenvironmental context, the molecular information of cancer. Thus, histo-genomics provides a unique chance for data integration. In the era of precision medicine, a four-dimensional (4D) (temporal/spatial) analysis of cancer aided by digital pathology can be a critical step to understand the evolution/progression of different cancers and consequently tailor individual treatment plans. For instance, the integration of molecular biomarker expression into a three-dimensional (3D) image of a digitally scanned tumor can offer a better understanding of its subtype, behavior, host immune response and prognosis. Using advanced digital image analysis, a larger spectrum of parameters can be analyzed as potential predictors of clinical behavior. Correlation between morphological features and host immune response can also be performed, with therapeutic implications. Radio-histomics, or the interface of radiological images and histology, is another emerging and exciting field which encompasses the integration of radiological imaging with digital pathological images, genomics, and clinical data to portray a more holistic approach to understanding and treating disease. These advances in digital slide scanning are not without technical challenges, which will be addressed carefully in this review, with a quick peek at its future.
Affiliation(s)
- Ivraym Barsoum
- Department of Pathology and Molecular Medicine, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
| | - Eriny Tawedrous
- Department of Laboratory Medicine, and the Keenan Research Centre for Biomedical Science at the Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada
| | - Hala Faragalla
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada
| | - George M Yousef
- Department of Laboratory Medicine, and the Keenan Research Centre for Biomedical Science at the Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada; Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, 555 University Avenue, Toronto, ON M5G 1X8, Canada
42
Chen CM, Huang YS, Fang PW, Liang CW, Chang RF. A computer-aided diagnosis system for differentiation and delineation of malignant regions on whole-slide prostate histopathology image using spatial statistics and multidimensional DenseNet. Med Phys 2020; 47:1021-1033. [PMID: 31834623 DOI: 10.1002/mp.13964] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 11/26/2019] [Accepted: 12/04/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE Prostate cancer (PCa) is a major health concern in aging males, and proper management of the disease depends on accurately interpreting pathology specimens. However, reading prostatectomy histopathology slides, which is primarily done for staging, is usually time consuming and differs from reading small biopsy specimens, which is mainly used for diagnosis. Generally, each prostatectomy specimen generates tens of large tissue sections, and for each section the malignant region needs to be delineated to assess the amount of tumor and its burden. With the aim of reducing the workload of pathologists, in this study we focus on developing a computer-aided diagnosis (CAD) system based on a densely connected convolutional neural network (DenseNet) for whole-slide histopathology images to outline the malignant regions. METHODS We use an efficient color normalization process based on ranklet transformation to automatically correct the intensity of the images. Additionally, we use spatial probability to segment the tissue structure regions for different tissue recognition patterns. Based on the segmentation, we incorporate a multidimensional structure into DenseNet to determine whether a particular prostatic region is benign or malignant. RESULTS As demonstrated by the experimental results with a test set of 2,663 images from 32 whole-slide prostate histopathology images, our proposed system achieved average Dice coefficient, Jaccard similarity coefficient, and Boundary F1 scores of 0.726, 0.6306, and 0.5209, respectively. The accuracy, sensitivity, specificity, and area under the ROC curve (AUC) of the proposed classification method were 95.0% (2544/2663), 96.7% (1210/1251), 93.9% (1334/1412), and 0.9831, respectively. DISCUSSION We provide a detailed discussion of how the proposed system demonstrates considerable improvement compared with similar methods reported in previous research, as well as how it can be used for delineating malignant regions.
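For reference, the overlap and classification metrics quoted above can be computed as follows; these are plain definitions, not code from the study.

```python
# Sketch of Dice/Jaccard overlap for binary masks and accuracy, sensitivity,
# and specificity from binary labels.
import numpy as np

def dice_jaccard(pred_mask, true_mask):
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    dice = 2 * inter / (pred.sum() + true.sum() + 1e-8)
    jaccard = inter / (np.logical_or(pred, true).sum() + 1e-8)
    return dice, jaccard

def sens_spec_acc(pred_labels, true_labels):
    tp = np.sum((pred_labels == 1) & (true_labels == 1))
    tn = np.sum((pred_labels == 0) & (true_labels == 0))
    fp = np.sum((pred_labels == 1) & (true_labels == 0))
    fn = np.sum((pred_labels == 0) & (true_labels == 1))
    sensitivity = tp / (tp + fn + 1e-8)
    specificity = tn / (tn + fp + 1e-8)
    accuracy = (tp + tn) / (tp + tn + fp + fn + 1e-8)
    return sensitivity, specificity, accuracy
```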
Affiliation(s)
- Chiao-Min Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Pei-Wei Fang
- Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
| | - Cher-Wei Liang
- Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan; School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan; Graduate Institute of Pathology, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan
43
44
Automated acquisition of explainable knowledge from unannotated histopathology images. Nat Commun 2019; 10:5642. [PMID: 31852890 PMCID: PMC6920352 DOI: 10.1038/s41467-019-13647-8] [Citation(s) in RCA: 83] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2018] [Accepted: 11/19/2019] [Indexed: 01/07/2023] Open
Abstract
Deep learning algorithms have been successfully used in medical image classification. In the next stage, the technology of acquiring explainable knowledge from medical images is highly desired. Here we show that a deep learning algorithm enables automated acquisition of explainable features from histopathology images free of diagnostic annotation. We compare the prediction accuracy of prostate cancer recurrence using our algorithm-generated features with that of diagnosis by expert pathologists using established criteria, on 13,188 whole-mount pathology images consisting of over 86 billion image patches. Our method not only reveals findings established by humans but also features that had not previously been recognized, and it shows higher accuracy than humans in prognostic prediction. Combining both our algorithm-generated features and human-established criteria predicts recurrence more accurately than either method alone. We confirm the robustness of our method using external validation datasets comprising 2,276 pathology images. This study opens up the field of machine learning analysis for discovering uncharted knowledge. Technologies for acquiring explainable features from medical images need further development. Here, the authors report deep learning-based automated acquisition of explainable features from pathology images and show the higher accuracy of their method compared with pathologist-based diagnosis of prostate cancer recurrence.
45
Bazaga A, Roldán M, Badosa C, Jiménez-Mallebrera C, Porta JM. A Convolutional Neural Network for the automatic diagnosis of collagen VI-related muscular dystrophies. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105772] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
46
Bera K, Schalper KA, Rimm DL, Velcheti V, Madabhushi A. Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology. Nat Rev Clin Oncol 2019; 16:703-715. [PMID: 31399699 PMCID: PMC6880861 DOI: 10.1038/s41571-019-0252-y] [Citation(s) in RCA: 792] [Impact Index Per Article: 132.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/04/2019] [Indexed: 02/06/2023]
Abstract
In the past decade, advances in precision oncology have resulted in an increased demand for predictive assays that enable the selection and stratification of patients for treatment. The enormous divergence of signalling and transcriptional networks mediating the crosstalk between cancer, stromal and immune cells complicates the development of functionally relevant biomarkers based on a single gene or protein. However, the result of these complex processes can be uniquely captured in the morphometric features of stained tissue specimens. The possibility of digitizing whole-slide images of tissue has led to the advent of artificial intelligence (AI) and machine learning tools in digital pathology, which enable mining of subvisual morphometric phenotypes and might, ultimately, improve patient management. In this Perspective, we critically evaluate various AI-based computational approaches for digital pathology, focusing on deep neural networks and 'hand-crafted' feature-based methodologies. We aim to provide a broad framework for incorporating AI and machine learning tools into clinical oncology, with an emphasis on biomarker development. We discuss some of the challenges relating to the use of AI, including the need for well-curated validation datasets, regulatory approval and fair reimbursement strategies. Finally, we present potential future opportunities for precision oncology.
Affiliation(s)
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Kurt A Schalper
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
| | - David L Rimm
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
| | - Vamsidhar Velcheti
- Thoracic Medical Oncology, Perlmutter Cancer Center, New York University, New York, NY, USA
| | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA.
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA.
47
Abstract
Many classification algorithms aim to minimize just their training error count; however, it is often desirable to minimize a more general cost metric, where distinct instances have different costs. In this paper, an instance-based cost-sensitive, Bayesian-consistent version of the exponential loss function is proposed. Using the modified loss function, instance-based cost-sensitive extensions of AdaBoost, RealBoost and GentleBoost are derived, termed ICSAdaBoost, ICSRealBoost and ICSGentleBoost, respectively. In this research, a new instance-based cost generation method is also proposed, rather than leaving this expensive process to experts. Thus, each sample takes two cost values: a class cost and a sample cost. The first cost is equally assigned to all samples of each class, while the second cost is generated according to the probability of each sample within its class probability density function. Experimental results of the proposed schemes imply a 12% enhancement in terms of F-measure and 13% in cost-per-sample over a variety of UCI datasets, compared with state-of-the-art methods. The significant superiority of the proposed method is supported by applying paired t-tests to the results.
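One round of an instance-cost-sensitive boosting update conveys the core idea: the exponential weight update is scaled by each sample's own cost, so misclassified high-cost samples are emphasized most. This is a generic illustration in the AdaC2 style, not the paper's exact ICSAdaBoost derivation.

```python
# Sketch of one round of an instance-cost-sensitive AdaBoost-style update.
# y is in {-1, +1}; w and costs are per-sample weights and costs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def icsboost_round(X, y, w, costs):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    # Cost-weighted exponential update: misclassified, high-cost samples
    # receive the largest weight increase.
    w = w * np.exp(-alpha * y * pred * costs)
    return stump, alpha, w / w.sum()
```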
Collapse
Affiliation(s)
- Ensieh Sharifnia
- CSE & IT Dept., School of Electrical and Computer Engineering, Shiraz University, Campus#2, MollaSadra St., Shiraz 71348-51154, Iran
| | - Reza Boostani
- CSE & IT Dept., School of Electrical and Computer Engineering, Shiraz University, Campus#2, MollaSadra St., Shiraz 71348-51154, Iran
| |
Collapse
|
48
|
Sari CT, Gunduz-Demir C. Unsupervised Feature Extraction via Deep Learning for Histopathological Classification of Colon Tissue Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1139-1149. [PMID: 30403624 DOI: 10.1109/tmi.2018.2879369] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Histopathological examination is today's gold standard for cancer diagnosis. However, the task is time consuming and prone to error because it requires detailed visual inspection and interpretation by a pathologist. Digital pathology aims to alleviate these problems by providing computerized methods that quantitatively analyze digitized histopathological tissue images. The performance of these methods relies mainly on the features they use, so their success depends strictly on how well those features quantify the histopathology domain. With this motivation, this paper presents a new unsupervised feature extractor for the effective representation and classification of histopathological tissue images. The extractor makes three main contributions. First, it identifies salient subregions in an image, based on domain-specific prior knowledge, and quantifies the image using only the characteristics of these subregions rather than those of all image locations. Second, it introduces a deep learning-based technique that quantizes the salient subregions using features learned directly from image data and uses the distribution of these quantizations for image representation and classification. To this end, the technique constructs a deep belief network of restricted Boltzmann machines (RBMs), takes the activation values of the hidden units of the final RBM as features, and learns the quantizations by clustering these features in an unsupervised way. Third, this extractor is the first successful application of RBMs to histopathological image analysis. Experiments on microscopic colon tissue images show that the proposed feature extractor yields more accurate classification results than its counterparts.
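Below is a simplified sketch of the pipeline the abstract describes, with a single scikit-learn BernoulliRBM standing in for the paper's deep belief network of stacked RBMs and random vectors standing in for the salient subregions; these substitutions, the cluster count, and the histogram representation are assumptions for illustration only.

```python
# Unsupervised feature learning + quantization for image-level representation (simplified).
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_images, subregions_per_image, dim = 20, 50, 64
# Flattened stand-in "salient subregions" with values in [0, 1].
subregions = rng.random((n_images * subregions_per_image, dim))

# Learn hidden-unit activations as unsupervised features.
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
hidden = rbm.fit_transform(subregions)

# Quantize the learned features by clustering: one "visual word" per subregion.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(hidden)
words = kmeans.labels_.reshape(n_images, subregions_per_image)

# Represent each image by the distribution of its quantizations (normalized histogram).
image_features = np.stack([np.bincount(w, minlength=8) for w in words]).astype(float)
image_features /= image_features.sum(axis=1, keepdims=True)
print(image_features.shape)   # (20, 8) image-level descriptors, ready for a classifier
```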
Collapse
|
49
|
Li W, Li J, Sarma KV, Ho KC, Shen S, Knudsen BS, Gertych A, Arnold CW. Path R-CNN for Prostate Cancer Diagnosis and Gleason Grading of Histological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:945-954. [PMID: 30334752 PMCID: PMC6497079 DOI: 10.1109/tmi.2018.2875868] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Prostate cancer is the most common and second most deadly cancer in men in the United States. The classification of prostate cancers by Gleason grading of histological images is important for risk assessment and treatment planning. Here, we demonstrate a new region-based convolutional neural network framework for multi-task prediction that uses an epithelial network head and a grading network head. Compared with a single-task model, the multi-task model can provide complementary contextual information, which contributes to better performance. The model achieves state-of-the-art performance on epithelial cell detection and Gleason grading simultaneously. Using fivefold cross-validation, it achieved an epithelial cell detection accuracy of 99.07% with an average area under the curve of 0.998. For Gleason grading, it obtained a mean intersection over union of 79.56% and an overall pixel accuracy of 89.40%.
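The sketch below is not the published Path R-CNN architecture (which builds on a region-based detection framework); it is a minimal, hypothetical two-head model showing the multi-task idea the abstract describes: a shared convolutional trunk feeding one head for epithelium prediction and another for pixel-wise Gleason grading, trained with a combined loss.

```python
# Minimal multi-task model: shared trunk, epithelial head + grading head (illustrative only).
import torch
import torch.nn as nn

class TwoHeadModel(nn.Module):
    def __init__(self, n_grades: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(                       # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.epithelial_head = nn.Conv2d(64, 1, 1)        # per-pixel epithelium logit
        self.grading_head = nn.Conv2d(64, n_grades, 1)    # per-pixel grade logits

    def forward(self, x):
        feats = self.trunk(x)
        return self.epithelial_head(feats), self.grading_head(feats)

model = TwoHeadModel()
x = torch.randn(2, 3, 128, 128)                           # two stand-in H&E patches
epi_logits, grade_logits = model(x)

# Joint training combines the two task losses (the weighting is an arbitrary choice here).
epi_target = torch.randint(0, 2, (2, 1, 128, 128)).float()
grade_target = torch.randint(0, 4, (2, 128, 128))
loss = nn.BCEWithLogitsLoss()(epi_logits, epi_target) \
     + 1.0 * nn.CrossEntropyLoss()(grade_logits, grade_target)
print(epi_logits.shape, grade_logits.shape, float(loss))
```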
Collapse
|
50
|
Nir G, Karimi D, Goldenberg SL, Fazli L, Skinnider BF, Tavassoli P, Turbin D, Villamil CF, Wang G, Thompson DJS, Black PC, Salcudean SE. Comparison of Artificial Intelligence Techniques to Evaluate Performance of a Classifier for Automatic Grading of Prostate Cancer From Digitized Histopathologic Images. JAMA Netw Open 2019; 2:e190442. [PMID: 30848813 PMCID: PMC6484626 DOI: 10.1001/jamanetworkopen.2019.0442] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/03/2022] Open
Abstract
IMPORTANCE Proper evaluation of the performance of artificial intelligence techniques in the analysis of digitized medical images is paramount for the adoption of such techniques by the medical community and regulatory agencies. OBJECTIVES To compare several cross-validation (CV) approaches to evaluate the performance of a classifier for automatic grading of prostate cancer in digitized histopathologic images and compare the performance of the classifier when trained using data from 1 expert and multiple experts. DESIGN, SETTING, AND PARTICIPANTS This quality improvement study used tissue microarray data (333 cores) from 231 patients who underwent radical prostatectomy at the Vancouver General Hospital between June 27, 1997, and June 7, 2011. Digitized images of tissue cores were annotated by 6 pathologists for 4 classes (benign and Gleason grades 3, 4, and 5) between December 12, 2016, and October 5, 2017. Patches of 192 µm² were extracted from these images. There was no overlap between patches. A deep learning classifier based on convolutional neural networks was trained to predict a class label from among the 4 classes (benign and Gleason grades 3, 4, and 5) for each image patch. The classification performance was evaluated in leave-patches-out CV, leave-cores-out CV, and leave-patients-out 20-fold CV. The analysis was performed between November 15, 2018, and January 1, 2019. MAIN OUTCOMES AND MEASURES The classifier performance was evaluated by its accuracy, sensitivity, and specificity in detection of cancer (benign vs cancer) and in low-grade vs high-grade differentiation (Gleason grade 3 vs grades 4-5). The statistical significance analysis was performed using the McNemar test. The agreement level between pathologists and the classifier was quantified using a quadratic-weighted κ statistic. RESULTS On 333 tissue microarray cores from 231 participants with prostate cancer (mean [SD] age, 63.2 [6.3] years), 20-fold leave-patches-out CV resulted in mean (SD) accuracy of 97.8% (1.2%), sensitivity of 98.5% (1.0%), and specificity of 97.5% (1.2%) for classifying benign patches vs cancerous patches. By contrast, 20-fold leave-patients-out CV resulted in mean (SD) accuracy of 85.8% (4.3%), sensitivity of 86.3% (4.1%), and specificity of 85.5% (7.2%). Similarly, 20-fold leave-cores-out CV resulted in mean (SD) accuracy of 86.7% (3.7%), sensitivity of 87.2% (4.0%), and specificity of 87.7% (5.5%). Results of McNemar tests showed that the leave-patches-out CV accuracy, sensitivity, and specificity were significantly higher than those for both leave-patients-out CV and leave-cores-out CV. Similar results were observed for classifying low-grade cancer vs high-grade cancer. When trained on a single expert, the overall agreement in grading between pathologists and the classifier ranged from 0.38 to 0.58; when trained using the majority vote among all experts, it was 0.60. CONCLUSIONS AND RELEVANCE Results of this study suggest that in prostate cancer classification from histopathologic images, patch-wise CV and single-expert training and evaluation may lead to a biased estimation of the classifier's performance. To allow reproducibility and facilitate comparison between automatic classification methods, studies in the field should evaluate their performance using patient-based CV and multiexpert data. Some of these conclusions may be generalizable to other histopathologic applications and to other applications of machine learning in medicine.
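The sketch below illustrates the study's central evaluation point with standard scikit-learn tools rather than the authors' pipeline: patch-level ("leave-patches-out") cross-validation lets patches from the same patient land in both training and test folds, whereas grouping by patient (or core) keeps them apart. The synthetic data, patient IDs, and logistic-regression classifier are placeholders; the quadratic-weighted kappa mirrors the agreement metric reported.

```python
# Patch-level vs patient-level cross-validation, plus quadratic-weighted kappa (illustrative).
import numpy as np
from sklearn.model_selection import KFold, GroupKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_patients, patches_per_patient = 40, 25
patients = np.repeat(np.arange(n_patients), patches_per_patient)
# Patch features are correlated within a patient, so patch-level CV leaks patient identity.
patient_effect = rng.normal(0, 1, (n_patients, 5))[patients]
X = patient_effect + rng.normal(0, 0.5, (len(patients), 5))
y = (rng.random(n_patients) > 0.5).astype(int)[patients]            # one label per patient

clf = LogisticRegression(max_iter=1000)
patch_cv = cross_val_score(clf, X, y, cv=KFold(n_splits=20, shuffle=True, random_state=0))
patient_cv = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=20), groups=patients)
print(f"leave-patches-out accuracy:  {patch_cv.mean():.3f}")         # optimistically biased
print(f"leave-patients-out accuracy: {patient_cv.mean():.3f}")       # more realistic

# Quadratic-weighted kappa, as used to quantify pathologist-classifier agreement on grades.
rater_a = rng.integers(0, 4, 100)
rater_b = np.clip(rater_a + rng.integers(-1, 2, 100), 0, 3)
print("quadratic-weighted kappa:", cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```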
Collapse
Affiliation(s)
- Guy Nir
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Davood Karimi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
| | - S. Larry Goldenberg
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Ladan Fazli
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Brian F. Skinnider
- Department of Pathology and Laboratory Medicine, Vancouver General Hospital, Vancouver, British Columbia, Canada
- British Columbia Cancer Agency, Vancouver, British Columbia, Canada
| | - Peyman Tavassoli
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Richmond Hospital, Vancouver Coastal Health, Richmond, British Columbia, Canada
| | - Dmitry Turbin
- Department of Pathology and Laboratory Medicine, Vancouver General Hospital, Vancouver, British Columbia, Canada
| | | | - Gang Wang
- British Columbia Cancer Agency, Vancouver, British Columbia, Canada
| | - Darby J. S. Thompson
- Emmes Canada, Burnaby, British Columbia, Canada
- Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Peter C. Black
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Septimiu E. Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|