1
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to the analysis and modelling of medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive computer-aided diagnosis (CAD) system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question about the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science, and we overview this cycle from the data-centric, model-centric, and application-centric perspectives. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model cards repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2
Niehues JM, Müller-Franzes G, Schirris Y, Wagner SJ, Jendrusch M, Kloor M, Pearson AT, Muti HS, Hewitt KJ, Veldhuizen GP, Zigutyte L, Truhn D, Kather JN. Using histopathology latent diffusion models as privacy-preserving dataset augmenters improves downstream classification performance. Comput Biol Med 2024; 175:108410. PMID: 38678938; DOI: 10.1016/j.compbiomed.2024.108410.
Abstract
Latent diffusion models (LDMs) have emerged as a state-of-the-art image generation method, outperforming earlier Generative Adversarial Networks (GANs) in training stability and image quality. In computational pathology, generative models are valuable for data sharing and data augmentation. However, the impact of LDM-generated images on histopathology tasks, compared with traditional GANs, has not been systematically studied. We trained three LDMs and a StyleGAN2 model on histology tiles from nine colorectal cancer (CRC) tissue classes. The LDMs include 1) a fine-tuned version of Stable Diffusion v1.4, 2) an LDM with a Kullback-Leibler (KL) autoencoder (KLF8-DM), and 3) an LDM with a vector-quantized (VQ) autoencoder (VQF8-DM). We assessed image quality through expert ratings, dimensionality reduction methods, distribution similarity measures, and the images' impact on training a multiclass tissue classifier. Additionally, we investigated image memorization in the KLF8-DM and StyleGAN2 models. All models produced high image quality, with the KLF8-DM achieving the best Fréchet Inception Distance (FID) and expert rating scores for complex tissue classes. For simpler classes, the VQF8-DM and StyleGAN2 models performed better. Image memorization was negligible for both the StyleGAN2 and KLF8-DM models. Classifiers trained on a mix of KLF8-DM-generated and real images achieved a 4% improvement in overall classification accuracy, highlighting the usefulness of these images for dataset augmentation. Our systematic study of generative methods showed that the KLF8-DM produces the highest-quality images with negligible image memorization. The higher classifier performance on the generatively augmented dataset suggests that this augmentation technique can be employed to enhance histopathology classifiers for various tasks.
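The Fréchet Inception Distance used above fits a Gaussian to Inception-v3 features of real and of generated images and compares the two fits. A minimal sketch of that comparison (random arrays stand in for the Inception features; this is illustrative, not the authors' code):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real, feats_gen):
    """FID between two feature sets of shape (n_samples, n_features)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the covariance product; drop tiny imaginary parts
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(c1 + c2 - 2.0 * covmean)

rng = np.random.default_rng(0)
x = rng.normal(size=(512, 8))
# Identical feature distributions give an FID of (numerically) zero
print(frechet_inception_distance(x, x))
```

Lower FID means the generated-image feature distribution sits closer to the real one, which is why the KLF8-DM's low FID is read as higher fidelity.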
Affiliation(s)
- Jan M Niehues
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Yoni Schirris
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany; Netherlands Cancer Institute, 1066 CX, Amsterdam, the Netherlands; University of Amsterdam, 1012 WP, Amsterdam, the Netherlands
- Sophia Janine Wagner
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany; Helmholtz Munich - German Research Center for Environment and Health, Munich, Germany; School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Michael Jendrusch
- Institute of Pathology, University Hospital Heidelberg, Heidelberg, Germany
- Matthias Kloor
- Institute of Pathology, University Hospital Heidelberg, Heidelberg, Germany
- Hannah Sophie Muti
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany; Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Katherine J Hewitt
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany; Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Gregory P Veldhuizen
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany; Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Laura Zigutyte
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany; Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, United Kingdom; Department of Medicine I, University Hospital Dresden, Dresden, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
3
Jensen MP, Qiang Z, Khan DZ, Stoyanov D, Baldeweg SE, Jaunmuktane Z, Brandner S, Marcus HJ. Artificial intelligence in histopathological image analysis of central nervous system tumours: A systematic review. Neuropathol Appl Neurobiol 2024; 50:e12981. PMID: 38738494; DOI: 10.1111/nan.12981.
Abstract
The convergence of digital pathology and artificial intelligence could assist histopathology image analysis by providing tools for rapid, automated morphological analysis. This systematic review explores the use of artificial intelligence for histopathological image analysis of digitised central nervous system (CNS) tumour slides. Comprehensive searches were conducted across EMBASE, Medline and the Cochrane Library up to June 2023 using relevant keywords. Sixty-eight suitable studies were identified and qualitatively analysed. The risk of bias was evaluated using the Prediction model Risk Of Bias Assessment Tool (PROBAST) criteria. All the studies were retrospective and preclinical. Gliomas were the most frequently analysed tumour type. The majority of studies used convolutional neural networks or support vector machines, and the most common goal of the models was tumour classification and/or grading from haematoxylin and eosin-stained slides. The majority of studies were conducted when legacy World Health Organisation (WHO) classifications were in place, which at the time relied predominantly on histological (morphological) features but have since been superseded by molecular advances. Overall, there was a high risk of bias in all studies analysed. Persistent issues included inadequate transparency in reporting the number of patients and/or images in the model development and testing cohorts, absence of external validation, and insufficient recognition of batch effects in multi-institutional datasets. Based on these findings, we outline practical recommendations for future work, including a framework for clinical implementation and, in particular, better informing the artificial intelligence community of the needs of the neuropathologist.
Affiliation(s)
- Melanie P Jensen
- Pathology Department, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
- Briscoe Lab, The Francis Crick Institute, London, UK
- Zekai Qiang
- School of Medicine and Population Health, University of Sheffield Medical School, Sheffield, UK
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
- Danail Stoyanov
- Department of Computer Science, University College London, London, UK
- Stephanie E Baldeweg
- Department of Diabetes and Endocrinology, University College London Hospitals, London, UK
- Centre for Obesity and Metabolism, Department of Experimental and Translational Medicine, Division of Medicine, University College London, London, UK
- Zane Jaunmuktane
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Department of Clinical and Movement Neurosciences, University College London Queen Square Institute of Neurology, London, UK
- Sebastian Brandner
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
4
Masayoshi K, Katada Y, Ozawa N, Ibuki M, Negishi K, Kurihara T. Deep learning segmentation of non-perfusion area from color fundus images and AI-generated fluorescein angiography. Sci Rep 2024; 14:10801. PMID: 38734727; PMCID: PMC11088618; DOI: 10.1038/s41598-024-61561-x.
Abstract
The non-perfusion area (NPA) of the retina is an important indicator of visual prognosis in patients with branch retinal vein occlusion (BRVO). However, the current method of evaluating NPA, fluorescein angiography (FA), is invasive and burdensome. In this study, we examined the use of deep learning models for detecting NPA in color fundus images, bypassing the need for FA, and we also investigated the utility of synthetic FA generated from color fundus images. The models were evaluated using the Dice score and Monte Carlo dropout uncertainty. We retrospectively collected 403 sets of color fundus and FA images from 319 BRVO patients and trained three deep learning models on FA, color fundus images, and synthetic FA. Although the FA model achieved the highest score, the other two models performed comparably; we found no statistically significant difference in median Dice scores between the models. However, the color fundus model showed significantly higher uncertainty than the other models (p < 0.05). In conclusion, deep learning models can detect NPAs from color fundus images with reasonable accuracy, though with somewhat less prediction stability. Synthetic FA stabilizes the prediction and reduces misleading uncertainty estimates by enhancing image quality.
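The two evaluation quantities named above, the Dice score and Monte Carlo dropout uncertainty, can be stated compactly. A minimal sketch with NumPy only; the tiny masks and the stack of stochastic predictions are illustrative stand-ins for real model output, not the study's pipeline:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def mc_dropout_uncertainty(prob_stack):
    """Per-pixel predictive standard deviation over T stochastic forward
    passes with dropout left on at test time; prob_stack is (T, H, W)."""
    return prob_stack.std(axis=0)

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(dice_score(pred, target), 3))  # 2*2 / (3+3) -> 0.667
```

A higher mean of the uncertainty map over a prediction flags the less stable model, which is how the color fundus model's instability would surface.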
Affiliation(s)
- Kanato Masayoshi
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Yusaku Katada
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Nobuhiro Ozawa
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Mari Ibuki
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Kazuno Negishi
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Toshihide Kurihara
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
5
Ma Y, Zhou W, Ma R, Wang E, Yang S, Tang Y, Zhang XP, Guan X. DOVE: Doodled vessel enhancement for photoacoustic angiography super resolution. Med Image Anal 2024; 94:103106. PMID: 38387244; DOI: 10.1016/j.media.2024.103106.
Abstract
Deep-learning-based super-resolution photoacoustic angiography (PAA) has emerged as a valuable tool for enhancing the resolution of blood vessel images and aiding in disease diagnosis. However, due to the scarcity of training samples, PAA super-resolution models do not generalize well, especially in the challenging in-vivo imaging of organs with deep tissue penetration. Furthermore, prolonged exposure to high laser intensity during image acquisition can lead to tissue damage and secondary infections. To address these challenges, we propose Doodled Vessel Enhancement (DOVE), an approach that utilizes hand-drawn doodles to train a PAA super-resolution model. With a training dataset of only 32 real PAA images, we construct a diffusion model that interprets hand-drawn doodles as low-resolution images. DOVE enables us to generate a large number of realistic PAA images, achieving a 49.375% fool rate even among experts in photoacoustic imaging. We then employ these generated images to train a self-similarity-based model for super-resolution. In cross-domain tests, our method, trained solely on generated images, achieves a structural similarity value of 0.8591, surpassing the scores of all other models trained with real high-resolution images. DOVE overcomes the limitation of insufficient training samples and unlocks the clinical application potential of super-resolution-based biomedical imaging.
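The structural similarity value reported in the cross-domain tests follows the SSIM formula. As an illustration, here is its global (single-window) form; note that libraries typically average SSIM over local sliding windows rather than computing it once over the whole image, so this is a simplification:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) structural similarity between two images,
    using the standard stabilizing constants C1 and C2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(round(global_ssim(img, img), 6))  # 1.0 for identical images
```

SSIM is bounded above by 1, so the reported 0.8591 indicates strong structural agreement with the high-resolution reference.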
Affiliation(s)
- Yuanzheng Ma
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Wangting Zhou
- Engineering Research Center of Molecular & Neuro Imaging of the Ministry of Education, Xidian University, Xi'an, Shaanxi 710126, China
- Rui Ma
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Erqi Wang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Sihua Yang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Yansong Tang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xiao-Ping Zhang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xun Guan
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
6
Azadi Moghadam P, Bashashati A, Goldenberg SL. Artificial Intelligence and Pathomics: Prostate Cancer. Urol Clin North Am 2024; 51:15-26. PMID: 37945099; DOI: 10.1016/j.ucl.2023.06.001.
Abstract
Artificial intelligence (AI) has the potential to transform pathologic diagnosis and cancer patient management as a predictive and prognostic biomarker. AI-based systems can examine digitally scanned histopathology slides and differentiate benign from malignant cells and low-grade from high-grade disease. Deep learning models can analyze patient data, individually or in multimodal combinations, and identify patterns that predict the response to different therapeutic options, the risk of recurrence or progression, and the prognosis of the newly diagnosed patient. AI-based models will improve treatment planning for patients with prostate cancer and improve the efficiency and cost-effectiveness of the pathology laboratory.
Affiliation(s)
- Puria Azadi Moghadam
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, British Columbia V6T 1Z4, Canada
- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, British Columbia V6T 1Z3, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, 2211 Wesbrook Mall, Vancouver, BC V6T 1Z7, Canada
- S Larry Goldenberg
- Department of Urologic Sciences, University of British Columbia, 2775 Laurel Street, Vancouver, British Columbia V5Z 1M9, Canada
7
Katalinic M, Schenk M, Franke S, Katalinic A, Neumuth T, Dietz A, Stoehr M, Gaebel J. Generation of a Realistic Synthetic Laryngeal Cancer Cohort for AI Applications. Cancers (Basel) 2024; 16:639. PMID: 38339389; PMCID: PMC10854797; DOI: 10.3390/cancers16030639.
Abstract
BACKGROUND: Obtaining large amounts of real patient data involves great effort and expense, and processing these data is fraught with data protection concerns. Consequently, data sharing might not always be possible, particularly when large, open-science datasets are needed, as for AI development. For such purposes, the generation of realistic synthetic data may be the solution. Our project aimed to generate realistic cancer data for the use case of laryngeal cancer. METHODS: We used the open-source software Synthea and programmed an additional module for the development, treatment and follow-up of laryngeal cancer, using external real-world (RW) evidence from guidelines and cancer registries in Germany. To generate an incidence-based cohort view, we randomly drew laryngeal cancer cases from the simulated population and deceased persons, stratified by the real-world age and sex distributions at diagnosis. RESULTS: A module with age- and stage-specific treatment and prognosis for laryngeal cancer was successfully implemented. The synthesized population reflects RW prevalence well; from it, we extracted a cohort of 50,000 laryngeal cancer patients. Descriptive data on stage-specific and 5-year overall survival were in accordance with published data. CONCLUSIONS: We developed a large cohort of realistic synthetic laryngeal cancer cases with Synthea. Such data can be shared and published open source without data protection issues.
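The incidence-based draw described in the methods amounts to weighted sampling of cases against age- and sex-stratum shares. A sketch under assumed, purely illustrative stratum weights (the study uses real German cancer-registry distributions, not these numbers):

```python
import numpy as np

# Hypothetical age/sex strata with made-up illustrative shares summing to 1
strata = ["M 50-59", "M 60-69", "M 70+", "F 50-59", "F 60-69", "F 70+"]
shares = np.array([0.18, 0.30, 0.25, 0.07, 0.11, 0.09])

# Draw a 50,000-case cohort whose stratum mix matches the target shares
rng = np.random.default_rng(42)
cohort = rng.choice(strata, size=50_000, p=shares)

# Empirical stratum frequencies approximate the requested distribution
for s in strata:
    print(s, round((cohort == s).mean(), 3))
```

In the real pipeline each draw would pull a full simulated Synthea patient record from the matching stratum rather than just a label.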
Affiliation(s)
- Mika Katalinic
- Innovation Center Computer Assisted Surgery, Faculty of Medicine, University Leipzig, 04109 Leipzig, Germany
- Martin Schenk
- Innovation Center Computer Assisted Surgery, Faculty of Medicine, University Leipzig, 04109 Leipzig, Germany
- Stefan Franke
- Innovation Center Computer Assisted Surgery, Faculty of Medicine, University Leipzig, 04109 Leipzig, Germany
- Alexander Katalinic
- Institute of Social Medicine and Epidemiology, University of Luebeck, 23562 Luebeck, Germany
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery, Faculty of Medicine, University Leipzig, 04109 Leipzig, Germany
- Andreas Dietz
- Department of Otolaryngology, Head and Neck Surgery, University Hospital Leipzig, 04103 Leipzig, Germany
- Matthaeus Stoehr
- Department of Otolaryngology, Head and Neck Surgery, University Hospital Leipzig, 04103 Leipzig, Germany
- Jan Gaebel
- Innovation Center Computer Assisted Surgery, Faculty of Medicine, University Leipzig, 04109 Leipzig, Germany
8
Furtado LV. In Silico Options for Assay Validation. J Appl Lab Med 2024; 9:180-182. PMID: 38167772; DOI: 10.1093/jalm/jfad099.
Affiliation(s)
- Larissa V Furtado
- Department of Pathology, St. Jude Children's Research Hospital, Memphis, TN, United States
9
Deshpande S, Dawood M, Minhas F, Rajpoot N. SynCLay: Interactive synthesis of histology images from bespoke cellular layouts. Med Image Anal 2024; 91:102995. PMID: 37898050; DOI: 10.1016/j.media.2023.102995.
Abstract
Automated synthesis of histology images has several potential applications in computational pathology. However, no existing method can generate realistic tissue images with a bespoke cellular layout or user-defined histology parameters. In this work, we propose a novel framework called SynCLay (Synthesis from Cellular Layouts) that can construct realistic, high-quality histology images from user-defined cellular layouts along with annotated cellular boundaries. Tissue image generation from bespoke cellular layouts allows users to produce different histological patterns from arbitrary topological arrangements of different cell types (e.g., neutrophils, lymphocytes, epithelial cells and others). SynCLay-generated synthetic images can be helpful in studying the role of the different cell types present in the tumor microenvironment. Additionally, they can assist in balancing the distribution of cellular counts in tissue images for designing accurate cellular composition predictors by minimizing the effects of data imbalance. We train SynCLay in an adversarial manner and integrate a nuclear segmentation and classification model into its training to refine nuclear structures and generate nuclear masks in conjunction with synthetic images. During inference, we combine the model with another parametric model for generating colon images and associated cellular counts as annotations, given the grade of differentiation and the cellularities (cell densities) of different cells. We assess the generated images quantitatively using the Fréchet Inception Distance and report feedback from trained pathologists who assigned realism scores to a set of images generated by the framework. The average realism score across all pathologists was as high for synthetic images as for real images. Moreover, with assistance from pathologists, we showcase the ability of the generated images to accurately differentiate between benign and malignant tumors, reinforcing their reliability. We demonstrate that the proposed framework can be used to add new cells to a tissue image and to alter cellular positions. We also show that augmenting limited real data with synthetic data generated by our framework can significantly boost performance on the cellular composition prediction task. The implementation of the proposed SynCLay framework is available at https://github.com/Srijay/SynCLay-Framework.
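A user-defined cellular layout of the kind SynCLay consumes can be pictured as labeled point coordinates drawn at chosen per-type counts. The function below is a hypothetical illustration of such an input specification, not the framework's actual API:

```python
import numpy as np

def make_layout(counts, canvas=256, rng=None):
    """Sample (x, y, cell_type) triples uniformly on a square canvas at the
    requested per-type counts: a toy stand-in for a bespoke cellular layout."""
    rng = rng or np.random.default_rng(0)
    layout = []
    for cell_type, n in counts.items():
        xy = rng.uniform(0, canvas, size=(n, 2))
        layout += [(float(x), float(y), cell_type) for x, y in xy]
    return layout

# Illustrative cell types and counts; a real layout would encode the desired
# histological pattern and annotated boundaries, not just point positions
layout = make_layout({"lymphocyte": 40, "epithelial": 25, "neutrophil": 10})
print(len(layout))  # 75 cells in total
```

Varying the per-type counts is also how one could rebalance cellular compositions when synthesizing training data for a composition predictor.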
Affiliation(s)
- Srijay Deshpande
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Muhammad Dawood
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK; Histofy Ltd, Birmingham, UK
10
Delanerolle G, Phiri P, Cavalini H, Benfield D, Shetty A, Bouchareb Y, Shi JQ, Zemkoho A. Synthetic data & the future of Women's Health: A synergistic relationship. Int J Med Inform 2023; 179:105238. PMID: 37813078; DOI: 10.1016/j.ijmedinf.2023.105238.
Abstract
OBJECTIVES: The aim of this perspective is to report on the use of synthetic data as a viable method in women's health, given the current challenges of obtaining life-course data within a short period of time and of accessing electronic healthcare data. METHODS: We used a 3-point perspective method to report an overview of data science, common applications, and ethical implications. RESULTS: There are several ethical challenges linked to using real-world data; consequently, generating synthetic data provides an alternative method for conducting comprehensive research when used effectively. Using clinical characteristics to develop synthetic data is a useful method to consider: aligning the data as closely as possible to the clinical phenotype would enable researchers to provide data very similar to that of the real world. DISCUSSION: Population diversity and disease characterisation are important for the optimal use of data science, and several artificial intelligence techniques can be used to develop synthetic data. CONCLUSION: Synthetic data demonstrate promise and versatility when used efficiently and aligned to clinical problems. Exploring this option as a viable method in women's health, in particular for epidemiology, may therefore be useful.
Collapse
Affiliation(s)
- Gayathri Delanerolle: Research & Innovation Department, Southern Health NHS Foundation Trust, SO40 2RZ, Southampton, UK
- Peter Phiri: Research & Innovation Department, Southern Health NHS Foundation Trust, SO40 2RZ, Southampton, UK; School of Psychology, Faculty of Environmental and Life Sciences, University of Southampton, SO17 1BJ, Southampton, UK
- Heitor Cavalini: Research & Innovation Department, Southern Health NHS Foundation Trust, SO40 2RZ, Southampton, UK
- David Benfield: Research & Innovation Department, Southern Health NHS Foundation Trust, SO40 2RZ, Southampton, UK; Department of Mathematics, University of Southampton, SO17 1BJ, Southampton, UK
- Ashish Shetty: Female Pelvic Medicine and Reconstructive Surgery, University College London, WC1E 6BT, London, UK; University College London Hospitals NHS Foundation Trust, NW1 2PG, London, UK
- Yassine Bouchareb: Sultan Qaboos University, College of Medicine and Health Sciences, Muscat, Oman
- Jian Qing Shi: Research & Innovation Department, Southern Health NHS Foundation Trust, SO40 2RZ, Southampton, UK; Department of Statistics and Data Science, Southern University of Science and Technology, 518055, Shenzhen, China
- Alain Zemkoho: Research & Innovation Department, Southern Health NHS Foundation Trust, SO40 2RZ, Southampton, UK; Department of Mathematics, University of Southampton, SO17 1BJ, Southampton, UK; Alan Turing Institute, 96 Euston Road, NW1 2DB, London, UK
11
Falahkheirkhah K, Mukherjee SS, Gupta S, Herrera-Hernandez L, McCarthy MR, Jimenez RE, Cheville JC, Bhargava R. Accelerating Cancer Histopathology Workflows with Chemical Imaging and Machine Learning. Cancer Res Commun 2023; 3:1875-1887. [PMID: 37772992 PMCID: PMC10506535 DOI: 10.1158/2767-9764.crc-23-0226]
Abstract
Histopathology has remained a cornerstone for biomedical tissue assessment for over a century, with a resource-intensive workflow involving biopsy or excision, gross examination, sampling, tissue processing to snap frozen or formalin-fixed paraffin-embedded blocks, sectioning, staining, optical imaging, and microscopic assessment. Emerging chemical imaging approaches, including stimulated Raman scattering (SRS) microscopy, can directly measure inherent molecular composition in tissue (thereby dispensing with the need for tissue processing, sectioning, and using dyes) and can use artificial intelligence (AI) algorithms to provide high-quality images. Here we show the integration of SRS microscopy in a pathology workflow to rapidly record chemical information from minimally processed fresh-frozen prostate tissue. Instead of using thin sections, we record data from intact thick tissues and use optical sectioning to generate images from multiple planes. We use a deep learning–based processing pipeline to generate virtual hematoxylin and eosin images. Next, we extend the computational method to generate archival-quality images in minutes, which are equivalent to those obtained from hours/days-long formalin-fixed, paraffin-embedded processing. We assessed the quality of images from the perspective of enabling pathologists to make decisions, demonstrating that the virtual stained image quality was diagnostically useful and the interpathologist agreement on prostate cancer grade was not impacted. Finally, because this method does not wash away lipids and small molecules, we assessed the utility of lipid chemical composition in determining grade. Together, the combination of chemical imaging and AI provides novel capabilities for rapid assessments in pathology by reducing the complexity and burden of current workflows. 
SIGNIFICANCE Archival-quality (formalin-fixed paraffin-embedded), thin-section diagnostic images are obtained from thick-cut, fresh-frozen prostate tissues without dyes or stains to expedite cancer histopathology by combining SRS microscopy and machine learning.
Affiliation(s)
- Kianoush Falahkheirkhah: Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, Illinois; Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, Illinois
- Sudipta S. Mukherjee: Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, Illinois
- Sounak Gupta: Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Rafael E. Jimenez: Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- John C. Cheville: Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Rohit Bhargava: Beckman Institute for Advanced Science and Technology; Department of Bioengineering; Department of Chemical and Biomolecular Engineering; Department of Electrical and Computer Engineering; Mechanical Science and Engineering; Cancer Center at Illinois, University of Illinois Urbana-Champaign, Urbana, Illinois
12
Breen J, Allen K, Zucker K, Adusumilli P, Scarsbrook A, Hall G, Orsi NM, Ravikumar N. Artificial intelligence in ovarian cancer histopathology: a systematic review. NPJ Precis Oncol 2023; 7:83. [PMID: 37653025 PMCID: PMC10471607 DOI: 10.1038/s41698-023-00432-6]
Abstract
This study evaluates the quality of published research using artificial intelligence (AI) for ovarian cancer diagnosis or prognosis using histopathology data. A systematic search of PubMed, Scopus, Web of Science, Cochrane CENTRAL, and WHO-ICTRP was conducted up to May 19, 2023. Inclusion criteria required that AI was used for prognostic or diagnostic inferences in human ovarian cancer histopathology images. Risk of bias was assessed using PROBAST. Information about each model was tabulated and summary statistics were reported. The study was registered on PROSPERO (CRD42022334730) and PRISMA 2020 reporting guidelines were followed. Searches identified 1573 records, of which 45 were eligible for inclusion. These studies contained 80 models of interest, including 37 diagnostic models, 22 prognostic models, and 21 other diagnostically relevant models. Common tasks included treatment response prediction (11/80), malignancy status classification (10/80), stain quantification (9/80), and histological subtyping (7/80). Models were developed using 1-1375 histopathology slides from 1-776 ovarian cancer patients. A high or unclear risk of bias was found in all studies, most frequently due to limited analysis and incomplete reporting regarding participant recruitment. Limited research has been conducted on the application of AI to histopathology images for diagnostic or prognostic purposes in ovarian cancer, and none of the models have been demonstrated to be ready for real-world implementation. Key aspects to accelerate clinical translation include transparent and comprehensive reporting of data provenance and modelling approaches, and improved quantitative evaluation using cross-validation and external validations. This work was funded by the Engineering and Physical Sciences Research Council.
Affiliation(s)
- Jack Breen: Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK
- Katie Allen: Leeds Institute of Medical Research at St James's, School of Medicine, University of Leeds, Leeds, UK
- Kieran Zucker: Leeds Cancer Centre, St James's University Hospital, Leeds, UK
- Pratik Adusumilli: Leeds Institute of Medical Research at St James's, School of Medicine, University of Leeds, Leeds, UK; Department of Radiology, St James's University Hospital, Leeds, UK
- Andrew Scarsbrook: Leeds Institute of Medical Research at St James's, School of Medicine, University of Leeds, Leeds, UK; Department of Radiology, St James's University Hospital, Leeds, UK
- Geoff Hall: Leeds Cancer Centre, St James's University Hospital, Leeds, UK
- Nicolas M Orsi: Leeds Institute of Medical Research at St James's, School of Medicine, University of Leeds, Leeds, UK
- Nishant Ravikumar: Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK
13
Jacobs F, D'Amico S, Benvenuti C, Gaudio M, Saltalamacchia G, Miggiano C, De Sanctis R, Della Porta MG, Santoro A, Zambelli A. Opportunities and Challenges of Synthetic Data Generation in Oncology. JCO Clin Cancer Inform 2023; 7:e2300045. [PMID: 37535875 DOI: 10.1200/cci.23.00045]
Abstract
Widespread interest in artificial intelligence (AI) in health care has focused mainly on deductive systems that analyze available real-world data to discover patterns not otherwise visible. Generative adversarial networks, a newer type of inductive AI, have evolved to generate high-fidelity virtual synthetic data (SD) trained on relatively limited real-world information. The AI system is fed a collection of real data and learns to generate new, augmented data while maintaining the general characteristics of the original data set. The use of SD to enhance clinical research and protect patient privacy has drawn considerable interest in medicine, including the complex field of oncology. This article summarizes the main characteristics of this innovative technology and critically discusses how it can be used to accelerate data access for secondary purposes, providing an overview of the opportunities and challenges of SD generation for clinical cancer research and health care.
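The synthetic-data idea summarized in this abstract can be illustrated with a deliberately simplified, non-adversarial sketch: the toy generator below resamples each clinical variable from its empirical distribution in a hypothetical real cohort, preserving marginal statistics but not cross-field correlations, which is exactly the gap GAN-based generators are designed to close. All field names and values here are invented for illustration and do not come from the article.

```python
import random

def synthesize(records, n, seed=0):
    """Generate n synthetic records by independently resampling each
    field from its empirical distribution in the real cohort.
    Marginals stay plausible; correlations between fields are lost,
    which is the shortcoming a learned generative model addresses."""
    rng = random.Random(seed)
    fields = list(records[0].keys())
    columns = {f: [r[f] for r in records] for f in fields}
    return [{f: rng.choice(columns[f]) for f in fields} for _ in range(n)]

# Illustrative toy cohort (invented values, not real patient data)
real = [
    {"age": 54, "stage": "II", "er_positive": True},
    {"age": 61, "stage": "III", "er_positive": False},
    {"age": 47, "stage": "I", "er_positive": True},
]
fake = synthesize(real, 100)
```

Because every synthetic value is drawn from observed values of the same field, ranges remain realistic; a GAN instead learns the joint distribution, so combinations of values also remain realistic.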
Affiliation(s)
- Flavia Jacobs: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
- Chiara Benvenuti: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
- Mariangela Gaudio: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
- Chiara Miggiano: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
- Rita De Sanctis: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
- Matteo Giovanni Della Porta: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
- Armando Santoro: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
- Alberto Zambelli: Department of Biomedical Sciences, Humanitas University, Milan, Italy; IRCCS Istituto Clinico Humanitas, Milan, Italy
14
Wu Y, Li Y, Xiong X, Liu X, Lin B, Xu B. Recent advances of pathomics in colorectal cancer diagnosis and prognosis. Front Oncol 2023; 13:1094869. [PMID: 37538112 PMCID: PMC10396402 DOI: 10.3389/fonc.2023.1094869]
Abstract
Colorectal cancer (CRC) is one of the most common malignancies, with the third highest incidence and the second highest mortality in the world. To improve therapeutic outcomes, risk stratification and prognosis prediction can help guide clinical treatment decisions. Achieving these goals has been facilitated by the fast development of artificial intelligence (AI)-based algorithms using radiological and pathological data, in combination with genomic information. Among them, features extracted from pathological images, termed pathomics, are able to reflect sub-visual characteristics linked to better stratification and prediction of therapeutic response. In this paper, we review recent advances in pathological image-based algorithms in CRC, focusing on the diagnosis of benign and malignant lesions and microsatellite instability, as well as the prediction of response to neoadjuvant chemoradiotherapy and of the prognosis of CRC patients.
Affiliation(s)
- Yihan Wu: School of Medicine, Chongqing University, Chongqing, China; Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
- Yi Li: Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China; Bioengineering College, Chongqing University, Chongqing, China
- Xiaomin Xiong: Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China; Bioengineering College, Chongqing University, Chongqing, China
- Xiaohua Liu: Bioengineering College, Chongqing University, Chongqing, China
- Bo Lin: Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
- Bo Xu: School of Medicine, Chongqing University, Chongqing, China; Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
15
Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N, Tan GSW, Cheng CY, Cheung GCM, Wong TY, Ting DSW. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1184892. [PMID: 37425325 PMCID: PMC10324667 DOI: 10.3389/fmed.2023.1184892]
Abstract
Introduction: Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, screening for AMD is resource dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for the effective detection of various eye diseases from retinal fundus images, but developing such robust systems requires large datasets, which can be limited by the prevalence of the disease and by patient privacy. In the case of AMD, images of the advanced phenotype are often too scarce for DL analysis, a limitation that may be tackled by generating synthetic images using Generative Adversarial Networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale. Methods: To build our GAN models, a total of 125,012 fundus photos were used from a real-world non-AMD phenotypical dataset. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and the objective scale, respectively. Results and discussion: The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust in that our residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61-0.66) and a Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training are capable of producing realistic-looking fundus images that can fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
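The agreement statistic this abstract reports, Cohen's kappa, measures how much two label sequences (for example, a grader's real-versus-synthetic calls against ground truth) agree beyond what chance marginals would produce. A minimal implementation, as a sketch rather than the authors' code:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two raters labelling images as real ("r") or synthetic ("s"):
# agreement exactly at chance level yields kappa = 0.
print(cohens_kappa(list("rrss"), list("rsrs")))  # 0.0
```

A kappa near zero, as here, is the signature of graders who cannot reliably tell real from synthetic images, which is how the study quantifies the realism of its GAN outputs.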
Affiliation(s)
- Zhaoran Wang: Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Gilbert Lim: Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Wei Yan Ng: Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Tien-En Tan: Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Jane Lim: Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Sing Hui Lim: Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Valencia Foo: Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Joshua Lim: Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Feihui Zheng: Singapore Eye Research Institute, Singapore, Singapore
- Nan Liu: Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Gavin Siew Wei Tan: Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Ching-Yu Cheng: Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Gemmy Chui Ming Cheung: Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
- Tien Yin Wong: Singapore National Eye Centre, Singapore, Singapore; School of Medicine, Tsinghua University, Beijing, China
- Daniel Shu Wei Ting: Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Centre, Singapore, Singapore
16
Dolezal JM, Wolk R, Hieromnimon HM, Howard FM, Srisuwananukorn A, Karpeyev D, Ramesh S, Kochanny S, Kwon JW, Agni M, Simon RC, Desai C, Kherallah R, Nguyen TD, Schulte JJ, Cole K, Khramtsova G, Garassino MC, Husain AN, Li H, Grossman R, Cipriani NA, Pearson AT. Deep learning generates synthetic cancer histology for explainability and education. NPJ Precis Oncol 2023; 7:49. [PMID: 37248379 PMCID: PMC10227067 DOI: 10.1038/s41698-023-00399-4]
Abstract
Artificial intelligence methods including deep neural networks (DNN) can provide rapid molecular classification of tumors from routine histology with accuracy that matches or exceeds human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools help provide insights into what models have learned when corresponding histologic features are poorly defined. Here, we present a method for improving explainability of DNN models using synthetic histology generated by a conditional generative adversarial network (cGAN). We show that cGANs generate high-quality synthetic histology images that can be leveraged for explaining DNN models trained to classify molecularly-subtyped tumors, exposing histologic features associated with molecular state. Fine-tuning synthetic histology through class and layer blending illustrates nuanced morphologic differences between tumor subtypes. Finally, we demonstrate the use of synthetic histology for augmenting pathologist-in-training education, showing that these intuitive visualizations can reinforce and improve understanding of histologic manifestations of tumor biology.
Affiliation(s)
- James M Dolezal: Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Rachelle Wolk: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Hanna M Hieromnimon: Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Frederick M Howard: Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Siddhi Ramesh: Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Sara Kochanny: Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Jung Woo Kwon: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Meghana Agni: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Richard C Simon: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Chandni Desai: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Raghad Kherallah: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Tung D Nguyen: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Jefree J Schulte: Department of Pathology and Laboratory Medicine, University of Wisconsin at Madison, Madison, WI, USA
- Kimberly Cole: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Galina Khramtsova: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Marina Chiara Garassino: Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Aliya N Husain: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Huihua Li: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Robert Grossman: Center for Translational Data Science, University of Chicago, Chicago, IL, USA
- Nicole A Cipriani: Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Alexander T Pearson: Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
17
Ghose S, Cho S, Ginty F, McDonough E, Davis C, Zhang Z, Mitra J, Harris AL, Thike AA, Tan PH, Gökmen-Polar Y, Badve SS. Predicting Breast Cancer Events in Ductal Carcinoma In Situ (DCIS) Using Generative Adversarial Network Augmented Deep Learning Model. Cancers (Basel) 2023; 15:1922. [PMID: 37046583 PMCID: PMC10093091 DOI: 10.3390/cancers15071922]
Abstract
Standard clinicopathological parameters (age, growth pattern, tumor size, margin status, and grade) have been shown to have limited value in predicting recurrence in ductal carcinoma in situ (DCIS) patients. Early and accurate recurrence prediction would facilitate a more aggressive treatment policy for high-risk patients (mastectomy or adjuvant radiation therapy) while simultaneously reducing over-treatment of low-risk patients. Generative adversarial networks (GANs) are a class of deep learning (DL) models in which two adversarial neural networks, a generator and a discriminator, compete with each other to generate high-quality images. In this work, we have developed a DL classification network that predicts breast cancer events (BCEs) in DCIS patients using hematoxylin and eosin (H&E) images. The DL classification model was trained on 67 patients using image patches from the actual DCIS cores together with GAN-generated image patches. The hold-out validation dataset (n = 66) had an AUC of 0.82. Bayesian analysis further confirmed the independence of the model from classical clinicopathological parameters. DL models of H&E images may be used as a risk stratification strategy for DCIS patients to personalize therapy.
Affiliation(s)
- Sanghee Cho: GE Research Center, Niskayuna, NY 12309, USA
- Fiona Ginty: GE Research Center, Niskayuna, NY 12309, USA
- Adrian L. Harris: Department of Oncology, Cancer and Haematology Centre, Oxford University, Oxford OX3 9DU, UK
- Aye Aye Thike: Anatomical Pathology, Singapore General Hospital, Singapore 169608, Singapore
- Puay Hoon Tan: Anatomical Pathology, Singapore General Hospital, Singapore 169608, Singapore
- Yesim Gökmen-Polar: Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA; Winship Cancer Institute, Atlanta, GA 30322, USA
- Sunil S. Badve: Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA; Winship Cancer Institute, Atlanta, GA 30322, USA
18
Sharafudeen M, Andrew J, Vinod Chandra SS. Leveraging Vision Attention Transformers for Detection of Artificially Synthesized Dermoscopic Lesion Deepfakes Using Derm-CGAN. Diagnostics (Basel) 2023; 13:825. [PMID: 36899969 PMCID: PMC10001347 DOI: 10.3390/diagnostics13050825]
Abstract
Synthesized multimedia is an open concern that has received far too little attention in the scientific community. In recent years, generative models have been utilized to create deepfakes in medical imaging modalities. We investigate the generation and detection of synthesized dermoscopic skin lesion images by leveraging Conditional Generative Adversarial Networks and state-of-the-art Vision Transformers (ViT). The Derm-CGAN is designed for the realistic generation of six different dermoscopic skin lesions. Analysis of the similarity between real and synthesized fakes revealed a high correlation. Further, several ViT variations were investigated for distinguishing between actual and fake lesions. The best-performing model achieved an accuracy of 97.18%, a margin of over 7% above the second best-performing network. The trade-offs of the proposed model relative to other networks, as well as on a benchmark face dataset, were critically analyzed in terms of computational complexity. This technology could harm the public through medical misdiagnosis or insurance scams. Further research in this domain would help physicians and the general public counter and resist deepfake threats.
Affiliation(s)
- Misaj Sharafudeen: Department of Computer Science, University of Kerala, Kerala 695581, India
- Andrew J. (corresponding author): Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Vinod Chandra S. S. (corresponding author): Department of Computer Science, University of Kerala, Kerala 695581, India
19
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414 DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
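A checklist framework such as the SynTRUST meta-analysis described in this abstract can be sketched as a scoring function over boolean validation measures grouped by dimension. The dimension and measure names below are invented placeholders; the paper's 26 concrete measures are not reproduced here.

```python
def rigour_score(checklist):
    """Return (overall, per_dimension) fractions of satisfied measures.
    checklist maps dimension name -> {measure name: bool}; the names
    used in the example are illustrative, not the actual SynTRUST items."""
    per_dim = {dim: sum(items.values()) / len(items)
               for dim, items in checklist.items()}
    total = sum(v for items in checklist.values() for v in items.values())
    count = sum(len(items) for items in checklist.values())
    return total / count, per_dim

# Hypothetical assessment of one image-synthesis study
study = {
    "reproducibility": {"code_released": True, "seeds_reported": False},
    "thoroughness": {"external_validation": True, "ablations": True},
}
overall, per_dim = rigour_score(study)
print(overall)  # 0.75
```

Scoring each study the same way makes validation rigour comparable across the surveyed publications, which is the role the meta-analysis framework plays in the review.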
Affiliation(s)
- Richard Osuala: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein: Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker: Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
| |
20
Falahkheirkhah K, Tiwari S, Yeh K, Gupta S, Herrera-Hernandez L, McCarthy MR, Jimenez RE, Cheville JC, Bhargava R. Deepfake Histologic Images for Enhancing Digital Pathology. Lab Invest 2023; 103:100006. [PMID: 36748189] [PMCID: PMC10457173] [DOI: 10.1016/j.labinv.2022.100006]
Abstract
A pathologist's optical microscopic examination of thinly cut, stained tissue on glass slides prepared from formalin-fixed, paraffin-embedded tissue blocks is the gold standard for tissue diagnostics. In addition, the diagnostic abilities and expertise of pathologists are dependent on their direct experience with common and rarer variant morphologies. Recently, deep learning approaches have demonstrated a high level of accuracy for such tasks. However, obtaining expert-level annotated images is an expensive and time-consuming task, and artificially synthesized histologic images can prove greatly beneficial. In this study, we present an approach that not only generates histologic images reproducing the diagnostic morphologic features of common diseases but also gives users the ability to generate new and rare morphologies. Our approach involves developing a generative adversarial network model that synthesizes pathology images constrained by class labels. We investigated the ability of this framework to synthesize realistic prostate and colon tissue images and assessed the utility of these images in augmenting the diagnostic ability of machine learning methods and their usability by a panel of experienced anatomic pathologists. Synthetic data generated by our framework performed similarly to real data when training a deep learning model for diagnosis. Pathologists were not able to distinguish between real and synthetic images, and their analyses showed a similar level of interobserver agreement for prostate cancer grading. We extended the approach to significantly more complex images from colon biopsies and showed that the morphology of the complex microenvironment in such tissues can be reproduced. Finally, we show that a user can generate deepfake histologic images using a simple markup of semantic labels.
Affiliation(s)
- Kianoush Falahkheirkhah
- Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Saumya Tiwari
- Department of Medicine, University of California San Diego, San Diego, California
- Kevin Yeh
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Sounak Gupta
- College of Medicine and Science, Mayo Clinic, Rochester, Minnesota
- Rafael E Jimenez
- College of Medicine and Science, Mayo Clinic, Rochester, Minnesota
- John C Cheville
- College of Medicine and Science, Mayo Clinic, Rochester, Minnesota
- Rohit Bhargava
- Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois; Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, Illinois
21
Rajotte JF, Bergen R, Buckeridge DL, El Emam K, Ng R, Strome E. Synthetic data as an enabler for machine learning applications in medicine. iScience 2022; 25:105331. [DOI: 10.1016/j.isci.2022.105331]
22
Shmatko A, Ghaffari Laleh N, Gerstung M, Kather JN. Artificial intelligence in histopathology: enhancing cancer research and clinical oncology. Nat Cancer 2022; 3:1026-1038. [PMID: 36138135] [DOI: 10.1038/s43018-022-00436-4]
Abstract
Artificial intelligence (AI) methods have multiplied our capabilities to extract quantitative information from digital histopathology images. AI is expected to reduce workload for human experts, improve the objectivity and consistency of pathology reports, and have a clinical impact by extracting hidden information from routinely available data. Here, we describe how AI can be used to predict cancer outcome, treatment response, genetic alterations and gene expression from digitized histopathology slides. We summarize the underlying technologies and emerging approaches, noting limitations, including the need for data sharing and standards. Finally, we discuss the broader implications of AI in cancer research and oncology.
Affiliation(s)
- Artem Shmatko
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Moritz Gerstung
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
23
Medical domain knowledge in domain-agnostic generative AI. NPJ Digit Med 2022; 5:90. [PMID: 35817798] [PMCID: PMC9273760] [DOI: 10.1038/s41746-022-00634-5]
24
Budelmann D, Laue H, Weiss N, Dahmen U, D'Alessandro LA, Biermayer I, Klingmüller U, Ghallab A, Hassan R, Begher-Tibbe B, Hengstler JG, Schwen LO. Automated Detection of Portal Fields and Central Veins in Whole-Slide Images of Liver Tissue. J Pathol Inform 2022; 13:100001. [PMID: 35242441] [PMCID: PMC8860737] [DOI: 10.1016/j.jpi.2022.100001]
Abstract
Many physiological processes and pathological phenomena in liver tissue are spatially heterogeneous. At a local scale, biomarkers can be quantified along the axis of the blood flow, from portal fields (PFs) to central veins (CVs), i.e., in zonated form. This requires detecting PFs and CVs. However, manually annotating these structures in multiple whole-slide images is a tedious task. We describe and evaluate a fully automated method, based on a convolutional neural network, for simultaneously detecting PFs and CVs in a single stained section. Trained on scans of hematoxylin and eosin-stained liver tissue, the detector performed well, with an F1 score of 0.81 compared to annotation by a human expert. However, it does not generalize well to previously unseen scans of steatotic liver tissue, with an F1 score of 0.59. Automated PF and CV detection eliminates the bottleneck of manual annotation for subsequent automated analyses, as illustrated by two proof-of-concept applications: we computed lobulus sizes based on the detected PF and CV positions, with results that agreed with published lobulus sizes, and we demonstrated the feasibility of zonated quantification of biomarkers detected in different stainings based on lobuli and zones obtained from the detected PF and CV positions. A negative control (hematoxylin and eosin) showed the expected homogeneity, a positive control (glutamine synthetase) was quantified as strictly pericentral, and a plausible zonation for a heterogeneous F4/80 staining was obtained. Automated detection of PFs and CVs is one building block for automatically quantifying physiologically relevant heterogeneity of liver tissue biomarkers. Prospectively, a more robust and automated assessment of zonation from whole-slide images will be valuable for parameterizing spatially resolved models of liver metabolism and for providing diagnostic information.
Affiliation(s)
- Uta Dahmen
- Experimental Transplantation Surgery, Department of General, Visceral and Vascular Surgery, University Hospital Jena, Jena, Germany
- Lorenza A D'Alessandro
- Deutsches Krebsforschungszentrum, Systems Biology of Signal Transduction, Heidelberg, Germany
- Ina Biermayer
- Deutsches Krebsforschungszentrum, Systems Biology of Signal Transduction, Heidelberg, Germany
- Ursula Klingmüller
- Deutsches Krebsforschungszentrum, Systems Biology of Signal Transduction, Heidelberg, Germany
- Ahmed Ghallab
- Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany; Department of Forensic Medicine and Toxicology, Faculty of Veterinary Medicine, South Valley University, Qena, Egypt
- Reham Hassan
- Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany; Department of Forensic Medicine and Toxicology, Faculty of Veterinary Medicine, South Valley University, Qena, Egypt
- Brigitte Begher-Tibbe
- Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany
- Jan G Hengstler
- Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany
25
Artificial Intelligence for Predicting Microsatellite Instability Based on Tumor Histomorphology: A Systematic Review. Int J Mol Sci 2022; 23:2462. [PMID: 35269607] [PMCID: PMC8910565] [DOI: 10.3390/ijms23052462]
Abstract
Microsatellite instability (MSI)/defective DNA mismatch repair (dMMR) is receiving increasing attention as a biomarker for eligibility for immune checkpoint inhibitors in advanced disease. However, due to high costs and resource limitations, MSI/dMMR testing is not widely performed. Attempts are in progress to predict MSI/dMMR status from histomorphological features on H&E slides using artificial intelligence (AI) technology. In this study, the potential predictive role of this new methodology was assessed through a systematic review. Studies published up to September 2021 were identified through PubMed and Embase database searches. The design and results of each study were summarized, and the risk of bias for each study was evaluated. For colorectal cancer, AI-based systems showed excellent performance, with a highest reported value of 0.972; for gastric and endometrial cancers they showed a relatively low but satisfactory performance, with highest values of 0.81 and 0.82, respectively. However, in the risk-of-bias analysis, most studies were evaluated as being at high risk. AI-based systems showed high potential in predicting MSI/dMMR status across different cancer types, particularly colorectal cancers. A confirmatory test should therefore be required only for results that are positive in the AI test.
26
Homeyer A, Geißler C, Schwen LO, Zakrzewski F, Evans T, Strohmenger K, Westphal M, Bülow RD, Kargl M, Karjauv A, Munné-Bertran I, Retzlaff CO, Romero-López A, Sołtysiński T, Plass M, Carvalho R, Steinbach P, Lan YC, Bouteldja N, Haber D, Rojas-Carulla M, Vafaei Sadr A, Kraft M, Krüger D, Fick R, Lang T, Boor P, Müller H, Hufnagl P, Zerbe N. Recommendations on compiling test datasets for evaluating artificial intelligence solutions in pathology. Mod Pathol 2022; 35:1759-1769. [PMID: 36088478] [PMCID: PMC9708586] [DOI: 10.1038/s41379-022-01147-y]
Abstract
Artificial intelligence (AI) solutions that automatically extract information from digital histology images have shown great promise for improving pathological diagnosis. Prior to routine use, it is important to evaluate their predictive performance and obtain regulatory approval. This assessment requires appropriate test datasets. However, compiling such datasets is challenging and specific recommendations are missing. A committee of various stakeholders, including commercial AI developers, pathologists, and researchers, discussed key aspects and conducted extensive literature reviews on test datasets in pathology. Here, we summarize the results and derive general recommendations on compiling test datasets. We address several questions: Which and how many images are needed? How to deal with low-prevalence subsets? How can potential bias be detected? How should datasets be reported? What are the regulatory requirements in different countries? The recommendations are intended to help AI developers demonstrate the utility of their products and to help pathologists and regulatory agencies verify reported performance measures. Further research is needed to formulate criteria for sufficiently representative test datasets so that AI solutions can operate with less user intervention and better support diagnostic workflows in the future.
Affiliation(s)
- André Homeyer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Christian Geißler
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Lars Ole Schwen
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Falk Zakrzewski
- Institute of Pathology, Carl Gustav Carus University Hospital Dresden (UKD), TU Dresden (TUD), Fetscherstrasse 74, 01307 Dresden, Germany
- Theodore Evans
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Klaus Strohmenger
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
- Max Westphal
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Roman David Bülow
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Michaela Kargl
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010 Graz, Austria
- Aray Karjauv
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Isidre Munné-Bertran
- MoticEurope, S.L.U., C. Les Corts, 12 Poligono Industrial, 08349 Barcelona, Spain
- Carl Orge Retzlaff
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
- Markus Plass
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010 Graz, Austria
- Rita Carvalho
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
- Peter Steinbach
- Helmholtz-Zentrum Dresden Rossendorf, Bautzner Landstraße 400, 01328 Dresden, Germany
- Yu-Chia Lan
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Nassim Bouteldja
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- David Haber
- Lakera AI AG, Zelgstrasse 7, 8003 Zürich, Switzerland
- Alireza Vafaei Sadr
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Daniel Krüger
- Olympus Soft Imaging Solutions GmbH, Johann-Krane-Weg 39, 48149 Münster, Germany
- Rutger Fick
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France
- Tobias Lang
- Mindpeak GmbH, Zirkusweg 2, 20359 Hamburg, Germany
- Peter Boor
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Heimo Müller
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010 Graz, Austria
- Peter Hufnagl
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
- Norman Zerbe
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117 Berlin, Germany
27
SAFRON: Stitching Across the Frontier Network for Generating Colorectal Cancer Histology Images. Med Image Anal 2021; 77:102337. [PMID: 35016078] [DOI: 10.1016/j.media.2021.102337]
Abstract
Automated synthesis of histology images has several potential applications, including the development of data-efficient deep learning algorithms. In the field of computational pathology, where histology images are large in size and visual context is crucial, synthesis of large high-resolution images via generative modeling is an important but challenging task due to memory and computational constraints. To address this challenge, we propose a novel framework called SAFRON (Stitching Across the FROntier Network) to construct realistic, large high-resolution tissue images conditioned on input tissue component masks. The main novelty of the framework is the integration of stitching into its loss function, which enables the generation of images of arbitrarily large size after training on relatively small image patches, while preserving morphological features with minimal boundary artifacts. We have used the proposed framework to generate, to the best of our knowledge, the largest synthetic histology images to date (up to 11K×8K pixels). Compared to existing approaches, our framework is efficient in terms of the memory required for training and the computation needed for synthesizing large high-resolution images. The quality of the generated images was assessed quantitatively using the Fréchet Inception Distance as well as by 7 trained pathologists, who assigned a realism score to a set of images generated by SAFRON. The average realism score across all pathologists was as high for synthetic images as for real images. We also show that training with additional synthetic data generated by SAFRON can significantly boost the prediction performance of gland segmentation and cancer detection algorithms on colorectal cancer histology images.
28
Dehkharghanian T, Rahnamayan S, Riasatian A, Bidgoli AA, Kalra S, Zaveri M, Babaie M, Seyed Sajadi MS, Gonzalez R, Diamandis P, Pantanowitz L, Huang T, Tizhoosh HR. Selection, Visualization, and Interpretation of Deep Features in Lung Adenocarcinoma and Squamous Cell Carcinoma. Am J Pathol 2021; 191:2172-2183. [PMID: 34508689] [DOI: 10.1016/j.ajpath.2021.08.013]
Abstract
Although deep learning networks applied to digital images have shown impressive results for many pathology-related tasks, their black-box nature and limited interpretability are significant obstacles to their widespread clinical utility. This study investigates the visualization of deep features (DFs) to characterize two lung cancer subtypes, adenocarcinoma and squamous cell carcinoma. It demonstrates that a subset of DFs, termed prominent DFs, can accurately distinguish these two cancer subtypes. Visualizing such individual DFs enables a better understanding of the histopathologic patterns, at both the whole-slide and patch levels, that discriminate these cancer types. These DFs were visualized at the whole-slide-image level through DF-specific heatmaps and at the tissue-patch level through generated activation maps. In addition, we show that these prominent DFs contain information that can distinguish carcinomas of organs other than the lung. This framework may serve as a platform for evaluating the interpretability of any deep network for diagnostic decision making.
Affiliation(s)
- Taher Dehkharghanian
- Nature Inspired Computer Intelligence (NICI) Lab, Ontario Tech University, Oshawa, Ontario, Canada; Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada
- Shahryar Rahnamayan
- Nature Inspired Computer Intelligence (NICI) Lab, Ontario Tech University, Oshawa, Ontario, Canada
- Abtin Riasatian
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
- Azam A Bidgoli
- Nature Inspired Computer Intelligence (NICI) Lab, Ontario Tech University, Oshawa, Ontario, Canada
- Shivam Kalra
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
- Manit Zaveri
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
- Morteza Babaie
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
- Mahjabin S Seyed Sajadi
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
- Ricardo Gonzalez
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
- Phedias Diamandis
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Liron Pantanowitz
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
- Tao Huang
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
- Hamid R Tizhoosh
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
29
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. [PMID: 34324454] [PMCID: PMC10276657] [DOI: 10.1097/icu.0000000000000794]
Abstract
PURPOSE OF REVIEW The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information and low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to introduce how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GAN-produced images. RECENT FINDINGS Image synthesis is the most relevant function of GANs to the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks, by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate the outputs produced by GANs. SUMMARY Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation whose relevance for ophthalmology is not yet clear.
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore
- Gilbert Lim
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Wei Yan Ng
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Pearse A. Keane
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Leopold Schmetterer
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE)
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Tien Yin Wong
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
30
Cornish TC. Artificial intelligence for automating the measurement of histologic image biomarkers. J Clin Invest 2021; 131:147966. [PMID: 33855974] [DOI: 10.1172/jci147966]
Abstract
Artificial intelligence has been applied to histopathology for decades, but the recent increase in interest is attributable to well-publicized successes in the application of deep-learning techniques, such as convolutional neural networks, for image analysis. Recently, generative adversarial networks (GANs) have provided a method for performing image-to-image translation tasks on histopathology images, including image segmentation. In this issue of the JCI, Koyuncu et al. applied GANs to whole-slide images of p16-positive oropharyngeal squamous cell carcinoma (OPSCC) to automate the calculation of a multinucleation index (MuNI) for prognostication in p16-positive OPSCC. Multivariable analysis showed that the MuNI was prognostic for disease-free survival, overall survival, and metastasis-free survival. These results are promising, as they present a prognostic method for p16-positive OPSCC and highlight methods for using deep learning to measure image biomarkers from histopathologic samples in an inherently explainable manner.
31
Homeyer A, Lotz J, Schwen LO, Weiss N, Romberg D, Höfener H, Zerbe N, Hufnagl P. Artificial Intelligence in Pathology: From Prototype to Product. J Pathol Inform 2021; 12:13. [PMID: 34012717] [PMCID: PMC8112352] [DOI: 10.4103/jpi.jpi_84_20]
Abstract
Modern image analysis techniques based on artificial intelligence (AI) have great potential to improve the quality and efficiency of diagnostic procedures in pathology and to detect novel biomarkers. Despite thousands of published research papers on applications of AI in pathology, hardly any research implementations have matured into commercial products for routine use. Bringing an AI solution for pathology to market poses significant technological, business, and regulatory challenges. In this paper, we provide a comprehensive overview and advice on how to meet these challenges. We outline how research prototypes can be turned into a product-ready state and integrated into the IT infrastructure of clinical laboratories. We also discuss business models for profitable AI solutions and reimbursement options for computer assistance in pathology. Moreover, we explain how to obtain regulatory approval so that AI solutions can be launched as in vitro diagnostic medical devices. Thus, this paper offers computer scientists, software companies, and pathologists a road map for transforming prototypes of AI solutions into commercial products.
Affiliation(s)
- Norman Zerbe
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Pathology, Berlin, Germany
- HTW University of Applied Sciences Berlin, Berlin, Germany
- Peter Hufnagl
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Pathology, Berlin, Germany
- HTW University of Applied Sciences Berlin, Berlin, Germany
|
32
|
Safarpoor A, Kalra S, Tizhoosh HR. Generative models in pathology: synthesis of diagnostic quality pathology images. J Pathol 2020; 253:131-132. [PMID: 33140849 DOI: 10.1002/path.5577] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Accepted: 10/27/2020] [Indexed: 11/10/2022]
Abstract
Within artificial intelligence and machine learning, a generative model is a powerful tool for learning any kind of data distribution. With the advent of deep learning and its success in image recognition, the field of deep generative models has clearly emerged as one of the promising fields for medical imaging. In a recent issue of The Journal of Pathology, Levine, Peng et al. demonstrate the ability of generative models to synthesize high-quality pathology images. They suggested that generative models can serve as an unlimited source of images either for educating freshman pathologists or training machine learning models for diverse image analysis tasks, especially in scarce cases, while resolving patients' privacy and confidentiality concerns. © 2020 The Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Affiliation(s)
- Shivam Kalra
- Kimia Lab, University of Waterloo, Waterloo, ON, Canada
|
33
|
Tschuchnig ME, Oostingh GJ, Gadermayr M. Generative Adversarial Networks in Digital Pathology: A Survey on Trends and Future Potential. Patterns (N Y) 2020; 1:100089. [PMID: 33205132 PMCID: PMC7660380 DOI: 10.1016/j.patter.2020.100089] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Image analysis in the field of digital pathology has recently gained increased popularity. The use of high-quality whole-slide scanners enables the fast acquisition of large amounts of image data, showing extensive context and microscopic detail at the same time. Simultaneously, novel machine-learning algorithms have boosted the performance of image analysis approaches. In this paper, we focus on a particularly powerful class of architectures, the so-called generative adversarial networks (GANs), applied to histological image data. Besides improving performance, GANs also enable previously intractable application scenarios in this field. However, GANs could exhibit a potential for introducing bias. Here, we summarize the recent state-of-the-art developments in a generalizing notation, present the main applications of GANs, and give an outlook on selected promising approaches and their possible future applications. In addition, we identify currently unavailable methods with potential for future applications.
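For reference, the adversarial training scheme underlying the GAN architectures surveyed above is the standard minimax objective of Goodfellow et al. (2014), in which a generator G and a discriminator D are trained against each other:

```latex
\min_{G}\,\max_{D}\; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```

Here $p_{\mathrm{data}}$ is the distribution of real histological images and $p_z$ a noise prior; D is trained to distinguish real from generated images while G is trained to fool D. Image-to-image translation variants (e.g., stain normalization or segmentation applications mentioned in these entries) condition G on an input image rather than pure noise.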
Affiliation(s)
- Maximilian E. Tschuchnig
- Department of Information Technologies and Systems Management, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria
- Department of Biomedical Sciences, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria
- Gertie J. Oostingh
- Department of Biomedical Sciences, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria
- Michael Gadermayr
- Department of Information Technologies and Systems Management, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria
|