1
Srinivasan G, Le MK, Azher Z, Liu X, Vaickus L, Kaur H, Kolling F, Palisoul S, Perreard L, Lau KS, Yao K, Levy J. Histology-Based Virtual RNA Inference Identifies Pathways Associated with Metastasis Risk in Colorectal Cancer. medRxiv 2025:2025.04.22.25326170. PMID: 40313260; PMCID: PMC12045403; DOI: 10.1101/2025.04.22.25326170.
Abstract
Colorectal cancer (CRC) remains a major health concern, with over 150,000 new diagnoses and more than 50,000 deaths annually in the United States, underscoring an urgent need for improved screening, prognostication, disease management, and therapeutic approaches. The tumor microenvironment (TME), comprising cancerous and immune cells interacting within the tumor's spatial architecture, plays a critical role in disease progression and treatment outcomes, reinforcing its importance as a prognostic marker for metastasis and recurrence risk. However, traditional methods for TME characterization, such as bulk transcriptomics and multiplex protein assays, lack sufficient spatial resolution. Although spatial transcriptomics (ST) allows for high-resolution mapping of whole transcriptomes at near-cellular resolution, current ST technologies (e.g., Visium, Xenium) are limited by high costs, low throughput, and issues with reproducibility, preventing their widespread application in large-scale molecular epidemiology studies. In this study, we refined and implemented Virtual RNA Inference (VRI) to derive ST-level molecular information directly from hematoxylin and eosin (H&E)-stained tissue images. Our VRI models were trained on the largest matched CRC ST dataset to date, comprising 45 patients and more than 300,000 Visium spots from primary tumors. Using state-of-the-art architectures (UNI, ResNet-50, ViT, and VMamba), we achieved a median Spearman's correlation coefficient of 0.546 between predicted and measured spot-level expression. As validation, VRI-derived gene signatures linked to specific tissue regions (tumor, interface, submucosa, stroma, serosa, muscularis, inflammation) showed strong concordance with signatures generated via direct ST, and VRI accurately estimated spatial cell-type proportions from H&E slides.
In an expanded CRC cohort controlling for tumor invasiveness and clinical factors, we further identified VRI-derived gene signatures significantly associated with key prognostic outcomes, including metastasis status. Although certain tumor-related pathways are not fully captured by histology alone, our findings highlight the ability of VRI to infer a wide range of "histology-associated" biological pathways at near-cellular resolution without requiring ST profiling. Future efforts will extend this framework to expand TME phenotyping from standard H&E tissue images, with the potential to accelerate translational CRC research at scale.
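The headline metric above, a median Spearman's correlation between predicted and measured spot-level expression, can be sketched as follows. The data here are synthetic stand-ins for Visium spots, not the authors' cohort or code:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-ins: measured and predicted expression for
# 200 spots x 50 genes (illustrative only).
measured = rng.poisson(5.0, size=(200, 50)).astype(float)
predicted = measured + rng.normal(0.0, 2.0, size=measured.shape)

# Per-gene Spearman correlation across spots, then the median,
# mirroring the spot-level evaluation described in the abstract.
rhos = np.array([spearmanr(measured[:, g], predicted[:, g])[0]
                 for g in range(measured.shape[1])])
median_rho = np.nanmedian(rhos)
print(round(float(median_rho), 3))
```

Spearman's rank correlation is a natural choice here because spot-level counts are heavy-tailed and the ranking of expression across spots matters more than its absolute scale.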
Affiliation(s)
- Gokul Srinivasan
- Departments of Pathology and Laboratory Medicine and Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048
- Minh-Khang Le
- Departments of Pathology and Laboratory Medicine and Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048
- Zarif Azher
- Departments of Pathology and Laboratory Medicine and Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048
- California Institute of Technology, Pasadena, CA 91125
- Xiaoying Liu
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center and Geisel School of Medicine at Dartmouth, Lebanon, NH 03766
- Louis Vaickus
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center and Geisel School of Medicine at Dartmouth, Lebanon, NH 03766
- Harsimran Kaur
- Center for Computational Systems Biology, Department of Cell and Developmental Biology, Chemical and Physical Biology Program, Vanderbilt University School of Medicine, Nashville, TN 37232
- Scott Palisoul
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center and Geisel School of Medicine at Dartmouth, Lebanon, NH 03766
- Ken S. Lau
- Center for Computational Systems Biology, Department of Cell and Developmental Biology, Chemical and Physical Biology Program, Vanderbilt University School of Medicine, Nashville, TN 37232
- Keluo Yao
- Departments of Pathology and Laboratory Medicine and Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048
- Joshua Levy
- Departments of Pathology and Laboratory Medicine and Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center and Geisel School of Medicine at Dartmouth, Lebanon, NH 03766
2
Zhang Z, Zhou X, Fang Y, Xiong Z, Zhang T. AI-driven 3D bioprinting for regenerative medicine: From bench to bedside. Bioact Mater 2025;45:201-230. PMID: 39651398; PMCID: PMC11625302; DOI: 10.1016/j.bioactmat.2024.11.021.
Abstract
In recent decades, 3D bioprinting has garnered significant research attention due to its ability to manipulate biomaterials and cells to create complex structures precisely. However, due to technological and cost constraints, the clinical translation of 3D bioprinted products (BPPs) from bench to bedside has been hindered by challenges in terms of personalization of design and scaling up of production. Recently, the emerging applications of artificial intelligence (AI) technologies have significantly improved the performance of 3D bioprinting. However, the existing literature remains deficient in a methodological exploration of AI technologies' potential to overcome these challenges in advancing 3D bioprinting toward clinical application. This paper aims to present a systematic methodology for AI-driven 3D bioprinting, structured within the theoretical framework of Quality by Design (QbD). This paper commences by introducing the QbD theory into 3D bioprinting, followed by summarizing the technology roadmap of AI integration in 3D bioprinting, including multi-scale and multi-modal sensing, data-driven design, and in-line process control. This paper further describes specific AI applications in 3D bioprinting's key elements, including bioink formulation, model structure, printing process, and function regulation. Finally, the paper discusses current prospects and challenges associated with AI technologies to further advance the clinical translation of 3D bioprinting.
Affiliation(s)
- Zhenrui Zhang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Xianhao Zhou
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Yongcong Fang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
- Zhuo Xiong
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Ting Zhang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
3
Hou X, Guan Z, Zhang X, Hu X, Zou S, Liang C, Shi L, Zhang K, You H. Evaluation of tumor budding with virtual panCK stains generated by novel multi-model CNN framework. Comput Methods Programs Biomed 2024;257:108352. PMID: 39241330; DOI: 10.1016/j.cmpb.2024.108352.
Abstract
As the global incidence of cancer continues to rise rapidly, the need for swift and precise diagnoses has become increasingly pressing. Pathologists commonly rely on H&E-panCK stain pairs for various aspects of cancer diagnosis, including the detection of occult tumor cells and the evaluation of tumor budding. Nevertheless, conventional chemical staining methods suffer from notable drawbacks, such as time-intensive processes and irreversible staining outcomes. Virtual staining, leveraging generative adversarial networks (GANs), has emerged as a promising alternative to chemical stains. This approach aims to transform biopsy scans (often H&E) into other stain types. Despite achieving notable progress in recent years, current state-of-the-art virtual staining models confront challenges that hinder their efficacy, particularly in achieving accurate staining outcomes under specific conditions. These limitations have impeded the practical integration of virtual staining into diagnostic practices. To address the goal of producing virtual panCK stains capable of replacing chemical panCK, we propose an innovative multi-model framework. Our approach employs a combination of Mask-RCNN (for cell segmentation) and GAN models to extract cytokeratin distribution from chemical H&E images. Additionally, we introduce a tailored dynamic GAN model to convert H&E images into virtual panCK stains, integrating the derived cytokeratin distribution. Our framework is motivated by the fact that the unique pattern of the panCK stain is derived from cytokeratin distribution. As a proof of concept, we employ our virtual panCK stains to evaluate tumor budding in 45 H&E whole-slide images taken from breast cancer-invaded lymph nodes. Through thorough validation by both pathologists and the QuPath software, our virtual panCK stains demonstrate a remarkable level of accuracy. In stark contrast, state-of-the-art single-CycleGAN virtual panCK stains achieved negligible accuracy.
To the best of our knowledge, this is the first instance of a multi-model virtual panCK framework and of the use of virtual panCK for tumor budding assessment. Our framework excels at generating dependable virtual panCK stains with significantly improved efficiency, considerably reducing diagnostic turnaround times. Furthermore, its outcomes are easily comprehensible even to pathologists who are not well-versed in computer technology. We believe that our framework has the potential to advance the field of virtual staining, making significant strides toward improved cancer diagnosis.
Affiliation(s)
- Xingzhong Hou
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100190, China
- Zhen Guan
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- Xianwei Zhang
- Department of Pathology, Henan Provincial People's Hospital; People's Hospital of Zhengzhou University, Zhengzhou, Henan 450003, China
- Xiao Hu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, China
- Shuangmei Zou
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Chunzi Liang
- School of Laboratory Medicine, Hubei University of Chinese Medicine, 16 Huangjia Lake West Road, Wuhan, Hubei 430065, China
- Lulin Shi
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100190, China
- Kaitai Zhang
- State Key Laboratory of Molecular Oncology, Department of Etiology and Carcinogenesis, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Haihang You
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; Zhongguancun Laboratory, Beijing 102206, China
4
Kawai M, Odate T, Kasai K, Inoue T, Mochizuki K, Oishi N, Kondo T. Virtual multi-staining in a single-section view for renal pathology using generative adversarial networks. Comput Biol Med 2024;182:109149. PMID: 39298886; DOI: 10.1016/j.compbiomed.2024.109149.
Abstract
Sections stained in periodic acid-Schiff (PAS), periodic acid-methenamine silver (PAM), hematoxylin and eosin (H&E), and Masson's trichrome (MT) stain with minimal morphological discordance are helpful for pathological diagnosis in renal biopsy. Here, we propose an artificial intelligence-based re-stainer called PPHM-GAN (PAS, PAM, H&E, and MT-generative adversarial networks) with multi-stain to multi-stain transformation capability. We trained three GAN models on 512 × 512-pixel patches from 26 training cases. The model with the best transformation quality was selected for each pair of stain transformations by human evaluation. Fréchet inception distance, peak signal-to-noise ratio, structural similarity index measure, contrast structural similarity, and a newly introduced domain shift inception score were calculated as auxiliary quality metrics. We validated the diagnostic utility using 5120 × 5120 patches of ten validation cases for major glomerular and interstitial abnormalities. Transformed stains were sometimes superior to original stains for the recognition of crescent formation, mesangial hypercellularity, glomerular sclerosis, interstitial lesions, or arteriosclerosis. Twenty-three of 24 glomeruli (95.83%) from 9 additional validation cases transformed to PAM, PAS, or MT facilitated recognition of crescent formation. Stain transformations to PAM (p = 4.0E-11) and transformations from H&E (p = 4.8E-9) most improved crescent formation recognition. PPHM-GAN maximizes information from a given section by providing several stains in a virtual single-section view, and may change staining and diagnostic strategy.
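One of the auxiliary metrics named in this abstract, the Fréchet inception distance, reduces to the Fréchet distance between two Gaussians fitted to feature sets. A minimal sketch follows; random vectors stand in for the Inception-v3 features a real FID computation would use, so the numbers are purely illustrative:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets.

    This is the core formula behind FID: ||mu_a - mu_b||^2
    + Tr(C_a + C_b - 2 (C_a C_b)^{1/2}).
    """
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerics can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(500, 8))  # stand-in "real" features
fake = rng.normal(0.5, 1.0, size=(500, 8))  # stand-in "generated" features
print(round(frechet_distance(real, fake), 3))
```

Identical feature sets give a distance near zero; the distance grows as the generated-feature distribution drifts from the real one, which is why lower FID indicates better stain transformation quality.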
Affiliation(s)
- Masataka Kawai
- Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan.
- Toru Odate
- Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Kazunari Kasai
- Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Tomohiro Inoue
- Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Kunio Mochizuki
- Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Naoki Oishi
- Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
- Tetsuo Kondo
- Department of Pathology, University of Yamanashi, Chuo, Yamanashi, Japan
5
Wiedenmann M, Barch M, Chang PS, Giltnane J, Risom T, Zijlstra A. An Immunofluorescence-Guided Segmentation Model in Hematoxylin and Eosin Images Is Enabled by Tissue Artifact Correction Using a Cycle-Consistent Generative Adversarial Network. Mod Pathol 2024;37:100591. PMID: 39147031; DOI: 10.1016/j.modpat.2024.100591.
Abstract
Despite recent advances, the adoption of computer vision methods into clinical and commercial applications has been hampered by the limited availability of accurate ground truth tissue annotations required to train robust supervised models. Generating such ground truth can be accelerated by annotating tissue molecularly using immunofluorescence (IF) staining and mapping these annotations to a post-IF hematoxylin and eosin (H&E) (terminal H&E) stain. Mapping the annotations between IF and terminal H&E increases both the scale and accuracy with which ground truth can be generated. However, discrepancies between terminal H&E and conventional H&E caused by IF tissue processing have limited this implementation. We sought to overcome this challenge and achieve compatibility between these parallel modalities using synthetic image generation, in which a cycle-consistent generative adversarial network was applied to transfer the appearance of conventional H&E such that it emulates terminal H&E. These synthetic emulations allowed us to train a deep learning model for the segmentation of epithelium in terminal H&E that could be validated against the IF staining of epithelial-based cytokeratins. The combination of this segmentation model with the cycle-consistent generative adversarial network stain transfer model enabled performant epithelium segmentation in conventional H&E images. The approach demonstrates that the training of accurate segmentation models for the breadth of conventional H&E data can be executed free of human expert annotations by leveraging molecular annotation strategies such as IF, so long as the tissue impacts of the molecular annotation protocol are captured by generative models that can be deployed prior to the segmentation process.
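The cycle-consistency constraint at the heart of such stain-transfer models can be made concrete with toy invertible "generators". Linear maps stand in here for the actual convolutional networks, so this is only an illustration of the objective, not the authors' model:

```python
import numpy as np

# Toy stand-ins for the two generators in a cycle-consistent GAN:
# G maps "conventional H&E" features to "terminal H&E" features,
# F maps back. Linear maps replace the real convolutional networks
# purely to make the cycle-consistency term concrete.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
A_inv = np.linalg.inv(A)

def G(x):  # conventional -> terminal (toy)
    return x @ A

def F(y):  # terminal -> conventional (toy)
    return y @ A_inv

x = rng.normal(size=(16, 4))  # a batch of "conventional H&E" features

# Cycle-consistency loss: F(G(x)) should reconstruct x. The L1 norm
# matches the original CycleGAN formulation; during training this term
# is added to the adversarial losses of both generators.
cycle_loss = np.mean(np.abs(F(G(x)) - x))
print(round(float(cycle_loss), 6))
```

Because the toy generators are exact inverses, the loss is essentially zero; real networks only approximate this, and minimizing the cycle term is what keeps the stain transfer content-preserving.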
Affiliation(s)
- Marcel Wiedenmann
- Department of Computer and Information Science, University of Konstanz, Konstanz, Germany
- Mariya Barch
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Patrick S Chang
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Jennifer Giltnane
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Tyler Risom
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Andries Zijlstra
- Department of Research Pathology, Genentech Inc, South San Francisco, California; Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee
6
Fatemi MY, Lu Y, Diallo AB, Srinivasan G, Azher ZL, Christensen BC, Salas LA, Tsongalis GJ, Palisoul SM, Perreard L, Kolling FW, Vaickus LJ, Levy JJ. An initial game-theoretic assessment of enhanced tissue preparation and imaging protocols for improved deep learning inference of spatial transcriptomics from tissue morphology. Brief Bioinform 2024;25:bbae476. PMID: 39367648; PMCID: PMC11452536; DOI: 10.1093/bib/bbae476.
Abstract
The application of deep learning to spatial transcriptomics (ST) can reveal relationships between gene expression and tissue architecture. Prior work has demonstrated that inferring gene expression from tissue histomorphology can discern these spatial molecular markers to enable population-scale studies, reducing the fiscal barriers associated with large-scale spatial profiling. However, while most improvements in algorithmic performance have focused on improving model architectures, little is known about how the quality of tissue preparation and imaging can affect deep learning model training for spatial inference from morphology and its potential for widespread clinical adoption. Prior studies for ST inference from histology typically utilize manually stained frozen sections with imaging on non-clinical-grade scanners. Training such models on ST cohorts is also costly. We hypothesize that adopting tissue processing and imaging practices that mirror standards for clinical implementation (permanent sections, automated tissue staining, and clinical-grade scanning) can significantly improve model performance. An enhanced specimen processing and imaging protocol was developed for deep learning-based ST inference from morphology. This protocol featured the Visium CytAssist assay to permit automated hematoxylin and eosin staining (e.g. Leica Bond), 40×-resolution imaging, and joining of multiple patients' tissue sections per capture area prior to ST profiling. Using a cohort of 13 pathologic T stage III colorectal cancer patients, we compared the performance of models trained on slides prepared using enhanced versus traditional (i.e. manual staining and low-resolution imaging) protocols. Leveraging Inceptionv3 neural networks, we predicted gene expression across serial, histologically matched tissue sections using whole slide images (WSI) from both protocols.
The data Shapley was used to quantify and compare marginal performance gains on a patient-by-patient basis attributed to using the enhanced protocol versus the actual costs of spatial profiling. Findings indicate that training and validating on WSI acquired through the enhanced protocol as opposed to the traditional method resulted in improved performance at lower fiscal cost. In the realm of ST, the enhancement of deep learning architectures frequently captures the spotlight; however, the significance of specimen processing and imaging is often understated. This research, informed through a game-theoretic lens, underscores the substantial impact that specimen preparation/imaging can have on spatial transcriptomic inference from morphology. It is essential to integrate such optimized processing protocols to facilitate the identification of prognostic markers at a larger scale.
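The data Shapley valuation described above averages each patient's marginal contribution to model performance over random orderings of the cohort. A minimal Monte Carlo sketch follows; the additive per-patient utility is hypothetical, standing in for the validation performance of the ST-inference model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-patient performance gains (illustrative values only).
TRUE_VALUE = {0: 0.05, 1: 0.20, 2: 0.10}
patients = list(TRUE_VALUE)

def utility(subset):
    """Placeholder for model performance trained on `subset` of patients.

    Additive here for simplicity; in the paper the utility would be the
    validation performance of a model trained on those patients' spots.
    """
    return sum(TRUE_VALUE[i] for i in subset)

def monte_carlo_shapley(n_perms=2000):
    # Average each patient's marginal contribution over random orderings.
    shap = {i: 0.0 for i in patients}
    for _ in range(n_perms):
        order = rng.permutation(patients)
        prefix, prev = [], 0.0
        for i in order:
            prefix.append(int(i))
            cur = utility(prefix)
            shap[int(i)] += cur - prev
            prev = cur
    return {i: v / n_perms for i, v in shap.items()}

values = monte_carlo_shapley()
print(values)
```

Comparing each patient's estimated Shapley value against the per-patient cost of spatial profiling is the kind of marginal-gain-versus-cost accounting the abstract describes.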
Affiliation(s)
- Michael Y Fatemi
- Department of Computer Science, University of Virginia, Charlottesville, VA 22903, USA
- Yunrui Lu
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH 03766, USA
- Alos B Diallo
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH 03766, USA
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH 03756, USA
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH 03756, USA
- Gokul Srinivasan
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH 03766, USA
- Zarif L Azher
- Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA
- Brock C Christensen
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH 03756, USA
- Lucas A Salas
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH 03756, USA
- Gregory J Tsongalis
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH 03766, USA
- Scott M Palisoul
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH 03766, USA
- Laurent Perreard
- Genomics Shared Resource, Dartmouth Cancer Center, Lebanon, NH 03756, USA
- Fred W Kolling
- Genomics Shared Resource, Dartmouth Cancer Center, Lebanon, NH 03756, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH 03766, USA
- Joshua J Levy
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH 03766, USA
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH 03756, USA
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH 03756, USA
- Department of Dermatology, Dartmouth Health, Lebanon, NH 03756, USA
- Department of Pathology and Laboratory Medicine, Cedars Sinai Medical Center, Los Angeles, CA 90048, USA
- Department of Computational Biomedicine, Cedars Sinai Medical Center, Los Angeles, CA 90048, USA
7
Fan J, Zhang X, Zeng N, Liu S, He H, Luo L, He C, Ma H. Stain transformation using Mueller matrix guided generative adversarial networks. Opt Lett 2024;49:5135-5138. PMID: 39270248; DOI: 10.1364/ol.537220.
Abstract
Recently, virtual staining techniques, which can bypass the chemical staining process of traditional histopathological examination and thereby save time and resources, have attracted growing attention. Meanwhile, as an emerging tool for characterizing specific tissue structures in a label-free manner, Mueller matrix microscopy can supply structural information that may not be apparent in bright-field images. In this Letter, we propose the Mueller matrix guided generative adversarial networks (MMG-GAN). By integrating polarization information provided by Mueller matrix microscopy, the MMG-GAN enables the effective transformation of input H&E-stained images into corresponding Masson's trichrome (MT)-stained images. The experimental results demonstrate the accuracy of the images generated by MMG-GAN and reveal the potential of incorporating Mueller matrix polarization information for further stain transformation tasks, laying the foundation for future polarimetry-assisted digital pathology.
8
Zhu R, He H, Chen Y, Yi M, Ran S, Wang C, Wang Y. Deep learning for rapid virtual H&E staining of label-free glioma tissue from hyperspectral images. Comput Biol Med 2024;180:108958. PMID: 39094325; DOI: 10.1016/j.compbiomed.2024.108958.
Abstract
Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow necessitates intricate processing, specialized laboratory infrastructure, and specialist pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning with hyperspectral imaging, aiming to convert hyperspectral images into virtual H&E-stained images accurately and rapidly. The method overcomes the limitations of chemical H&E staining by capturing tissue information at different wavelengths, providing comprehensive and detailed tissue composition information comparable to genuine H&E staining. In comparison with various generator structures, the U-Net exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference times. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, was developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E staining images of gliomas at a high speed of 3.81 mm2/s. This approach paves the way for a novel, expedited route to histological staining.
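The two image-quality metrics reported in this abstract can be sketched as follows. Note that this SSIM uses a single global window for brevity, whereas standard implementations (and presumably the paper's evaluation) compute it over local sliding windows:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(ref, test, data_range=1.0):
    """Simplified single-window SSIM over the whole image.

    Library versions (e.g. skimage's structural_similarity) average the
    same formula over local windows; this global form just shows the terms.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                                   # stand-in "real" image
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)  # stand-in "virtual" image
print(round(psnr(ref, noisy), 2), round(ssim_global(ref, noisy), 3))
```

PSNR measures pixel-wise fidelity on a log scale, while SSIM tracks perceived structural similarity; reporting both, as the paper does, guards against a generator that scores well on one but not the other.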
Affiliation(s)
- Ruohua Zhu
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Haiyang He
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Yuzhe Chen
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Ming Yi
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Shengdong Ran
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Chengde Wang
- Department of Neurosurgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
- Yi Wang
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China; Wenzhou Institute, University of Chinese Academy of Sciences, Jinlian Road 1, Wenzhou, 325001, China
9
Latonen L, Koivukoski S, Khan U, Ruusuvuori P. Virtual staining for histology by deep learning. Trends Biotechnol 2024;42:1177-1191. PMID: 38480025; DOI: 10.1016/j.tibtech.2024.02.009.
Abstract
In pathology and biomedical research, histology is the cornerstone method for tissue analysis. Currently, the histological workflow consumes plenty of chemicals, water, and time for staining procedures. Deep learning is now enabling digital replacement of parts of the histological staining procedure. In virtual staining, histological stains are created by training neural networks to produce stained images from an unstained tissue image, or through transferring information from one stain to another. These technical innovations provide more sustainable, rapid, and cost-effective alternatives to traditional histological pipelines, but their development is in an early phase and requires rigorous validation. In this review we cover the basic concepts of virtual staining for histology and provide future insights into the utilization of artificial intelligence (AI)-enabled virtual histology.
Affiliation(s)
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland.
- Sonja Koivukoski
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Umair Khan
- Institute of Biomedicine, University of Turku, Turku, Finland

10
Grignaffini F, Barbuto F, Troiano M, Piazzo L, Simeoni P, Mangini F, De Stefanis C, Onetti Muda A, Frezza F, Alisi A. The Use of Artificial Intelligence in the Liver Histopathology Field: A Systematic Review. Diagnostics (Basel) 2024; 14:388. [PMID: 38396427] [PMCID: PMC10887838] [DOI: 10.3390/diagnostics14040388]
Abstract
Digital pathology (DP) has begun to play a key role in the evaluation of liver specimens. Recent studies have shown that a workflow that combines DP and artificial intelligence (AI) applied to histopathology has potential value in supporting the diagnosis, treatment evaluation, and prognosis prediction of liver diseases. Here, we provide a systematic review of the use of this workflow in the field of hepatology. Based on the PRISMA 2020 criteria, a search of the PubMed, SCOPUS, and Embase electronic databases was conducted, applying inclusion/exclusion filters. The articles were evaluated by two independent reviewers, who extracted the specifications and objectives of each study, the AI tools used, and the results obtained. From the 266 initial records identified, 25 eligible studies were selected, mainly conducted on human liver tissues. Most of the studies were performed using whole-slide imaging systems for image acquisition and applied different machine learning and deep learning methods for image pre-processing, segmentation, feature extraction, and classification. Of note, most of the selected studies demonstrated good performance as classifiers of liver histological images compared to pathologist annotations. Promising results to date bode well for the not-too-distant inclusion of these techniques in clinical practice.
Affiliation(s)
- Flavia Grignaffini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Francesco Barbuto
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Maurizio Troiano
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Patrizio Simeoni
- National Transport Authority (NTA), D02 WT20 Dublin, Ireland
- Faculty of Lifelong Learning, South East Technological University (SETU), R93 V960 Carlow, Ireland
- Fabio Mangini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Cristiano De Stefanis
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Fabrizio Frezza
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Anna Alisi
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy

11
Levy JJ, Davis MJ, Chacko RS, Davis MJ, Fu LJ, Goel T, Pamal A, Nafi I, Angirekula A, Suvarna A, Vempati R, Christensen BC, Hayden MS, Vaickus LJ, LeBoeuf MR. Intraoperative margin assessment for basal cell carcinoma with deep learning and histologic tumor mapping to surgical site. NPJ Precis Oncol 2024; 8:2. [PMID: 38172524] [PMCID: PMC10764333] [DOI: 10.1038/s41698-023-00477-7]
Abstract
Successful treatment of solid cancers relies on complete surgical excision of the tumor, either for definitive treatment or before adjuvant therapy. Intraoperative and postoperative radial sectioning, the most common form of margin assessment, can lead to incomplete excision and increase the risk of recurrence and repeat procedures. Mohs micrographic surgery is associated with complete removal of basal cell and squamous cell carcinoma through real-time margin assessment of 100% of the peripheral and deep margins. Real-time assessment in many tumor types is constrained by tissue size, complexity, and specimen processing/assessment time during general anesthesia. We developed an artificial intelligence platform to reduce tissue preprocessing and histological assessment time through automated grossing recommendations and mapping and orientation of the tumor to the surgical specimen. Using basal cell carcinoma as a model system, the results demonstrate that this approach can address surgical laboratory efficiency bottlenecks for rapid and complete intraoperative margin assessment.
Affiliation(s)
- Joshua J Levy
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA.
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Matthew J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Michael J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Lucy J Fu
- Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
- Tarushii Goel
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Akash Pamal
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Virginia, Charlottesville, VA, 22903, USA
- Irfan Nafi
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Stanford University, Palo Alto, CA, 94305, USA
- Abhinav Angirekula
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Illinois Urbana-Champaign, Champaign, IL, 61820, USA
- Anish Suvarna
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Ram Vempati
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Brock C Christensen
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Matthew S Hayden
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Matthew R LeBoeuf
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA

12
Fatemi MY, Lu Y, Diallo AB, Srinivasan G, Azher ZL, Christensen BC, Salas LA, Tsongalis GJ, Palisoul SM, Perreard L, Kolling FW, Vaickus LJ, Levy JJ. The Overlooked Role of Specimen Preparation in Bolstering Deep Learning-Enhanced Spatial Transcriptomics Workflows. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2023:2023.10.09.23296700. [PMID: 37873287] [PMCID: PMC10593052] [DOI: 10.1101/2023.10.09.23296700]
Abstract
The application of deep learning methods to spatial transcriptomics has shown promise in unraveling the complex relationships between gene expression patterns and tissue architecture as they pertain to various pathological conditions. Deep learning methods that can infer gene expression patterns directly from tissue histomorphology can expand the capability to discern spatial molecular markers within tissue slides. However, current methods utilizing these techniques are hindered by substantial variability in tissue preparation and characteristics, which can impede their broader adoption. Furthermore, training deep learning models using spatial transcriptomics on small study cohorts remains a costly endeavor, necessitating novel tissue preparation processes that enhance assay reliability, resolution, and scalability. This study investigated the impact of an enhanced specimen processing workflow for facilitating a deep learning-based spatial transcriptomics assessment. The enhanced workflow leveraged the flexibility of the Visium CytAssist assay to permit automated H&E staining (e.g., Leica Bond) of tissue slides, whole-slide imaging at 40× resolution, and multiplexing of tissue sections from multiple patients within individual capture areas for spatial transcriptomics profiling. Using a cohort of thirteen pT3 stage colorectal cancer (CRC) patients, we compared the efficacy of deep learning models trained on slides prepared using the enhanced workflow against the traditional workflow, which relies on manual tissue staining and standard imaging of tissue slides. Leveraging InceptionV3 neural networks, we aimed to predict gene expression patterns across matched serial tissue sections, each stemming from a distinct workflow but aligned based on persistent histological structures. Findings indicate that the enhanced workflow considerably outperformed the traditional spatial transcriptomics workflow.
Gene expression profiles predicted from enhanced tissue slides also yielded expression patterns more topologically consistent with the ground truth. This led to enhanced statistical precision in pinpointing biomarkers associated with distinct spatial structures. These insights can potentially elevate diagnostic and prognostic biomarker detection by broadening the range of spatial molecular markers linked to metastasis and recurrence. Future endeavors will further explore these findings to enrich our comprehension of various diseases and uncover molecular pathways with greater nuance. Combining deep learning with spatial transcriptomics provides a compelling avenue to enrich our understanding of tumor biology and improve clinical outcomes. For results of the highest fidelity, however, effective specimen processing is crucial, and fostering collaboration between histotechnicians, pathologists, and genomics specialists is essential to herald this new era in spatial transcriptomics-driven cancer research.
13
Wei S, Si L, Huang T, Du S, Yao Y, Dong Y, Ma H. Deep-learning-based cross-modality translation from Stokes image to bright-field contrast. JOURNAL OF BIOMEDICAL OPTICS 2023; 28:102911. [PMID: 37867633] [PMCID: PMC10587695] [DOI: 10.1117/1.jbo.28.10.102911]
Abstract
Significance: Mueller matrix (MM) microscopy has proven to be a powerful tool for probing microstructural characteristics of biological samples down to the subwavelength scale. However, in clinical practice, doctors usually rely on bright-field microscopy images of stained tissue slides to identify characteristic features of specific diseases and make accurate diagnoses. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties across modalities more efficiently and stably.
Aim: In this work, we propose a computational image translation technique based on deep learning to enable bright-field microscopy contrast using snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images allows the translated bright-field images to be unaffected by variations in light source and samples.
Approach: We adopted CycleGAN as the translation model to avoid the requirement of co-registered image pairs in training. This method can generate images equivalent to bright-field images with different staining styles for the same region.
Results: Pathological slices of liver and breast tissues with hematoxylin and eosin staining, and lung tissues with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of our method. The output results were evaluated by four image quality assessment methods.
Conclusions: By comparing the cross-modality translation performance with MM images, we found that Stokes images, with the advantages of faster acquisition and independence from light intensity and image registration, can be well translated to bright-field images.
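The cycle-consistency constraint that CycleGAN relies on, and that removes the need for co-registered image pairs, is easy to state concretely. Below is a minimal pure-Python sketch on a flattened toy image; the `f`/`g` lambdas are invented stand-ins for the two generator networks, not the paper's models:

```python
def cycle_consistency_loss(pixels, f, g):
    """L1 cycle loss (1/N) * sum |g(f(x)) - x|: translating a pixel with f
    (e.g., Stokes -> bright-field) and back with g should recover the
    original, which is what makes unpaired translation trainable."""
    return sum(abs(g(f(p)) - p) for p in pixels) / len(pixels)

# Toy stand-ins for the two generators (NOT the paper's networks):
f = lambda p: 2.0 * p + 1.0      # "forward" translation
g = lambda p: (p - 1.0) / 2.0    # exact inverse: "backward" translation

image = [i / 15.0 for i in range(16)]          # flattened toy 4x4 image
perfect = cycle_consistency_loss(image, f, g)  # ~0: cycle is consistent

g_bad = lambda p: p / 2.0                      # mismatched inverse
imperfect = cycle_consistency_loss(image, f, g_bad)  # residual of 0.5 per pixel
```

When `g` is an exact inverse of `f`, the loss vanishes; any mismatch leaves a residual that training penalizes alongside the adversarial losses.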
Affiliation(s)
- Shilong Wei
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Lu Si
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tongyu Huang
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Shan Du
- University of Chinese Academy of Sciences, Shenzhen Hospital, Department of Pathology, Shenzhen, China
- Yue Yao
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Yang Dong
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Hui Ma
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Tsinghua University, Department of Physics, Beijing, China

14
Chen G, Zhao X, Dankovskyy M, Ansah-Zame A, Alghamdi U, Liu D, Wei R, Zhao J, Zhou A. A novel role of RNase L in the development of nonalcoholic steatohepatitis. FASEB J 2023; 37:e23158. [PMID: 37615181] [PMCID: PMC10715709] [DOI: 10.1096/fj.202300621r]
Abstract
Nonalcoholic fatty liver disease (NAFLD) is the most common chronic liver disease, affecting about 25% of the population globally. NAFLD has the potential to cause significant liver damage in many patients because it can progress to nonalcoholic steatohepatitis (NASH) and cirrhosis, which substantially increase disease morbidity and mortality. Despite the key role of innate immunity in disease progression, the underlying molecular and pathogenic mechanisms remain to be elucidated. RNase L is a key enzyme in interferon action against viral infection and displays pleiotropic biological functions such as control of cell proliferation, apoptosis, and autophagy. Recent studies have demonstrated that RNase L is involved in innate immunity. In this study, we revealed that RNase L contributed to the development of NAFLD, which further progressed to NASH in a time-dependent fashion after RNase L wild-type (WT) and knockout mice were fed a high-fat, high-cholesterol diet. RNase L WT mice showed significantly more severe NASH, evidenced by widespread macrovesicular steatosis, hepatocyte ballooning degeneration, inflammation, and fibrosis, although physiological and biochemical data indicated that both types of mice developed obesity, hyperglycemia, hypercholesterolemia, liver dysfunction, and systemic inflammation to different extents. Further investigation demonstrated that RNase L was responsible for the expression of some key genes in lipid metabolism, inflammation, and fibrosis signaling. Taken together, our results suggest that a novel therapeutic intervention for NAFLD may be developed based on regulating the expression and activity of RNase L.
Affiliation(s)
- Guanmin Chen
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Xiaotong Zhao
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Maksym Dankovskyy
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Abigail Ansah-Zame
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Uthman Alghamdi
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Danting Liu
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Ruhan Wei
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Jianjun Zhao
- Department of Cancer Biology, Cleveland Clinic, Cleveland, OH 44195, USA
- Aimin Zhou
- Department of Chemistry, Cleveland State University, Cleveland, OH 44115, USA
- Center for Gene Regulation in Health and Diseases, Cleveland State University, Cleveland, OH 44115, USA

15
Yan R, He Q, Liu Y, Ye P, Zhu L, Shi S, Gou J, He Y, Guan T, Zhou G. Unpaired virtual histological staining using prior-guided generative adversarial networks. Comput Med Imaging Graph 2023; 105:102185. [PMID: 36764189] [DOI: 10.1016/j.compmedimag.2023.102185]
Abstract
Fibrosis is an inevitable stage in the development of chronic liver disease and has an irreplaceable role in characterizing the degree of disease progression. Histopathological diagnosis is the gold standard for the interpretation of fibrosis parameters. Conventional hematoxylin-eosin (H&E) staining can only reflect the gross structure of the tissue and the distribution of hepatocytes, while Masson trichrome can highlight specific types of collagen fiber structure, thus providing the structural information necessary for fibrosis scoring. However, the high costs in time, money, and patient specimens, as well as the non-uniform preparation and staining process, make the conversion of existing H&E staining into virtual Masson trichrome staining an attractive solution for fibrosis evaluation. Existing translation approaches fail to extract fiber features accurately enough, and the staining decoder is unable to converge due to the inconsistent color of physical staining. In this work, we propose a prior-guided generative adversarial network, based on unpaired data, for effective generation of Masson trichrome-stained images from the corresponding H&E-stained images. Trained on a small training set, our method takes full advantage of prior knowledge to set up better constraints on both the encoder and the decoder. Experiments indicate the superior performance of our method, which surpasses previous approaches. For various liver diseases, our results demonstrate a high correlation between the staging of real and virtual stains (ρ=0.82; 95% CI: 0.73-0.89). In addition, our fine-tuning strategy is able to standardize the staining color and reduce the memory and computational burden, which can be employed in clinical assessment.
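The staging agreement above is summarized with Spearman's ρ (0.82; 95% CI: 0.73-0.89). As a reminder of what that statistic computes, here is a self-contained sketch: rank each score list (ties share their average rank), then take the Pearson correlation of the ranks. This is a generic illustration, not the authors' evaluation code:

```python
def rankdata(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values starting at position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors.
    Assumes len(x) == len(y) >= 2 and non-constant inputs."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it operates on ranks, the statistic rewards any monotone agreement between real and virtual fibrosis stages, not just a linear relationship.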
Affiliation(s)
- Renao Yan
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Qiming He
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Yiqing Liu
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Peng Ye
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Lianghui Zhu
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Shanshan Shi
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Jizhou Gou
- The Third People's Hospital of Shenzhen, Buji Buran Road 29, Shenzhen, 518112, Guangdong, China
- Yonghong He
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China
- Tian Guan
- Shenzhen International Graduate School, Tsinghua University, Xili University City, Shenzhen, 518055, Guangdong, China.
- Guangde Zhou
- The Third People's Hospital of Shenzhen, Buji Buran Road 29, Shenzhen, 518112, Guangdong, China.

16
Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. LIGHT, SCIENCE & APPLICATIONS 2023; 12:57. [PMID: 36864032] [PMCID: PMC9981740] [DOI: 10.1038/s41377-023-01104-7]
Abstract
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Affiliation(s)
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Xilin Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, 90095, USA.
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA.

17
Unstained Tissue Imaging and Virtual Hematoxylin and Eosin Staining of Histologic Whole Slide Images. J Transl Med 2023; 103:100070. [PMID: 36801642] [DOI: 10.1016/j.labinv.2023.100070]
Abstract
Tissue structures, phenotypes, and pathology are routinely investigated based on histology. This includes chemically staining the transparent tissue sections to make them visible to the human eye. Although chemical staining is fast and routine, it permanently alters the tissue and often consumes hazardous reagents. Moreover, when adjacent tissue sections are used for combined measurements, cell-wise resolution is lost because the sections represent different parts of the tissue. Hence, techniques that provide visual information on the basic tissue structure while enabling additional measurements from the exact same tissue section are required. Here, we tested unstained tissue imaging for the development of computational hematoxylin and eosin (HE) staining. We used unsupervised deep learning (CycleGAN) and whole slide images of prostate tissue sections to compare the performance of imaging tissue in paraffin, deparaffinized in air, and deparaffinized in mounting medium, with section thicknesses varying between 3 and 20 μm. We showed that although thicker sections increase the information content of tissue structures in the images, thinner sections generally perform better in providing information that can be reproduced in virtual staining. According to our results, tissue imaged in paraffin and as deparaffinized provides a good overall representation of the tissue for virtually HE-stained images. Further, using a pix2pix model, we showed that the reproduction of overall tissue histology can be clearly improved with image-to-image translation using supervised learning and pixel-wise ground truth. We also showed that virtual HE staining can be used for various tissues and with both 20× and 40× imaging magnifications. Although the performance and methods of virtual staining need further development, our study provides evidence of the feasibility of whole-slide unstained microscopy as a fast and cheap approach to producing virtual staining of tissue histology while sparing the exact same tissue section for subsequent utilization with follow-up methods at single-cell resolution.
18
Liu K, Li B, Wu W, May C, Chang O, Knezevich S, Reisch L, Elmore J, Shapiro L. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2023; 2023:1918-1927. [PMID: 36865487] [PMCID: PMC9977454] [DOI: 10.1109/wacv56688.2023.00196]
Abstract
Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stainings. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
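Detection performance of the kind reported above is typically scored by matching predicted and ground-truth nuclei using intersection-over-union (IoU). The paper's exact matching protocol is not given here, so the sketch below only shows the metric itself, for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A prediction is then usually counted as a true positive when its IoU with an unmatched ground-truth object exceeds a threshold such as 0.5, from which precision and recall follow.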
Affiliation(s)
- Beibin Li
- University of Washington
- Microsoft Research

19
McAlpine E, Michelow P, Liebenberg E, Celik T. Are synthetic cytology images ready for prime time? A comparative assessment of real and synthetic urine cytology images. J Am Soc Cytopathol 2022; 12:126-135. [PMID: 37013344] [DOI: 10.1016/j.jasc.2022.10.001]
Abstract
INTRODUCTION: The use of synthetic data in pathology has, to date, predominantly involved augmenting existing pathology data to improve supervised machine learning algorithms. We present an alternative use case: using synthetic images to augment cytology training when the availability of real-world examples is limited. Moreover, we compare the assessment of real and synthetic urine cytology images by pathology personnel to explore the usefulness of this technology in a real-world setting.
MATERIALS AND METHODS: Synthetic urine cytology images were generated using a custom-trained conditional StyleGAN3 model. A morphologically balanced 60-image dataset of real and synthetic urine cytology images was created for an online image survey system to assess differences in how pathology personnel visually perceive real and synthetic urine cytology images.
RESULTS: A total of 12 participants were recruited to answer the 60-image survey. The study population had a median age of 36.5 years and a median of 5 years of pathology experience. There was no significant difference in diagnostic error rates between real and synthetic images, nor was there a significant difference in subjective image quality scores between real and synthetic images when assessed on an individual observer basis.
CONCLUSIONS: The ability of generative adversarial network (GAN) technology to generate highly realistic urine cytology images was demonstrated. Furthermore, there was no difference in how pathology personnel perceived the subjective quality of synthetic images, nor was there a difference in diagnostic error rates between real and synthetic urine cytology images. This has important implications for the application of GAN technology to cytology teaching and learning.
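The key result above is a null finding: diagnostic error rates did not differ significantly between real and synthetic images. One generic way to test such a difference is a permutation test on per-image error indicators; the sketch below uses invented counts for illustration only, not the study's data:

```python
import random

def permutation_test(errors_a, errors_b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in error rates.
    errors_a / errors_b are 0/1 lists (1 = diagnostic error); returns the
    fraction of label shuffles whose rate difference is at least as
    extreme as the observed one (an empirical p-value)."""
    rate = lambda v: sum(v) / len(v)
    observed = abs(rate(errors_a) - rate(errors_b))
    pooled = list(errors_a) + list(errors_b)
    n_a = len(errors_a)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(rate(pooled[:n_a]) - rate(pooled[n_a:])) >= observed - 1e-12:
            hits += 1
    return hits / n_perm

# Invented illustrative counts (NOT the study's data): 5/30 errors on real
# images versus 6/30 on synthetic ones.
real_errors = [1] * 5 + [0] * 25
synth_errors = [1] * 6 + [0] * 24
p = permutation_test(real_errors, synth_errors)  # large p: no evidence of a difference
```

A large p-value here means the observed gap in error rates is entirely typical of random relabelings, which is the sense in which "no significant difference" is meant.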
Affiliation(s)
- Ewen McAlpine
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; Ampath National Laboratories, Johannesburg, South Africa.
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; National Health Laboratory Services, Johannesburg, South Africa
- Eric Liebenberg
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Turgay Celik
- School of Electrical and Information Engineering and Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
20
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022]
Abstract
In routine medical practice, obtaining test results is often time-consuming and expensive, burdening both doctors and patients. Digital pathology applies computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) offers a great advantage in the data-analytics phase: extensive research has shown that AI algorithms can produce timely, standardized conclusions from whole slide images. In conjunction with the development of high-throughput sequencing technologies, such algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review examines how the most widely used image data, hematoxylin-eosin-stained tissue slide images, can help address the imbalance of healthcare resources. The article focuses on the role of deep learning in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences (corresponding author; Tel.: +86 18513983324)
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences (corresponding author; Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356)
21
McAlpine E, Michelow P, Liebenberg E, Celik T. Is it real or not? Toward artificial intelligence-based realistic synthetic cytology image generation to augment teaching and quality assurance in pathology. J Am Soc Cytopathol 2022; 11:123-132. [PMID: 35249862 DOI: 10.1016/j.jasc.2022.02.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 12/26/2021] [Revised: 01/20/2022] [Accepted: 02/03/2022] [Indexed: 06/14/2023]
Abstract
INTRODUCTION: Urine cytology offers a rapid and relatively inexpensive method to diagnose urothelial neoplasia. In our setting of a public sector laboratory in South Africa, urothelial neoplasia is rare, compromising pathology training in this specific aspect of cytology. Artificial intelligence-based synthetic image generation, specifically the use of generative adversarial networks (GANs), offers a solution to this problem. MATERIALS AND METHODS: A limited but morphologically diverse dataset of 1000 malignant urothelial cytology images was used to train a StyleGAN3 model to create completely novel, synthetic examples of malignant urine cytology, using computing resources within reach of most pathology departments worldwide. RESULTS: The trained GAN model generated realistic, morphologically diverse examples of malignant urine cytology images from a modest dataset. Although the model is capable of generating realistic images, we also present examples in which unrealistic and artifactual images were generated, illustrating the need for manual curation when using this technology in a training context. CONCLUSIONS: We present a proof-of-concept illustration of creating synthetic malignant urine cytology images with machine learning to augment cytology training when real-world examples are sparse. Despite significant morphologic diversity in staining, slide background, the diagnostic malignant cellular elements, other nondiagnostic cellular elements, and artifacts, visually acceptable and varied results are achievable with limited data and computing resources.
Affiliation(s)
- Ewen McAlpine
- Department of Anatomical Pathology, National Health Laboratory Service, Johannesburg, South Africa; Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa.
- Pamela Michelow
- Department of Anatomical Pathology, National Health Laboratory Service, Johannesburg, South Africa; Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Eric Liebenberg
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Turgay Celik
- School of Electrical and Information Engineering and Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
22
Yoo TK, Kim BY, Jeong HK, Kim HK, Yang D, Ryu IH. Simple Code Implementation for Deep Learning-Based Segmentation to Evaluate Central Serous Chorioretinopathy in Fundus Photography. Transl Vis Sci Technol 2022; 11:22. [PMID: 35147661 PMCID: PMC8842634 DOI: 10.1167/tvst.11.2.22] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Indexed: 11/25/2022]
Abstract
Purpose: Central serous chorioretinopathy (CSC) is a retinal disease that frequently shows resolution and recurrence with serous detachment of the neurosensory retina. Here, we present a deep learning analysis of subretinal fluid (SRF) lesion segmentation in fundus photographs to evaluate CSC. Methods: We collected 194 fundus photographs of SRF lesions from patients with CSC. Three graders manually annotated the entire SRF area in the retinal images. The dataset was randomly separated into training (90%) and validation (10%) sets. We used a U-Net segmentation model based on conditional generative adversarial networks (pix2pix) to detect the SRF lesions. The algorithms were trained and validated using Google Colaboratory, so researchers needed no prior coding skills or dedicated computing resources to implement the code. Results: On the validation set, the Jaccard index and Dice coefficient were 0.619 and 0.763, respectively. In most cases, the segmentation results overlapped with most of the reference areas in the annotated images; however, predictions were less accurate for atypical SRF lesions. Using Colaboratory, the proposed segmentation task ran easily in a web-based environment without setup or personal computing resources. Conclusions: The results suggest that a U-Net model based on the pix2pix algorithm is suitable for automatic segmentation of SRF lesions to evaluate CSC. Translational Relevance: Our code implementation has the potential to facilitate ophthalmology research; in particular, deep learning-based segmentation can assist in the development of pathological lesion detection solutions.
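The overlap metrics this study reports, the Jaccard index and the Dice coefficient, compare a predicted mask against a reference annotation. A minimal pure-Python sketch (the masks below are toy values for illustration, not data from the study):

```python
def jaccard_dice(pred, truth):
    """Compute the Jaccard index and Dice coefficient for two binary
    masks, each given as a set of (row, col) pixel coordinates."""
    inter = len(pred & truth)
    union = len(pred | truth)
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return jaccard, dice

# Toy example: two 16-pixel square masks overlapping in a 4-pixel corner
pred = {(r, c) for r in range(4) for c in range(4)}
truth = {(r, c) for r in range(2, 6) for c in range(2, 6)}
j, d = jaccard_dice(pred, truth)
print(round(j, 3), round(d, 3))  # → 0.143 0.25
```

Note that the two metrics are related (Dice = 2J / (1 + J)), which is why papers often report both from the same validation run.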
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Korea Air Force, Cheongju, South Korea; B&VIIT Eye Center, Seoul, South Korea
- Bo Yi Kim
- Department of Ophthalmology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Hyun Kyo Jeong
- Department of Ophthalmology, 10th Fighter Wing, Republic of Korea Air Force, Suwon, South Korea
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
- Donghyun Yang
- Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea; Visuworks, Seoul, South Korea
23
McAlpine ED, Michelow P, Celik T. The Utility of Unsupervised Machine Learning in Anatomic Pathology. Am J Clin Pathol 2022; 157:5-14. [PMID: 34302331 DOI: 10.1093/ajcp/aqab085] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 01/21/2021] [Accepted: 04/18/2021] [Indexed: 01/29/2023]
Abstract
OBJECTIVES: Developing accurate supervised machine learning algorithms is hampered by the lack of representative annotated datasets. Most data in anatomic pathology are unlabeled, and creating large, annotated datasets is a time-consuming and laborious process. Unsupervised learning, which does not require annotated data, has the potential to address this challenge. This review introduces the concept of unsupervised learning and illustrates how clustering, generative adversarial networks (GANs), and autoencoders may address the lack of annotated data in anatomic pathology. METHODS: A review of unsupervised learning, with examples from the literature, was carried out. RESULTS: Clustering can be used as part of semisupervised learning, in which labels are propagated from a subset of annotated data points to the remaining unlabeled points in a dataset. GANs may assist by generating large amounts of synthetic data and performing color normalization. Autoencoders allow a network to be trained on a large, unlabeled dataset and the learned representations to be transferred to a classifier using a smaller, labeled subset (unsupervised pretraining). CONCLUSIONS: Unsupervised machine learning techniques such as clustering, GANs, and autoencoders, used individually or in combination, may help address the lack of annotated data in pathology and improve the development of supervised learning models.
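The label-propagation idea described in the RESULTS section can be sketched in a few lines: annotate a handful of points, then give each unlabeled point the label of its nearest annotated neighbor. This is a 1-nearest-neighbor stand-in for the cluster-based propagation the review discusses; the coordinates and labels below are invented for illustration:

```python
def propagate_labels(labeled, unlabeled):
    """labeled: list of ((x, y), label); unlabeled: list of (x, y).
    Assign each unlabeled point the label of its nearest annotated
    point (squared Euclidean distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    out = []
    for p in unlabeled:
        _, label = min(labeled, key=lambda lp: dist2(lp[0], p))
        out.append((p, label))
    return out

# Two annotated points propagate labels to two unlabeled ones
labeled = [((0.0, 0.0), "benign"), ((10.0, 10.0), "malignant")]
unlabeled = [(1.0, 0.5), (9.0, 9.5)]
print(propagate_labels(labeled, unlabeled))
# → [((1.0, 0.5), 'benign'), ((9.0, 9.5), 'malignant')]
```

In practice one would cluster first and propagate within clusters (or use a library routine such as scikit-learn's `LabelPropagation`); the sketch only shows the core idea of extending a small annotated subset over unlabeled data.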
Affiliation(s)
- Ewen D McAlpine
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- National Health Laboratory Service, Johannesburg, South Africa
- Turgay Celik
- School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, South Africa
- Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
24
Mehrvar S, Himmel LE, Babburi P, Goldberg AL, Guffroy M, Janardhan K, Krempley AL, Bawa B. Deep Learning Approaches and Applications in Toxicologic Histopathology: Current Status and Future Perspectives. J Pathol Inform 2021; 12:42. [PMID: 34881097 PMCID: PMC8609289 DOI: 10.4103/jpi.jpi_36_21] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 05/26/2021] [Accepted: 07/18/2021] [Indexed: 12/13/2022]
Abstract
Whole slide imaging enables a wide array of digital image analysis tools that are revolutionizing pathology. Recent advances in digital pathology and deep convolutional neural networks have created an enormous opportunity to improve workflow efficiency, provide more quantitative, objective, and consistent assessments of pathology datasets, and develop decision support systems. Such innovations are already making their way into clinical practice. However, progress in machine learning, in particular deep learning (DL), has been considerably slower in nonclinical toxicology studies. Histopathology data from toxicology studies are critical to the drug development process, in which regulatory bodies require assessment of drug-related toxicity in laboratory animals and its impact on human safety in clinical trials. Because of the high volume of slides routinely evaluated, low-throughput or narrowly performing DL methods that work well in small-scale diagnostic studies or for identifying a single abnormality are tedious and impractical for toxicologic pathology. Furthermore, regulatory requirements around good laboratory practice are a major hurdle for the adoption of DL in toxicologic pathology. This paper reviews the major DL concepts, emerging applications, and examples of DL in toxicologic pathology image analysis. We end with a discussion of specific challenges and directions for future research.
Affiliation(s)
- Shima Mehrvar
- Preclinical Safety, AbbVie Inc., North Chicago, IL, USA
- Pradeep Babburi
- Business Technology Solutions, AbbVie Inc., North Chicago, IL, USA
25
Levy J, Haudenschild C, Barwick C, Christensen B, Vaickus L. Topological Feature Extraction and Visualization of Whole Slide Images using Graph Neural Networks. Pac Symp Biocomput 2021; 26:285-296. [PMID: 33691025 PMCID: PMC7959046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/26/2022]
Abstract
Whole-slide images (WSI) are digitized representations of thin sections of stained tissue from various patient sources (biopsy, resection, exfoliation, fluid) and often exceed 100,000 pixels in a given spatial dimension. Deep learning approaches to digital pathology typically extract information from sub-images (patches) and treat them as independent entities, ignoring contributing information from vital large-scale architectural relationships. Modeling approaches that capture higher-order dependencies between neighborhoods of tissue patches have demonstrated the potential to improve predictive accuracy while capturing the most essential slide-level information for prognosis, diagnosis, and integration with other omics modalities. Here, we review two promising methods for capturing the macro- and microarchitecture of histology images: Graph Neural Networks, which contextualize patch-level information from neighboring patches through message passing, and Topological Data Analysis, which distills contextual information into its essential components. We introduce a modeling framework, WSI-GTFE, that integrates these two approaches to identify and quantify key pathogenic information pathways. As a simple use case, we utilize these topological methods to develop a tumor invasion score for staging colon cancer.
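The message-passing idea this abstract describes, each patch updating its representation from its graph neighbors, can be sketched with scalar features. The three-patch graph below is a toy example to show the mechanism, not part of WSI-GTFE itself:

```python
def message_pass(features, adjacency):
    """One mean-aggregation message-passing step: each patch's new
    feature is the average of its own feature and its neighbors'."""
    new = {}
    for node, feat in features.items():
        neigh = adjacency.get(node, [])
        vals = [feat] + [features[n] for n in neigh]
        new[node] = sum(vals) / len(vals)
    return new

# Toy patch graph: three patches in a line, with scalar features
features = {"p0": 0.0, "p1": 1.0, "p2": 2.0}
adjacency = {"p0": ["p1"], "p1": ["p0", "p2"], "p2": ["p1"]}
print(message_pass(features, adjacency))
# → {'p0': 0.5, 'p1': 1.0, 'p2': 1.5}
```

Real GNN layers replace the mean with learned, weighted aggregations over high-dimensional patch embeddings, and stacking several such steps lets each patch absorb context from progressively larger tissue neighborhoods.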
Affiliation(s)
- Joshua Levy
- Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Lebanon, NH 03756, USA. To whom correspondence should be addressed.