1. Xiong Z, He J, Valkema P, Nguyen TQ, Naesens M, Kers J, Verbeek FJ. Advances in kidney biopsy lesion assessment through dense instance segmentation. Artif Intell Med 2025; 164:103111. PMID: 40174354. DOI: 10.1016/j.artmed.2025.103111.
Abstract
Renal biopsies are the gold standard for the diagnosis of kidney diseases. Lesion scores made by renal pathologists are semi-quantitative and exhibit high inter-observer variability. Automating lesion classification within segmented anatomical structures can provide decision support in quantification analysis, thereby reducing inter-observer variability. Nevertheless, classifying lesions in regions of interest (ROIs) is clinically challenging due to (a) the large number of densely packed anatomical objects, (b) class imbalance across different compartments (at least three), (c) significant variation in the size and shape of anatomical objects, and (d) the presence of multi-label lesions per anatomical structure. Existing models cannot address these complexities in an efficient and generic manner. This paper presents an analysis toward a generalized solution for datasets from various sources (pathology departments) with different types of lesions. Our approach utilizes two sub-networks: dense instance segmentation and lesion classification. We introduce DiffRegFormer, an end-to-end dense instance segmentation sub-network designed for multi-class, multi-scale objects within ROIs. Combining diffusion models, transformers, and R-CNNs, DiffRegFormer is a computationally efficient framework that can recognize over 500 objects across three anatomical classes, i.e., glomeruli, tubuli, and arteries, within an ROI. On a dataset of 303 ROIs from 148 Jones' silver-stained renal whole slide images (WSIs), our approach outperforms previous methods, achieving an Average Precision of 52.1% (detection) and 46.8% (segmentation). Moreover, our lesion classification sub-network achieves 89.2% precision and 64.6% recall on 21,889 object patches from the 303 ROIs. Lastly, our model demonstrates direct domain transfer to PAS-stained renal WSIs without fine-tuning.
Affiliation(s)
- Zhan Xiong
- LIACS, Leiden University, Snellius Gebouw, Niels Bohrweg 1, 2333 CA, Leiden, The Netherlands
- Junling He
- Department of Pathology, Leiden University Medical Center, Albinusdreef 2, 2333 ZA, Leiden, The Netherlands
- Pieter Valkema
- Department of Pathology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
- Tri Q Nguyen
- Department of Pathology, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Maarten Naesens
- Department of Nephrology and Renal Transplantation, University Hospitals Leuven, Herestraat 49, 3000, Leuven, Belgium; Department of Microbiology, Immunology, and Transplantation, KU Leuven, Oude Markt 13, 3000, Leuven, Belgium
- Jesper Kers
- Department of Pathology, Leiden University Medical Center, Albinusdreef 2, 2333 ZA, Leiden, The Netherlands; Department of Pathology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands; Van't Hoff Institute for Molecular Sciences, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands
- Fons J Verbeek
- LIACS, Leiden University, Snellius Gebouw, Niels Bohrweg 1, 2333 CA, Leiden, The Netherlands
2. Yang F, He Q, Wang Y, Zeng S, Xu Y, Ye J, He Y, Guan T, Wang Z, Li J. Unsupervised stain augmentation enhanced glomerular instance segmentation on pathology images. Int J Comput Assist Radiol Surg 2025; 20:225-236. PMID: 38848032. DOI: 10.1007/s11548-024-03154-7.
Abstract
PURPOSE In pathology images, different stains highlight different glomerular structures, so a supervised deep learning-based glomerular instance segmentation model trained on an individual stain performs poorly on other stains. However, it is difficult to obtain a training set with multiple stains because labeling pathology images is very time-consuming and tedious. Therefore, in this paper, we propose an unsupervised stain augmentation-based method for the segmentation of glomerular instances. METHODS In this study, we realized conversion between different staining methods (PAS, MT, and PASM) by contrastive unpaired translation (CUT), thus improving the staining diversity of the training set. Moreover, we replaced the backbone of Mask R-CNN with a Swin Transformer to further improve the efficiency of feature extraction and thus achieve better performance in the instance segmentation task. RESULTS To validate the presented method, we constructed a dataset from 216 WSIs covering the three stains. In-depth experiments verified that the stain augmentation-based instance segmentation method outperforms existing methods across all metrics for the PAS, PASM, and MT stains. Furthermore, ablation experiments demonstrate the effectiveness of the proposed module. CONCLUSION This study demonstrates the potential of unsupervised stain augmentation to improve glomerular segmentation in pathology analysis. Future research could extend this approach to other complex segmentation tasks to further explore the potential of stain augmentation techniques across different domains of pathology image analysis.
Affiliation(s)
- Fan Yang
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Qiming He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Yanxia Wang
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Siqi Zeng
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Research Institute of Tsinghua, Pearl River Delta, Guangzhou, China
- Yingming Xu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Jing Ye
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Tian Guan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Zhe Wang
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Jing Li
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- School of Basic Medicine, Fourth Military Medical University, Xi'an, China
3. Athreya S, Radhachandran A, Ivezić V, Sant VR, Arnold CW, Speier W. Enhancing Ultrasound Image Quality Across Disease Domains: Application of Cycle-Consistent Generative Adversarial Network and Perceptual Loss. JMIR Biomed Eng 2024; 9:e58911. PMID: 39689310. DOI: 10.2196/58911.
Abstract
BACKGROUND Numerous studies have explored image processing techniques aimed at enhancing ultrasound images to narrow the performance gap between low-quality portable devices and high-end ultrasound equipment. These investigations often use registered image pairs created by modifying the same image through methods like downsampling or adding noise, rather than using separate images from different machines. Additionally, they rely on organ-specific features, limiting the models' generalizability across various imaging conditions and devices. The challenge remains to develop a universal framework capable of improving image quality across different devices and conditions, independent of registration or specific organ characteristics. OBJECTIVE This study aims to develop a robust framework that enhances the quality of ultrasound images, particularly those captured with compact, portable devices, which are often constrained by low quality due to hardware limitations. The framework is designed to effectively process nonregistered ultrasound image pairs, a common challenge in medical imaging, across various clinical settings and device types. By addressing these challenges, the research seeks to provide a more generalized and adaptable solution that can be widely applied across diverse medical scenarios, improving the accessibility and quality of diagnostic imaging. METHODS A retrospective analysis was conducted using a cycle-consistent generative adversarial network (CycleGAN) framework enhanced with perceptual loss to improve the quality of ultrasound images, focusing on nonregistered image pairs from various organ systems. The perceptual loss was integrated to preserve anatomical integrity by comparing deep features extracted from pretrained neural networks. The model's performance was evaluated against corresponding high-resolution images, ensuring that the enhanced outputs closely mimic those from high-end ultrasound devices. The model was trained and validated on a publicly available, diverse dataset to ensure robustness and generalizability across different imaging scenarios. RESULTS The advanced CycleGAN framework, enhanced with perceptual loss, significantly outperformed the previous state of the art, stable CycleGAN, on multiple evaluation metrics. Specifically, our method achieved a structural similarity index of 0.2889 versus 0.2502 (P<.001), a peak signal-to-noise ratio of 15.8935 versus 14.9430 (P<.001), and a learned perceptual image patch similarity score of 0.4490 versus 0.5005 (P<.001). These results demonstrate the model's superior ability to enhance image quality while preserving critical anatomical details, thereby improving diagnostic usefulness. CONCLUSIONS This study presents a significant advancement in ultrasound imaging by leveraging a CycleGAN model enhanced with perceptual loss to bridge the quality gap between images from different devices. By processing nonregistered image pairs, the model not only enhances visual quality but also preserves essential anatomical structures crucial for accurate diagnosis. This approach has the potential to democratize high-quality ultrasound imaging, making it accessible through low-cost portable devices and thereby improving health care outcomes, particularly in resource-limited settings. Future research will focus on further validation and optimization for clinical use.
Affiliation(s)
- Shreeram Athreya
- Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, CA, United States
- Ashwath Radhachandran
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Vedrana Ivezić
- Medical Informatics, University of California Los Angeles, Los Angeles, CA, United States
- Vivek R Sant
- Department of Surgery, The University of Texas Southwestern Medical Center, Dallas, TX, United States
- Corey W Arnold
- Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, CA, United States
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Medical Informatics, University of California Los Angeles, Los Angeles, CA, United States
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
- Department of Pathology and Laboratory Medicine, University of California Los Angeles, Los Angeles, CA, United States
- William Speier
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Medical Informatics, University of California Los Angeles, Los Angeles, CA, United States
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
4. Fayzullin A, Ivanova E, Grinin V, Ermilov D, Solovyeva S, Balyasin M, Bakulina A, Nikitin P, Valieva Y, Kalinichenko A, Arutyunyan A, Lychagin A, Timashev P. Towards accurate and efficient diagnoses in nephropathology: An AI-based approach for assessing kidney transplant rejection. Comput Struct Biotechnol J 2024; 24:571-582. PMID: 39258238. PMCID: PMC11385065. DOI: 10.1016/j.csbj.2024.08.011.
Abstract
The Banff classification is useful for diagnosing renal transplant rejection. However, it has limitations due to subjectivity and varying concordance in physicians' assessments. Artificial intelligence (AI) can help standardize research, increase objectivity, and accurately quantify morphological characteristics, improving reproducibility in clinical practice. This study aims to develop an AI-based solution for diagnosing acute kidney transplant rejection by introducing automated evaluation of prognostic morphological patterns. The proposed approach aims to help accurately distinguish borderline changes from rejection. We trained a deep-learning model utilizing a fine-tuned Mask R-CNN architecture, which achieved a mean Average Precision of 0.74 for the segmentation of renal tissue structures. A strong positive nonlinear correlation was found between the measured infiltration areas and fibrosis, indicating the model's potential for assessing these parameters in kidney biopsies. ROC analysis showed high predictive ability for distinguishing between ci and i scores based on measurements of infiltration area and fibrosis area. The AI model demonstrated high precision in predicting clinical scores, which makes it a promising assistive tool for pathologists. The application of AI in nephropathology holds potential for further advances, including automated morphometric evaluation, 3D histological models, and faster processing to enhance diagnostic accuracy and efficiency.
Affiliation(s)
- Alexey Fayzullin
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- World-Class Research Center "Digital Biodesign and Personalized Healthcare", Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Elena Ivanova
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- B.V. Petrovsky Russian Research Center of Surgery, 2 Abrikosovskiy lane, Moscow 119991, Russia
- Victor Grinin
- PJSC VimpelCom, 10 8th March Street, Moscow 127083, Russia
- Dmitry Ermilov
- PJSC VimpelCom, 10 8th March Street, Moscow 127083, Russia
- Svetlana Solovyeva
- B.V. Petrovsky Russian Research Center of Surgery, 2 Abrikosovskiy lane, Moscow 119991, Russia
- Maxim Balyasin
- Scientific and Educational Resource Center, Peoples' Friendship University of Russia, 6 Miklukho-Maklaya st., Moscow 117198, Russia
- Alesia Bakulina
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Pavel Nikitin
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Yana Valieva
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- World-Class Research Center "Digital Biodesign and Personalized Healthcare", Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Alina Kalinichenko
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Aleksey Lychagin
- Department of Trauma, Orthopedics and Disaster Surgery, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Peter Timashev
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- World-Class Research Center "Digital Biodesign and Personalized Healthcare", Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
5. Timakova A, Ananev V, Fayzullin A, Zemnuhov E, Rumyantsev E, Zharov A, Zharkov N, Zotova V, Shchelokova E, Demura T, Timashev P, Makarov V. LVI-PathNet: Segmentation-classification pipeline for detection of lymphovascular invasion in whole slide images of lung adenocarcinoma. J Pathol Inform 2024; 15:100395. PMID: 39328468. PMCID: PMC11426154. DOI: 10.1016/j.jpi.2024.100395.
Abstract
Lymphovascular invasion (LVI) in lung cancer is a significant prognostic factor that influences treatment and outcomes, yet its reliable detection is challenging due to interobserver variability. This study aims to develop a deep learning model for LVI detection using whole slide images (WSIs) and evaluate its effectiveness within a pathologist's information system. Experienced pathologists annotated blood vessels and invading tumor cells in 162 WSIs of non-mucinous lung adenocarcinoma sourced from two external datasets and one internal dataset. Two models were trained to segment vessels and identify images with LVI features. The DeepLabV3+ model achieved an Intersection-over-Union of 0.8840 and an area under the receiver operating characteristic curve (AUC-ROC) of 0.9869 in vessel segmentation. For LVI classification, the ensemble model achieved an F1-score of 0.9683 and an AUC-ROC of 0.9987. The model demonstrated robustness and was unaffected by variations in staining and image quality. The pilot study showed that pathologists' evaluation time for LVI detection decreased by an average of 16.95%, and by 21.5% in "hard cases". The model facilitated consistent diagnostic assessments, suggesting potential for broader applications in detecting pathological changes in blood vessels and other lung pathologies.
Affiliation(s)
- Anna Timakova
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Vladislav Ananev
- Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, Veliky Novgorod 173003, Russia
- Alexey Fayzullin
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Egor Zemnuhov
- Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, Veliky Novgorod 173003, Russia
- Egor Rumyantsev
- Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, Veliky Novgorod 173003, Russia
- Andrey Zharov
- Helmholtz National Medical Research Center for Eye Diseases, 14/19 Sadovaya-Chernogryazskaya, Moscow 105062, Russia
- Nicolay Zharkov
- Institute for Morphology and Digital Pathology, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Varvara Zotova
- Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, Veliky Novgorod 173003, Russia
- Elena Shchelokova
- Institute for Morphology and Digital Pathology, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Tatiana Demura
- Institute for Morphology and Digital Pathology, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Peter Timashev
- Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- World-Class Research Center "Digital Biodesign and Personalized Healthcare", Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya st., Moscow 119991, Russia
- Vladimir Makarov
- Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, Veliky Novgorod 173003, Russia
6. Wiedenmann M, Barch M, Chang PS, Giltnane J, Risom T, Zijlstra A. An Immunofluorescence-Guided Segmentation Model in Hematoxylin and Eosin Images Is Enabled by Tissue Artifact Correction Using a Cycle-Consistent Generative Adversarial Network. Mod Pathol 2024; 37:100591. PMID: 39147031. DOI: 10.1016/j.modpat.2024.100591.
Abstract
Despite recent advances, the adoption of computer vision methods in clinical and commercial applications has been hampered by the limited availability of the accurate ground truth tissue annotations required to train robust supervised models. Generating such ground truth can be accelerated by annotating tissue molecularly using immunofluorescence (IF) staining and mapping these annotations to a post-IF hematoxylin and eosin (terminal H&E) stain. Mapping annotations between IF and terminal H&E increases both the scale and the accuracy with which ground truth can be generated. However, discrepancies between terminal H&E and conventional H&E caused by IF tissue processing have limited this implementation. We sought to overcome this challenge and achieve compatibility between these parallel modalities using synthetic image generation, in which a cycle-consistent generative adversarial network (CycleGAN) was applied to transfer the appearance of conventional H&E so that it emulates terminal H&E. These synthetic emulations allowed us to train a deep learning model for the segmentation of epithelium in terminal H&E that could be validated against IF staining of epithelial cytokeratins. Combining this segmentation model with the CycleGAN stain transfer model enabled performant epithelium segmentation in conventional H&E images. The approach demonstrates that accurate segmentation models for the breadth of conventional H&E data can be trained without human expert annotations by leveraging molecular annotation strategies such as IF, so long as the tissue impacts of the molecular annotation protocol are captured by generative models deployed prior to segmentation.
Affiliation(s)
- Marcel Wiedenmann
- Department of Computer and Information Science, University of Konstanz, Konstanz, Germany
- Mariya Barch
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Patrick S Chang
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Jennifer Giltnane
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Tyler Risom
- Department of Research Pathology, Genentech Inc, South San Francisco, California
- Andries Zijlstra
- Department of Research Pathology, Genentech Inc, South San Francisco, California; Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee
7. Yang M, Yang R, Tao S, Zhang X, Wang M. Unsupervised domain adaptive building semantic segmentation network by edge-enhanced contrastive learning. Neural Netw 2024; 179:106581. PMID: 39128276. DOI: 10.1016/j.neunet.2024.106581.
Abstract
Unsupervised domain adaptation (UDA) is a weakly supervised learning technique that classifies images in the target domain when the source domain has labeled samples and the target domain has unlabeled samples. Due to the complexity of imaging conditions and of remote sensing image content, using UDA to accurately extract artificial features such as buildings from high-spatial-resolution (HSR) imagery remains challenging. In this study, we propose a new UDA method for building extraction, the contrastive domain adaptation network (CDANet), which utilizes adversarial learning and contrastive learning techniques. CDANet consists of a single multitask generator and dual discriminators. The generator employs a region and edge dual-branch structure that strengthens its edge extraction ability and benefits the extraction of small and densely distributed buildings. The dual discriminators receive the region and edge prediction outputs and achieve multilevel adversarial learning. During adversarial training, CDANet aligns similar pixel features across domains in the embedding space by constructing a regional pixelwise contrastive loss. A self-training (ST) strategy based on pseudolabel generation is further utilized to address the target intradomain discrepancy. Comprehensive experiments validate CDANet on three publicly accessible datasets, namely the WHU, Austin, and Massachusetts datasets. Ablation experiments show that the generator network structure, the contrastive loss, and the ST strategy all improve building extraction accuracy. Method comparisons validate that CDANet achieves superior performance to several state-of-the-art methods, including AdaptSegNet, AdvEnt, IntraDA, FDANet, and ADRS, in terms of F1 score and mIoU.
Affiliation(s)
- Mengyuan Yang
- Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023, China; School of Geography, Nanjing Normal University, Nanjing 210023, China; Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China; State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China
- Rui Yang
- Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023, China; School of Geography, Nanjing Normal University, Nanjing 210023, China; Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China; State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China
- Shikang Tao
- Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023, China; School of Geography, Nanjing Normal University, Nanjing 210023, China; Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China; State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China
- Xin Zhang
- Qilu Aerospace Information Research Institute, Jinan 250132, China; Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
- Min Wang
- Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023, China; School of Geography, Nanjing Normal University, Nanjing 210023, China; Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China; State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China
8. Zhang J, Lu JD, Chen B, Pan S, Jin L, Zheng Y, Pan M. Vision transformer introduces a new vitality to the classification of renal pathology. BMC Nephrol 2024; 25:337. PMID: 39385124. PMCID: PMC11465538. DOI: 10.1186/s12882-024-03800-x.
Abstract
Recent advancements in computer vision within the field of artificial intelligence (AI) have made significant inroads into the medical domain. However, the application of AI for classifying renal pathology remains challenging due to the subtle variations across multiple renal pathological classifications. Vision Transformers (ViT), an adaptation of the Transformer model for image recognition, have demonstrated superior capabilities in capturing global features and providing greater explainability. In our study, we developed a ViT model using a diverse set of stained renal histopathology images to evaluate its effectiveness in classifying renal pathology. A total of 1861 whole slide images (WSIs) stained with HE, MASSON, PAS, and PASM were collected from 635 patients. Renal tissue images were then extracted, tiled, and categorized into 14 classes on the basis of renal pathology. We employed the classic ViT model from the Timm library, utilizing images sized 384 × 384 pixels with 16 × 16 pixel patches, to train the classification model. A comparative analysis was conducted to evaluate the performance of the ViT model against traditional convolutional neural network (CNN) models. The results indicated that the ViT model demonstrated superior recognition ability (accuracy: 0.96-0.99). Furthermore, we visualized the identification process of the ViT model to investigate potentially significant pathological ultrastructures. Our study demonstrated that ViT models outperformed CNN models in accurately classifying renal pathology. Additionally, ViT models are able to focus on specific, significant structures within renal histopathology, which could be crucial for identifying novel and meaningful pathological features in the diagnosis and treatment of renal disease.
Affiliation(s)
- Ji Zhang
- Department of Nephrology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, PR China
- Jia Dan Lu
- Department of Nephrology, The Second Affiliated Hospital, Yuying Children's Hospital of Wenzhou Medical University, 109 Xueyuan Road, Wenzhou, Zhejiang, PR China
- Bo Chen
- Department of Nephrology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, PR China
- ShuFang Pan
- Department of Nephrology, The Second Affiliated Hospital, Yuying Children's Hospital of Wenzhou Medical University, 109 Xueyuan Road, Wenzhou, Zhejiang, PR China
- LingWei Jin
- Department of Nephrology, The Second Affiliated Hospital, Yuying Children's Hospital of Wenzhou Medical University, 109 Xueyuan Road, Wenzhou, Zhejiang, PR China
- Yu Zheng
- Department of Nephrology, The Second Affiliated Hospital, Yuying Children's Hospital of Wenzhou Medical University, 109 Xueyuan Road, Wenzhou, Zhejiang, PR China
- Min Pan
- Department of Nephrology, The Second Affiliated Hospital, Yuying Children's Hospital of Wenzhou Medical University, 109 Xueyuan Road, Wenzhou, Zhejiang, PR China.
9
Latonen L, Koivukoski S, Khan U, Ruusuvuori P. Virtual staining for histology by deep learning. Trends Biotechnol 2024; 42:1177-1191. [PMID: 38480025 DOI: 10.1016/j.tibtech.2024.02.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2023] [Revised: 02/14/2024] [Accepted: 02/15/2024] [Indexed: 09/07/2024]
Abstract
In pathology and biomedical research, histology is the cornerstone method for tissue analysis. Currently, the histological workflow consumes substantial amounts of chemicals, water, and time for staining procedures. Deep learning is now enabling digital replacement of parts of the histological staining procedure. In virtual staining, histological stains are created by training neural networks to produce stained images from an unstained tissue image, or by transferring information from one stain to another. These technical innovations provide more sustainable, rapid, and cost-effective alternatives to traditional histological pipelines, but their development is at an early stage and requires rigorous validation. In this review, we cover the basic concepts of virtual staining for histology and provide insights into the future utilization of artificial intelligence (AI)-enabled virtual histology.
Affiliation(s)
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland.
- Sonja Koivukoski
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Umair Khan
- Institute of Biomedicine, University of Turku, Turku, Finland
10
Zhou H, Wang Y, Zhang B, Zhou C, Vonsky MS, Mitrofanova LB, Zou D, Li Q. Unsupervised domain adaptation for histopathology image segmentation with incomplete labels. Comput Biol Med 2024; 171:108226. [PMID: 38428096 DOI: 10.1016/j.compbiomed.2024.108226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Revised: 02/04/2024] [Accepted: 02/25/2024] [Indexed: 03/03/2024]
Abstract
Stain variations pose a major challenge to deep learning segmentation algorithms in histopathology images. Current unsupervised domain adaptation methods show promise in improving model generalization across diverse staining appearances but demand abundant, accurately labeled source-domain data. This paper considers a novel scenario: an unsupervised domain adaptation segmentation task with incompletely labeled source data. To address it, the paper proposes a Stain-Adaptive Segmentation Network with Incomplete Labels (SASN-IL). Specifically, the algorithm consists of two stages. The first is an incomplete-label correction stage, involving reliable model selection and label correction to rectify false-negative regions in the incomplete labels. The second is an unsupervised domain adaptation stage that achieves segmentation on the target domain; in this stage, we introduce an adaptive stain transformation module, which adjusts the degree of transformation based on segmentation performance. We evaluate our method on a gastric cancer dataset, demonstrating significant improvements, with a 10.01% increase in Dice coefficient over the baseline and competitive performance relative to existing methods.
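For reference, the Dice coefficient used in the result above is defined as 2|A∩B| / (|A| + |B|) for predicted and ground-truth masks A and B; a minimal sketch on flat binary masks (illustrative, not the paper's implementation):

```python
# Dice coefficient on binary masks, the overlap metric behind the reported
# 10.01% gain. Masks are flat lists of 0/1 values; a real segmentation
# pipeline would compute this on image tensors instead.

def dice(pred, target):
    assert len(pred) == len(target), "masks must have the same size"
    intersection = sum(p * t for p, t in zip(pred, target))  # |A ∩ B|
    total = sum(pred) + sum(target)                          # |A| + |B|
    if total == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1 / (2+1) = 0.666...
```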
Affiliation(s)
- Huihui Zhou
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
- Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
- Benyan Zhang
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Chunhua Zhou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Maxim S Vonsky
- D.I. Mendeleev Institute for Metrology, St. Petersburg 190005, Russia
- Duowu Zou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China.
11
Feng C, Ong K, Young DM, Chen B, Li L, Huo X, Lu H, Gu W, Liu F, Tang H, Zhao M, Yang M, Zhu K, Huang L, Wang Q, Marini GPL, Gui K, Han H, Sanders SJ, Li L, Yu W, Mao J. Artificial intelligence-assisted quantification and assessment of whole slide images for pediatric kidney disease diagnosis. Bioinformatics 2024; 40:btad740. [PMID: 38058211 PMCID: PMC10796177 DOI: 10.1093/bioinformatics/btad740] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 11/13/2023] [Accepted: 12/05/2023] [Indexed: 12/08/2023] Open
Abstract
MOTIVATION Pediatric kidney disease is a widespread, progressive condition that severely impacts the growth and development of children. Chronic kidney disease is often more insidious in children than in adults, usually requiring a renal biopsy for diagnosis. Biopsy evaluation requires extensive examination by trained pathologists, which can be tedious and prone to human error. In this study, we propose an artificial intelligence (AI) method, named AI-based Pediatric Kidney Diagnosis (APKD), to assist pathologists in the accurate segmentation and classification of pediatric kidney structures. RESULTS We collected data from 2935 pediatric patients diagnosed with kidney disease for the development of APKD. The dataset comprised 93,932 histological structures annotated manually by three skilled nephropathologists. APKD achieved an average accuracy of 94% across kidney structure categories, including 99% for the glomerulus. We found a strong correlation between model-based and manual glomerulus detection (Spearman correlation coefficient r = 0.98, P < .001; intraclass correlation coefficient ICC = 0.98, 95% CI = 0.96-0.98). Compared to manual detection, APKD was approximately 5.5 times faster at segmenting glomeruli. Finally, we show how the pathological features extracted by APKD can identify focal abnormalities of the glomerular capillary wall to aid in the early diagnosis of pediatric kidney disease. AVAILABILITY AND IMPLEMENTATION https://github.com/ChunyueFeng/Kidney-DataSet.
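The Spearman correlation reported above (r = 0.98) is Pearson correlation computed on ranks; a minimal sketch assuming no tied values (a real analysis would use scipy.stats.spearmanr, which also handles ties and p-values):

```python
# Spearman rank correlation, the statistic used to compare automated and
# manual glomerulus counts in the study above. Minimal version assuming
# no tied values.

def ranks(xs):
    """1-based ranks of the values in xs (no tie handling)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n + 1) / 2                     # mean rank
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # identical for ry when no ties
    return cov / var

print(spearman([10, 20, 30, 40], [12, 18, 35, 50]))  # 1.0: same ordering
```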
Affiliation(s)
- Chunyue Feng
- Department of Nephrology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou 310000, China
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Kokhaur Ong
- Bioinformatics Institute, A*STAR, Singapore 138673, Singapore
- David M Young
- Institute of Molecular and Cell Biology, A*STAR, Singapore 138673, Singapore
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, 94143, United States
- Bingxian Chen
- Ningbo Konfoong Bioinformation Tech Co., Ltd., Ningbo 315000, China
- Longjie Li
- Bioinformatics Institute, A*STAR, Singapore 138673, Singapore
- Xinmi Huo
- Bioinformatics Institute, A*STAR, Singapore 138673, Singapore
- Haoda Lu
- Bioinformatics Institute, A*STAR, Singapore 138673, Singapore
- Institute for AI in Medicine, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Weizhong Gu
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Department of Pathology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou, 310000, China
- Fei Liu
- Department of Nephrology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou 310000, China
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Hongfeng Tang
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Department of Pathology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou, 310000, China
- Manli Zhao
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Department of Pathology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou, 310000, China
- Min Yang
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Department of Pathology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou, 310000, China
- Kun Zhu
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Department of Pathology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou, 310000, China
- Limin Huang
- Department of Nephrology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou 310000, China
- National Clinical Research Center for Child Health, Hangzhou 310000, China
- Qiang Wang
- Ningbo Konfoong Bioinformation Tech Co., Ltd., Ningbo 315000, China
- Kun Gui
- Ningbo Konfoong Bioinformation Tech Co., Ltd., Ningbo 315000, China
- Hao Han
- Institute of Molecular and Cell Biology, A*STAR, Singapore 138673, Singapore
- Stephan J Sanders
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, 94143, United States
- Lin Li
- Department of Nephrology, Shanghai Changzheng Hospital, Naval Medical University, Shanghai 200003, China
- Weimiao Yu
- Bioinformatics Institute, A*STAR, Singapore 138673, Singapore
- Institute of Molecular and Cell Biology, A*STAR, Singapore 138673, Singapore
- Institute for AI in Medicine, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Jianhua Mao
- Department of Nephrology, Children’s Hospital, Zhejiang University School of Medicine, Hangzhou 310000, China
- National Clinical Research Center for Child Health, Hangzhou 310000, China
12
Cazzaniga G, Rossi M, Eccher A, Girolami I, L'Imperio V, Van Nguyen H, Becker JU, Bueno García MG, Sbaraglia M, Dei Tos AP, Gambaro G, Pagni F. Time for a full digital approach in nephropathology: a systematic review of current artificial intelligence applications and future directions. J Nephrol 2024; 37:65-76. [PMID: 37768550 PMCID: PMC10920416 DOI: 10.1007/s40620-023-01775-w] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Accepted: 08/22/2023] [Indexed: 09/29/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) integration in nephropathology has been growing rapidly in recent years, facing several challenges, including the wide range of histological techniques used, the low occurrence of certain diseases, and the need for data sharing. This review retraces the history of AI in nephropathology and provides insights into potential future developments. METHODS Electronic searches in PubMed-MEDLINE and Embase were performed to extract pertinent articles from the literature. Works on automated image analysis or the application of an AI algorithm to non-neoplastic kidney histological samples were included and analyzed to extract information such as publication year, AI task, and learning type. Preprint servers and reviews were not included. RESULTS Seventy-six (76) original research articles were selected. Most of the studies were conducted in the United States in the last 7 years. To date, research has mainly addressed relatively easy tasks, like single-stain glomerular segmentation. However, there is a trend towards developing more complex tasks, such as glomerular multi-stain classification. CONCLUSION Deep learning has been used to identify patterns in complex histopathology data and looks promising for the comprehensive assessment of renal biopsy through the use of multiple stains and virtual staining techniques. Hybrid and collaborative learning approaches have also been explored to utilize large amounts of unlabeled data. A diverse team of experts, including nephropathologists, computer scientists, and clinicians, is crucial for the development of AI systems for nephropathology: collaborative efforts among multidisciplinary experts result in clinically relevant and effective AI tools.
Affiliation(s)
- Giorgio Cazzaniga
- Department of Medicine and Surgery, Pathology, Fondazione IRCCS San Gerardo dei Tintori, Università di Milano-Bicocca, Monza, Italy.
- Mattia Rossi
- Division of Nephrology, Department of Medicine, University of Verona, Piazzale Aristide Stefani, 1, 37126, Verona, Italy
- Albino Eccher
- Department of Pathology and Diagnostics, University and Hospital Trust of Verona, P.le Stefani n. 1, 37126, Verona, Italy
- Department of Medical and Surgical Sciences for Children and Adults, University of Modena and Reggio Emilia, University Hospital of Modena, Modena, Italy
- Ilaria Girolami
- Department of Pathology and Diagnostics, University and Hospital Trust of Verona, P.le Stefani n. 1, 37126, Verona, Italy
- Vincenzo L'Imperio
- Department of Medicine and Surgery, Pathology, Fondazione IRCCS San Gerardo dei Tintori, Università di Milano-Bicocca, Monza, Italy
- Hien Van Nguyen
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, 77004, USA
- Jan Ulrich Becker
- Institute of Pathology, University Hospital of Cologne, Cologne, Germany
- María Gloria Bueno García
- VISILAB Research Group, E.T.S. Ingenieros Industriales, University of Castilla-La Mancha, Ciudad Real, Spain
- Marta Sbaraglia
- Department of Pathology, Azienda Ospedale-Università Padova, Padua, Italy
- Department of Medicine, University of Padua School of Medicine, Padua, Italy
- Angelo Paolo Dei Tos
- Department of Pathology, Azienda Ospedale-Università Padova, Padua, Italy
- Department of Medicine, University of Padua School of Medicine, Padua, Italy
- Giovanni Gambaro
- Division of Nephrology, Department of Medicine, University of Verona, Piazzale Aristide Stefani, 1, 37126, Verona, Italy
- Fabio Pagni
- Department of Medicine and Surgery, Pathology, Fondazione IRCCS San Gerardo dei Tintori, Università di Milano-Bicocca, Monza, Italy
13
Kassab M, Jehanzaib M, Başak K, Demir D, Keles GE, Turan M. FFPE++: Improving the quality of formalin-fixed paraffin-embedded tissue imaging via contrastive unpaired image-to-image translation. Med Image Anal 2024; 91:102992. [PMID: 37852162 DOI: 10.1016/j.media.2023.102992] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 04/29/2023] [Accepted: 10/02/2023] [Indexed: 10/20/2023]
Abstract
Formalin fixation and paraffin embedding (FFPE) is a technique for preparing and preserving tissue specimens that has been used in histopathology since the late 19th century. The FFPE preparation steps, such as fixation, processing, embedding, microtomy, staining, and coverslipping, often introduce artifacts, owing to the complex histological and cytological characteristics of a tissue specimen. The term "artifacts" includes, but is not limited to, staining inconsistencies, tissue folds, chattering, pen marks, blurring, air bubbles, and contamination. The presence of artifacts may interfere with pathological diagnosis in disease detection, subtyping, grading, and choice of therapy. In this study, we propose FFPE++, an unpaired image-to-image translation method based on contrastive learning with a mixed channel-spatial attention module and a self-regularization loss that drastically corrects the aforementioned artifacts in FFPE tissue sections. Turing tests were performed by 10 board-certified pathologists, each with more than 10 years of experience. These tests, performed for ovarian carcinoma, lung adenocarcinoma, lung squamous cell carcinoma, and papillary thyroid carcinoma, demonstrate the clear superiority of the proposed method in many clinical aspects compared with standard FFPE images. Based on the qualitative experiments and feedback from the Turing tests, we believe that FFPE++ can contribute to substantial diagnostic and prognostic accuracy in clinical pathology and can also improve the performance of AI tools in digital pathology. The code and dataset are publicly available at https://github.com/DeepMIALab/FFPEPlus.
Affiliation(s)
- Mohamad Kassab
- Department of Computer Engineering, Bogazici University, Istanbul, Turkey
- Muhammad Jehanzaib
- Department of Computer Engineering, Bogazici University, Istanbul, Turkey
- Kayhan Başak
- Sağlık Bilimleri University, Kartal Dr. Lütfi Kırdar City Hospital, Department of Pathology, Istanbul, Turkey
- Derya Demir
- Faculty of Medicine, Department of Pathology, Ege University, Izmir, Turkey
- Mehmet Turan
- Department of Computer Engineering, Bogazici University, Istanbul, Turkey.
14
Baldeon-Calisto M, Lai-Yuen SK, Puente-Mejia B. StAC-DA: Structure aware cross-modality domain adaptation framework with image and feature-level adaptation for medical image segmentation. Digit Health 2024; 10:20552076241277440. [PMID: 39229464 PMCID: PMC11369866 DOI: 10.1177/20552076241277440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2024] [Accepted: 08/06/2024] [Indexed: 09/05/2024] Open
Abstract
Objective Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target datasets follow the same probability distribution, and when this assumption is not satisfied their performance degrades significantly. This poses a limitation in medical image analysis, where including information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation. Methods StAC-DA implements image- and feature-level adaptation in a sequential two-step approach. The first step performs an image-level alignment, in which images from the source domain are translated to the target domain in pixel space by a CycleGAN-based model. This model includes a structure-aware network that preserves the shape of the anatomical structure during translation. The second step consists of a feature-level alignment: a U-Net network with deep supervision is trained in an adversarial manner on the translated source-domain images and the target-domain images to produce plausible segmentations for the target domain. Results The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, ranking first in the segmentation of the ascending aorta when adapting from the Magnetic Resonance Imaging (MRI) to the Computed Tomography (CT) domain and from CT to MRI. Conclusions The presented framework overcomes the limitations posed by differing distributions in training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.
Affiliation(s)
- Maria Baldeon-Calisto
- Departamento de Ingeniería Industrial, Colegio de Ciencias e Ingeniería, Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador
- Susana K. Lai-Yuen
- Department of Industrial and Management Systems, University of South Florida, Tampa, FL, USA
- Bernardo Puente-Mejia
- Departamento de Ingeniería Industrial, Colegio de Ciencias e Ingeniería, Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador
15
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010 DOI: 10.1016/j.media.2023.102969] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 08/16/2023] [Accepted: 09/11/2023] [Indexed: 10/08/2023]
Abstract
Deep neural networks have achieved excellent cell and nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data is limited, a setting that previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation, in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets acquired with different imaging modalities, staining protocols, and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance compared with the reference baseline, and it is superior to or on par with fully supervised models trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
16
Nerrienet N, Peyret R, Sockeel M, Sockeel S. Standardized CycleGAN training for unsupervised stain adaptation in invasive carcinoma classification for breast histopathology. J Med Imaging (Bellingham) 2023; 10:067502. [PMID: 38145285 PMCID: PMC10743931 DOI: 10.1117/1.jmi.10.6.067502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 10/25/2023] [Accepted: 11/27/2023] [Indexed: 12/26/2023] Open
Abstract
Purpose Generalization is one of the main challenges of computational pathology. Slide-preparation heterogeneity and the diversity of scanners lead to poor model performance on data from medical centers not seen during training. To achieve stain invariance in breast invasive carcinoma patch classification, we implement a stain translation strategy using CycleGANs for unsupervised image-to-image translation. Those models often suffer from a lack of proper metrics to monitor the training and stop it at a particular point; we also introduce a method to solve this issue. Approach We compare three CycleGAN-based approaches to a baseline classification model obtained without any stain invariance strategy. Two of the proposed approaches use CycleGAN translations at inference or training time to build stain-specific classification models. The third uses them for stain data augmentation during training, which constrains the classification model to learn stain-invariant features. To monitor CycleGAN training, we leverage the Fréchet inception distance (FID) between generated and real samples and use it as a stopping criterion, comparing CycleGAN models stopped with this criterion against models stopped at a fixed number of epochs. Results Baseline metrics are set by training and testing the baseline classification model on a reference stain. We assessed performance using data from three medical centers with H&E and H&E&S staining. Every approach tested in this study improves on the baseline metrics without needing labels on target stains. The stain-augmentation-based approach produced the best results on every stain. Each method's pros and cons are studied and discussed. Moreover, the FID stopping criterion proves superior to stopping after a predefined number of training epochs and has the benefit of not requiring any visual inspection of CycleGAN results.
Conclusion We introduce a method to attain stain invariance for breast invasive carcinoma classification by leveraging CycleGANs' ability to produce realistic translations between various stains. Moreover, we propose a systematic method for scheduling CycleGAN training by using FID as a stopping criterion and prove its superiority to other methods. Finally, we give an insight into the minimal amount of data required for CycleGAN training in a digital histopathology setting.
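The FID-based stopping criterion described above can be sketched as a simple early-stopping loop: after each epoch, FID between generated and real samples is computed, and training stops once it no longer improves. The training step and the per-epoch FID values below are stand-ins, not code from the paper:

```python
# Sketch of FID-based early stopping for CycleGAN training. The FID values
# are supplied as a list here to keep the sketch self-contained; in practice
# each value would come from comparing generated and real stain samples.

def train_with_fid_stopping(fid_per_epoch, patience=3):
    """Return the (1-based) epoch whose checkpoint would be kept."""
    best_fid = float("inf")
    best_epoch = 0
    stale = 0
    for epoch, fid in enumerate(fid_per_epoch, start=1):
        # a real loop would call train_one_epoch(generator, discriminator)
        # here, then compute FID on a held-out batch
        if fid < best_fid:        # lower FID = generated stain closer to real
            best_fid, best_epoch, stale = fid, epoch, 0
        else:
            stale += 1
            if stale >= patience:  # no improvement for `patience` epochs: stop
                break
    return best_epoch

print(train_with_fid_stopping([90, 70, 55, 58, 60, 57]))  # keeps epoch 3
```

The advantage the authors report follows from this shape: the rule needs no visual inspection of translations and adapts the training length to each stain pair instead of fixing an epoch budget in advance.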
17
Artificial intelligence in breast pathology - dawn of a new era. NPJ Breast Cancer 2023; 9:5. [PMID: 36720886 PMCID: PMC9889344 DOI: 10.1038/s41523-023-00507-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Accepted: 01/10/2023] [Indexed: 02/02/2023] Open
18
Berijanian M, Schaadt NS, Huang B, Lotz J, Feuerhake F, Merhof D. Unsupervised many-to-many stain translation for histological image augmentation to improve classification accuracy. J Pathol Inform 2023; 14:100195. [PMID: 36844704 PMCID: PMC9947329 DOI: 10.1016/j.jpi.2023.100195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 01/19/2023] [Accepted: 01/20/2023] [Indexed: 01/26/2023] Open
Abstract
Background Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses challenges especially for supervised tasks, since manual image annotation is an expensive and laborious process. The situation deteriorates even further when image variability is large. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has gained much attention recently, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues. Methods StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissues. An edge detector is incorporated to encourage the network to maintain the shape and structure of the tissues and to produce an edge-preserving translation. Additionally, a subjective test is conducted with medical and technical experts in the field of digital pathology to evaluate the quality of the generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of augmenting the training data with the synthesized images on classification accuracy. Results The results show that adding an edge detector helps to improve the quality of the translated images and to preserve the general structure of the tissues. Quality control and subjective tests with our medical and technical experts show that the real and artificial images cannot be distinguished, confirming that the synthetic images are technically plausible.
Moreover, this research shows that, by augmenting the training dataset with the outputs of the proposed stain translation method, the accuracy of a ResNet-50 breast cancer classifier improves by 8.0% and that of a VGG-16 classifier by 9.3%. Conclusions This research indicates that translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.
Affiliation(s)
- Maryam Berijanian
- Department of Computational Mathematics, Science and Engineering (CMSE), Michigan State University, East Lansing, USA
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Boqiang Huang
- Institute of Image Analysis and Computer Vision, Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany
- Johannes Lotz
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Friedrich Feuerhake
- Institute for Pathology, Hannover Medical School, Hannover, Germany
- Institute for Neuropathology, University Clinic Freiburg, Freiburg, Germany
- Dorit Merhof
- Institute of Image Analysis and Computer Vision, Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Corresponding author at: University of Regensburg, 93040 Regensburg, Germany.
19
Stain-Independent Deep Learning-Based Analysis of Digital Kidney Histopathology. THE AMERICAN JOURNAL OF PATHOLOGY 2023; 193:73-83. [PMID: 36309103 DOI: 10.1016/j.ajpath.2022.09.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 09/10/2022] [Accepted: 09/30/2022] [Indexed: 11/12/2022]
Abstract
Convolutional neural network (CNN)-based image analysis applications in digital pathology (eg, tissue segmentation) require a large amount of annotated data and are mostly trained and applicable on a single stain. Here, a novel concept based on stain augmentation is proposed to develop stain-independent CNNs requiring only one annotated stain. In this benchmark study on stain independence in digital pathology, this approach is comprehensively compared with state-of-the-art techniques including image registration and stain translation, and several modifications thereof. A previously developed CNN for segmentation of periodic acid-Schiff-stained kidney histology was used and applied to various immunohistochemical stainings. Stain augmentation showed very high performance in all evaluated stains and outperformed all other techniques in all structures and stains. Without the need for additional annotations, it enabled segmentation on immunohistochemical stainings with performance nearly comparable to that of the annotated periodic acid-Schiff stain and could further uphold performance on several held-out stains not seen during training. Herein, examples of how this framework can be applied for compartment-specific quantification of immunohistochemical stains for inflammation and fibrosis in animal models and patient biopsy specimens are presented. The results show that stain augmentation is a highly effective approach to enable stain-independent applications of deep-learning segmentation algorithms. This opens new possibilities for broad implementation in digital pathology.
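Stain augmentation of the kind described above is commonly implemented by jittering images in a stain (optical density) color space rather than in RGB. A minimal numpy sketch follows; the Ruifrok-Johnston H&E-DAB stain matrix is the standard published one, but the jitter ranges and function names are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

# Ruifrok-Johnston H&E-DAB stain vectors (rows = stain OD directions),
# as commonly published; not necessarily the matrix used in the paper.
RGB_FROM_HED = np.array([
    [0.65, 0.70, 0.29],
    [0.07, 0.99, 0.11],
    [0.27, 0.57, 0.78],
])
HED_FROM_RGB = np.linalg.inv(RGB_FROM_HED)

def rgb_to_od(rgb):
    """Convert [0, 1] RGB to optical density (Beer-Lambert law)."""
    return -np.log(np.clip(rgb, 1e-6, 1.0))

def od_to_rgb(od):
    """Invert the optical-density transform, clipping back to [0, 1]."""
    return np.clip(np.exp(-od), 0.0, 1.0)

def stain_augment(rgb, alpha_range=0.05, beta_range=0.01, rng=None):
    """Randomly scale/shift each stain channel in stain space.

    alpha_range and beta_range are illustrative jitter magnitudes.
    """
    rng = np.random.default_rng(rng)
    stains = rgb_to_od(rgb) @ HED_FROM_RGB          # decompose into stains
    alpha = rng.uniform(1 - alpha_range, 1 + alpha_range, size=3)
    beta = rng.uniform(-beta_range, beta_range, size=3)
    stains = stains * alpha + beta                  # per-stain jitter
    return od_to_rgb(stains @ RGB_FROM_HED)         # recompose to RGB

# Toy 4x4 "patch" standing in for a tissue tile.
patch = np.random.default_rng(0).uniform(0.2, 0.9, size=(4, 4, 3))
aug = stain_augment(patch, rng=1)
```

Retraining a segmentation model on many such jittered copies exposes it to plausible stain variation without any extra annotation, which is the core idea behind stain-independent training.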
20
Ayorinde JOO, Citterio F, Landrò M, Peruzzo E, Islam T, Tilley S, Taylor G, Bardsley V, Liò P, Samoshkin A, Pettigrew GJ. Artificial Intelligence You Can Trust: What Matters Beyond Performance When Applying Artificial Intelligence to Renal Histopathology? J Am Soc Nephrol 2022; 33:2133-2140. [PMID: 36351761 PMCID: PMC9731632 DOI: 10.1681/asn.2022010069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
Although still in its infancy, artificial intelligence (AI) analysis of kidney biopsy images is anticipated to become an integral aspect of renal histopathology. As these systems are developed, the focus will understandably be on developing ever more accurate models, but successful translation to the clinic will also depend upon other characteristics of the system. In the extreme, deployment of highly performant but "black box" AI is fraught with risk, and high-profile errors could damage future trust in the technology. Furthermore, a major factor determining whether new systems are adopted in clinical settings is whether they are "trusted" by clinicians. Key to unlocking trust will be designing platforms optimized for intuitive human-AI interactions and ensuring that, where judgment is required to resolve ambiguous areas of assessment, the workings of the AI image classifier are understandable to the human observer. Therefore, determining the optimal design for AI systems depends on factors beyond performance, with considerations of goals, interpretability, and safety constraining many design and engineering choices. In this article, we explore challenges that arise in the application of AI to renal histopathology, and consider areas where choices around model architecture, training strategy, and workflow design may be influenced by factors beyond the final performance metrics of the system.
Affiliation(s)
- John O O Ayorinde
- Department of Surgery, University of Cambridge, Addenbrooke's Hospital, Cambridge, United Kingdom
- Victoria Bardsley
- Department of Histopathology, Addenbrooke's Hospital, Cambridge, United Kingdom
- Pietro Liò
- Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Alex Samoshkin
- Office for Translational Research, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Gavin J Pettigrew
- Department of Surgery, University of Cambridge, Addenbrooke's Hospital, Cambridge, United Kingdom
21
Homeyer A, Geißler C, Schwen LO, Zakrzewski F, Evans T, Strohmenger K, Westphal M, Bülow RD, Kargl M, Karjauv A, Munné-Bertran I, Retzlaff CO, Romero-López A, Sołtysiński T, Plass M, Carvalho R, Steinbach P, Lan YC, Bouteldja N, Haber D, Rojas-Carulla M, Vafaei Sadr A, Kraft M, Krüger D, Fick R, Lang T, Boor P, Müller H, Hufnagl P, Zerbe N. Recommendations on compiling test datasets for evaluating artificial intelligence solutions in pathology. Mod Pathol 2022; 35:1759-1769. [PMID: 36088478 PMCID: PMC9708586 DOI: 10.1038/s41379-022-01147-y] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Revised: 07/24/2022] [Accepted: 07/25/2022] [Indexed: 12/24/2022]
Abstract
Artificial intelligence (AI) solutions that automatically extract information from digital histology images have shown great promise for improving pathological diagnosis. Prior to routine use, it is important to evaluate their predictive performance and obtain regulatory approval. This assessment requires appropriate test datasets. However, compiling such datasets is challenging and specific recommendations are missing. A committee of various stakeholders, including commercial AI developers, pathologists, and researchers, discussed key aspects and conducted extensive literature reviews on test datasets in pathology. Here, we summarize the results and derive general recommendations on compiling test datasets. We address several questions: Which and how many images are needed? How to deal with low-prevalence subsets? How can potential bias be detected? How should datasets be reported? What are the regulatory requirements in different countries? The recommendations are intended to help AI developers demonstrate the utility of their products and to help pathologists and regulatory agencies verify reported performance measures. Further research is needed to formulate criteria for sufficiently representative test datasets so that AI solutions can operate with less user intervention and better support diagnostic workflows in the future.
Affiliation(s)
- André Homeyer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359, Bremen, Germany
- Christian Geißler
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Lars Ole Schwen
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359, Bremen, Germany
- Falk Zakrzewski
- Institute of Pathology, Carl Gustav Carus University Hospital Dresden (UKD), TU Dresden (TUD), Fetscherstrasse 74, 01307, Dresden, Germany
- Theodore Evans
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Klaus Strohmenger
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
- Max Westphal
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359, Bremen, Germany
- Roman David Bülow
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Michaela Kargl
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010, Graz, Austria
- Aray Karjauv
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Isidre Munné-Bertran
- MoticEurope, S.L.U., C. Les Corts, 12 Poligono Industrial, 08349, Barcelona, Spain
- Carl Orge Retzlaff
- Technische Universität Berlin, DAI-Labor, Ernst-Reuter-Platz 7, 10587, Berlin, Germany
- Markus Plass
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010, Graz, Austria
- Rita Carvalho
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
- Peter Steinbach
- Helmholtz-Zentrum Dresden Rossendorf, Bautzner Landstraße 400, 01328, Dresden, Germany
- Yu-Chia Lan
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Nassim Bouteldja
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- David Haber
- Lakera AI AG, Zelgstrasse 7, 8003, Zürich, Switzerland
- Alireza Vafaei Sadr
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Daniel Krüger
- Olympus Soft Imaging Solutions GmbH, Johann-Krane-Weg 39, 48149, Münster, Germany
- Rutger Fick
- Tribun Health, 2 Rue du Capitaine Scott, 75015, Paris, France
- Tobias Lang
- Mindpeak GmbH, Zirkusweg 2, 20359, Hamburg, Germany
- Peter Boor
- Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Heimo Müller
- Medical University of Graz, Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Neue Stiftingtalstrasse 6, 8010, Graz, Austria
- Peter Hufnagl
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
- Norman Zerbe
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute of Pathology, Charitéplatz 1, 10117, Berlin, Germany
22
Vasiljević J, Nisar Z, Feuerhake F, Wemmert C, Lampert T. CycleGAN for virtual stain transfer: Is seeing really believing? Artif Intell Med 2022; 133:102420. [PMID: 36328671 DOI: 10.1016/j.artmed.2022.102420] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 03/16/2022] [Accepted: 10/02/2022] [Indexed: 01/18/2023]
Abstract
Digital pathology is an area prone to high variation due to multiple factors which can strongly affect diagnostic quality and the visual appearance of Whole-Slide-Images (WSIs). The state-of-the-art methods to deal with such variation tend to address it through style-transfer-inspired approaches. Usually, these solutions directly apply successful approaches from the literature, potentially with some task-related modifications. The majority of the obtained results are visually convincing; however, this paper shows that this is no guarantee that such images can be directly used for either medical diagnosis or reducing domain shift. This article shows that a slight modification in a stain transfer architecture, such as the choice of normalisation layer, while resulting in a variety of visually appealing results, surprisingly greatly affects the ability of a stain transfer model to reduce domain shift. Through extensive qualitative and quantitative evaluations, we confirm that translations resulting from different stain transfer architectures are distinct from each other and from the real samples. Therefore, conclusions made by visual inspection or pretrained model evaluation might be misleading.
Affiliation(s)
- Jelica Vasiljević
- ICube, University of Strasbourg, CNRS (UMR 7357), France; University of Belgrade, Belgrade, Serbia; Faculty of Science, University of Kragujevac, Kragujevac, Serbia
- Zeeshan Nisar
- ICube, University of Strasbourg, CNRS (UMR 7357), France
- Friedrich Feuerhake
- Institute of Pathology, Hannover Medical School, Germany; University Clinic, Freiburg, Germany
- Cédric Wemmert
- ICube, University of Strasbourg, CNRS (UMR 7357), France
- Thomas Lampert
- ICube, University of Strasbourg, CNRS (UMR 7357), France
23
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
24
Bouteldja N, Hölscher DL, Bülow RD, Roberts IS, Coppo R, Boor P. Tackling stain variability using CycleGAN-based stain augmentation. J Pathol Inform 2022; 13:100140. [PMID: 36268102 PMCID: PMC9577138 DOI: 10.1016/j.jpi.2022.100140] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 09/04/2022] [Accepted: 09/07/2022] [Indexed: 10/28/2022] Open
Abstract
Background Considerable inter- and intra-laboratory stain variability exists in pathology, representing a challenge in the development and application of deep learning (DL) approaches. Since tackling all sources of stain variability with manual annotation is not feasible, we here investigated and compared unsupervised DL approaches to reduce the consequences of stain variability in kidney pathology. Methods We aimed to improve the applicability of a pretrained DL segmentation model to 3 external multi-centric cohorts with large stain variability. In contrast to the traditional approach of training generative adversarial networks (GAN) for stain normalization, we here propose to tackle stain variability by data augmentation. We augment the training data of the pretrained model with the stain variability using CycleGANs and then retrain the model on the stain-augmented dataset. We compared the performance of (i) the unmodified pretrained segmentation model, (ii) CycleGAN-based stain normalization, (iii) a feature-preserving modification to (ii) for improved normalization, and (iv) the proposed stain-augmented model. Results The proposed stain-augmented model showed the highest mean segmentation accuracy in all external cohorts and maintained comparable performance on the training cohort. However, the increase in performance was only marginal compared to the pretrained model. CycleGAN-based stain normalization suffered from imperceptible information encoded into the normalized images, which confused the pretrained model and thus resulted in slightly worse performance. Conclusions Our findings suggest that stain variability can be tackled more effectively by augmenting the data with it than by following the commonly used approach of normalizing the stain. However, the applicability of this approach, which provides only a rather slight performance increase, has to be weighed against the additional carbon footprint.
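The CycleGANs used for this kind of stain augmentation are trained under a cycle-consistency constraint: translating an image to the other stain and back should reproduce the input. A toy numpy illustration of that loss, using linear stand-in "generators" (purely illustrative, not the paper's networks):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle loss: translating A -> B -> A should reproduce the input."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

# Toy 'generators' that are exact inverses, so the cycle loss vanishes.
g_ab = lambda x: 2.0 * x + 1.0        # stand-in for stain A -> stain B
g_ba = lambda x: (x - 1.0) / 2.0      # stand-in for stain B -> stain A

x = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # toy image
loss_good = cycle_consistency_loss(x, g_ab, g_ba)   # near zero
loss_bad = cycle_consistency_loss(x, g_ab, g_ab)    # mismatched pair
```

The abstract's caveat about "imperceptible information encoded into the normalized images" is a known failure mode of exactly this constraint: a generator can satisfy cycle consistency by hiding high-frequency signals in its output rather than producing a faithful translation.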
Affiliation(s)
- Nassim Bouteldja
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- David L. Hölscher
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Roman D. Bülow
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Ian S.D. Roberts
- Department of Cellular Pathology, Oxford University Hospitals National Health Service Foundation Trust, Oxford, United Kingdom
- Rosanna Coppo
- Fondazione Ricerca Molinette, Torino, Italy
- Regina Margherita Children’s University Hospital, Torino, Italy
- Peter Boor
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
25
Shmatko A, Ghaffari Laleh N, Gerstung M, Kather JN. Artificial intelligence in histopathology: enhancing cancer research and clinical oncology. NATURE CANCER 2022; 3:1026-1038. [PMID: 36138135 DOI: 10.1038/s43018-022-00436-4] [Citation(s) in RCA: 182] [Impact Index Per Article: 60.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Accepted: 08/03/2022] [Indexed: 06/16/2023]
Abstract
Artificial intelligence (AI) methods have multiplied our capabilities to extract quantitative information from digital histopathology images. AI is expected to reduce workload for human experts, improve the objectivity and consistency of pathology reports, and have a clinical impact by extracting hidden information from routinely available data. Here, we describe how AI can be used to predict cancer outcome, treatment response, genetic alterations and gene expression from digitized histopathology slides. We summarize the underlying technologies and emerging approaches, noting limitations, including the need for data sharing and standards. Finally, we discuss the broader implications of AI in cancer research and oncology.
Affiliation(s)
- Artem Shmatko
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Moritz Gerstung
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
26
Film and Video Quality Optimization Using Attention Mechanism-Embedded Lightweight Neural Network Model. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8229580. [PMID: 35720938 PMCID: PMC9200523 DOI: 10.1155/2022/8229580] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Revised: 05/06/2022] [Accepted: 05/21/2022] [Indexed: 11/17/2022]
Abstract
In filming, the collected video may be blurred due to camera shake and object movement, causing target edges to become unclear or targets to deform. In order to solve these problems and deeply optimize the quality of movie videos, this work proposes a video deblurring (VD) algorithm based on a neural network (NN) model and an attention mechanism (AM). Based on the scale recurrent network, the Haar planar wavelet transform (WT) is introduced to preprocess the video image and to deblur it in the wavelet domain. Additionally, spatial and channel AMs are fused into the overall network framework to improve the feature expression ability. Further, the residual inception spatial-channel attention (RISCA) mechanism is introduced to extract multiscale feature information from video images. Meanwhile, skip spatial-channel attention (SSCA) accelerates the network training time to achieve a better VD effect. Finally, relevant experiments are designed, factoring in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The experimental findings corroborate that the proposed Haar and attention video deblurring (HAVD) outperforms multisize network Haar (MSNH) in PSNR and SSIM, with improvements of 0.10 dB and 0.005, respectively. Therefore, embedding the dual AMs can improve model performance and optimize video quality. This work provides technical support for solving video distortion problems.
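PSNR, one of the two quality metrics reported above, follows directly from the mean squared error between a reference frame and a restored frame. A minimal numpy sketch (the synthetic test images below are illustrative, not data from the paper):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')   # identical images: no noise
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Illustrative data: a random 'clean' frame and a noisy copy of it.
rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(32, 32))
noisy = np.clip(clean + rng.normal(0.0, 0.05, size=clean.shape), 0.0, 1.0)
```

A 0.10 dB gain on this logarithmic scale corresponds to a small but consistent reduction in mean squared error, which is why PSNR improvements of fractions of a decibel are still reported for deblurring methods.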
27
Bouteldja N, Klinkhammer BM, Schlaich T, Boor P, Merhof D. Improving unsupervised stain-to-stain translation using self-supervision and meta-learning. J Pathol Inform 2022; 13:100107. [PMID: 36268068 PMCID: PMC9577059 DOI: 10.1016/j.jpi.2022.100107] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Background In digital pathology, many image analysis tasks are challenged by the need for large and time-consuming manual data annotations to cope with various sources of variability in the image domain. Unsupervised domain adaptation based on image-to-image translation is gaining importance in this field by addressing variabilities without the manual overhead. Here, we tackle the variation of different histological stains by unsupervised stain-to-stain translation to enable stain-independent applicability of a deep learning segmentation model. Methods We use CycleGANs for stain-to-stain translation in kidney histopathology, and propose two novel approaches to improve translation effectiveness. First, we integrate a prior segmentation network into the CycleGAN for self-supervised, application-oriented optimization of translation through semantic guidance, and second, we incorporate extra channels into the translation output to implicitly separate artificial meta-information otherwise encoded for tackling underdetermined reconstructions. Results The latter showed partially superior performance to the unmodified CycleGAN, but the former performed best in all stains, providing instance-level Dice scores ranging between 78% and 92% for most kidney structures, such as glomeruli, tubules, and veins. However, CycleGANs showed only limited performance in the translation of other structures, e.g. arteries. Our study also found somewhat lower performance for all structures in all stains when compared to segmentation in the original stain. Conclusions Our study suggests that, with current unsupervised technologies, it seems unlikely that "generally" applicable simulated stains can be produced.
Affiliation(s)
- Nassim Bouteldja
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Tarek Schlaich
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Peter Boor
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Dorit Merhof
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
28
Gupta L, Klinkhammer BM, Seikrit C, Fan N, Bouteldja N, Gräbel P, Gadermayr M, Boor P, Merhof D. Large-scale extraction of interpretable features provides new insights into kidney histopathology – a proof-of-concept study. J Pathol Inform 2022; 13:100097. [PMID: 36268111 PMCID: PMC9576990 DOI: 10.1016/j.jpi.2022.100097] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 04/14/2022] [Accepted: 05/02/2022] [Indexed: 11/21/2022] Open
Abstract
Whole slide images contain a wealth of quantitative information that may not be fully explored in qualitative visual assessments. We propose: (1) a novel pipeline for extracting a comprehensive set of visual features, which are detectable by a pathologist, as well as sub-visual features, which are not discernible by human experts; and (2) detailed analyses of renal images from mice with experimental unilateral ureteral obstruction. An important criterion for these features is that they are easy to interpret, as opposed to features obtained from neural networks. We extract and compare features from pathological and healthy control kidneys to learn how the compartments (glomerulus, Bowman's capsule, tubule, interstitium, artery, and arterial lumen) are affected by the pathology. We define feature selection methods to extract the most informative and discriminative features. We perform statistical analyses to understand the relation of the extracted features, both individually and in combination, with tissue morphology and pathology. Particularly for the presented case study, we highlight features that are affected in each compartment. With this, prior biological knowledge, such as the increase in interstitial nuclei, is confirmed and presented in a quantitative way, alongside novel findings, like color and intensity changes in glomeruli and Bowman's capsule. The proposed approach is therefore an important step towards quantitative, reproducible, and rater-independent analysis in histopathology.
Affiliation(s)
- Laxmi Gupta
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Corresponding author.
- Claudia Seikrit
- Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Division of Nephrology and Clinical Immunology, RWTH Aachen University, Aachen, Germany
- Nina Fan
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Nassim Bouteldja
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Philipp Gräbel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Michael Gadermayr
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Salzburg University of Applied Sciences, Puch/Salzburg, Austria
- Peter Boor
- Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
29
Brain stroke lesion segmentation using consistent perception generative adversarial network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06816-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
30
Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances in domain adaptation methods for medical image analysis. We first present the motivation for introducing domain adaptation techniques to tackle domain heterogeneity issues in medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this active research field.
31
Park JH, Kim EY, Luchini C, Eccher A, Tizaoui K, Shin JI, Lim BJ. Artificial Intelligence for Predicting Microsatellite Instability Based on Tumor Histomorphology: A Systematic Review. Int J Mol Sci 2022; 23:2462. [PMID: 35269607 PMCID: PMC8910565 DOI: 10.3390/ijms23052462] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Accepted: 02/21/2022] [Indexed: 02/04/2023] Open
Abstract
Microsatellite instability (MSI)/defective DNA mismatch repair (dMMR) is receiving increasing attention as a biomarker of eligibility for immune checkpoint inhibitors in advanced disease. However, due to high costs and resource limitations, MSI/dMMR testing is not widely performed. Attempts are in progress to predict MSI/dMMR status from histomorphological features on H&E slides using artificial intelligence (AI) technology. In this study, the potential predictive role of this new methodology was assessed through a systematic review. Studies up to September 2021 were retrieved through PubMed and Embase searches. The design and results of each study were summarized, and the risk of bias for each study was evaluated. For colorectal cancer, AI-based systems showed excellent performance, with the best model reaching 0.972; for gastric and endometrial cancers, they showed a relatively low but satisfactory performance, with the best models reaching 0.81 and 0.82, respectively. However, in the risk-of-bias analysis, most studies were evaluated as high risk. AI-based systems showed high potential in predicting MSI/dMMR status across cancer types, and particularly in colorectal cancer. A confirmation test should therefore be required only for results that are positive in the AI test.
Affiliation(s)
- Ji Hyun Park: Department of Pathology, Yonsei University College of Medicine, Seoul 03722, Korea
- Eun Young Kim: Evidence-Based and Clinical Research Laboratory, Department of Health, Social and Clinical Pharmacy, College of Pharmacy, Chung-Ang University, Seoul 06974, Korea
- Claudio Luchini: Department of Diagnostics and Public Health, Section of Pathology, University of Verona, 37134 Verona, Italy; ARC-Net Research Center, University and Hospital Trust of Verona, 37134 Verona, Italy
- Albino Eccher: Department of Pathology and Diagnostics, University and Hospital Trust of Verona, 37134 Verona, Italy
- Kalthoum Tizaoui: Laboratory of Microorganisms and Active Biomolecules, Sciences Faculty of Tunis, Tunis El Manar University, Tunis 2092, Tunisia
- Jae Il Shin: Department of Pediatrics, Yonsei University College of Medicine, Seoul 03722, Korea
- Beom Jin Lim: Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Korea
32
Moeller MJ, Kramann R, Lammers T, Hoppe B, Latz E, Ludwig-Portugall I, Boor P, Floege J, Kurts C, Weiskirchen R, Ostendorf T. New Aspects of Kidney Fibrosis-From Mechanisms of Injury to Modulation of Disease. Front Med (Lausanne) 2022; 8:814497. [PMID: 35096904 PMCID: PMC8790098 DOI: 10.3389/fmed.2021.814497]
Abstract
Organ fibrogenesis is characterized by a common pathophysiological final pathway independent of the underlying progressive disease of the respective organ, which makes it particularly suitable as a therapeutic target. The Transregional Collaborative Research Center “Organ Fibrosis: From Mechanisms of Injury to Modulation of Disease” (SFB/TRR57) was hosted from 2009 to 2021 by the Medical Faculties of RWTH Aachen University and the University of Bonn. The consortium pursued the ultimate goal of discovering common, as well as organ-specific, fibrosis pathways in the liver and kidneys, and it successfully identified new mechanisms and established novel therapeutic approaches to interfere with hepatic and renal fibrosis. This review covers the consortium's key kidney-related findings, addressing three overarching questions: (i) What are new relevant mechanisms and signaling pathways triggering renal fibrosis? (ii) What are new immunological mechanisms, cells, and molecules that contribute to renal fibrosis? (iii) How can renal fibrosis be modulated?
Affiliation(s)
- Marcus J Moeller: Division of Nephrology and Clinical Immunology, RWTH Aachen University Hospital, Aachen, Germany; Heisenberg Chair for Preventive and Translational Nephrology, Aachen, Germany
- Rafael Kramann: Division of Nephrology and Clinical Immunology, RWTH Aachen University Hospital, Aachen, Germany; Institute of Experimental Medicine and Systems Biology, RWTH Aachen University Hospital, Aachen, Germany; Department of Internal Medicine, Nephrology and Transplantation, Erasmus Medical Center, Rotterdam, Netherlands
- Twan Lammers: Department of Nanomedicine and Theranostics, Faculty of Medicine, Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany
- Bernd Hoppe: Division of Pediatric Nephrology and Kidney Transplantation, University Hospital of Bonn, Bonn, Germany; German Hyperoxaluria Center, Pediatric Kidney Care Center, Bonn, Germany
- Eicke Latz: Institute of Innate Immunity, University Hospital of Bonn, Bonn, Germany
- Isis Ludwig-Portugall: Institute for Molecular Medicine and Experimental Immunology, University Hospital of Bonn, Bonn, Germany
- Peter Boor: Division of Nephrology and Clinical Immunology, RWTH Aachen University Hospital, Aachen, Germany; Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Jürgen Floege: Division of Nephrology and Clinical Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Christian Kurts: Institute for Molecular Medicine and Experimental Immunology, University Hospital of Bonn, Bonn, Germany; Department of Microbiology and Immunology, Doherty Institute for Infection and Immunity, University of Melbourne, Melbourne, VIC, Australia
- Ralf Weiskirchen: Institute of Molecular Pathobiochemistry, Experimental Gene Therapy and Clinical Chemistry (IFMPEGKC), University Hospital RWTH Aachen, Aachen, Germany
- Tammo Ostendorf: Division of Nephrology and Clinical Immunology, RWTH Aachen University Hospital, Aachen, Germany
33
Siller M, Stangassinger LM, Kreutzer C, Boor P, Bülow RD, Kraus TJ, von Stillfried S, Wolfl S, Couillard-Despres S, Oostingh GJ, Hittmair A, Gadermayr M. On the Acceptance of "Fake" Histopathology: A Study on Frozen Sections Optimized with Deep Learning. J Pathol Inform 2022; 13:6. [PMID: 35136673 PMCID: PMC8794030 DOI: 10.4103/jpi.jpi_53_21]
Abstract
BACKGROUND The fast acquisition process of frozen sections allows surgeons to wait for histological findings during an intervention and to base intrasurgical decisions on the outcome of the histology. Compared with paraffin sections, however, the quality of frozen sections is often strongly reduced, leading to lower diagnostic accuracy. Deep neural networks are capable of modifying specific characteristics of digital histological images. In particular, generative adversarial networks proved to be effective tools for learning translations between two modalities, based only on two unconnected data sets. The positive effects of such deep learning-based image optimization on computer-aided diagnosis have already been shown. However, since fully automated diagnosis is controversial, the application of enhanced images for visual clinical assessment is currently probably of even higher relevance. METHODS Three different deep learning-based generative adversarial networks were investigated. The methods were used to translate frozen sections into virtual paraffin sections. Overall, 40 frozen sections were processed; 40 further paraffin sections were available for training. We investigated how pathologists assess the quality of the different image translation approaches and whether experts are able to distinguish between virtual and real digital pathology. RESULTS Pathologists' detection accuracy of virtual paraffin sections (from pairs consisting of a frozen and a paraffin section) was between 0.62 and 0.97. Overall, in 59% of images, the virtual section was assessed as more appropriate for a diagnosis. In 53% of images, the deep learning approach was preferred to conventional stain normalization (SN). CONCLUSION Overall, expert assessment indicated slightly improved visual properties of the converted images and a high similarity to real paraffin sections. The observed high variability showed clear differences in personal preferences.
Affiliation(s)
- Mario Siller: Department of Information Technology and System Management, Salzburg University of Applied Sciences, Salzburg, Austria
- Lea Maria Stangassinger: Department of Biomedical Sciences, Salzburg University of Applied Sciences, Salzburg, Austria
- Christina Kreutzer: Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg, Paracelsus Medical University, Salzburg, Austria
- Peter Boor: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf (CIO ABCD), Aachen, Germany
- Roman D. Bülow: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf (CIO ABCD), Aachen, Germany
- Theo J.F. Kraus: Institute of Pathology, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
- Saskia von Stillfried: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf (CIO ABCD), Aachen, Germany
- Sebastien Couillard-Despres: Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg, Paracelsus Medical University, Salzburg, Austria
- Gertie Janneke Oostingh: Department of Biomedical Sciences, Salzburg University of Applied Sciences, Salzburg, Austria
- Anton Hittmair: Department of Pathology and Microbiology, Kardinal Schwarzenberg Klinikum, Schwarzach, Austria
- Michael Gadermayr: Department of Information Technology and System Management, Salzburg University of Applied Sciences, Salzburg, Austria
34
Xun S, Li D, Zhu H, Chen M, Wang J, Li J, Chen M, Wu B, Zhang H, Chai X, Jiang Z, Zhang Y, Huang P. Generative adversarial networks in medical image segmentation: A review. Comput Biol Med 2022; 140:105063. [PMID: 34864584 DOI: 10.1016/j.compbiomed.2021.105063]
Abstract
PURPOSE Since the Generative Adversarial Network (GAN) was introduced into the field of deep learning in 2014, it has received extensive attention from academia and industry, and many high-quality papers have been published. GAN effectively improves the accuracy of medical image segmentation because of its strong generative ability and its capability to capture data distributions. This paper introduces the origin, working principle, and extended variants of GAN, and it reviews the latest development of GAN-based medical image segmentation methods. METHOD To find the papers, we searched Google Scholar and PubMed with keywords such as "segmentation", "medical image", and "GAN (or generative adversarial network)". Additional searches were performed on Semantic Scholar, Springer, arXiv, and the top conferences in computer science with the above GAN-related keywords. RESULTS We reviewed more than 120 GAN-based architectures for medical image segmentation published before September 2021, categorizing and summarizing the papers according to segmentation region, imaging modality, and classification method. We also discuss the advantages, challenges, and future research directions of GAN in medical image segmentation. CONCLUSIONS We discussed in detail the recent papers on medical image segmentation using GAN. The application of GAN and its extended variants has effectively improved the accuracy of medical image segmentation. Gaining the acceptance of clinicians and patients and overcoming the instability, low repeatability, and limited interpretability of GAN will be important research directions in the future.
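For context, the "working principle" this review refers to is the adversarial game between a generator G and a discriminator D: D is trained to separate real from generated samples, while G is trained to fool D. The following is a framework-free sketch of the two losses using plain binary cross-entropy (the probability lists stand in for discriminator outputs; this illustrates the general GAN objective, not any specific architecture from the review):

```python
import math

def bce(preds, labels):
    """Mean binary cross-entropy between predicted probabilities and 0/1 labels."""
    eps = 1e-12  # guards against log(0)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(preds, labels)) / len(preds)

def discriminator_loss(d_real, d_fake):
    # D is trained to score real samples as 1 and generated samples as 0
    return bce(d_real, [1.0] * len(d_real)) + bce(d_fake, [0.0] * len(d_fake))

def generator_loss(d_fake):
    # non-saturating form: G is rewarded when D scores its outputs as real
    return bce(d_fake, [1.0] * len(d_fake))

d_real = [0.9, 0.8, 0.95]  # toy discriminator outputs on real images
d_fake = [0.1, 0.2, 0.05]  # toy discriminator outputs on generated images
print(round(discriminator_loss(d_real, d_fake), 3))  # low: D separates the two sets well
print(round(generator_loss(d_fake), 3))              # high: G has not fooled D yet
```

Training alternates gradient steps on these two losses; segmentation GANs in the review typically replace "images" with segmentation maps and add a task loss on top.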
Affiliation(s)
- Siyi Xun: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Dengwang Li: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Hui Zhu: Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Min Chen: Department of Medicine, The Second Hospital of Shandong University, Jinan, China
- Jianbo Wang: Department of Radiation Oncology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, 250012, China
- Jie Li: Department of Infectious Disease, Shandong Provincial Hospital Affiliated to Shandong University, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Meirong Chen: Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Bing Wu: Laibo Biotechnology Co., Ltd., Jinan, Shandong, China
- Hua Zhang: LinkingMed Technology Co., Ltd., Beijing, China
- Xiangfei Chai: Huiying Medical Technology (Beijing) Co., Ltd., Beijing, China
- Zekun Jiang: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Yan Zhang: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Pu Huang: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
35
Sato N, Uchino E, Okuno Y. Artificial Intelligence in Kidney Pathology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_181]
36
Li X, Jiang Y, Rodriguez-Andina JJ, Luo H, Yin S, Kaynak O. When medical images meet generative adversarial network: recent development and research opportunities. Discov Artif Intell 2021; 1:5. [DOI: 10.1007/s44163-021-00006-0]
Abstract
Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning, which is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a lot of annotated data. The number of medical images available is usually small, and the acquisition of medical image annotations is an expensive process. The generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can simulate the distribution of real data and reconstruct approximately real data. GAN opens some exciting new ways for medical image generation, expanding the number of medical images available for deep learning methods. Generated data can solve the problem of insufficient data or imbalanced data categories. Adversarial training is another contribution of GAN to medical imaging that has been applied to many tasks, such as classification, segmentation, or detection. This paper investigates the research status of GAN in medical images and analyzes several GAN methods commonly applied in this area. The study addresses GAN applications both for medical image synthesis and for adversarial learning in other medical image tasks. Open challenges and future research directions are also discussed.
37
Yamashita R, Long J, Banda S, Shen J, Rubin DL. Learning Domain-Agnostic Visual Representation for Computational Pathology Using Medically-Irrelevant Style Transfer Augmentation. IEEE Trans Med Imaging 2021; 40:3945-3954. [PMID: 34339370 DOI: 10.1109/tmi.2021.3101985]
Abstract
Suboptimal generalization of machine learning models on unseen data is a key challenge that hampers the clinical applicability of such models to medical imaging. Although various methods such as domain adaptation and domain generalization have evolved to combat this challenge, learning robust and generalizable representations is core to medical image understanding and continues to be a problem. Here, we propose STRAP (Style TRansfer Augmentation for histoPathology), a form of data augmentation based on random style transfer from non-medical style sources such as artistic paintings, for learning domain-agnostic visual representations in computational pathology. Style transfer replaces the low-level texture content of an image with the uninformative style of a randomly selected style source image, while preserving the original high-level semantic content. This improves robustness to domain shift and can be used as a simple yet powerful tool for learning domain-agnostic representations. We demonstrate that STRAP leads to state-of-the-art performance, particularly in the presence of domain shifts, on two classification tasks in computational pathology. Our code is available at https://github.com/rikiyay/style-transfer-for-digital-pathology.
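STRAP itself uses a neural style-transfer model, but the core operation of "replacing low-level statistics while keeping content" can be illustrated more simply. The sketch below re-normalizes one channel of a "content" patch so that it carries the mean and standard deviation of a "style" patch (an AdaIN-style statistic swap on toy data; it is a loose illustration of the idea, not the STRAP implementation):

```python
import math

def stats(xs):
    """Mean and standard deviation of a flat list of intensities."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, math.sqrt(var + 1e-8)  # small eps avoids division by zero

def adain(content, style):
    """Shift/scale content values to carry the style's mean/std (one channel)."""
    c_mean, c_std = stats(content)
    s_mean, s_std = stats(style)
    return [(x - c_mean) / c_std * s_std + s_mean for x in content]

content = [0.2, 0.4, 0.6, 0.8]   # e.g. one channel of a histology patch
style = [0.5, 0.55, 0.6, 0.65]   # e.g. one channel of an artistic painting
out = adain(content, style)
m, s = stats(out)
print(round(m, 3), round(s, 3))  # output statistics now match the style patch
```

The relative ordering of the content values (the "high-level" structure in this toy case) is preserved; only the first- and second-order statistics change, which is why such augmentation encourages texture-invariant representations.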
38
Jose L, Liu S, Russo C, Nadort A, Di Ieva A. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021; 12:43. [PMID: 34881098 PMCID: PMC8609288 DOI: 10.4103/jpi.jpi_103_20]
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have attracted considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle challenging histopathological image processing problems such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on H&E-stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology, with the objective of triggering new research on the application of generative models in digital pathology informatics.
Affiliation(s)
- Laya Jose: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort: ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia; Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
39
Li X, Davis RC, Xu Y, Wang Z, Souma N, Sotolongo G, Bell J, Ellis M, Howell D, Shen X, Lafata KJ, Barisoni L. Deep learning segmentation of glomeruli on kidney donor frozen sections. J Med Imaging (Bellingham) 2021; 8:067501. [PMID: 34950750 PMCID: PMC8685284 DOI: 10.1117/1.jmi.8.6.067501]
Abstract
Purpose: Recent advances in computational image analysis offer the opportunity to develop automatic quantification of histologic parameters as aid tools for practicing pathologists. We aim to develop deep learning (DL) models to quantify nonsclerotic and sclerotic glomeruli on frozen sections from donor kidney biopsies. Approach: A total of 258 whole slide images (WSI) from cadaveric donor kidney biopsies performed at our institution (n = 123) and at external institutions (n = 135) were used in this study. WSIs from our institution were divided at the patient level into training and validation datasets (ratio 0.8:0.2), and external WSIs were used as an independent testing dataset. Nonsclerotic (n = 22767) and sclerotic (n = 1366) glomeruli were manually annotated by study pathologists on all WSIs. A nine-layer convolutional neural network based on the common U-Net architecture was developed and tested for the segmentation of nonsclerotic and sclerotic glomeruli. DL-derived segmentation, manual segmentation, and the reported glomerular count (standard of care) were compared. Results: The average Dice similarity coefficients on the testing dataset were 0.90 and 0.83, and the F1, recall, and precision scores were 0.93, 0.96, and 0.90 for nonsclerotic glomeruli and 0.87, 0.93, and 0.81 for sclerotic glomeruli, respectively. DL-derived and manual segmentation-derived glomerular counts were comparable, but statistically different from the reported glomerular count. Conclusions: DL segmentation is a feasible and robust approach for the automatic quantification of glomeruli. This represents a first step toward new protocols for the evaluation of donor kidney biopsies.
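The Dice similarity coefficient reported above is twice the overlap between prediction and ground truth divided by their combined size; on binary masks it coincides with the F1 score. A minimal sketch on flattened 0/1 masks (toy data, not the study's pipeline):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(p * t for p, t in zip(pred, truth))  # pixels where both are 1
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0     # empty-vs-empty counts as perfect

# toy 1D "masks": 1 = glomerulus pixel, 0 = background
truth = [0, 1, 1, 1, 0, 0, 1, 0]
pred  = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 0.75: 2 * 3 overlapping pixels / (4 predicted + 4 true)
```

In practice the same formula is applied to flattened 2D masks per class (here, nonsclerotic and sclerotic glomeruli separately), then averaged over the test set.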
Affiliation(s)
- Xiang Li: Duke University, Department of Electrical and Computer Engineering, Durham, North Carolina, United States
- Richard C. Davis: Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Yuemei Xu: Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States; Nanjing Drum Tower Hospital, Department of Pathology, Nanjing, China
- Zehan Wang: Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Nao Souma: Duke University, Department of Medicine, Division of Nephrology, Durham, North Carolina, United States
- Gina Sotolongo: Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Jonathan Bell: Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Matthew Ellis: Duke University, Department of Medicine, Division of Nephrology, Durham, North Carolina, United States; Duke University, Department of Surgery, Durham, North Carolina, United States
- David Howell: Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Xiling Shen: Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Kyle J. Lafata: Duke University, Department of Electrical and Computer Engineering, Durham, North Carolina, United States; Duke University, Department of Radiation Oncology, Durham, North Carolina, United States; Duke University, Department of Radiology, Durham, North Carolina, United States
- Laura Barisoni: Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States; Duke University, Department of Medicine, Division of Nephrology, Durham, North Carolina, United States
40
Towards histopathological stain invariance by Unsupervised Domain Augmentation using generative adversarial networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.07.005]
41
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE Trans Med Imaging 2021; 40:2880-2896. [PMID: 33284750 PMCID: PMC8543886 DOI: 10.1109/tmi.2020.3042789]
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
42
Liu S, Zhang B, Liu Y, Han A, Shi H, Guan T, He Y. Unpaired Stain Transfer Using Pathology-Consistent Constrained Generative Adversarial Networks. IEEE Trans Med Imaging 2021; 40:1977-1989. [PMID: 33784619 DOI: 10.1109/tmi.2021.3069874]
Abstract
Pathological examination is the gold standard for the diagnosis of cancer. Common pathological examinations include hematoxylin-eosin (H&E) staining and immunohistochemistry (IHC). In some cases, it is hard to make an accurate diagnosis of cancer by referring only to H&E staining images, whereas an IHC examination can provide further evidence for the diagnostic process. Hence, the generation of virtual IHC images from H&E-stained images would be a good solution to the currently limited accessibility of IHC examination, especially in low-resource regions. However, existing approaches have limitations in microscopic structural preservation and in the consistency of pathology properties; in addition, pixel-level paired data are rarely available. In our work, we propose a novel adversarial learning method for effective generation of Ki-67-stained images from corresponding H&E-stained images. Our method takes full advantage of a structural similarity constraint and skip connections to improve the preservation of structural details; a pathology consistency constraint and a pathological representation network are proposed, for the first time, to enforce that the generated and source images hold the same pathological properties in their respective staining domains. We empirically demonstrate the effectiveness of our approach on two different unpaired histopathological datasets. Extensive experiments indicate the superior performance of our method, which surpasses the state-of-the-art approaches by a significant margin. Our approach also achieves stable and good performance on unbalanced datasets, demonstrating its strong robustness. We believe that our method has significant potential for clinical virtual staining and advances the progress of computer-aided multi-stain histology image analysis.
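Structural similarity constraints of this kind are typically implemented with SSIM. The standard formulation is computed over local Gaussian windows; the sketch below computes a single global window over flattened intensities in [0, 1], with the conventional constants C1 = 0.01^2 and C2 = 0.03^2. It illustrates the quantity such a constraint penalizes, not the paper's exact loss:

```python
def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over two equal-length intensity lists in [0, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                      # luminance terms
    vx = sum((a - mx) * (a - mx) for a in x) / n         # contrast terms
    vy = sum((b - my) * (b - my) for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [0.1, 0.5, 0.9, 0.4]
print(ssim_global(img, img))                          # identical images score 1.0
print(round(ssim_global(img, [0.9, 0.5, 0.1, 0.6]), 3))  # structure inverted: much lower
```

Used as a training loss (e.g. 1 - SSIM between the H&E input and the generated stain), this term rewards preserving local structure even when the color content changes, which is the role the structural similarity constraint plays here.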
43
Gadermayr M, Heckmann L, Li K, Bähr F, Müller M, Truhn D, Merhof D, Gess B. Image-to-Image Translation for Simplified MRI Muscle Segmentation. Front Radiol 2021; 1:664444. [PMID: 37492182 PMCID: PMC10365001 DOI: 10.3389/fradi.2021.664444]
Abstract
Deep neural networks recently showed high performance and gained popularity in the field of radiology. However, the fact that large amounts of labeled data are required for training these architectures inhibits practical applications. We take advantage of an unpaired image-to-image translation approach, in combination with a novel domain-specific loss formulation, to create an "easier-to-segment" intermediate image representation without requiring any label data. The requirement here is that the task can be translated from a hard task to a related but simplified one for which unlabeled data are available. In the experimental evaluation, we investigate fully automated approaches for segmentation of pathological muscle tissue in T1-weighted magnetic resonance (MR) images of human thighs. The results show clearly improved performance for supervised segmentation techniques. Even more impressively, we obtain similar results with a basic, completely unsupervised segmentation approach.
Affiliation(s)
- Michael Gadermayr
- Department of Information Technology and Systems Management, Salzburg University of Applied Sciences, Salzburg, Austria
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Lotte Heckmann
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Kexin Li
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Friederike Bähr
- Department of Neurology, RWTH Aachen, University Hospital Aachen, Aachen, Germany
- Madlaine Müller
- Department of Neurology, RWTH Aachen, University Hospital Aachen, Aachen, Germany
- Department of Neurology, Inselspital Bern, Bern, Switzerland
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Burkhard Gess
- Department of Neurology, RWTH Aachen, University Hospital Aachen, Aachen, Germany
- Department of Neurology, Evangelisches Klinikum Bethel, Universitätsklinikum OWL, Bielefeld, Germany
44
Huo Y, Deng R, Liu Q, Fogo AB, Yang H. AI applications in renal pathology. Kidney Int 2021; 99:1309-1320. [PMID: 33581198 PMCID: PMC8154730 DOI: 10.1016/j.kint.2021.01.015] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 01/09/2021] [Accepted: 01/13/2021] [Indexed: 12/20/2022]
Abstract
The explosive growth of artificial intelligence (AI) technologies, especially deep learning methods, has been translated at revolutionary speed to efforts in AI-assisted healthcare. New applications of AI to renal pathology have recently become available, driven by the successful AI deployments in digital pathology. However, synergetic developments of renal pathology and AI require close interdisciplinary collaborations between computer scientists and renal pathologists. Computer scientists should understand that not every AI innovation is translatable to renal pathology, while renal pathologists should capture high-level principles of the relevant AI technologies. Herein, we provide an integrated review on current and possible future applications in AI-assisted renal pathology, by including perspectives from computer scientists and renal pathologists. First, the standard stages, from data collection to analysis, in full-stack AI-assisted renal pathology studies are reviewed. Second, representative renal pathology-optimized AI techniques are introduced. Last, we review current clinical AI applications, as well as promising future applications with the recent advances in AI.
Collapse
Affiliation(s)
- Yuankai Huo
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Ruining Deng
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Quan Liu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Agnes B Fogo
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Haichun Yang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
45
Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:24365-24398. [PMID: 33841033 PMCID: PMC8023554 DOI: 10.1007/s11042-021-10707-4] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/28/2020] [Accepted: 02/10/2021] [Indexed: 05/05/2023]
Abstract
Medical imaging plays a significant role in clinical applications such as the early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. Basics of the principles and implementations of artificial neural networks and deep learning are essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) in medical image analysis is emerging as a fast-growing research field. DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computerized tomography, mammography images, and digital histopathology images. It provides a systematic review of the articles on classification, detection, and segmentation of medical images based on DLA. This review guides researchers in considering appropriate changes in medical image analysis based on DLA.
Affiliation(s)
- Muralikrishna Puttagunta
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
- S. Ravi
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
46
Hacking S, Bijol V. Deep learning for the classification of medical kidney disease: a pilot study for electron microscopy. Ultrastruct Pathol 2021; 45:118-127. [PMID: 33583322 DOI: 10.1080/01913123.2021.1882628] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Artificial intelligence (AI) is a new frontier and often enigmatic for medical professionals. Cloud computing could open up the field of computer vision to a wider medical audience, and deep learning on the cloud allows one to design, develop, train and deploy applications with ease. In the field of histopathology, various AI applications have been implemented successfully for whole slide images rich in biological diversity. However, the analysis of other tissue media, including electron microscopy, is yet to be explored. The present study aims to evaluate deep learning for the classification of medical kidney disease on electron microscopy images: amyloidosis, diabetic glomerulosclerosis, membranous nephropathy, membranoproliferative glomerulonephritis (MPGN), and thin basement membrane disease (TBMD). We found good overall classification with the MedKidneyEM-v1 Classifier; when looking at normal and diseased kidneys, the average area under the precision-recall curve was 0.841. The average area under the precision-recall curve on the disease-only cohort was 0.909. Digital pathology will shape a new era for medical kidney disease, and the present study demonstrates the feasibility of deep learning for electron microscopy. Future approaches could be used by renal pathologists to improve diagnostic concordance, determine therapeutic strategies, and optimize patient outcomes in a true clinical environment.
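The area under the precision-recall curve reported here can be computed directly from classifier scores. A minimal sketch of one common estimator (average precision); the function name and toy inputs are illustrative and not taken from the study:

```python
import numpy as np

def average_precision(scores, labels):
    """Area under the precision-recall curve for one class, estimated as
    the mean of the precision values at each correctly retrieved positive."""
    labels = np.asarray(labels, dtype=float)
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = labels[order]                                 # rank by confidence
    precision = np.cumsum(labels) / np.arange(1, labels.size + 1)
    return (precision * labels).sum() / labels.sum()       # average at the hits
```

A perfect ranking (all positives scored above all negatives) yields 1.0; interleaving positives with negatives lowers the value toward the positive-class prevalence.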
Affiliation(s)
- Sean Hacking
- Department of Pathology and Laboratory Medicine, Donald and Barbara Zucker School of Medicine at Northwell, Manhasset, New York, USA
- Vanesa Bijol
- Department of Pathology and Laboratory Medicine, Donald and Barbara Zucker School of Medicine at Northwell, Manhasset, New York, USA
47
Bülow RD, Kers J, Boor P. Multistain segmentation of renal histology: first steps toward artificial intelligence-augmented digital nephropathology. Kidney Int 2021; 99:17-19. [PMID: 33390226 DOI: 10.1016/j.kint.2020.08.025] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Accepted: 08/19/2020] [Indexed: 11/28/2022]
Abstract
Artificial intelligence (AI), and particularly deep learning (DL), are showing great potential in improving pathology diagnostics in many aspects, 1 of which is the segmentation of histology into (diagnostically) relevant compartments. Although most current studies focus on AI and DL in oncologic pathology, an increasing number of studies explore their application to nephropathology, including the study published in this issue of Kidney International by Jayapandian et al.
Affiliation(s)
- Roman D Bülow
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Jesper Kers
- Department of Pathology, Amsterdam UMC, Amsterdam Infection & Immunity Institute, University of Amsterdam, Amsterdam, The Netherlands; Department of Pathology, Leiden University Medical Center, Leiden, The Netherlands; Van 't Hoff Institute for Molecular Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Peter Boor
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
48
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577 PMCID: PMC7725956 DOI: 10.1016/j.media.2020.101813] [Citation(s) in RCA: 255] [Impact Index Per Article: 63.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 05/12/2020] [Accepted: 08/09/2020] [Indexed: 12/14/2022]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
- Ozan Ciga
- Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
49
Bouteldja N, Klinkhammer BM, Bülow RD, Droste P, Otten SW, Freifrau von Stillfried S, Moellmann J, Sheehan SM, Korstanje R, Menzel S, Bankhead P, Mietsch M, Drummer C, Lehrke M, Kramann R, Floege J, Boor P, Merhof D. Deep Learning-Based Segmentation and Quantification in Experimental Kidney Histopathology. J Am Soc Nephrol 2021; 32:52-68. [PMID: 33154175 PMCID: PMC7894663 DOI: 10.1681/asn.2020050597] [Citation(s) in RCA: 99] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Accepted: 09/09/2020] [Indexed: 02/04/2023] Open
Abstract
BACKGROUND Nephropathologic analyses provide important outcomes-related data in experiments with the animal models that are essential for understanding kidney disease pathophysiology. Precision medicine increases the demand for quantitative, unbiased, reproducible, and efficient histopathologic analyses, which will require novel high-throughput tools. A deep learning technique, the convolutional neural network, is increasingly applied in pathology because of its high performance in tasks like histology segmentation. METHODS We investigated use of a convolutional neural network architecture for accurate segmentation of periodic acid-Schiff-stained kidney tissue from healthy mice and five murine disease models and from other species used in preclinical research. We trained the convolutional neural network to segment six major renal structures: glomerular tuft, glomerulus including Bowman's capsule, tubules, arteries, arterial lumina, and veins. To achieve high accuracy, we performed a large number of expert-based annotations, 72,722 in total. RESULTS Multiclass segmentation performance was very high in all disease models. The convolutional neural network allowed high-throughput and large-scale, quantitative and comparative analyses of various models. In disease models, computational feature extraction revealed interstitial expansion, tubular dilation and atrophy, and glomerular size variability. Validation showed a high correlation of findings with current standard morphometric analysis. The convolutional neural network also showed high performance in other species used in research-including rats, pigs, bears, and marmosets-as well as in humans, providing a translational bridge between preclinical and clinical studies. CONCLUSIONS We developed a deep learning algorithm for accurate multiclass segmentation of digital whole-slide images of periodic acid-Schiff-stained kidneys from various species and renal disease models. This enables reproducible quantitative histopathologic analyses in preclinical models that also might be applicable to clinical studies.
Affiliation(s)
- Nassim Bouteldja
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Barbara M. Klinkhammer
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Roman D. Bülow
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Patrick Droste
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Simon W. Otten
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Julia Moellmann
- Department of Cardiology and Vascular Medicine, RWTH Aachen University Hospital, Aachen, Germany
- Sylvia Menzel
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Peter Bankhead
- Edinburgh Pathology, University of Edinburgh, Edinburgh, United Kingdom; Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, United Kingdom
- Matthias Mietsch
- Laboratory Animal Science Unit, German Primate Center, Goettingen, Germany
- Charis Drummer
- Platform Degenerative Diseases, German Primate Center, Goettingen, Germany
- Michael Lehrke
- Department of Cardiology and Vascular Medicine, RWTH Aachen University Hospital, Aachen, Germany
- Rafael Kramann
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany; Department of Internal Medicine, Nephrology and Transplantation, Erasmus Medical Center, Rotterdam, The Netherlands
- Jürgen Floege
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Peter Boor
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Dorit Merhof
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
50
Jayapandian CP, Chen Y, Janowczyk AR, Palmer MB, Cassol CA, Sekulic M, Hodgin JB, Zee J, Hewitt SM, O'Toole J, Toro P, Sedor JR, Barisoni L, Madabhushi A. Development and evaluation of deep learning-based segmentation of histologic structures in the kidney cortex with multiple histologic stains. Kidney Int 2021; 99:86-101. [PMID: 32835732 PMCID: PMC8414393 DOI: 10.1016/j.kint.2020.07.044] [Citation(s) in RCA: 100] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 06/29/2020] [Accepted: 07/24/2020] [Indexed: 12/21/2022]
Abstract
The application of deep learning for automated segmentation (delineation of boundaries) of histologic primitives (structures) from whole slide images can facilitate the establishment of novel protocols for kidney biopsy assessment. Here, we developed and validated deep learning networks for the segmentation of histologic structures on kidney biopsies and nephrectomies. For development, we examined 125 biopsies for Minimal Change Disease collected across 29 NEPTUNE enrolling centers along with 459 whole slide images stained with Hematoxylin & Eosin (125), Periodic Acid Schiff (125), Silver (102), and Trichrome (107) divided into training, validation and testing sets (ratio 6:1:3). Histologic structures were manually segmented (30048 total annotations) by five nephropathologists. Twenty deep learning models were trained with optimal digital magnification across the structures and stains. Periodic Acid Schiff-stained whole slide images yielded the best concordance between pathologists and deep learning segmentation across all structures (F-scores: 0.93 for glomerular tufts, 0.94 for glomerular tuft plus Bowman's capsule, 0.91 for proximal tubules, 0.93 for distal tubular segments, 0.81 for peritubular capillaries, and 0.85 for arteries and afferent arterioles). Optimal digital magnifications were 5X for glomerular tuft/tuft plus Bowman's capsule, 10X for proximal/distal tubule, arteries and afferent arterioles, and 40X for peritubular capillaries. Silver stained whole slide images yielded the worst deep learning performance. Thus, this largest study to date adapted deep learning for the segmentation of kidney histologic structures across multiple stains and pathology laboratories. All data used for training and testing and a detailed online tutorial will be publicly available.
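The F-scores reported above quantify agreement between pathologist annotations and deep learning segmentation; for binary masks this is the Dice/F1 overlap. A small sketch of that metric, assuming binary numpy masks (not the study's evaluation code):

```python
import numpy as np

def f_score(pred, truth, eps=1e-8):
    """F1 (Dice) overlap between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # pixels both masks mark
    precision = tp / (pred.sum() + eps)
    recall = tp / (truth.sum() + eps)
    return 2 * precision * recall / (precision + recall + eps)
```

Identical masks score ~1.0, disjoint masks 0.0; values like the 0.93 reported for glomerular tufts indicate near-pathologist-level boundary agreement.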
Affiliation(s)
- Catherine P Jayapandian
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Yijiang Chen
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Andrew R Janowczyk
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Precision Oncology Center, Lausanne University Hospital, Vaud, Switzerland
- Matthew B Palmer
- Department of Pathology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Miroslav Sekulic
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Department of Pathology, University Hospitals of Cleveland, Cleveland, Ohio, USA
- Jeffrey B Hodgin
- Department of Pathology, University of Michigan, Ann Arbor, Michigan, USA
- Jarcy Zee
- Department of Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Stephen M Hewitt
- Laboratory of Pathology, National Institutes of Health, National Cancer Institute, Bethesda, Maryland, USA
- John O'Toole
- Lerner Research and Glickman Urology and Kidney Institutes, Cleveland Clinic, Cleveland, Ohio, USA
- Paula Toro
- Department of Pathology, Universidad Nacional de Colombia, Bogotá, Colombia
- John R Sedor
- Lerner Research and Glickman Urology and Kidney Institutes, Cleveland Clinic, Cleveland, Ohio, USA; Department of Physiology and Biophysics, Case Western Reserve University, Cleveland, Ohio, USA
- Laura Barisoni
- Department of Pathology and Medicine, Division of Nephrology, Duke University, Durham, North Carolina, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, USA