1. Brodsky V, Ullah E, Bychkov A, Song AH, Walk EE, Louis P, Rasool G, Singh RS, Mahmood F, Bui MM, Parwani AV. Generative Artificial Intelligence in Anatomic Pathology. Arch Pathol Lab Med 2025;149:298-318. PMID: 39836377. DOI: 10.5858/arpa.2024-0215-ra.
Abstract
CONTEXT.— Generative artificial intelligence (AI) has emerged as a transformative force in various fields, including anatomic pathology, where it offers the potential to significantly enhance diagnostic accuracy, workflow efficiency, and research capabilities. OBJECTIVE.— To explore the applications, benefits, and challenges of generative AI in anatomic pathology, with a focus on its impact on diagnostic processes, workflow efficiency, education, and research. DATA SOURCES.— A comprehensive review of current literature and recent advancements in the application of generative AI within anatomic pathology, categorized into unimodal and multimodal applications, and evaluated for clinical utility, ethical considerations, and future potential. CONCLUSIONS.— Generative AI demonstrates significant promise in various domains of anatomic pathology, including diagnostic accuracy enhanced through AI-driven image analysis, virtual staining, and synthetic data generation; workflow efficiency, with potential for improvement by automating routine tasks, quality control, and reflex testing; education and research, facilitated by AI-generated educational content, synthetic histology images, and advanced data analysis methods; and clinical integration, with preliminary surveys indicating cautious optimism for nondiagnostic AI tasks and growing engagement in academic settings. Ethical and practical challenges require rigorous validation, prompt engineering, federated learning, and synthetic data generation to help ensure trustworthy, reliable, and unbiased AI applications. Generative AI can potentially revolutionize anatomic pathology, enhancing diagnostic accuracy, improving workflow efficiency, and advancing education and research. Successful integration into clinical practice will require continued interdisciplinary collaboration, careful validation, and adherence to ethical standards to ensure the benefits of AI are realized while maintaining the highest standards of patient care.
Affiliation(s)
- Victor Brodsky
- Department of Pathology and Immunology, Washington University School of Medicine in St Louis, St Louis, Missouri (Brodsky)
- Ehsan Ullah
- Department of Surgery, Health New Zealand, Counties Manukau, New Zealand (Ullah)
- Andrey Bychkov
- Department of Pathology, Kameda Medical Center, Kamogawa City, Chiba Prefecture, Japan (Bychkov)
- Department of Pathology, Nagasaki University, Nagasaki, Japan (Bychkov)
- Andrew H Song
- Department of Pathology, Brigham and Women's Hospital, Boston, Massachusetts (Song, Mahmood)
- Eric E Walk
- Office of the Chief Medical Officer, PathAI, Boston, Massachusetts (Walk)
- Peter Louis
- Department of Pathology and Laboratory Medicine, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey (Louis)
- Ghulam Rasool
- Department of Oncologic Sciences, Morsani College of Medicine, and Department of Electrical Engineering, University of South Florida, Tampa (Rasool)
- Departments of Machine Learning and Neuro-Oncology, Moffitt Cancer Center and Research Institute, Tampa, Florida (Rasool)
- Rajendra S Singh
- Dermatopathology and Digital Pathology, Summit Health, Berkeley Heights, New Jersey (Singh)
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Boston, Massachusetts (Song, Mahmood)
- Marilyn M Bui
- Departments of Machine Learning and Pathology, Moffitt Cancer Center and Research Institute, Tampa, Florida (Bui)
- Anil V Parwani
- Department of Pathology, The Ohio State University, Columbus (Parwani)
2. Li G, Fan X, Xu C, Lv P, Wang R, Ruan Z, Zhou Z, Zhang Y. Detection of cervical cell based on multi-scale spatial information. Sci Rep 2025;15:3117. PMID: 39856153; PMCID: PMC11760966. DOI: 10.1038/s41598-025-87165-7.
Abstract
Cervical cancer poses a significant health risk to women. Deep learning methods can assist pathologists in quickly screening images of suspected lesion cells, greatly improving the efficiency of cervical cancer screening and diagnosis. However, existing deep learning methods rely solely on single-scale features and local spatial information, failing to effectively capture the subtle morphological differences between abnormal and normal cervical cells. To tackle this problem effectively, we propose a cervical cell detection method that utilizes multi-scale spatial information. This approach efficiently captures and integrates spatial information at different scales. Firstly, we design the Multi-Scale Spatial Information Augmentation Module (MSA), which captures global spatial information by introducing a multi-scale spatial information extraction branch during the feature extraction stage. Secondly, the Channel Attention Enhanced Module (CAE) is introduced to achieve channel-level weighted processing, dynamically optimizing each output feature using channel weights to focus on critical features. We use Sparse R-CNN as the baseline and integrate MSA and CAE into it. Experiments on the CDetector dataset achieved an Average Precision (AP) of 65.3%, outperforming the state-of-the-art (SOTA) methods.
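The channel-level weighting idea behind a module like CAE can be illustrated with a minimal, framework-free sketch (an assumption on our part: a squeeze-and-excitation-style gate built from global average pooling and a sigmoid; the abstract does not specify the module's exact layers):

```python
import math

def channel_attention(features):
    """Reweight each channel of a feature map by a gate derived from its
    global average (a squeeze-and-excitation-style illustration).

    features: list of channels, each a 2D list (H x W) of floats.
    Returns a reweighted feature map of the same shape.
    """
    # Squeeze: global average pooling per channel.
    means = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in features]
    # Excite: map each pooled value to a (0, 1) gate with a sigmoid.
    gates = [1.0 / (1.0 + math.exp(-m)) for m in means]
    # Scale: weight every pixel of a channel by its gate.
    return [[[g * v for v in row] for row in ch] for ch, g in zip(features, gates)]
```

Channels whose pooled response is large receive gates near 1 and pass through nearly unchanged, while weakly responding channels are suppressed, which is the "dynamically optimizing each output feature using channel weights" behavior the abstract describes.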
Affiliation(s)
- Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Xinyu Fan
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Chuanyun Xu
- School of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
- Pengfei Lv
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Ru Wang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Zihan Ruan
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Zheng Zhou
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Yang Zhang
- School of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
3. Chen W, Liu J, Chow TWS, Yuan Y. STAR-RL: Spatial-Temporal Hierarchical Reinforcement Learning for Interpretable Pathology Image Super-Resolution. IEEE Trans Med Imaging 2024;43:4368-4379. PMID: 38935476. DOI: 10.1109/tmi.2024.3419809.
Abstract
Pathology images are essential for accurately interpreting lesion cells in cytopathology screening, but acquiring high-resolution digital slides requires specialized equipment and long scanning times. Though super-resolution (SR) techniques can alleviate this problem, existing deep learning models recover pathology images in a black-box manner, which can lead to untruthful biological details and misdiagnosis. Additionally, current methods allocate the same computational resources to recovering each pixel, leading to sub-optimal recovery given the large variation across pathology images. In this paper, we propose the first hierarchical reinforcement learning framework for pathology image super-resolution, named Spatial-Temporal hierARchical Reinforcement Learning (STAR-RL), to address these issues. We reformulate the SR problem as a Markov decision process of interpretable operations and adopt a hierarchical recovery mechanism at the patch level to avoid sub-optimal recovery. Specifically, a higher-level spatial manager picks out the most corrupted patch for the lower-level patch worker, and a higher-level temporal manager evaluates the selected patch and determines whether optimization should stop early, thereby avoiding over-processing. Under the guidance of the spatial-temporal managers, the lower-level patch worker processes the selected patch with pixel-wise interpretable actions at each time step. Experimental results on medical images degraded by different kernels show the effectiveness of STAR-RL. Furthermore, STAR-RL improves tumor diagnosis by a large margin and generalizes under various degradations. The source code is available at https://github.com/CUHK-AIM-Group/STAR-RL.
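The spatial manager's job of picking the most corrupted patch can be sketched directly. In this simplification corruption is scored as mean squared error against a clean reference; the paper instead learns this selection as a reinforcement-learning policy, since no reference is available at test time:

```python
def split_patches(img, size):
    """Split a 2D image (list of lists) into non-overlapping size x size
    patches, each returned with its top-left coordinate."""
    patches = []
    for i in range(0, len(img), size):
        for j in range(0, len(img[0]), size):
            patches.append(((i, j), [row[j:j + size] for row in img[i:i + size]]))
    return patches

def most_corrupted_patch(degraded, reference, size):
    """Return the top-left coordinate of the patch whose MSE against the
    reference image is largest, i.e. the patch most in need of recovery."""
    def mse(a, b):
        n = sum(len(r) for r in a)
        return sum((x - y) ** 2 for ra, rb in zip(a, b)
                   for x, y in zip(ra, rb)) / n
    ref = dict(split_patches(reference, size))
    return max(split_patches(degraded, size), key=lambda p: mse(p[1], ref[p[0]]))[0]
```

A learned policy would replace the `mse` scoring with a value predicted from the degraded patch alone, but the selection structure is the same.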
4. Wang R, Li Q, Shi G, Li Q, Zhong D. A deep learning framework for predicting endometrial cancer from cytopathologic images with different staining styles. PLoS One 2024;19:e0306549. PMID: 39083516; PMCID: PMC11290691. DOI: 10.1371/journal.pone.0306549.
Abstract
Endometrial cancer screening is crucial for clinical treatment. Analysis of cytopathology images by cytopathologists is currently a popular screening method, but manual diagnosis is time-consuming and laborious; deep learning can provide objective and efficient guidance. However, endometrial cytopathology images often come from different medical centers with different staining styles, which decreases the generalization ability of deep learning models and leads to poor performance. This study presents a robust automated screening framework for endometrial cancer that can be applied to cytopathology images with different staining styles and provides an objective diagnostic reference for cytopathologists, thus contributing to clinical treatment. We collected and built the XJTU-EC dataset, the first cytopathology dataset that includes both segmentation and classification labels, and we propose an efficient two-stage framework for adapting to different staining styles and screening endometrial cancer at the cellular level. Specifically, in the first stage, a novel CM-UNet with a channel attention (CA) module and a multi-level semantic supervision (MSS) module is used to segment cell clumps; it can ignore staining variance and focus on extracting semantic information for segmentation. In the second stage, we propose ECRNet, a robust and effective classification algorithm based on contrastive learning; by momentum-based updating and labeled memory banks, it can reduce most false negative results. On the XJTU-EC dataset, CM-UNet achieves excellent segmentation performance, and ECRNet obtains an accuracy of 98.50%, a precision of 99.32%, and a sensitivity of 97.67% on the test set, outperforming other competitive classical models. Our method robustly predicts endometrial cancer on cytopathology images with different staining styles, which will further advance research in endometrial cancer screening and provide early diagnosis for patients. The code will be available on GitHub.
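The momentum-based memory-bank update underlying contrastive schemes such as ECRNet follows the standard exponential-moving-average rule. A sketch (the coefficient value and the dict-based bank are illustrative assumptions, not details from the paper):

```python
def momentum_update(bank, features, labels, m=0.999):
    """Update a labeled memory bank with an exponential moving average:
    bank[label] <- m * bank[label] + (1 - m) * feature.

    bank: dict mapping class label -> feature vector (list of floats).
    features/labels: a batch of new feature vectors and their class labels.
    """
    for feat, lab in zip(features, labels):
        if lab not in bank:
            bank[lab] = list(feat)  # initialize on first sight of a class
        else:
            bank[lab] = [m * b + (1 - m) * f for b, f in zip(bank[lab], feat)]
    return bank
```

The large momentum keeps each class prototype slowly moving and stable, which is what lets the contrastive loss compare new samples against reliable class representatives and suppress false negatives.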
Affiliation(s)
- Ruijie Wang
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Qing Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Guizhi Shi
- Laboratory Animal Center, Institute of Biophysics, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences, Beijing, China
- Qiling Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Dexing Zhong
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China
- Pazhou Laboratory, Guangzhou, P.R. China
- Research Institute of Xi’an Jiaotong University, Hangzhou, Zhejiang, P.R. China
5. Ma J, Chen H. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images. IEEE Trans Med Imaging 2024;43:1388-1399. PMID: 38010933. DOI: 10.1109/tmi.2023.3337253.
Abstract
Fluorescence staining is an important technique in the life sciences for labeling cellular constituents, but it is time-consuming and makes simultaneous labeling difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks; however, their performance relies on large-scale pretraining, hindering their development in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens; the pretraining time of our method is only 1/16 that of the original MAE. We also design a supervised proxy task that predicts stained images with multiple styles instead of masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated by different metrics, making fair comparison difficult, so we develop a standard benchmark based on three public datasets and build a baseline for future researchers. Extensive experiments on the three benchmark datasets show that the proposed method achieves the best performance both quantitatively and qualitatively, and ablation studies confirm the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
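The grid-sampling step is easy to make concrete: keeping one pixel per 2x2 block discards exactly 75% of pixels and shrinks the token grid fourfold. A sketch (a simplification of the paper's combined downsampling and grid-sampling strategy):

```python
def grid_sample_mask(img, stride=2):
    """Keep one pixel per stride x stride block (the top-left corner) and
    mask the rest; with stride=2 exactly 75% of pixels are discarded.

    img: 2D list (H x W). Returns (kept_grid, masked_ratio).
    """
    kept = [row[::stride] for row in img[::stride]]
    total = len(img) * len(img[0])
    kept_count = len(kept) * len(kept[0])
    return kept, 1 - kept_count / total
```

Because the kept pixels still form a regular (smaller) grid, they can be fed to the transformer as a dense token map rather than a ragged set of visible patches, which is where the pretraining speedup comes from.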
6. Song Y, Zou J, Choi KS, Lei B, Qin J. Cell classification with worse-case boosting for intelligent cervical cancer screening. Med Image Anal 2024;91:103014. PMID: 37913578. DOI: 10.1016/j.media.2023.103014.
Abstract
Cell classification underpins intelligent cervical cancer screening, a cytology examination that effectively decreases both the morbidity and mortality of cervical cancer. This task, however, is rather challenging, mainly because it is difficult to collect a training dataset sufficiently representative of the unseen test data: cells vary widely in appearance and shape across cancerous states. This difficulty means a classifier, though trained properly, will often misclassify cells that are underrepresented by the training dataset, eventually leading to wrong screening results. To address this, we propose a new learning algorithm, called worse-case boosting, that lets classifiers learn effectively from under-representative datasets in cervical cell classification. The key idea is to learn more from worse-case data, i.e., data for which the classifier has a larger gradient norm than for other training data and which are therefore more likely to be underrepresented, by dynamically assigning them more training iterations and larger loss weights to boost the classifier's generalizability on underrepresented data. We realize this idea by sampling worse-case data according to gradient-norm information and then enhancing their loss values when updating the classifier. We demonstrate the effectiveness of the algorithm on two publicly available cervical cell classification datasets (to the best of our knowledge, the two largest ones), where extensive experiments yield positive results (a 4% accuracy improvement). The source code is available at: https://github.com/YouyiSong/Worse-Case-Boosting.
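The sampling-plus-reweighting core of worse-case boosting can be sketched in a few lines (the proportional weighting rule here is our simplification; the paper's exact sampling schedule and weight enhancement may differ):

```python
import random

def worse_case_batch(grad_norms, batch_size, rng=None):
    """Sample training indices with probability proportional to per-sample
    gradient norm, and return loss weights that upweight worse cases
    (weight > 1 for samples with above-average gradient norm).

    grad_norms: per-sample gradient norms from the last forward/backward pass.
    """
    rng = rng or random.Random(0)
    # Sample with replacement, favoring large-gradient (underrepresented) data.
    idx = rng.choices(range(len(grad_norms)), weights=grad_norms, k=batch_size)
    mean = sum(grad_norms) / len(grad_norms)
    weights = [grad_norms[i] / mean for i in idx]
    return idx, weights
```

In a training loop, the returned indices select the next batch (so worse cases get more iterations) and the weights scale the per-sample losses before the gradient step.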
Affiliation(s)
- Youyi Song
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Zou
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Kup-Sze Choi
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Baiying Lei
- Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
7. You S, Lei B, Wang S, Chui CK, Cheung AC, Liu Y, Gan M, Wu G, Shen Y. Fine Perceptive GANs for Brain MR Image Super-Resolution in Wavelet Domain. IEEE Trans Neural Netw Learn Syst 2023;34:8802-8814. PMID: 35254996. DOI: 10.1109/tnnls.2022.3153088.
Abstract
Magnetic resonance (MR) imaging plays an important role in clinical practice and brain exploration. However, limited by factors such as imaging hardware, scanning time, and cost, it is challenging to acquire high-resolution MR images clinically. In this article, fine perceptive generative adversarial networks (FP-GANs) are proposed to produce super-resolution (SR) MR images from their low-resolution counterparts. Adopting a divide-and-conquer scheme, FP-GANs deal with the low-frequency (LF) and high-frequency (HF) components of MR images separately and in parallel. Specifically, FP-GANs first decompose an MR image into an LF global-approximation subband and HF anatomical-texture subbands in the wavelet domain. Each subband generative adversarial network (GAN) then concentrates on super-resolving its corresponding subband image. In the generator, multiple residual-in-residual dense blocks are introduced for better feature extraction, and a texture-enhancing module is designed to balance global topology against detailed textures. Finally, the whole image is reconstructed by integrating the inverse discrete wavelet transformation into FP-GANs. Comprehensive experiments on the MultiRes_7T and ADNI datasets demonstrate that the proposed model achieves finer structure recovery and outperforms competing methods quantitatively and qualitatively. Moreover, FP-GANs demonstrate further value when the SR results are applied to classification tasks.
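The wavelet divide-and-conquer at the heart of FP-GANs, splitting an image into one low-frequency (LL) and three high-frequency (LH, HL, HH) subbands and later inverting the transform to reassemble the image, can be sketched with a one-level Haar transform (the wavelet family here is our choice for illustration; the paper does not restrict it to Haar):

```python
def haar_dwt2(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands.
    img: 2D list of floats with even height and width."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]; LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]; HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2*i][2*j], img[2*i][2*j+1]
            c, d = img[2*i+1][2*j], img[2*i+1][2*j+1]
            LL[i][j] = (a + b + c + d) / 2  # global approximation
            LH[i][j] = (a - b + c - d) / 2  # horizontal detail
            HL[i][j] = (a + b - c - d) / 2  # vertical detail
            HH[i][j] = (a - b - c + d) / 2  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: reconstructs the original image exactly."""
    h, w = len(LL), len(LL[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2*i][2*j]     = (ll + lh + hl + hh) / 2
            img[2*i][2*j+1]   = (ll - lh + hl - hh) / 2
            img[2*i+1][2*j]   = (ll + lh - hl - hh) / 2
            img[2*i+1][2*j+1] = (ll - lh - hl + hh) / 2
    return img
```

In the FP-GANs pipeline each of the four subbands would be super-resolved by its own GAN before the inverse transform reassembles the full image.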
8. Liang Y, Feng S, Liu Q, Kuang H, Liu J, Liao L, Du Y, Wang J. Exploring Contextual Relationships for Cervical Abnormal Cell Detection. IEEE J Biomed Health Inform 2023;27:4086-4097. PMID: 37192032. DOI: 10.1109/jbhi.2023.3276919.
Abstract
Cervical abnormal cell detection is a challenging task because the morphological discrepancies between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists always take surrounding cells as references when identifying its abnormality. To mimic this behavior, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, contextual relationships both between cells and between cells and the global image are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish a strong baseline using Double-Head Faster R-CNN with a feature pyramid network (FPN) and integrate RRAM and GRAM into it to validate the effectiveness of the proposed modules. Experiments conducted on a large cervical cell detection dataset reveal that introducing RRAM and GRAM each achieves better average precision (AP) than the baseline methods; moreover, cascading RRAM and GRAM outperforms the state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can facilitate image- and smear-level classification.
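The RoI-to-RoI attention idea behind a module like RRAM follows the standard scaled-dot-product pattern, with each proposal's feature refined by an attention-weighted sum over all proposals; a minimal sketch (identity query/key/value projections are our simplification; the real module uses learned mappings):

```python
import math

def roi_relation_attention(rois):
    """Refine each RoI feature as an attention-weighted sum over all RoIs,
    so every proposal 'looks at' the surrounding cells' features.

    rois: list of equal-length feature vectors (lists of floats).
    """
    d = len(rois[0])
    out = []
    for q in rois:
        # Scaled dot-product scores against every RoI (keys == values == rois).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in rois]
        mx = max(scores)                      # stabilize the softmax
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        attn = [e / z for e in exps]
        out.append([sum(a * k[i] for a, k in zip(attn, rois)) for i in range(d)])
    return out
```

A GRAM-style variant would add the pooled global-image feature as one extra key/value entry so each RoI also attends to whole-image context.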
9. Deep learning for computational cytology: A survey. Med Image Anal 2023;84:102691. PMID: 36455333. DOI: 10.1016/j.media.2022.102691.
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images with computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications in cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
10. Manuel C, Zehnder P, Kaya S, Sullivan R, Hu F. Impact of color augmentation and tissue type in deep learning for hematoxylin and eosin image super resolution. J Pathol Inform 2022;13:100148. PMID: 36268062; PMCID: PMC9577134. DOI: 10.1016/j.jpi.2022.100148.
Affiliation(s)
- Fangyao Hu
- Corresponding author at: Genentech, 1 DNA Way, South San Francisco, CA 94080, USA
11. Cervical cytopathology image refocusing via multi-scale attention features and domain normalization. Med Image Anal 2022;81:102566. DOI: 10.1016/j.media.2022.102566.
12. Ding Z, Zhao Y, Zhang G, Zhong M, Guan X, Zhang Y. Application of visual mechanical signal detection and loading platform with super-resolution based on deep learning. Int J Intell Syst 2022. DOI: 10.1002/int.22905.
Affiliation(s)
- Zhiquan Ding
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Yu Zhao
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Guolong Zhang
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Meiling Zhong
- School of Materials Science and Engineering, East China Jiaotong University, Nanchang, China
- Xiaohui Guan
- The National Engineering Research Center for Bioengineering Drugs and the Technologies, Nanchang University, Nanchang, China
- Yuejin Zhang
- School of Information Engineering, East China Jiaotong University, Nanchang, China
13. Sun K, Gao Y, Xie T, Wang X, Yang Q, Chen L, Wang K, Yu G. A low-cost pathological image digitalization method based on 5 times magnification scanning. Quant Imaging Med Surg 2022;12:2813-2829. PMID: 35502389; PMCID: PMC9014144. DOI: 10.21037/qims-21-749.
Abstract
BACKGROUND: Digital pathology has aroused widespread interest in modern pathology. The key to digitalization is to scan the whole slide image (WSI) at high magnification. The file size of each WSI at 40 times magnification (40×) may range from 1 gigabyte (GB) to 5 GB depending on the size of the specimen, which demands huge storage capacity and makes scanning and network exchange very slow, seriously increasing the time and storage costs of digital pathology. METHODS: We design a strategy to scan slides at low resolution (LR, 5×) and propose a super-resolution (SR) method to restore image details during diagnosis. The method is based on a multiscale generative adversarial network that sequentially generates three high-resolution (HR) images: 10×, 20×, and 40×. A dataset of 100,000 pathological images from 10 types of human body systems is used for training and testing. The differences between the generated and real images are extensively evaluated by quantitative metrics, visual inspection, medical scoring, and diagnosis. RESULTS: The file size of each 5× WSI is approximately 15 megabytes. The peak signal-to-noise ratios (PSNRs) of the 10× to 40× generated images are 24.167±3.734 dB, 22.272±4.272 dB, and 20.436±3.845 dB, and the structural similarity (SSIM) index values are 0.845±0.089, 0.680±0.150, and 0.559±0.179, better than those of other SR networks and conventional digital zoom methods. Visual inspection shows that the generated images have details similar to the real images. Average visual scores (with 0.95 confidence intervals) from three pathologists are 3.630±1.024, 3.700±1.126, and 3.740±1.095, and the P value of the analysis of variance is 0.367, indicating the pathologists confirm that the generated images include sufficient information for diagnosis. The average Kappa value for the diagnoses of paired generated and real images is 0.990, meaning the diagnosis of generated images is highly consistent with that of the real images. CONCLUSIONS: The proposed method generates high-quality 10×, 20×, and 40× images from 5× images, reducing the time and storage costs of digitalization to as little as 1/64 of the previous costs. It shows potential for clinical applications and is expected to become an alternative digitalization method after large-scale evaluation.
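Two quantities from this abstract are easy to make concrete: scanning at 5x instead of 40x reduces the pixel count, and hence storage, by a factor of (40/5)^2 = 64, and image fidelity is reported as PSNR. A sketch of both using their standard definitions (not code from the paper):

```python
import math

def storage_reduction(low_mag, high_mag):
    """Pixel-count (area) ratio between two scan magnifications:
    resolution scales linearly per axis, so storage scales with its square."""
    return (high_mag / low_mag) ** 2

def psnr(original, generated, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size 2D images
    (higher is better; infinite for identical images)."""
    n = sum(len(row) for row in original)
    mse = sum((a - b) ** 2
              for ra, rb in zip(original, generated)
              for a, b in zip(ra, rb)) / n
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

The 1/64 figure in the conclusions is exactly `storage_reduction(5, 40)`, before any file-format compression is taken into account.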
Affiliation(s)
- Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Yanhua Gao
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an, China
- Ting Xie
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Xun Wang
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Qingqing Yang
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Le Chen
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Kuansong Wang
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, China
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
14. Ma J, Liu S, Cheng S, Chen R, Liu X, Chen L, Zeng S. STSRNet: Self-Texture Transfer Super-Resolution and Refocusing Network. IEEE Trans Med Imaging 2022;41:383-393. PMID: 34520352. DOI: 10.1109/tmi.2021.3112923.
Abstract
Biomedical microscopy images with high resolution (HR) and axial information can aid analysis and diagnosis. However, obtaining such images usually incurs extra time and economic costs, which makes it impractical in most scenarios. In this paper, we propose a novel Self-texture Transfer Super-resolution and Refocusing Network (STSRNet) to reconstruct HR multi-focal-plane (MFP) images from a single 2D low-resolution (LR) wide-field image, without relying on scanning or any special devices. STSRNet consists of three parts: a backbone module for extracting features, a self-texture transfer module for transferring and fusing features, and a flexible reconstruction module for SR and refocusing. The self-texture transfer module is designed for images with self-similarity, such as cytological images: it searches for similar textures within the image and transfers them to help MFP reconstruction. The reconstruction module is composed of multiple pluggable components, each responsible for a specific focal plane, so SR and refocusing are performed for all focal planes at once to reduce computation. Extensive experiments on cytological images show that MFP images reconstructed by STSRNet have richer details in the axial and horizontal directions than the input images, and the reconstructed MFP images also perform better than single 2D wide-field images on high-level tasks. The proposed method provides relatively high-quality MFP images when real MFP images cannot be obtained, greatly expanding the application potential of LR wide-field images. To further promote the development of this field, we have released our cytology dataset, named RSDC, for other researchers to use.
15. Secure data stream transmission method for cell pathological image storage system. Int J Intell Syst 2021. DOI: 10.1002/int.22685.
16. Chen X, Yu J, Cheng S, Geng X, Liu S, Han W, Hu J, Chen L, Liu X, Zeng S. An unsupervised style normalization method for cytopathology images. Comput Struct Biotechnol J 2021;19:3852-3863. PMID: 34285783; PMCID: PMC8273362. DOI: 10.1016/j.csbj.2021.06.025.
Abstract
Diverse styles of cytopathology images have a negative effect on the generalization ability of automated image analysis algorithms. This article proposes an unsupervised method to normalize cytopathology image styles. We design a two-stage style normalization framework with a style removal module, which converts a colorful cytopathology image into a gray-scale image with a color-encoding mask, and a domain adversarial style reconstruction module, which maps them back to a colorful image with a user-selected style. Our method enforces both hue and structure consistency before and after normalization by using the color-encoding mask and per-pixel regression. Intra-domain and inter-domain adversarial learning are applied to ensure that the style of the normalized images is consistent with the user-selected style for input images from different domains. Our method shows superior results against current unsupervised color normalization methods on six cervical cell datasets from different hospitals and scanners. We further demonstrate that our normalization method greatly improves the recognition accuracy of lesion cells on unseen cytopathology images, which is meaningful for model generalization.
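The two-stage decomposition described in this abstract (style removal into gray value plus a color-encoding mask, then style reconstruction) can be caricatured per pixel. This is a heavily simplified sketch under stated assumptions, not the paper's adversarial networks: the "mask" here is just the dominant RGB channel index, and the "style" is a hypothetical per-channel tint table.

```python
# Minimal per-pixel sketch of the two-stage idea (illustrative simplification,
# NOT the authors' method): stage 1 strips style by splitting each RGB pixel
# into a gray value plus a color-encoding mask; stage 2 re-colors the gray
# value in a user-selected style, guided by the mask so hue assignments
# survive normalization.

def remove_style(pixel):
    """Stage 1: RGB pixel -> (gray value, color-encoding mask)."""
    r, g, b = pixel
    gray = round(0.299 * r + 0.587 * g + 0.114 * b)   # BT.601 luma weights
    mask = max(range(3), key=lambda i: pixel[i])       # dominant channel index
    return gray, mask

def apply_style(gray, mask, style):
    """Stage 2: re-color with a per-mask RGB tint from the target style.

    `style` maps a mask index to an RGB tint with components in [0, 1]."""
    tint = style[mask]
    return tuple(min(255, round(gray * t)) for t in tint)
```

In the paper, both stages are learned networks and the consistency between input and output is enforced by per-pixel regression and adversarial losses rather than a fixed tint table; the sketch only makes the gray-plus-mask factorization concrete.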
Affiliation(s)
- Xihao Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Jingya Yu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shenghua Cheng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiebo Geng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Sibo Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Wei Han
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Junbo Hu
- Women and Children Hospital of Hubei Province, Wuhan, Hubei, China
- Li Chen
- Department of Clinical Laboratory, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiuli Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
|
17
|
Chen Z, Guo X, Woo PYM, Yuan Y. Super-Resolution Enhanced Medical Image Diagnosis With Sample Affinity Interaction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1377-1389. [PMID: 33507866 DOI: 10.1109/tmi.2021.3055290] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The degradation in image resolution harms the performance of medical image diagnosis. By inferring high-frequency details from low-resolution (LR) images, super-resolution (SR) techniques can introduce additional knowledge and assist high-level tasks. In this paper, we propose an SR enhanced diagnosis framework, consisting of an efficient SR network and a diagnosis network. Specifically, a Multi-scale Refined Context Network (MRC-Net) with Refined Context Fusion (RCF) is devised to leverage global and local features for SR tasks. Instead of learning from scratch, we first develop a recursive MRC-Net with temporal context, and then propose a recursion distillation scheme to enhance the performance of MRC-Net from the knowledge of the recursive one while reducing the computational cost. The diagnosis network jointly utilizes the reliable original images and the more informative SR images in two branches, with the proposed Sample Affinity Interaction (SAI) blocks at different stages to effectively extract and integrate discriminative features for diagnosis. Moreover, two novel constraints, sample affinity consistency and sample affinity regularization, are devised to refine the features and achieve mutual promotion between the two branches. Extensive experiments on synthetic and real LR cases are conducted on wireless capsule endoscopy and histopathology images, verifying that our proposed method is significantly effective for medical image diagnosis.
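The sample-affinity consistency constraint mentioned in this abstract compares how samples in a batch relate to one another under each branch's features. A hedged pure-Python sketch of that idea (illustrative only, not the paper's implementation; cosine affinity and the mean-squared penalty are assumptions) is:

```python
# Sketch of a sample-affinity consistency penalty (hypothetical form, NOT the
# paper's code): build a pairwise cosine-affinity matrix over a batch for each
# branch's feature vectors, then penalize disagreement between the two
# matrices so the original-image and SR-image branches refine each other.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def affinity_matrix(feats):
    """Pairwise cosine affinities for a batch of feature vectors."""
    n = len(feats)
    return [[cosine(feats[i], feats[j]) for j in range(n)] for i in range(n)]

def affinity_consistency_loss(feats_a, feats_b):
    """Mean squared difference between the two branches' affinity matrices."""
    A, B = affinity_matrix(feats_a), affinity_matrix(feats_b)
    n = len(A)
    return sum((A[i][j] - B[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)
```

The loss is zero exactly when both branches induce the same batch geometry, which is the consistency the constraint encourages; in practice it would be one term in the joint training objective alongside the diagnosis loss.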
|