1. Su H, Jin X, Kong L, You Y, Wu H, Liou Y, Li L. The triage role of cytological DNA methylation in women with non-16/18, specifically genotyping high-risk HPV infection. Br J Cancer 2025. PMID: 40204948; DOI: 10.1038/s41416-025-03005-5.
Abstract
OBJECTIVES To evaluate cytological DNA methylation testing methods for risk stratification in women with non-16/18 HPV, focusing on high-risk HPV (hrHPV) genotyping. METHODS This study compared the triage performance of liquid-based cytology (LBC) testing, hrHPV genotyping, and PAX1/JAM3 gene methylation (CISCER) testing. The absolute risks of cervical intraepithelial neoplasia grade 2 or worse (CIN2+), grade 3 or worse (CIN3+), and colposcopy referral rates were calculated. RESULTS The CISCER test showed a CIN3+ risk of 39.1% for positive and 0.9% for negative results. In comparison, LBC ≥ ASCUS and HPV33/35 genotyping had CIN3+ risks of 9.8% and 19.3%, respectively, for positive results. The colposcopy referral rates were 17.4% for CISCER+, 61.9% for LBC ≥ ASCUS, and 8.9% for HPV33/35+ genotyping. The CIN3+ risks were 40.0% and 50.0% when CISCER+ was combined with LBC ≥ ASCUS and HPV33/35+, respectively. The CIN3+ risks were 0.0% and 1.0% when CISCER- was combined with LBC negative for intraepithelial lesion or malignancy (NILM) and non-HPV33/35, respectively. Our analysis of CIN2+ patients yielded similar results. CONCLUSIONS DNA methylation testing outperformed LBC in triaging women with non-16/18 hrHPV infections, significantly reducing unnecessary colposcopy referrals, particularly when combined with HPV33/35 genotyping.
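As a brief illustration of how the absolute risks and colposcopy referral rates reported above are derived, the sketch below computes both from raw counts; the counts used are hypothetical placeholders, not data from this study.

```python
# Minimal sketch, assuming hypothetical counts: absolute CIN3+ risk among
# test-positive women and the colposcopy referral rate of a triage test.

def triage_metrics(n_test_pos, n_cin3plus_in_pos, n_total):
    absolute_risk = n_cin3plus_in_pos / n_test_pos   # CIN3+ risk given a positive triage result
    referral_rate = n_test_pos / n_total             # fraction of the cohort referred to colposcopy
    return absolute_risk, referral_rate

# Hypothetical example: 40 triage-positive women, 9 of them with CIN3+, cohort of 230.
risk, referral = triage_metrics(n_test_pos=40, n_cin3plus_in_pos=9, n_total=230)
print(f"absolute CIN3+ risk: {risk:.1%}, colposcopy referral rate: {referral:.1%}")
```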
Affiliation(s)
- Haiqi Su
- Department of Obstetrics and Gynecology, Peking Union Medical College Hospital, Beijing, China
- National Clinical Research Center for Obstetric & Gynecologic Diseases, Beijing, China
- State Key Laboratory for Complex, Severe and Rare Diseases, Peking Union Medical College Hospital, Beijing, China
- Xitong Jin
- Department of Medical Laboratory, Beijing Origin-Poly Bio-Tec Co. Ltd, Beijing, China
- Linghua Kong
- Department of Obstetrics and Gynecology, Peking Union Medical College Hospital, Beijing, China
- National Clinical Research Center for Obstetric & Gynecologic Diseases, Beijing, China
- State Key Laboratory for Complex, Severe and Rare Diseases, Peking Union Medical College Hospital, Beijing, China
- Yan You
- Department of Pathology, Peking Union Medical College Hospital, Beijing, China
- Huanwen Wu
- Department of Pathology, Peking Union Medical College Hospital, Beijing, China
- Yuligh Liou
- Department of Medical Laboratory, Beijing Origin-Poly Bio-Tec Co. Ltd, Beijing, China
- Clinical Precision Medicine Research Center, the First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou, China
- Lei Li
- Department of Obstetrics and Gynecology, Peking Union Medical College Hospital, Beijing, China.
- National Clinical Research Center for Obstetric & Gynecologic Diseases, Beijing, China.
- State Key Laboratory for Complex, Severe and Rare Diseases, Peking Union Medical College Hospital, Beijing, China.
2. Taghados Z, Azimifar Z, Monsefi M, Jahromi MA. CausalCervixNet: convolutional neural networks with causal insight (CICNN) in cervical cancer cell classification-leveraging deep learning models for enhanced diagnostic accuracy. BMC Cancer 2025;25:607. PMID: 40181353; PMCID: PMC11969838; DOI: 10.1186/s12885-025-13926-2.
Abstract
Cervical cancer is a significant global health issue affecting women worldwide, necessitating prompt detection and effective management. According to the World Health Organization (WHO), approximately 660,000 new cases of cervical cancer and 350,000 deaths were reported globally in 2022, with the majority occurring in low- and middle-income countries. These figures emphasize the critical need for effective prevention, early detection, and diagnostic strategies. Recent advancements in machine learning (ML) and deep learning (DL) have greatly enhanced the accuracy of cervical cancer cell classification and diagnosis in manual screening. However, traditional predictive approaches often lack interpretability, which is critical for building explainable AI systems in medicine. Integrating causal reasoning, causal inference, and causal discovery into diagnostic frameworks addresses these challenges by uncovering latent causal relationships rather than relying solely on observational correlations. This ensures greater consistency, comprehensibility, and transparency in medical decision-making. This study introduces CausalCervixNet, a Convolutional Neural Network with Causal Insight (CICNN) tailored for cervical cancer cell classification. By leveraging causality-based methodologies, CausalCervixNet uncovers hidden causal factors in cervical cell images, enhancing both diagnostic accuracy and efficiency. The approach was validated on three datasets: SIPaKMeD, Herlev, and our self-collected ShUCSEIT (Shiraz University-Computer Science, Engineering, and Information Technology) dataset, containing detailed cervical cell cytopathology images. The proposed framework achieved classification accuracies of 99.14%, 97.31%, and 99.09% on the SIPaKMeD, Herlev, and ShUCSEIT datasets, respectively. These results highlight the importance of integrating causal discovery, causal reasoning, and causal inference into diagnostic workflows. By merging causal perspectives with advanced DL models, this research offers an interpretable, reliable, and efficient framework for cervical cancer diagnosis, contributing to improved patient outcomes and advancements in cervical cancer treatment.
Affiliation(s)
- Zahra Taghados
- Department of Computer Science, Engineering and Information Technology, Shiraz University, Shiraz, Iran
- Zohreh Azimifar
- Department of Computer Science, Engineering and Information Technology, Shiraz University, Shiraz, Iran.
3. Wen J, Wu P, Li J, Xu H, Li Y, Chen K, Li G, Lv Z, Wang X. Application of bioelectrical impedance detection techniques: Cells and tissues. Biosens Bioelectron 2025;273:117159. PMID: 39837237; DOI: 10.1016/j.bios.2025.117159.
Abstract
Pathological conditions in organisms often arise from various cellular or tissue abnormalities, including dysregulation of cell numbers, infections, aberrant differentiation, and tissue pathologies such as lung tumors and skin tumors. Thus, developing methods for analyzing and identifying these biological abnormalities presents a significant challenge. While traditional bioanalytical methods such as flow cytometry and magnetic resonance imaging are well-established, they suffer from inefficiencies, high costs, complexity, and potential hazards. To address these challenges, bioelectrical impedance detection technology, which leverages the electrical properties of biological cells and tissues to extract relevant biomedical information, has garnered considerable attention in the field of biological detection due to its affordability, convenience, non-invasiveness, and label-free nature. This article first provides a brief introduction to the principles of bioelectrical impedance and related detection techniques, as well as the equivalent circuit models and numerical simulation models developed at the cellular and tissue levels. Next, this article delves into the applications of bioelectrical impedance technology at the cellular level, including recent advancements in cell counting, classification, concentration detection, differentiation, and infection, thereby enriching previous literature reviews from a multicellular perspective. In addition, this article highlights the applications of bioelectrical impedance technology in relevant tissues including muscle, skin, lungs, and so on. Finally, the article explores the future opportunities and challenges of bioelectrical impedance detection and analysis technology, focusing on interdisciplinary research areas and data-driven intelligent analysis, offering researchers broader research directions and perspectives.
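To make the equivalent-circuit idea concrete, the sketch below evaluates a classic three-element (Fricke-type) cell model: extracellular resistance in parallel with the series combination of intracellular resistance and membrane capacitance. The component values are illustrative assumptions, not measurements from the review.

```python
# Minimal sketch of a three-element cell equivalent circuit, Re || (Ri + 1/(jwCm)).
# Component values are illustrative assumptions.
import numpy as np

def cell_impedance(freq_hz, Re=2000.0, Ri=500.0, Cm=1e-9):
    """Complex impedance of extracellular resistance Re in parallel with
    intracellular resistance Ri in series with membrane capacitance Cm."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_membrane_path = Ri + 1.0 / (1j * w * Cm)      # current path through the cell interior
    return Re * z_membrane_path / (Re + z_membrane_path)

freqs = np.logspace(2, 7, 6)                         # 100 Hz to 10 MHz sweep
for f, z in zip(freqs, cell_impedance(freqs)):
    print(f"{f:10.0f} Hz  |Z| = {abs(z):8.1f} ohm  phase = {np.degrees(np.angle(z)):6.1f} deg")
```

At low frequencies the membrane blocks current and the magnitude approaches Re; at high frequencies the membrane capacitance behaves as a short and the impedance approaches Re in parallel with Ri, which is the frequency-dependent behaviour such circuit models are used to capture.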
Affiliation(s)
- Jianming Wen
- College of Mathematical Medicine, Zhejiang Normal University, Jinhua, China; The Institute of Precision Machinery and Smart Structure, College of Engineering, Zhejiang Normal University, Jinhua, China
- Pengjie Wu
- College of Mathematical Medicine, Zhejiang Normal University, Jinhua, China; College of Computer Science and Technology, Zhejiang Normal University, Jinhua, China
- Jianping Li
- The Institute of Precision Machinery and Smart Structure, College of Engineering, Zhejiang Normal University, Jinhua, China
- Hao Xu
- College of Mathematical Medicine, Zhejiang Normal University, Jinhua, China; Puyang Institute of Big Data and Artificial Intelligence, Puyang, China
- Ya Li
- College of Mathematical Medicine, Zhejiang Normal University, Jinhua, China
- Kang Chen
- College of Mathematical Medicine, Zhejiang Normal University, Jinhua, China
- Guangfei Li
- Department of Biomedical Engineering, College of Chemistry and Life Science, Beijing University of Technology, Beijing, China
- Zhong Lv
- Affiliated Dongyang Hospital of Wenzhou Medical University, Jinhua, China
- Xiaolin Wang
- College of Mathematical Medicine, Zhejiang Normal University, Jinhua, China; Affiliated Dongyang Hospital of Wenzhou Medical University, Jinhua, China.
4. Sun H, Guo D, Chen Z. Mixed-Supervised Learning for Cell Classification. Sensors (Basel) 2025;25:1207. PMID: 40006436; PMCID: PMC11859526; DOI: 10.3390/s25041207.
Abstract
Cell classification based on histopathology images is crucial for tumor recognition and cancer diagnosis. With deep learning, classification accuracy has improved substantially. Semi-supervised learning is an advanced deep learning approach that uses both labeled and unlabeled data. However, complex datasets that comprise diverse patterns may drive models towards learning harmful features. Therefore, it is useful to involve human guidance during training. Hence, we propose a mixed-supervised method incorporating semi-supervision and "human-in-the-loop" for cell classification. We design a sample selection mechanism that assigns highly confident unlabeled samples to automatic semi-supervised optimization and unreliable ones to online annotation correction. We use prior human annotations to pretrain the backbone, and then fine-tune the model with trustworthy pseudo-labels and online human annotations for accurate cell classification. Experimental results show that the mixed-supervised model reaches overall accuracies as high as 86.56%, 99.33% and 74.12% on the LUSC, BloodCell, and PanNuke datasets, respectively.
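The confidence-based routing described above can be sketched as a simple softmax-threshold rule (not the authors' exact mechanism); the threshold and toy logits below are assumptions.

```python
# Minimal sketch: route unlabeled samples either to automatic pseudo-labeling or
# to online human annotation, based on softmax confidence. Threshold is assumed.
import torch

def route_unlabeled(logits, threshold=0.95):
    probs = torch.softmax(logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    confident = confidence >= threshold
    auto_idx = confident.nonzero(as_tuple=True)[0]        # used in the semi-supervised loss
    review_idx = (~confident).nonzero(as_tuple=True)[0]   # sent for online annotation correction
    return auto_idx, pseudo_labels[auto_idx], review_idx

logits = torch.randn(8, 5)                                # 8 unlabeled cells, 5 classes (toy values)
auto_idx, pseudo, review_idx = route_unlabeled(logits)
print(f"{len(auto_idx)} auto-labeled, {len(review_idx)} flagged for annotators")
```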
Affiliation(s)
- Hao Sun
- School of Computer Science and Technology, Donghua University, Shanghai 201620, China; (H.S.); (D.G.)
- Danqi Guo
- School of Computer Science and Technology, Donghua University, Shanghai 201620, China; (H.S.); (D.G.)
- Zhao Chen
- School of Computer Science and Technology, Donghua University, Shanghai 201620, China; (H.S.); (D.G.)
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
5. Rio-Alvarez A, Marcos PG, González PP, Serrano-Pertierra E, Novelli A, Fernández-Sánchez MT, González VM. Evaluating deep learning techniques for optimal neurons counting and characterization in complex neuronal cultures. Med Biol Eng Comput 2025;63:545-560. PMID: 39417963; PMCID: PMC11750910; DOI: 10.1007/s11517-024-03202-z.
Abstract
The counting and characterization of neurons in primary cultures have long been areas of significant scientific interest due to their multifaceted applications, ranging from neuronal viability assessment to the study of neuronal development. Traditional methods, often relying on fluorescence or colorimetric staining and manual segmentation, are time-consuming, labor-intensive, and prone to error, raising the need for the development of automated and reliable methods. This paper delves into the evaluation of three pivotal deep learning techniques: semantic segmentation, which allows for pixel-level classification and is solely suited for characterization; object detection, which focuses on counting and locating neurons; and instance segmentation, which amalgamates the features of the other two but employs more intricate structures. The goal of this research is to discern which technique or combination of those techniques yields the optimal results for automatic counting and characterization of neurons in images of neuronal cultures. Following rigorous experimentation, we conclude that instance segmentation stands out, providing superior outcomes for both challenges.
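For readers unfamiliar with how counts are read off a segmentation output, the sketch below counts blobs in a binary mask via connected-component labeling in SciPy; it is a generic stand-in for the detection and instance-segmentation pipelines compared in the paper, using a toy mask.

```python
# Minimal sketch: count "neurons" in a binary segmentation mask by labeling
# connected components. The mask here is a toy example, not real data.
import numpy as np
from scipy import ndimage

mask = np.zeros((64, 64), dtype=bool)
mask[5:12, 5:12] = True                         # three synthetic somata
mask[20:30, 40:50] = True
mask[45:55, 10:18] = True

labeled, n_neurons = ndimage.label(mask)        # each connected blob receives a unique id
areas = ndimage.sum(mask, labeled, range(1, n_neurons + 1))
print(f"{n_neurons} putative neurons, areas (px): {areas.astype(int).tolist()}")
```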
Affiliation(s)
- Angel Rio-Alvarez
- Computer Science Department, University of Oviedo, Oviedo, Spain.
- Biomedical Engineering Center (BME), University of Oviedo, Oviedo, Spain.
- Pablo García Marcos
- Computer Science Department, University of Oviedo, Oviedo, Spain
- Biomedical Engineering Center (BME), University of Oviedo, Oviedo, Spain
- Esther Serrano-Pertierra
- Biochemistry and Molecular Biology Department, University of Oviedo, Oviedo, Spain
- University Institute of Biotechnology of Asturias (IUBA), University of Oviedo, Oviedo, Spain
- Biomedical Engineering Center (BME), University of Oviedo, Oviedo, Spain
- Antonello Novelli
- Psychology Department, University of Oviedo, Oviedo, Spain
- University Institute of Biotechnology of Asturias (IUBA), University of Oviedo, Oviedo, Spain
- Biomedical Engineering Center (BME), University of Oviedo, Oviedo, Spain
- M Teresa Fernández-Sánchez
- Biochemistry and Molecular Biology Department, University of Oviedo, Oviedo, Spain
- University Institute of Biotechnology of Asturias (IUBA), University of Oviedo, Oviedo, Spain
- Biomedical Engineering Center (BME), University of Oviedo, Oviedo, Spain
- Víctor M González
- Electrical Engineering Department, University of Oviedo, Oviedo, Spain
- Biomedical Engineering Center (BME), University of Oviedo, Oviedo, Spain
6. Zhang Y, Ning C, Yang W. An automatic cervical cell classification model based on improved DenseNet121. Sci Rep 2025;15:3240. PMID: 39863704; PMCID: PMC11762993; DOI: 10.1038/s41598-025-87953-1.
Abstract
The cervical cell classification technique can determine the degree of cellular abnormality and pathological condition, which can help doctors to detect the risk of cervical cancer at an early stage and improve the cure and survival rates of cervical cancer patients. Addressing the issue of low accuracy in cervical cell classification, a deep convolutional neural network A2SDNet121 is proposed. A2SDNet121 takes DenseNet121 as the backbone network. Firstly, the SE module is embedded in DenseNet121 to increase the model's focus on the nucleus region, which contains important diagnostic information, and reduce the focus on redundant information. Secondly, the sizes of the convolutional kernel and pooling window of the Stem layer are adjusted to adapt to the characteristics of the cervical cell images, so that the model can extract the local detailed information more effectively. Finally, the Atrous Dense Block (ADB) is constructed, and four ADB modules are integrated into DenseNet121 to enable the model to acquire global and local salient feature information. The accuracy of A2SDNet121 for two and seven-classification tasks on the Herlev dataset is 99.75% and 99.14%, respectively. The accuracy for two, three, and five-classification tasks on the SIPaKMeD dataset reaches 99.55%, 99.75% and 99.22%, respectively. Compared with other state-of-the-art algorithms, the A2SDNet121 model performs better in the multi-classification task of cervical cells, which can significantly improve the accuracy and efficiency of cervical cancer screening.
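The squeeze-and-excitation (SE) module named above is a standard building block; a generic PyTorch sketch follows, with an arbitrary channel count and reduction ratio, and it is not the A2SDNet121 code.

```python
# Minimal SE block sketch: global average pooling ("squeeze") followed by a small
# bottleneck MLP that produces per-channel weights ("excitation").
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                  # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                       # reweight channels, e.g. nucleus-related ones

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)                                # torch.Size([2, 64, 32, 32])
```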
Affiliation(s)
- Yue Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
- Chunyu Ning
- Department of Biomedical Engineering, School of Life Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China.
- Wenjing Yang
- Department of Biomedical Engineering, School of Life Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
7. Li G, Fan X, Xu C, Lv P, Wang R, Ruan Z, Zhou Z, Zhang Y. Detection of cervical cell based on multi-scale spatial information. Sci Rep 2025;15:3117. PMID: 39856153; PMCID: PMC11760966; DOI: 10.1038/s41598-025-87165-7.
Abstract
Cervical cancer poses a significant health risk to women. Deep learning methods can assist pathologists in quickly screening images of suspected lesion cells, greatly improving the efficiency of cervical cancer screening and diagnosis. However, existing deep learning methods rely solely on single-scale features and local spatial information, failing to effectively capture the subtle morphological differences between abnormal and normal cervical cells. To tackle this problem effectively, we propose a cervical cell detection method that utilizes multi-scale spatial information. This approach efficiently captures and integrates spatial information at different scales. Firstly, we design the Multi-Scale Spatial Information Augmentation Module (MSA), which captures global spatial information by introducing a multi-scale spatial information extraction branch during the feature extraction stage. Secondly, the Channel Attention Enhanced Module (CAE) is introduced to achieve channel-level weighted processing, dynamically optimizing each output feature using channel weights to focus on critical features. We use Sparse R-CNN as the baseline and integrate MSA and CAE into it. Experiments on the CDetector dataset achieved an Average Precision (AP) of 65.3%, outperforming the state-of-the-art (SOTA) methods.
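The sketch below illustrates the general idea of a multi-scale spatial branch in a simplified pyramid-pooling style; it is not the paper's MSA module, and the pooling scales and channel sizes are assumptions.

```python
# Minimal sketch: pool the feature map at several spatial scales, project each
# summary, upsample back, and fuse with the original features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSpatialBranch(nn.Module):
    def __init__(self, in_ch, out_ch, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.convs = nn.ModuleList([nn.Conv2d(in_ch, out_ch, 1) for _ in scales])
        self.fuse = nn.Conv2d(in_ch + out_ch * len(scales), in_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for s, conv in zip(self.scales, self.convs):
            pooled = F.adaptive_avg_pool2d(x, (s, s))           # coarse spatial summary
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))               # fused multi-scale spatial feature

x = torch.randn(2, 256, 32, 32)
print(MultiScaleSpatialBranch(256, 64)(x).shape)                # torch.Size([2, 256, 32, 32])
```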
Affiliation(s)
- Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Xinyu Fan
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Chuanyun Xu
- School of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
- Pengfei Lv
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Ru Wang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Zihan Ruan
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Zheng Zhou
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Yang Zhang
- School of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China.
8. Cai D, Chen J, Zhao J, Xue Y, Yang S, Yuan W, Feng M, Weng H, Liu S, Peng Y, Zhu J, Wang K, Jackson C, Tang H, Huang J, Wang X. HiCervix: An Extensive Hierarchical Dataset and Benchmark for Cervical Cytology Classification. IEEE Trans Med Imaging 2024;43:4344-4355. PMID: 38923481; DOI: 10.1109/tmi.2024.3419697.
Abstract
Cervical cytology is a critical screening strategy for early detection of pre-cancerous and cancerous cervical lesions. The challenge lies in accurately classifying various cervical cytology cell types. Existing automated cervical cytology methods are primarily trained on databases covering a narrow range of coarse-grained cell types, which fail to provide a comprehensive and detailed performance analysis that accurately represents real-world cytopathology conditions. To overcome these limitations, we introduce HiCervix, the most extensive, multi-center cervical cytology dataset currently available to the public. HiCervix includes 40,229 cervical cells from 4,496 whole slide images, categorized into 29 annotated classes. These classes are organized within a three-level hierarchical tree to capture fine-grained subtype information. To exploit the semantic correlation inherent in this hierarchical tree, we propose HierSwin, a hierarchical vision transformer-based classification network. HierSwin serves as a benchmark for detailed feature learning in both coarse-level and fine-level cervical cancer classification tasks. In our comprehensive experiments, HierSwin demonstrated remarkable performance, achieving 92.08% accuracy for coarse-level classification and 82.93% accuracy averaged across all three levels. When compared to board-certified cytopathologists, HierSwin achieved high classification performance (0.8293 versus 0.7359 averaged accuracy), highlighting its potential for clinical applications. This newly released HiCervix dataset, along with our benchmark HierSwin method, is poised to make a substantial impact on the advancement of deep learning algorithms for rapid cervical cancer screening and greatly improve cancer prevention and patient outcomes in real-world clinical settings.
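Level-wise evaluation on a hierarchical label tree can be sketched by mapping fine predictions to their parent classes and averaging accuracy across levels; the tiny three-level taxonomy below is illustrative only, not the HiCervix class tree.

```python
# Minimal sketch: fine labels map to a middle level, which maps to a coarse level;
# accuracy is computed per level and then averaged. The taxonomy is a toy example.
fine_to_mid = {"LSIL": "squamous_abnormal", "HSIL": "squamous_abnormal", "NILM": "normal"}
mid_to_top = {"squamous_abnormal": "abnormal", "normal": "normal"}

def remap(labels, mapping):
    return [mapping[x] for x in labels]

def accuracy(truth, pred):
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

y_true, y_pred = ["LSIL", "HSIL", "NILM"], ["HSIL", "HSIL", "NILM"]

acc_fine = accuracy(y_true, y_pred)                                      # fine level
t_mid, p_mid = remap(y_true, fine_to_mid), remap(y_pred, fine_to_mid)
acc_mid = accuracy(t_mid, p_mid)                                         # middle level
acc_top = accuracy(remap(t_mid, mid_to_top), remap(p_mid, mid_to_top))   # coarse level
print([acc_fine, acc_mid, acc_top], (acc_fine + acc_mid + acc_top) / 3)  # per level and averaged
```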
9. Yi J, Liu X, Cheng S, Chen L, Zeng S. Multi-scale window transformer for cervical cytopathology image recognition. Comput Struct Biotechnol J 2024;24:314-321. PMID: 38681132; PMCID: PMC11046249; DOI: 10.1016/j.csbj.2024.04.028.
Abstract
Cervical cancer is a major global health issue, particularly in developing countries where access to healthcare is limited. Early detection of pre-cancerous lesions is crucial for successful treatment and reducing mortality rates. However, traditional screening and diagnostic processes require cytopathology doctors to manually interpret a huge number of cells, which is time-consuming, costly, and subject to variability in human experience. In this paper, we propose a Multi-scale Window Transformer (MWT) for cervical cytopathology image recognition. We design multi-scale window multi-head self-attention (MW-MSA) to simultaneously integrate cell features of different scales. Small window self-attention is used to extract local cell detail features, and large window self-attention aims to integrate features from smaller-scale window attention to achieve window-to-window information interaction. Our design enables long-range feature integration but avoids the whole-image self-attention (SA) of ViT and the twice-applied local window SA of Swin Transformer. We find convolutional feed-forward networks (CFFN) are more efficient than the original MLP-based FFN for representing cytopathology images. Our overall model adopts a pyramid architecture. We establish two multi-center cervical cell classification datasets of 192,123 two-category images and 174,138 four-category images. Extensive experiments demonstrate that our MWT outperforms state-of-the-art general classification networks and specialized classifiers for cytopathology images in the internal and external test sets. The results on large-scale datasets prove the effectiveness and generalization of our proposed model. Our work provides a reliable cytopathology image recognition method and helps establish computer-aided screening for cervical cancer. Our code is available at https://github.com/nmyz669/MWT, and our web service tool can be accessed at https://huggingface.co/spaces/nmyz/MWTdemo.
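Window self-attention rests on a window-partition operation; the generic sketch below (not the MWT code) partitions a feature map at two window sizes to echo the small-window/large-window idea.

```python
# Minimal sketch: split a (B, H, W, C) feature map into non-overlapping windows so
# that self-attention can be computed inside each window, and invert the split.
import torch

def window_partition(x, ws):
    """(B, H, W, C) -> (B * H//ws * W//ws, ws*ws, C); H and W must be divisible by ws."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def window_reverse(windows, ws, H, W):
    """Inverse of window_partition."""
    B = windows.shape[0] // ((H // ws) * (W // ws))
    x = windows.view(B, H // ws, W // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)

x = torch.randn(2, 16, 16, 96)
small, large = window_partition(x, 4), window_partition(x, 8)   # two window scales
print(small.shape, large.shape)                                  # (32, 16, 96) and (8, 64, 96)
assert torch.allclose(window_reverse(small, 4, 16, 16), x)
```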
Affiliation(s)
- Jiaxiang Yi
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Xiuli Liu
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Shenghua Cheng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Li Chen
- Department of Clinical Laboratory, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
10. Nie Z, Xu M, Wang Z, Lu X, Song W. A Review of Application of Deep Learning in Endoscopic Image Processing. J Imaging 2024;10:275. PMID: 39590739; PMCID: PMC11595772; DOI: 10.3390/jimaging10110275.
Abstract
Deep learning, particularly convolutional neural networks (CNNs), has revolutionized endoscopic image processing, significantly enhancing the efficiency and accuracy of disease diagnosis through its exceptional ability to extract features and classify complex patterns. This technology automates medical image analysis, alleviating the workload of physicians and enabling a more focused and personalized approach to patient care. However, despite these remarkable achievements, there are still opportunities to further optimize deep learning models for endoscopic image analysis, including addressing limitations such as the requirement for large annotated datasets and the challenge of achieving higher diagnostic precision, particularly for rare or subtle pathologies. This review comprehensively examines the profound impact of deep learning on endoscopic image processing, highlighting its current strengths and limitations. It also explores potential future directions for research and development, outlining strategies to overcome existing challenges and facilitate the integration of deep learning into clinical practice. Ultimately, the goal is to contribute to the ongoing advancement of medical imaging technologies, leading to more accurate, personalized, and optimized medical care for patients.
Affiliation(s)
- Zihan Nie
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (Z.N.); (M.X.); (Z.W.); (X.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan 250061, China
- Muhao Xu
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (Z.N.); (M.X.); (Z.W.); (X.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan 250061, China
- Zhiyong Wang
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (Z.N.); (M.X.); (Z.W.); (X.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan 250061, China
- Xiaoqi Lu
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (Z.N.); (M.X.); (Z.W.); (X.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan 250061, China
- Weiye Song
- School of Mechanical Engineering, Shandong University, Jinan 250061, China; (Z.N.); (M.X.); (Z.W.); (X.L.)
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan 250061, China
11. Yang T, Hu H, Li X, Meng Q, Huang Q. A two-stream decision fusion network for cervical pap-smear image classification tasks. Tissue Cell 2024;90:102505. PMID: 39116530; DOI: 10.1016/j.tice.2024.102505.
Abstract
Deep learning, especially Convolutional Neural Networks (CNNs), has demonstrated superior performance in image recognition and classification tasks. CNNs make complex pattern recognition possible by extracting image features through layers of abstraction. However, despite the excellent performance of deep learning in general image classification, its limitations are becoming apparent in specific domains such as cervical cell medical image classification. Although the morphology of cervical cells varies among normal, diseased, and cancerous states, these differences are sometimes very small and difficult to capture. To solve this problem, we propose a two-stream feature fusion model comprising a manual feature branch, a deep feature branch, and a decision fusion module. Specifically, we process cervical cells through a modified DarkNet backbone network to extract deep features. To enhance the learning of deep features, we devise scale convolution blocks to substitute for the original convolutions, termed Basic convolution blocks. The manual feature branch comprises a range of traditional features and is linked to a multilayer perceptron. Additionally, we design three decision feature channels trained from both manual and deep features to enhance the model performance in cervical cell classification. We establish a 15-category cervical cytopathology image dataset (CCID) of 148,762 images and additionally conduct experiments on the SIPaKMeD dataset. Extensive experiments show that our proposed model outperforms state-of-the-art cervical cell classification models, and the outcomes illustrate that our approach can significantly aid pathologists in accurately evaluating cervical smears.
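A generic two-stream, decision-level fusion head can be sketched as follows: handcrafted descriptors and deep features are classified separately and their class scores averaged. The branch sizes and the averaging rule are assumptions rather than the paper's exact design.

```python
# Minimal sketch: separate heads for deep and handcrafted features, fused at the
# decision level by averaging their logits. Dimensions are illustrative.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, deep_dim=512, hand_dim=32, n_classes=15):
        super().__init__()
        self.deep_head = nn.Linear(deep_dim, n_classes)             # e.g., backbone feature vector
        self.hand_head = nn.Sequential(nn.Linear(hand_dim, 64),     # MLP over handcrafted descriptors
                                       nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, deep_feat, hand_feat):
        logits_deep = self.deep_head(deep_feat)
        logits_hand = self.hand_head(hand_feat)
        return (logits_deep + logits_hand) / 2                      # simple decision-level fusion

model = TwoStreamFusion()
print(model(torch.randn(4, 512), torch.randn(4, 32)).shape)         # torch.Size([4, 15])
```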
Affiliation(s)
- Tianjin Yang
- College of Computer and Software Engineering, Hohai University, Nanjing 211100, PR China
- Hexuan Hu
- College of Computer and Software Engineering, Hohai University, Nanjing 211100, PR China.
- Xing Li
- College of information Science and Technology & College of Artificial Intelligence, Nanjing Forestry University, Nanjing 210037, PR China
- Qing Meng
- College of Computer and Software Engineering, Hohai University, Nanjing 211100, PR China
- Qian Huang
- College of Computer and Software Engineering, Hohai University, Nanjing 211100, PR China
12. Qin J, He Y, Liang Y, Kang L, Zhao J, Ding B. Cell comparative learning: A cervical cytopathology whole slide image classification method using normal and abnormal cells. Comput Med Imaging Graph 2024;117:102427. PMID: 39216344; DOI: 10.1016/j.compmedimag.2024.102427.
Abstract
Automated cervical cancer screening through computer-assisted diagnosis has shown considerable potential to improve screening accessibility and reduce associated costs and errors. However, classification performance on whole slide images (WSIs) remains suboptimal due to patient-specific variations. To improve the precision of the screening, pathologists not only analyze the characteristics of suspected abnormal cells, but also compare them with normal cells. Motivated by this practice, we propose a novel cervical cell comparative learning method that leverages pathologist knowledge to learn the differences between normal and suspected abnormal cells within the same WSI. Our method employs two pre-trained YOLOX models to detect suspected abnormal and normal cells in a given WSI. A self-supervised model then extracts features for the detected cells. Subsequently, a tailored Transformer encoder fuses the cell features to obtain WSI instance embeddings. Finally, attention-based multi-instance learning is applied to achieve classification. The experimental results show an AUC of 0.9319 for our proposed method. Moreover, the method achieved professional pathologist-level performance, indicating its potential for clinical applications.
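The attention-based multi-instance aggregation step mentioned above can be sketched in the spirit of attention MIL pooling (Ilse et al.); dimensions are illustrative and this is not the paper's implementation.

```python
# Minimal sketch: weight per-cell embeddings with learned attention, sum them into
# a slide-level embedding, and classify that embedding.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, dim=256, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, cell_feats):                        # (n_cells, dim) for one WSI
        a = torch.softmax(self.attn(cell_feats), dim=0)   # (n_cells, 1) attention weights
        slide_embedding = (a * cell_feats).sum(dim=0)     # weighted sum over detected cells
        return self.classifier(slide_embedding), a

logits, weights = AttentionMIL()(torch.randn(300, 256))   # 300 detected cells in one slide
print(logits.shape, weights.shape)                        # torch.Size([2]) torch.Size([300, 1])
```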
Affiliation(s)
- Jian Qin
- School of Computer Science and Technology, Anhui University of Technology, Maanshan, China.
- Yongjun He
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China.
- Yiqin Liang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Lanlan Kang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Jing Zhao
- College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin, China
- Bo Ding
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
13. Fei M, Shen Z, Song Z, Wang X, Cao M, Yao L, Zhao X, Wang Q, Zhang L. Distillation of multi-class cervical lesion cell detection via synthesis-aided pre-training and patch-level feature alignment. Neural Netw 2024;178:106405. PMID: 38815471; DOI: 10.1016/j.neunet.2024.106405.
Abstract
Automated detection of cervical abnormal cells from Thin-prep cytologic test (TCT) images is crucial for efficient cervical abnormal screening using computer-aided diagnosis systems. However, the construction of the detection model is hindered by the preparation of the training images, which usually suffers from issues of class imbalance and incomplete annotations. Additionally, existing methods often overlook the visual feature correlations among cells, which are crucial in cervical lesion cell detection as pathologists commonly rely on surrounding cells for identification. In this paper, we propose a distillation framework that utilizes a patch-level pre-training network to guide the training of an image-level detection network, which can be applied to various detectors without changing their architectures during inference. The main contribution is three-fold: (1) We propose the Balanced Pre-training Model (BPM) as the patch-level cervical cell classification model, which employs an image synthesis model to construct a class-balanced patch dataset for pre-training. (2) We design the Score Correction Loss (SCL) to enable the detection network to distill knowledge from the BPM model, thereby mitigating the impact of incomplete annotations. (3) We design the Patch Correlation Consistency (PCC) strategy to exploit the correlation information of extracted cells, consistent with the behavior of cytopathologists. Experiments on public and private datasets demonstrate the superior performance of the proposed distillation method, as well as its adaptability to various detection architectures.
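As a generic stand-in for distilling a patch-level teacher into a detection network's classification head, the sketch below uses a temperature-scaled KL loss; the paper's Score Correction Loss is defined differently, so this only illustrates the distillation idea.

```python
# Minimal sketch: temperature-scaled KL divergence between the softened teacher and
# student class distributions, the standard knowledge-distillation objective.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

student = torch.randn(16, 6)     # 16 candidate cells, 6 lesion categories (toy values)
teacher = torch.randn(16, 6)
print(distillation_loss(student, teacher).item())
```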
Affiliation(s)
- Manman Fei
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Zhenrong Shen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Zhiyun Song
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Xin Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Maosong Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, 201210, China
- Linlin Yao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Xiangyu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, 201210, China
- Lichi Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China.
14. Vaickus LJ, Kerr DA, Velez Torres JM, Levy J. Artificial Intelligence Applications in Cytopathology: Current State of the Art. Surg Pathol Clin 2024;17:521-531. PMID: 39129146; DOI: 10.1016/j.path.2024.04.011.
Abstract
The practice of cytopathology has been significantly refined in recent years, largely through the creation of consensus rule sets for the diagnosis of particular specimens (Bethesda, Milan, Paris, and so forth). In general, these diagnostic systems have focused on reducing intraobserver variance, removing nebulous/redundant categories, reducing the use of "atypical" diagnoses, and promoting the use of quantitative scoring systems while providing a uniform language to communicate these results. Computational pathology is a natural offshoot of this process in that it promises 100% reproducible diagnoses rendered by quantitative processes that are free from many of the biases of human practitioners.
Affiliation(s)
- Louis J Vaickus
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA.
- Darcy A Kerr
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA. https://twitter.com/darcykerrMD
- Jaylou M Velez Torres
- Department of Pathology and Laboratory Medicine, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Joshua Levy
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Cedars-Sinai Medical Center, 8700 Beverly Boulevard, Los Angeles, CA 90048, USA
15. Lou W, Wan X, Li G, Lou X, Li C, Gao F, Li H. Structure Embedded Nucleus Classification for Histopathology Images. IEEE Trans Med Imaging 2024;43:3149-3160. PMID: 38607704; DOI: 10.1109/tmi.2024.3388328.
Abstract
Nuclei classification provides valuable information for histopathology image analysis. However, the large variations in the appearance of different nuclei types cause difficulties in identifying nuclei. Most neural network based methods are affected by the local receptive field of convolutions, and pay less attention to the spatial distribution of nuclei or the irregular contour shape of a nucleus. In this paper, we first propose a novel polygon-structure feature learning mechanism that transforms a nucleus contour into a sequence of points sampled in order, and employ a recurrent neural network that aggregates the sequential change in distance between key points to obtain learnable shape features. Next, we convert a histopathology image into a graph structure with nuclei as nodes, and build a graph neural network to embed the spatial distribution of nuclei into their representations. To capture the correlations between the categories of nuclei and their surrounding tissue patterns, we further introduce edge features that are defined as the background textures between adjacent nuclei. Lastly, we integrate both polygon and graph structure learning mechanisms into a whole framework that can extract intra and inter-nucleus structural characteristics for nuclei classification. Experimental results show that the proposed framework achieves significant improvements compared to the previous methods. Code and data are made available via https://github.com/lhaof/SENC.
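The polygon-structure mechanism can be sketched as sampling ordered contour points, converting them to a sequence of consecutive point-to-point distances, and summarizing the sequence with a recurrent network; the sketch below is a generic illustration, not the released SENC code.

```python
# Minimal sketch: ordered contour points -> distance sequence -> GRU shape feature.
# The toy contour and all sizes are assumptions.
import math
import torch
import torch.nn as nn

def contour_to_distance_sequence(contour, n_points=32):
    """contour: (N, 2) ordered boundary points -> (n_points,) consecutive distances."""
    idx = torch.linspace(0, contour.shape[0] - 1, n_points).long()
    pts = contour[idx]
    diffs = pts - torch.roll(pts, shifts=1, dims=0)   # wrap around the closed contour
    return diffs.norm(dim=1)

class ShapeEncoder(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)

    def forward(self, dist_seq):                       # (B, n_points)
        _, h = self.gru(dist_seq.unsqueeze(-1))        # distances fed as a 1-D sequence
        return h.squeeze(0)                            # (B, hidden) learnable shape feature

t = torch.linspace(0, 2 * math.pi, 120)
contour = torch.stack([1.3 * torch.cos(t), torch.sin(t)], dim=1)   # toy elliptical nucleus boundary
seq = contour_to_distance_sequence(contour).unsqueeze(0)
print(ShapeEncoder()(seq).shape)                       # torch.Size([1, 64])
```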
16. Kupas D, Hajdu A, Kovacs I, Hargitai Z, Szombathy Z, Harangi B. Annotated Pap cell images and smear slices for cell classification. Sci Data 2024;11:743. PMID: 38972893; PMCID: PMC11228026; DOI: 10.1038/s41597-024-03596-3.
Abstract
Machine learning-based systems have become instrumental in augmenting global efforts to combat cervical cancer. A burgeoning area of research focuses on leveraging artificial intelligence to enhance the cervical screening process, primarily through the exhaustive examination of Pap smears, traditionally reliant on the meticulous and labor-intensive analysis conducted by specialized experts. Despite the existence of some comprehensive and readily accessible datasets, the field is presently constrained by the limited volume of publicly available images and smears. As a remedy, our work unveils APACC (Annotated PAp cell images and smear slices for Cell Classification), a comprehensive dataset designed to bridge this gap. The APACC dataset features a remarkable array of images crucial for advancing research in this field. It comprises 103,675 annotated cell images, carefully extracted from 107 whole smears, which are further divided into 21,371 sub-regions for a more refined analysis. This dataset includes a vast number of cell images from conventional Pap smears and their specific locations on each smear, offering a valuable resource for in-depth investigation and study.
Affiliation(s)
- David Kupas
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
- Andras Hajdu
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
- Ilona Kovacs
- Department of Pathology, Kenezy Gyula University Hospital and Clinic, University of Debrecen, Debrecen, Hungary
- Zoltan Hargitai
- Department of Pathology, Kenezy Gyula University Hospital and Clinic, University of Debrecen, Debrecen, Hungary
- Zita Szombathy
- Department of Pathology, Kenezy Gyula University Hospital and Clinic, University of Debrecen, Debrecen, Hungary
- Balazs Harangi
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
17. Harangi B, Bogacsovics G, Toth J, Kovacs I, Dani E, Hajdu A. Pixel-wise segmentation of cells in digitized Pap smear images. Sci Data 2024;11:733. PMID: 38971865; PMCID: PMC11227563; DOI: 10.1038/s41597-024-03566-9.
Abstract
A simple and cheap way to recognize cervical cancer is using light microscopic analysis of Pap smear images. Training artificial intelligence-based systems becomes possible in this domain, e.g., to follow the European recommendation to screen negative smears to reduce false negative cases. The first step for such a process is segmenting the cells. A large and manually segmented dataset is required for this task, which can be used to train deep learning-based solutions. We describe a corresponding dataset with accurate manual segmentations for the enclosed cells. Altogether, the APACS23 (Annotated PAp smear images for Cell Segmentation 2023) dataset contains about 37 000 manually segmented cells and is separated into dedicated training and test parts, which could be used for an official benchmark of scientific investigations or a grand challenge.
Affiliation(s)
- Balazs Harangi
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
- Gergo Bogacsovics
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
- Janos Toth
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
- Ilona Kovacs
- Department of Pathology, Kenezy Gyula Hospital and Clinic, University of Debrecen, Debrecen, Hungary
- Erzsebet Dani
- Department of Library and Information Science, Faculty of Humanities, University of Debrecen, Debrecen, Hungary
- Andras Hajdu
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
18. Sun X, Zhang S, Ma S. Prediction Consistency Regularization for Learning with Noise Labels Based on Contrastive Clustering. Entropy (Basel) 2024;26:308. PMID: 38667864; PMCID: PMC11049179; DOI: 10.3390/e26040308.
Abstract
In the classification task, label noise has a significant impact on models' performance, primarily manifested in the disruption of prediction consistency, thereby reducing the classification accuracy. This work introduces a novel prediction consistency regularization that mitigates the impact of label noise on neural networks by imposing constraints on the prediction consistency of similar samples. However, determining which samples should be similar is a primary challenge. We formalize similar sample identification as a clustering problem and employ twin contrastive clustering (TCC) to address this issue. To ensure similarity between samples within each cluster, we enhance TCC by adjusting the clustering prior distribution using label information. Based on the adjusted TCC's clustering results, we first construct the prototype for each cluster and then formulate a prototype-based regularization term to enhance prediction consistency around the prototype within each cluster and counteract the adverse effects of label noise. We conducted comprehensive experiments using benchmark datasets to evaluate the effectiveness of our method under various scenarios with different noise rates. The results explicitly demonstrate the enhancement in classification accuracy. Subsequent analytical experiments confirm that the proposed regularization term effectively mitigates noise and that the adjusted TCC enhances the quality of similar sample recognition.
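A prototype-based consistency term of the kind described above can be sketched as follows: each cluster prototype is the mean feature of its members, and member predictions are pulled toward the prediction made at the prototype. Cluster assignments are assumed to come from the adjusted TCC step, and every detail below is illustrative rather than the paper's formulation.

```python
# Minimal sketch: per-cluster prototypes from mean features, with a KL term that
# aligns member predictions with the prototype's prediction.
import torch
import torch.nn.functional as F

def prototype_consistency(features, logits, cluster_ids, classifier):
    loss = features.new_zeros(())
    for c in cluster_ids.unique():
        members = cluster_ids == c
        prototype = features[members].mean(dim=0, keepdim=True)     # cluster prototype
        proto_prob = F.softmax(classifier(prototype), dim=1)        # prediction at the prototype
        member_logp = F.log_softmax(logits[members], dim=1)
        loss = loss + F.kl_div(member_logp, proto_prob.expand_as(member_logp),
                               reduction="batchmean")
    return loss / cluster_ids.unique().numel()

classifier = torch.nn.Linear(128, 10)
feats, ids = torch.randn(64, 128), torch.randint(0, 4, (64,))       # toy features and cluster labels
print(prototype_consistency(feats, classifier(feats), ids, classifier).item())
```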
Affiliation(s)
- Xinkai Sun
- School of Mathematics Sciences, University of Chinese Academy of Sciences, Beijing 100049, China; (X.S.); (S.Z.)
- Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100049, China
- Sanguo Zhang
- School of Mathematics Sciences, University of Chinese Academy of Sciences, Beijing 100049, China; (X.S.); (S.Z.)
- Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100049, China
- Shuangge Ma
- Department of Biostatistics, Yale School of Public Health, New Haven, CT 06510, USA
19. 邺 琳, 于 凡, 胡 正, 王 霞, 唐 袁. [Preliminary Study on the Identification of Aerobic Vaginitis by Artificial Intelligence Analysis System]. Sichuan Da Xue Xue Bao Yi Xue Ban (Journal of Sichuan University, Medical Science Edition) 2024;55:461-468. PMID: 38645857; PMCID: PMC11026878; DOI: 10.12182/20240360504.
Abstract
Objective To develop an artificial intelligence vaginal secretion analysis system based on deep learning and to evaluate the accuracy of automated microscopy in the clinical diagnosis of aerobic vaginitis (AV). Methods In this study, the vaginal secretion samples of 3769 patients receiving treatment at the Department of Obstetrics and Gynecology, West China Second Hospital, Sichuan University between January 2020 and December 2021 were selected. Using the results of manual microscopy as the control, we developed an artificial intelligence (AI) automated analysis software based on a linear-kernel SVM algorithm implemented with Python scikit-learn. The AI automated analysis software could identify leucocytes with toxic appearance and parabasal epitheliocytes (PBC). The bacterial grading parameters were reset using standard strains of Lactobacillus and common AV isolates. Receiver operating characteristic (ROC) curve analysis, with the results of manual microscopy as the control, was used to determine the cut-off values of the different scoring items. The parameters of automatic AV identification were then determined and the automatic AV analysis scoring method was initially established. Results A total of 3769 vaginal secretion samples were collected. The AI automated analysis system incorporated five parameters, each with three severity scoring levels. We selected 1.5 μm as the cut-off diameter for distinguishing Lactobacillus from common AV bacterial isolates. The automated identification parameter for Lactobacillus was the ratio of bacteria ≥1.5 μm to those <1.5 μm, with cut-off scores of 2.5 and 0.5. For the white blood cell (WBC) parameter, the cut-off value of the absolute WBC count was 10³ μL⁻¹ and the cut-off value of the WBC-to-epithelial cell ratio was 10. The automated identification parameter for toxic WBC was the ratio of toxic WBC to WBC, with cut-off values of 1% and 15%. The parameter for background flora was the count of bacteria <1.5 μm, with cut-off values of 5×10³ μL⁻¹ and 3×10⁴ μL⁻¹. The parameter for parabasal epitheliocytes was the ratio of PBC to epithelial cells, with cut-off values of 1% and 10%. The agreement rate between the results of automated microscopy and those of manual microscopy was 92.5%: out of 200 samples, automated and manual microscopy produced consistent scores for 185 samples, while the results for 15 samples were inconsistent. Conclusion We developed AI recognition software for AV and established an automated vaginal secretion microscopy scoring system for AV. There was good overall concordance between automated microscopy and manual microscopy. The AI identification software for AV can complete clinical laboratory examination with high objectivity, sensitivity, and efficiency, markedly reducing the workload of manual microscopy.
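Two generic ingredients of the pipeline described above, a linear-kernel SVM and ROC-based cut-off selection, can be sketched with scikit-learn as follows; the synthetic data and the Youden-index rule are assumptions, not the study's data or exact procedure.

```python
# Minimal sketch: train a linear-kernel SVM, then pick an operating cut-off from
# the ROC curve with Youden's J statistic. Data below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                       # e.g., per-sample microscopy-derived parameters
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

clf = SVC(kernel="linear", probability=True).fit(X[:200], y[:200])
scores = clf.predict_proba(X[200:])[:, 1]

fpr, tpr, thresholds = roc_curve(y[200:], scores)
best = np.argmax(tpr - fpr)                         # Youden's J to choose the cut-off
print(f"chosen cut-off: {thresholds[best]:.3f}  (TPR={tpr[best]:.2f}, FPR={fpr[best]:.2f})")
```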
Affiliation(s)
- 琳玲 邺
- Department of Laboratory Medicine, West China Second University Hospital, Sichuan University, Chengdu 610041, China
- Key Laboratory of Birth Defects and Related Diseases of Women and Children, Ministry of Education, Sichuan University, Chengdu 610041, China
- 凡 于
- Department of Laboratory Medicine, West China Second University Hospital, Sichuan University, Chengdu 610041, China
- Key Laboratory of Birth Defects and Related Diseases of Women and Children, Ministry of Education, Sichuan University, Chengdu 610041, China
- 正强 胡
- Department of Laboratory Medicine, West China Second University Hospital, Sichuan University, Chengdu 610041, China
- Key Laboratory of Birth Defects and Related Diseases of Women and Children, Ministry of Education, Sichuan University, Chengdu 610041, China
- 霞 王
- Department of Laboratory Medicine, West China Second University Hospital, Sichuan University, Chengdu 610041, China
- Key Laboratory of Birth Defects and Related Diseases of Women and Children, Ministry of Education, Sichuan University, Chengdu 610041, China
- 袁婷 唐
- Department of Laboratory Medicine, West China Second University Hospital, Sichuan University, Chengdu 610041, China
- Key Laboratory of Birth Defects and Related Diseases of Women and Children, Ministry of Education, Sichuan University, Chengdu 610041, China
20. Yu Z, Li X, Li J, Chen W, Tang Z, Geng D. HSA-net with a novel CAD pipeline boosts both clinical brain tumor MR image classification and segmentation. Comput Biol Med 2024;170:108039. PMID: 38308874; DOI: 10.1016/j.compbiomed.2024.108039.
Abstract
Brain tumors are among the most prevalent neoplasms in current medical studies. Accurately distinguishing and classifying brain tumor types is crucial for patient treatment and survival in clinical practice. However, existing computer-aided diagnostic pipelines are inadequate for practical medical use due to tumor complexity. In this study, we curated a multi-centre brain tumor dataset that includes various clinical brain tumor data types, including segmentation and classification annotations, surpassing previous efforts. To enhance brain tumor segmentation accuracy, we propose a new segmentation method: HSA-Net. This method utilizes the Shared Weight Dilated Convolution module (SWDC) and Hybrid Dense Dilated Convolution module (HDense) to capture multi-scale information while minimizing parameter count. The Effective Multi-Dimensional Attention (EMA) and Important Feature Attention (IFA) modules effectively aggregate task-related information. We introduce a novel clinical brain tumor computer-aided diagnosis (CAD) pipeline that combines HSA-Net with pipeline modification. This approach not only improves segmentation accuracy but also utilizes the segmentation mask as an additional channel feature to enhance brain tumor classification results. Our experimental evaluation on 3327 real clinical cases demonstrates the effectiveness of the proposed method, achieving an average Dice coefficient of 86.85% for segmentation and a classification accuracy of 95.35%. We also validated the effectiveness of our proposed method using the publicly available BraTS dataset.
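The shared-weights-across-dilations idea can be sketched as applying one 3x3 kernel at several dilation rates and averaging the responses, which enlarges the receptive field without adding parameters; the rates and the averaging rule below are assumptions, not the HSA-Net implementation.

```python
# Minimal sketch: a single learnable 3x3 kernel reused at multiple dilation rates,
# with the per-rate outputs averaged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedWeightDilatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.dilations = dilations

    def forward(self, x):
        # padding=d keeps the spatial size for a 3x3 kernel at dilation d
        outs = [F.conv2d(x, self.weight, padding=d, dilation=d) for d in self.dilations]
        return torch.stack(outs, dim=0).mean(dim=0)       # one parameter set, multi-scale context

x = torch.randn(1, 32, 64, 64)
print(SharedWeightDilatedConv(32, 32)(x).shape)           # torch.Size([1, 32, 64, 64])
```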
Collapse
Affiliation(s)
- Zekuan Yu
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China.
| | - Xiang Li
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China; School of Safety Science and Engineering, Anhui University of Science and Technology, Huainan, 232000, China
| | - Jiaxin Li
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, 730000, China
| | - Weiqiang Chen
- Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, 730000, China
| | - Zhiri Tang
- School of Intelligent Systems Science and Engineering, Jinan University, Zhuhai, China
| | - Daoying Geng
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China; Huashan Hospital, Fudan University, Shanghai, 200040, China.
| |
Collapse
|
21
|
Kim D, Sundling KE, Virk R, Thrall MJ, Alperstein S, Bui MM, Chen-Yost H, Donnelly AD, Lin O, Liu X, Madrigal E, Michelow P, Schmitt FC, Vielh PR, Zakowski MF, Parwani AV, Jenkins E, Siddiqui MT, Pantanowitz L, Li Z. Digital cytology part 2: artificial intelligence in cytology: a concept paper with review and recommendations from the American Society of Cytopathology Digital Cytology Task Force. J Am Soc Cytopathol 2024; 13:97-110. [PMID: 38158317 DOI: 10.1016/j.jasc.2023.11.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Revised: 11/28/2023] [Accepted: 11/29/2023] [Indexed: 01/03/2024]
Abstract
Digital cytology and artificial intelligence (AI) are gaining greater adoption in the cytology laboratory. However, peer-reviewed real-world data and literature are lacking in regard to the current clinical landscape. The American Society of Cytopathology, in conjunction with the International Academy of Cytology and the Digital Pathology Association, established a special task force comprising 20 members with expertise and/or interest in digital cytology. The aim of the group was to investigate the feasibility of incorporating digital cytology, specifically cytology whole slide scanning and AI applications, into the workflow of the laboratory. In turn, the impact on cytopathologists, cytologists (cytotechnologists), and cytology departments was also assessed. The task force reviewed existing literature on digital cytology, conducted a worldwide survey, and held a virtual roundtable discussion on digital cytology and AI with multiple industry corporate representatives. This white paper, presented in 2 parts, summarizes the current state of digital cytology and AI in global cytology practice. Part 1 of the white paper is presented as a separate paper, which details a review and best practice recommendations for incorporating digital cytology into practice. Part 2 of the white paper, presented here, provides a comprehensive review of AI in cytology practice along with best practice recommendations and legal considerations. Additionally, the cytology global survey results highlighting current AI practices by various laboratories, as well as current attitudes, are reported.
Collapse
Affiliation(s)
- David Kim
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
| | - Kaitlin E Sundling
- The Wisconsin State Laboratory of Hygiene and Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison, Madison, Wisconsin
| | - Renu Virk
- Department of Pathology and Cell Biology, Columbia University, New York, New York
| | - Michael J Thrall
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, Texas
| | - Susan Alperstein
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
| | - Marilyn M Bui
- The Department of Pathology, Moffitt Cancer Center & Research Institute, Tampa, Florida
| | | | - Amber D Donnelly
- Diagnostic Cytology Education, University of Nebraska Medical Center, College of Allied Health Professions, Omaha, Nebraska
| | - Oscar Lin
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
| | - Xiaoying Liu
- Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
| | - Emilio Madrigal
- Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts
| | - Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; Department of Pathology, National Health Laboratory Services, Johannesburg, South Africa
| | - Fernando C Schmitt
- Department of Pathology, Medical Faculty of Porto University, Porto, Portugal
| | - Philippe R Vielh
- Department of Pathology, Medipath and American Hospital of Paris, Paris, France
| | | | - Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
| | | | - Momin T Siddiqui
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
| | - Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.
| | - Zaibo Li
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio.
| |
Collapse
|
22
|
Fang M, Fu M, Liao B, Lei X, Wu FX. Deep integrated fusion of local and global features for cervical cell classification. Comput Biol Med 2024; 171:108153. [PMID: 38364660 DOI: 10.1016/j.compbiomed.2024.108153] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 02/08/2024] [Accepted: 02/12/2024] [Indexed: 02/18/2024]
Abstract
Cervical cytology image classification is of great significance to cervical cancer diagnosis and prognosis. Recently, convolutional neural networks (CNNs) and visual transformers have been adopted as two branches to learn features for image classification by simply adding local and global features. However, such a simple addition may not be effective for integrating these features. In this study, we explore the synergy of local and global features of cytology images for classification tasks. Specifically, we design a Deep Integrated Feature Fusion (DIFF) block to synergize local and global features of cytology images from a CNN branch and a transformer branch. Our proposed method is evaluated on three cervical cell image datasets (SIPaKMeD, CRIC, Herlev) and another large blood cell dataset, BCCD, for several multi-class and binary classification tasks. Experimental results demonstrate the effectiveness of the proposed method in cervical cell classification, which could assist medical specialists in better diagnosing cervical cancer.
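A generic way to go beyond simple addition when merging a CNN (local) branch with a self-attention (global) branch is a learned fusion over concatenated features. The sketch below illustrates that idea only; the module names, dimensions, and patch embedding are assumptions, and it is not the authors' DIFF block.

```python
# Illustrative fusion of local (CNN) and global (self-attention) feature vectors.
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # simple patch embedding
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.cnn(x).flatten(1)                       # (B, dim) local descriptor
        tokens = self.proj(x).flatten(2).transpose(1, 2)     # (B, N, dim) patch tokens
        global_feat = self.attn(tokens, tokens, tokens)[0].mean(dim=1)  # (B, dim)
        # learned fusion over the concatenation instead of element-wise addition
        return self.fuse(torch.cat([local, global_feat], dim=1))

logits = LocalGlobalFusion()(torch.randn(2, 3, 224, 224))
```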
Collapse
Affiliation(s)
- Ming Fang
- Division of Biomedical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada
| | - Minghan Fu
- Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada
| | - Bo Liao
- School of Mathematics and Statistics, Hainan Normal University, 99 Longkun South Road, Haikou, 571158, Hainan, China
| | - Xiujuan Lei
- School of Computer Science, Shaanxi Normal University, 620 West Chang'an Avenue, Xi'an, 710119, Shaanxi, China.
| | - Fang-Xiang Wu
- Division of Biomedical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada; Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada; Department of Computer Science, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada.
| |
Collapse
|
23
|
Chen P, Liu F, Zhang J, Wang B. MFEM-CIN: A Lightweight Architecture Combining CNN and Transformer for the Classification of Pre-Cancerous Lesions of the Cervix. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2024; 5:216-225. [PMID: 38606400 PMCID: PMC11008799 DOI: 10.1109/ojemb.2024.3367243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Revised: 12/03/2023] [Accepted: 02/05/2024] [Indexed: 04/13/2024] Open
Abstract
Goal: Cervical cancer is one of the most common cancers in women worldwide, ranking among the top four. Unfortunately, it is also the fourth leading cause of cancer-related deaths among women, particularly in developing countries, where incidence and mortality rates are higher compared to developed nations. Colposcopy can aid in the early detection of cervical lesions, but its effectiveness is limited in areas with limited medical resources and a lack of specialized physicians. Consequently, many cases are diagnosed at later stages, putting patients at significant risk. Methods: This paper proposes an automated colposcopic image analysis framework to address these challenges. The framework aims to reduce the labor costs associated with cervical precancer screening in underserved regions and assist doctors in diagnosing patients. The core of the framework is the MFEM-CIN hybrid model, which combines Convolutional Neural Networks (CNN) and Transformer to aggregate the correlation between local and global features. This combined analysis of local and global information is scientifically useful in clinical diagnosis. In the model, MSFE and MSFF are utilized to extract and fuse multi-scale semantics. This preserves important shallow feature information and allows it to interact with the deep features, enriching the semantics to some extent. Conclusions: The experimental results demonstrate an accuracy rate of 89.2% in identifying cervical intraepithelial neoplasia while maintaining a lightweight model. This performance exceeds the average accuracy achieved by professional physicians, indicating promising potential for practical application. Utilizing automated colposcopic image analysis and the MFEM-CIN model, this research offers a practical solution to reduce the burden on healthcare providers and improve the efficiency and accuracy of cervical cancer diagnosis in resource-constrained areas.
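The multi-scale extract-and-fuse idea (shallow features interacting with deep features) can be illustrated with an FPN-style sketch: upsample the deep map and merge it with the shallow map. This is a hypothetical stand-in, not the MSFE/MSFF modules of MFEM-CIN.

```python
# Illustrative multi-scale feature fusion: upsample deep features and merge with shallow ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1)       # 1/2 resolution
        self.deep = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)   # 1/4 resolution
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = F.relu(self.shallow(x))            # shallow, high-resolution features
        d = F.relu(self.deep(s))               # deep, low-resolution features
        d_up = F.interpolate(d, size=s.shape[-2:], mode="bilinear", align_corners=False)
        return self.smooth(s + d_up)           # fused multi-scale feature map

fused = MultiScaleFusion()(torch.randn(1, 3, 224, 224))
```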
Collapse
Affiliation(s)
- Peng Chen
- National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Fin China-Anhui University Joint Laboratory for Financial Big Data Research, Hefei Financial China Information and Technology Company, Ltd., Hefei 230022, China
| | - Fobao Liu
- National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
| | - Jun Zhang
- National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
| | - Bing Wang
- School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu 233030, China
| |
Collapse
|
24
|
Ma Y, Zhang X, Yi Z, Ding L, Cai B, Jiang Z, Liu W, Zou H, Wang X, Fu G. A study of machine learning models for rapid intraoperative diagnosis of thyroid nodules for clinical practice in China. Cancer Med 2024; 13:e6854. [PMID: 38189547 PMCID: PMC10904961 DOI: 10.1002/cam4.6854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 11/06/2023] [Accepted: 12/10/2023] [Indexed: 01/09/2024] Open
Abstract
BACKGROUND In China, rapid intraoperative diagnosis of frozen sections of thyroid nodules is used to guide surgery. However, the lack of subspecialty pathologists and delayed diagnoses are challenges in clinical treatment. This study aimed to develop novel diagnostic approaches to increase diagnostic effectiveness. METHODS Artificial intelligence and machine learning techniques were used to automatically diagnose histopathological slides. AI-based models were trained on annotated slides, and EfficientNetV2-b0 was selected from multi-set experiments. RESULTS On 191 test slides, the proposed method predicted benign and malignant categories with a sensitivity of 72.65%, specificity of 100.0%, and AUC of 86.32%. For the subtype diagnosis, the best AUC was 99.46% for medullary thyroid cancer, with an average of 237.6 s per slide. CONCLUSIONS Within our testing dataset, the proposed method accurately diagnosed thyroid nodules during surgery.
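For reference, the reported sensitivity, specificity, and AUC for a benign/malignant split can be computed from predictions with scikit-learn; the label and score arrays below are placeholders.

```python
# Sketch: sensitivity, specificity, and AUC for a binary (benign/malignant) task.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                     # 1 = malignant (placeholder labels)
y_score = np.array([0.1, 0.4, 0.8, 0.3, 0.9, 0.2, 0.7, 0.6])    # model probabilities (placeholder)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # recall on the malignant class
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.2%} specificity={specificity:.2%} AUC={auc:.2%}")
```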
Collapse
Affiliation(s)
- Yan Ma
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Xiuming Zhang
- Department of Pathology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Zhongliang Yi
- Department of Pathology, Hang Zhou Dian Medical Laboratory, Hangzhou, Zhejiang, P. R. China
| | - Liya Ding
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Bojun Cai
- Hangzhou PathoAI Technology Co., Ltd, Hangzhou, Zhejiang, China
| | - Zhinong Jiang
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Wangwang Liu
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Hong Zou
- Department of Pathology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Xiaomei Wang
- Hangzhou PathoAI Technology Co., Ltd, Hangzhou, Zhejiang, China
| | - Guoxiang Fu
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| |
Collapse
|
25
|
Akash RS, Islam R, Badhon SMSI, Hossain KSMT. CerviXpert: A multi-structural convolutional neural network for predicting cervix type and cervical cell abnormalities. Digit Health 2024; 10:20552076241295440. [PMID: 39529914 PMCID: PMC11552049 DOI: 10.1177/20552076241295440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2024] [Accepted: 10/09/2024] [Indexed: 11/16/2024] Open
Abstract
Objectives Cervical cancer, a leading cause of cancer-related deaths among women globally, has a significantly higher survival rate when diagnosed early. Traditional diagnostic methods like Pap smears and cervical biopsies rely heavily on the skills of cytologists, making the process prone to errors. This study aims to develop CerviXpert, a multi-structural convolutional neural network designed to classify cervix types and detect cervical cell abnormalities efficiently. Methods We introduced CerviXpert, a computationally efficient convolutional neural network model that classifies cervical cancer using images from the publicly available SiPaKMeD dataset. Our approach emphasizes simplicity, using a limited number of convolutional layers followed by max-pooling and dense layers, trained from scratch. We compared CerviXpert's performance against other state-of-the-art convolutional neural network models, including ResNet50, VGG16, MobileNetV2, and InceptionV3, evaluating them on accuracy, computational efficiency, and robustness using five-fold cross-validation. Results CerviXpert achieved an accuracy of 98.04% in classifying cervical cell abnormalities into three classes (normal, abnormal, and benign) and 98.60% for five-class cervix type classification, outperforming MobileNetV2 and InceptionV3 in both accuracy and computational demands. It demonstrated comparable results to ResNet50 and VGG16, with significantly reduced computational complexity and resource usage. Conclusion CerviXpert offers a promising solution for efficient cervical cancer screening and diagnosis, striking a balance between accuracy and computational feasibility. Its streamlined architecture makes it suitable for deployment in resource-constrained environments, potentially improving early detection and management of cervical cancer.
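A from-scratch CNN of the kind described (a few convolution and max-pooling blocks followed by dense layers) evaluated with stratified five-fold cross-validation might look like the sketch below; the layer sizes, input resolution, and labels are assumptions, not CerviXpert's exact configuration.

```python
# Sketch: a small from-scratch CNN (conv -> max-pool -> dense) with stratified 5-fold splits.
import numpy as np
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

def small_cnn(num_classes: int = 5) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 32 * 32, 128), nn.ReLU(),   # assumes 128x128 RGB inputs
        nn.Linear(128, num_classes),
    )

labels = np.repeat(np.arange(5), 20)               # placeholder labels for 100 images
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros((100, 1)), labels)):
    model = small_cnn()                            # re-initialize the network for each fold
    # ... train on images[train_idx], report accuracy on images[val_idx] ...
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation samples")
```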
Collapse
Affiliation(s)
- Rashik Shahriar Akash
- Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
| | - Radiful Islam
- Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
| | | | | |
Collapse
|
26
|
Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023; 1878:189026. [PMID: 37980945 DOI: 10.1016/j.bbcan.2023.189026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2023] [Revised: 11/09/2023] [Accepted: 11/14/2023] [Indexed: 11/21/2023]
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a serious threat to women's health worldwide, with early identification being crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to the study of gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to a particular form of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, it has been shown that AI and ML technologies can greatly increase the accuracy and efficacy of gynecologic cancer diagnosis, reduce diagnostic delays, and possibly eliminate the need for needless invasive procedures. In conclusion, the review focuses on the integration of AI- and ML-based tools and techniques in the early detection and exclusion of various cancer types, and suggests collaborative coordination among research clinicians, data scientists, and regulatory authorities to realize the full potential of AI and ML in gynecologic cancer care.
Collapse
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India
| | - Atish Mohanty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - Sravani Ramisetty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - Prakash Kulkarni
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - David Horne
- Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - Evan Pisick
- Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA
| | - Ravi Salgia
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - Sharad S Singhal
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA.
| |
Collapse
|
27
|
Khan A, Han S, Ilyas N, Lee YM, Lee B. CervixFormer: A Multi-scale swin transformer-Based cervical pap-Smear WSI classification framework. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107718. [PMID: 37451230 DOI: 10.1016/j.cmpb.2023.107718] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 06/05/2023] [Accepted: 07/08/2023] [Indexed: 07/18/2023]
Abstract
BACKGROUND AND OBJECTIVES Cervical cancer affects around 0.5 million women per year, resulting in over 0.3 million fatalities. Therefore, repetitive screening for cervical cancer is of utmost importance. Computer-assisted diagnosis is key for scaling up cervical cancer screening. Current recognition algorithms, however, perform poorly on whole-slide image (WSI) analysis, fail to generalize across different staining methods and uneven distributions of subtype imaging, and provide sub-optimal clinical-level interpretations. Herein, we developed CervixFormer, an end-to-end, multi-scale swin transformer-based adversarial ensemble learning framework to assess pre-cancerous and cancer-specific cervical malignant lesions on WSIs. METHODS The proposed framework consists of (1) a self-attention generative adversarial network (SAGAN) for generating synthetic images during patch-level training to address the class imbalance problem; (2) a multi-scale transformer-based ensemble learning method for cell identification at various stages, including atypical squamous cells (ASC) and atypical squamous cells of undetermined significance (ASCUS), which have not been demonstrated in previous studies; and (3) a fusion model for concatenating ensemble-based results and producing final outcomes. RESULTS In the evaluation, the proposed method is first evaluated on a private dataset of 717 annotated samples from six classes, obtaining a high recall and precision of 0.940 and 0.934, respectively, in roughly 1.2 minutes. To further examine the generalizability of CervixFormer, we evaluated it on four independent, publicly available datasets, namely, the CRIC cervix, Mendeley LBC, SIPaKMeD Pap Smear, and Cervix93 Extended Depth of Field image datasets. CervixFormer obtained better performance on two-, three-, four-, and six-class classification of smear- and cell-level datasets. For clinical interpretation, we used GradCAM to visualize a coarse localization map, highlighting important regions in the WSI. Notably, CervixFormer extracts features mostly from the cell nucleus and partially from the cytoplasm. CONCLUSIONS CervixFormer outperforms existing state-of-the-art benchmark methods in terms of recall, accuracy, and computing time.
Collapse
Affiliation(s)
- Anwar Khan
- Center for Cancer Biology, Vlaams Instituut voor Biotechnologie (VIB), Belgium; Department of Oncology, Katholieke Universiteit (KU) Leuven, Belgium; Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea.
| | - Seunghyeon Han
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea.
| | - Naveed Ilyas
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea; Department of Physics, Khalifa University of Science and Technology, Abu Dhabi, UAE.
| | - Yong-Moon Lee
- Department of Pathology, College of Medicine, Dankook University, South Korea.
| | - Boreom Lee
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea.
| |
Collapse
|
28
|
Kaur M, Singh D, Kumar V, Lee HN. MLNet: Metaheuristics-Based Lightweight Deep Learning Network for Cervical Cancer Diagnosis. IEEE J Biomed Health Inform 2023; 27:5004-5014. [PMID: 36399582 DOI: 10.1109/jbhi.2022.3223127] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/06/2023]
Abstract
One of the leading causes of cancer-related deaths among women is cervical cancer. Early diagnosis and treatment can minimize the complications of this cancer. Recently, researchers have designed and implemented many deep learning-based automated cervical cancer diagnosis models. However, the majority of these models suffer from over-fitting, parameter tuning, and gradient vanishing problems. To overcome these problems, in this paper a metaheuristics-based lightweight deep learning network (MLNet) is proposed. Initially, the hyper-parameter tuning problem of the convolutional neural network (CNN) is defined as a multi-objective problem. Particle swarm optimization (PSO) is used to optimally define the CNN architecture. Thereafter, dynamically hybrid niching differential evolution (DHDE) is utilized to optimize the hyper-parameters of the CNN layers. Each PSO particle and DHDE solution together represent a possible CNN configuration. F-score is used as the fitness function. The proposed MLNet is trained and validated on three benchmark cervical cancer datasets. On the Herlev dataset, MLNet outperforms the existing models in terms of accuracy, f-measure, sensitivity, specificity, and precision by 1.6254%, 1.5178%, 1.5780%, 1.7145%, and 1.4890%, respectively. Also, on the SIPaKMeD dataset, MLNet achieves better performance than the existing models in terms of accuracy, f-measure, sensitivity, specificity, and precision by 2.1250%, 2.2455%, 1.9074%, 1.9258%, and 1.8975%, respectively. Finally, on the Mendeley LBC dataset, MLNet achieves better performance than the competitive models in terms of accuracy, f-measure, sensitivity, specificity, and precision by 1.4680%, 1.5845%, 1.3582%, 1.3926%, and 1.4125%, respectively.
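Treating hyper-parameter tuning as an optimization problem with F-score as the fitness can be illustrated with a bare-bones particle swarm loop; the two-parameter search space and the stand-in fitness function below are placeholders, not MLNet's PSO/DHDE scheme.

```python
# Bare-bones PSO over two hyper-parameters (learning rate, number of filters).
# fitness() is a placeholder; in practice it would train a CNN and return its F-score.
import numpy as np

rng = np.random.default_rng(0)
low, high = np.array([1e-4, 16]), np.array([1e-1, 128])    # search bounds

def fitness(params: np.ndarray) -> float:
    lr, n_filters = params
    return -((np.log10(lr) + 2.5) ** 2) - ((n_filters - 64) / 64) ** 2  # toy stand-in for F-score

n_particles, n_iters = 10, 30
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best hyper-parameters (lr, filters):", gbest)
```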
Collapse
|
29
|
Lee YM, Lee B, Cho NH, Park JH. Beyond the Microscope: A Technological Overture for Cervical Cancer Detection. Diagnostics (Basel) 2023; 13:3079. [PMID: 37835821 PMCID: PMC10572593 DOI: 10.3390/diagnostics13193079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 09/25/2023] [Accepted: 09/27/2023] [Indexed: 10/15/2023] Open
Abstract
Cervical cancer is a common and preventable disease that poses a significant threat to women's health and well-being. It is the fourth most prevalent cancer among women worldwide, with approximately 604,000 new cases and 342,000 deaths in 2020, according to the World Health Organization. Early detection and diagnosis of cervical cancer are crucial for reducing mortality and morbidity rates. The Papanicolaou smear test is a widely used screening method that involves the examination of cervical cells under a microscope to identify any abnormalities. However, this method is time-consuming, labor-intensive, subjective, and prone to human errors. Artificial intelligence techniques have emerged as a promising alternative to improve the accuracy and efficiency of Papanicolaou smear diagnosis. Artificial intelligence techniques can automatically analyze Papanicolaou smear images and classify them into normal or abnormal categories, as well as detect the severity and type of lesions. This paper provides a comprehensive review of the recent advances in artificial intelligence diagnostics of the Papanicolaou smear, focusing on the methods, datasets, performance metrics, and challenges. The paper also discusses the potential applications and future directions of artificial intelligence diagnostics of the Papanicolaou smear.
Collapse
Affiliation(s)
- Yong-Moon Lee
- Department of Pathology, College of Medicine, Dankook University, Cheonan 31116, Republic of Korea;
| | - Boreom Lee
- Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Republic of Korea;
| | - Nam-Hoon Cho
- Department of Pathology, Severance Hospital, College of Medicine, Yonsei University, Seoul 03722, Republic of Korea;
| | - Jae Hyun Park
- Department of Surgery, Wonju Severance Christian Hospital, Wonju College of Medicine, Yonsei University, Wonju 26492, Republic of Korea
| |
Collapse
|
30
|
Alsalatie M, Alquran H, Mustafa WA, Zyout A, Alqudah AM, Kaifi R, Qudsieh S. A New Weighted Deep Learning Feature Using Particle Swarm and Ant Lion Optimization for Cervical Cancer Diagnosis on Pap Smear Images. Diagnostics (Basel) 2023; 13:2762. [PMID: 37685299 PMCID: PMC10487265 DOI: 10.3390/diagnostics13172762] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 08/17/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
One of the most widespread health issues affecting women is cervical cancer. Early detection of cervical cancer through improved screening strategies will reduce cervical cancer-related morbidity and mortality rates worldwide. Analyzing Pap smear images is a key method for detecting cervical cancer. Previous studies have focused on whole Pap smear images or extracted nuclei to detect cervical cancer. In this paper, we compared three scenarios (the entire cell, the cytoplasm region only, or the nucleus region only) for classification into seven classes of cervical cancer. After applying image augmentation to solve imbalanced data problems, automated features are extracted using three pre-trained convolutional neural networks: AlexNet, DarkNet 19, and NasNet. These scenario combinations yield twenty-one features. Principal component analysis reduces the dimensionality by selecting the ten most important features. This study employs feature weighting to create an efficient computer-aided cervical cancer diagnosis system. The optimization procedure uses two evolutionary algorithms, ant lion optimization (ALO) and particle swarm optimization (PSO). Finally, two types of machine learning algorithms, a support vector machine (SVM) classifier and a random forest (RF) classifier, are used to perform classification. With a 99.5% accuracy rate for seven classes using the PSO algorithm, the SVM classifier outperformed the RF, which had a 98.9% accuracy rate in the same region. Our outcome is superior to other studies that used seven classes because of this focus on the tissues rather than just the nucleus. This method will aid physicians in diagnosing precancerous and early-stage cervical cancer by depending on the tissues rather than on the nucleus. The results can be further enhanced with a larger amount of data.
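The dimensionality-reduction and classification stage (PCA down to ten components, then SVM and random forest) can be sketched with scikit-learn; the ALO/PSO feature-weighting step is omitted here, and the feature matrix and labels are synthetic.

```python
# Sketch: PCA to 10 components followed by SVM and random-forest classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(210, 21)     # 21 deep features per sample (synthetic stand-in)
y = np.arange(210) % 7          # 7 cervical classes (synthetic, balanced labels)

for name, clf in [("SVM", SVC(kernel="rbf")), ("RF", RandomForestClassifier(n_estimators=200))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```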
Collapse
Affiliation(s)
- Mohammed Alsalatie
- King Hussein Medical Center, Royal Jordanian Medical Service, The Institute of Biomedical Technology, Amman 11855, Jordan;
| | - Hiam Alquran
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid 21163, Jordan; (A.Z.); (A.M.A.)
| | - Wan Azani Mustafa
- Faculty of Electrical Engineering & Technology, Campus Pauh Putra, Universiti Malaysia Perlis, Arau 02600, Malaysia
- Advanced Computing (AdvCOMP), Centre of Excellence (CoE), Universiti Malaysia Perlis, Arau 02600, Malaysia
| | - Ala’a Zyout
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid 21163, Jordan; (A.Z.); (A.M.A.)
| | - Ali Mohammad Alqudah
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid 21163, Jordan; (A.Z.); (A.M.A.)
| | - Reham Kaifi
- College of Applied Medical Sciences, King Saud Bin Abdulaziz University for Health Sciences, Jeddah 21423, Saudi Arabia
- King Abdullah International Medical Research Center, Jeddah 22384, Saudi Arabia
| | - Suhair Qudsieh
- Department of Obstetrics and Gynecology, Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan;
| |
Collapse
|
31
|
Fan Z, Wu X, Li C, Chen H, Liu W, Zheng Y, Chen J, Li X, Sun H, Jiang T, Grzegorzek M, Li C. CAM-VT: A Weakly supervised cervical cancer nest image identification approach using conjugated attention mechanism and visual transformer. Comput Biol Med 2023; 162:107070. [PMID: 37295389 DOI: 10.1016/j.compbiomed.2023.107070] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Revised: 04/27/2023] [Accepted: 05/27/2023] [Indexed: 06/12/2023]
Abstract
Cervical cancer is the fourth most common cancer among women, and cytopathological images are often used to screen for it. However, manual examination is very laborious and the misdiagnosis rate is high. In addition, cervical cancer nest cells are denser and more complex, with high overlap and opacity, increasing the difficulty of identification. Computer-aided automatic diagnosis systems address this problem. In this paper, a weakly supervised cervical cancer nest image identification approach using a Conjugated Attention Mechanism and Visual Transformer (CAM-VT) is proposed, which can analyze Pap slides quickly and accurately. CAM-VT uses conjugated attention mechanism and visual transformer modules for local and global feature extraction, respectively, and then designs an ensemble learning module to further improve the identification capability. To determine a reasonable interpretation, comparative experiments are conducted on our datasets. The average accuracy on the validation set over three repeated experiments using the CAM-VT framework is 88.92%, which is higher than the best result of 22 well-known deep learning models. Moreover, we conduct ablation experiments and extended experiments on hematoxylin and eosin stained gastric histopathological image datasets to verify the generalization ability of the framework. Finally, the top 5 and top 10 positive probability values of cervical nests are 97.36% and 96.84%, which have important clinical and practical significance. The experimental results show that the proposed CAM-VT framework has excellent performance in potential cervical cancer nest image identification tasks for practical clinical work.
Collapse
Affiliation(s)
- Zizhen Fan
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Xiangchen Wu
- Suzhou Ruiqian Technology Company Ltd., Suzhou, China
| | - Changzhong Li
- Suzhou Ruiqian Technology Company Ltd., Suzhou, China
| | - Haoyuan Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Wanli Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Yuchao Zheng
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Jing Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
| | - Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang, China.
| | - Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
| | - Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
| |
Collapse
|
32
|
Pramanik R, Banerjee B, Sarkar R. MSENet: Mean and standard deviation based ensemble network for cervical cancer detection. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 2023; 123:106336. [DOI: 10.1016/j.engappai.2023.106336] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
|
33
|
Liang Y, Feng S, Liu Q, Kuang H, Liu J, Liao L, Du Y, Wang J. Exploring Contextual Relationships for Cervical Abnormal Cell Detection. IEEE J Biomed Health Inform 2023; 27:4086-4097. [PMID: 37192032 DOI: 10.1109/jbhi.2023.3276919] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Cervical abnormal cell detection is a challenging task as the morphological discrepancies between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists always take surrounding cells as references to identify its abnormality. To mimic these behaviors, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both contextual relationships between cells and between cells and the global image are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish a strong baseline by using Double-Head Faster R-CNN with a feature pyramid network (FPN) and integrate our RRAM and GRAM into it to validate the effectiveness of the proposed modules. Experiments conducted on a large cervical cell detection dataset reveal that introducing RRAM and GRAM each achieves better average precision (AP) than the baseline methods. Moreover, when cascading RRAM and GRAM, our method outperforms the state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can facilitate image- and smear-level classification.
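The core idea, letting each RoI attend to the other RoIs and to a global image descriptor before classification, can be sketched with standard multi-head attention. The module below is a schematic stand-in with assumed dimensions, not a reproduction of RRAM or GRAM.

```python
# Schematic RoI attention: each RoI feature attends to all RoIs plus a global image feature.
import torch
import torch.nn as nn

class RoIContextAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, roi_feats: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # roi_feats: (B, num_rois, dim); global_feat: (B, dim) pooled from the whole image
        context = torch.cat([roi_feats, global_feat.unsqueeze(1)], dim=1)   # RoIs + global token
        enhanced, _ = self.attn(roi_feats, context, context)                # queries are the RoIs
        return self.norm(roi_feats + enhanced)                              # residual enhancement

out = RoIContextAttention()(torch.randn(2, 100, 256), torch.randn(2, 256))
```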
Collapse
|
34
|
Cervical cell classification with deep-learning algorithms. Med Biol Eng Comput 2023; 61:821-833. [PMID: 36626113 DOI: 10.1007/s11517-022-02745-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Accepted: 12/18/2022] [Indexed: 01/11/2023]
Abstract
Cervical cancer is a serious threat to the lives and health of women. The accurate analysis of cervical cell smear images is an important diagnostic basis for cancer identification. However, pathological data are often complex and difficult to analyze accurately because pathology images contain a wide variety of cells. To improve the recognition accuracy of cervical cell smear images, we propose a novel deep-learning model based on the improved Faster R-CNN, shallow feature enhancement networks, and generative adversarial networks. First, we used a global average pooling layer to enhance the robustness of the data feature transformation. Second, we designed a shallow feature enhancement network to improve the localization and recognition of weak cells. Finally, we established a data augmentation network to improve the detection capability of the model. The experimental results demonstrate that our proposed methods are superior to the CenterNet, YOLOv5, and Faster R-CNN algorithms in some aspects, such as shorter time consumption, higher recognition precision, and stronger adaptive ability. The maximum accuracy is 99.81%, and the overall mean average precision is 89.4% for the SIPaKMeD and Herlev datasets. Our method provides a useful reference for cervical cell smear image analysis. The missed diagnosis rate and false diagnosis rate are relatively high for cervical cell smear images of different pathologies and stages; therefore, our algorithms need to be further improved to achieve a better balance. We will use a hyperspectral microscope to obtain more spectral data of cervical cells and input them into deep-learning models for data processing and classification research. First, we sent training samples of cervical cells into our proposed deep-learning model. Then, we used the proposed model to train on eight types of cervical cells. Finally, we utilized the trained classifier to test the untrained samples and obtained the classification results (Fig. 1: deep-learning cervical cell classification framework).
Collapse
|
35
|
Monabbati S, Leo P, Bera K, Michael CW, Nezami BG, Harbhajanka A, Madabhushi A. Automated analysis of computerized morphological features of cell clusters associated with malignancy on bile duct brushing whole slide images. Cancer Med 2023; 12:6365-6378. [PMID: 36281473 PMCID: PMC10028025 DOI: 10.1002/cam4.5365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/01/2022] [Accepted: 08/07/2022] [Indexed: 11/09/2022] Open
Abstract
BACKGROUND Bile duct brush specimens are difficult to interpret as they often present inflammatory and reactive backgrounds due to the local effects of stricture, atypical reactive changes, or previously installed stents, and often have low to intermediate cellularity. As a result, diagnosis of biliary adenocarcinomas is challenging and often results in large interobserver variability and low sensitivity. OBJECTIVE In this work, we used computational image analysis to evaluate the role of nuclear morphological and texture features of epithelial cell clusters to predict the presence of pancreatic and biliary tract adenocarcinoma on digitized brush cytology specimens. METHODS Whole slide images from 124 patients, diagnosed as either benign or malignant based on clinicopathological correlation, were collected and randomly split into training (ST, N = 58) and testing (Sv, N = 66) sets, with the exception that cases diagnosed as atypical on cytology were included only in Sv. Nuclear boundaries on cell clusters extracted from each image were segmented via a watershed algorithm. A total of 536 quantitative morphometric features pertaining to nuclear shape, size, and aggregate cluster texture were extracted from within the cell clusters. The most predictive features from patients in ST were selected via rank-sum, t-test, and minimum redundancy maximum relevance (mRMR) schemes. The selected features were then used to train three machine-learning classifiers. RESULTS Malignant clusters tended to exhibit lower textural homogeneity within the nucleus, greater textural entropy around the nuclear membrane, and longer minor axis lengths. The sensitivity of cytology alone was 74% (without atypicals) and 46% (with atypicals). With machine diagnosis, the sensitivity improved from 46% to 68% when atypicals were included and treated as nonmalignant false negatives. The specificity of our model was 100% within the atypical category. CONCLUSION We achieved an area under the receiver operating characteristic curve (AUC) of 0.79 on Sv, which included atypical cytological diagnoses.
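The watershed segmentation and morphometric feature extraction described here can be sketched with scikit-image and SciPy; the thresholding, marker selection, and the handful of features below are illustrative choices, not the study's 536-feature pipeline.

```python
# Sketch: watershed segmentation of nuclei followed by per-nucleus morphometric measurements.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, filters, measure, segmentation

gray = np.random.rand(256, 256)                       # placeholder grayscale cluster image
binary = gray > filters.threshold_otsu(gray)          # rough foreground mask
distance = ndi.distance_transform_edt(binary)
peaks = feature.peak_local_max(distance, min_distance=5, labels=measure.label(binary))
markers = np.zeros(gray.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
nuclei = segmentation.watershed(-distance, markers, mask=binary)

for region in measure.regionprops(nuclei, intensity_image=gray):
    # shape and intensity descriptors per nucleus (e.g., minor axis length, as the study highlights)
    print(region.label, region.area, region.minor_axis_length,
          region.eccentricity, region.mean_intensity)
```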
Collapse
Affiliation(s)
- Shayan Monabbati
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
| | - Patrick Leo
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
| | - Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
| | - Claire W. Michael
- Department of Pathology, Case Western Reserve University School of Medicine, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Behtash G. Nezami
- Department of Pathology, Case Western Reserve University School of Medicine, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Aparna Harbhajanka
- Department of Pathology, Case Western Reserve University School of Medicine, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
| | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, USA
| |
Collapse
|
36
|
Kavitha R, Jothi DK, Saravanan K, Swain MP, Gonzáles JLA, Bhardwaj RJ, Adomako E. Ant Colony Optimization-Enabled CNN Deep Learning Technique for Accurate Detection of Cervical Cancer. BIOMED RESEARCH INTERNATIONAL 2023; 2023:1742891. [PMID: 36865486 PMCID: PMC9974247 DOI: 10.1155/2023/1742891] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/07/2022] [Revised: 10/03/2022] [Accepted: 02/07/2023] [Indexed: 02/23/2023]
Abstract
Cancer is characterized by abnormal cell growth and proliferation, which are both diagnostic indicators of the disease. When cancerous cells enter one organ, there is a risk that they may spread to adjacent tissues and eventually to other organs. Cervical cancer often initially manifests in the uterine cervix, which is located at the lower end of the uterus. Both the growth and death of cervical cells are characteristic features of this condition. False-negative results pose a significant ethical dilemma, since they may give women with cancer an incorrect negative diagnosis, which in turn can result in premature death from the disease. False-positive results do not raise such serious ethical concerns, but they do require a patient to go through an expensive and time-consuming treatment process, and they also cause the patient to experience unwarranted tension and anxiety. To detect cervical cancer in its earliest stages in women, a screening procedure known as the Pap test is often performed. This article describes a technique for enhancing images using Brightness Preserving Dynamic Fuzzy Histogram Equalization. The images are then segmented using the fuzzy c-means method to isolate individual components and find the right region of interest. Feature selection is performed with the ant colony optimization (ACO) algorithm. Following that, classification is carried out using CNN, MLP, and ANN models.
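The fuzzy c-means step used to isolate the region of interest can be illustrated with a compact NumPy implementation over pixel intensities; this is a generic sketch (cluster count, fuzziness exponent, and iteration budget are assumptions), not the paper's BPDFHE + FCM + ACO pipeline.

```python
# Compact fuzzy c-means on pixel intensities (m is the fuzziness exponent).
import numpy as np

def fuzzy_cmeans(values: np.ndarray, n_clusters: int = 3, m: float = 2.0,
                 n_iters: int = 50, eps: float = 1e-9):
    rng = np.random.default_rng(0)
    u = rng.random((len(values), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1 per pixel
    for _ in range(n_iters):
        um = u ** m
        centers = (um * values[:, None]).sum(axis=0) / (um.sum(axis=0) + eps)
        dist = np.abs(values[:, None] - centers[None, :]) + eps
        # standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        p = dist ** (2.0 / (m - 1.0))
        u = 1.0 / (p * (1.0 / p).sum(axis=1, keepdims=True))
    return centers, u

image = np.random.rand(64, 64)                        # placeholder enhanced image
centers, memberships = fuzzy_cmeans(image.ravel())
segmentation_map = memberships.argmax(axis=1).reshape(image.shape)  # hard labels from memberships
```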
Collapse
Affiliation(s)
- R. Kavitha
- Sri Ram Nallamani Yadava Arts and Science College, Manonmaniam Sundaranar University, Tirunelveli, India
| | - D. Kiruba Jothi
- Department of Information Technology, Sri Ram Nallamani Yadava college of Arts and Science, Manonmaniam Sundaranar University, Tirunelveli, India
| | - K. Saravanan
- Department of Information Technology, R.M.D. Engineering College, Chennai, India
| | - Mahendra Pratap Swain
- Department of Pharmaceutical Sciences and Technology, Birla Institute of Technology, Mesra, Ranchi, India
| | | | - Rakhi Joshi Bhardwaj
- Department of Computer Engineering, Vishwakarma Institute of Technology, Savitribai Phule Pune University, Pune, India
| | | |
Collapse
|
37
|
Fekri-Ershad S, Alsaffar MF. Developing a Tuned Three-Layer Perceptron Fed with Trained Deep Convolutional Neural Networks for Cervical Cancer Diagnosis. Diagnostics (Basel) 2023; 13:686. [PMID: 36832174 PMCID: PMC9955324 DOI: 10.3390/diagnostics13040686] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Revised: 01/14/2023] [Accepted: 02/07/2023] [Indexed: 02/15/2023] Open
Abstract
Cervical cancer is one of the most common types of cancer among women and has a higher death rate than many other cancer types. The most common way to diagnose cervical cancer is to analyze images of cervical cells obtained with the Pap smear imaging test. Early and accurate diagnosis can save the lives of many patients and increase the chance of successful treatment. Until now, various methods have been proposed to diagnose cervical cancer based on the analysis of Pap smear images. Most of the existing methods can be divided into two groups: those based on deep learning techniques and those based on machine learning algorithms. In this study, a combination method is presented whose overall structure follows a machine learning strategy, where the feature extraction stage is completely separate from the classification stage; however, deep networks are used in the feature extraction stage. In this paper, a multi-layer perceptron (MLP) neural network fed with deep features is presented. The number of hidden-layer neurons is tuned based on four innovative ideas. Additionally, the ResNet-34, ResNet-50, and VGG-19 deep networks have been used to feed the MLP. In the presented method, the layers related to the classification phase are removed from these CNN networks, and the outputs feed the MLP after passing through a flatten layer. In order to improve performance, the CNNs are trained on related images using the Adam optimizer. The proposed method has been evaluated on the Herlev benchmark database and provided 99.23 percent accuracy for the two-class case and 97.65 percent accuracy for the seven-class case. The results show that the presented method provides higher accuracy than the baseline networks and many existing methods.
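Feeding deep features from a pretrained CNN (with its classification layers removed) into a separate classifier can be sketched as follows; torchvision's ResNet-50 and scikit-learn's MLPClassifier stand in for the networks and the tuned MLP used in the paper, and the data are synthetic.

```python
# Sketch: deep features from a CNN backbone (classification head removed) feed an MLP classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neural_network import MLPClassifier
from torchvision import models

backbone = models.resnet50(weights=None)   # in practice, load ImageNet weights (ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()                # drop the classification head, keep 2048-d features
backbone.eval()

images = torch.randn(20, 3, 224, 224)      # placeholder Pap smear crops
with torch.no_grad():
    feats = backbone(images).numpy()       # (20, 2048) deep feature vectors

labels = np.array([0, 1] * 10)             # placeholder normal/abnormal labels
mlp = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
mlp.fit(feats, labels)
print("training accuracy:", mlp.score(feats, labels))
```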
Collapse
Affiliation(s)
- Shervan Fekri-Ershad
- Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad 8514143131, Iran
- Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad 8514143131, Iran
| | - Marwa Fadhil Alsaffar
- Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq
| |
Collapse
|
38
|
Maurya S, Tiwari S, Mothukuri MC, Tangeda CM, Nandigam RNS, Addagiri DC. A review on recent developments in cancer detection using Machine Learning and Deep Learning models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
39
|
Atasever S, Azginoglu N, Terzi DS, Terzi R. A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023; 94:18-41. [PMID: 36462229 DOI: 10.1016/j.clinimag.2022.11.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 10/17/2022] [Accepted: 11/01/2022] [Indexed: 11/13/2022]
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field to provide an overview of current solutions used in medical image analysis in parallel with the rapid developments in transfer learning (TL). Unlike previous studies, this survey grouped the last five years of current studies for the period between January 2017 and February 2021 according to different anatomical regions and detailed the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. Also, it provides readers with detailed information on technical challenges, opportunities, and future research trends. In this way, an overview of recent developments is provided to help researchers to select the most effective and efficient methods and access widely used and publicly available medical datasets, research gaps, and limitations of the available literature.
Collapse
Affiliation(s)
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey.
| | - Nuh Azginoglu
- Computer Engineering Department, Kayseri University, Kayseri, Turkey.
| | | | - Ramazan Terzi
- Computer Engineering Department, Amasya University, Amasya, Turkey.
| |
Collapse
|
40
|
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 10/22/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022]
Abstract
Computational cytology is a critical, rapid-developing, yet challenging topic in medical image computing concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, leading to boosting publications of cytological studies. In this article, we survey more than 120 publications of DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, versatile cytology image analysis applications including cell classification, slide-level cancer screening, nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
Collapse
|
41
|
Kalbhor M, Shinde S, Joshi H, Wajire P. Pap smear-based cervical cancer detection using hybrid deep learning and performance evaluation. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2023. [DOI: 10.1080/21681163.2022.2163704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Affiliation(s)
- Madhura Kalbhor
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
| | - Swati Shinde
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
| | - Hrushikesh Joshi
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
| | - Pankaj Wajire
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
| |
Collapse
|
42
|
Depto DS, Rizvee MM, Rahman A, Zunair H, Rahman MS, Mahdy MRC. Quantifying imbalanced classification methods for leukemia detection. Comput Biol Med 2023; 152:106372. [PMID: 36516574 DOI: 10.1016/j.compbiomed.2022.106372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 11/01/2022] [Accepted: 11/27/2022] [Indexed: 12/03/2022]
Abstract
Uncontrolled proliferation of B-lymphoblast cells is a common characteristic of Acute Lymphoblastic Leukemia (ALL). B-lymphoblasts are found in large numbers in peripheral blood in malignant cases. Early detection of these cells in bone marrow is essential, as the disease progresses rapidly if left untreated. However, automated classification of these cells is challenging, owing to their fine-grained variability with B-lymphoid precursor cells and imbalanced data points. Deep learning algorithms demonstrate potential for such fine-grained classification but also suffer from the imbalanced-class problem. In this paper, we explore different deep learning-based state-of-the-art (SOTA) approaches to tackle imbalanced classification problems. Our experiments include input-based, GAN-based (generative adversarial networks), and loss-based methods to mitigate the issue of class imbalance on the challenging C-NMC and ALLIDB-2 datasets for leukemia detection. We show empirical evidence that loss-based methods outperform GAN-based and input-based methods in imbalanced classification scenarios.
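As one concrete example of the loss-based family referred to here, a focal loss down-weights easy examples so that minority-class errors dominate the gradient; the implementation below is a common formulation, not necessarily the exact loss evaluated in the paper.

```python
# Focal loss: down-weights easy examples so the minority class contributes more to the gradient.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    ce = F.cross_entropy(logits, targets, reduction="none")   # per-sample cross-entropy
    pt = torch.exp(-ce)                                        # probability of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

logits = torch.randn(8, 2, requires_grad=True)   # e.g. ALL vs. normal logits (placeholder)
targets = torch.randint(0, 2, (8,))
loss = focal_loss(logits, targets)
loss.backward()
```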
Collapse
Affiliation(s)
- Deponker Sarker Depto
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
| | - Md Mashfiq Rizvee
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh; Texas Tech University, Lubbock, TX, United States of America.
| | - Aimon Rahman
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
| | | | - M Sohel Rahman
- Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, ECE Building, West Palasi, Dhaka 1205, Bangladesh.
| | - M R C Mahdy
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
| |
Collapse
|
43
|
Chowdary GJ, G S, M P, Yogarajah P. Nucleus segmentation and classification using residual SE-UNet and feature concatenation approach in cervical cytopathology cell images. Technol Cancer Res Treat 2023; 22:15330338221134833. [PMID: 36744768 PMCID: PMC9905035 DOI: 10.1177/15330338221134833] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 09/30/2022] [Indexed: 02/07/2023] Open
Abstract
Introduction: Pap smear is considered the primary examination for the diagnosis of cervical cancer, but the analysis of Pap smear slides is time-consuming and tedious, as it requires manual intervention. The diagnostic efficiency depends on the medical expertise of the pathologist, and human error often hinders the diagnosis. Automated segmentation and classification of cervical nuclei will help diagnose cervical cancer at earlier stages. Materials and Methods: The proposed methodology includes three models: a Residual-Squeeze-and-Excitation-module based segmentation model, a fusion-based feature extraction model, and a multi-layer perceptron classification model. In the fusion-based feature extraction model, three sets of deep features are extracted from the segmented nuclei using the pre-trained and fine-tuned VGG19, VGG-F, and CaffeNet models, and two hand-crafted descriptors, Bag-of-Features and Local Binary Patterns, are extracted for each image. For this work, the Herlev, SIPaKMeD, and ISBI2014 datasets are used for evaluation. The Herlev dataset is used for evaluating both the segmentation and classification models, whereas the SIPaKMeD and ISBI2014 datasets are used for evaluating the classification model and the segmentation model, respectively. Results: The segmentation network enhanced the precision and ZSI by 2.04% and 2.00% on the Herlev dataset, and the precision and recall by 0.68% and 2.59% on the ISBI2014 dataset. The classification approach enhanced the accuracy, recall, and specificity by 0.59%, 0.47%, and 1.15% on the Herlev dataset, and by 0.02%, 0.15%, and 0.22% on the SIPaKMeD dataset. Conclusion: The experiments demonstrate that the proposed work achieves promising performance on segmentation and classification in cervical cytopathology cell images.
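Because the segmentation model above is built around Residual Squeeze-and-Excitation modules, a generic squeeze-and-excitation block is sketched below for orientation; this is a textbook PyTorch formulation, not the authors' exact module, and the reduction ratio of 16 is an assumed default.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation block (channel re-weighting)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: channel gating MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.pool(x).view(n, c)                  # (N, C)
        w = self.fc(w).view(n, c, 1, 1)              # per-channel weights in (0, 1)
        return x * w                                 # re-scale the feature maps

# Toy usage on a feature map from a segmentation encoder.
feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```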
Collapse
Affiliation(s)
| | - Suganya G
- Vellore Institute of Technology, Chennai, India
| | | | | |
Collapse
|
44
|
Gao W, Xu C, Li G, Zhang Y, Bai N, Li M. Cervical Cell Image Classification-Based Knowledge Distillation. Biomimetics (Basel) 2022; 7:biomimetics7040195. [PMID: 36412723 PMCID: PMC9680356 DOI: 10.3390/biomimetics7040195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 11/03/2022] [Accepted: 11/05/2022] [Indexed: 11/12/2022] Open
Abstract
Current deep-learning-based cervical cell classification methods suffer from parameter redundancy and poor model generalization, which creates challenges for the intelligent classification of cervical cytology smear images. In this paper, we establish a classification method that combines transfer learning and knowledge distillation. This new method not only transfers common features between different source-domain data but also realizes model-to-model knowledge transfer, using the unnormalized probability outputs between models as knowledge. A multi-exit classification network is then introduced as the student network, with a global context module embedded in each exit branch. A self-distillation method is then proposed to fuse contextual information: deep classifiers in the student network guide shallow classifiers to learn, and multiple classifier outputs are fused using an average integration strategy to form a classifier with strong generalization performance. The experimental results show that the developed method achieves good results on the SIPaKMeD dataset: the accuracy, sensitivity, specificity, and F-measure of the five-class classification are 98.52%, 98.53%, 98.68%, and 98.59%, respectively. The effectiveness of the method is further verified on a natural image dataset.
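The model-to-model knowledge transfer described above, which treats unnormalized probability outputs (logits) as knowledge, is commonly realized with a temperature-softened KL-divergence loss. The sketch below shows that standard formulation; the temperature T, the weight alpha, and the toy five-class batch are illustrative assumptions, and the paper's actual multi-exit, self-distillation setup is more elaborate.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Standard soft-target distillation loss (T and alpha are illustrative,
    not the paper's reported settings)."""
    # Soft targets: the teacher's softened class distribution guides the student.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# Toy usage for a five-class cervical cell problem (SIPaKMeD has five classes).
student = torch.randn(8, 5)
teacher = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
print(distillation_loss(student, teacher, labels).item())
```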
Collapse
Affiliation(s)
- Wenjian Gao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Correspondence: (C.X.); (G.L.)
| | - Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (C.X.); (G.L.)
| | - Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
| | - Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| |
Collapse
|
45
|
Huang H, You Z, Cai H, Xu J, Lin D. Fast detection method for prostate cancer cells based on an integrated ResNet50 and YoloV5 framework. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107184. [PMID: 36288685 DOI: 10.1016/j.cmpb.2022.107184] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 10/10/2022] [Accepted: 10/15/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE To propose a fast detection method for abnormal prostate cancer cells based on deep learning. The purpose of this method is to quickly and accurately locate and identify abnormal cells, so as to improve the efficiency of prostate precancerous screening and promote the application and popularization of assisted screening technology for prostate cancer cells. METHOD The method includes two stages: preliminary screening of abnormal cell images and accurate identification of abnormal cells. In the preliminary screening stage, a ResNet50 model is used as the image classification network to judge whether a local area contains cell clusters. In the second stage, a YoloV5 model is used as the target detection network to locate and recognize abnormal cells in images containing cell clusters. RESULTS This detection method targets pathological cell images obtained by the membrane method, and the two-stage models proposed in this paper are compared with a single-stage method that uses only the target detection model. The results show that an image classification network based on deep learning can first judge whether abnormal cells are present in a local area; if they are, the candidate-box-based target detection method is then applied for analysis. This reduces inference time by 50% and improves the efficiency of abnormal cell detection, at the cost of a small loss of accuracy and a slight increase in model complexity. CONCLUSION This study proposes a fast detection method for abnormal prostate cancer cells based on deep learning, which can greatly shorten inference time and improve detection speed. It can improve the efficiency of prostate precancerous screening.
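The two-stage pipeline above (a classification network that pre-screens local regions, followed by a detector run only on regions flagged as containing cell clusters) can be sketched roughly as follows. The stock torchvision ResNet50, the public ultralytics/yolov5 hub model, and the 0.5 screening threshold are stand-in assumptions, not the authors' trained models or settings.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Stage 1: image-level classifier that flags patches containing cell clusters.
# (Stock ImageNet weights here; the authors fine-tune their own classifier.)
classifier = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)  # cluster / no cluster
classifier.eval()

# Stage 2: object detector, run only on patches that pass stage 1.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # public hub model

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def screen_then_detect(patch: Image.Image, threshold: float = 0.5):
    """Return detections only if the patch is predicted to contain clusters."""
    with torch.no_grad():
        prob = torch.softmax(classifier(preprocess(patch).unsqueeze(0)), dim=1)[0, 1]
    if prob.item() < threshold:
        return None            # skipping the detector on negative patches saves time
    return detector(patch)     # YOLOv5 localizes and labels the abnormal cells

# Usage: results = screen_then_detect(Image.open("patch.png").convert("RGB"))
```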
Collapse
Affiliation(s)
- Hongyuan Huang
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China.
| | - Zhijiao You
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
| | - Huayu Cai
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
| | - Jianfeng Xu
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
| | - Dongxu Lin
- Department of Urology, Jinjiang Municipal Hospital, Quanzhou, Fujian Province, 362000, China
| |
Collapse
|
46
|
Yin H, Bai L, Jia H, Lin G. Noninvasive assessment of breast cancer molecular subtypes on multiparametric MRI using convolutional neural network with transfer learning. Thorac Cancer 2022; 13:3183-3191. [PMID: 36203226 PMCID: PMC9663668 DOI: 10.1111/1759-7714.14673] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 09/12/2022] [Accepted: 09/13/2022] [Indexed: 01/07/2023] Open
Abstract
BACKGROUND To evaluate the performances of multiparametric MRI-based convolutional neural networks (CNNs) for the preoperative assessment of breast cancer molecular subtypes. METHODS A total of 136 patients with 136 pathologically confirmed invasive breast cancers were randomly divided into training, validation, and testing sets in this retrospective study. The CNN models were established based on contrast-enhanced T1-weighted imaging (T1C), apparent diffusion coefficient (ADC), and T2-weighted imaging (T2W) using the training and validation sets. The performances of the CNN models were evaluated on the testing set. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were calculated to assess performance. RESULTS For the separation of each subtype from the other subtypes on the testing set, the T1C-based models yielded AUCs from 0.762 to 0.920; the ADC-based models yielded AUCs from 0.686 to 0.851; and the T2W-based models achieved AUCs from 0.639 to 0.697. CONCLUSION T1C-based models performed better than ADC-based and T2W-based models in assessing breast cancer molecular subtypes. The discriminating performances of our CNN models for the triple-negative and human epidermal growth factor receptor 2-enriched subtypes were better than those for the luminal A and luminal B subtypes.
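As a rough illustration of the transfer-learning setup above, the snippet below fine-tunes an ImageNet-pretrained CNN for four molecular subtypes on a single MRI sequence; the ResNet18 backbone, the frozen-layer scheme, and the hyperparameters are assumptions for illustration, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SUBTYPES = 4  # luminal A, luminal B, HER2-enriched, triple negative

# Load an ImageNet-pretrained backbone and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SUBTYPES)

# Freeze early layers so only the last block and the new head are fine-tuned.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of 3-channel 224x224 T1C slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_SUBTYPES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```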
Collapse
Affiliation(s)
- Haolin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| | - Lutian Bai
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| | - Huihui Jia
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| | - Guangwu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| |
Collapse
|
47
|
Song J, Im S, Lee SH, Jang HJ. Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images. Diagnostics (Basel) 2022; 12:2623. [PMID: 36359467 PMCID: PMC9689570 DOI: 10.3390/diagnostics12112623] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Revised: 10/26/2022] [Accepted: 10/26/2022] [Indexed: 08/11/2023] Open
Abstract
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes; therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, an endometrial versus endocervical origin of an adenocarcinoma should also be distinguished. Although the discrimination can be aided by various immunohistochemical markers, there is no definitive marker. Therefore, we tested the feasibility of deep learning (DL)-based classification of the subtypes of cervical and endometrial cancers and of the site of origin of adenocarcinomas from whole-slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification, and the average of the patch classification results was used for the final classification. The areas under the receiver operating characteristic curve (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrate the feasibility of DL-based classifiers for discriminating cancers of the cervix and uterus. We expect that the performance of the classifiers will be much enhanced with the accumulation of WSI data; the information from the classifiers can then be integrated with other data for more precise discrimination of cervical and endometrial cancers.
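The slide-level decision rule above (classify 360 × 360 patches, then average the patch-level results) is simple enough to show directly. The helper below is a hedged sketch: patch_model stands in for any trained patch classifier, and the naive patch grid omits the tissue masking a real WSI pipeline would need.

```python
import numpy as np
import torch

def slide_level_probability(wsi: np.ndarray, patch_model, patch_size: int = 360) -> float:
    """Average patch-level malignancy probabilities into a slide-level score.

    wsi:         RGB image array (H, W, 3) already at the working magnification
    patch_model: callable mapping a (1, 3, patch_size, patch_size) tensor to logits
    """
    h, w, _ = wsi.shape
    probs = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = wsi[y:y + patch_size, x:x + patch_size]
            tensor = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                prob = torch.softmax(patch_model(tensor), dim=1)[0, 1].item()
            probs.append(prob)
    # The slide-level prediction is the mean of the patch-level predictions.
    return float(np.mean(probs)) if probs else 0.0

# Usage (hypothetical): score = slide_level_probability(wsi_array, trained_patch_cnn)
```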
Collapse
Affiliation(s)
- JaeYen Song
- Department of Obstetrics and Gynecology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
| | - Soyoung Im
- Department of Hospital Pathology, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 16247, Korea
| | - Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
| | - Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
| |
Collapse
|
48
|
Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. [PMID: 36358683 PMCID: PMC9654172 DOI: 10.3390/cancers14215264] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/10/2022] [Accepted: 10/24/2022] [Indexed: 10/06/2023] Open
Abstract
The revolution in artificial intelligence and its impact on our daily lives have led to tremendous interest in the field and its related subtypes: machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathology, such as tumor detection, classification, grading with variant stages, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to minimize the workload and the time consumed, both of which affect the accuracy of the decisions taken. Regrettably, there are certain obstacles to overcome in artificial intelligence deployment, such as the applicability and validation of algorithms and computational technologies, in addition to the ability to train pathologists and doctors to use these tools and their willingness to accept the results. This review provides a survey of how machine learning and deep learning methods could be integrated into healthcare providers' routine tasks, and of the obstacles and opportunities for artificial intelligence application in tumor morphology.
Collapse
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
| | - Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
| | - Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
| |
Collapse
|
49
|
Xu C, Li M, Li G, Zhang Y, Sun C, Bai N. Cervical Cell/Clumps Detection in Cytology Images Using Transfer Learning. Diagnostics (Basel) 2022; 12:2477. [PMID: 36292166 PMCID: PMC9600700 DOI: 10.3390/diagnostics12102477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Revised: 10/07/2022] [Accepted: 10/10/2022] [Indexed: 12/04/2022] Open
Abstract
Cervical cancer is one of the most common and deadliest cancers among women and poses a serious health risk. Automated screening and diagnosis of cervical cancer will help improve the accuracy of cervical cell screening. In recent years, many studies have used deep learning methods for automatic cervical cancer screening and diagnosis. Deep-learning-based Convolutional Neural Network (CNN) models require large amounts of data for training, but large cervical cell datasets with annotations are difficult to obtain. Some studies have used transfer learning approaches to handle this problem; however, they applied the same transfer learning method, namely initializing the backbone network with an ImageNet pre-trained model, to two different types of tasks: the detection and the classification of cervical cells/clumps. Considering the differences between detection and classification tasks, this study proposes using COCO pre-trained models for cervical cell/clump detection tasks to better handle the limited-dataset problem at training time. To further improve detection performance, we conducted multi-scale training on top of transfer learning, tailored to the characteristics of the dataset. Considering the effect of the bounding box loss on the precision of cervical cell/clump detection, we analyzed the effects of different bounding box losses on the detection performance of the model and demonstrated that using a loss function consistent with the type of pre-trained model helps improve performance. We also analyzed the effect of the mean and standard deviation (std) of different datasets on model performance and showed that detection performance was optimal when using the mean and std of the cervical cell dataset used in this study. Ultimately, with a ResNet50 backbone, the mean Average Precision (mAP) of the network model is 61.6% and the Average Recall (AR) is 87.7%. Compared with the existing values of 48.8% and 64.0% on the same dataset, the detection performance is significantly improved by 12.8% and 23.7%, respectively.
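Two concrete points from the abstract above, initializing from a COCO pre-trained detector rather than an ImageNet-only backbone and normalizing with the cervical cell dataset's own mean and std, can be sketched with torchvision's detection API. The Faster R-CNN builder, the class count, and the mean/std values below are placeholders chosen for illustration, not the paper's exact model or statistics.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Placeholder values: the real per-channel statistics come from the cervical
# cell dataset, and the class list depends on the annotation scheme used.
CERVICAL_MEAN = [0.60, 0.55, 0.65]
CERVICAL_STD = [0.20, 0.22, 0.21]
NUM_CLASSES = 11 + 1  # e.g., 11 cell/clump categories plus background

# Start from COCO pre-trained detection weights (not just an ImageNet backbone),
# and normalize inputs with the target dataset's own statistics.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.COCO_V1,
    image_mean=CERVICAL_MEAN,
    image_std=CERVICAL_STD,
)

# Replace the COCO box head with one sized for the cervical cell classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.train()  # ready for fine-tuning on the cervical cell/clump dataset
```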
Collapse
Affiliation(s)
- Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
| | - Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
| | - Chengjie Sun
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| |
Collapse
|
50
|
Auxiliary classification of cervical cells based on multi-domain hybrid deep learning framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|