1. Yang J, Luo X, Guo L, Cheng H, Tang Y, Song Y, Li W, Xiong L, Gao F, Cheng W, Zhu Q. A novel scoring model to predict massive hemorrhage during dilatation and curettage following focused ultrasound ablation surgery in patients with type 2 cesarean scar pregnancy. Int J Hyperthermia 2025;42:2495362. PMID: 40296674. DOI: 10.1080/02656736.2025.2495362.
Abstract
OBJECTIVE To develop a predictive model for assessing massive hemorrhage risk during dilatation and curettage (D&C) after focused ultrasound ablation surgery (FUAS) in Type 2 cesarean scar pregnancy (CSP) patients. METHODS A retrospective analysis of 405 Type 2 CSP patients treated at Hunan Maternal and Child Health Hospital (2018-2023) was conducted. Multivariable logistic regression identified independent risk factors, and a nomogram was constructed. Model performance was evaluated using AUC, calibration curves, and decision curve analysis (DCA). Ten-fold cross-validation was performed, and external validation was conducted on 327 patients. RESULTS Independent risk factors included gestational sac maximum diameter (OR 1.11, 95% CI: [1.07-1.15], p < 0.001), GS blood flow US grade 3 (OR 9.96, 95% CI: [2.65-40.10], p < 0.001), and FUAS-curette time >24 h (OR 17.57, 95% CI: [3.88-84.48], p < 0.001). C-scar thickness and HCG levels were also included in the model as clinically significant factors. The model showed high discriminative ability (AUC 0.910, 95% CI: 0.867-0.953) and was validated through 10-fold cross-validation (mean AUC 0.838). External validation confirmed its robustness (AUC 0.812, 95% CI: 0.742-0.881). Calibration curves and DCA confirmed its accuracy and clinical utility. CONCLUSION The predictive model effectively assesses hemorrhage risk in Type 2 CSP patients post-FUAS, offering valuable clinical utility.
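The abstract above reports a multivariable logistic regression nomogram evaluated with AUC and 10-fold cross-validation. The sketch below is illustrative only: it fits that kind of risk model on synthetic stand-ins for the named predictors (sac diameter, blood-flow grade, FUAS-to-curettage interval, scar thickness, HCG), not on the study's data, and uses scikit-learn rather than the authors' code.

```python
# Minimal sketch (synthetic data, not the study's cohort): multivariable
# logistic regression risk model with 10-fold cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 405
X = np.column_stack([
    rng.normal(30, 10, n),     # gestational sac maximum diameter (mm), hypothetical scale
    rng.integers(0, 2, n),     # blood-flow ultrasound grade 3 (1 = yes)
    rng.integers(0, 2, n),     # FUAS-to-curettage interval > 24 h (1 = yes)
    rng.normal(2.5, 1.0, n),   # cesarean scar thickness (mm)
    rng.normal(50, 20, n),     # serum HCG (kIU/L)
])
logit = -4 + 0.08 * X[:, 0] + 2.0 * X[:, 1] + 2.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated hemorrhage label

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, scoring="roc_auc",
                      cv=StratifiedKFold(10, shuffle=True, random_state=0))
print(f"10-fold cross-validated AUC: {auc.mean():.3f}")

model.fit(X, y)
# exp(coefficients) approximate the adjusted odds ratios (L2-penalized fit)
print("Approximate adjusted odds ratios:", np.exp(model.coef_).round(2))
```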
Affiliations
- Jing Yang, Xiaomei Luo, Hui Cheng, Yi Tang, Wei Li, Li Xiong, Fang Gao, Wei Cheng, Qiaoling Zhu: Department of Obstetrics and Gynecology, Hunan Provincial Maternal and Child Health Care Hospital, Changsha, Hunan, P.R. China
- Litong Guo, Yiqin Song: Department of Obstetrics and Gynecology, The First People's Hospital of Chenzhou, Chenzhou, Hunan, P.R. China
2. Liu L, Liu J, Su Q, Chu Y, Xia H, Xu R. Performance of artificial intelligence for diagnosing cervical intraepithelial neoplasia and cervical cancer: a systematic review and meta-analysis. EClinicalMedicine 2025;80:102992. PMID: 39834510. PMCID: PMC11743870. DOI: 10.1016/j.eclinm.2024.102992.
Abstract
Background Cervical cytology screening and colposcopy play crucial roles in cervical intraepithelial neoplasia (CIN) and cervical cancer prevention. Previous studies have provided evidence that artificial intelligence (AI) has remarkable diagnostic accuracy in these procedures. With this systematic review and meta-analysis, we aimed to examine the pooled accuracy, sensitivity, and specificity of AI-assisted cervical cytology screening and colposcopy for cervical intraepithelial neoplasia and cervical cancer screening. Methods In this systematic review and meta-analysis, we searched the PubMed, Embase, and Cochrane Library databases for studies published between January 1, 1986 and August 31, 2024. Studies investigating the sensitivity and specificity of AI-assisted cervical cytology screening and colposcopy for histologically verified cervical intraepithelial neoplasia and cervical cancer and a minimum of five cases were included. The performance of AI and experienced colposcopists was assessed via the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) through random effect models. Additionally, subgroup analyses of multiple diagnostic performance metrics in developed and developing countries were conducted. This study was registered with PROSPERO (CRD42024534049). Findings Seventy-seven studies met the eligibility criteria for inclusion in this study. The pooled diagnostic parameters of AI-assisted cervical cytology via Papanicolaou (Pap) smears were as follows: accuracy, 94% (95% CI 92-96); sensitivity, 95% (95% CI 91-98); specificity, 94% (95% CI 89-97); PPV, 88% (95% CI 78-96); and NPV, 95% (95% CI 89-99). The pooled accuracy, sensitivity, specificity, PPV, and NPV of AI-assisted cervical cytology via ThinPrep cytologic test (TCT) were 90% (95% CI 85-94), 97% (95% CI 95-99), 94% (95% CI 85-98), 84% (95% CI 64-98), and 96% (95% CI 94-98), respectively. Subgroup analysis revealed that, for AI-assisted cervical cytology diagnosis, certain performance indicators were superior in developed countries compared to developing countries. Compared with experienced colposcopists, AI demonstrated superior accuracy in colposcopic examinations (odds ratio (OR) 1.75; 95% CI 1.33-2.31; P < 0.0001; I2 = 93%). Interpretation These results underscore the potential and practical value of AI in preventing and enabling early diagnosis of cervical cancer. Further research should support the development of AI for cervical cancer screening, including in low- and middle-income countries with limited resources. Funding This study was supported by the National Natural Science Foundation of China (No. 81901493) and the Shanghai Pujiang Program (No. 21PJD006).
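The pooled sensitivities and specificities above come from random-effects models. As a minimal sketch of that kind of pooling (assuming logit-transformed proportions and DerSimonian-Laird between-study variance; the per-study counts below are invented), one could compute:

```python
# Minimal sketch (assumed workflow, not the authors' code): DerSimonian-Laird
# random-effects pooling of per-study sensitivities on the logit scale.
import numpy as np

tp = np.array([90, 45, 120, 60, 200])        # hypothetical true positives per study
n_pos = np.array([95, 50, 130, 68, 210])     # hypothetical diseased cases per study

p = tp / n_pos
y = np.log(p / (1 - p))                      # logit-transformed sensitivity
v = 1 / tp + 1 / (n_pos - tp)                # approximate within-study variance

w = 1 / v                                    # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (v + tau2)                        # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = y_re - 1.96 * se, y_re + 1.96 * se

expit = lambda x: 1 / (1 + np.exp(-x))       # back-transform to the probability scale
print(f"Pooled sensitivity: {expit(y_re):.3f} (95% CI {expit(lo):.3f}-{expit(hi):.3f})")
```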
Affiliations
- Lei Liu, Hexia Xia: Department of Gynecology, Obstetrics and Gynecology Hospital of Fudan University, Shanghai, 200011, China
- Jiangang Liu: Department of Obstetrics and Gynecology, Puren Hospital Affiliated to Wuhan University of Science and Technology, Wuhan, 430080, China
- Qing Su: Department of Obstetrics and Gynecology, The Fourth Hospital of Changsha, Changsha, 410006, China
- Yuening Chu: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, 201204, China
- Ran Xu: Department of Obstetrics and Gynecology, Affiliated Zhejiang Hospital, Zhejiang University School of Medicine, Hangzhou, 310013, China; Heidelberg University, Heidelberg, 69120, Germany
3. Dellino M, Cerbone M, d’Amati A, Bochicchio M, Laganà AS, Etrusco A, Malvasi A, Vitagliano A, Pinto V, Cicinelli E, Cazzato G, Cascardi E. Artificial Intelligence in Cervical Cancer Screening: Opportunities and Challenges. AI 2024;5:2984-3000. DOI: 10.3390/ai5040144.
Abstract
Among gynecological pathologies, cervical cancer has always represented a health problem with great social impact. The giant strides made as a result of both the screening programs perfected and implemented over the years and the use of new and accurate technological equipment have in fact significantly improved our clinical approach in the management and personalized diagnosis of precancerous lesions of the cervix. In this context, the advent of artificial intelligence and digital algorithms could represent new directions available to gynecologists and pathologists for the following: (i) the standardization of screening procedures, (ii) the identification of increasingly early lesions, and (iii) heightening the diagnostic accuracy of targeted biopsies and prognostic analysis of cervical cancer. The purpose of our review was to evaluate to what extent artificial intelligence can be integrated into current protocols, to identify the strengths and/or weaknesses of this method, and, above all, determine what we should expect in the future to develop increasingly safer solutions, as well as increasingly targeted and personalized screening programs for these patients. Furthermore, in an innovative way, and through a multidisciplinary vision (gynecologists, pathologists, and computer scientists), with this manuscript, we highlight a key role that AI could have in the management of HPV-positive patients. In our vision, AI will move from being a simple diagnostic device to being used as a tool for performing risk analyses of HPV-related disease progression. This is thanks to the ability of new software not only to analyze clinical and histopathological images but also to evaluate and integrate clinical elements such as vaccines, the composition of the microbiota, and the immune status of patients. In fact, the single-factor evaluation of high-risk HPV strains represents a limitation that must be overcome. Therefore, AI, through multifactorial analysis, will be able to generate a risk score that will better stratify patients and will support clinicians in choosing highly personalized treatments overall. Our study remains an innovative proposal and idea, as the literature to date presents a limitation in that this topic is considered niche, but we believe that the union of common efforts can overcome this limitation.
Affiliations
- Miriam Dellino, Marco Cerbone, Antonio Malvasi, Amerigo Vitagliano, Vincenzo Pinto, Ettore Cicinelli: 1st Unit of Obstetrics and Gynecology, Department of Interdisciplinary Medicine (DIM), University of Bari, 70124 Bari, Italy
- Antonio d’Amati, Gerardo Cazzato, Eliano Cascardi: Pathology Unit, Department of Precision and Regenerative Medicine and Ionian Area (DiMePRe-J), University of Bari, Piazza Giulio Cesare 11, 70124 Bari, Italy
- Mario Bochicchio: Department of Computer Science, University of Bari, 70121 Bari, Italy
- Antonio Simone Laganà, Andrea Etrusco: Unit of Obstetrics and Gynecology, “Paolo Giaccone” Hospital, Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, 90127 Palermo, Italy
4. Chen T, Zheng W, Hu H, Luo C, Chen J, Yuan C, Lu W, Chen DZ, Gao H, Wu J. A Corresponding Region Fusion Framework for Multi-Modal Cervical Lesion Detection. IEEE/ACM Trans Comput Biol Bioinform 2024;21:959-970. PMID: 35635817. DOI: 10.1109/tcbb.2022.3178725.
Abstract
Cervical lesion detection (CLD) using colposcopic images of multi-modality (acetic and iodine) is critical to computer-aided diagnosis (CAD) systems for accurate, objective, and comprehensive cervical cancer screening. To robustly capture lesion features and conform with clinical diagnosis practice, we propose a novel corresponding region fusion network (CRFNet) for multi-modal CLD. CRFNet first extracts feature maps and generates proposals for each modality, then performs proposal shifting to obtain corresponding regions under large position shifts between modalities, and finally fuses those region features with a new corresponding channel attention to detect lesion regions on both modalities. To evaluate CRFNet, we build a large multi-modal colposcopic image dataset collected from our collaborative hospital. We show that our proposed CRFNet surpasses known single-modal and multi-modal CLD methods and achieves state-of-the-art performance, especially in terms of Average Precision.
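A hedged sketch of the fusion idea described above: this is not the published CRFNet, only a squeeze-and-excitation style channel attention applied to concatenated acetic- and iodine-image region features, with all tensor shapes chosen arbitrarily.

```python
# Minimal sketch (illustrative, not the authors' implementation): fuse region
# features from two colposcopic modalities with learned channel weights.
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, (2 * channels) // reduction),
            nn.ReLU(inplace=True),
            nn.Linear((2 * channels) // reduction, 2 * channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_acetic: torch.Tensor, feat_iodine: torch.Tensor) -> torch.Tensor:
        # feat_*: (N, C, H, W) region features from the two modalities
        fused = torch.cat([feat_acetic, feat_iodine], dim=1)   # (N, 2C, H, W)
        weights = self.mlp(fused.mean(dim=(2, 3)))             # (N, 2C) channel weights
        return fused * weights[:, :, None, None]               # re-weighted fused features

fusion = ChannelAttentionFusion(channels=256)
out = fusion(torch.randn(2, 256, 7, 7), torch.randn(2, 256, 7, 7))
print(out.shape)  # torch.Size([2, 512, 7, 7])
```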
5. Jiang Y, Wang C, Zhou S. Artificial intelligence-based risk stratification, accurate diagnosis and treatment prediction in gynecologic oncology. Semin Cancer Biol 2023;96:82-99. PMID: 37783319. DOI: 10.1016/j.semcancer.2023.09.005.
Abstract
As a data-driven science, artificial intelligence (AI) has paved a promising path toward an evolving health system teeming with thrilling opportunities for precision oncology. Notwithstanding the tremendous success of oncological AI in such fields as lung carcinoma, breast tumor and brain malignancy, less attention has been devoted to investigating the influence of AI on gynecologic oncology. Hereby, this review sheds light on the ever-increasing contribution of state-of-the-art AI techniques to the refined risk stratification and whole-course management of patients with gynecologic tumors, in particular, cervical, ovarian and endometrial cancer, centering on information and features extracted from clinical data (electronic health records), cancer imaging including radiological imaging, colposcopic images, cytological and histopathological digital images, and molecular profiling (genomics, transcriptomics, metabolomics and so forth). However, there are still noteworthy challenges beyond performance validation. Thus, this work further describes the limitations and challenges faced in the real-world implementation of AI models, as well as potential solutions to address these issues.
Affiliations
- Yuting Jiang, Chengdi Wang, Shengtao Zhou: Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
6. CTIFI: Clinical-experience-guided three-vision images features integration for diagnosis of cervical lesions. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104235.
7. Chen X, Pu X, Chen Z, Li L, Zhao KN, Liu H, Zhu H. Application of EfficientNet-B0 and GRU-based deep learning on classifying the colposcopy diagnosis of precancerous cervical lesions. Cancer Med 2023;12:8690-8699. PMID: 36629131. PMCID: PMC10134359. DOI: 10.1002/cam4.5581.
Abstract
BACKGROUND Colposcopy is indispensable for the diagnosis of cervical lesions. However, its diagnostic accuracy for high-grade squamous intraepithelial lesion (HSIL) is only about 50% and depends largely on the skill and experience of colposcopists. Advances in computational power have made it possible to apply artificial intelligence (AI) to clinical problems. Here, we explored the feasibility and accuracy of applying AI to the recognition and classification of precancerous and cancerous cervical colposcopic images. METHODS The images were collected from 6002 colposcopy examinations of normal controls, low-grade squamous intraepithelial lesion (LSIL), and HSIL. For each patient, the original, Schiller test, and acetic-acid images were all collected. We built a new neural network classification model based on a hybrid algorithm: EfficientNet-B0 was used as the backbone network for image feature extraction, and a GRU (gated recurrent unit) was applied for feature fusion of the three examination modes (original, acetic acid, and Schiller test). RESULTS The connected network classifier achieved an accuracy of 90.61% in distinguishing HSIL from normal and LSIL. Furthermore, when applied to the three-way ("trichotomy") task of distinguishing HSIL, LSIL, and normal controls simultaneously, the model reached an accuracy of 91.18%. CONCLUSION The high classification accuracy on colposcopic images indicates that AI has great potential as an effective tool for the accurate diagnosis of cervical disease and for early therapeutic intervention in cervical precancer.
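The sketch below illustrates the kind of architecture the abstract describes: an EfficientNet-B0 backbone whose per-view features are fused by a GRU over the three examination modes. It is an assumed reconstruction in PyTorch/torchvision, not the authors' implementation, and the hidden size and input resolution are arbitrary.

```python
# Minimal sketch (assumed architecture): CNN features per view, GRU fusion
# across the original, acetic-acid and Schiller (iodine) views, then a classifier.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class TriViewGRUClassifier(nn.Module):
    def __init__(self, num_classes: int = 3, hidden: int = 256):
        super().__init__()
        backbone = efficientnet_b0(weights=None)       # B0 yields 1280-dim pooled features
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(input_size=1280, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (N, 3 views, 3 channels, H, W)
        n, v = views.shape[:2]
        x = views.flatten(0, 1)                        # (N*V, 3, H, W)
        x = self.pool(self.features(x)).flatten(1)     # (N*V, 1280)
        x = x.view(n, v, -1)                           # sequence of per-view features
        _, h = self.gru(x)                             # final GRU hidden state
        return self.head(h[-1])                        # class logits

model = TriViewGRUClassifier()
logits = model(torch.randn(2, 3, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```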
Affiliations
- Xiaoyue Chen, Xiaowen Pu, Zhirou Chen, Haiyan Zhu: Department of Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
- Lanzhen Li, Haichun Liu: Department of Automation, Shanghai Jiao Tong University, Shanghai, China; Ningbo Artificial Intelligent Institute, Shanghai Jiao Tong University, Ningbo, China
- Kong-Nan Zhao: School of Basic Medical Science, Wenzhou Medical University, Wenzhou, China; Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, St Lucia, Queensland, Australia
8. Ma JH, You SF, Xue JS, Li XL, Chen YY, Hu Y, Feng Z. Computer-aided diagnosis of cervical dysplasia using colposcopic images. Front Oncol 2022;12:905623. PMID: 35992807. PMCID: PMC9389460. DOI: 10.3389/fonc.2022.905623.
Abstract
Background Computer-aided diagnosis of medical images is becoming increasingly significant in intelligent medicine. Colposcopy-guided biopsy with pathological diagnosis is the gold standard for diagnosing CIN and invasive cervical cancer, but it suffers from low sensitivity in differentiating cancer/HSIL from LSIL/normal, particularly in areas lacking skilled colposcopists and adequate medical resources. Methods The model used auto-segmented colposcopic images to extract color and texture features with the T-test method, augmented minority-class data using SMOTE to balance the skewed class distribution, and used an RBF-SVM to generate a preliminary output. This output was then combined with the TCT and HPV test results and age in a naïve Bayes classifier for cervical lesion diagnosis. Results The multimodal machine learning model achieved physician-level performance (sensitivity: 51.2%, specificity: 86.9%, accuracy: 81.8%) and could be interpreted through feature extraction and visualization. With the aid of the model, colposcopists improved their sensitivity from 53.7% to 70.7% with an acceptable specificity of 81.1% and accuracy of 79.6%. Conclusion With the computer-aided diagnosis system, physicians could identify cancer/HSIL with greater sensitivity, guiding biopsy and enabling timely treatment.
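A minimal sketch of the pipeline outlined above (SMOTE balancing, an RBF-SVM on image features, then a naive Bayes stage over the SVM score plus TCT, HPV and age), using scikit-learn and imbalanced-learn on synthetic data; feature dimensions and prevalences are assumptions, not the study's values.

```python
# Minimal sketch (synthetic data, not the authors' code): SMOTE + RBF-SVM on
# image features, followed by a naive Bayes fusion with clinical variables.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_img = rng.normal(size=(400, 20))                     # color/texture features (hypothetical)
y = (rng.random(400) < 0.2).astype(int)                # skewed HSIL+ labels
clinical = np.column_stack([rng.integers(0, 2, 400),   # TCT abnormal (yes/no)
                            rng.integers(0, 2, 400),   # HPV positive (yes/no)
                            rng.normal(40, 10, 400)])  # age

Xi_tr, Xi_te, yc_tr, yc_te, c_tr, c_te = train_test_split(
    X_img, y, clinical, test_size=0.3, random_state=0, stratify=y)

Xb, yb = SMOTE(random_state=0).fit_resample(Xi_tr, yc_tr)            # oversample minority class
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(Xb, yb)

# the image-based probability becomes one more feature for the naive Bayes stage
nb_train = np.column_stack([svm.predict_proba(Xi_tr)[:, 1], c_tr])
nb_test = np.column_stack([svm.predict_proba(Xi_te)[:, 1], c_te])
nb = GaussianNB().fit(nb_train, yc_tr)
print(f"Held-out accuracy: {nb.score(nb_test, yc_te):.3f}")
```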
9. Li P, Wang X, Liu P, Xu T, Sun P, Dong B, Xue H. Cervical Lesion Classification Method Based on Cross-Validation Decision Fusion Method of Vision Transformer and DenseNet. J Healthc Eng 2022;2022:3241422. PMID: 35607393. PMCID: PMC9124126. DOI: 10.1155/2022/3241422.
Abstract
Objective To better adapt to clinical applications, this paper proposes a cross-validation decision fusion method combining a Vision Transformer and DenseNet161. Methods The dataset consists of the acetic acid images most critical for clinical diagnosis, with SR areas processed by a dedicated method. The Vision Transformer and DenseNet161 models are trained with fivefold cross-validation, the fold-wise predictions of the two models are fused with different weights, and the five fused results are averaged to obtain the category with the highest probability. Results The proposed fusion method reaches an accuracy of 68% for the four-class classification of cervical lesions. Conclusions The fused model is better suited to clinical environments, effectively reducing the missed-detection rate and helping to safeguard patients' health.
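A small sketch of the weighted decision fusion step described above, with made-up softmax outputs for the two models across five folds; the 0.4/0.6 weighting is an assumed value, not the paper's.

```python
# Minimal sketch (assumed fusion scheme): weight the two models' per-fold class
# probabilities, average over folds, then take the argmax as the final class.
import numpy as np

n_images, n_classes, n_folds = 8, 4, 5
rng = np.random.default_rng(0)

# hypothetical softmax outputs: shape (folds, images, classes) for each model
p_vit = rng.dirichlet(np.ones(n_classes), size=(n_folds, n_images))
p_dense = rng.dirichlet(np.ones(n_classes), size=(n_folds, n_images))

w_vit = 0.4                                   # fusion weight (assumed value)
fused_per_fold = w_vit * p_vit + (1 - w_vit) * p_dense
fused = fused_per_fold.mean(axis=0)           # average the five fused results
pred = fused.argmax(axis=1)                   # category with the highest probability
print(pred)
```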
Affiliations
- Ping Li: Department of Gynecology and Obstetrics, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou 362000, Fujian, China
- Xiaoxia Wang: School of Medicine, Huaqiao University, Quanzhou 362000, Fujian, China
- Peizhong Liu: School of Medicine, Huaqiao University, Quanzhou 362000, Fujian, China; College of Engineering, Huaqiao University, Quanzhou 362000, Fujian, China
- Tianxiang Xu: College of Engineering, Huaqiao University, Quanzhou 362000, Fujian, China
- Pengming Sun, Binhua Dong, Huifeng Xue: Fujian Maternity and Child Health Hospital, Affiliated Hospital of Fujian Medical University, Fuzhou 350001, Fujian, China
10. Elakkiya R, Subramaniyaswamy V, Vijayakumar V, Mahanti A. Cervical Cancer Diagnostics Healthcare System Using Hybrid Object Detection Adversarial Networks. IEEE J Biomed Health Inform 2022;26:1464-1471. PMID: 34214045. DOI: 10.1109/jbhi.2021.3094311.
Abstract
Cervical cancer is one of the common cancers among women and it causes significant mortality in many developing countries. Diagnosis of cervical lesions is done using pap smear test or visual inspection using acetic acid (staining). Digital colposcopy, an inexpensive methodology, provides painless and efficient screening results. Therefore, automating cervical cancer screening using colposcopy images will be highly useful in saving many lives. Nowadays, many automation techniques using computer vision and machine learning in cervical screening gained attention, paving the way for diagnosing cervical cancer. However, most of the methods rely entirely on the annotation of cervical spotting and segmentation. This paper aims to introduce the Faster Small-Object Detection Neural Networks (FSOD-GAN) to address the cervical screening and diagnosis of cervical cancer and the type of cancer using digital colposcopy images. The proposed approach automatically detects the cervical spot using Faster Region-Based Convolutional Neural Network (FR-CNN) and performs the hierarchical multiclass classification of three types of cervical cancer lesions. Experimentation was done with colposcopy data collected from available open sources consisting of 1,993 patients with three cervical categories, and the proposed approach shows 99% accuracy in diagnosing the stages of cervical cancer.
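As a loose illustration of the detection stage named above (a Faster R-CNN region detector), the snippet below runs torchvision's generic implementation with an assumed four-class setup (three lesion types plus background); it is not the published FSOD-GAN.

```python
# Minimal sketch (not the published FSOD-GAN): a generic Faster R-CNN detector
# configured for three lesion classes plus background, run on a dummy image.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights=None, num_classes=4)  # 3 lesion types + background
model.eval()

with torch.no_grad():
    preds = model([torch.rand(3, 480, 640)])   # one RGB colposcopy-sized image
print(preds[0]["boxes"].shape, preds[0]["labels"][:5], preds[0]["scores"][:5])
```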
11. Hou X, Shen G, Zhou L, Li Y, Wang T, Ma X. Artificial Intelligence in Cervical Cancer Screening and Diagnosis. Front Oncol 2022;12:851367. PMID: 35359358. PMCID: PMC8963491. DOI: 10.3389/fonc.2022.851367.
Abstract
Cervical cancer remains a leading cause of cancer death in women, seriously threatening their physical and mental health. It is an easily preventable cancer with early screening and diagnosis. Although technical advancements have significantly improved the early diagnosis of cervical cancer, accurate diagnosis remains difficult owing to various factors. In recent years, artificial intelligence (AI)-based medical diagnostic applications have been on the rise and have excellent applicability in the screening and diagnosis of cervical cancer. Their benefits include reduced time consumption, reduced need for professional and technical personnel, and no bias owing to subjective factors. We, thus, aimed to discuss how AI can be used in cervical cancer screening and diagnosis, particularly to improve the accuracy of early diagnosis. The application and challenges of using AI in the diagnosis and treatment of cervical cancer are also discussed.
Affiliations
- Xin Hou, Guangyang Shen, Yinuo Li, Tian Wang, Xiangyi Ma: Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Liqiang Zhou: Cancer Centre and Center of Reproduction, Development and Aging, Faculty of Health Sciences, University of Macau, Macau, Macau SAR, China
12. Peng C, Zhang Y, Zheng J, Li B, Shen J, Li M, Liu L, Qiu B, Chen DZ. IMIIN: An inter-modality information interaction network for 3D multi-modal breast tumor segmentation. Comput Med Imaging Graph 2022;95:102021. PMID: 34861622. DOI: 10.1016/j.compmedimag.2021.102021.
Abstract
Breast tumor segmentation is critical to the diagnosis and treatment of breast cancer. In clinical breast cancer analysis, experts often examine multi-modal images, since such images provide abundant complementary information on tumor morphology. Known multi-modal breast tumor segmentation methods extracted 2D tumor features and used information from one modality to assist another. However, these methods were not conducive to fusing multi-modal information efficiently, and may even fuse interfering information, due to the lack of effective information interaction management between modalities. Besides, these methods did not consider the effect of small-tumor characteristics on the segmentation results. In this paper, we propose a new inter-modality information interaction network to segment breast tumors in 3D multi-modal MRI. Our network employs a hierarchical structure to extract local information of small tumors, which facilitates precise segmentation of tumor boundaries. Under this structure, we present a 3D tiny-object segmentation network based on DenseVoxNet to preserve the boundary details of the segmented tumors (especially for small tumors). Further, we introduce a bi-directional request-supply information interaction module between modalities so that each modality can request helpful auxiliary information according to its own needs. Experiments on a clinical 3D multi-modal MRI breast tumor dataset show that our new 3D IMIIN is superior to state-of-the-art methods and attains better segmentation results, suggesting that the method has good prospects for clinical application.
Affiliations
- Chengtao Peng: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China; Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
- Yue Zhang: College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Jian Zheng, Ming Li: Suzhou Institute of Biomedical Engineering and Technology, CAS, Suzhou 215163, China
- Bin Li, Lei Liu, Bensheng Qiu: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Jun Shen: Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou 510120, China
- Danny Z Chen: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
13. Takahashi T, Matsuoka H, Sakurai R, Akatsuka J, Kobayashi Y, Nakamura M, Iwata T, Banno K, Matsuzaki M, Takayama J, Aoki D, Yamamoto Y, Tamiya G. Development of a prognostic prediction support system for cervical intraepithelial neoplasia using artificial intelligence-based diagnosis. J Gynecol Oncol 2022;33:e57. PMID: 35712970. PMCID: PMC9428307. DOI: 10.3802/jgo.2022.33.e57.
Abstract
Objective Human papillomavirus subtypes are predictive indicators of cervical intraepithelial neoplasia (CIN) progression. While colposcopy is also an essential part of cervical cancer prevention, its accuracy and reproducibility are limited because of subjective evaluation. This study aimed to develop an artificial intelligence (AI) algorithm that can accurately detect the optimal lesion associated with prognosis using colposcopic images of CIN2 patients by utilizing objective AI diagnosis. Methods We identified colposcopic findings associated with the prognosis of patients with CIN2. We developed a convolutional neural network that can automatically detect the rate of high-grade lesions in the uterovaginal area in 12 segments. We finally evaluated the detection accuracy of our AI algorithm compared with the scores by multiple gynecologic oncologists. Results High-grade lesion occupancy in the uterovaginal area detected by senior colposcopists was significantly correlated with the prognosis of patients with CIN2. The detection rate for high-grade lesions in 12 segments of the uterovaginal area by the AI system was 62.1% for recall, and the overall correct response rate was 89.7%. Moreover, the percentage of high-grade lesions detected by the AI system was significantly correlated with the rate detected by multiple gynecologic senior oncologists (r=0.61). Conclusion Our novel AI algorithm can accurately determine high-grade lesions associated with prognosis on colposcopic images, and these results provide an insight into the additional utility of colposcopy for the management of patients with CIN2.
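A minimal illustration of the agreement analysis reported above: correlating per-case AI-detected high-grade segment occupancy with colposcopists' scores via Pearson's r (all values below are invented, not the study's data).

```python
# Minimal sketch: Pearson correlation between AI-detected high-grade lesion
# occupancy and expert scores, the kind of r = 0.61 agreement reported above.
import numpy as np
from scipy.stats import pearsonr

ai_occupancy = np.array([0.10, 0.25, 0.40, 0.55, 0.30, 0.75, 0.20, 0.60])   # fraction of 12 segments
expert_score = np.array([0.15, 0.20, 0.45, 0.50, 0.35, 0.70, 0.25, 0.55])   # hypothetical expert ratings

r, p = pearsonr(ai_occupancy, expert_score)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```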
Affiliations
- Takayuki Takahashi: Statistical Genetics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan; Department of Obstetrics and Gynecology, Keio University School of Medicine, Tokyo, Japan
- Hikaru Matsuoka, Rieko Sakurai, Motomichi Matsuzaki: Statistical Genetics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Jun Akatsuka: Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan; Department of Urology, Nippon Medical School Hospital, Tokyo, Japan
- Yusuke Kobayashi, Masaru Nakamura, Takashi Iwata, Kouji Banno, Daisuke Aoki: Department of Obstetrics and Gynecology, Keio University School of Medicine, Tokyo, Japan
- Jun Takayama, Gen Tamiya: Statistical Genetics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan; Tohoku University Graduate School of Medicine, Miyagi, Japan
- Yoichiro Yamamoto: Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
14. Hybrid Transfer Learning for Classification of Uterine Cervix Images for Cervical Cancer Screening. J Digit Imaging 2021;33:619-631. PMID: 31848896. DOI: 10.1007/s10278-019-00269-1.
Abstract
Transfer learning using deep pre-trained convolutional neural networks is increasingly used to solve a large number of problems in the medical field. Although trained on images from an entirely different domain, these networks can be adapted to solve problems in a new domain. Transfer learning involves fine-tuning a pre-trained network with optimal values of hyperparameters such as learning rate, batch size, and number of training epochs. The process of training the network identifies the relevant features for solving a specific problem, and adapting the pre-trained network to a different problem requires fine-tuning until relevant features are obtained. This is facilitated by the large number of filters present in the convolutional layers of the pre-trained network. Only a few of these filters are useful for solving the problem in a different domain, while the rest are irrelevant and may only reduce the efficacy of the network. By minimizing the number of filters required to solve the problem, the efficiency of training the network can be improved. In this study, we consider the identification of relevant filters using the pre-trained networks AlexNet and VGG-16 to detect cervical cancer from cervix images. This paper presents a novel hybrid transfer learning technique, in which a CNN is built and trained from scratch, with initial weights of only those filters that were identified as relevant using AlexNet and VGG-16. This study used 2198 cervix images, with 1090 belonging to the negative class and 1108 to the positive class. Our experiment using hybrid transfer learning achieved an accuracy of 91.46%.
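For context, the snippet below shows plain transfer learning with a VGG-16 backbone (frozen convolutional filters, new two-class head); it does not implement the paper's hybrid filter-selection step, and the weights, batch and hyperparameters are placeholders.

```python
# Minimal sketch (generic transfer learning, not the paper's hybrid method):
# freeze the VGG-16 feature extractor and train a new two-class head.
import torch
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(weights=None)               # load ImageNet weights here in practice
for p in model.features.parameters():
    p.requires_grad = False               # keep the convolutional filters fixed
model.classifier[6] = nn.Linear(4096, 2)  # replace the 1000-way head with 2 classes

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)      # dummy batch of cervix images
labels = torch.tensor([0, 1, 1, 0])       # negative / positive class labels
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(float(loss))
```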
15. Chen T, Liu X, Feng R, Wang W, Yuan C, Lu W, He H, Gao H, Ying H, Chen DZ, Wu J. Discriminative Cervical Lesion Detection in Colposcopic Images with Global Class Activation and Local Bin Excitation. IEEE J Biomed Health Inform 2021;26:1411-1421. PMID: 34314364. DOI: 10.1109/jbhi.2021.3100367.
Abstract
Accurate cervical lesion detection (CLD) methods using colposcopic images are in high demand in computer-aided diagnosis (CAD) for the automatic diagnosis of high-grade squamous intraepithelial lesions (HSIL). However, compared to natural scene images, the specific characteristics of colposcopic images, such as low contrast, visual similarity, and ambiguous lesion boundaries, make it difficult to accurately locate HSIL regions and significantly impede the performance of existing CLD approaches. To tackle these difficulties and better capture cervical lesions, we develop novel feature-enhancing mechanisms from both global and local perspectives and propose a new discriminative CLD framework, called CervixNet, with a Global Class Activation (GCA) module and a Local Bin Excitation (LBE) module. Specifically, the GCA module learns discriminative features by introducing an auxiliary classifier and guides our model to focus on HSIL regions while ignoring noisy regions; it globally facilitates the feature extraction process and helps boost feature discriminability. Further, our LBE module excites lesion features in a local manner and allows lesion regions to be enhanced at a finer granularity by explicitly modelling the inter-dependencies among bins of the proposal features. Extensive experiments on 9,888 clinical colposcopic images verify the superiority of our method (AP.75 = 20.45) over state-of-the-art models on four widely used metrics.
16. Liu J, Liang T, Peng Y, Peng G, Sun L, Li L, Dong H. Segmentation of acetowhite region in uterine cervical image based on deep learning. Technol Health Care 2021;30:469-482. PMID: 34180439. DOI: 10.3233/thc-212890.
Abstract
BACKGROUND Acetowhite (AW) region is a critical physiological phenomenon of precancerous lesions of cervical cancer. An accurate segmentation of the AW region can provide a useful diagnostic tool for gynecologic oncologists in screening cervical cancers. Traditional approaches for the segmentation of AW regions relied heavily on manual or semi-automatic methods. OBJECTIVE To automatically segment the AW regions from colposcope images. METHODS First, the cervical region was extracted from the original colposcope images by the k-means clustering algorithm. Second, a deep learning-based image semantic segmentation model named DeepLab V3+ was used to segment the AW region from the cervical image. RESULTS Compared to the fuzzy clustering segmentation algorithm and the level set segmentation algorithm, the new method achieved a mean Jaccard Index (JI) accuracy of 63.6% (improved by 27.9% and 27.5%, respectively), a mean specificity of 94.9% (improved by 55.8% and 32.3%, respectively) and a mean accuracy of 91.2% (improved by 38.6% and 26.4%, respectively). The proposed method achieved a mean sensitivity of 78.2%, which was 17.4% and 10.1% lower than the two comparison algorithms, respectively. Compared to the image semantic segmentation models U-Net and PSPNet, the proposed method yielded a higher mean JI accuracy, mean sensitivity and mean accuracy. CONCLUSION The improved segmentation performance suggests that the proposed method may serve as a useful complementary tool in screening cervical cancer.
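A short sketch of the evaluation metrics used above, computing the Jaccard Index, sensitivity and specificity of a predicted acetowhite mask against a ground-truth mask on toy arrays:

```python
# Minimal sketch: Jaccard Index (JI), sensitivity and specificity for a binary
# segmentation mask, evaluated on small toy masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    ji = tp / (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return ji, sensitivity, specificity

pred = np.zeros((64, 64), dtype=int); pred[10:40, 10:40] = 1     # toy predicted mask
truth = np.zeros((64, 64), dtype=int); truth[15:45, 15:45] = 1   # toy ground-truth mask
print([round(m, 3) for m in segmentation_metrics(pred, truth)])
```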
Affiliations
- Jun Liu, Tong Liang, Gengyou Peng, Lechan Sun, Hua Dong: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Yun Peng: San Diego, California, CA 91355, USA
- Ling Li: Department of Gynecologic Oncology, Jiangxi Maternal and Child Health Hospital, Jiangxi 330006, China
17. Li Y, Liu ZH, Xue P, Chen J, Ma K, Qian T, Zheng Y, Qiao YL. GRAND: A large-scale dataset and benchmark for cervical intraepithelial Neoplasia grading with fine-grained lesion description. Med Image Anal 2021;70:102006. PMID: 33690025. DOI: 10.1016/j.media.2021.102006.
Abstract
Cervical cancer causes the fourth most cancer-related deaths of women worldwide. Early detection of cervical intraepithelial neoplasia (CIN) can significantly increase the survival rate of patients. The World Health Organization (WHO) divides CIN into three grades (CIN1, CIN2 and CIN3), and in clinical practice different CIN grades require different treatments. Although existing studies have proposed computer-aided diagnosis (CAD) systems for cervical cancer diagnosis, most of them fail to accurately separate CIN1 from CIN2/3 because of their similar appearance under colposcopy. To boost the accuracy of CAD systems, we construct a colposcopic image dataset for GRAding cervical intraepithelial Neoplasia with fine-grained lesion Description (GRAND). The dataset consists of colposcopic images collected from 8,604 patients along with their pathological reports. Additionally, we invited experienced colposcopists to annotate two main clues usually adopted for clinical diagnosis of CIN grade, i.e., texture of acetowhite epithelium (TAE) and appearance of blood vessels (ABV). A multi-rater model using the annotated clues is benchmarked on our dataset. The proposed framework contains several sub-networks (raters) that exploit the fine-grained lesion features TAE and ABV, respectively, by contrastive learning, and a backbone network that extracts global information from colposcopic images. A comprehensive experiment is conducted on our GRAND dataset. The experimental results demonstrate the benefit of using the additional lesion descriptions (TAE and ABV), which increase CIN grading accuracy by over 10%. Furthermore, we conduct a human-machine confrontation to evaluate the potential of the proposed benchmark framework for clinical applications. In particular, three colposcopists at different professional levels (intern, in-service and professional) are invited to compete with our benchmark framework on the same extra test set; our framework achieves a CIN grading accuracy comparable to that of a professional colposcopist.
Affiliations
- Zhi-Hua Liu: Diagnosis and Treatment for Cervical Lesions Center, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Peng Xue, You-Lin Qiao: Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kai Ma: Tencent Jarvis Lab, Shenzhen, China
18. Yu Y, Ma J, Zhao W, Li Z, Ding S. MSCI: A multistate dataset for colposcopy image classification of cervical cancer screening. Int J Med Inform 2020;146:104352. PMID: 33360117. DOI: 10.1016/j.ijmedinf.2020.104352.
Abstract
BACKGROUND Cervical cancer is the second most common female cancer globally, and it is vital to detect cervical cancer with low cost at an early stage using automated screening methods of high accuracy, especially in areas with insufficient medical resources. Automatic detection of cervical intraepithelial neoplasia (CIN) can effectively prevent cervical cancer. OBJECTIVES Due to the deficiency of standard and accessible colposcopy image datasets, we present a dataset containing 4753 colposcopy images acquired from 679 patients in three states (acetic acid reaction, green filter, and iodine test) for detection of cervical intraepithelial neoplasia. Based on this dataset, a new computer-aided method for cervical cancer screening was proposed. METHODS We employed a wide range of methods to comprehensively evaluate our proposed dataset. Hand-crafted feature extraction methods and deep learning methods were used for the performance verification of the multistate colposcopy image (MSCI) dataset. Importantly, we propose a gated recurrent convolutional neural network (C-GCNN) for colposcopy image analysis that considers time series and combined multistate cervical images for CIN grading. RESULTS The experimental results showed that the proposed C-GCNN model achieves the best classification performance in CIN grading compared with hand-crafted feature extraction methods and classic deep learning methods. The results showed an accuracy of 96.87 %, a sensitivity of 95.68 %, and a specificity of 98.72 %. CONCLUSION A multistate colposcopy image dataset (MSCI) is proposed. A CIN grading model (C-GCNN) based on the MSCI dataset is established, which provides a potential method for automated cervical cancer screening.
Affiliations
- Yao Yu, Shuai Ding: The School of Management, Hefei University of Technology, China
- Jie Ma: The First Affiliated Hospital of USTC, China
- Zhenmin Li: The School of Microelectronics, Hefei University of Technology, China
19. Li Y, Chen J, Xue P, Tang C, Chang J, Chu C, Ma K, Li Q, Zheng Y, Qiao Y. Computer-Aided Cervical Cancer Diagnosis Using Time-Lapsed Colposcopic Images. IEEE Trans Med Imaging 2020;39:3403-3415. PMID: 32406830. DOI: 10.1109/tmi.2020.2994778.
Abstract
Cervical cancer causes the fourth most cancer-related deaths of women worldwide. Early detection of cervical intraepithelial neoplasia (CIN) can significantly increase the survival rate of patients. In this paper, we propose a deep learning framework for the accurate identification of LSIL+ (including CIN and cervical cancer) using time-lapsed colposcopic images. The proposed framework involves two main components, i.e., key-frame feature encoding networks and feature fusion network. The features of the original (pre-acetic-acid) image and the colposcopic images captured at around 60s, 90s, 120s and 150s during the acetic acid test are encoded by the feature encoding networks. Several fusion approaches are compared, all of which outperform the existing automated cervical cancer diagnosis systems using a single time slot. A graph convolutional network with edge features (E-GCN) is found to be the most suitable fusion approach in our study, due to its excellent explainability consistent with the clinical practice. A large-scale dataset, containing time-lapsed colposcopic images from 7,668 patients, is collected from the collaborative hospital to train and validate our deep learning framework. Colposcopists are invited to compete with our computer-aided diagnosis system. The proposed deep learning framework achieves a classification accuracy of 78.33%-comparable to that of an in-service colposcopist-which demonstrates its potential to provide assistance in the realistic clinical scenario.
20. The challenges of colposcopy for cervical cancer screening in LMICs and solutions by artificial intelligence. BMC Med 2020;18:169. PMID: 32493320. PMCID: PMC7271416. DOI: 10.1186/s12916-020-01613-x.
Abstract
BACKGROUND The World Health Organization (WHO) has called for global action toward the elimination of cervical cancer. One of the main strategies is, by 2030, to screen 70% of women between the ages of 35 and 45 years and to manage 90% of women with identified disease appropriately. Approximately 85% of cervical cancers occur in low- and middle-income countries (LMICs). Colposcopy-guided biopsy is crucial for detecting cervical intraepithelial neoplasia (CIN) and has become the main bottleneck limiting screening performance. Unprecedented advances in artificial intelligence (AI) enable the synergy of deep learning and digital colposcopy, which offers opportunities for automatic image-based diagnosis. To this end, we discuss the main challenges of traditional colposcopy and the solutions offered by AI-guided digital colposcopy as an auxiliary diagnostic tool in LMICs. MAIN BODY Existing challenges for the application of colposcopy in LMICs include strong dependence on the subjective experience of operators, substantial inter- and intra-operator variability, a shortage of experienced colposcopists and of comprehensive colposcopy training courses, and uniform diagnostic standards and strict quality control that are hard for colposcopists with limited diagnostic ability to follow, resulting in discrepant reporting and documentation of colposcopy impressions. Organized colposcopy training courses are an effective way to enhance the diagnostic ability of colposcopists, but implementing them in practice may not always be feasible for improving overall diagnostic performance in a short period of time. Fortunately, AI has the potential to address the colposcopic bottleneck by assisting colposcopists in judging colposcopy images, detecting underlying CIN, and guiding biopsy sites. An automated colposcopy examination workflow could create a novel cervical cancer screening model, reduce potential false negatives and false positives, and improve the accuracy of colposcopy diagnosis and cervical biopsy. CONCLUSION We believe that a practical and accurate AI-guided digital colposcopy has the potential to strengthen diagnostic ability in guiding cervical biopsy, thereby improving cervical cancer screening performance in LMICs and ultimately accelerating the process of global cervical cancer elimination.
21. Konstandinou C, Kostopoulos S, Glotsos D, Pappa D, Ravazoula P, Michail G, Kalatzis I, Asvestas P, Lavdas E, Cavouras D, Sakellaropoulos G. GPU-enabled design of an adaptable pattern recognition system for discriminating squamous intraepithelial lesions of the cervix. Biomed Tech (Berl) 2020;65:315-325. PMID: 31747374. DOI: 10.1515/bmt-2019-0040.
Abstract
The aim of the present study was to design an adaptable pattern recognition (PR) system to discriminate low- from high-grade squamous intraepithelial lesions (LSIL and HSIL, respectively) of the cervix using microscopy images of hematoxylin and eosin (H&E)-stained biopsy material from two different medical centers. Clinical material comprised H&E-stained biopsies of 66 patients diagnosed with LSIL (34 cases) or HSIL (32 cases). Regions of interest were selected from each patient's digitized microscopy images. Seventy-seven features were generated, regarding the texture, morphology and spatial distribution of nuclei. The probabilistic neural network (PNN) classifier, the exhaustive search feature selection method, the leave-one-out (LOO) and the bootstrap validation methods were used to design the PR system and to assess its precision. Optimal PR system design and evaluation were made feasible by the employment of graphics processing unit (GPU) and Compute Unified Device Architecture (CUDA) technologies. The accuracy of the PR-system was 93% and 88.6% when using the LOO and bootstrap validation methods, respectively. The proposed PR system for discriminating LSIL from HSIL of the cervix was designed to operate in a clinical environment, having the capability of being redesigned when new verified cases are added to its repository and when data from other medical centers are included, following similar biopsy material preparation procedures.
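A minimal sketch of the validation scheme described above, leave-one-out cross-validation with feature selection refit inside each fold; a k-nearest-neighbour classifier stands in for the paper's probabilistic neural network, and the 66x77 feature matrix is synthetic.

```python
# Minimal sketch (synthetic data; KNN used as a stand-in for the PNN classifier):
# leave-one-out validation with per-fold univariate feature selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 77))              # 66 cases x 77 texture/morphology features
y = np.array([0] * 34 + [1] * 32)          # 34 LSIL vs. 32 HSIL labels
X[y == 1, :5] += 1.0                       # inject a weak class difference

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=8),     # feature selection inside each fold
                    KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {acc.mean():.3f}")
```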
Affiliation(s)
- Christos Konstandinou: Department of Medical Physics, School of Health Sciences, Faculty of Medicine, University of Patras, Rio, Patras, Greece
- Spiros Kostopoulos: Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Ag. Spyridonos Street, Egaleo, 122 43 Athens, Greece
- Dimitris Glotsos: Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Dimitra Pappa: Department of Pathology, IASO Thessalias, Larissa, Greece
- George Michail: Department of Obstetrics and Gynecology, University Hospital of Patras, Rio, Greece
- Ioannis Kalatzis: Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Pantelis Asvestas: Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Eleftherios Lavdas: Department of Biomedical Sciences, University of West Attica, Athens, Greece
- Dionisis Cavouras: Medical Image and Signal Processing Laboratory (MEDISP), Department of Biomedical Engineering, University of West Attica, Athens, Greece
- George Sakellaropoulos: Department of Medical Physics, School of Health Sciences, Faculty of Medicine, University of Patras, Rio, Patras, Greece
22
Yue Z, Ding S, Zhao W, Wang H, Ma J, Zhang Y, Zhang Y. Automatic CIN Grades Prediction of Sequential Cervigram Image Using LSTM With Multistate CNN Features. IEEE J Biomed Health Inform 2020; 24:844-854. [DOI: 10.1109/jbhi.2019.2922682] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
23
Zhang T, Luo YM, Li P, Liu PZ, Du YZ, Sun P, Dong B, Xue H. Cervical precancerous lesions classification using pre-trained densely connected convolutional networks with colposcopy images. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101566] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
24
Chen H, Yang L, Li L, Li M, Chen Z. An efficient cervical disease diagnosis approach using segmented images and cytology reporting. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2019.07.008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
25
Kudva V, Prasad K, Guruvare S. Android Device-Based Cervical Cancer Screening for Resource-Poor Settings. J Digit Imaging 2018; 31:646-654. [PMID: 29777323 DOI: 10.1007/s10278-018-0083-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022] Open
Abstract
Visual inspection with acetic acid (VIA) is an effective, affordable and simple test for cervical cancer screening in resource-poor settings. However, considerable expertise, which is scarce in developing countries, is needed to differentiate cancerous lesions from normal tissue. Many studies have attempted to automate cervical cancer detection from cervix images acquired during the VIA process. These studies used images acquired through colposcopy or cervicography. However, colposcopy is expensive and hence is not feasible as a screening tool in resource-poor settings. Cervicography uses a digital camera to acquire cervix images which are subsequently sent to experts for evaluation. Hence, cervicography does not provide a real-time decision on whether the cervix is normal during the VIA examination. If the cervix is found to be abnormal, the patient may be referred to a hospital for further evaluation using a Pap smear and/or biopsy. An Android device with an inbuilt app to acquire images and provide instant results would be an obvious choice in resource-poor settings. In this paper, we propose an algorithm for the analysis of cervix images acquired using an Android device, which can be used to develop a decision support system providing an instant decision during cervical cancer screening. The algorithm offers an accuracy of 97.94%, a sensitivity of 99.05% and a specificity of 97.16%.
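As a point of reference for the figures reported above, the minimal sketch below shows how accuracy, sensitivity and specificity of a binary screening decision are derived from a confusion matrix. The label vectors are simulated placeholders, not the study's data, and the sketch does not reproduce the authors' image-analysis algorithm.

```python
# Minimal sketch: accuracy, sensitivity and specificity from a binary
# confusion matrix. The label vectors are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])   # 1 = abnormal cervix, 0 = normal
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1])   # classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} accuracy={accuracy:.3f}")
```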
Affiliation(s)
- Vidya Kudva: School of Information Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India; NMAMIT, Nitte, 574110, India
- Keerthana Prasad: School of Information Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- Shyamala Guruvare: Department of Obstetrics and Gynecology, Kasturba Medical College, Manipal, Karnataka, 576104, India
26
Conceição T, Braga C, Rosado L, Vasconcelos MJM. A Review of Computational Methods for Cervical Cells Segmentation and Abnormality Classification. Int J Mol Sci 2019; 20:E5114. [PMID: 31618951 PMCID: PMC6834130 DOI: 10.3390/ijms20205114] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 10/07/2019] [Accepted: 10/09/2019] [Indexed: 02/07/2023] Open
Abstract
Cervical cancer is one of the most common cancers in women worldwide, affecting around 570,000 new patients each year. Although there have been great improvements over the years, current screening procedures can still suffer from long and tedious workflows and ambiguities. The increasing interest in developing computer-aided solutions for cervical cancer screening aims to address these common practical difficulties, which are especially frequent in the low-income countries where most deaths caused by cervical cancer occur. In this review, an overview of the disease and its current screening procedures is first presented. An in-depth analysis of the most relevant computational methods available in the literature for cervical cell analysis then follows. In particular, this work focuses on topics related to automated quality assessment, segmentation and classification, including an extensive literature review and a critical discussion. Since the major goal of this timely review is to support the development of new automated tools that can facilitate cervical screening procedures, this work also provides some considerations regarding the next generation of computer-aided diagnosis systems and future research directions.
Affiliation(s)
- Luís Rosado: Fraunhofer Portugal AICOS, 4200-135 Porto, Portugal
27
Hu L, Bell D, Antani S, Xue Z, Yu K, Horning MP, Gachuhi N, Wilson B, Jaiswal MS, Befano B, Long LR, Herrero R, Einstein MH, Burk RD, Demarco M, Gage JC, Rodriguez AC, Wentzensen N, Schiffman M. An Observational Study of Deep Learning and Automated Evaluation of Cervical Images for Cancer Screening. J Natl Cancer Inst 2019; 111:923-932. [PMID: 30629194 PMCID: PMC6748814 DOI: 10.1093/jnci/djy225] [Citation(s) in RCA: 197] [Impact Index Per Article: 32.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Revised: 10/12/2018] [Accepted: 12/03/2018] [Indexed: 12/26/2022] Open
Abstract
BACKGROUND Human papillomavirus vaccination and cervical screening are lacking in most lower resource settings, where approximately 80% of more than 500 000 cancer cases occur annually. Visual inspection of the cervix following acetic acid application is practical but not reproducible or accurate. The objective of this study was to develop a "deep learning"-based visual evaluation algorithm that automatically recognizes cervical precancer/cancer. METHODS A population-based longitudinal cohort of 9406 women ages 18-94 years in Guanacaste, Costa Rica was followed for 7 years (1993-2000), incorporating multiple cervical screening methods and histopathologic confirmation of precancers. Tumor registry linkage identified cancers up to 18 years. Archived, digitized cervical images from screening, taken with a fixed-focus camera ("cervicography"), were used for training/validation of the deep learning-based algorithm. The resultant image prediction score (0-1) could be categorized to balance sensitivity and specificity for detection of precancer/cancer. All statistical tests were two-sided. RESULTS Automated visual evaluation of enrollment cervigrams identified cumulative precancer/cancer cases with greater accuracy (area under the curve [AUC] = 0.91, 95% confidence interval [CI] = 0.89 to 0.93) than original cervigram interpretation (AUC = 0.69, 95% CI = 0.63 to 0.74; P < .001) or conventional cytology (AUC = 0.71, 95% CI = 0.65 to 0.77; P < .001). A single visual screening round restricted to women at the prime screening ages of 25-49 years could identify 127 (55.7%) of 228 precancers (cervical intraepithelial neoplasia 2/cervical intraepithelial neoplasia 3/adenocarcinoma in situ [AIS]) diagnosed cumulatively in the entire adult population (ages 18-94 years) while referring 11.0% for management. CONCLUSIONS The results support consideration of automated visual evaluation of cervical images from contemporary digital cameras. If achieved, this might permit dissemination of effective point-of-care cervical screening.
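To illustrate the thresholding step described here, the sketch below shows one common way to turn a continuous image prediction score (0-1) into a screening decision by picking an ROC operating point that balances sensitivity and specificity; Youden's J is used as the balancing rule, which is our assumption, not the authors' stated criterion. The scores and labels are simulated placeholders, not the study's deep-learning outputs.

```python
# Minimal sketch: choosing an operating threshold on a 0-1 prediction score.
# Scores and labels are simulated; Youden's J is one of several possible
# balancing criteria.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)                                    # 1 = precancer/cancer
scores = np.clip(0.5 * y + rng.normal(0.3, 0.2, size=500), 0, 1)    # simulated model scores

fpr, tpr, thresholds = roc_curve(y, scores)
best = np.argmax(tpr - fpr)                                         # maximize Youden's J
print(f"AUC={roc_auc_score(y, scores):.3f}  threshold={thresholds[best]:.3f}  "
      f"sensitivity={tpr[best]:.3f}  specificity={1 - fpr[best]:.3f}")
```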
Affiliation(s)
- Mark Schiffman, MD, MPH (correspondence): National Cancer Institute, Room 6E544, 9609 Medical Center Drive, Rockville, MD 20850
28
P E, M S. Automatic Approach for Cervical Cancer Detection and Segmentation Using Neural Network Classifier. Asian Pac J Cancer Prev 2018; 19:3571-3580. [PMID: 30583685 PMCID: PMC6428557 DOI: 10.31557/apjcp.2018.19.12.3571] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Cervical cancer is a major cause of death among women around the world every year. It can be cured if it is screened early and patients are given timely treatment. This paper proposes a novel methodology for screening cervical cancer using cervigram images. An Oriented Local Histogram Technique (OLHT) is applied to the cervical image to enhance its edges, and a Dual Tree Complex Wavelet Transform (DT-CWT) is then applied to obtain a multi-resolution image. Wavelet, Grey Level Co-occurrence Matrix (GLCM), moment invariant and Local Binary Pattern (LBP) features are extracted from this transformed multi-resolution cervical image. The extracted features are used to train and test a feed-forward back-propagation neural network that classifies the given cervical image as normal or abnormal. Morphological operations are applied to the abnormal cervical image to detect and segment the cancer region. The performance of the proposed cervical cancer detection system is analyzed in terms of sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (LRP), negative likelihood ratio (LRN), precision, false positive rate and false negative rate. The system achieves 97.42% sensitivity, 99.36% specificity, 98.29% accuracy, a PPV of 97.28%, an NPV of 92.17%, an LRP of 141.71, an LRN of 0.0936, 97.38% precision, and reported false positive and false negative rates of 96.72% and 91.36%, respectively. From the simulation results, the proposed methodology outperforms conventional methodologies for cervical cancer detection and segmentation.
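As a rough illustration of the texture descriptors named in this abstract, the sketch below extracts a few GLCM statistics and a uniform-LBP histogram and feeds them to a small feed-forward neural network. It is a sketch under stated assumptions: the images and labels are random placeholders, the OLHT, DT-CWT and segmentation stages are omitted, and scikit-image >= 0.19 is assumed for the graycomatrix/graycoprops names.

```python
# Minimal sketch: GLCM + LBP texture features with a feed-forward neural
# network classifier. Images and labels are random placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.neural_network import MLPClassifier

def texture_features(img):
    """Concatenate a few GLCM statistics with a uniform-LBP histogram."""
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, lbp_hist])

rng = np.random.default_rng(2)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)   # placeholder cervigrams
labels = rng.integers(0, 2, size=40)                               # 0 = normal, 1 = abnormal

X = np.array([texture_features(im) for im in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```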
Affiliation(s)
- Elayaraja P: Department of ECE, Kongunadu College of Engineering and Technology, Trichy, Tamilnadu, India
29
Jaya BK, Kumar SS. Image Registration based Cervical Cancer Detection and Segmentation Using ANFIS Classifier. Asian Pac J Cancer Prev 2018; 19:3203-3209. [PMID: 30486611 PMCID: PMC6318403 DOI: 10.31557/apjcp.2018.19.11.3203] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Cervical cancer is one of the leading cancers in women around the world. In this paper, an Adaptive Neuro Fuzzy Inference System (ANFIS) classifier-based cervical cancer detection and segmentation methodology is proposed. The proposed system consists of the following stages: image registration, feature extraction, classification and segmentation. The Fast Fourier Transform (FFT) is used for image registration. Grey Level Co-occurrence Matrix (GLCM), grey level and trinary features are then extracted from the registered cervical image. These extracted features are used to train the ANFIS classifier, which classifies each cervical image. Morphological operations are then applied to the classified cervical image to detect and segment the cancer region. Simulations on a large cervical image dataset demonstrate that the proposed cervical cancer detection and segmentation methodology outperforms state-of-the-art methods in terms of sensitivity, specificity and accuracy.
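The registration stage mentioned here is commonly implemented as FFT-based phase correlation; a minimal sketch of that single step, on synthetic frames, is given below. The function name and test data are illustrative assumptions, and the ANFIS classification and morphological segmentation stages are not reproduced.

```python
# Minimal sketch: FFT phase correlation to estimate the translation between
# two frames. The frames are synthetic; only integer shifts are recovered.
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the (row, col) shift d such that mov == np.roll(ref, d)."""
    R = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12                       # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    half = np.array(ref.shape) // 2
    peak[peak > half] -= np.array(ref.shape)[peak > half]   # wrap to signed shifts
    return tuple(int(v) for v in peak)

rng = np.random.default_rng(3)
ref = rng.random((128, 128))
mov = np.roll(ref, shift=(5, -8), axis=(0, 1))   # shifted copy of the reference frame
print(phase_correlation_shift(ref, mov))         # expected: (5, -8)
```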
Affiliation(s)
- B Karthiga Jaya: ECE, Dhanalakshmi Srinivasan Engineering College, Tamilnadu, India
30
Multifeature Quantification of Nuclear Properties from Images of H&E-Stained Biopsy Material for Investigating Changes in Nuclear Structure with Advancing CIN Grade. J Healthc Eng 2018; 2018:6358189. [PMID: 30073048 PMCID: PMC6057323 DOI: 10.1155/2018/6358189] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2017] [Revised: 05/03/2018] [Accepted: 06/03/2018] [Indexed: 01/27/2023]
Abstract
Background Cervical dysplasia is a precancerous condition, and if left untreated, it may lead to cervical cancer, which is the second most common cancer in women. The purpose of this study was to investigate differences in nuclear properties of the H&E-stained biopsy material between low CIN and high CIN cases and associate those properties with the CIN grade. Methods The clinical material comprised hematoxylin and eosin- (H&E-) stained biopsy specimens from lesions of 44 patients diagnosed with cervical intraepithelial neoplasia (CIN). Four or five nonoverlapping microscopy images were digitized from each patient's H&E specimens, from regions indicated by the expert physician. Sixty-three textural and morphological nuclear features were generated for each patient's images. The Wilcoxon statistical test and the point biserial correlation were used to estimate each feature's discriminatory power between low CIN and high CIN cases and its correlation with the advancing CIN grade, respectively. Results Statistical analysis showed 19 features that quantify nuclear shape, size, and texture and sustain statistically significant differences between low CIN and high CIN cases. These findings revealed that nuclei in high CIN cases, as compared to nuclei in low CIN cases, have more irregular shape, are larger in size, are coarser in texture, contain higher edges, have higher local contrast, are more inhomogeneous, and comprise structures of different intensities. Conclusion A systematic statistical analysis of nucleus features, quantified from the H&E-stained biopsy material, showed that there are significant differences in the shape, size, and texture of nuclei between low CIN and high CIN cases.
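For readers unfamiliar with the two statistics used here, the sketch below runs a Wilcoxon rank-sum test (our reading of "the Wilcoxon statistical test" for two independent groups) and a point-biserial correlation on a simulated nuclear feature. The group sizes and values are placeholders, not the study's measurements.

```python
# Minimal sketch: Wilcoxon rank-sum test and point-biserial correlation for a
# single nuclear feature versus binary CIN grade. Values are simulated.
import numpy as np
from scipy.stats import ranksums, pointbiserialr

rng = np.random.default_rng(4)
low_cin = rng.normal(loc=40.0, scale=5.0, size=22)    # e.g. mean nuclear area, low-CIN cases
high_cin = rng.normal(loc=48.0, scale=6.0, size=22)   # same feature, high-CIN cases

stat, p_value = ranksums(low_cin, high_cin)
grade = np.r_[np.zeros(22), np.ones(22)]              # 0 = low CIN, 1 = high CIN
r_pb, p_pb = pointbiserialr(grade, np.r_[low_cin, high_cin])
print(f"rank-sum p={p_value:.4f}  point-biserial r={r_pb:.3f} (p={p_pb:.4f})")
```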
31
Liu J, Li L, Wang L. Acetowhite region segmentation in uterine cervix images using a registered ratio image. Comput Biol Med 2018; 93:47-55. [DOI: 10.1016/j.compbiomed.2017.12.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2017] [Revised: 12/14/2017] [Accepted: 12/14/2017] [Indexed: 12/24/2022]
32
Xu T, Zhang H, Xin C, Kim E, Long LR, Xue Z, Antani S, Huang X. Multi-feature based Benchmark for Cervical Dysplasia Classification Evaluation. Pattern Recognit 2017; 63:468-475. [PMID: 28603299 PMCID: PMC5464748 DOI: 10.1016/j.patcog.2016.09.027] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Cervical cancer is one of the most common types of cancer in women worldwide. Most deaths due to the disease occur in less developed areas of the world. In this work, we introduce a new image dataset, along with expert-annotated diagnoses, for evaluating image-based cervical disease classification algorithms. A large number of Cervigram® images are selected from a database provided by the US National Cancer Institute. For each image, we extract three complementary pyramid features: a pyramid histogram in L*A*B* color space (PLAB), a Pyramid Histogram of Oriented Gradients (PHOG), and a pyramid histogram of Local Binary Patterns (PLBP). Beyond these hand-crafted pyramid features, we also investigate the performance of convolutional neural network (CNN) features for cervical disease classification. Our experimental results demonstrate the effectiveness of both the hand-crafted and the deep features. We intend to release this multi-feature dataset, and our extensive evaluations using seven classic classifiers can serve as a baseline.
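As a concrete example of one of the hand-crafted descriptors, the sketch below computes a pyramid histogram of local binary patterns (PLBP) by concatenating uniform-LBP histograms over a spatial pyramid. The image is a random placeholder, the pyramid depth and bin count are illustrative choices, and the PLAB, PHOG and CNN features are not reproduced.

```python
# Minimal sketch: pyramid histogram of local binary patterns (PLBP).
# The input image is a random placeholder.
import numpy as np
from skimage.feature import local_binary_pattern

def plbp_features(img, levels=2, P=8, R=1, bins=10):
    """Concatenate uniform-LBP histograms over a spatial pyramid (1x1, 2x2, 4x4)."""
    lbp = local_binary_pattern(img, P=P, R=R, method="uniform")
    feats = []
    for level in range(levels + 1):
        cells = 2 ** level
        for rows in np.array_split(lbp, cells, axis=0):
            for cell in np.array_split(rows, cells, axis=1):
                hist, _ = np.histogram(cell, bins=bins, range=(0, bins), density=True)
                feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(5)
cervigram = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
print(plbp_features(cervigram).shape)   # (1 + 4 + 16) cells x 10 bins -> (210,)
```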
Affiliation(s)
- Tao Xu: Computer Science and Engineering Department, Lehigh University, Bethlehem, PA, USA
- Han Zhang: Department of Computer Science, Rutgers University, Piscataway, NJ, USA
- Cheng Xin: Computer Science and Engineering Department, Lehigh University, Bethlehem, PA, USA
- Edward Kim: Computing Sciences Department, Villanova University, Villanova, PA, USA
- L. Rodney Long: National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue: National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani: National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Xiaolei Huang: Computer Science and Engineering Department, Lehigh University, Bethlehem, PA, USA