1
Liu L, Liu J, Su Q, Chu Y, Xia H, Xu R. Performance of artificial intelligence for diagnosing cervical intraepithelial neoplasia and cervical cancer: a systematic review and meta-analysis. EClinicalMedicine 2025; 80:102992. [PMID: 39834510] [PMCID: PMC11743870] [DOI: 10.1016/j.eclinm.2024.102992]
Abstract
Background: Cervical cytology screening and colposcopy play crucial roles in the prevention of cervical intraepithelial neoplasia (CIN) and cervical cancer. Previous studies have provided evidence that artificial intelligence (AI) has remarkable diagnostic accuracy in these procedures. With this systematic review and meta-analysis, we aimed to examine the pooled accuracy, sensitivity, and specificity of AI-assisted cervical cytology screening and colposcopy for CIN and cervical cancer screening.
Methods: We searched the PubMed, Embase, and Cochrane Library databases for studies published between January 1, 1986 and August 31, 2024. Studies that investigated the sensitivity and specificity of AI-assisted cervical cytology screening and colposcopy for histologically verified CIN and cervical cancer, and that included a minimum of five cases, were eligible. The performance of AI and experienced colposcopists was assessed via the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) through random-effects models. Additionally, subgroup analyses of multiple diagnostic performance metrics in developed and developing countries were conducted. This study was registered with PROSPERO (CRD42024534049).
Findings: Seventy-seven studies met the eligibility criteria. The pooled diagnostic parameters of AI-assisted cervical cytology via Papanicolaou (Pap) smears were as follows: accuracy, 94% (95% CI 92-96); sensitivity, 95% (95% CI 91-98); specificity, 94% (95% CI 89-97); PPV, 88% (95% CI 78-96); and NPV, 95% (95% CI 89-99). The pooled accuracy, sensitivity, specificity, PPV, and NPV of AI-assisted cervical cytology via the ThinPrep cytologic test (TCT) were 90% (95% CI 85-94), 97% (95% CI 95-99), 94% (95% CI 85-98), 84% (95% CI 64-98), and 96% (95% CI 94-98), respectively. Subgroup analysis revealed that, for AI-assisted cervical cytology diagnosis, certain performance indicators were superior in developed countries compared with developing countries. Compared with experienced colposcopists, AI demonstrated superior accuracy in colposcopic examinations (odds ratio (OR) 1.75; 95% CI 1.33-2.31; P < 0.0001; I2 = 93%).
Interpretation: These results underscore the potential and practical value of AI in preventing and enabling early diagnosis of cervical cancer. Further research should support the development of AI for cervical cancer screening, including in low- and middle-income countries with limited resources.
Funding: This study was supported by the National Natural Science Foundation of China (No. 81901493) and the Shanghai Pujiang Program (No. 21PJD006).
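The pooled estimates above come from random-effects models. As a minimal sketch only, the standard DerSimonian-Laird pooling of per-study sensitivities on the logit scale can be written as follows; the study counts in the usage example are hypothetical, and the paper's actual statistical pipeline is not reproduced here:

```python
import math

def logit_pool_random_effects(tp_fn_pairs):
    """Pool per-study sensitivities with a DerSimonian-Laird
    random-effects model on the logit scale.

    tp_fn_pairs: (true_positives, false_negatives) per study.
    Returns the pooled sensitivity back-transformed to [0, 1].
    """
    thetas, variances = [], []
    for tp, fn in tp_fn_pairs:
        # 0.5 continuity correction guards against zero cells.
        p = (tp + 0.5) / (tp + fn + 1.0)
        thetas.append(math.log(p / (1.0 - p)))
        variances.append(1.0 / (tp + 0.5) + 1.0 / (fn + 0.5))

    # Fixed-effect (inverse-variance) estimate, needed for Cochran's Q.
    w = [1.0 / v for v in variances]
    theta_fe = sum(wi * t for wi, t in zip(w, thetas)) / sum(w)

    # DerSimonian-Laird between-study variance tau^2.
    q = sum(wi * (t - theta_fe) ** 2 for wi, t in zip(w, thetas))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(thetas) - 1)) / c)

    # Random-effects weights down-weight large studies when
    # between-study heterogeneity (tau^2) is high.
    w_re = [1.0 / (v + tau2) for v in variances]
    theta_re = sum(wi * t for wi, t in zip(w_re, thetas)) / sum(w_re)
    return 1.0 / (1.0 + math.exp(-theta_re))

# Three hypothetical studies: (TP, FN) pairs.
pooled = logit_pool_random_effects([(95, 5), (90, 10), (88, 12)])
```

The logit transform keeps the pooled proportion inside (0, 1); bivariate models that pool sensitivity and specificity jointly are the more rigorous choice for diagnostic meta-analysis.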
Affiliation(s)
- Lei Liu: Department of Gynecology, Obstetrics and Gynecology Hospital of Fudan University, Shanghai, 200011, China
- Jiangang Liu: Department of Obstetrics and Gynecology, Puren Hospital Affiliated to Wuhan University of Science and Technology, Wuhan, 430080, China
- Qing Su: Department of Obstetrics and Gynecology, The Fourth Hospital of Changsha, Changsha, 410006, China
- Yuening Chu: Department of Obstetrics and Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, 201204, China
- Hexia Xia: Department of Gynecology, Obstetrics and Gynecology Hospital of Fudan University, Shanghai, 200011, China
- Ran Xu: Department of Obstetrics and Gynecology, Affiliated Zhejiang Hospital, Zhejiang University School of Medicine, Hangzhou, 310013, China; Heidelberg University, Heidelberg, 69120, Germany
2
Sha Y, Zhang Q, Zhai X, Hou M, Lu J, Meng W, Wang Y, Li K, Ma J. CerviFusionNet: A multi-modal, hybrid CNN-transformer-GRU model for enhanced cervical lesion multi-classification. iScience 2024; 27:111313. [PMID: 39634563] [PMCID: PMC11615576] [DOI: 10.1016/j.isci.2024.111313]
Abstract
Cervical lesions pose a significant threat to women's health worldwide. Colposcopy is essential for screening and treating cervical lesions, but its effectiveness depends on the examiner's experience. Artificial intelligence-based solutions using colposcopy images have shown great potential in cervical lesion screening. However, some challenges still need to be addressed, such as low algorithm performance and the lack of high-quality multi-modal datasets. Here, we established a multi-modal colposcopy dataset of 2,273 HPV-positive patients, comprising original colposcopy images, acetic acid reactions at 60 s and 120 s, iodine staining, diagnostic reports, and pathological results. Utilizing this dataset, we developed CerviFusionNet, a hybrid architecture that merges convolutional neural networks and vision transformers to learn robust representations. We designed a temporal module to capture dynamic changes in the acetic acid sequences, which boosts model performance without sacrificing inference speed. Compared with several existing methods, CerviFusionNet demonstrated excellent accuracy and efficiency.
Affiliation(s)
- Yuyang Sha: Center for Artificial Intelligence Driven Drug Discovery, Faculty of Applied Sciences, Macao Polytechnic University, Macau SAR 999078, China
- Qingyue Zhang: First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin 300381, China; National Clinical Research Center for Chinese Medicine Acupuncture and Moxibustion, Tianjin 300381, China
- Xiaobing Zhai: Center for Artificial Intelligence Driven Drug Discovery, Faculty of Applied Sciences, Macao Polytechnic University, Macau SAR 999078, China
- Menghui Hou: First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin 300381, China; National Clinical Research Center for Chinese Medicine Acupuncture and Moxibustion, Tianjin 300381, China
- Jingtao Lu: Beijing University of Technology, School of Mathematical Statistics and Mechanics, Beijing 100124, China
- Weiyu Meng: Center for Artificial Intelligence Driven Drug Discovery, Faculty of Applied Sciences, Macao Polytechnic University, Macau SAR 999078, China
- Yuefei Wang: National Key Laboratory of Chinese Medicine Modernization, State Key Laboratory of Component-based Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China; Haihe Laboratory of Modern Chinese Medicine, Tianjin 301617, China
- Kefeng Li: Center for Artificial Intelligence Driven Drug Discovery, Faculty of Applied Sciences, Macao Polytechnic University, Macau SAR 999078, China
- Jing Ma: First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin 300381, China; National Clinical Research Center for Chinese Medicine Acupuncture and Moxibustion, Tianjin 300381, China
3
Yang H, Song Y, Li Y, Hong Z, Liu J, Li J, Zhang D, Fu L, Lu J, Qiu L. A Dual-Branch Residual Network with Attention Mechanisms for Enhanced Classification of Vaginal Lesions in Colposcopic Images. Bioengineering (Basel) 2024; 11:1182. [PMID: 39768001] [PMCID: PMC11673476] [DOI: 10.3390/bioengineering11121182]
Abstract
Vaginal intraepithelial neoplasia (VAIN), linked to HPV infection, is often overlooked during colposcopy, especially in the vaginal vault area, as clinicians tend to focus more on cervical lesions. This oversight can lead to missed or delayed diagnosis and treatment for patients with VAIN. Timely and accurate classification of VAIN plays a crucial role in evaluating vaginal lesions and formulating effective diagnostic approaches. The challenge lies in the high similarity between different classes and the low variability within the same class in colposcopic images, which can affect accuracy, precision, and recall, depending on image quality and the clinician's experience. In this study, a dual-branch lesion-aware residual network (DLRNet), designed for small medical sample sizes, is introduced, which classifies vaginal lesions by examining the relationship between cervical and vaginal lesions. The DLRNet model includes four main components: a lesion localization module, a dual-branch classification module, an attention-guidance module, and a pretrained network module. The dual-branch classification module combines the original images with segmentation maps obtained from the lesion localization module, using a pretrained ResNet network to fine-tune parameters at different levels, explore lesion-specific features from both global and local perspectives, and facilitate layered interactions. The attention-guidance module focuses the local branch network on vaginal-specific features through spatial and channel attention mechanisms. The final integration involves a shared feature extraction module and independent fully connected layers, which represent and merge the dual-branch inputs. The weighted fusion method effectively integrates the multiple inputs, enhancing the discriminative and generalization capabilities of the model. Classification experiments on 1,142 collected colposcopic images demonstrate that this method improves on existing classification performance, grading VAIN into three lesion levels and thus providing a valuable tool for the early screening of vaginal diseases.
Affiliation(s)
- Haima Yang: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Key Laboratory of Space Active Opto-Electronics Technology, Chinese Academy of Sciences, Shanghai 200083, China
- Yeye Song: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Yuling Li: Department of Obstetrics and Gynecology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China; Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China; Department of Obstetrics and Gynecology, Shanxi Bethune Hospital, Taiyuan 050081, China
- Zubei Hong: Department of Obstetrics and Gynecology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China; Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
- Jin Liu: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Jun Li: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Dawei Zhang: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Le Fu: Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai 200092, China
- Jinyu Lu: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Lihua Qiu: Department of Obstetrics and Gynecology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China; Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
4
Qin J, He Y, Liang Y, Kang L, Zhao J, Ding B. Cell comparative learning: A cervical cytopathology whole slide image classification method using normal and abnormal cells. Comput Med Imaging Graph 2024; 117:102427. [PMID: 39216344] [DOI: 10.1016/j.compmedimag.2024.102427]
Abstract
Automated cervical cancer screening through computer-assisted diagnosis has shown considerable potential to improve screening accessibility and reduce associated costs and errors. However, classification performance on whole slide images (WSIs) remains suboptimal due to patient-specific variations. To improve the precision of the screening, pathologists not only analyze the characteristics of suspected abnormal cells, but also compare them with normal cells. Motivated by this practice, we propose a novel cervical cell comparative learning method that leverages pathologist knowledge to learn the differences between normal and suspected abnormal cells within the same WSI. Our method employs two pre-trained YOLOX models to detect suspected abnormal and normal cells in a given WSI. A self-supervised model then extracts features for the detected cells. Subsequently, a tailored Transformer encoder fuses the cell features to obtain WSI instance embeddings. Finally, attention-based multi-instance learning is applied to achieve classification. The experimental results show an AUC of 0.9319 for our proposed method. Moreover, the method achieved professional pathologist-level performance, indicating its potential for clinical applications.
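The final step the abstract describes, attention-based multi-instance learning over per-cell embeddings, can be sketched generically as below. This is a minimal NumPy illustration of attention-MIL pooling with untrained, randomly chosen weights `V` and `w`, not the authors' implementation:

```python
import numpy as np

def attention_mil_pool(instance_feats, V, w):
    """Attention-based multi-instance pooling: score each instance
    (here, a detected cell's embedding), softmax the scores into
    attention weights, and return the attention-weighted bag embedding.

    instance_feats: (n_instances, d) cell features for one slide.
    V: (d, h) projection and w: (h,) scoring vector; both learned
    in a real model, random in this sketch.
    """
    scores = np.tanh(instance_feats @ V) @ w      # (n_instances,)
    scores = scores - scores.max()                # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()  # softmax over instances
    bag_embedding = attn @ instance_feats         # (d,) slide-level vector
    return bag_embedding, attn

# Hypothetical usage: 10 detected cells with 16-dim embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 16))
bag, attn = attention_mil_pool(feats, rng.normal(size=(16, 8)), rng.normal(size=8))
```

In a full pipeline the bag embedding would feed a slide-level classifier, and the attention weights indicate which cells drove the prediction.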
Affiliation(s)
- Jian Qin: School of Computer Science and Technology, Anhui University of Technology, Maanshan, China
- Yongjun He: School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Yiqin Liang: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Lanlan Kang: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Jing Zhao: College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin, China
- Bo Ding: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
5
Zuo X, Liu J, Hu M, He Y, Hong L. A Deep Learning Model for Cervical Optical Coherence Tomography Image Classification. Diagnostics (Basel) 2024; 14:2009. [PMID: 39335688] [PMCID: PMC11431053] [DOI: 10.3390/diagnostics14182009]
Abstract
Objectives: Optical coherence tomography (OCT) has recently been used in gynecology to detect cervical lesions in vivo and has proven more effective than colposcopy in clinical trials. However, most gynecologists are unfamiliar with this new imaging technique, so intelligent computer-aided diagnosis approaches are required to help them interpret cervical OCT images efficiently. This study aims to (1) develop a clinically usable deep learning (DL)-based classification model of 3D OCT volumes from cervical tissue and (2) validate the DL model's effectiveness in detecting high-risk cervical lesions, including high-grade squamous intraepithelial lesions and cervical cancer. Methods: The proposed DL model, based on a convolutional neural network architecture, combines a feature pyramid network (FPN) with texture encoding and deep supervision. We extracted, represented, and fused texture features at four scales to improve classification performance on high-risk local lesions. We also designed an auxiliary classification mechanism based on deep supervision that adaptively adjusts the weight of each scale in the FPN, enabling low-cost training of the whole model. Results: In the binary classification task of detecting positive subjects with high-risk cervical lesions, our DL model achieved an 81.55% (95% CI, 72.70-88.51%) F1-score with 82.35% (95% CI, 69.13-91.60%) sensitivity and 81.48% (95% CI, 68.57-90.75%) specificity on the Renmin dataset, outperforming five experienced medical experts. It also achieved an 84.34% (95% CI, 74.71-91.39%) F1-score with 87.50% (95% CI, 73.20-95.81%) sensitivity and 90.59% (95% CI, 82.29-95.85%) specificity on the Huaxi dataset, comparable to the overall level of the best investigator. Moreover, our DL model provides visual diagnostic evidence of the histomorphological and texture features learned from OCT images to assist gynecologists in making clinical decisions quickly.
Conclusions: Our DL model holds great promise to be used in cervical lesion screening with OCT efficiently and effectively.
Affiliation(s)
- Li Hong: Department of Obstetrics and Gynecology, Renmin Hospital of Wuhan University, Wuhan 430060, China; (X.Z.); (J.L.); (M.H.); (Y.H.)
6
Aquilina A, Papagiannakis E. Deep Learning Diagnostic Classification of Cervical Images to Augment Colposcopic Impression. J Low Genit Tract Dis 2024; 28:224-230. [PMID: 38713522] [DOI: 10.1097/lgt.0000000000000815]
Abstract
OBJECTIVE To develop a deep learning classifier that improves the accuracy of colposcopic impression. METHODS Colposcopy images taken 56 seconds after acetic acid application were processed by a cervix detection algorithm to identify the cervical region. We optimized models based on the SegFormer architecture to classify each cervix as high-grade or negative/low-grade. The data were split into histologically stratified, random training, validation, and test subsets (80%-10%-10%). We replicated a 10-fold experiment to align with a prior study that utilized expert reviewer analysis of the same images. To evaluate the model's robustness across different cameras, we retrained it after dividing the dataset by camera type. Subsequently, we retrained the model on a new, histologically stratified random data split and integrated the results with patients' age and referral data to train a gradient boosted tree model for final classification. Model accuracy was assessed by the area under the receiver operating characteristic curve (AUC), Youden's index (YI), sensitivity, and specificity, compared with histology. RESULTS Of 5,485 colposcopy images, 4,946 with histology and a visible cervix were used. The model's average performance in the 10-fold experiment was AUC = 0.75, YI = 0.37 (sensitivity = 63%, specificity = 74%), outperforming the experts' average YI of 0.16. Transferability across camera types was effective, with AUC = 0.70, YI = 0.33. Integrating image-based predictions with referral data improved outcomes to AUC = 0.81 and YI = 0.46. Using the model's predictions alongside the original colposcopic impression boosted overall performance. CONCLUSIONS Deep learning cervical image classification demonstrated robustness and outperformed experts. Further improved by additional patient information, it shows potential for clinical utility complementing colposcopy.
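Youden's index, the headline metric in this entry, is a one-line computation and reproduces the reported figures directly from the stated sensitivity and specificity:

```python
def youdens_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1.
    J = 0 for a test no better than chance; J = 1 for a perfect test."""
    return sensitivity + specificity - 1.0

# The reported 10-fold average: sensitivity 63%, specificity 74%.
j = round(youdens_index(0.63, 0.74), 2)  # 0.37, matching the paper
```

J is threshold-dependent, so the same model can trade sensitivity against specificity along its ROC curve while the AUC stays fixed; the operating point that maximizes J is a common way to pick a single cutoff.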
7
Li J, Hu P, Gao H, Shen N, Hua K. Classification of cervical lesions based on multimodal features fusion. Comput Biol Med 2024; 177:108589. [PMID: 38781641] [DOI: 10.1016/j.compbiomed.2024.108589]
Abstract
Cervical cancer is a severe threat to women's health worldwide, with a long precancerous course and a clear etiology, making early screening vital for prevention and treatment. Based on the dataset provided by the Obstetrics and Gynecology Hospital of Fudan University, a four-category classification model for cervical lesions, covering Normal, low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL), and cancer (Ca), is developed. Considering the dataset characteristics, to fully utilize the research data and ensure the dataset size, the model inputs include original and acetic colposcopy images, lesion segmentation masks, human papillomavirus (HPV) status, ThinPrep cytologic test (TCT) results, and age, but exclude iodine images, which overlap significantly with the lesions visible in acetic images. Firstly, the change information between original and acetic images is introduced by calculating the acetowhite opacity, mining the correlation between acetowhite thickness and lesion grade. Secondly, the lesion segmentation masks introduce prior knowledge of lesion location and shape into the classification model. Lastly, a cross-modal feature fusion module based on the self-attention mechanism fuses image information with clinical text information, revealing correlations among the features. On the dataset used in this study, the proposed model is comprehensively compared with five strong models from the past three years, demonstrating superior classification performance and a better balance between performance and complexity. Module ablation experiments further show that each proposed module independently improves model performance.
Affiliation(s)
- Jing Li: Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China
- Peng Hu: Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China
- Huayu Gao: Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China
- Nanyan Shen: Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China
- Keqin Hua: Obstetrics and Gynecology Hospital of Fudan University, Shanghai, 200011, China
8
Chen T, Zheng W, Hu H, Luo C, Chen J, Yuan C, Lu W, Chen DZ, Gao H, Wu J. A Corresponding Region Fusion Framework for Multi-Modal Cervical Lesion Detection. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:959-970. [PMID: 35635817] [DOI: 10.1109/tcbb.2022.3178725]
Abstract
Cervical lesion detection (CLD) using multi-modal colposcopic images (acetic and iodine) is critical for computer-aided diagnosis (CAD) systems that aim at accurate, objective, and comprehensive cervical cancer screening. To robustly capture lesion features and conform to clinical diagnostic practice, we propose a novel corresponding region fusion network (CRFNet) for multi-modal CLD. CRFNet first extracts feature maps and generates proposals for each modality, then performs proposal shifting to obtain corresponding regions under the large position shifts between modalities, and finally fuses those region features with a new corresponding channel attention to detect lesion regions in both modalities. To evaluate CRFNet, we built a large multi-modal colposcopic image dataset collected from our collaborating hospital. We show that CRFNet surpasses known single-modal and multi-modal CLD methods and achieves state-of-the-art performance, especially in terms of average precision.
9
Zhang Z, Yao P, Chen M, Zeng L, Shao P, Shen S, Xu RX. SCAC: A Semi-Supervised Learning Approach for Cervical Abnormal Cell Detection. IEEE J Biomed Health Inform 2024; 28:3501-3512. [PMID: 38470598] [DOI: 10.1109/jbhi.2024.3375889]
Abstract
Cervical abnormal cell detection plays a crucial role in the early screening of cervical cancer. In recent years, some deep learning-based methods have been proposed. However, these methods rely heavily on large amounts of annotated images, which are time-consuming and labor-intensive to acquire, thus limiting the detection performance. In this paper, we present a novel Semi-supervised Cervical Abnormal Cell detector (SCAC), which effectively utilizes the abundant unlabeled data. We utilize Transformer as the backbone of SCAC to capture long-range dependencies to mimic the diagnostic process of pathologists. In addition, in SCAC, we design a Unified Strong and Weak Augment strategy (USWA) that unifies two data augmentation pipelines, implementing consistent regularization in semi-supervised learning and enhancing the diversity of the training data. We also develop a Global Attention Feature Pyramid Network (GAFPN), which utilizes the attention mechanism to better extract multi-scale features from cervical cytology images. Notably, we have created an unlabeled cervical cytology image dataset, which can be leveraged by semi-supervised learning to enhance detection accuracy. To the best of our knowledge, this is the first publicly available large unlabeled cervical cytology image dataset. By combining this dataset with two publicly available annotated datasets, we demonstrate that SCAC outperforms other existing methods, achieving state-of-the-art performance. Additionally, comprehensive ablation studies are conducted to validate the effectiveness of USWA and GAFPN. These promising results highlight the capability of SCAC to achieve high diagnostic accuracy and extensive clinical applications.
10
Chen P, Liu F, Zhang J, Wang B. MFEM-CIN: A Lightweight Architecture Combining CNN and Transformer for the Classification of Pre-Cancerous Lesions of the Cervix. IEEE Open J Eng Med Biol 2024; 5:216-225. [PMID: 38606400] [PMCID: PMC11008799] [DOI: 10.1109/ojemb.2024.3367243]
Abstract
Goal: Cervical cancer is one of the most common cancers in women worldwide, ranking among the top four. Unfortunately, it is also the fourth leading cause of cancer-related deaths among women, particularly in developing countries, where incidence and mortality rates are higher than in developed nations. Colposcopy can aid in the early detection of cervical lesions, but its effectiveness is limited in areas with limited medical resources and a lack of specialized physicians. Consequently, many cases are diagnosed at later stages, putting patients at significant risk. Methods: This paper proposes an automated colposcopic image analysis framework to address these challenges. The framework aims to reduce the labor costs associated with cervical precancer screening in underserved regions and to assist doctors in diagnosing patients. The core of the framework is the MFEM-CIN hybrid model, which combines Convolutional Neural Networks (CNN) and Transformer to aggregate the correlation between local and global features. This combined analysis of local and global information is useful in clinical diagnosis. In the model, MSFE and MSFF are utilized to extract and fuse multi-scale semantics, preserving important shallow feature information and allowing it to interact with the deep features, enriching the semantics. Conclusions: The experimental results demonstrate an accuracy of 89.2% in identifying cervical intraepithelial neoplasia while maintaining a lightweight model. This performance exceeds the average accuracy achieved by professional physicians, indicating promising potential for practical application. By utilizing automated colposcopic image analysis and the MFEM-CIN model, this research offers a practical solution to reduce the burden on healthcare providers and improve the efficiency and accuracy of cervical cancer diagnosis in resource-constrained areas.
Affiliation(s)
- Peng Chen: National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China; Fin China-Anhui University Joint Laboratory for Financial Big Data Research, Hefei Financial China Information and Technology Company, Ltd., Hefei 230022, China
- Fobao Liu: National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Jun Zhang: National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology and School of Internet, Anhui University, Hefei 230601, China
- Bing Wang: School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu 233030, China
11
Elayaraja P, Kumarganesh S, Sagayam KM, Andrew J. An automated cervical cancer diagnosis using genetic algorithm and CANFIS approaches. Technol Health Care 2024; 32:2193-2209. [PMID: 38251073] [DOI: 10.3233/thc-230926]
Abstract
BACKGROUND Cervical malignancy is considered among the most perilous cancers affecting women in numerous East African and South Asian nations, both in terms of prevalence and fatality rates. OBJECTIVE This research aims to propose an efficient automated system for the segmentation of cancerous regions in cervical images. METHODS The proposed technique encompasses preprocessing, feature extraction with an optimized feature set, classification, and segmentation. The original cervical image is smoothed with a Gaussian filter, after which Local Binary Pattern (LBP) and Grey Level Co-occurrence Matrix (GLCM) features are extracted from the enhanced cervical images. LBP features capture pixel relationships within a mask window, while GLCM features quantify energy metrics across all pixels in the images. These features serve to distinguish normal cervical images from abnormal ones. The extracted features are optimized using a Genetic Algorithm (GA), and the optimized feature sets are classified with the Co-Active Adaptive Neuro-Fuzzy Inference System (CANFIS) method. Subsequently, a morphological segmentation technique is employed to categorize irregular cervical images, identifying and segmenting malignant regions within them. RESULTS The proposed approach achieved a sensitivity of 99.09%, a specificity of 99.39%, and an accuracy of 99.36%. CONCLUSION The proposed approach demonstrated superior performance compared with state-of-the-art techniques, and the results have been validated by expert radiologists.
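The LBP feature referred to above can be illustrated with a minimal 8-neighbour implementation. This is a generic sketch, not the authors' code; real pipelines typically use rotation-invariant or uniform LBP variants and then histogram the codes over mask windows to form the feature vector:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel is
    encoded as an 8-bit number, one bit per neighbour whose intensity
    is >= the centre pixel's intensity."""
    img = np.asarray(img, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        # Same-shaped view of the image shifted by (dy, dx).
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= center).astype(np.int32) << bit)
    return codes  # shape: (H - 2, W - 2), values in 0..255
```

The GLCM "energy" feature mentioned alongside it is similarly simple: the sum of squared entries of the normalized grey-level co-occurrence matrix, which is high for textures dominated by a few repeated intensity pairs.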
Affiliation(s)
- Elayaraja P: Department of Electronics and Communication Engineering, Kongunadu College of Engineering and Technology, Trichy, India
- Kumarganesh S: Department of Electronics and Communication Engineering, Knowledge Institute of Technology, Salem, India
- K Martin Sagayam: Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- Andrew J: Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, India
12
Wu D, Ni J, Fan W, Jiang Q, Wang L, Sun L, Cai Z. Opportunities and challenges of computer aided diagnosis in new millennium: A bibliometric analysis from 2000 to 2023. Medicine (Baltimore) 2023; 102:e36703. [PMID: 38134105] [PMCID: PMC10735127] [DOI: 10.1097/md.0000000000036703]
Abstract
BACKGROUND After entering the new millennium, computer-aided diagnosis (CAD) has developed rapidly as an emerging technology worldwide. Nevertheless, bibliometric studies in this area have not yet been reported. This study aimed to explore the hotspots and frontiers of research on CAD from 2000 to 2023, which may provide a reference for researchers in this field. METHODS In this paper, we use bibliometrics to analyze CAD-related literature in the Web of Science database between 2000 and 2023. The scientometric software tools VOSviewer and CiteSpace were used to visually analyze the countries, institutions, authors, journals, references, and keywords involved in the literature. Keyword burst analysis was used to further explore the current state and development trends of CAD research. RESULTS A total of 13,970 publications were included in this study, with a noticeably rising annual publication trend. China and the United States are the major contributors, with the United States holding the dominant position in CAD research. American research institutions, led by the University of Chicago, are pioneers of CAD. Acharya UR, Zheng B, and Chan HP are the most prolific authors. IEEE Transactions on Medical Imaging focuses on CAD and publishes the most articles. New computer technologies related to CAD are at the forefront of attention. Currently, CAD is used extensively in breast, pulmonary, and brain diseases. CONCLUSION Expanding the spectrum of CAD-related diseases is a possible future research trend. Overcoming the lack of large-sample datasets and establishing a universally accepted standard for evaluating CAD system performance are urgent issues for CAD development and validation. In conclusion, this paper provides valuable information on the current state of CAD research and future developments.
Affiliation(s)
- Di Wu: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China; Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China; Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Jiachun Ni: Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Wenbin Fan: Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China; Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Qiong Jiang: Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Ling Wang: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Zengjin Cai: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
13
Li Z, Zeng CM, Dong YG, Cao Y, Yu LY, Liu HY, Tian X, Tian R, Zhong CY, Zhao TT, Liu JS, Chen Y, Li LF, Huang ZY, Wang YY, Hu Z, Zhang J, Liang JX, Zhou P, Lu YQ. A segmentation model to detect cevical lesions based on machine learning of colposcopic images. Heliyon 2023; 9:e21043. [PMID: 37928028] [PMCID: PMC10623278] [DOI: 10.1016/j.heliyon.2023.e21043]
Abstract
Background Semantic segmentation is crucial in medical image diagnosis. Traditional deep convolutional neural networks excel in image classification and object detection but fall short in segmentation tasks. Enhancing the accuracy and efficiency of detecting high-grade cervical lesions and invasive cancer poses a primary challenge in segmentation model development. Methods Between 2018 and 2022, we retrospectively studied a total of 777 patients, comprising 339 patients with high-grade cervical lesions and 313 patients with microinvasive or invasive cervical cancer. Overall, 1554 colposcopic images were fed into the DeepLabv3+ model for learning. Accuracy, precision, specificity, and mIoU were employed to evaluate the performance of the model in the prediction of high-grade cervical lesions and cancer. Results Experiments showed that our segmentation model had higher diagnostic efficiency than colposcopic experts and other artificial intelligence models, reaching an accuracy of 93.29%, precision of 87.2%, specificity of 90.1%, and mIoU of 80.27%. Conclusion The DeepLabv3+ model performed well in the segmentation of cervical lesions in post-acetic-acid colposcopic images and can better assist colposcopists in improving diagnosis.
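The pixel-level metrics this abstract reports can be reproduced from confusion-matrix counts over a binary lesion mask. A minimal NumPy sketch; the function name and the restriction to a single foreground class are mine, not the paper's:

```python
import numpy as np

def pixel_metrics(pred, true):
    """Accuracy, precision, specificity and IoU for binary (0/1) masks."""
    pred = np.asarray(pred).ravel()
    true = np.asarray(true).ravel()
    tp = np.sum((pred == 1) & (true == 1))
    tn = np.sum((pred == 0) & (true == 0))
    fp = np.sum((pred == 1) & (true == 0))
    fn = np.sum((pred == 0) & (true == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "specificity": tn / (tn + fp),
        "iou": tp / (tp + fp + fn),  # per-class IoU; mIoU averages over classes
    }
```

For the paper's mIoU, the same IoU computation would be repeated per class (background, high-grade lesion, cancer) and averaged.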
Affiliation(s)
- Zhen Li: Department of Gynecological Oncology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei 430071, China
- Chu-Mei Zeng: Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong 510062, China
- Yan-Gang Dong: Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, Guangdong 510631, China
- Ying Cao: Department of Obstetrics and Gynecology, Academician Expert Workstation, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430014, China
- Li-Yao Yu: Department of Obstetrics and Gynecology, Academician Expert Workstation, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430014, China
- Hui-Ying Liu: Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong 510062, China
- Xun Tian: Department of Obstetrics and Gynecology, Academician Expert Workstation, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430014, China
- Rui Tian: Generulor Company Bio-X Lab, Zhuhai, Guangdong 519060, China
- Chao-Yue Zhong: Generulor Company Bio-X Lab, Zhuhai, Guangdong 519060, China
- Ting-Ting Zhao: Generulor Company Bio-X Lab, Zhuhai, Guangdong 519060, China
- Jia-Shuo Liu: Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong 510062, China
- Ye Chen: Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong 510062, China
- Li-Fang Li: Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong 510062, China
- Zhe-Ying Huang: Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong 510062, China
- Yu-Yan Wang: Department of Obstetrics and Gynecology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong 510062, China
- Zheng Hu: Department of Gynecological Oncology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei 430071, China
- Jingjing Zhang: Department of Gynecological Oncology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei 430071, China
- Jiu-Xing Liang: Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, Guangdong 510631, China
- Ping Zhou: Department of Gynecology, Dongguan Maternal and Child Hospital, Dongguan, Guangdong 523057, China
- Yi-Qin Lu: Department of Gynecology, Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing 101121, China
14
Jiang Y, Wang C, Zhou S. Artificial intelligence-based risk stratification, accurate diagnosis and treatment prediction in gynecologic oncology. Semin Cancer Biol 2023; 96:82-99. [PMID: 37783319] [DOI: 10.1016/j.semcancer.2023.09.005]
Abstract
As a data-driven science, artificial intelligence (AI) has paved a promising path toward an evolving health system teeming with thrilling opportunities for precision oncology. Notwithstanding the tremendous success of oncological AI in such fields as lung carcinoma, breast tumor and brain malignancy, less attention has been devoted to investigating the influence of AI on gynecologic oncology. Here, this review sheds light on the ever-increasing contribution of state-of-the-art AI techniques to the refined risk stratification and whole-course management of patients with gynecologic tumors, in particular, cervical, ovarian and endometrial cancer, centering on information and features extracted from clinical data (electronic health records), cancer imaging including radiological imaging, colposcopic images, cytological and histopathological digital images, and molecular profiling (genomics, transcriptomics, metabolomics and so forth). However, there are still noteworthy challenges beyond performance validation. Thus, this work further describes the limitations and challenges faced in the real-world implementation of AI models, as well as potential solutions to address these issues.
Affiliation(s)
- Yuting Jiang: Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Chengdi Wang: Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Shengtao Zhou: Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
15
Wang Q, Chen K, Dou W, Ma Y. Cross-Attention Based Multi-Resolution Feature Fusion Model for Self-Supervised Cervical OCT Image Classification. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2541-2554. [PMID: 37027657] [DOI: 10.1109/tcbb.2023.3246979]
Abstract
Cervical cancer seriously endangers the health of the female reproductive system and can even be life-threatening in severe cases. Optical coherence tomography (OCT) is a non-invasive, real-time, high-resolution imaging technology for cervical tissues. However, since the interpretation of cervical OCT images is a knowledge-intensive, time-consuming task, it is difficult to acquire a large number of high-quality labeled images quickly, which poses a major challenge for supervised learning. In this study, we introduce the vision Transformer (ViT) architecture, which has recently achieved impressive results in natural image analysis, into the classification task of cervical OCT images. Our work aims to develop a computer-aided diagnosis (CADx) approach based on a self-supervised ViT-based model to classify cervical OCT images effectively. We leverage masked autoencoders (MAE) to perform self-supervised pre-training on cervical OCT images, so the proposed classification model has better transfer learning ability. In the fine-tuning process, the ViT-based classification model extracts multi-scale features from OCT images of different resolutions and fuses them with the cross-attention module. The ten-fold cross-validation results on an OCT image dataset from a multi-center clinical study of 733 patients in China indicate that our model achieved an AUC value of 0.9963 ± 0.0069, with 95.89 ± 3.30% sensitivity and 98.23 ± 1.36% specificity, outperforming some state-of-the-art classification models based on Transformers and convolutional neural networks (CNNs) in the binary classification task of detecting high-risk cervical diseases, including high-grade squamous intraepithelial lesion (HSIL) and cervical cancer. Furthermore, our model with the cross-shaped voting strategy achieved a sensitivity of 92.06% and specificity of 95.56% on an external validation dataset containing 288 three-dimensional (3D) OCT volumes from 118 Chinese patients at a different hospital. This result met or exceeded the average performance of four medical experts who had used OCT for over one year. In addition to promising classification performance, our model has a remarkable ability to detect and visualize local lesions using the attention map of the standard ViT model, providing good interpretability for gynecologists to locate and diagnose possible cervical diseases.
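At its core, the cross-attention module that fuses multi-resolution features reduces to scaled dot-product attention in which queries come from one resolution and keys/values from another. A minimal NumPy sketch that omits the learned linear projections (an illustration, not the authors' implementation):

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: features at one resolution
    (queries) attend to features from another resolution (keys/values)."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (n_q, n_kv)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ keys_values                    # (n_q, d)
```

Each output row is a convex combination of the key/value features, so coarse-resolution tokens can pull in detail from fine-resolution tokens and vice versa.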
16
Mustafa WA, Ismail S, Mokhtar FS, Alquran H, Al-Issa Y. Cervical Cancer Detection Techniques: A Chronological Review. Diagnostics (Basel) 2023; 13:1763. [PMID: 37238248] [DOI: 10.3390/diagnostics13101763]
Abstract
Cervical cancer is a major global health problem, with high incidence and mortality rates. Over the years, there have been significant advancements in cervical cancer detection techniques, leading to improved accuracy, sensitivity, and specificity. This article provides a chronological review of cervical cancer detection techniques, from the traditional Pap smear test to the latest computer-aided detection (CAD) systems. The traditional method for cervical cancer screening is the Pap smear test. It consists of examining cervical cells under a microscope for abnormalities. However, this method is subjective and may miss precancerous lesions, leading to false negatives and a delayed diagnosis. Therefore, a growing interest has been shown in developing CAD methods to enhance cervical cancer screening. However, the effectiveness and reliability of CAD systems are still being evaluated. A systematic review of the literature was performed using the Scopus database to identify relevant studies on cervical cancer detection techniques published between 1996 and 2022. The search terms used included "(cervix OR cervical) AND (cancer OR tumor) AND (detect* OR diagnosis)". Studies were included if they reported on the development or evaluation of cervical cancer detection techniques, including traditional methods and CAD systems. The results of the review showed that CAD technology for cervical cancer detection has come a long way since it was introduced in the 1990s. Early CAD systems utilized image processing and pattern recognition techniques to analyze digital images of cervical cells, with limited success due to low sensitivity and specificity. In the early 2000s, machine learning (ML) algorithms were introduced to the CAD field for cervical cancer detection, allowing for more accurate and automated analysis of digital images of cervical cells. ML-based CAD systems have shown promise in several studies, with improved sensitivity and specificity reported compared to traditional screening methods. In summary, this chronological review of cervical cancer detection techniques highlights the significant advancements made in this field over the past few decades. ML-based CAD systems have shown promise for improving the accuracy and sensitivity of cervical cancer detection. The Hybrid Intelligent System for Cervical Cancer Diagnosis (HISCCD) and the Automated Cervical Screening System (ACSS) are two of the most promising CAD systems. Still, deeper validation and research are required before they can be broadly accepted. Continued innovation and collaboration in this field may help enhance cervical cancer detection and ultimately reduce the disease's burden on women worldwide.
Affiliation(s)
- Wan Azani Mustafa: Faculty of Electrical Engineering Technology, Campus Pauh Putra, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia; Advanced Computing (AdvComp), Centre of Excellence (CoE), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Shahrina Ismail: Faculty of Science and Technology, Universiti Sains Islam Malaysia (USIM), Bandar Baru Nilai 71800, Negeri Sembilan, Malaysia
- Fahirah Syaliza Mokhtar: Faculty of Business, Economy and Social Development, Universiti Malaysia Terengganu, Kuala Nerus 21300, Terengganu, Malaysia
- Hiam Alquran: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, 556, Irbid 21163, Jordan
- Yazan Al-Issa: Department of Computer Engineering, Yarmouk University, Irbid 22110, Jordan
17
Shinohara T, Murakami K, Matsumura N. Diagnosis Assistance in Colposcopy by Segmenting Acetowhite Epithelium Using U-Net with Images before and after Acetic Acid Solution Application. Diagnostics (Basel) 2023; 13:1596. [PMID: 37174987] [PMCID: PMC10178183] [DOI: 10.3390/diagnostics13091596]
Abstract
Colposcopy is an essential examination tool to identify cervical intraepithelial neoplasia (CIN), a precancerous lesion of the uterine cervix, and to sample its tissues for histological examination. In colposcopy, gynecologists visually identify the lesion, highlighted by applying an acetic acid solution to the cervix, using a magnifying glass. This paper proposes a deep learning method to aid the colposcopic diagnosis of CIN by segmenting lesions. In this method, to segment the lesion effectively, the colposcopic images taken before acetic acid application were input to the deep learning network, U-Net, together with the images taken after application. We conducted experiments using 30 actual colposcopic images of acetowhite epithelium, one of the representative findings of CIN. As a result, accuracy, precision, and F1 scores (0.894, 0.837, and 0.834, respectively) were significantly better when images taken both before and after acetic acid application were used than when only post-application images were used (0.882, 0.823, and 0.823, respectively). This result indicates that the image taken before acetic acid application helps the deep learning model segment CIN accurately.
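The key input design here, pairing pre- and post-acetic-acid images, amounts to stacking the two registered images along the channel axis before they enter U-Net. A minimal NumPy sketch of that preprocessing step, assuming RGB inputs; the function name is illustrative, not from the paper:

```python
import numpy as np

def build_input(pre_img, post_img):
    """Stack pre- and post-acetic-acid RGB images along the channel axis,
    giving the segmentation network a 6-channel input instead of 3."""
    pre = np.asarray(pre_img, dtype=np.float32) / 255.0   # normalise to [0, 1]
    post = np.asarray(post_img, dtype=np.float32) / 255.0
    assert pre.shape == post.shape, "images must be registered to one size"
    return np.concatenate([pre, post], axis=-1)           # (H, W, 6)
```

Only the network's first convolution needs to change (6 input channels instead of 3); the rest of the U-Net is unaffected.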
Affiliation(s)
- Toshihiro Shinohara: Department of Computational Systems Biology, Faculty of Biology-Oriented Science and Technology, Kindai University, Kinokawa 649-6493, Wakayama, Japan
- Kosuke Murakami: Department of Obstetrics and Gynecology, Faculty of Medicine, Kindai University, Osakasayama 589-8511, Osaka, Japan
- Noriomi Matsumura: Department of Obstetrics and Gynecology, Faculty of Medicine, Kindai University, Osakasayama 589-8511, Osaka, Japan
18
Chauhan NK, Singh K, Kumar A, Kolambakar SB. HDFCN: A Robust Hybrid Deep Network Based on Feature Concatenation for Cervical Cancer Diagnosis on WSI Pap Smear Slides. Biomed Res Int 2023; 2023:4214817. [PMID: 37101692] [PMCID: PMC10125740] [DOI: 10.1155/2023/4214817]
Abstract
Cervical cancer is a critical threat to women's health due to its malignancy and fatality rate. The disease can be thoroughly cured by locating and treating the affected tissues in the preliminary phase. The traditional practice for screening cervical cancer is the examination of cervix tissues using the Papanicolaou (Pap) test. Manual inspection of Pap smears can produce false-negative outcomes due to human error even when an infected sample is present. Automated computer-vision diagnosis overcomes this obstacle and plays a substantial role in screening abnormal tissues affected by cervical cancer. Here, in this paper, we propose a hybrid deep feature concatenated network (HDFCN) following two-step data augmentation to detect cervical cancer for binary and multiclass classification on Pap smear images. This network carries out the classification of malignant samples for whole slide images (WSI) of the openly accessible SIPaKMeD database by utilizing the concatenation of features extracted from the fine-tuning of the deep learning (DL) models, namely, VGG-16, ResNet-152, and DenseNet-169, pretrained on the ImageNet dataset. The performance outcomes of the proposed model are compared with the individual performances of the aforementioned DL networks using transfer learning (TL). Our proposed model achieved an accuracy of 97.45% and 99.29% for 5-class and 2-class classifications, respectively. Additionally, an experiment was performed to classify liquid-based cytology (LBC) WSI data containing Pap smear images.
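The core of HDFCN is concatenating features from several fine-tuned backbones into a single descriptor before the final classifier. A minimal NumPy sketch of that fusion step, with global average pooling standing in for each backbone's pooling stage (an assumption for illustration, not the paper's exact design):

```python
import numpy as np

def concat_features(feature_maps):
    """Global-average-pool each backbone's (H, W, C) feature map, then
    concatenate the pooled vectors into one descriptor for the classifier."""
    pooled = [fm.mean(axis=(0, 1)) for fm in feature_maps]  # one (C_i,) vector each
    return np.concatenate(pooled)                            # (sum of C_i,)
```

With VGG-16, ResNet-152, and DenseNet-169 as the three extractors, the concatenated vector simply joins their channel dimensions, letting the classifier weigh complementary representations.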
Affiliation(s)
- Nitin Kumar Chauhan: USIC&T, Guru Gobind Singh Indraprastha University, New Delhi 110078, India; Department of ECE, Indore Institute of Science & Technology, Indore 453331, India
- Krishna Singh: DSEU Okhla Campus-I, Formerly G. B. Pant Engineering College, New Delhi 110020, India
- Amit Kumar: Department of ECE, Indore Institute of Science & Technology, Indore 453331, India; Department of Electronics Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
19
Tang S, Yu X, Cheang CF, Ji X, Yu HH, Choi IC. CLELNet: A continual learning network for esophageal lesion analysis on endoscopic images. Comput Methods Programs Biomed 2023; 231:107399. [PMID: 36780717] [DOI: 10.1016/j.cmpb.2023.107399]
Abstract
BACKGROUND AND OBJECTIVE A deep learning-based intelligent diagnosis system can significantly reduce the burden of endoscopists in the daily analysis of esophageal lesions. Considering the need to add new tasks to the diagnosis system, a deep learning model that can train a series of tasks incrementally using endoscopic images is essential for identifying the types and regions of esophageal lesions. METHOD In this paper, we proposed a continual learning-based esophageal lesion network (CLELNet), in which a convolutional autoencoder was designed to extract representation features of endoscopic images among different esophageal lesions. The proposed CLELNet consists of shared layers and task-specific layers. Shared layers are used to extract common features among different lesions while task-specific layers can complete different tasks. The first two tasks trained by the CLELNet are classification (task 1) and segmentation (task 2). We collected a dataset of esophageal endoscopic images from Macau Kiang Wu Hospital for training and testing the CLELNet. RESULTS The experimental results showed that the classification accuracy of task 1 was 95.96%, and the Intersection over Union (IoU) and the Dice Similarity Coefficient of task 2 were 65.66% and 78.08%, respectively. CONCLUSIONS The proposed CLELNet can realize task-incremental learning without forgetting the previous tasks and thus become a useful computer-aided diagnosis system in esophageal lesion analysis.
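The IoU and Dice figures reported for the segmentation task can be computed directly from binary masks. A minimal NumPy sketch; the function name is illustrative, not code from the paper:

```python
import numpy as np

def iou_dice(pred, true):
    """Intersection over Union and Dice Similarity Coefficient
    for binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    true = np.asarray(true).astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + true.sum())
    return iou, dice
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is consistent with the paper reporting 78.08% Dice against 65.66% IoU.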
Affiliation(s)
- Suigu Tang: Faculty of Innovation Engineering, School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Xiaoyuan Yu: Faculty of Innovation Engineering, School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Chak Fong Cheang: Faculty of Innovation Engineering, School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Xiaoyu Ji: Faculty of Innovation Engineering, School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Hon Ho Yu: Kiang Wu Hospital, Rua de Coelho do Amaral, Macau SAR
- I Cheong Choi: Kiang Wu Hospital, Rua de Coelho do Amaral, Macau SAR
20
Dash S, Sethy PK, Behera SK. Cervical Transformation Zone Segmentation and Classification based on Improved Inception-ResNet-V2 Using Colposcopy Images. Cancer Inform 2023; 22:11769351231161477. [PMID: 37008072] [PMCID: PMC10064461] [DOI: 10.1177/11769351231161477]
Abstract
The second most frequent malignancy in women worldwide is cervical cancer. In the transformation (transitional) zone, a region of the cervix, columnar cells are continuously converting into squamous cells, and this zone of transforming cells is the most typical location on the cervix for the development of aberrant cells. This article suggests a 2-phase method that includes segmenting and classifying the transformation zone to identify the type of cervical cancer. In the initial stage, the transformation zone is segmented from the colposcopy images. The segmented images are then subjected to the augmentation process and identified with the improved Inception-ResNet-v2. Here, a multi-scale feature fusion framework that utilizes 3 × 3 convolution kernels from Reduction-A and Reduction-B of Inception-ResNet-v2 is introduced. The features extracted from Reduction-A and Reduction-B are concatenated and fed to an SVM for classification. This way, the model combines the benefits of residual networks and Inception convolution, increasing network width and resolving the deep network's training issue. The network can extract several scales of contextual information due to the multi-scale feature fusion, which increases accuracy. The experimental results reveal 81.24% accuracy, 81.24% sensitivity, 90.62% specificity, 87.52% precision, 9.38% FPR, 81.68% F1 score, 75.27% MCC, and 57.79% Kappa coefficient.
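The MCC and Kappa coefficients reported above follow directly from a binary confusion matrix. A minimal NumPy sketch (function name mine, not the authors'):

```python
import numpy as np

def mcc_kappa(tp, fp, fn, tn):
    """Matthews correlation coefficient and Cohen's kappa
    from binary confusion-matrix counts."""
    n = tp + fp + fn + tn
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    po = (tp + tn) / n                                             # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return float(mcc), float(kappa)
```

Both are chance-corrected, which is why they can sit well below accuracy (here 75.27% MCC and 57.79% Kappa against 81.24% accuracy) when the classes are imbalanced.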
Affiliation(s)
- Srikanta Dash: Department of Electronics, Sambalpur University, Sambalpur, Odisha, India
21
Xue P, Seery S, Wang S, Jiang Y, Qiao Y. Developing a predictive nomogram for colposcopists: a retrospective, multicenter study of cervical precancer identification in China. BMC Cancer 2023; 23:163. [PMID: 36803785] [PMCID: PMC9938572] [DOI: 10.1186/s12885-023-10646-3]
Abstract
BACKGROUND Colposcopic examination with biopsy is the standard procedure for referrals with abnormal cervical cancer screening results; however, the decision to biopsy is controvertible. Having a predictive model may help to improve high-grade squamous intraepithelial lesion or worse (HSIL+) predictions which could reduce unnecessary testing and protecting women from unnecessary harm. METHODS This retrospective multicenter study involved 5,854 patients identified through colposcopy databases. Cases were randomly assigned to a training set for development or to an internal validation set for performance assessment and comparability testing. Least Absolute Shrinkage and Selection Operator (LASSO) regression was used to reduce the number of candidate predictors and select statistically significant factors. Multivariable logistic regression was then used to establish a predictive model which generates risk scores for developing HSIL+. The predictive model is presented as a nomogram and was assessed for discriminability, and with calibration and decision curves. The model was externally validated with 472 consecutive patients and compared to 422 other patients from two additional hospitals. RESULTS The final predictive model included age, cytology results, human papillomavirus status, transformation zone types, colposcopic impressions, and size of lesion area. The model had good overall discrimination when predicting HSIL + risk, which was internally validated (Area Under the Curve [AUC] of 0.92 (95%CI 0.90-0.94)). External validation found an AUC of 0.91 (95%CI 0.88-0.94) across the consecutive sample, and 0.88 (95%CI 0.84-0.93) across the comparative sample. Calibration suggested good coherence between predicted and observed probabilities. Decision curve analysis also suggested this model would be clinically useful. 
CONCLUSION We developed and validated a nomogram which incorporates multiple clinically relevant variables to better identify HSIL+ cases during colposcopic examination. This model may help clinicians determine next steps, in particular around the need to refer patients for colposcopy-guided biopsies.
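The model family behind such a nomogram is a multivariable logistic regression over the selected predictors; the nomogram simply renders the linear score graphically. A minimal sketch follows. The coefficients and predictor codings below are hypothetical illustrations, not the published model's weights:

```python
import math

# Hypothetical coefficients for illustration only; the published nomogram's
# actual weights are reported in the paper. Predictors are coded numerically.
COEFS = {
    "intercept": -4.0,
    "age": 0.02,          # per year
    "cytology": 1.1,      # 0 = NILM, 1 = low-grade, 2 = high-grade
    "hpv16_18": 1.3,      # 1 if HPV 16/18 positive, else 0
    "tz_type": 0.4,       # transformation zone type 1-3, coded 0-2
    "impression": 1.5,    # colposcopic impression: 0 normal, 1 LSIL, 2 HSIL
    "lesion_size": 0.8,   # fraction of cervix covered by the lesion (0-1)
}

def hsil_risk(age, cytology, hpv16_18, tz_type, impression, lesion_size):
    """Logistic-regression risk of HSIL+ from the nomogram's predictors."""
    z = (COEFS["intercept"]
         + COEFS["age"] * age
         + COEFS["cytology"] * cytology
         + COEFS["hpv16_18"] * hpv16_18
         + COEFS["tz_type"] * tz_type
         + COEFS["impression"] * impression
         + COEFS["lesion_size"] * lesion_size)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the score to a probability
```

A higher-risk presentation (high-grade cytology, HPV 16/18 positive, HSIL impression) yields a correspondingly higher probability than a low-risk one, which is the behavior a clinician reads off the nomogram axes.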
Collapse
Affiliation(s)
- Peng Xue
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, 100730, Beijing, China.
| | - Samuel Seery
- Division of Health Research, Lancaster University, Lancaster, UK
| | - Sumeng Wang
- Department of Cancer Epidemiology, Chinese Academy of Medical Sciences and Peking Union Medical College, National Cancer Center, National Clinical Research Center for Cancer/Cancer Hospital, 100021 Beijing, China
| | - Yu Jiang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, 100730, Beijing, China.
| | - Youlin Qiao
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, 100730, Beijing, China.
| |
Collapse
|
22
|
CTIFI: Clinical-experience-guided three-vision images features integration for diagnosis of cervical lesions. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
23
|
Chen X, Pu X, Chen Z, Li L, Zhao KN, Liu H, Zhu H. Application of EfficientNet-B0 and GRU-based deep learning on classifying the colposcopy diagnosis of precancerous cervical lesions. Cancer Med 2023; 12:8690-8699. [PMID: 36629131 PMCID: PMC10134359 DOI: 10.1002/cam4.5581] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 11/23/2022] [Accepted: 12/17/2022] [Indexed: 01/12/2023] Open
Abstract
BACKGROUND Colposcopy is indispensable for the diagnosis of cervical lesions. However, its diagnostic accuracy for high-grade squamous intraepithelial lesion (HSIL) is only about 50%, and the accuracy depends largely on the skill and experience of colposcopists. Advances in computational power have made it possible to apply artificial intelligence (AI) to clinical problems. Here, we explored the feasibility and accuracy of applying AI to the recognition and classification of precancerous and cancerous cervical colposcopic images. METHODS The images were collected from 6002 colposcopy examinations of normal controls, low-grade squamous intraepithelial lesion (LSIL), and HSIL. For each patient, the original, Schiller test, and acetic-acid images were all collected. We built a new neural network classification model based on a hybrid algorithm: EfficientNet-B0 was used as the backbone network for image feature extraction, and a GRU (Gated Recurrent Unit) was applied to fuse the features of the three examination modes (original, acetic acid, and Schiller test). RESULTS The connected network classifier achieved an accuracy of 90.61% in distinguishing HSIL from normal and LSIL. Furthermore, the model was applied to "Trichotomy", reaching an accuracy of 91.18% in distinguishing HSIL, LSIL, and normal controls at the same time. CONCLUSION As shown by its high accuracy in classifying colposcopic images, AI exhibits great potential to be an effective tool for the accurate diagnosis of cervical disease and for early therapeutic intervention in cervical precancer.
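The fusion step described in the abstract, a GRU consuming one feature per examination mode, can be sketched in miniature. This toy uses a single scalar unit with fixed illustrative weights; the actual model fuses EfficientNet-B0 feature vectors with learned parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ScalarGRU:
    """Toy single-unit GRU that fuses a sequence of scalar features into one
    hidden state. Weights here are fixed and illustrative, not learned; the
    paper's model applies the same gating to EfficientNet-B0 feature vectors."""
    def __init__(self, wz=0.5, uz=0.5, wr=0.5, ur=0.5, wh=1.0, uh=0.5):
        self.wz, self.uz = wz, uz  # update-gate weights
        self.wr, self.ur = wr, ur  # reset-gate weights
        self.wh, self.uh = wh, uh  # candidate-state weights

    def fuse(self, features):
        h = 0.0
        for x in features:
            z = sigmoid(self.wz * x + self.uz * h)          # update gate
            r = sigmoid(self.wr * x + self.ur * h)          # reset gate
            h_tilde = math.tanh(self.wh * x + self.uh * (r * h))
            h = (1.0 - z) * h + z * h_tilde                 # gated blend
        return h

# Fuse one feature from each colposcopy mode: original, acetic acid, Schiller.
fused = ScalarGRU().fuse([0.2, 0.9, 0.7])
```

The gating lets the network weigh each mode's evidence rather than simply averaging the three feature streams, which is the motivation for choosing a GRU over plain concatenation.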
Collapse
Affiliation(s)
- Xiaoyue Chen
- Department of Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
| | - Xiaowen Pu
- Department of Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
| | - Zhirou Chen
- Department of Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
| | - Lanzhen Li
- Department of Automation, Shanghai Jiao Tong University, Shanghai, China.,Ningbo Artificial Intelligent Institute, Shanghai Jiao Tong University, Ningbo, China
| | - Kong-Nan Zhao
- School of Basic Medical Science, Wenzhou Medical University, Wenzhou, China.,Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, St Lucia, Queensland, Australia
| | - Haichun Liu
- Department of Automation, Shanghai Jiao Tong University, Shanghai, China.,Ningbo Artificial Intelligent Institute, Shanghai Jiao Tong University, Ningbo, China
| | - Haiyan Zhu
- Department of Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
| |
Collapse
|
24
|
Chola C, Muaad AY, Bin Heyat MB, Benifa JVB, Naji WR, Hemachandran K, Mahmoud NF, Samee NA, Al-Antari MA, Kadah YM, Kim TS. BCNet: A Deep Learning Computer-Aided Diagnosis Framework for Human Peripheral Blood Cell Identification. Diagnostics (Basel) 2022; 12:diagnostics12112815. [PMID: 36428875 PMCID: PMC9689932 DOI: 10.3390/diagnostics12112815] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 11/03/2022] [Accepted: 11/12/2022] [Indexed: 11/19/2022] Open
Abstract
Blood cells carry important information that can be used to represent a person's current state of health. Identifying different types of blood cells in a timely and precise manner is essential to reducing the infection risks that people face on a daily basis. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework, built on transfer learning with a convolutional neural network, that rapidly and automatically identifies blood cells in an eight-class identification scenario: Basophil, Eosinophil, Erythroblast, Immature Granulocytes, Lymphocyte, Monocyte, Neutrophil, and Platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth comprehensive experiments on the proposed BCNet architecture and tested it with three optimizers: Adam, RMSprop (RMSP), and stochastic gradient descent (SGD). Meanwhile, the performance of the proposed BCNet was directly compared, on the same dataset, with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Across the different optimizers, the BCNet framework demonstrated better classification performance with the Adam and RMSP optimizers. The best evaluation performance was achieved with the RMSP optimizer, at 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved prediction accuracy by 1.94%, 3.33%, and 1.65% using the Adam, RMSP, and SGD optimizers, respectively. The proposed BCNet model outperformed DenseNet, ResNet, Inception, and MobileNet in the testing time of a single blood cell image by 10.98, 4.26, 2.03, and 0.21 msec, respectively. In comparison with the most recent deep learning models, the BCNet model was able to generate encouraging outcomes. 
Such a recognition rate, improving the detection performance for blood cells, is essential for the advancement of healthcare facilities.
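The three optimizers compared for BCNet differ only in their parameter-update rules. A toy scalar sketch makes the difference concrete, minimizing f(w) = (w - 3)^2; the hyperparameters are common illustrative defaults, not the paper's training settings:

```python
import math

def grad(w):
    """Gradient of the toy objective f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def sgd(w, steps=100, lr=0.1):
    # Plain gradient step: w <- w - lr * g
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def rmsprop(w, steps=100, lr=0.1, beta=0.9, eps=1e-8):
    # Scale each step by a running RMS of past gradients.
    v = 0.0
    for _ in range(steps):
        g = grad(w)
        v = beta * v + (1.0 - beta) * g * g
        w -= lr * g / (math.sqrt(v) + eps)
    return w

def adam(w, steps=100, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Combine momentum (first moment) with RMS scaling (second moment),
    # with bias correction for the zero-initialized moments.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1.0 - b1) * g
        v = b2 * v + (1.0 - b2) * g * g
        m_hat = m / (1.0 - b1 ** t)
        v_hat = v / (1.0 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w
```

All three drive w toward the minimum at 3; which converges best on a real network (as the abstract reports for RMSP) depends on the loss landscape and hyperparameters.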
Collapse
Affiliation(s)
- Channabasava Chola
- Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea
| | - Abdullah Y. Muaad
- Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
| | - Md Belal Bin Heyat
- IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
- Centre for VLSI and Embedded System Technologies, International Institute of Information Technology, Hyderabad 500032, India
- Department of Science and Engineering, Novel Global Community Educational Foundation, Hebersham, NSW 2770, Australia
| | - J. V. Bibal Benifa
- Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kerala 686635, India
| | - Wadeea R. Naji
- Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
| | - K. Hemachandran
- Department of Artificial Intelligence, Woxsen University, Hyderabad 502345, India
| | - Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
| | - Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
| | - Mugahed A. Al-Antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
| | - Yasser M. Kadah
- Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia
- Biomedical Engineering Department, Cairo University, Giza 12613, Egypt
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
| | - Tae-Seong Kim
- Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
| |
Collapse
|
25
|
Allahqoli L, Laganà AS, Mazidimoradi A, Salehiniya H, Günther V, Chiantera V, Karimi Goghari S, Ghiasvand MM, Rahmani A, Momenimovahed Z, Alkatout I. Diagnosis of Cervical Cancer and Pre-Cancerous Lesions by Artificial Intelligence: A Systematic Review. Diagnostics (Basel) 2022; 12:2771. [PMID: 36428831 PMCID: PMC9689914 DOI: 10.3390/diagnostics12112771] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 11/06/2022] [Accepted: 11/10/2022] [Indexed: 11/16/2022] Open
Abstract
OBJECTIVE The likelihood of timely treatment for cervical cancer increases with timely detection of abnormal cervical cells. Automated methods of detecting abnormal cervical cells were established because manual identification requires skilled pathologists and is time-consuming and prone to error. The purpose of this systematic review is to evaluate the diagnostic performance of artificial intelligence (AI) technologies for the prediction, screening, and diagnosis of cervical cancer and pre-cancerous lesions. MATERIALS AND METHODS Comprehensive searches were performed on three databases: Medline, Web of Science Core Collection (Indexes = SCI-EXPANDED, SSCI, A & HCI Timespan) and Scopus to find papers published until July 2022. Articles that applied any AI technique for the prediction, screening, and diagnosis of cervical cancer were included in the review. No time restriction was applied. Articles were searched, screened, incorporated, and analyzed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines. RESULTS The primary search yielded 2538 articles. After screening and evaluation of eligibility, 117 studies were incorporated in the review. AI techniques were found to play a significant role in screening systems for pre-cancerous and cancerous cervical lesions. The accuracy of the algorithms in predicting cervical cancer varied from 70% to 100%. AI techniques distinguish between cancerous and normal Pap smears with 80-100% accuracy. AI is expected to serve as a practical tool for doctors in making accurate clinical diagnoses. The reported sensitivity and specificity of AI in colposcopy for the detection of CIN2+ were 71.9-98.22% and 51.8-96.2%, respectively. CONCLUSION The present review highlights the acceptable performance of AI systems in the prediction, screening, or detection of cervical cancer and pre-cancerous lesions, especially when faced with a paucity of specialized centers or medical resources. 
In combination with human evaluation, AI could serve as a helpful tool in the interpretation of cervical smears or images.
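The sensitivity and specificity ranges quoted throughout these reviews all derive from the standard confusion-matrix definitions, which are worth having explicit. A minimal sketch (the counts below are invented for illustration, not from any study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix:
    tp/fp/tn/fn = true positives, false positives, true negatives, false negatives."""
    return {
        "sensitivity": tp / (tp + fn),           # recall among diseased
        "specificity": tn / (tn + fp),           # recall among healthy
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                   # positive predictive value
        "npv": tn / (tn + fn),                   # negative predictive value
    }

# Hypothetical CIN2+ screening results: 100 diseased, 200 healthy patients.
m = diagnostic_metrics(tp=90, fp=20, tn=180, fn=10)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which is one reason pooled meta-analytic estimates report them separately.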
Collapse
Affiliation(s)
- Leila Allahqoli
- Midwifery Department, Ministry of Health and Medical Education, Tehran 1467664961, Iran
| | - Antonio Simone Laganà
- Unit of Gynecologic Oncology, ARNAS “Civico-Di Cristina-Benfratelli”, Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, 90127 Palermo, Italy
| | - Afrooz Mazidimoradi
- Neyriz Public Health Clinic, Shiraz University of Medical Sciences, Shiraz 7134814336, Iran
| | - Hamid Salehiniya
- Social Determinants of Health Research Center, Birjand University of Medical Sciences, Birjand 9717853577, Iran
| | - Veronika Günther
- University Hospitals Schleswig-Holstein, Campus Kiel, Kiel School of Gynaecological Endoscopy, Arnold-Heller-Str. 3, Haus 24, 24105 Kiel, Germany
| | - Vito Chiantera
- Unit of Gynecologic Oncology, ARNAS “Civico-Di Cristina-Benfratelli”, Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, 90127 Palermo, Italy
| | - Shirin Karimi Goghari
- School of Industrial and Systems Engineering, Tarbiat Modares University (TMU), Tehran 1411713114, Iran
| | - Mohammad Matin Ghiasvand
- Department of Computer Engineering, Amirkabir University of Technology (AUT), Tehran 1591634311, Iran
| | - Azam Rahmani
- Nursing and Midwifery Care Research Centre, School of Nursing and Midwifery, Tehran University of Medical Sciences, Tehran 141973317, Iran
| | - Zohre Momenimovahed
- Reproductive Health Department, Qom University of Medical Sciences, Qom 3716993456, Iran
| | - Ibrahim Alkatout
- University Hospitals Schleswig-Holstein, Campus Kiel, Kiel School of Gynaecological Endoscopy, Arnold-Heller-Str. 3, Haus 24, 24105 Kiel, Germany
| |
Collapse
|
26
|
Loh HW, Ooi CP, Seoni S, Barua PD, Molinari F, Acharya UR. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022). COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107161. [PMID: 36228495 DOI: 10.1016/j.cmpb.2022.107161] [Citation(s) in RCA: 155] [Impact Index Per Article: 51.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 09/16/2022] [Accepted: 09/25/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as a black box. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in the model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community. METHODS Multiple journal databases were thoroughly searched following the PRISMA 2020 guidelines. Studies that did not appear in Q1 journals, which are highly credible, were excluded. RESULTS In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others. CONCLUSION We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
Collapse
Affiliation(s)
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore
| | - Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore
| | - Silvia Seoni
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
| | - Prabal Datta Barua
- Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
| | - Filippo Molinari
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
| | - U Rajendra Acharya
- School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia; School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan; Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan.
| |
Collapse
|
27
|
Chen H, Gomez C, Huang CM, Unberath M. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. NPJ Digit Med 2022; 5:156. [PMID: 36261476 PMCID: PMC9581990 DOI: 10.1038/s41746-022-00699-2] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 09/29/2022] [Indexed: 11/16/2022] Open
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high stakes domains, such as medical image analysis, is challenging due to the limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in PubMed, EMBASE, and Compendex databases. We identified 2508 records and 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g. clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users, and thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Collapse
Affiliation(s)
- Haomin Chen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Catalina Gomez
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Chien-Ming Huang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
| |
Collapse
|
28
|
Kim J, Park CM, Kim SY, Cho A. Convolutional neural network-based classification of cervical intraepithelial neoplasias using colposcopic image segmentation for acetowhite epithelium. Sci Rep 2022; 12:17228. [PMID: 36241761 PMCID: PMC9568549 DOI: 10.1038/s41598-022-21692-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Accepted: 09/30/2022] [Indexed: 01/06/2023] Open
Abstract
Colposcopy is a test performed to detect precancerous lesions of cervical cancer. Since cervical cancer progresses slowly, finding and treating precancerous lesions helps prevent it. In particular, it is clinically important to detect high-grade squamous intraepithelial lesions (HSIL), which require surgical treatment, among precancerous lesions of the cervix. There have been several studies using convolutional neural networks (CNNs) to classify colposcopic images. However, no studies have been reported on using the segmentation technique to detect HSIL. In the present study, we aimed to examine whether the accuracy of a CNN model in detecting HSIL from colposcopic images can be improved when segmentation information for acetowhite epithelium is added. Without segmentation information, ResNet-18, 50, and 101 achieved classification accuracies of 70.2%, 66.2%, and 69.3%, respectively. The experts classified the same test set with accuracies of 74.6% and 73.0%. After adding segmentation information on acetowhite epithelium to the original images, the classification accuracies of ResNet-18, 50, and 101 improved to 74.8%, 76.3%, and 74.8%, respectively. We demonstrated that HSIL detection accuracy improved by adding segmentation information to the CNN model, and the improvement was consistent across different ResNets.
Collapse
Affiliation(s)
- Jisoo Kim
- Center for Artificial Intelligence, Korea Institute of Science and Technology, 5 Hwarangro14-gil, Seongbuk-gu, Seoul, 02792, Republic of Korea
| | - Chul Min Park
- Department of Obstetrics and Gynecology, Jeju National University Hospital, Aran 13gil 15 (Ara-1Dong), Jeju City, 63241, Jeju Self-Governing Province, Republic of Korea
| | - Sung Yeob Kim
- Department of Obstetrics and Gynecology, Jeju National University Hospital, Aran 13gil 15 (Ara-1Dong), Jeju City, 63241, Jeju Self-Governing Province, Republic of Korea
| | - Angela Cho
- Department of Obstetrics and Gynecology, Jeju National University Hospital, Aran 13gil 15 (Ara-1Dong), Jeju City, 63241, Jeju Self-Governing Province, Republic of Korea
| |
Collapse
|
29
|
Chen T, Zheng W, Ying H, Tan X, Li K, Li X, Chen DZ, Wu J. A Task Decomposing and Cell Comparing Method for Cervical Lesion Cell Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2432-2442. [PMID: 35349436 DOI: 10.1109/tmi.2022.3163171] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Automatic detection of cervical lesion cells or cell clumps using cervical cytology images is critical to computer-aided diagnosis (CAD) for accurate, objective, and efficient cervical cancer screening. Recently, many methods based on modern object detectors were proposed and showed great potential for automatic cervical lesion detection. Although effective, several issues still hinder further performance improvement of such known methods, such as large appearance variances between single-cell and multi-cell lesion regions, neglecting normal cells, and visual similarity among abnormal cells. To tackle these issues, we propose a new task decomposing and cell comparing network, called TDCC-Net, for cervical lesion cell detection. Specifically, our task decomposing scheme decomposes the original detection task into two subtasks and models them separately, which aims to learn more efficient and useful feature representations for specific cell structures and then improve the detection performance of the original task. Our cell comparing scheme imitates clinical diagnosis of experts and performs cell comparison with a dynamic comparing module (normal-abnormal cells comparing) and an instance contrastive loss (abnormal-abnormal cells comparing). Comprehensive experiments on a large cervical cytology image dataset confirm the superiority of our method over state-of-the-art methods.
Collapse
|
30
|
Thai PL, Merry Geisa J. Classification of microscopic cervical blood cells using inception ResNet V2 with modified activation function. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-220511] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Cervical cancer is the most frequent and fatal malignancy among women worldwide. If this tumor is detected and treated early enough, the complications it causes can be minimized. Deep learning has demonstrated significant promise when applied to biomedical problems such as medical image processing and disease prognostication. Therefore, in this paper, an automatic cervical cell classification approach named IR-PapNet is developed based on Inception-ResNet, an optimized version of Inception. The learning model's conventional ReLU activation is replaced with the parametric rectified linear unit (PReLU) to overcome the nullification of negative values and the dying-ReLU problem. Finally, the model loss function is minimized with the SGD optimizer by modifying the attributes of the neural network. Furthermore, we present a simple but efficient noise removal technique, the 2D Discrete Wavelet Transform (2D-DWT), for enhancing image quality. Experimental results show that this model can achieve a top-1 average identification accuracy of 99.8% on the Herlev Pap smear cervical datasets, which verifies its satisfactory performance. The restructured Inception-ResNet network model obtains significant improvements over most state-of-the-art models in 2-class classification, and it achieves a high learning rate without experiencing dead nodes.
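The activation swap the abstract describes is a one-line change: ReLU zeroes all negative inputs, so a unit whose pre-activations go permanently negative stops learning (the "dying ReLU" problem), whereas PReLU keeps a small negative slope. A minimal sketch with a fixed slope (in the paper the slope is a learned parameter):

```python
def relu(x):
    """Standard ReLU: negative inputs are nullified to zero."""
    return x if x > 0 else 0.0

def prelu(x, a=0.25):
    """Parametric ReLU: negative inputs keep a small slope `a`, so the
    gradient through negative pre-activations is a instead of zero."""
    return x if x > 0 else a * x
```

Because the negative branch still has a nonzero gradient, units can recover during training instead of going permanently dead.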
Collapse
Affiliation(s)
- Pon L.T. Thai
- Department of Computer Science and Engineering, Arunachala College of Engineering for Women, Nagercoil, Tamil Nadu, India
| | - J. Merry Geisa
- Department of Electrical and ElectronicsEngineering, St. Xavier’s Catholic College of Engineering, Nagercoil, Tamil Nadu, India
| |
Collapse
|
31
|
Ma JH, You SF, Xue JS, Li XL, Chen YY, Hu Y, Feng Z. Computer-aided diagnosis of cervical dysplasia using colposcopic images. Front Oncol 2022; 12:905623. [PMID: 35992807 PMCID: PMC9389460 DOI: 10.3389/fonc.2022.905623] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2022] [Accepted: 07/11/2022] [Indexed: 11/13/2022] Open
Abstract
Background Computer-aided diagnosis of medical images is becoming more significant in intelligent medicine. Colposcopy-guided biopsy with pathological diagnosis is the gold standard in diagnosing CIN and invasive cervical cancer. However, it struggles with low sensitivity in differentiating cancer/HSIL from LSIL/normal, particularly in areas lacking skilled colposcopists and access to adequate medical resources. Methods The model used auto-segmented colposcopic images to extract color and texture features using the T-test method. It then augmented minority data using the SMOTE method to balance the skewed class distribution. Finally, it used an RBF-SVM to generate a preliminary output. The results, integrating the TCT and HPV tests and age, were combined in a naïve Bayes classifier for cervical lesion diagnosis. Results The multimodal machine learning model achieved physician-level performance (sensitivity: 51.2%, specificity: 86.9%, accuracy: 81.8%), and it could be interpreted through feature extraction and visualization. With the aid of the model, colposcopists improved their sensitivity from 53.7% to 70.7% with an acceptable specificity of 81.1% and accuracy of 79.6%. Conclusion Using a computer-aided diagnosis system, physicians could identify cancer/HSIL with greater sensitivity, guiding biopsy and timely treatment.
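The SMOTE step used to balance the skewed class distribution can be sketched in a few lines: each synthetic sample is interpolated between a minority point and one of its k nearest neighbours. This is a pure-Python toy (requiring at least two minority samples); the paper presumably used a standard implementation:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: generate n_new synthetic minority samples by
    interpolating each base sample toward one of its k nearest neighbours.
    `minority` is a list of equal-length tuples of floats (>= 2 samples)."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: sq_dist(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic

# Oversample a toy 2-D minority class before training the RBF-SVM stage.
extra = smote([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], n_new=4)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled set stays inside the minority class's convex hull rather than duplicating points outright.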
Collapse
Affiliation(s)
- Yan Hu
- *Correspondence: Zhen Feng; Yan Hu
| | - Zhen Feng
| |
Collapse
|
32
|
Yu H, Fan Y, Ma H, Zhang H, Cao C, Yu X, Sun J, Cao Y, Liu Y. Segmentation of the cervical lesion region in colposcopic images based on deep learning. Front Oncol 2022; 12:952847. [PMID: 35992860 PMCID: PMC9385196 DOI: 10.3389/fonc.2022.952847] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Accepted: 07/04/2022] [Indexed: 11/13/2022] Open
Abstract
Background Colposcopy is an important method in the diagnosis of cervical lesions. However, experienced colposcopists are lacking at present, and the training cycle is long. Therefore, artificial intelligence-based colposcopy-assisted examination has great prospects. In this paper, a cervical lesion segmentation model (CLS-Model) was proposed for cervical lesion region segmentation from colposcopic post-acetic-acid images; accurate segmentation results could provide a good foundation for further research on the classification of the lesion and the selection of biopsy sites. Methods First, an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) was used to obtain the cervical region without interference from other tissues or instruments. Afterward, a deep convolutional neural network (CLS-Net) was proposed, which used EfficientNet-B3 to extract the features of the cervical region and used an atrous spatial pyramid pooling (ASPP) module, redesigned according to the size of the lesion region and the feature map after subsampling, to capture multiscale features. We also used cross-layer feature fusion to achieve fine segmentation of the lesion region. Finally, the segmentation result was mapped to the original image. Results Experiments showed that on 5455 LSIL+ (including cervical intraepithelial neoplasia and cervical cancer) colposcopic post-acetic-acid images, the accuracy, specificity, sensitivity, and Dice coefficient of the proposed model were 93.04%, 96.00%, 74.78%, and 73.71%, respectively, all higher than those of mainstream segmentation models. Conclusion The CLS-Model proposed in this paper performs well in the segmentation of cervical lesions in colposcopic post-acetic-acid images and can better assist colposcopists in improving their diagnostic level.
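The Dice coefficient reported for CLS-Net is a simple overlap score between the predicted and ground-truth binary masks; a minimal sketch over flattened 0/1 masks:

```python
def dice_coefficient(pred, target):
    """Dice score between two binary masks given as flat sequences of 0/1:
    2 * |P ∩ T| / (|P| + |T|). Returns 1.0 when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0
```

Dice weights the intersection twice, so it is more forgiving of small masks than plain pixel accuracy, which is why segmentation papers report it alongside accuracy, sensitivity, and specificity.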
Affiliation(s)
- Hui Yu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Yinuo Fan
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Huizhan Ma
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Haifeng Zhang
- Obstetrics and Gynecology, Affiliated Hospital of Weifang Medical University, Weifang, China
- Chengcheng Cao
- Obstetrics and Gynecology, Affiliated Hospital of Weifang Medical University, Weifang, China
- Xuyao Yu
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin, China
- Jinglai Sun
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Yuzhen Cao
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Yuzhen Liu
- Obstetrics and Gynecology, Affiliated Hospital of Weifang Medical University, Weifang, China
33
Li Z, Huang K, Liu L, Zhang Z. Early detection of COPD based on graph convolutional network and small and weakly labeled data. Med Biol Eng Comput 2022; 60:2321-2333. [PMID: 35750976 PMCID: PMC9244127 DOI: 10.1007/s11517-022-02589-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Accepted: 05/08/2022] [Indexed: 11/25/2022]
Abstract
Chronic obstructive pulmonary disease (COPD) is a common disease with high morbidity and mortality, and early detection benefits the population. However, the early diagnosis rate of COPD is low because early symptoms are absent or mild. In this paper, a novel method based on a graph convolutional network (GCN) for early detection of COPD is proposed, which uses small and weakly labeled chest computed tomography image data from the publicly available Danish Lung Cancer Screening Trial database. The key idea is to construct a graph from regions of interest randomly selected from the segmented lung parenchyma and then input it into the GCN model for COPD detection. In this way, the model can extract not only the feature information of each region of interest but also the topological structure information between regions of interest, that is, graph structure information. The proposed GCN model achieves an acceptable performance with an accuracy of 0.77 and an area under the curve of 0.81, which is higher than those of previous studies on the same dataset. The GCN model also outperforms several state-of-the-art methods trained under the same conditions. To our knowledge, this is also the first application of a GCN model to this dataset for COPD detection.
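The graph propagation at the heart of a GCN can be illustrated with the standard symmetric-normalization rule H' = D^(-1/2)(A + I)D^(-1/2)H. The sketch below is a generic single propagation step in pure Python (no learned weights or nonlinearity), not the paper's model:

```python
import math

def gcn_propagate(adj, features):
    """One GCN step: H' = D^-1/2 (A + I) D^-1/2 H, with A+I the self-looped adjacency."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
    f = len(features[0])
    return [[sum(norm[i][k] * features[k][j] for k in range(n)) for j in range(f)]
            for i in range(n)]

# Two connected "regions of interest" with scalar features: each node's feature
# becomes a degree-normalized mix of itself and its neighbor.
out = gcn_propagate([[0, 1], [1, 0]], [[1.0], [3.0]])
```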
Affiliation(s)
- Zongli Li
- Department of Pulmonary and Critical Care Medicine, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100020, People's Republic of China
- Beijing Institute of Respiratory Medicine, Beijing, 100020, People's Republic of China
- Department of Respiratory, Shijingshan Teaching Hospital of Capital Medical University, Beijing Shijingshan Hospital, Beijing, 100043, People's Republic of China
- Kewu Huang
- Department of Pulmonary and Critical Care Medicine, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100020, People's Republic of China
- Beijing Institute of Respiratory Medicine, Beijing, 100020, People's Republic of China
- Ligong Liu
- Department of Enterprise Management, China Energy Engineering Corporation Limited, Beijing, 100022, People's Republic of China
- Zuoqing Zhang
- Department of Respiratory, Shijingshan Teaching Hospital of Capital Medical University, Beijing Shijingshan Hospital, Beijing, 100043, People's Republic of China
34
Fan Y, Ma H, Fu Y, Liang X, Yu H, Liu Y. Colposcopic multimodal fusion for the classification of cervical lesions. Phys Med Biol 2022; 67. [PMID: 35617940 DOI: 10.1088/1361-6560/ac73d4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Accepted: 05/26/2022] [Indexed: 01/01/2023]
Abstract
Objective: Cervical cancer is one of the leading causes of cancer death in women, and early detection of cervical precancerous lesions can effectively improve patient survival. Manual diagnosis combining colposcopic images with clinical examination results is currently the main clinical approach. Developing an intelligent, artificial-intelligence-based diagnosis algorithm is a natural route to making diagnosis more objective and improving its quality and efficiency. Approach: A colposcopic multimodal fusion convolutional neural network (CMF-CNN) was proposed for the classification of cervical lesions. Mask region-based convolutional neural network (Mask R-CNN) was used to detect the cervical region, while the encoding network EfficientNet-B3 was introduced to extract multimodal image features from the acetic-acid and iodine images. Finally, Squeeze-and-Excitation, atrous spatial pyramid pooling, and convolution blocks were adopted to encode and fuse the patient's clinical text information. Main results: The experimental results showed that in 7106 cases of colposcopy, the accuracy, macro F1-score, and macro area under the curve of the proposed model were 92.70%, 92.74%, and 98.56%, respectively, superior to mainstream unimodal image classification models. Significance: The proposed CMF-CNN combines multimodal information and achieves high performance in the classification of cervical lesions at colposcopy, so it can provide comprehensive diagnostic aid.
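The macro F1-score quoted in this abstract is the unweighted mean of per-class, one-vs-rest F1 scores. A minimal pure-Python sketch of the standard definition (illustrative, not the authors' code):

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: unweighted mean of one-vs-rest per-class F1 scores."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Class 0: F1 = 2/3; class 1: F1 = 4/5; macro mean = 11/15.
m = macro_f1([0, 0, 1, 1], [0, 1, 1, 1], classes=[0, 1])
```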
Affiliation(s)
- Yinuo Fan
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Huizhan Ma
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Yuanbin Fu
- The College of Intelligence and Computing, Tianjin University, Tianjin 300072, People's Republic of China
- Xiaoyun Liang
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Hui Yu
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Yuzhen Liu
- The Department of Obstetrics and Gynecology, Affiliated Hospital of Weifang Medical University, Weifang 261042, People's Republic of China
35
Liu J, Sun X, Li R, Peng Y. Recognition of cervical precancerous lesions based on probability distribution feature guidance. Curr Med Imaging 2022; 18:1204-1213. [DOI: 10.2174/1573405618666220428104541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 03/07/2022] [Accepted: 03/13/2022] [Indexed: 11/22/2022]
Abstract
INTRODUCTION:
Cervical cancer is a high incidence of cancer in women and cervical precancerous screening plays an important role in reducing the mortality rate.
METHOD:
In this study, we proposed a multichannel feature extraction method based on the probability distribution features of the acetowhite (AW) region to identify cervical precancerous lesions, with the overarching goal of improving the accuracy of cervical precancerous screening. A k-means clustering algorithm was first used to extract the cervical region images from the original colposcopy images. We then used a deep learning model called DeepLab V3+ to segment the AW region of the cervical image after the acetic acid test, from which the probability distribution map of the segmented AW region was obtained. This probability distribution map was fed into a neural network classification model for multichannel feature extraction, which yielded the final classification performance.
RESULT:
Results of the experimental evaluation showed that the proposed method achieved an average accuracy of 87.7%, an average sensitivity of 89.3%, and an average specificity of 85.6%. Compared with the methods that did not add segmented probability features, the proposed method increased the average accuracy rate, sensitivity, and specificity by 8.3%, 8%, and 8.4%, respectively.
CONCLUSION:
Overall, the proposed method holds great promise for enhancing the screening of cervical precancerous lesions in the clinic by providing the physician with more reliable screening results that might reduce their workload.
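The k-means step this abstract uses for cervical-region extraction can be illustrated on scalar pixel intensities. The following is a toy Lloyd's-algorithm sketch under simplifying assumptions (1-D intensities, evenly spread seeds), not the authors' pipeline:

```python
def kmeans_1d(values, k=2, iters=20):
    """Lloyd's k-means on scalar pixel intensities; returns the k centroids."""
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]  # spread seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Dark background pixels vs. bright region pixels (toy intensities).
centers = kmeans_1d([10, 12, 11, 200, 210, 205])
```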
Affiliation(s)
- Jun Liu
- College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Xiaoxue Sun
- College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Rihui Li
- Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Yuanxiu Peng
- College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
36
Brenes D, Barberan CJ, Hunt B, Parra SG, Salcedo MP, Possati-Resende JC, Cremer ML, Castle PE, Fregnani JHTG, Maza M, Schmeler KM, Baraniuk R, Richards-Kortum R. Multi-task network for automated analysis of high-resolution endomicroscopy images to detect cervical precancer and cancer. Comput Med Imaging Graph 2022; 97:102052. [PMID: 35299096 PMCID: PMC9250128 DOI: 10.1016/j.compmedimag.2022.102052] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 02/04/2022] [Accepted: 02/10/2022] [Indexed: 10/19/2022]
Abstract
Cervical cancer is a public health emergency in low- and middle-income countries where resource limitations hamper standard-of-care prevention strategies. The high-resolution endomicroscope (HRME) is a low-cost, point-of-care device with which care providers can image the nuclear morphology of cervical lesions. Here, we propose a deep learning framework to diagnose cervical intraepithelial neoplasia grade 2 or more severe from HRME images. The proposed multi-task convolutional neural network uses nuclear segmentation to learn a diagnostically relevant representation. Nuclear segmentation was trained via proxy labels to circumvent the need for expensive, manually annotated nuclear masks. A dataset of images from over 1600 patients was used to train, validate, and test our algorithm; data from 20% of patients were reserved for testing. An external evaluation set with images from 508 patients was used to further validate our findings. The proposed method consistently outperformed other state-of-the-art architectures, achieving a per-patient test area under the receiver operating characteristic curve (AUC-ROC) of 0.87. Performance was comparable to expert colposcopy with a test sensitivity and specificity of 0.94 (p = 0.3) and 0.58 (p = 1.0), respectively. Patients with recurrent human papillomavirus (HPV) infections are at a higher risk of developing cervical cancer. Thus, we sought to incorporate HPV DNA test results as a feature to inform prediction. We found that incorporating patient HPV status improved test specificity to 0.71 at a sensitivity of 0.94.
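The per-patient AUC-ROC reported here can be computed directly via the rank-sum (Mann-Whitney) formulation: the probability that a random positive outscores a random negative. An illustrative pure-Python sketch:

```python
def auc_roc(labels, scores):
    """AUC via the Mann-Whitney statistic: P(score_pos > score_neg), ties count 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are correctly ordered -> AUC = 0.75.
auc = auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```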
Affiliation(s)
- Brady Hunt
- Rice University, Houston, TX 77005, USA
- Mila P Salcedo
- University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mauricio Maza
- Basic Health International, San Salvador, El Salvador
37
Chen K, Wang Q, Ma Y. Cervical optical coherence tomography image classification based on contrastive self-supervised texture learning. Med Phys 2022; 49:3638-3653. [PMID: 35342956 DOI: 10.1002/mp.15630] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 02/26/2022] [Accepted: 03/16/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND Cervical cancer seriously affects the health of the female reproductive system. Optical coherence tomography (OCT) emerged as a non-invasive, high-resolution imaging technology for cervical disease detection. However, OCT image annotation is knowledge-intensive and time-consuming, which impedes the training process of deep-learning-based classification models. PURPOSE This study aims to develop a computer-aided diagnosis (CADx) approach to classifying in-vivo cervical OCT images based on self-supervised learning. METHODS In addition to high-level semantic features extracted by a convolutional neural network (CNN), the proposed CADx approach designs a contrastive texture learning (CTL) strategy to leverage unlabeled cervical OCT images' texture features. We conducted ten-fold cross-validation on the OCT image dataset from a multi-center clinical study on 733 patients from China. RESULTS In a binary classification task for detecting high-risk diseases, including high-grade squamous intraepithelial lesion and cervical cancer, our method achieved an area-under-the-curve value of 0.9798 ± 0.0157 with a sensitivity of 91.17 ± 4.99% and a specificity of 93.96 ± 4.72% for OCT image patches; also, it outperformed two out of four medical experts on the test set. Furthermore, our method achieved a 91.53% sensitivity and 97.37% specificity on an external validation dataset containing 287 3D OCT volumes from 118 Chinese patients in a new hospital using a cross-shaped threshold voting strategy. CONCLUSIONS The proposed contrastive-learning-based CADx method outperformed the end-to-end CNN models and provided better interpretability based on texture features, which holds great potential to be used in the clinical protocol of "see-and-treat."
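Contrastive learning of the kind this abstract describes typically minimizes an InfoNCE-style objective: pull a positive pair together in similarity space while pushing negatives apart. The sketch below computes the loss for a single anchor from precomputed similarities; it is a generic illustration, not the paper's CTL strategy:

```python
import math

def info_nce(similarities, positive_idx, temperature=0.5):
    """InfoNCE loss for one anchor: -log softmax of the positive pair's similarity."""
    logits = [s / temperature for s in similarities]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[positive_idx] / sum(exps))

# One positive (similarity 1.0) and one negative (similarity 0.0).
loss = info_nce([1.0, 0.0], positive_idx=0, temperature=1.0)
```

A lower loss means the positive pair dominates the softmax over all candidates.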
Affiliation(s)
- Kaiyi Chen
- School of Computer Science, Wuhan University, Wuhan, 430072, China
- Qingbin Wang
- School of Computer Science, Wuhan University, Wuhan, 430072, China
- Yutao Ma
- School of Computer Science, Wuhan University, Wuhan, 430072, China
38
Li X, Jiang Y, Liu Y, Zhang J, Yin S, Luo H. RAGCN: Region Aggregation Graph Convolutional Network for Bone Age Assessment From X-Ray Images. IEEE Trans Instrum Meas 2022; 71:1-12. [DOI: 10.1109/tim.2022.3190025] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/03/2024]
Affiliation(s)
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Yiliu Liu
- Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
39
Yue Z, Ding S, Li X, Yang S, Zhang Y. Automatic Acetowhite Lesion Segmentation via Specular Reflection Removal and Deep Attention Network. IEEE J Biomed Health Inform 2021; 25:3529-3540. [PMID: 33684051 DOI: 10.1109/jbhi.2021.3064366] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Automatic acetowhite lesion segmentation in colposcopy images (cervigrams) is essential in assisting gynecologists with the diagnosis of cervical intraepithelial neoplasia grades and cervical cancer. It can also help gynecologists determine the correct lesion areas for further pathological examination. Existing computer-aided diagnosis algorithms show poor segmentation performance because of specular reflections, insufficient training data, and the inability to focus on semantically meaningful lesion parts. In this paper, a novel computer-aided diagnosis algorithm is proposed to segment acetowhite lesions in cervigrams automatically. To reduce the interference of specularities on segmentation performance, a specular reflection removal mechanism is presented to detect and inpaint these areas with precision. Moreover, we design a cervigram image classification network to classify pathology results and generate lesion attention maps, which are subsequently leveraged to guide a more accurate lesion segmentation task by the proposed lesion-aware convolutional neural network. We conducted comprehensive experiments to evaluate the proposed approaches on 3045 clinical cervigrams. Our results show that our method outperforms state-of-the-art approaches and achieves better Dice similarity coefficient and Hausdorff distance values in acetowhite lesion segmentation.
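The Hausdorff distance used for evaluation here is the largest nearest-neighbor gap between two boundaries: each point finds its closest counterpart, and the worst such distance (in either direction) is reported. A minimal sketch over 2-D point sets:

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(src, dst):
        # Worst-case distance from a point in src to its nearest point in dst.
        return max(min(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                       for (x2, y2) in dst)
                   for (x1, y1) in src)
    return max(directed(a, b), directed(b, a))

# (3, 1) sits 3 units from its nearest neighbor (0, 1), dominating the distance.
d = hausdorff([(0, 0), (0, 1)], [(0, 0), (3, 1)])
```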
40
Liu L, Wang Y, Liu X, Han S, Jia L, Meng L, Yang Z, Chen W, Zhang Y, Qiao X. Computer-aided diagnostic system based on deep learning for classifying colposcopy images. Ann Transl Med 2021; 9:1045. [PMID: 34422957 PMCID: PMC8339824 DOI: 10.21037/atm-21-885] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 05/23/2021] [Indexed: 12/24/2022]
Abstract
Background Colposcopy is widely used to detect cervical cancer, but developing countries lack the experienced colposcopists necessary for accurate diagnosis. Artificial intelligence (AI) is being widely used in computer-aided diagnosis (CAD) systems. In this study, we developed and validated a CAD model based on deep learning to classify cervical lesions on colposcopy images. Methods Patient data, including clinical information, colposcopy images, and pathological results, were collected from Qilu Hospital. The study included 15,276 images from 7,530 patients. We performed two classification tasks in this study: normal cervix (NC) vs. low-grade squamous intraepithelial lesion or worse (LSIL+), and below high-grade squamous intraepithelial lesion (HSIL−) vs. HSIL+. The residual neural network (ResNet) probability was calculated for each patient to reflect the probability of lesions through a ResNet model. Next, a combination model was constructed by incorporating the ResNet probability and clinical features. We divided the dataset into a training set, validation set, and testing set at a ratio of 7:1:2. Finally, we randomly selected 300 patients from the testing set and compared the results with the diagnoses of a senior colposcopist and a junior colposcopist. Results The model that combines ResNet and clinical features performs better than ResNet alone. In the classification of NC and LSIL+, the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 0.953, 0.886, 0.932, 0.846, 0.838, and 0.936, respectively. In the classification of HSIL− and HSIL+, the AUC, accuracy, sensitivity, specificity, PPV, and NPV were 0.900, 0.807, 0.823, 0.800, 0.618, and 0.920, respectively.
In the two classification tasks, the diagnostic performance of the model was determined to be comparable to that of the senior colposcopist and exhibited a stronger diagnostic performance than the junior colposcopist. Conclusions The CAD system for cervical lesion diagnosis based on deep learning performs well in the classification of cervical lesions and can provide an objective diagnostic basis for colposcopists.
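The sensitivity, specificity, PPV, and NPV quoted above all derive from the four cells of the binary confusion matrix. An illustrative pure-Python sketch (not the study's code):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, and NPV from binary labels and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# One of each cell (tp=tn=fp=fn=1) makes every metric exactly 0.5.
m = binary_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```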
Affiliation(s)
- Lu Liu
- Department of Obstetrics and Gynecology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Ying Wang
- Department of Obstetrics and Gynecology, Yidu Central Hospital of Weifang, Weifang, China
- Xiaoli Liu
- Department of Obstetrics and Gynecology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Sai Han
- Department of Obstetrics and Gynecology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Lin Jia
- Department of Obstetrics and Gynecology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Lihua Meng
- Department of Obstetrics and Gynecology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Ziyan Yang
- Department of Obstetrics and Gynecology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Wei Chen
- School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University & Shandong Key Laboratory of Oral Tissue Regeneration & Shandong Engineering Laboratory for Dental Materials and Oral Tissue Regeneration, Jinan, China
- Youzhong Zhang
- Department of Obstetrics and Gynecology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Xu Qiao
- School of Control Science and Engineering, Shandong University, Jinan, China
41
Classification of colposcopic images using a multi-breakpoints discretization approach on temporal patterns. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102918] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
42
Liu J, Liang T, Peng Y, Peng G, Sun L, Li L, Dong H. Segmentation of acetowhite region in uterine cervical image based on deep learning. Technol Health Care 2021; 30:469-482. [PMID: 34180439 DOI: 10.3233/thc-212890] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
BACKGROUND The acetowhite (AW) region is a critical physiological phenomenon of precancerous lesions of cervical cancer. Accurate segmentation of the AW region can provide a useful diagnostic tool for gynecologic oncologists in screening for cervical cancers. Traditional approaches for the segmentation of AW regions relied heavily on manual or semi-automatic methods. OBJECTIVE To automatically segment the AW regions from colposcope images. METHODS First, the cervical region was extracted from the original colposcope images by a k-means clustering algorithm. Second, a deep learning-based image semantic segmentation model named DeepLab V3+ was used to segment the AW region from the cervical image. RESULTS The results showed that, compared to the fuzzy clustering segmentation algorithm and the level set segmentation algorithm, the new method proposed in this study achieved a mean Jaccard Index (JI) accuracy of 63.6% (improved by 27.9% and 27.5%, respectively), a mean specificity of 94.9% (improved by 55.8% and 32.3%, respectively), and a mean accuracy of 91.2% (improved by 38.6% and 26.4%, respectively). A mean sensitivity of 78.2% was achieved by the proposed method, which was 17.4% and 10.1% lower than those of the two comparison algorithms, respectively. Compared to the image semantic segmentation models U-Net and PSPNet, the proposed method yielded a higher mean JI accuracy, mean sensitivity, and mean accuracy. CONCLUSION The improved segmentation performance suggested that the proposed method may serve as a useful complementary tool in screening for cervical cancer.
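The Jaccard Index (intersection over union) reported in this abstract compares a predicted binary mask against a reference mask. A minimal illustrative sketch:

```python
def jaccard_index(pred, target):
    """Jaccard index (intersection over union) of two flat binary masks of 0/1 ints."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

# 2 shared foreground pixels out of 4 in the union -> JI = 0.5.
ji = jaccard_index([1, 1, 0, 1], [1, 0, 1, 1])
```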
Affiliation(s)
- Jun Liu
- Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Tong Liang
- Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Yun Peng
- San Diego, California, CA 91355, USA
- Gengyou Peng
- Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Lechan Sun
- Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Ling Li
- Department of Gynecologic Oncology, Jiangxi Maternal and Child Health Hospital, Jiangxi 330006, China
- Hua Dong
- Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
43
Using Dynamic Features for Automatic Cervical Precancer Detection. Diagnostics (Basel) 2021; 11:716. [PMID: 33920732 PMCID: PMC8073487 DOI: 10.3390/diagnostics11040716] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 04/07/2021] [Accepted: 04/15/2021] [Indexed: 11/17/2022] Open
Abstract
Cervical cancer remains a major public health concern in developing countries due to financial and human resource constraints. Visual inspection with acetic acid (VIA) of the cervix has been widely promoted and is routinely used as a low-cost primary screening test in low- and middle-income countries. It can be performed by a variety of health workers and the result is immediate. VIA produces a transient whitening effect that appears and disappears differently in precancerous and cancerous lesions as compared to benign conditions. Colposcopes are often used during VIA to magnify the view of the cervix and allow clinicians to visually assess it. However, this assessment is generally subjective and unreliable even for experienced clinicians. Computer-aided techniques may improve the accuracy of VIA diagnosis and be an important determinant in the promotion of cervical cancer screening. This work proposes a smartphone-based solution that automatically detects cervical precancer from the dynamic features extracted from videos taken during VIA. The proposed solution achieves a sensitivity and specificity of 0.9 and 0.87, respectively, and could be a screening solution for countries that lack expensive equipment such as colposcopes and access to well-trained clinicians.
44
Xue P, Tang C, Li Q, Li Y, Shen Y, Zhao Y, Chen J, Wu J, Li L, Wang W, Li Y, Cui X, Zhang S, Zhang W, Zhang X, Ma K, Zheng Y, Qian T, Ng MTA, Liu Z, Qiao Y, Jiang Y, Zhao F. Development and validation of an artificial intelligence system for grading colposcopic impressions and guiding biopsies. BMC Med 2020; 18:406. [PMID: 33349257 PMCID: PMC7754595 DOI: 10.1186/s12916-020-01860-y] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 11/19/2020] [Indexed: 12/22/2022] Open
Abstract
BACKGROUND Colposcopy diagnosis and directed biopsy are the key components in cervical cancer screening programs. However, their performance is limited by the requirement for experienced colposcopists. This study aimed to develop and validate a Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) for grading colposcopic impressions and guiding biopsies. METHODS Anonymized digital records of 19,435 patients were obtained from six hospitals across China. These records included colposcopic images, clinical information, and pathological results (gold standard). The data were randomly assigned (7:1:2) to a training and a tuning set for developing CAIADS and to a validation set for evaluating performance. RESULTS The agreement between CAIADS-graded colposcopic impressions and pathology findings was higher than that of colposcopies interpreted by colposcopists (82.2% versus 65.9%, kappa 0.750 versus 0.516, p < 0.001). For detecting pathological high-grade squamous intraepithelial lesion or worse (HSIL+), CAIADS showed higher sensitivity than the use of colposcopies interpreted by colposcopists at either biopsy threshold (low-grade or worse 90.5%, 95% CI 88.9-91.4% versus 83.5%, 81.5-85.3%; high-grade or worse 71.9%, 69.5-74.2% versus 60.4%, 57.9-62.9%; all p < 0.001), whereas the specificities were similar (low-grade or worse 51.8%, 49.8-53.8% versus 52.0%, 50.0-54.1%; high-grade or worse 93.9%, 92.9-94.9% versus 94.9%, 93.9-95.7%; all p > 0.05). The CAIADS also demonstrated a superior ability in predicting biopsy sites, with a median mean-intersection-over-union (mIoU) of 0.758. CONCLUSIONS The CAIADS has potential in assisting beginners and for improving the diagnostic quality of colposcopy and biopsy in the detection of cervical precancer/cancer.
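The kappa statistic quoted above corrects the observed agreement between two raters (here, CAIADS or colposcopists vs. pathology) for the agreement expected by chance. A generic Cohen's kappa sketch (illustrative, not the study's code):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n  # observed agreement
    categories = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal frequency per category.
    p_e = sum((sum(x == c for x in rater_a) / n) * (sum(y == c for y in rater_b) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# 3/4 observed agreement against 1/2 chance agreement -> kappa = 0.5.
kappa = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1])
```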
Affiliation(s)
- Peng Xue
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Chao Tang
- School of Public Health, Dalian Medical University, Dalian, China
- Qing Li
- Diagnosis and Treatment for Cervical Lesions Center, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Yu Shen
- Zonsun Healthcare, Shenzhen, China
- Yuqian Zhao
- Center for Cancer Prevention Research, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Longyu Li
- Jiangxi Maternal and Child Health Hospital, Nanchang, China
- Wei Wang
- Chengdu Women's and Children's Central Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Yucong Li
- Chongqing University Cancer Hospital, Chongqing, China
- Xiaoli Cui
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Shaokai Zhang
- Affiliated Cancer Hospital of Zhengzhou University/Henan Cancer Hospital, Zhengzhou, China
- Wenhua Zhang
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xun Zhang
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kai Ma
- Tencent Jarvis Lab, Shenzhen, China
- Zhihua Liu
- Department of Gynecology, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Youlin Qiao
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yu Jiang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Fanghui Zhao
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China