1. Cervical pre-cancerous lesion detection: development of smartphone-based VIA application using artificial intelligence. BMC Res Notes 2022; 15:356. [PMID: 36463193] [PMCID: PMC9719132] [DOI: 10.1186/s13104-022-06250-6]
Abstract
OBJECTIVE Visual inspection of the cervix after acetic acid application (VIA) has been considered an alternative to the Pap smear in resource-limited settings such as Indonesia. However, VIA results depend largely on the examiner's experience, and without comprehensive training of healthcare workers, VIA accuracy keeps declining. We aimed to develop an artificial intelligence (AI)-based Android application that can automatically determine VIA results in real time and may be further developed as a health care support system in cervical cancer screening. RESULT A total of 199 women who underwent the VIA test were studied. Images of the cervix before and after the VIA test were taken with a smartphone, then evaluated and labelled by an experienced oncologist as VIA positive or negative. The AI model training pipeline consists of three steps: image pre-processing, feature extraction, and classifier development. Of the 199 cases, 134 were used for training and validation and the remaining 65 for testing. The trained AI model achieved a sensitivity of 80%, specificity of 96.4%, accuracy of 93.8%, precision of 80%, and ROC/AUC of 0.85 (95% CI 0.66-1.0). The developed AI-based Android application may potentially aid cervical cancer screening, especially in low-resource settings.
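A quick consistency check of the metrics reported above: the abstract gives the test-set size (65) but not its class balance, so the split below is an assumption; with roughly 10 VIA-positive and 55 VIA-negative test cases, the reported sensitivity, specificity, precision, and accuracy are mutually consistent.

```python
# Hypothetical confusion matrix consistent with the reported figures
# (the 10/55 positive/negative split is assumed, not stated in the abstract).
tp, fn = 8, 2        # sensitivity = 8/10  = 0.800
tn, fp = 53, 2       # specificity = 53/55 ~ 0.964

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision = tp / (tp + fp)                      # 8/10 = 0.800
accuracy = (tp + tn) / (tp + tn + fp + fn)      # 61/65 ~ 0.938
print(sensitivity, specificity, precision, accuracy)
```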
2. Park J, Yang H, Roh HJ, Jung W, Jang GJ. Encoder-Weighted W-Net for Unsupervised Segmentation of Cervix Region in Colposcopy Images. Cancers (Basel) 2022; 14:3400. [PMID: 35884460] [PMCID: PMC9317688] [DOI: 10.3390/cancers14143400]
Abstract
Cervical cancer can be prevented and treated more effectively when it is diagnosed early. Colposcopy, a clinical examination of the cervix region, is an efficient method for cervical cancer screening and early detection. Cervix region segmentation significantly affects the performance of computer-aided diagnostics using colposcopy, particularly cervical intraepithelial neoplasia (CIN) classification. However, there are few studies of cervix segmentation in colposcopy, and none on fully unsupervised cervix region detection without image pre- and post-processing. In this study, we propose a deep learning-based unsupervised method to identify cervix regions without pre- and post-processing. A new loss function and a novel scheduling scheme for the baseline W-Net are proposed for fully unsupervised cervix region segmentation in colposcopy. The experimental results showed that the proposed method achieved the best cervix segmentation performance, with a Dice coefficient of 0.71 at lower computational cost. The proposed method produced cervix segmentation masks with fewer outliers and can be applied before CIN detection or other diagnoses to improve diagnostic performance. Our results demonstrate that the proposed method not only assists medical specialists in practical diagnostic situations but also shows the potential of an unsupervised segmentation approach in colposcopy.
Affiliation(s)
- Jinhee Park: School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Korea; Neopons, Daegu 41404, Korea
- Hyunmo Yang: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Hyun-Jin Roh: Department of Obstetrics and Gynaecology, University of Ulsan College of Medicine, Ulsan University Hospital, Ulsan 44033, Korea
- Woonggyu Jung: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Gil-Jin Jang: School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Korea; School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
3. Liu J, Sun X, Li R, Peng Y. Recognition of cervical precancerous lesions based on probability distribution feature guidance. Curr Med Imaging 2022; 18:1204-1213. [DOI: 10.2174/1573405618666220428104541]
Abstract
INTRODUCTION:
Cervical cancer is a high incidence of cancer in women and cervical precancerous screening plays an important role in reducing the mortality rate.
METHOD:
In this study, we proposed a multichannel feature extraction method based on the probability distribution features of the acetowhite (AW) region to identify cervical precancerous lesions, with the overarching goal of improving the accuracy of cervical precancerous screening. A k-means clustering algorithm was first used to extract the cervical region from the original colposcopy images. We then used a deep learning model, DeepLab V3+, to segment the AW region of the cervical image after the acetic acid test, from which a probability distribution map of the segmented AW region was obtained. This probability distribution map was fed into a neural network classification model for multichannel feature extraction, which yielded the final classification.
RESULT:
Results of the experimental evaluation showed that the proposed method achieved an average accuracy of 87.7%, an average sensitivity of 89.3%, and an average specificity of 85.6%. Compared with the methods that did not add segmented probability features, the proposed method increased the average accuracy rate, sensitivity, and specificity by 8.3%, 8%, and 8.4%, respectively.
CONCLUSION:
Overall, the proposed method holds great promise for enhancing the screening of cervical precancerous lesions in the clinic by providing the physician with more reliable screening results that might reduce their workload.
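As a rough illustration of the first stage described in the METHOD above (k-means extraction of the cervical region), the sketch below clusters pixel colours with OpenCV's k-means; the cluster-selection heuristic and the value of k are assumptions for illustration, not the authors' settings.

```python
import cv2
import numpy as np

def extract_cervix_region(image_bgr, k=3):
    """Cluster pixel colours and keep the most reddish cluster as a crude
    stand-in for the cervix region (selection heuristic assumed)."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    cervix_cluster = int(np.argmax(centers[:, 2]))      # highest red mean (BGR order)
    mask = (labels.reshape(image_bgr.shape[:2]) == cervix_cluster).astype(np.uint8)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
```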
Affiliation(s)
- Jun Liu: College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Xiaoxue Sun: College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Rihui Li: Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Yuanxiu Peng: College of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
4. Dual-attention EfficientNet based on multi-view feature fusion for cervical squamous intraepithelial lesions diagnosis. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.02.009]
5. Yue Z, Ding S, Li X, Yang S, Zhang Y. Automatic Acetowhite Lesion Segmentation via Specular Reflection Removal and Deep Attention Network. IEEE J Biomed Health Inform 2021; 25:3529-3540. [PMID: 33684051] [DOI: 10.1109/jbhi.2021.3064366]
Abstract
Automatic acetowhite lesion segmentation in colposcopy images (cervigrams) is essential in assisting gynecologists with the diagnosis of cervical intraepithelial neoplasia grades and cervical cancer. It can also help gynecologists determine the correct lesion areas for further pathological examination. Existing computer-aided diagnosis algorithms show poor segmentation performance because of specular reflections, insufficient training data, and the inability to focus on semantically meaningful lesion parts. In this paper, a novel computer-aided diagnosis algorithm is proposed to segment acetowhite lesions in cervigrams automatically. To reduce the interference of specularities on segmentation performance, a specular reflection removal mechanism is presented to detect and inpaint these areas with precision. Moreover, we design a cervigram image classification network to classify pathology results and generate lesion attention maps, which are subsequently leveraged to guide a more accurate lesion segmentation task by the proposed lesion-aware convolutional neural network. We conducted comprehensive experiments to evaluate the proposed approaches on 3045 clinical cervigrams. Our results show that our method outperforms state-of-the-art approaches and achieves better Dice similarity coefficient and Hausdorff distance values in acetowhite lesion segmentation.
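The specular-reflection handling described above can be approximated with a very simple detect-and-inpaint step; the thresholds below are assumptions, and the paper's actual mechanism (and its attention-guided segmentation network) is considerably more sophisticated.

```python
import cv2
import numpy as np

def remove_specular_highlights(image_bgr, v_thresh=230, s_thresh=40):
    """Flag bright, low-saturation pixels as specular reflections and fill
    them by diffusion-based inpainting (illustrative thresholds only)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    sat, val = hsv[:, :, 1], hsv[:, :, 2]
    mask = ((val > v_thresh) & (sat < s_thresh)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))   # also cover the halo
    return cv2.inpaint(image_bgr, mask, 5, cv2.INPAINT_TELEA)
```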
6. Liu J, Liang T, Peng Y, Peng G, Sun L, Li L, Dong H. Segmentation of acetowhite region in uterine cervical image based on deep learning. Technol Health Care 2021; 30:469-482. [PMID: 34180439] [DOI: 10.3233/thc-212890]
Abstract
BACKGROUND The acetowhite (AW) region is a critical physiological indicator of precancerous lesions of cervical cancer. An accurate segmentation of the AW region can provide a useful diagnostic tool for gynecologic oncologists in screening for cervical cancer. Traditional approaches to segmentation of AW regions relied heavily on manual or semi-automatic methods. OBJECTIVE To automatically segment the AW regions from colposcope images. METHODS First, the cervical region was extracted from the original colposcope images by a k-means clustering algorithm. Second, a deep learning-based image semantic segmentation model named DeepLab V3+ was used to segment the AW region from the cervical image. RESULTS The results showed that, compared to the fuzzy clustering segmentation algorithm and the level set segmentation algorithm, the new method proposed in this study achieved a mean Jaccard Index (JI) accuracy of 63.6% (improved by 27.9% and 27.5%, respectively), a mean specificity of 94.9% (improved by 55.8% and 32.3%, respectively) and a mean accuracy of 91.2% (improved by 38.6% and 26.4%, respectively). A mean sensitivity of 78.2% was achieved by the proposed method, which was 17.4% and 10.1% lower than the two comparison methods, respectively. Compared to the image semantic segmentation models U-Net and PSPNet, the proposed method yielded a higher mean JI accuracy, mean sensitivity and mean accuracy. CONCLUSION The improved segmentation performance suggested that the proposed method may serve as a useful complementary tool in screening for cervical cancer.
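For readers who want a starting point for the second stage (semantic segmentation of the AW region), a minimal fine-tuning skeleton might look like the sketch below. It uses torchvision's DeepLab v3 (not v3+) as a stand-in, and the two-class setup, loss, and optimizer are assumptions rather than the paper's exact configuration.

```python
import torch
import torchvision

# Two classes assumed: background vs. acetowhite region.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights=None, num_classes=2)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, masks):
    """images: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor of {0, 1}."""
    optimizer.zero_grad()
    logits = model(images)["out"]        # (N, 2, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```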
Affiliation(s)
- Jun Liu: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Tong Liang: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Yun Peng: San Diego, California, CA 91355, USA
- Gengyou Peng: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Lechan Sun: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
- Ling Li: Department of Gynecologic Oncology, Jiangxi Maternal and Child Health Hospital, Jiangxi 330006, China
- Hua Dong: Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China
7. Pal A, Xue Z, Befano B, Rodriguez AC, Long LR, Schiffman M, Antani S. Deep Metric Learning for Cervical Image Classification. IEEE Access 2021; 9:53266-53275. [PMID: 34178558] [PMCID: PMC8224396] [DOI: 10.1109/access.2021.3069346]
Abstract
Cervical cancer is caused by persistent infection with certain types of the Human Papillomavirus (HPV) and is a leading cause of female mortality, particularly in low- and middle-income countries (LMIC). Visual inspection of the cervix with acetic acid (VIA) is a commonly used technique in cervical screening. While this technique is inexpensive, clinical assessment is highly subjective, and relatively poor reproducibility has been reported. A deep learning-based algorithm for automatic visual evaluation (AVE) of aceto-whitened cervical images was shown to be effective in detecting confirmed precancer (i.e. the direct precursor to invasive cervical cancer). The images were selected from a large longitudinal study conducted by the National Cancer Institute in the Guanacaste province of Costa Rica. The training of AVE used annotation of the cervix boundary, and the data scarcity challenge was addressed with manually optimized data augmentation. In contrast, we present a novel approach for cervical precancer detection using a deep metric learning (DML) framework which does not require any cervix boundary marking. DML is an advanced learning strategy that can better handle data scarcity and biased training due to class-imbalanced data. Three widely used state-of-the-art DML techniques are evaluated: (a) contrastive loss minimization, (b) N-pair embedding loss minimization, and (c) batch-hard loss minimization. Three popular deep convolutional neural networks (ResNet-50, MobileNet, NasNet) are configured for training with DML to produce class-separated (i.e. linearly separable) image feature descriptors. Finally, a K-Nearest Neighbor (KNN) classifier is trained with the extracted deep features. Both the feature quality and classification performance are quantitatively evaluated on the same data set as used for AVE. The results show that, without using any data augmentation, the best model produced from our research improves specificity in disease detection relative to AVE without compromising sensitivity. The present research thus paves the way for new research directions in the related field.
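Of the three DML objectives the abstract lists, the pairwise contrastive loss is the simplest to sketch; the margin value and the downstream K-NN step shown here are illustrative defaults, not the paper's exact hyper-parameters.

```python
import torch
import torch.nn.functional as F
from sklearn.neighbors import KNeighborsClassifier

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """emb_a, emb_b: (N, D) embeddings; same_class: (N,) float tensor of 0/1."""
    d = F.pairwise_distance(emb_a, emb_b)                 # Euclidean distances
    pos = same_class * d.pow(2)                           # pull similar pairs together
    neg = (1.0 - same_class) * F.relu(margin - d).pow(2)  # push dissimilar pairs apart
    return (pos + neg).mean()

# After training the embedding network, held-out images are classified with K-NN
# on the extracted deep features (feature matrices assumed to exist):
# knn = KNeighborsClassifier(n_neighbors=5).fit(train_features, train_labels)
# predictions = knn.predict(test_features)
```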
Affiliation(s)
- Anabik Pal: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Zhiyun Xue: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Brian Befano: Information Management Services, Calverton, MD 20705, USA
- L Rodney Long: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Mark Schiffman: National Cancer Institute, National Institutes of Health, Rockville, MD 20850, USA
- Sameer Antani: National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
8. Yu Y, Ma J, Zhao W, Li Z, Ding S. MSCI: A multistate dataset for colposcopy image classification of cervical cancer screening. Int J Med Inform 2020; 146:104352. [PMID: 33360117] [DOI: 10.1016/j.ijmedinf.2020.104352]
Abstract
BACKGROUND Cervical cancer is the second most common female cancer globally, and it is vital to detect cervical cancer with low cost at an early stage using automated screening methods of high accuracy, especially in areas with insufficient medical resources. Automatic detection of cervical intraepithelial neoplasia (CIN) can effectively prevent cervical cancer. OBJECTIVES Due to the deficiency of standard and accessible colposcopy image datasets, we present a dataset containing 4753 colposcopy images acquired from 679 patients in three states (acetic acid reaction, green filter, and iodine test) for detection of cervical intraepithelial neoplasia. Based on this dataset, a new computer-aided method for cervical cancer screening was proposed. METHODS We employed a wide range of methods to comprehensively evaluate our proposed dataset. Hand-crafted feature extraction methods and deep learning methods were used for the performance verification of the multistate colposcopy image (MSCI) dataset. Importantly, we propose a gated recurrent convolutional neural network (C-GCNN) for colposcopy image analysis that considers time series and combined multistate cervical images for CIN grading. RESULTS The experimental results showed that the proposed C-GCNN model achieves the best classification performance in CIN grading compared with hand-crafted feature extraction methods and classic deep learning methods. The results showed an accuracy of 96.87 %, a sensitivity of 95.68 %, and a specificity of 98.72 %. CONCLUSION A multistate colposcopy image dataset (MSCI) is proposed. A CIN grading model (C-GCNN) based on the MSCI dataset is established, which provides a potential method for automated cervical cancer screening.
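A hypothetical stand-in for the multistate idea (this is not the authors' C-GCNN architecture): a shared CNN encodes each of the three colposcopy states and a recurrent layer aggregates them as a short sequence before CIN-grade classification. Backbone, hidden size, and the number of grades are assumptions.

```python
import torch
import torch.nn as nn
import torchvision

class MultiStateSequenceClassifier(nn.Module):
    """Illustrative only: CNN features per state (acetic acid, green filter,
    iodine) aggregated by a GRU for CIN grading."""
    def __init__(self, n_grades=3, hidden=256):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-d feature per image
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_grades)  # number of grades is assumed

    def forward(self, x):                        # x: (N, 3 states, 3, H, W)
        n, s = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1)).view(n, s, -1)
        _, h = self.gru(feats)                   # h: (1, N, hidden)
        return self.head(h.squeeze(0))           # (N, n_grades)
```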
Affiliation(s)
- Yao Yu: The School of Management, Hefei University of Technology, China
- Jie Ma: The First Affiliated Hospital of USTC, China
- Zhenmin Li: The School of Microelectronics, Hefei University of Technology, China
- Shuai Ding: The School of Management, Hefei University of Technology, China
9. Asiedu MN, Skerrett E, Sapiro G, Ramanujam N. Combining multiple contrasts for improving machine learning-based classification of cervical cancers with a low-cost point-of-care Pocket colposcope. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1148-1151. [PMID: 33018190] [DOI: 10.1109/embc44109.2020.9175858]
Abstract
We apply feature-extraction and machine learning methods to multiple sources of contrast (acetic acid, Lugol's iodine and green light) from the white Pocket Colposcope, a low-cost point of care colposcope for cervical cancer screening. We combine features from the sources of contrast and analyze diagnostic improvements with addition of each contrast. We find that overall AUC increases with additional contrast agents compared to using only one source.
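The abstract's central point (overall AUC rises as sources of contrast are combined) can be checked in outline with the sketch below; the feature shapes, the logistic-regression scorer, and the cross-validation setup are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def fused_auc(feature_blocks, labels):
    """feature_blocks: list of (N, F_i) arrays, one per contrast
    (e.g. acetic acid, Lugol's iodine, green light)."""
    X = np.concatenate(feature_blocks, axis=1)
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, labels,
                              cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(labels, probs)

# Compare, e.g., fused_auc([acetic], y) vs. fused_auc([acetic, iodine, green], y)
```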
10. Yue Z, Ding S, Zhao W, Wang H, Ma J, Zhang Y, Zhang Y. Automatic CIN Grades Prediction of Sequential Cervigram Image Using LSTM With Multistate CNN Features. IEEE J Biomed Health Inform 2020; 24:844-854. [DOI: 10.1109/jbhi.2019.2922682]
11. Guo P, Xue Z, Long LR, Antani S. Cross-Dataset Evaluation of Deep Learning Networks for Uterine Cervix Segmentation. Diagnostics (Basel) 2020; 10:44. [PMID: 31947707] [PMCID: PMC7167955] [DOI: 10.3390/diagnostics10010044]
Abstract
Evidence from recent research shows that automatic visual evaluation (AVE) of photographic images of the uterine cervix using deep learning-based algorithms presents a viable solution for improving cervical cancer screening by visual inspection with acetic acid (VIA). However, a significant performance determinant in AVE is the photographic image quality. While this includes image sharpness and focus, an important criterion is the localization of the cervix region. Deep learning networks have been successfully applied for object localization and segmentation in images, providing impetus for studying their use for fine contour segmentation of the cervix. In this paper, we present an evaluation of two state-of-the-art deep learning-based object localization and segmentation methods, viz., Mask R-convolutional neural network (CNN) and MaskX R-CNN, for automatic cervix segmentation using three datasets. We carried out extensive experimental tests and algorithm comparisons on each individual dataset and across datasets, and achieved performance either notably higher than, or comparable to, that reported in the literature. The highest Dice and intersection-over-union (IoU) scores that we obtained using Mask R-CNN were 0.947 and 0.901, respectively.
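For reference, the two reported scores are computed as below, and for a single mask pair they are related by Dice = 2*IoU/(1+IoU): a Dice of 0.947 corresponds to an IoU of about 0.899, in line with the reported 0.901 (exact agreement depends on how scores are averaged across images).

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for non-empty binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dice, iou
```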
Affiliation(s)
- Peng Guo (corresponding author; Tel.: +1-301-827-4171)
12. Chen H, Yang L, Li L, Li M, Chen Z. An efficient cervical disease diagnosis approach using segmented images and cytology reporting. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2019.07.008]
13. Kudva V, Prasad K, Guruvare S. Android Device-Based Cervical Cancer Screening for Resource-Poor Settings. J Digit Imaging 2019; 31:646-654. [PMID: 29777323] [DOI: 10.1007/s10278-018-0083-x]
Abstract
Visual inspection with acetic acid (VIA) is an effective, affordable and simple test for cervical cancer screening in resource-poor settings. But considerable expertise is needed to differentiate cancerous lesions from normal lesions, and such expertise is lacking in developing countries. Many studies have attempted automation of cervical cancer detection from cervix images acquired during the VIA process. These studies used images acquired through colposcopy or cervicography. However, colposcopy is expensive and hence is not feasible as a screening tool in resource-poor settings. Cervicography uses a digital camera to acquire cervix images which are subsequently sent to experts for evaluation. Hence, cervicography does not provide a real-time decision on whether the cervix is normal during the VIA examination. In case the cervix is found to be abnormal, the patient may be referred to a hospital for further evaluation using Pap smear and/or biopsy. An Android device with an inbuilt app to acquire images and provide instant results would be an obvious choice in resource-poor settings. In this paper, we propose an algorithm for analysis of cervix images acquired using an Android device, which can be used for the development of a decision support system to provide instant decisions during cervical cancer screening. This algorithm offers an accuracy of 97.94%, a sensitivity of 99.05% and a specificity of 97.16%.
Affiliation(s)
- Vidya Kudva: School of Information Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India; NMAMIT, Nitte, 574110, India
- Keerthana Prasad: School of Information Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- Shyamala Guruvare: Department of Obstetrics and Gynecology, Kasturba Medical College, Manipal, Karnataka, 576104, India
14. Asiedu MN, Simhal A, Chaudhary U, Mueller JL, Lam CT, Schmitt JW, Venegas G, Sapiro G, Ramanujam N. Development of Algorithms for Automated Detection of Cervical Pre-Cancers With a Low-Cost, Point-of-Care, Pocket Colposcope. IEEE Trans Biomed Eng 2018; 66:2306-2318. [PMID: 30575526] [DOI: 10.1109/tbme.2018.2887208]
Abstract
GOAL In this paper, we propose methods for (1) automatic feature extraction and classification of acetic acid and Lugol's iodine cervigrams and (2) combining features/diagnoses from the different contrasts for improved performance. METHODS We developed algorithms to pre-process pathology-labeled cervigrams and to extract simple but powerful color- and texture-based features. The features were used to train a support vector machine model to classify cervigrams based on the corresponding pathology for visual inspection with acetic acid, visual inspection with Lugol's iodine, and a combination of the two contrasts. RESULTS The proposed framework achieved a sensitivity, specificity, and accuracy of 81.3%, 78.6%, and 80.0%, respectively, when used to distinguish cervical intraepithelial neoplasia (CIN+) from normal and benign tissues. This is superior to the average values achieved by three expert physicians on the same data set for discriminating normal/benign cases from CIN+ (77% sensitivity, 51% specificity, and 63% accuracy). CONCLUSION The results suggest that utilizing simple color- and texture-based features from visual inspection with acetic acid and visual inspection with Lugol's iodine images may provide unbiased automation of cervigram assessment. SIGNIFICANCE This would enable automated, expert-level diagnosis of cervical pre-cancer at the point of care.
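A minimal sketch of the kind of colour/texture-plus-SVM pipeline the abstract describes; the specific descriptors and kernel here are placeholders, not the paper's feature set.

```python
import cv2
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_texture_features(image_bgr):
    """Toy descriptors: per-channel colour statistics plus an edge-density cue."""
    feats = []
    for channel in cv2.split(image_bgr):               # B, G, R
        feats += [channel.mean(), channel.std()]
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    feats.append(cv2.Canny(gray, 50, 150).mean())      # rough texture measure
    return np.asarray(feats, dtype=np.float32)

# X: stacked feature vectors from pathology-labelled cervigrams; y: CIN+ vs. normal/benign
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```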
15. Liu J, Li L, Wang L. Acetowhite region segmentation in uterine cervix images using a registered ratio image. Comput Biol Med 2018; 93:47-55. [DOI: 10.1016/j.compbiomed.2017.12.009]
16. Detection of Specular Reflection and Segmentation of Cervix Region in Uterine Cervix Images for Cervical Cancer Screening. Ing Rech Biomed 2017. [DOI: 10.1016/j.irbm.2017.08.003]
17. Alsaleh SM, Aviles AI, Sobrevilla P, Casals A, Hahn JK. Automatic and robust single-camera specular highlight removal in cardiac images. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:675-8. [PMID: 26736352] [DOI: 10.1109/embc.2015.7318452]
Abstract
In computer-assisted beating heart surgeries, accurate tracking of the heart's motion is of huge importance and there is a continuous need to eliminate any source of error that might disturb the tracking process. One source of error is the specular reflection that appears on the glossy surface of the heart. In this paper, we propose a robust solution for the detection and removal of specular highlights. A hybrid color attributes and wavelet based edge projection approach is applied to accurately identify the affected regions. These regions are then recovered using a dynamic search-based inpainting with adaptive windowing. Experimental results demonstrate the precision and efficiency of the proposed method. Moreover, it has a real-time performance and can be generalized to various other applications.
18. Holmen SD, Kjetland EF, Taylor M, Kleppa E, Lillebø K, Gundersen SG, Onsrud M, Albregtsen F. Colourimetric image analysis as a diagnostic tool in female genital schistosomiasis. Med Eng Phys 2015; 37:309-14. [PMID: 25630808] [DOI: 10.1016/j.medengphy.2014.12.007]
Abstract
Female genital schistosomiasis (FGS) is a highly prevalent waterborne disease in some of the poorest areas of sub-Saharan Africa. Reliable and affordable diagnostics are unavailable. We explored colourimetric image analysis to identify the characteristic, yellow lesions caused by FGS. We found that the method may yield a sensitivity of 83% and a specificity of 73% in colposcopic images. The accuracy was also explored in images of simulated inferior quality, to assess the possibility of implementing such a method in simple, electronic devices. This represents the first step towards developing a safe and affordable aid in clinical diagnosis, allowing for a point-of-care approach.
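One simple colourimetric cue in the spirit of the study, the fraction of yellow-hued pixels in a colposcopic image, could be computed as below; the hue band and thresholds are assumptions, not the study's calibrated criteria.

```python
import cv2
import numpy as np

def yellow_fraction(image_bgr, hue_range=(20, 35), min_sat=60, min_val=60):
    """Fraction of pixels whose HSV hue falls in a yellow band
    (OpenCV hue spans 0-179; band and thresholds are illustrative)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_range[0], min_sat, min_val], np.uint8)
    upper = np.array([hue_range[1], 255, 255], np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    return float((mask > 0).mean())
```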
Affiliation(s)
- Sigve Dhondup Holmen: Centre for Imported and Tropical Diseases, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Norway
- Myra Taylor: School of Public Health, Nelson Mandela School of Medicine, University of KwaZulu-Natal, South Africa
- Elisabeth Kleppa: Centre for Imported and Tropical Diseases, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Norway
- Kristine Lillebø: Centre for Imported and Tropical Diseases, Oslo University Hospital, Oslo, Norway
- Svein Gunnar Gundersen: Centre for Imported and Tropical Diseases, Oslo University Hospital, Oslo, Norway; Research Department, Sørlandet Hospital HF, Kristiansand, Norway; Institute of Development Studies, University of Agder, Kristiansand, Norway
- Mathias Onsrud: Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Norway
- Fritz Albregtsen: Department of Informatics, University of Oslo, Oslo, Norway; Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
19. Song D, Kim E, Huang X, Patruno J, Muñoz-Avila H, Heflin J, Long LR, Antani S. Multimodal entity coreference for cervical dysplasia diagnosis. IEEE Trans Med Imaging 2015; 34:229-45. [PMID: 25167547] [PMCID: PMC11977577] [DOI: 10.1109/tmi.2014.2352311]
Abstract
Cervical cancer is the second most common type of cancer in women. Existing screening programs for cervical cancer, such as the Pap smear, suffer from low sensitivity. Thus, many patients who are ill are not detected in the screening process. Using images of the cervix as an aid in cervical cancer screening has the potential to greatly improve sensitivity, and can be especially useful in resource-poor regions of the world. In this paper, we develop a data-driven computer algorithm for interpreting cervical images based on color and texture. We are able to obtain 74% sensitivity and 90% specificity when differentiating high-grade cervical lesions from low-grade lesions and normal tissue. On the same dataset, using Pap tests alone yields a sensitivity of 37% and specificity of 96%, and using the HPV test alone gives 57% sensitivity and 93% specificity. Furthermore, we develop a comprehensive algorithmic framework based on Multimodal Entity Coreference for combining various tests to perform disease classification and diagnosis. When integrating multiple tests, we adopt information gain and gradient-based approaches for learning the relative weights of different tests. In our evaluation, we present a novel algorithm that integrates cervical images, Pap, HPV, and patient age, which yields 83.21% sensitivity and 94.79% specificity, a statistically significant improvement over using any single source of information alone.
Affiliation(s)
- Dezhao Song: Research and Development, Thomson Reuters, Eagan, MN 55122, USA
- Edward Kim: Department of Computing Sciences, Villanova University, Villanova, PA 19085, USA
- Xiaolei Huang: Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA
- Joseph Patruno: Department of Obstetrics and Gynecology, Lehigh Valley Health Network, Allentown, PA 18105, USA
- Héctor Muñoz-Avila: Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA
- Jeff Heflin: Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA
- L. Rodney Long: Communications Engineering Branch, National Library of Medicine, Bethesda, MD 20894, USA
- Sameer Antani: Communications Engineering Branch, National Library of Medicine, Bethesda, MD 20894, USA
20. Jusman Y, Ng SC, Abu Osman NA. Intelligent screening systems for cervical cancer. ScientificWorldJournal 2014; 2014:810368. [PMID: 24955419] [PMCID: PMC4037632] [DOI: 10.1155/2014/810368]
Abstract
The advent of medical image digitization has led to image processing and computer-aided diagnosis systems in numerous clinical applications. These technologies could be used to diagnose patients automatically or to serve as a second opinion for pathologists. This paper briefly reviews cervical screening techniques and their advantages and disadvantages. The digital data produced by these screening techniques are used as input to computer screening systems in place of expert analysis. Four stages of such computer systems are reviewed in detail: enhancement, feature extraction, feature selection, and classification. Computer systems based on cytology data and electromagnetic spectra achieved better accuracy than those based on other data types.
Affiliation(s)
- Yessi Jusman: Department of Biomedical Engineering, Faculty of Engineering Building, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Siew Cheok Ng: Department of Biomedical Engineering, Faculty of Engineering Building, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Noor Azuan Abu Osman: Department of Biomedical Engineering, Faculty of Engineering Building, University of Malaya, 50603 Kuala Lumpur, Malaysia
21. Chitchian S, Vincent KL, Vargas G, Motamedi M. Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography. J Biomed Opt 2012; 17:116004. [PMID: 23117799] [PMCID: PMC3484240] [DOI: 10.1117/1.jbo.17.11.116004]
Abstract
We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.
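A bare-bones version of the feature-based nearest-neighbour classification step described above might look like this; the per-pixel feature maps and layer labels are assumed inputs, and the real algorithm adds OCT-specific feature extraction and a second refinement step.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_pixel_knn(feature_maps, label_maps, k=5):
    """feature_maps: list of (H, W, F) per-pixel feature arrays;
    label_maps: matching (H, W) integer layer labels (e.g. epithelium vs. stroma)."""
    X = np.concatenate([f.reshape(-1, f.shape[-1]) for f in feature_maps])
    y = np.concatenate([l.reshape(-1) for l in label_maps])
    return KNeighborsClassifier(n_neighbors=k).fit(X, y)
```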
Affiliation(s)
- Shahab Chitchian: University of Texas Medical Branch, Center for Biomedical Engineering, Galveston, Texas 77555; University of Texas Medical Branch, Department of Ophthalmology, Galveston, Texas 77555
- Kathleen L. Vincent: University of Texas Medical Branch, Center for Biomedical Engineering, Galveston, Texas 77555; University of Texas Medical Branch, Department of Obstetrics and Gynecology, Galveston, Texas 77555
- Gracie Vargas: University of Texas Medical Branch, Center for Biomedical Engineering, Galveston, Texas 77555; University of Texas Medical Branch, Department of Neuroscience and Cell Biology, Galveston, Texas 77555
- Massoud Motamedi: University of Texas Medical Branch, Center for Biomedical Engineering, Galveston, Texas 77555; University of Texas Medical Branch, Department of Ophthalmology, Galveston, Texas 77555
22. Park SY, Sargent D, Lieberman R, Gustafsson U. Domain-specific image analysis for cervical neoplasia detection based on conditional random fields. IEEE Trans Med Imaging 2011; 30:867-78. [PMID: 21245006] [DOI: 10.1109/tmi.2011.2106796]
Abstract
This paper presents a domain-specific automated image analysis framework for the detection of pre-cancerous and cancerous lesions of the uterine cervix. Our proposed framework departs from previous methods in that we include domain-specific diagnostic features in a probabilistic manner using conditional random fields. Likewise, we provide a novel window-based performance assessment scheme for 2D image analysis which addresses the intrinsic problem of image misalignment. Image regions corresponding to different tissue types are identified for the extraction of domain-specific anatomical features. The unique optical properties of each tissue type and the diagnostic relationships between neighboring regions are incorporated in the proposed conditional random field model. The validity of our method is examined using clinical data from 48 patients, and its diagnostic potential is demonstrated by a performance comparison with expert colposcopy annotations, using histopathology as the ground truth. The proposed automated diagnostic approach can support or potentially replace conventional colposcopy, allow tissue specimen sampling to be performed in a more objective manner, and lower the number of cervical cancer cases in developing countries by providing a cost-effective screening solution in low-resource settings.
Affiliation(s)
- Sun Y Park: Science and Technology International Medical Systems, San Diego, CA 92037, USA
23. Xue Z, Long LR, Antani S, Neve L, Zhu Y, Thoma GR. A unified set of analysis tools for uterine cervix image segmentation. Comput Med Imaging Graph 2010; 34:593-604. [PMID: 20510585] [PMCID: PMC2955170] [DOI: 10.1016/j.compmedimag.2010.04.002]
Abstract
Segmentation is a fundamental component of many medical image-processing applications, and it has long been recognized as a challenging problem. In this paper, we report our research and development efforts on analyzing and extracting clinically meaningful regions from uterine cervix images in a large database created for the study of cervical cancer. In addition to proposing new algorithms, we also focus on developing open source tools which are in synchrony with the research objectives. These efforts have resulted in three Web-accessible tools which address three important and interrelated sub-topics in medical image segmentation, respectively: the Boundary Marking Tool (BMT), Cervigram Segmentation Tool (CST), and Multi-Observer Segmentation Evaluation System (MOSES). The BMT is for manual segmentation, typically to collect "ground truth" image regions from medical experts. The CST is for automatic segmentation, and MOSES is for segmentation evaluation. These tools are designed as a unified set in which data can be conveniently exchanged. They have value not only for improving the reliability and accuracy of algorithms for uterine cervix image segmentation, but also for promoting collaboration between biomedical experts and engineers, which is crucial to medical image-processing applications. Although the CST is designed for the unique characteristics of cervigrams, the BMT and MOSES are very general and extensible, and can be easily adapted to other biomedical image collections.
Affiliation(s)
- Zhiyun Xue: National Library of Medicine, Bethesda, MD, USA
24. Alush A, Greenspan H, Goldberger J. Automated and interactive lesion detection and segmentation in uterine cervix images. IEEE Trans Med Imaging 2010; 29:488-501. [PMID: 20129849] [DOI: 10.1109/tmi.2009.2037201]
Abstract
This paper presents a procedure for automatic extraction and segmentation of a class-specific object (or region) by learning class-specific boundaries. We describe and evaluate the method with a specific focus on the detection of lesion regions in uterine cervix images. The watershed segmentation map of the input image is modeled using a Markov random field (MRF) in which watershed regions correspond to binary random variables indicating whether the region is part of the lesion tissue or not. The local pairwise factors on the arcs of the watershed map indicate whether the arc is part of the object boundary. The factors are based on supervised learning of a visual word distribution. The final lesion region segmentation is obtained using a loopy belief propagation applied to the watershed arc-level MRF. Experimental results on real data show state-of-the-art segmentation results on this very challenging task that, if necessary, can be interactively enhanced.
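The first stage of the pipeline described above, a watershed over-segmentation whose regions later become the binary variables of the MRF (not shown here), can be sketched as follows; the gradient threshold used to seed the markers is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_oversegmentation(image_rgb, marker_thresh=0.05):
    """Return a label map of watershed regions computed on the intensity gradient."""
    gradient = sobel(rgb2gray(image_rgb))
    markers, _ = ndi.label(gradient < marker_thresh)   # seeds in low-gradient areas
    return watershed(gradient, markers)
```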
Affiliation(s)
- Amir Alush: Department of Biomedical Engineering, Tel-Aviv University, 69978 Tel Aviv, Israel
25. Shape priors for segmentation of the cervix region within uterine cervix images. J Digit Imaging 2008; 22:286-96. [PMID: 18704582] [DOI: 10.1007/s10278-008-9134-z]
Abstract
The work focuses on a unique medical repository of digital uterine cervix images ("cervigrams") collected by the National Cancer Institute (NCI), National Institute of Health, in longitudinal multiyear studies. NCI together with the National Library of Medicine is developing a unique web-based database of the digitized cervix images to study the evolution of lesions related to cervical cancer. Tools are needed for the automated analysis of the cervigram content to support the cancer research. In recent works, a multistage automated system for segmenting and labeling regions of medical and anatomical interest within the cervigrams was developed. The current paper concentrates on incorporating prior-shape information in the cervix region segmentation task. In accordance with the fact that human experts mark the cervix region as circular or elliptical, two shape models (and corresponding methods) are suggested. The shape models are embedded within an active contour framework that relies on image features. Experiments indicate that incorporation of the prior shape information augments previous results.