301
Shahid AH, Singh M. Computational intelligence techniques for medical diagnosis and prognosis: Problems and current developments. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.05.010]
302
Performance Analysis of Low-Level and High-Level Intuitive Features for Melanoma Detection. Electronics 2019. [DOI: 10.3390/electronics8060672]
Abstract
This paper presents an intelligent approach for the detection of melanoma, a deadly skin cancer. The first step in this direction is the extraction of the textural features of the skin lesion along with its color features. The extracted features are used to train multilayer feed-forward artificial neural networks, and the trained networks are evaluated on the classification of test samples. This work entails three sets of experiments in which 50%, 70%, and 90% of the data are used for training, while the remaining 50%, 30%, and 10% constitute the test sets. Haralick's statistical parameters are computed for the extraction of textural features from the lesion. These parameters are based on Gray Level Co-occurrence Matrices (GLCM) with offsets of 2, 4, 8, 12, 16, 20, 24, and 28, each at angles of 0, 45, 90, and 135 degrees. To distill color features, we calculate the mean, median, and standard deviation of the three color planes of the region of interest. These features are fed to an Artificial Neural Network (ANN) for the detection of skin cancer. The combination of Haralick's parameters and color features has proven better than either feature set alone. Experimentation based on another set of features, namely the Asymmetry, Border irregularity, Color, and Diameter (ABCD) features usually observed by dermatologists, is also demonstrated. The 'D' feature is, however, modified and named Oblongness; it captures the ratio between the length and the width of the lesion. Furthermore, the use of the modified standard deviation coupled with the ABCD features improves the detection of melanoma to an accuracy of 93.7%.
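The Haralick pipeline summarized above (a GLCM at a given offset distance and angle, reduced to scalar texture statistics) can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the function names are my own, and the input image is assumed to be already quantized to `levels` gray levels.

```python
import numpy as np

def glcm(img, distance, angle_deg, levels):
    """Gray Level Co-occurrence Matrix for one (distance, angle) pair."""
    rad = np.deg2rad(angle_deg)
    # Pixel offset implied by the distance/angle pair (rounded to the grid).
    dr = int(round(-distance * np.sin(rad)))
    dc = int(round(distance * np.cos(rad)))
    out = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                out[img[r, c], img[r2, c2]] += 1
    out /= max(out.sum(), 1)  # normalise to a joint probability
    return out

def haralick_contrast(p):
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def haralick_energy(p):
    return float((p ** 2).sum())
```

Concatenating such statistics over all offset/angle pairs, plus the per-channel color statistics, yields the feature vector fed to the ANN.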
303
Kalwa U, Legner C, Kong T, Pandey S. Skin Cancer Diagnostics with an All-Inclusive Smartphone Application. Symmetry (Basel) 2019; 11:790. [DOI: 10.3390/sym11060790]
Abstract
Among the different types of skin cancer, melanoma is considered to be the deadliest and is difficult to treat at advanced stages. Detection of melanoma at earlier stages can lead to reduced mortality rates. Desktop-based computer-aided systems have been developed to assist dermatologists with early diagnosis. However, there is significant interest in developing portable, at-home melanoma diagnostic systems which can assess the risk of cancerous skin lesions. Here, we present a smartphone application that combines image capture capabilities with preprocessing and segmentation to extract the Asymmetry, Border irregularity, Color variegation, and Diameter (ABCD) features of a skin lesion. Using the feature sets, classification of malignancy is achieved through support vector machine classifiers. By using adaptive algorithms in the individual data-processing stages, our approach is made computationally light, user friendly, and reliable in discriminating melanoma cases from benign ones. Images of skin lesions are either captured with the smartphone camera or imported from public datasets. The entire process from image capture to classification runs on an Android smartphone equipped with a detachable 10x lens, and processes an image in less than a second. The overall performance metrics are evaluated on a public database of 200 images with Synthetic Minority Over-sampling Technique (SMOTE) (80% sensitivity, 90% specificity, 88% accuracy, and 0.85 area under curve (AUC)) and without SMOTE (55% sensitivity, 95% specificity, 90% accuracy, and 0.75 AUC). The evaluated performance metrics and computation times are comparable or better than previous methods. This all-inclusive smartphone application is designed to be easy-to-download and easy-to-navigate for the end user, which is imperative for the eventual democratization of such medical diagnostic systems.
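The SMOTE step used in the evaluation above can be illustrated with a minimal NumPy sketch (not the authors' implementation; the `smote` function and its parameters are illustrative): each synthetic minority sample is a random interpolation between a real minority sample and one of its k nearest minority-class neighbours.

```python
import numpy as np

def smote(minority, n_synthetic, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between a
    sample and one of its k nearest minority-class neighbours."""
    rng = rng or np.random.default_rng(0)
    minority = np.asarray(minority, dtype=float)
    n = len(minority)
    k = min(k, n - 1)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(minority[:, None, :] - minority[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_synthetic):
        i = int(rng.integers(n))
        j = neighbours[i, int(rng.integers(k))]
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the convex hull of the original minority data.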
Affiliation(s)
- Upender Kalwa, Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA
- Christopher Legner, Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA
- Taejoon Kong, Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA
- Santosh Pandey, Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA
304
Movahed RA, Mohammadi E, Orooji M. Automatic segmentation of sperm's parts in microscopic images of human semen smears using concatenated learning approaches. Comput Biol Med 2019; 109:242-253. [DOI: 10.1016/j.compbiomed.2019.04.032]
305
Khan MA, Akram T, Sharif M, Saba T, Javed K, Lali IU, Tanik UJ, Rehman A. Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microsc Res Tech 2019; 82:741-763. [PMID: 30768826] [DOI: 10.1002/jemt.23220]
Abstract
Skin cancer is among the deadliest types of cancer, and its incidence has grown extensively worldwide over the last decade. For accurate detection and classification of melanoma, several measures should be considered, including contrast stretching, irregularity measurement, and selection of the most discriminative features. Poor lesion contrast reduces segmentation accuracy and increases classification error. To overcome this problem, an efficient model for accurate border detection and classification is presented. The proposed model improves segmentation accuracy in its preprocessing phase by enhancing the contrast of the lesion area relative to the background. The enhanced 2D blue channel is selected for the construction of a saliency map, to which a threshold function is applied to produce the binary image. In addition, particle swarm optimization (PSO) based segmentation is utilized for accurate border detection and refinement. Shape, texture, local, and global features are extracted and then selected by a genetic algorithm, which has the advantage of identifying the fittest chromosome. Finally, the optimized features are fed into a support vector machine (SVM) for classification. Comprehensive experiments have been carried out on three datasets, namely PH2, ISBI2016, and ISIC (i.e., ISIC MSK-1, ISIC MSK-2, and ISIC UDA). Improved accuracies of 97.9%, 99.1%, 98.4%, and 93.8%, respectively, were obtained. The SVM outperforms on the selected datasets in terms of sensitivity, precision rate, accuracy, and FNR, and the selection method successfully removes redundant features.
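The genetic-algorithm feature selection described above (binary chromosomes marking kept features, with the fittest chromosome surviving) can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the toy fitness function stands in for the paper's classifier-driven fitness, and all names and hyperparameters are assumptions.

```python
import numpy as np

def ga_select(fitness, n_features, pop_size=20, generations=30, p_mut=0.05, rng=None):
    """Genetic algorithm over binary chromosomes; each bit marks a kept feature.
    `fitness` maps a boolean mask to a score to maximise."""
    rng = rng or np.random.default_rng(1)
    pop = rng.random((pop_size, n_features)) < 0.5
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = int(rng.integers(1, n_features))      # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_features) < p_mut     # bit-flip mutation
            children.append(child)
        pop = np.vstack([elite, *children])             # elitism: keep the best half
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]

# Toy fitness: reward keeping the first three "informative" features,
# penalise chromosome size (a stand-in for a classifier-accuracy fitness).
informative = np.zeros(10, dtype=bool)
informative[:3] = True
fit = lambda m: (m & informative).sum() - 0.1 * m.sum()
best = ga_select(fit, 10)
```

In the paper the fitness would instead be driven by SVM performance on the candidate feature subset; the search mechanics are the same.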
Affiliation(s)
- Muhammad Attique Khan, Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
- Tallha Akram, Department of Electrical Engineering, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Kashif Javed, Department of Robotics, SMME NUST, Islamabad, Pakistan
- Ikram Ullah Lali, Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Urcun John Tanik, Computer Science and Information Systems, Texas A&M University-Commerce, USA
- Amjad Rehman, Department of Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
306
Hosseinzadeh Kassani S, Hosseinzadeh Kassani P. A comparative study of deep learning architectures on melanoma detection. Tissue Cell 2019; 58:76-83. [DOI: 10.1016/j.tice.2019.04.009]
307
Hosny KM, Kassem MA, Foaud MM. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS One 2019; 14:e0217293. [PMID: 31112591] [PMCID: PMC6529006] [DOI: 10.1371/journal.pone.0217293]
Abstract
Skin cancer is one of the deadliest diseases in humans. Owing to the high similarity between melanoma and nevus lesions, physicians spend much more time investigating these lesions. Automated classification of skin lesions will save effort, time, and human life. The purpose of this paper is to present an automatic skin-lesion classification system with a high classification rate using transfer learning and a pre-trained deep neural network. Transfer learning has been applied to Alex-net in different ways, including fine-tuning the weights of the architecture, replacing the classification layer with a softmax layer that works with two or three kinds of skin lesions, and augmenting the dataset with fixed and random rotation angles. The new softmax layer can classify the segmented color lesion images into melanoma and nevus, or into melanoma, seborrheic keratosis, and nevus. Three well-known datasets, MED-NODE, Derm (IS & Quest), and ISIC, are used for testing and verifying the proposed method. The proposed DCNN weights have been fine-tuned using the ISIC training and testing datasets, in addition to 10-fold cross-validation for MED-NODE and DermIS and DermQuest. Accuracy, sensitivity, specificity, and precision are used to evaluate the performance of the proposed method and existing methods. On MED-NODE, Derm (IS & Quest), and ISIC, the proposed method achieved accuracies of 96.86%, 97.70%, and 95.91%, respectively, outperforming existing skin cancer classification methods.
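The rotation-based augmentation mentioned above can be illustrated with a dependency-free NumPy sketch (not the authors' code). Only the three lossless 90-degree rotations are shown; the paper's arbitrary fixed and random angles would additionally require interpolation (e.g. `scipy.ndimage.rotate`).

```python
import numpy as np

def augment_rotations(images, labels, rng=None):
    """Augment a batch with the three 90-degree rotations of each image.
    Rotation preserves the lesion class, so labels are simply duplicated."""
    rng = rng or np.random.default_rng(0)
    aug_imgs, aug_lbls = list(images), list(labels)
    for img, lbl in zip(images, labels):
        for k in (1, 2, 3):                    # 90, 180, 270 degrees
            aug_imgs.append(np.rot90(img, k))
            aug_lbls.append(lbl)
    order = rng.permutation(len(aug_imgs))     # shuffle before training
    return [aug_imgs[i] for i in order], [aug_lbls[i] for i in order]
```

Each input image thus yields four training samples, which is one way such augmentation multiplies a small dermoscopy dataset before fine-tuning.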
Affiliation(s)
- Khalid M. Hosny, Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig, Egypt
- Mohamed M. Foaud, Department of Electronics and Communication, Faculty of Engineering, Zagazig University, Zagazig, Egypt
308
Medical image classification using synergic deep learning. Med Image Anal 2019; 54:10-19. [DOI: 10.1016/j.media.2019.02.010]
309
Barata C, Celebi ME, Marques JS. A Survey of Feature Extraction in Dermoscopy Image Analysis of Skin Cancer. IEEE J Biomed Health Inform 2019; 23:1096-1109. [DOI: 10.1109/jbhi.2018.2845939]
310
Pathan S, Gopalakrishna Prabhu K, Siddalingaswamy P. Automated detection of melanocytes related pigmented skin lesions: A clinical framework. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.02.013]
311
Wu M, Wang Q, Rigall E, Li K, Zhu W, He B, Yan T. ECNet: Efficient Convolutional Networks for Side Scan Sonar Image Segmentation. Sensors 2019; 19:2009. [PMID: 31035673] [PMCID: PMC6540294] [DOI: 10.3390/s19092009]
Abstract
This paper presents a novel and practical convolutional neural network architecture for semantic segmentation of side scan sonar (SSS) images. As a widely used sensor for marine survey, SSS provides high-resolution images of the seafloor and underwater targets. However, because background pixels make up a large proportion of SSS images, class imbalance remains an issue; moreover, SSS images contain undesirable speckle noise and intensity inhomogeneity. We define and detail a network and training strategy that tackle these three important issues for SSS image segmentation. Our proposed method performs image-to-image prediction by leveraging fully convolutional neural networks and deeply-supervised nets. The architecture consists of an encoder network to capture context, a corresponding decoder network to restore full input-size resolution feature maps from low-resolution ones for pixel-wise classification, and a single-stream deep neural network with multiple side-outputs to optimize edge segmentation. We measured the prediction time of our network on our dataset, implemented on an NVIDIA Jetson AGX Xavier, and compared it to other similar semantic segmentation networks. The experimental results show that the presented method for SSS image segmentation brings clear advantages and is applicable to real-time processing tasks.
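One common way to tackle the class imbalance noted above is to weight the pixel-wise loss by inverse class frequency, so that rare target pixels contribute more than abundant background pixels. The paper's exact training strategy is not reproduced here; the following NumPy sketch is an illustrative stand-in.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency weights, normalised; rare classes get larger weight."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    w = counts.sum() / np.maximum(counts, 1)
    return w / w.sum()

def weighted_pixel_ce(probs, labels, weights):
    """Pixel-wise cross-entropy where each pixel's loss is scaled by the
    weight of its ground-truth class. probs: (H, W, C), labels: (H, W)."""
    p_true = np.take_along_axis(probs, labels[..., None], axis=-1)[..., 0]
    p_true = np.clip(p_true, 1e-12, 1.0)
    return float(-(weights[labels] * np.log(p_true)).mean())
```

With weights computed once over the training masks, background mistakes are discounted and target-pixel mistakes are amplified, which counteracts the skew toward the seafloor background.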
Affiliation(s)
- Meihan Wu, School of Information Science and Engineering, Ocean University of China, Qingdao, Shandong 266000, China
- Qi Wang, School of Information Science and Engineering, Ocean University of China, Qingdao, Shandong 266000, China
- Eric Rigall, School of Information Science and Engineering, Ocean University of China, Qingdao, Shandong 266000, China
- Kaige Li, School of Information Science and Engineering, Ocean University of China, Qingdao, Shandong 266000, China
- Wenbo Zhu, School of Information Science and Engineering, Ocean University of China, Qingdao, Shandong 266000, China
- Bo He, School of Information Science and Engineering, Ocean University of China, Qingdao, Shandong 266000, China
- Tianhong Yan, School of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
312
Zhu Z, Albadawy E, Saha A, Zhang J, Harowicz MR, Mazurowski MA. Deep learning for identifying radiogenomic associations in breast cancer. Comput Biol Med 2019; 109:85-90. [PMID: 31048129] [DOI: 10.1016/j.compbiomed.2019.04.018]
Abstract
RATIONALE AND OBJECTIVES To determine whether deep learning models can distinguish between breast cancer molecular subtypes based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MATERIALS AND METHODS In this institutional review board-approved single-center study, we analyzed DCE-MR images of 270 patients at our institution. Lesions of interest were identified by radiologists. The task was to automatically determine whether the tumor is of the Luminal A subtype or of another subtype based on the MR image patches representing the tumor. Three different deep learning approaches were used to classify the tumor according to its molecular subtype: learning from scratch, where only tumor patches were used for training; transfer learning, where networks pre-trained on natural images were fine-tuned using tumor patches; and off-the-shelf deep features, where features extracted by neural networks trained on natural images were used for classification with a support vector machine. The network architectures utilized in our experiments were GoogleNet, VGG, and CIFAR. We used 10-fold cross-validation and the area under the receiver operating characteristic curve (AUC) as the measure of performance. RESULTS The best AUC for distinguishing molecular subtypes was 0.65 (95% CI: [0.57, 0.71]), achieved by the off-the-shelf deep features approach. The highest AUC for training from scratch was 0.58 (95% CI: [0.51, 0.64]), and the best AUC for transfer learning was 0.60 (95% CI: [0.52, 0.65]). For the off-the-shelf approach, the features extracted from the fully connected layer performed best. CONCLUSION Deep learning may play a role in discovering radiogenomic associations in breast cancer.
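The AUC used as the performance measure above can be computed without tracing an explicit ROC curve, via the Mann-Whitney U statistic: it equals the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counting one half. A minimal NumPy sketch (illustrative, not the study's code):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive outscores negative
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count one half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A value of 0.5 corresponds to chance-level discrimination, matching the interpretation of the 0.58-0.65 range reported above as modest but above-chance separation of subtypes.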
Affiliation(s)
- Zhe Zhu, Department of Radiology, Duke University, USA
- Jun Zhang, Department of Radiology, Duke University, USA
- Maciej A Mazurowski, Department of Radiology and Department of Electrical and Computer Engineering, Duke University, USA
313
Tang H, Xu X, Xiao W, Liao Y, Xiao X, Li L, Li K, Jia X, Feng H. Silencing of microRNA-27a facilitates autophagy and apoptosis of melanoma cells through the activation of the SYK-dependent mTOR signaling pathway. J Cell Biochem 2019; 120:13262-13274. [PMID: 30994959] [DOI: 10.1002/jcb.28600]
Abstract
Melanoma is an aggressive neoplasm with high metastatic potential. Although some studies have provided targets for novel therapeutic interventions, the clinical development of targeted drugs for melanoma remains limited. Therefore, this study aims to identify the role of microRNA-27a (miR-27a) in the autophagy and apoptosis of melanoma cells by regulating the spleen tyrosine kinase (SYK)-mediated mammalian target of rapamycin (mTOR) signaling pathway. A microarray-based analysis was performed to screen differentially expressed genes and predict the target miRNA. Melanoma specimens were collected, with pigmented nevus as a control. The melanoma cell line Mel-RM was treated with miR-27a inhibitor or pcDNA-SYK to establish their effects on the autophagy and apoptosis of melanoma cells. The tumor volume and mass of nude mice in each group were measured in a tumorigenesis assay. The microarray analysis showed that SYK was lowly expressed in melanoma cells and may be regulated by miR-27a. In addition, miR-27a expression was increased whereas SYK expression was decreased in melanoma tissues, and miR-27a was positively correlated with tumor stage and lymph node metastasis. Furthermore, miR-27a targeted SYK, and silencing of miR-27a or overexpression of SYK promoted autophagy and apoptosis of melanoma cells and reduced their tumorigenic ability in vivo. In conclusion, this study shows that silencing of miR-27a facilitates autophagy and apoptosis of melanoma cells by upregulating SYK expression and activating the mTOR signaling pathway. The findings offer new ideas for the clinical treatment of melanoma.
Affiliation(s)
- Hua Tang, Xiaopeng Xu, Weirong Xiao, Yangying Liao, Xiao Xiao, Lan Li, Ke Li, Hao Feng: Department of Dermatology, Hunan Provincial People's Hospital, The First Affiliated Hospital of Hunan Normal University, Changsha, P.R. China
- Xiaomin Jia: Department of Pathology, Hunan Provincial People's Hospital, The First Affiliated Hospital of Hunan Normal University, Changsha, P.R. China
314
Zhang L, Yang G, Ye X. Automatic skin lesion segmentation by coupling deep fully convolutional networks and shallow network with textons. J Med Imaging (Bellingham) 2019; 6:024001. [PMID: 31001568] [PMCID: PMC6462764] [DOI: 10.1117/1.jmi.6.2.024001]
Abstract
Segmentation of skin lesions is an important step in computer-aided diagnosis of melanoma; it is also a very challenging task due to fuzzy lesion boundaries and heterogeneous lesion textures. We present a fully automatic method for skin lesion segmentation based on deep fully convolutional networks (FCNs). We investigate a shallow encoding network to model clinically valuable prior knowledge, in which spatial filters simulating the receptive-field function of simple cells in the primary visual cortex (V1) are considered. An effective fusing strategy using skip connections and convolution operators is then leveraged to couple the prior knowledge encoded by the shallow network with the hierarchical data-driven features learned by the FCNs for detailed segmentation of skin lesions. To the best of our knowledge, this is the first time domain-specific hand-crafted features have been built into a deep network trained end-to-end for skin lesion segmentation. The method has been evaluated on both the ISBI 2016 and ISBI 2017 skin lesion challenge datasets. We provide comparative evidence that our newly designed network gains segmentation accuracy by coupling the prior knowledge encoded by the shallow network with the deep FCNs. Our method is robust without the need for data augmentation or comprehensive parameter tuning, and the experimental results show great promise, with effective model generalization compared to other state-of-the-art methods.
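Spatial filters modeling V1 simple-cell receptive fields, as used in the shallow prior-knowledge network above, are classically realized as Gabor kernels: a sinusoidal carrier under a Gaussian envelope, at several orientations. The paper's exact filters are not specified here; this NumPy sketch is illustrative and all parameter values are assumptions.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, phase=0.0):
    """2-D Gabor kernel: sinusoidal carrier under a Gaussian envelope,
    a standard model of V1 simple-cell receptive fields."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to orientation theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    g = envelope * carrier
    return g - g.mean()                           # zero mean, i.e. band-pass

# A small oriented filter bank (4 orientations over [0, pi)).
bank = [gabor_kernel(15, wavelength=6.0, theta=t, sigma=3.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

Convolving the input image with such a bank yields orientation-selective responses that a shallow encoder can pass, via skip connections, into the deep FCN stream.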
Affiliation(s)
- Lei Zhang, University of Lincoln, Laboratory of Vision Engineering, School of Computer Science, Lincoln, United Kingdom
- Guang Yang, Royal Brompton Hospital, Imperial College London and Cardiovascular Research Centre, National Heart and Lung Institute, London, United Kingdom
- Xujiong Ye, University of Lincoln, Laboratory of Vision Engineering, School of Computer Science, Lincoln, United Kingdom
315
Nida N, Irtaza A, Javed A, Yousaf MH, Mahmood MT. Melanoma lesion detection and segmentation using deep region based convolutional neural network and fuzzy C-means clustering. Int J Med Inform 2019; 124:37-48. [DOI: 10.1016/j.ijmedinf.2019.01.005]
316
Yu Z, Jiang X, Zhou F, Qin J, Ni D, Chen S, Lei B, Wang T. Melanoma Recognition in Dermoscopy Images via Aggregated Deep Convolutional Features. IEEE Trans Biomed Eng 2019; 66:1006-1016. [DOI: 10.1109/tbme.2018.2866166]
317
Miyagi Y, Habara T, Hirata R, Hayashi N. Feasibility of deep learning for predicting live birth from a blastocyst image in patients classified by age. Reprod Med Biol 2019; 18:190-203. [PMID: 30996683] [PMCID: PMC6452012] [DOI: 10.1002/rmb2.12266]
Abstract
PURPOSE To identify artificial intelligence (AI) classifiers of blastocyst images that predict the probability of achieving a live birth in patients classified by age, and to compare the results with those obtained by conventional embryo (CE) evaluation. METHODS A total of 5691 blastocysts were retrospectively enrolled. Images captured 115 hours after insemination (or 139 hours if the blastocyst was not yet large enough) were classified according to maternal age as follows: <35, 35-37, 38-39, 40-41, and ≥42 years. Classifiers for each category, and one for all ages, were built with convolutional neural networks using deep learning. The live birth functions predicted by the AI and the multivariate logistic model functions predicted by CE were then tested, and the feasibility of the AI was investigated. RESULTS The AI/CE accuracies for predicting live birth were 0.64/0.61, 0.71/0.70, 0.78/0.77, 0.81/0.83, 0.88/0.94, and 0.72/0.74 for the age categories <35, 35-37, 38-39, 40-41, ≥42 years, and all ages, respectively. The sum of the sensitivity and specificity revealed that AI performed better than CE (P = 0.01). CONCLUSIONS AI classifiers categorized by age can predict the probability of live birth from a blastocyst image and produced better results than CE.
Affiliation(s)
- Yasunari Miyagi, Medical Data Labo, Okayama City, Japan; Department of Gynecologic Oncology, Saitama Medical University International Medical Center, Hidaka City, Japan
- Rei Hirata, Okayama Couple's Clinic, Okayama City, Japan
318
Gonzalez-Diaz I. DermaKNet: Incorporating the Knowledge of Dermatologists to Convolutional Neural Networks for Skin Lesion Diagnosis. IEEE J Biomed Health Inform 2019; 23:547-559. [DOI: 10.1109/jbhi.2018.2806962]
319
Navarro F, Escudero-Vinolo M, Bescos J. Accurate Segmentation and Registration of Skin Lesion Images to Evaluate Lesion Change. IEEE J Biomed Health Inform 2019; 23:501-508. [DOI: 10.1109/jbhi.2018.2825251]
320
Kawahara J, Hamarneh G. Fully Convolutional Neural Networks to Detect Clinical Dermoscopic Features. IEEE J Biomed Health Inform 2019; 23:578-585. [DOI: 10.1109/jbhi.2018.2831680]
321
Yuan Y, Lo YC. Improving Dermoscopic Image Segmentation With Enhanced Convolutional-Deconvolutional Networks. IEEE J Biomed Health Inform 2019; 23:519-526. [DOI: 10.1109/jbhi.2017.2787487]
322
Cloud-Based Skin Lesion Diagnosis System Using Convolutional Neural Networks. Advances in Intelligent Systems and Computing 2019. [DOI: 10.1007/978-3-030-22871-2_70]
323
Alom MZ, Yakopcic C, Hasan M, Taha TM, Asari VK. Recurrent residual U-Net for medical image segmentation. J Med Imaging (Bellingham) 2019; 6:014006. [PMID: 30944843] [PMCID: PMC6435980] [DOI: 10.1117/1.jmi.6.1.014006]
Abstract
Deep learning (DL)-based semantic segmentation methods have provided state-of-the-art performance in the past few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One DL technique, U-Net, has become one of the most popular for these applications. We propose a recurrent U-Net model and a recurrent residual U-Net model, named RU-Net and R2U-Net, respectively. The proposed models combine the strengths of U-Net, residual networks, and recurrent convolutional neural networks. There are several advantages to using these architectures for segmentation tasks. First, residual units help when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation. Third, they allow us to design better U-Net architectures with the same number of network parameters but better performance for medical image segmentation. The proposed models are tested on three benchmark tasks: blood vessel segmentation in retinal images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including the fully convolutional encoder-decoder network SegNet, U-Net, and residual U-Net.
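The recurrent residual idea above, feature accumulation by iterating a convolution whose output is repeatedly recombined with the unit's input response before a residual shortcut, can be illustrated in one dimension with plain NumPy. This is a conceptual sketch, not the R2U-Net implementation (which uses stacked 2-D convolutions with learned weights and batch normalization).

```python
import numpy as np

def conv1d_same(x, w):
    """'same'-padded single-channel 1-D convolution (correlation form)."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

def recurrent_residual_unit(x, w, steps=2):
    """Recurrent convolution accumulates features over `steps` iterations,
    h_t = relu(conv(h_{t-1}) + conv(x)), then a residual shortcut adds the
    unit input back, mirroring R2U-Net's recurrent residual block in 1-D."""
    fixed = conv1d_same(x, w)          # fixed feed-forward response of the input
    h = np.maximum(fixed, 0.0)
    for _ in range(steps):
        h = np.maximum(conv1d_same(h, w) + fixed, 0.0)
    return x + h                       # residual connection
```

With an identity kernel the unit reduces to repeated accumulation of the input response plus the shortcut, which makes the feature-accumulation behavior easy to verify by hand.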
Affiliation(s)
- Md Zahangir Alom, University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Chris Yakopcic, University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Tarek M. Taha, University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Vijayan K. Asari, University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
324
Carcagnì P, Leo M, Cuna A, Mazzeo PL, Spagnolo P, Celeste G, Distante C. Classification of Skin Lesions by Combining Multilevel Learnings in a DenseNet Architecture. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-30642-7_30]
325
|
Chi Y, Bi L, Kim J, Feng D, Kumar A. Controlled Synthesis of Dermoscopic Images via a New Color Labeled Generative Style Transfer Network to Enhance Melanoma Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:2591-2594. [PMID: 30440938 DOI: 10.1109/embc.2018.8512842] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Dermoscopic imaging is an established technique to detect, track, and diagnose malignant melanoma, and one of the ways to improve this technique is via computer-aided image segmentation. Image segmentation is an important step towards building computerized detection and classification systems by delineating the area of interest, in our case, the skin lesion, from the background. However, current segmentation techniques are hard pressed to account for color artifacts within dermoscopic images that are often incorrectly detected as part of the lesion. Often there are few annotated examples of these artifacts, which limits training segmentation methods like the fully convolutional network (FCN) due to the skewed dataset. We propose to improve FCN training by augmenting the dataset with synthetic images created in a controlled manner using a generative adversarial network (GAN). Our novelty lies in the use of a color label (CL) to specify the different characteristics (approximate size, location, and shape) of the different regions (skin, lesion, artifacts) in the synthetic images. Our GAN is trained to perform style transfer of real melanoma image characteristics (e.g. texture) onto these color labels, allowing us to generate specific types of images containing artifacts. Our experimental results demonstrate that the synthetic images generated by our technique have a lower mean average error when compared to synthetic images generated using traditional binary labels. As a consequence, we demonstrated improvements in melanoma image segmentation when using synthetic images generated by our technique.
|
326
|
A Multi-tree Genetic Programming Representation for Melanoma Detection Using Local and Global Features. ACTA ACUST UNITED AC 2018. [DOI: 10.1007/978-3-030-03991-2_12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
|
327
|
Santos IP, van Doorn R, Caspers PJ, Bakker Schut TC, Barroso EM, Nijsten TEC, Noordhoek Hegt V, Koljenović S, Puppels GJ. Improving clinical diagnosis of early-stage cutaneous melanoma based on Raman spectroscopy. Br J Cancer 2018; 119:1339-1346. [PMID: 30410059 PMCID: PMC6265324 DOI: 10.1038/s41416-018-0257-9] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Revised: 08/06/2018] [Accepted: 08/17/2018] [Indexed: 12/31/2022] Open
Abstract
Background Clinical diagnosis of early melanoma (Breslow thickness less than 0.8 mm) is crucial to disease-free survival. However, it is subjective and can be exceedingly difficult, leading to missed melanomas, or unnecessary excision of benign pigmented skin lesions. An objective technique is needed to improve the diagnosis of early melanoma. Methods We have developed a method to improve diagnosis of (thin) melanoma, based on Raman spectroscopy. In an ex vivo study in a tertiary referral (pigmented lesions) centre, high-wavenumber Raman spectra were collected from 174 freshly excised melanocytic lesions suspicious for melanoma. Measurements were performed on multiple locations within the lesions. A diagnostic model was developed and validated on an independent data set of 96 lesions. Results Approximately 60% of the melanomas included in this study were melanomas in situ. The invasive melanomas had an average Breslow thickness of 0.89 mm. The diagnostic model correctly classified all melanomas (including in situ) with a specificity of 43.8%, and showed a potential improvement of the number needed to treat from 6.0 to 2.7, at a sensitivity of 100%. Conclusion This work signifies an important step towards accurate and objective clinical diagnosis of melanoma and in particular melanoma with Breslow thickness <0.8 mm.
Affiliation(s)
- Inês P Santos
- Department of Dermatology, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Remco van Doorn
- Department of Dermatology, Leiden University Medical Center, Leiden, Netherlands
- Peter J Caspers
- Department of Dermatology, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Tom C Bakker Schut
- Department of Dermatology, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Elisa M Barroso
- Department of Oral & Maxillofacial Surgery, Special Dental Care, and Orthodontics, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Tamar E C Nijsten
- Department of Dermatology, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Vincent Noordhoek Hegt
- Department of Pathology, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Senada Koljenović
- Department of Pathology, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Gerwin J Puppels
- Department of Dermatology, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
|
328
|
Harangi B. Skin lesion classification with ensembles of deep convolutional neural networks. J Biomed Inform 2018; 86:25-32. [DOI: 10.1016/j.jbi.2018.08.006] [Citation(s) in RCA: 175] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Revised: 06/14/2018] [Accepted: 08/07/2018] [Indexed: 11/25/2022]
|
329
|
Improving the performance of convolutional neural network for skin image classification using the response of image analysis filters. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3711-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
330
|
Rebouças Filho PP, Peixoto SA, Medeiros da Nóbrega RV, Hemanth DJ, Medeiros AG, Sangaiah AK, de Albuquerque VHC. Automatic histologically-closer classification of skin lesions. Comput Med Imaging Graph 2018; 68:40-54. [DOI: 10.1016/j.compmedimag.2018.05.004] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Revised: 05/20/2018] [Accepted: 05/29/2018] [Indexed: 10/14/2022]
|
331
|
Al-antari MA, Al-masni MA, Choi MT, Han SM, Kim TS. A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification. Int J Med Inform 2018; 117:44-54. [DOI: 10.1016/j.ijmedinf.2018.06.003] [Citation(s) in RCA: 184] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 05/22/2018] [Accepted: 06/06/2018] [Indexed: 11/28/2022]
|
332
|
Skin Lesion Segmentation Method for Dermoscopy Images Using Artificial Bee Colony Algorithm. Symmetry (Basel) 2018. [DOI: 10.3390/sym10080347] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
The incidence of melanoma is rising rapidly, and with it the death rate. If melanoma is diagnosed in Phase I, however, survival rates increase markedly. Segmentation of the melanoma is one of the most difficult tasks to achieve, with both under- and over-segmentation to contend with. In this work, a new approach based on the artificial bee colony (ABC) algorithm is proposed for the detection of melanoma from digital images. The method is simple, fast, flexible, and requires fewer parameters than competing algorithms. The proposed approach is applied to the PH2, ISBI 2016 challenge, ISBI 2017 challenge, and Dermis datasets. These datasets contain images affected by different abnormalities, collected from different sources, and varying in resolution, lighting, etc., so in the first step noise was removed from the images using morphological filtering. In the next step, the ABC algorithm is used to find the optimum threshold value for melanoma detection. The proposed approach achieved good results under conditions of high specificity. The experimental results suggest that the proposed method accomplishes higher performance relative to the ground-truth images provided by a dermatologist. For melanoma detection, the method achieved average accuracy and Jaccard's coefficient in the ranges of 95.24–97.61% and 83.56–85.25% across these four databases. To show the robustness of this work, the results were compared to existing melanoma-detection methods in the literature. High estimation-performance values confirm that the proposed melanoma detection is better than other algorithms, demonstrating the highly differential power of the newly introduced features.
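The thresholding step in this abstract admits a compact stand-in: an exhaustive search for the grey level that maximizes Otsu's between-class variance over a histogram. The paper drives this search with the artificial bee colony metaheuristic instead; the Otsu criterion and the toy histogram below are assumptions for illustration only.

```python
def between_class_variance(hist, t):
    # Otsu's criterion: weighted squared distance between the mean
    # grey levels of the two classes split at threshold t.
    total = sum(hist)
    w0 = sum(hist[:t])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * hist[i] for i in range(t)) / w0
    mu1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
    return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

def best_threshold(hist):
    # Exhaustive search over candidate thresholds; an ABC-style
    # search would sample candidates instead of enumerating them all.
    return max(range(1, len(hist)), key=lambda t: between_class_variance(hist, t))
```

On a clearly bimodal histogram the maximizing threshold falls between the two modes, which is the separation the segmentation step relies on.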
|
333
|
Połap D, Winnicka A, Serwata K, Kęsik K, Woźniak M. An Intelligent System for Monitoring Skin Diseases. SENSORS 2018; 18:s18082552. [PMID: 30081540 PMCID: PMC6111999 DOI: 10.3390/s18082552] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2018] [Revised: 07/27/2018] [Accepted: 08/02/2018] [Indexed: 01/06/2023]
Abstract
Rapidly growing practical interest in intelligent technologies has driven fast development of sensors and automatic mechanisms for smart operations. Implementations concentrate on technologies that spare the user unnecessary actions while examining health conditions. One important aspect is constant inspection of skin health, given possible diseases such as melanomas that can develop under excessive exposure to sunlight. Smart homes can be equipped with a variety of motion sensors and cameras that can be used to detect and identify possible disease development. In this work, we present a smart home system that uses in-built sensors and the proposed artificial intelligence methods to diagnose the skin health of the house's residents. The proposed solution has been tested, and its potential use in practice is discussed.
Affiliation(s)
- Dawid Połap
- Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
- Alicja Winnicka
- Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
- Kalina Serwata
- Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
- Karolina Kęsik
- Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
- Marcin Woźniak
- Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
|
334
|
Ashour AS, Guo Y, Kucukkulahli E, Erdogmus P, Polat K. A hybrid dermoscopy images segmentation approach based on neutrosophic clustering and histogram estimation. Appl Soft Comput 2018. [DOI: 10.1016/j.asoc.2018.05.003] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|
335
|
Al-Masni MA, Al-Antari MA, Choi MT, Han SM, Kim TS. Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 162:221-231. [PMID: 29903489 DOI: 10.1016/j.cmpb.2018.05.027] [Citation(s) in RCA: 146] [Impact Index Per Article: 20.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2018] [Revised: 04/30/2018] [Accepted: 05/17/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic segmentation of skin lesions in dermoscopy images is still a challenging task due to the large shape variations and indistinct boundaries of the lesions. Accurate segmentation of skin lesions is a key prerequisite step for any computer-aided diagnostic system to recognize skin melanoma. METHODS In this paper, we propose a novel segmentation methodology via full resolution convolutional networks (FrCN). The proposed FrCN method directly learns the full resolution features of each individual pixel of the input data without the need for pre- or post-processing operations such as artifact removal, low contrast adjustment, or further enhancement of the segmented skin lesion boundaries. We evaluated the proposed method using two publicly available databases, the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 Challenge and PH2 datasets. To evaluate the proposed method, we compared the segmentation performance with the latest deep learning segmentation approaches such as the fully convolutional network (FCN), U-Net, and SegNet. RESULTS Our results showed that the proposed FrCN method segmented the skin lesions with an average Jaccard index of 77.11% and an overall segmentation accuracy of 94.03% for the ISBI 2017 test dataset and 84.79% and 95.08%, respectively, for the PH2 dataset. In comparison to FCN, U-Net, and SegNet, the proposed FrCN outperformed them by 4.94%, 15.47%, and 7.48% for the Jaccard index and 1.31%, 3.89%, and 2.27% for the segmentation accuracy, respectively. Furthermore, the proposed FrCN achieved a segmentation accuracy of 95.62% for some representative clinical benign cases, 90.78% for the melanoma cases, and 91.29% for the seborrheic keratosis cases in the ISBI 2017 test dataset, exhibiting better performance than those of FCN, U-Net, and SegNet. 
CONCLUSIONS We conclude that using the full spatial resolution of the input image enables the network to learn more specific and prominent features, leading to improved segmentation performance.
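The Jaccard index quoted throughout these comparisons is simple to compute from binary masks; a minimal sketch, with masks flattened to 0/1 lists (an assumption for brevity):

```python
def jaccard_index(pred, truth):
    # Intersection over union of two flattened binary masks.
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```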
Affiliation(s)
- Mohammed A Al-Masni
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Mugahed A Al-Antari
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Mun-Taek Choi
- School of Mechanical Engineering, Sungkyunkwan University, Republic of Korea
- Seung-Moo Han
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Tae-Seong Kim
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
|
336
|
Codella NCF, Anderson D, Philips T, Porto A, Massey K, Snowdon J, Feris R, Smith J. Segmentation of Both Diseased and Healthy Skin From Clinical Photographs in a Primary Care Setting. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:3414-3417. [PMID: 30441121 DOI: 10.1109/embc.2018.8512980] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This work presents the first segmentation study of both diseased and healthy skin in standard camera photographs from a clinical environment. Challenges arise from varied lighting conditions, skin types, backgrounds, and pathological states. For study, 400 clinical photographs (with skin segmentation masks) representing various pathological states of skin are retrospectively collected from a primary care network. 100 images are used for training and fine-tuning, and 300 are used for evaluation. This distribution between training and test partitions is chosen to reflect the difficulty in amassing large quantities of labeled data in this domain. A deep learning approach is used, and 3 public segmentation datasets of healthy skin are collected to study the potential benefits of pretraining. Two variants of U-Net are evaluated: U-Net and Dense Residual U-Net. We find that Dense Residual U-Nets have a 7.8% improvement in Jaccard, compared to classical U-Net architectures (0.55 vs. 0.51 Jaccard), for direct transfer, where fine-tuning data is not utilized. However, U-Net outperforms Dense Residual U-Net for both direct training (0.83 vs. 0.80) and fine-tuning (0.89 vs. 0.88). The stark performance improvement with fine-tuning compared to direct transfer and direct training emphasizes both the need for adequate representative data of diseased skin, and the utility of other publicly available data sources for this task.
|
337
|
Khan MA, Akram T, Sharif M, Shahzad A, Aurangzeb K, Alhussein M, Haider SI, Altamrah A. An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer 2018; 18:638. [PMID: 29871593 PMCID: PMC5989438 DOI: 10.1186/s12885-018-4465-8] [Citation(s) in RCA: 67] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2017] [Accepted: 04/30/2018] [Indexed: 01/28/2023] Open
Abstract
BACKGROUND Melanoma is the deadliest type of skin cancer, with the highest mortality rate. Eradicating it at an early stage, however, implies a high survival rate; it therefore demands early diagnosis. The accustomed diagnostic methods are costly and cumbersome owing to the involvement of experienced experts and the need for a highly equipped environment. Recent advances in computerized solutions for this diagnosis are highly promising, with improved accuracy and efficiency. METHODS In this article, a method for identification and classification of lesions based on probabilistic distributions and best-feature selection is proposed. Probabilistic distributions such as the normal and uniform distributions are implemented for segmentation of the lesion in dermoscopic images. Multi-level features are then extracted, and a parallel strategy is performed for fusion. A novel entropy-based method combining Bhattacharyya distance and variance is calculated for the selection of the best features. Only the selected features are classified using a multi-class support vector machine, which is selected as the base classifier. RESULTS The proposed method is validated on three publicly available datasets, PH2, ISIC (i.e. ISIC MSK-2 and ISIC UDA), and Combined (ISBI 2016 and ISBI 2017), comprising multi-resolution RGB images, and achieved accuracies of 97.5%, 97.75%, and 93.2%, respectively. CONCLUSION The base classifier performs significantly better with the proposed feature fusion and selection method than with other methods in terms of sensitivity, specificity, and accuracy. Furthermore, the presented method achieved satisfactory segmentation results on the selected datasets.
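The Bhattacharyya distance used in this feature-selection step has a compact closed form for discrete distributions; a minimal sketch (the example distributions are illustrative, not the paper's feature vectors):

```python
import math

def bhattacharyya_distance(p, q):
    # The Bhattacharyya coefficient measures the overlap between two
    # discrete probability distributions; the distance is -log of it.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)
```

Identical distributions give a distance of zero; the distance grows as the distributions' overlap shrinks, which is what makes it usable as a feature-discrimination score.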
Affiliation(s)
- M. Attique Khan
- Department of Computer Science, COMSATS Institute of Information Technology, Wah, Pakistan
- Tallha Akram
- Department of Electrical Engineering, COMSATS Institute of Information Technology, Wah, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS Institute of Information Technology, Wah, Pakistan
- Aamir Shahzad
- Department of Electrical Engineering, COMSATS Institute of Information Technology, Abbottabad, Pakistan
- Khursheed Aurangzeb
- College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Department of Electrical Engineering, COMSATS Institute of Information Technology, Attock, Pakistan
- Musaed Alhussein
- College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Syed Irtaza Haider
- College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Abdualziz Altamrah
- College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
|
338
|
Jahanifar M, Zamani Tajeddin N, Mohammadzadeh Asl B, Gooya A. Supervised Saliency Map Driven Segmentation of Lesions in Dermoscopic Images. IEEE J Biomed Health Inform 2018; 23:509-518. [PMID: 29994323 DOI: 10.1109/jbhi.2018.2839647] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Lesion segmentation is the first step in most automatic melanoma recognition systems. Deficiencies and difficulties in dermoscopic images such as color inconstancy, hair occlusion, dark corners, and color charts make lesion segmentation an intricate task. In order to detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images, based on discriminative regional feature integration (DRFI). The DRFI method incorporates multilevel segmentation, regional contrast, property, and background descriptors, and a random forest regressor to create saliency scores for each region in the image. In our improved saliency detection method, mDRFI, we have added new features to the regional property descriptors. Also, in order to achieve more robust regional background descriptors, a thresholding algorithm is proposed to obtain a new pseudo-background region. Findings reveal that mDRFI is superior to DRFI in detecting the lesion as the salient object in dermoscopic images. The proposed overall lesion segmentation framework uses the detected saliency map to construct an initial mask of the lesion through thresholding and postprocessing operations. The initial mask is then evolved in a level-set framework to fit better on the lesion's boundaries. The results of evaluation tests on three public datasets show that our proposed segmentation method outperforms conventional state-of-the-art segmentation algorithms, and its performance is comparable with the most recent approaches based on deep convolutional neural networks.
|
339
|
Hair detection and lesion segmentation in dermoscopic images using domain knowledge. Med Biol Eng Comput 2018; 56:2051-2065. [DOI: 10.1007/s11517-018-1837-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2017] [Accepted: 04/23/2018] [Indexed: 10/16/2022]
|
340
|
Mo J, Zhang L, Feng Y. Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.02.035] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
341
|
Jin Y, Dou Q, Chen H, Yu L, Qin J, Fu CW, Heng PA. SV-RCNet: Workflow Recognition From Surgical Videos Using Recurrent Convolutional Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1114-1126. [PMID: 29727275 DOI: 10.1109/tmi.2017.2787657] [Citation(s) in RCA: 120] [Impact Index Per Article: 17.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We propose SV-RCNet, a novel recurrent convolutional network for automatic online workflow recognition from surgical videos, a key component in developing context-aware computer-assisted intervention systems. Different from previous methods, which harness visual and temporal information separately, the proposed SV-RCNet seamlessly integrates a convolutional neural network (CNN) and a recurrent neural network (RNN) into a novel recurrent convolutional architecture, in order to take full advantage of the complementary visual and temporal features learned from surgical videos. We train the SV-RCNet in an end-to-end manner so that the visual representations and sequential dynamics are jointly optimized during learning. In order to produce more discriminative spatio-temporal features, we exploit a deep residual network (ResNet) and a long short-term memory (LSTM) network to extract visual features and temporal dependencies, respectively, and integrate them into the SV-RCNet. Moreover, based on the phase-transition-sensitive predictions of the SV-RCNet, we propose a simple yet effective inference scheme, prior knowledge inference (PKI), that leverages the natural characteristics of surgical video. This strategy further improves the consistency of results and largely boosts recognition performance. Extensive experiments have been conducted on the MICCAI 2016 Modeling and Monitoring of Computer Assisted Interventions Workflow Challenge dataset and the Cholec80 dataset to validate SV-RCNet. Our approach not only achieves superior performance on these two datasets but also outperforms the state-of-the-art methods by a significant margin.
|
342
|
A Novel Skin Lesion Detection Approach Using Neutrosophic Clustering and Adaptive Region Growing in Dermoscopy Images. Symmetry (Basel) 2018. [DOI: 10.3390/sym10040119] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
|
343
|
Kawahara J, Daneshvar S, Argenziano G, Hamarneh G. 7-Point Checklist and Skin Lesion Classification using Multi-Task Multi-Modal Neural Nets. IEEE J Biomed Health Inform 2018; 23:538-546. [PMID: 29993994 DOI: 10.1109/jbhi.2018.2824327] [Citation(s) in RCA: 134] [Impact Index Per Article: 19.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
We propose a multi-task deep convolutional neural network, trained on multi-modal data (clinical and dermoscopic images, and patient meta-data), to classify the 7-point melanoma checklist criteria and perform skin lesion diagnosis. Our neural network is trained using several multi-task loss functions, where each loss considers different combinations of the input modalities, which allows our model to be robust to missing data at inference time. Our final model classifies the 7-point checklist and skin condition diagnosis, produces multi-modal feature vectors suitable for image retrieval, and localizes clinically discriminant regions. We benchmark our approach using 1011 lesion cases, and report comprehensive results over all 7-point criteria and diagnosis. We also make our dataset (images and metadata) publicly available online at http://derm.cs.sfu.ca.
|
344
|
Jiang S, Liao J, Bian Z, Guo K, Zhang Y, Zheng G. Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging. BIOMEDICAL OPTICS EXPRESS 2018; 9:1601-1612. [PMID: 29675305 PMCID: PMC5905909 DOI: 10.1364/boe.9.001601] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/18/2018] [Revised: 03/01/2018] [Accepted: 03/02/2018] [Indexed: 05/21/2023]
Abstract
A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features in the captured images. However, relying solely on spatial information leads to relatively poor autofocusing performance. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems, and the images can be captured on the fly without focus-map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks.
We have made our training and testing data set (~12 GB) open-source for the broad research community.
Affiliation(s)
- Shaowei Jiang
- Biomedical Engineering, University of Connecticut, Storrs, CT, 06269, USA
- These authors contributed equally to this work
- Jun Liao
- Biomedical Engineering, University of Connecticut, Storrs, CT, 06269, USA
- These authors contributed equally to this work
- Zichao Bian
- Biomedical Engineering, University of Connecticut, Storrs, CT, 06269, USA
- Kaikai Guo
- Biomedical Engineering, University of Connecticut, Storrs, CT, 06269, USA
- Yongbing Zhang
- Shenzhen Key Lab of Broadband Network and Multimedia, Graduate School at Shenzhen, Tsinghua University, Shenzhen, 518055, China
- Guoan Zheng
- Biomedical Engineering, University of Connecticut, Storrs, CT, 06269, USA
- Electrical and Computer Engineering, University of Connecticut, Storrs, CT, 06269, USA
|
345
|
Ching T, Himmelstein DS, Beaulieu-Jones BK, Kalinin AA, Do BT, Way GP, Ferrero E, Agapow PM, Zietz M, Hoffman MM, Xie W, Rosen GL, Lengerich BJ, Israeli J, Lanchantin J, Woloszynek S, Carpenter AE, Shrikumar A, Xu J, Cofer EM, Lavender CA, Turaga SC, Alexandari AM, Lu Z, Harris DJ, DeCaprio D, Qi Y, Kundaje A, Peng Y, Wiley LK, Segler MHS, Boca SM, Swamidass SJ, Huang A, Gitter A, Greene CS. Opportunities and obstacles for deep learning in biology and medicine. J R Soc Interface 2018; 15:20170387. [PMID: 29618526 PMCID: PMC5938574 DOI: 10.1098/rsif.2017.0387] [Citation(s) in RCA: 900] [Impact Index Per Article: 128.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2017] [Accepted: 03/07/2018] [Indexed: 11/12/2022] Open
Abstract
Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems in these fields. We examine applications of deep learning to a variety of biomedical problems (patient classification, fundamental biological processes and treatment of patients) and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.
Affiliation(s)
- Travers Ching
  - Molecular Biosciences and Bioengineering Graduate Program, University of Hawaii at Manoa, Honolulu, HI, USA
- Daniel S Himmelstein
  - Department of Systems Pharmacology and Translational Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Brett K Beaulieu-Jones
  - Genomics and Computational Biology Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Alexandr A Kalinin
  - Department of Computational Medicine and Bioinformatics, University of Michigan Medical School, Ann Arbor, MI, USA
- Gregory P Way
  - Department of Systems Pharmacology and Translational Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Enrico Ferrero
  - Computational Biology and Stats, Target Sciences, GlaxoSmithKline, Stevenage, UK
- Michael Zietz
  - Department of Systems Pharmacology and Translational Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Michael M Hoffman
  - Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Wei Xie
  - Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Gail L Rosen
  - Ecological and Evolutionary Signal-processing and Informatics Laboratory, Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Benjamin J Lengerich
  - Computational Biology Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
- Johnny Israeli
  - Biophysics Program, Stanford University, Stanford, CA, USA
- Jack Lanchantin
  - Department of Computer Science, University of Virginia, Charlottesville, VA, USA
- Stephen Woloszynek
  - Ecological and Evolutionary Signal-processing and Informatics Laboratory, Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Anne E Carpenter
  - Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Avanti Shrikumar
  - Department of Computer Science, Stanford University, Stanford, CA, USA
- Jinbo Xu
  - Toyota Technological Institute at Chicago, Chicago, IL, USA
- Evan M Cofer
  - Department of Computer Science, Trinity University, San Antonio, TX, USA
  - Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ, USA
- Christopher A Lavender
  - Integrative Bioinformatics, National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC, USA
- Srinivas C Turaga
  - Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA
- Amr M Alexandari
  - Department of Computer Science, Stanford University, Stanford, CA, USA
- Zhiyong Lu
  - National Center for Biotechnology Information and National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- David J Harris
  - Department of Wildlife Ecology and Conservation, University of Florida, Gainesville, FL, USA
- Yanjun Qi
  - Department of Computer Science, University of Virginia, Charlottesville, VA, USA
- Anshul Kundaje
  - Department of Computer Science, Stanford University, Stanford, CA, USA
  - Department of Genetics, Stanford University, Stanford, CA, USA
- Yifan Peng
  - National Center for Biotechnology Information and National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Laura K Wiley
  - Division of Biomedical Informatics and Personalized Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Marwin H S Segler
  - Institute of Organic Chemistry, Westfälische Wilhelms-Universität Münster, Münster, Germany
- Simina M Boca
  - Innovation Center for Biomedical Informatics, Georgetown University Medical Center, Washington, DC, USA
- S Joshua Swamidass
  - Department of Pathology and Immunology, Washington University in Saint Louis, St Louis, MO, USA
- Austin Huang
  - Department of Medicine, Brown University, Providence, RI, USA
- Anthony Gitter
  - Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, USA
  - Morgridge Institute for Research, Madison, WI, USA
- Casey S Greene
  - Department of Systems Pharmacology and Translational Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
|
346
|
Oliveira RB, Pereira AS, Tavares JMRS. Computational diagnosis of skin lesions from dermoscopic images using combined features. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3439-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
347
|
A deep learning approach for pose estimation from volumetric OCT data. Med Image Anal 2018; 46:162-179. [PMID: 29550582 DOI: 10.1016/j.media.2018.03.002] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 01/18/2018] [Accepted: 03/09/2018] [Indexed: 01/22/2023]
Abstract
Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively.
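The abstract's central claim is that operating on the full volume beats 2D-plus-depth representations because a 3D kernel slides through depth as well. A minimal numpy sketch of that building block, with a toy linear head mapping flattened features to a 6D pose (3 position + 3 orientation values), is shown below; `conv3d`, the random kernel, and `W_head` are illustrative stand-ins, not the paper's Inception3D architecture.

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid-mode 3D cross-correlation over a volume. A real 3D CNN
    layer does this with many learned kernels plus nonlinearities."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + d, y:y + h, x:x + w] * kernel)
    return out

rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))                      # toy stand-in for an OCT volume
feat = conv3d(vol, rng.random((3, 3, 3))).ravel()
W_head = rng.standard_normal((6, feat.size)) * 0.01
pose = W_head @ feat                             # 6D output: position + orientation
print(pose.shape)  # (6,)
```

Because the kernel spans all three axes, a reflective marker edge at a different depth produces a different response, which is the depth structure the authors show 2D inputs discard.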
|
348
|
Sánchez-Monedero J, Pérez-Ortiz M, Sáez A, Gutiérrez PA, Hervás-Martínez C. Partial order label decomposition approaches for melanoma diagnosis. Appl Soft Comput 2018. [DOI: 10.1016/j.asoc.2017.11.042] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
349
|
Li Y, Shen L. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network. SENSORS (BASEL, SWITZERLAND) 2018; 18:E556. [PMID: 29439500 PMCID: PMC5855504 DOI: 10.3390/s18020556] [Citation(s) in RCA: 185] [Impact Index Per Article: 26.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Revised: 02/08/2018] [Accepted: 02/08/2018] [Indexed: 11/23/2022]
Abstract
Skin lesions, and melanoma in particular, constitute a severe disease burden globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful to increase the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show promising accuracies: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3.
Affiliation(s)
- Yuexiang Li
  - Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
  - Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
- Linlin Shen
  - Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
  - Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
|
350
|
Sato M, Horie K, Hara A, Miyamoto Y, Kurihara K, Tomio K, Yokota H. Application of deep learning to the classification of images from colposcopy. Oncol Lett 2018; 15:3518-3523. [PMID: 29456725 PMCID: PMC5795879 DOI: 10.3892/ol.2018.7762] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2017] [Accepted: 11/20/2017] [Indexed: 02/05/2023] Open
Abstract
The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network library and TensorFlow. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy on the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images.
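The four regularization techniques named in the abstract (L1, L2, dropout, data augmentation) can be sketched in plain numpy. This is a minimal illustration of what each one does; in the authors' actual pipeline these would be Keras built-ins (e.g. `keras.regularizers.l1_l2`, a `Dropout` layer, and image augmentation utilities), and the function names and default rates here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def l1_l2_penalty(weights, l1=1e-4, l2=1e-3):
    """Combined L1/L2 weight penalty added to the training loss; L1
    pushes weights toward sparsity, L2 shrinks them toward zero."""
    return l1 * np.abs(weights).sum() + l2 * np.square(weights).sum()

def dropout(activations, rate=0.5):
    """Inverted dropout: zero a random fraction of units at train time
    and rescale the survivors so test-time expectations match."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

def augment(image):
    """Toy data augmentation: random horizontal flip plus a random
    90-degree rotation, expanding a small labeled image set."""
    if rng.random() < 0.5:
        image = image[:, ::-1]
    return np.rot90(image, k=int(rng.integers(0, 4)))
```

With only 485 labeled images across three classes, such regularization is what keeps a deep model from simply memorizing the training set, which is why the authors apply all four despite the modest ~50% validation accuracy.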
Affiliation(s)
- Masakazu Sato
  - Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan
- Koji Horie
  - Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan
- Aki Hara
  - Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan
- Yuichiro Miyamoto
  - Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan
- Kazuko Kurihara
  - Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan
- Kensuke Tomio
  - Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan
- Harushige Yokota
  - Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan
|