1. Kaur R, GholamHosseini H, Lindén M. Advanced Deep Learning Models for Melanoma Diagnosis in Computer-Aided Skin Cancer Detection. Sensors (Basel) 2025; 25:594. [PMID: 39943236] [PMCID: PMC11821218] [DOI: 10.3390/s25030594]
Abstract
Melanoma is the deadliest type of skin cancer, and visual examination alone does not provide an accurate diagnosis during its early to middle stages; an automated model can therefore assist with early skin cancer detection. Detecting melanoma early and treating it promptly limits its severity. This study develops efficient approaches for the main phases of melanoma computer-aided diagnosis (CAD): preprocessing, segmentation, and classification. The first step of the CAD pipeline is a proposed hybrid method that uses morphological operations and a context-aggregation-based deep neural network to remove hairlines and improve poor contrast in dermoscopic skin cancer images. A deep-learning-based image segmentation network then extracts lesion regions for detailed analysis and computes optimized classification features. Lastly, a deep neural network distinguishes melanoma from benign lesions. The proposed approaches use the benchmark International Skin Imaging Collaboration (ISIC) 2020 dataset. Two evaluations are performed with the classification model: the first incorporates the results of the preprocessing and segmentation stages into the classification model, while the second evaluates the classifier without these stages, i.e., on raw images. The results show that a classification model using segmented and cleaned images contributes more, achieving a classification accuracy of 93.40% with a 1.3 s test time per image.
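The morphological hair-removal step described above can be illustrated with a minimal numpy sketch: a blackhat transform (morphological closing minus the original) highlights thin dark structures such as hairs, and flagged pixels are replaced by the closed value. This is only a crude stand-in for the paper's hybrid morphology-plus-network method; the kernel size and threshold below are illustrative assumptions.

```python
import numpy as np

def gray_dilate(img, k=3):
    """Grayscale dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def gray_erode(img, k=3):
    """Grayscale erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def remove_hairlines(img, k=5, thresh=30):
    """Blackhat = closing(img) - img highlights thin dark hairs;
    flagged pixels are filled with the morphologically closed value."""
    closed = gray_erode(gray_dilate(img, k), k)   # morphological closing
    blackhat = closed.astype(int) - img.astype(int)
    mask = blackhat > thresh
    cleaned = img.copy()
    cleaned[mask] = closed[mask]
    return cleaned, mask
```

In practice a library routine (e.g. an OpenCV blackhat plus inpainting) would replace these explicit loops; the sketch only shows the operation order.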
Affiliations
- Ranpreet Kaur: Department of Software Engineering & AI, Media Design School, Auckland 1010, New Zealand
- Hamid GholamHosseini: School of Engineering, Computer, and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Maria Lindén: Division of Intelligent Future Technologies, Mälardalen University, 721 23 Västerås, Sweden
2. Xie F, Xu P, Xi X, Gu X, Zhang P, Wang H, Shen X. Oral mucosal disease recognition based on dynamic self-attention and feature discriminant loss. Oral Dis 2024; 30:3094-3107. [PMID: 37731172] [DOI: 10.1111/odi.14732]
Abstract
OBJECTIVES To develop a dynamic self-attention and feature discrimination loss function (DSDF) model for identifying oral mucosal diseases, addressing the problems of data imbalance, complex image backgrounds, and the high visual similarity among different types of lesion areas. METHODS In DSDF, a dynamic self-attention network fully mines the contextual information between adjacent areas, improves the visual representation of the network, and encourages the model to learn and locate the image regions of interest. A feature discrimination loss function then constrains the diversity of channel characteristics, enhancing the ability to discriminate features in locally similar areas. RESULTS The experimental results show that the proposed method achieves the highest recognition accuracy for oral mucosal disease at 91.16%, about 6% ahead of other advanced methods. In addition, DSDF attains a recall of 90.87% and an F1-score of 90.60%. CONCLUSIONS Convolutional neural networks can effectively capture the visual features of oral mucosal disease lesions, and the distinguishing visual features of different oral lesions are better extracted using dynamic self-attention and the feature discrimination loss function, which aids the auxiliary diagnosis of oral mucosal diseases.
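The abstract does not give the exact form of the feature discrimination loss, but one plausible channel-diversity penalty of the kind it describes is to punish pairwise cosine similarity between channel activation maps, pushing channels toward encoding distinct patterns. The sketch below is an assumption-labeled illustration, not the authors' loss.

```python
import numpy as np

def feature_discriminant_loss(features, eps=1e-8):
    """Mean absolute off-diagonal cosine similarity between channel
    feature maps; lower means more diverse (discriminative) channels.
    features: (C, H, W) activation maps for one sample. Illustrative only."""
    C = features.shape[0]
    flat = features.reshape(C, -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + eps
    unit = flat / norms
    sim = unit @ unit.T                 # (C, C) cosine-similarity matrix
    off_diag = sim - np.eye(C)          # drop self-similarity on the diagonal
    return np.abs(off_diag).sum() / (C * (C - 1))
```

Identical channels score 1 (maximally redundant) while channels with disjoint support score 0, so minimizing this term spreads the representation across channels.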
Affiliations
- Fei Xie: Xi'an Key Laboratory of Human-Machine Integration and Control Technology for Intelligent Rehabilitation, Xijing University, Xi'an, China; School of AOAIR, Xidian University, Xi'an, China
- Pengfei Xu: School of Information Science and Technology, Northwest University, Xi'an, China
- Xinyi Xi: School of Information Science and Technology, Northwest University, Xi'an, China
- Xiaokang Gu: School of Information Science and Technology, Northwest University, Xi'an, China
- Panpan Zhang: School of Information Science and Technology, Northwest University, Xi'an, China
- Hexu Wang: Xi'an Key Laboratory of Human-Machine Integration and Control Technology for Intelligent Rehabilitation, Xijing University, Xi'an, China
- Xuemin Shen: Department of Oral Mucosal Diseases, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
3. Kandhro IA, Manickam S, Fatima K, Uddin M, Malik U, Naz A, Dandoush A. Performance evaluation of E-VGG19 model: Enhancing real-time skin cancer detection and classification. Heliyon 2024; 10:e31488. [PMID: 38826726] [PMCID: PMC11141372] [DOI: 10.1016/j.heliyon.2024.e31488]
Abstract
Skin cancer is a pervasive and potentially life-threatening disease, and early detection plays a crucial role in improving patient outcomes. Machine learning (ML) techniques, particularly when combined with pre-trained deep learning models, have shown promise in enhancing the accuracy of skin cancer detection. In this paper, we enhance the VGG19 pre-trained model with max pooling and a dense layer for the prediction of skin cancer. We also explore the pre-trained models Visual Geometry Group 19 (VGG19), Residual Network 152 version 2 (ResNet152V2), Inception-Residual Network version 2 (InceptionResNetV2), Dense Convolutional Network 201 (DenseNet201), Residual Network 50 (ResNet50), and Inception version 3 (InceptionV3). For training, a skin-lesion dataset with malignant and benign cases is used. The models extract features and divide skin lesions into two categories: malignant and benign. The features are then fed into machine learning methods, including linear Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree (DT), and Logistic Regression (LR). Our results demonstrate that combining the E-VGG19 model with traditional classifiers significantly improves the overall classification accuracy for skin cancer detection and classification. We also compare the performance of baseline classifiers and pre-trained models using recall, F1-score, precision, sensitivity, and accuracy. The experimental results provide valuable insights into the effectiveness of various models and classifiers for accurate and efficient skin cancer detection. This research contributes to ongoing efforts to create automated technologies for detecting skin cancer that can help healthcare professionals and individuals identify potential cases at an early stage, ultimately leading to more timely and effective treatment.
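The deep-features-into-classical-classifier pattern described above can be sketched in a few lines: pooled feature vectors from a pretrained backbone (here replaced by synthetic vectors, since the real extractor is not shown) are classified by, for example, a k-nearest-neighbour vote. The feature vectors are a stand-in assumption, not output from the actual E-VGG19 model.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    """k-nearest-neighbour majority vote over deep-feature vectors
    (stand-in for features pooled from a pretrained CNN backbone)."""
    preds = []
    for q in query_feats:
        d = np.linalg.norm(train_feats - q, axis=1)  # Euclidean distances
        nearest = np.argsort(d)[:k]                  # k closest training samples
        votes = train_labels[nearest]
        preds.append(np.bincount(votes).argmax())    # majority class
    return np.array(preds)
```

In a real pipeline the same feature matrix would typically be passed to several such classifiers (SVM, KNN, DT, LR) and their metrics compared.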
Affiliations
- Irfan Ali Kandhro: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Selvakumar Manickam: National Advanced IPv6 Centre (NAv6), Universiti Sains Malaysia, Gelugor, Penang, 11800, Malaysia
- Kanwal Fatima: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Mueen Uddin: College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar
- Urooj Malik: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Anum Naz: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Abdulhalim Dandoush: College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar
4. Yuan W, Du Z, Han S. Semi-supervised skin cancer diagnosis based on self-feedback threshold focal learning. Discov Oncol 2024; 15:180. [PMID: 38776027] [PMCID: PMC11111630] [DOI: 10.1007/s12672-024-01043-8]
Abstract
Worldwide, the prevalence of skin cancer necessitates accurate diagnosis to alleviate public health burdens. Although the application of artificial intelligence in image analysis and pattern recognition has improved the accuracy and efficiency of early skin cancer diagnosis, existing supervised learning methods are limited by their reliance on large amounts of labeled data. To overcome the limitations of data labeling and enhance the performance of diagnostic models, this study proposes a semi-supervised skin cancer diagnostic model based on Self-feedback Threshold Focal Learning (STFL), capable of using partially labeled data and a large pool of unlabeled medical images to train models for unseen scenarios. The proposed model dynamically adjusts the selection threshold for unlabeled samples during training, effectively filtering reliable unlabeled samples, and uses focal learning to mitigate the impact of class imbalance in further training. The study is experimentally validated on the HAM10000 dataset, which includes images of various types of skin lesions, with experiments conducted across different numbers of labeled samples. With just 500 annotated samples, the model demonstrates robust performance (0.77 accuracy, 0.6408 Kappa, 0.77 recall, 0.7426 precision, and 0.7462 F1-score), showcasing its efficiency with limited labeled data. Further comprehensive testing validates the semi-supervised model's significant advances in diagnostic accuracy and efficiency, underscoring the value of integrating unlabeled data. This model offers a new perspective on medical image processing and contributes robust scientific support for the early diagnosis and treatment of skin cancer.
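The two moving parts named in the abstract, a self-adjusting confidence threshold for pseudo-labeling and a focal weighting against class imbalance, can be sketched as follows. The specific feedback rule and its constants are illustrative assumptions; the abstract does not specify the authors' update formula.

```python
import numpy as np

def select_pseudo_labels(probs, threshold):
    """Keep unlabeled samples whose max class probability clears the
    current threshold; return their indices and pseudo-labels."""
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

def update_threshold(threshold, accepted_frac, target=0.5, lr=0.05):
    """Hypothetical self-feedback rule: raise the threshold when too many
    samples are accepted, lower it when too few are (constants assumed)."""
    return float(np.clip(threshold + lr * (accepted_frac - target), 0.5, 0.99))

def focal_weight(p_true, gamma=2.0):
    """Focal modulation (1 - p)^gamma down-weights easy, well-classified
    samples so training focuses on hard and minority-class examples."""
    return (1.0 - p_true) ** gamma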
Affiliations
- Weicheng Yuan: College of Basic Medicine, Hebei Medical University, Zhongshan East, Shijiazhuang, 050017, Hebei, China
- Zeyu Du: School of Health Science, University of Manchester, Sackville Street, Manchester, 610101, England, UK
- Shuo Han: Department of Anatomy, Hebei Medical University, Zhongshan East, Shijiazhuang, 050017, Hebei, China
5. Farhatullah, Chen X, Zeng D, Xu J, Nawaz R, Ullah R. Classification of Skin Lesion With Features Extraction Using Quantum Chebyshev Polynomials and Autoencoder From Wavelet-Transformed Images. IEEE Access 2024; 12:193923-193936. [DOI: 10.1109/access.2024.3502513]
Affiliations
- Farhatullah: School of Computer Science, China University of Geosciences, Wuhan, China
- Xin Chen: School of Automation, China University of Geosciences, Wuhan, China
- Deze Zeng: School of Computer Science, China University of Geosciences, Wuhan, China
- Jiafeng Xu: School of Automation, China University of Geosciences, Wuhan, China
- Rab Nawaz: School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
- Rahmat Ullah: School of Computer Science, China University of Geosciences, Wuhan, China
6. Alinia S, Asghari-Jafarabadi M, Mahmoudi L, Norouzi S, Safari M, Roshanaei G. Survival prediction and prognostic factors in colorectal cancer after curative surgery: insights from Cox regression and neural networks. Sci Rep 2023; 13:15675. [PMID: 37735621] [PMCID: PMC10514146] [DOI: 10.1038/s41598-023-42926-0]
Abstract
Medical research frequently relies on Cox regression to analyze the survival distribution of cancer patients. Nonetheless, in specific scenarios, neural networks hold the potential to serve as a robust alternative. In this study, we aim to scrutinize the effectiveness of Cox regression and neural network models in assessing the survival outcomes of patients who have undergone treatment for colorectal cancer. We conducted a retrospective study on 284 colorectal cancer patients who underwent surgery at Imam Khomeini clinic in Hamadan between 2001 and 2017. The data was used to train both Cox regression and neural network models, and their predictive accuracy was compared using diagnostic measures such as sensitivity, specificity, positive predictive value, accuracy, negative predictive value, and area under the receiver operating characteristic curve. The analyses were performed using STATA 17 and R4.0.4 software. The study revealed that the best neural network model had a sensitivity of 74.5% (95% CI 61.0-85.0), specificity of 83.3% (65.3-94.4), positive predictive value of 89.1% (76.4-96.4), negative predictive value of 64.1% (47.2-78.8), AUC of 0.79 (0.70-0.88), and accuracy of 0.776 for death prediction. For recurrence, the best neural network model had a sensitivity of 88.1% (74.4-96.0%), specificity of 83.7% (69.3-93.2%), positive predictive value of 84.1% (69.9-93.4%), negative predictive value of 87.8% (73.8-95.9%), AUC of 0.86 (0.78-0.93), and accuracy of 0.859. The Cox model had comparable results, with a sensitivity of 73.6% (64.8-81.2) and 85.5% (78.3-91.0), specificity of 89.6% (83.8-93.8) and 98.0% (94.4-99.6), positive predictive value of 84.0% (75.6-90.4) and 97.4% (92.6-99.5), negative predictive value of 82.0% (75.6-90.4) and 88.8% (0.83-93.1), AUC of 0.82 (0.77-0.86) and 0.92 (0.89-0.95), and accuracy of 0.88 and 0.92 for death and recurrence prediction, respectively. 
In conclusion, the study found that both Cox regression and neural network models are effective in predicting early recurrence and death in patients with colorectal cancer after curative surgery. The neural network model showed slightly better sensitivity and negative predictive value for death, while the Cox model had better specificity and positive predictive value for recurrence. Overall, both models demonstrated high accuracy and AUC, indicating their usefulness in predicting these outcomes.
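The diagnostic measures reported above all derive from the four cells of a 2x2 confusion table, which a short helper makes explicit:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic measures from 2x2 confusion counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # recall of the positive class
        "specificity": tn / (tn + fp),   # recall of the negative class
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, 8 true positives, 1 false positive, 9 true negatives, and 2 false negatives give sensitivity 0.80, specificity 0.90, and accuracy 0.85.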
Affiliations
- Shayeste Alinia: Department of Statistics and Epidemiology, School of Medicine, Zanjan University of Medical Sciences, Mahdavi Blvd, Zanjan, 4513956111, Iran
- Mohammad Asghari-Jafarabadi: Faculty of Health, Road Traffic Injury Research Center, Tabriz University of Medical Sciences, Golgasht St. Attar E Neshabouri St., Tabriz, 5166614711, Iran; Cabrini Research, Cabrini Health, Malvern, VIC, 3144, Australia; Faculty of Medicine, Nursing and Health Sciences, School of Public Health and Preventative Medicine, Monash University, Melbourne, VIC, 3004, Australia; Department of Psychiatry, Faculty of Medicine, Nursing and Health Sciences, School of Clinical Sciences, Monash University, Clayton, VIC, 3168, Australia
- Leila Mahmoudi: Department of Statistics and Epidemiology, School of Medicine, Zanjan University of Medical Sciences, Mahdavi Blvd, Zanjan, 4513956111, Iran
- Solmaz Norouzi: Department of Statistics and Epidemiology, School of Medicine, Zanjan University of Medical Sciences, Mahdavi Blvd, Zanjan, 4513956111, Iran
- Maliheh Safari: Department of Biostatistics, School of Medicine, Arak University of Medical Sciences, Arak, Iran
- Ghodratollah Roshanaei: Department of Biostatistics, Modeling of Non-Communicable Diseases Research Center, School of Public Health, Hamadan University of Medical Sciences, Hamadan, Iran
7. Szijártó Á, Somfai E, Lőrincz A. Design of a Machine Learning System to Predict the Thickness of a Melanoma Lesion in a Non-Invasive Way from Dermoscopic Images. Healthc Inform Res 2023; 29:112-119. [PMID: 37190735] [DOI: 10.4258/hir.2023.29.2.112]
Abstract
OBJECTIVES Melanoma is the deadliest form of skin cancer, but it can be fully cured through early detection and treatment in 99% of cases. Our aim was to develop a non-invasive machine learning system that can predict the thickness of a melanoma lesion, which is a proxy for tumor progression, through dermoscopic images. This method can serve as a valuable tool in identifying urgent cases for treatment. METHODS A modern convolutional neural network architecture (EfficientNet) was used to construct a model capable of classifying dermoscopic images of melanoma lesions into three distinct categories based on thickness. We incorporated techniques to reduce the impact of an imbalanced training dataset, enhanced the generalization capacity of the model through image augmentation, and utilized five-fold cross-validation to produce more reliable metrics. RESULTS Our method achieved 71% balanced accuracy for three-way classification when trained on a small public dataset of 247 melanoma images. We also presented performance projections for larger training datasets. CONCLUSIONS Our model represents a new state-of-the-art method for classifying melanoma thicknesses. Performance can be further optimized by expanding training datasets and utilizing model ensembles. We have shown that earlier claims of higher performance were mistaken due to data leakage during the evaluation process.
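The balanced accuracy reported above is simply the mean of per-class recalls, which is why it suits the skewed thickness classes better than plain accuracy:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; a majority-class guesser scores only
    1/n_classes, unlike plain accuracy on an imbalanced set."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))
```

For instance, always predicting the majority class on a 4-vs-1 split gives 0.8 plain accuracy but only 0.5 balanced accuracy.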
Affiliations
- Ádám Szijártó: Department of Artificial Intelligence, Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary
- Ellák Somfai: Department of Artificial Intelligence, Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary; Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Budapest, Hungary
- András Lőrincz: Department of Artificial Intelligence, Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary
8. Wang L, Zhang L, Shu X, Yi Z. Intra-class consistency and inter-class discrimination feature learning for automatic skin lesion classification. Med Image Anal 2023; 85:102746. [PMID: 36638748] [DOI: 10.1016/j.media.2023.102746]
Abstract
Automated skin lesion classification has been shown to improve diagnostic performance on dermoscopic images. Despite many successes, accurate classification remains challenging due to significant intra-class variation and inter-class similarity. In this article, a deep learning method is proposed to increase the intra-class consistency as well as the inter-class discrimination of learned features in automatic skin lesion classification. To enhance inter-class discriminative feature learning, a CAM-based (class activation mapping) global-lesion localization module is proposed, which optimizes the distance between the CAMs generated for the same dermoscopic image by different skin lesion tasks. A global-features-guided intra-class similarity learning module is then proposed to generate each class center from the deep features of all samples in that class together with the history feature of each sample during learning. In this way, performance improves through the collaboration of CAM-based inter-class feature discrimination and global-features-guided intra-class feature concentration. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on the ISIC-2017 and ISIC-2018 datasets. Experimental results with different backbones demonstrate that the proposed method generalizes well and can adaptively focus on the more discriminative regions of the skin lesion.
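The intra-class concentration idea, pulling each sample's deep feature toward its class center, can be sketched with a center-loss-style penalty. This is a simplified illustration: the paper additionally maintains history features and a CAM-distance term that are not reproduced here.

```python
import numpy as np

def class_center_loss(features, labels):
    """Mean squared distance of each feature vector to its class centre
    (the mean feature of that class); minimizing it tightens clusters."""
    loss = 0.0
    for c in np.unique(labels):
        members = features[labels == c]
        center = members.mean(axis=0)
        loss += np.sum((members - center) ** 2)
    return loss / len(features)
```

The loss is zero exactly when every class collapses to a single point in feature space, and grows with intra-class spread.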
Affiliations
- Lituan Wang: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Lei Zhang: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Xin Shu: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Zhang Yi: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
9. Yang G, Luo S, Greer P. A Novel Vision Transformer Model for Skin Cancer Classification. Neural Process Lett 2023. [DOI: 10.1007/s11063-023-11204-5]
Abstract
Skin cancer can be fatal if it is found to be malignant. Modern diagnosis of skin cancer heavily relies on visual inspection through clinical screening, dermoscopy, or histopathological examinations. However, due to similarity among cancer types, it is usually challenging to identify the type of skin cancer, especially at its early stages. Deep learning techniques have been developed over the last few years and have achieved success in helping to improve the accuracy of diagnosis and classification. However, the latest deep learning algorithms still do not provide ideal classification accuracy. To further improve classification accuracy, this paper presents a novel method of classifying skin cancer in clinical skin images. The method consists of four blocks. First, class rebalancing is applied to the images of seven skin cancer types for better classification performance. Second, an image is preprocessed by being split into patches of the same size and then flattened into a series of tokens. Third, a transformer encoder is used to process the flattened patches. The transformer encoder consists of N identical layers, each containing two sublayers: a multihead self-attention unit and a fully connected feed-forward network unit. For each of the two sublayers, a normalization operation is applied to its input, and a residual connection of its input and its output is calculated. Finally, a classification block is implemented after the transformer encoder, consisting of a flatten layer and a dense layer with batch normalization. Transfer learning is used to build the whole network: the ImageNet dataset pretrains the network and the HAM10000 dataset fine-tunes it.
Experiments have shown that the method achieves a classification accuracy of 94.1%, outperforming the current state-of-the-art model IRv2 with soft attention on the same training and testing datasets. The method also performs better than baseline models on the Edinburgh DERMOFIT dataset.
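The patch-tokenization and self-attention steps described above can be sketched in numpy. This is a single-head, single-layer illustration with randomly initialized projection matrices, not the paper's full N-layer multihead encoder.

```python
import numpy as np

def image_to_patches(img, p):
    """Split an (H, W, C) image into flattened p x p patches (tokens)."""
    h, w, c = img.shape
    patches = img.reshape(h // p, p, w // p, p, c).swapaxes(1, 2)
    return patches.reshape(-1, p * p * c)   # (num_patches, p*p*c)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over the tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # scaled similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # rows sum to 1
    return attn @ V                                # attention-weighted values
```

A full transformer encoder wraps this in layer normalization, residual connections, and a feed-forward sublayer, repeated N times.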
10. Zhang W, Lu F, Zhao W, Hu Y, Su H, Yuan M. ACCPG-Net: A skin lesion segmentation network with Adaptive Channel-Context-Aware Pyramid Attention and Global Feature Fusion. Comput Biol Med 2023; 154:106580. [PMID: 36716686] [DOI: 10.1016/j.compbiomed.2023.106580]
Abstract
Computer-aided diagnosis systems based on dermoscopic images play an important role in the clinical treatment of skin lesions, and an accurate, efficient, automatic skin lesion segmentation method is an important auxiliary tool for clinical diagnosis. At present, skin lesion segmentation still faces great challenges. Existing deep-learning-based automatic segmentation methods frequently use convolutional neural networks (CNNs), but a globally shared feature re-weighting vector may not be optimal for predicting lesion areas in dermoscopic images, and the presence of hairs and spots in some samples aggravates interference between similar categories, reducing segmentation accuracy. To solve this problem, this paper proposes a new deep network for precise skin lesion segmentation based on a U-shaped structure. Specifically, two lightweight attention modules, an adaptive channel-context-aware pyramid attention (ACCAPA) module and a global feature fusion (GFF) module, are embedded in the network. The ACCAPA module models the characteristics of lesion areas by dynamically learning channel information, contextual information, and global structure information. GFF handles the interaction of semantic information between encoder and decoder layers at different levels. To validate the effectiveness of the proposed method, we test the performance of ACCPG-Net on several public skin lesion datasets. The results show that our method achieves better segmentation performance than other state-of-the-art methods.
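Channel attention of the general kind the ACCAPA module builds on can be sketched with a squeeze-and-excitation-style gate: global average pooling per channel, a small bottleneck MLP, and a sigmoid re-weighting. This illustrates the generic mechanism only, not the adaptive pyramid design of the paper; the bottleneck sizes are assumed.

```python
import numpy as np

def channel_attention(fmap, W1, W2):
    """SE-style channel re-weighting for a (C, H, W) feature map:
    squeeze (global average pool), excite (bottleneck MLP + sigmoid),
    then rescale each channel by its learned gate in (0, 1)."""
    c = fmap.shape[0]
    squeeze = fmap.reshape(c, -1).mean(axis=1)   # (C,) per-channel averages
    hidden = np.maximum(squeeze @ W1, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid gate per channel
    return fmap * gate[:, None, None]
```

Because every gate lies strictly in (0, 1), the module can only attenuate channels, letting the network emphasize informative ones relative to the rest.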
Collapse
Affiliation(s)
- Wenyu Zhang
- School of Information Science and Engineering, Lanzhou University, China
| | - Fuxiang Lu
- School of Information Science and Engineering, Lanzhou University, China.
| | - Wei Zhao
- School of Information Science and Engineering, Lanzhou University, China
| | - Yawen Hu
- School of Information Science and Engineering, Lanzhou University, China
| | - Hongjing Su
- School of Information Science and Engineering, Lanzhou University, China
| | - Min Yuan
- School of Information Science and Engineering, Lanzhou University, China
| |
Collapse
|
11. Nancy Jane Y, Charanya SK, Amsaprabhaa M, Jayashanker P, Nehemiah HK. 2-HDCNN: A two-tier hybrid dual convolution neural network feature fusion approach for diagnosing malignant melanoma. Comput Biol Med 2023; 152:106333. [PMID: 36463793] [DOI: 10.1016/j.compbiomed.2022.106333]
Abstract
Melanoma is a fatal form of skin cancer that causes excess skin cell growth in the body. The objective of this work is to develop a two-tier hybrid dual convolution neural network (2-HDCNN) feature fusion approach for malignant melanoma prediction. The first-tier baseline convolutional neural network (CNN) extracts the hard-to-classify samples based on a confidence factor (class probability variance score) and generates a Baseline Segregated Dataset (BSD). The BSD is then preprocessed with hair removal and data augmentation techniques. The preprocessed BSD is trained with the second-tier CNN, which yields bottleneck features. These features are combined with features derived from the ABCD (Asymmetry, Border, Color, and Diameter) medical rule to improve classification accuracy. The resulting hybrid fused features are fed to different classifiers, including gradient boosting, bagging, XGBoost, decision trees, support vector machines, logistic regression, and multi-layer perceptrons. For performance assessment, the proposed framework is trained on the ISIC 2018 dataset. The experimental results show that the 2-HDCNN feature fusion approach reaches an accuracy of 92.15%, precision of 96.96%, specificity of 96.8%, sensitivity of 86.48%, and AUC (area under curve) of 0.96 for diagnosing malignant melanoma.
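The first-tier segregation step keys on the variance of the predicted class probabilities: a near-uniform prediction (low variance) signals an uncertain, hard-to-classify sample. The sketch below is one plausible reading of the abstract's "class probability variance score"; the exact threshold is an assumption.

```python
import numpy as np

def segregate_hard_samples(probs, var_threshold):
    """Split samples by the variance of their predicted class
    probabilities: low variance means the first-tier network is unsure
    (probabilities near uniform), so the sample is routed to tier two."""
    variances = probs.var(axis=1)
    hard = np.where(variances < var_threshold)[0]   # uncertain -> second tier
    easy = np.where(variances >= var_threshold)[0]  # confident -> keep tier-one label
    return hard, easy
```

A confident prediction like (0.99, 0.01) has high variance, while (0.5, 0.5) has zero variance, so the rule cleanly separates the two regimes.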
Affiliations
- Y Nancy Jane: Department of Computer Technology, Madras Institute of Technology (Anna University), Chennai, 600044, India
- S K Charanya: Department of Computer Technology, Madras Institute of Technology (Anna University), Chennai, 600044, India
- M Amsaprabhaa: Department of Computer Technology, Madras Institute of Technology (Anna University), Chennai, 600044, India
- Preetiha Jayashanker: Department of Computer Technology, Madras Institute of Technology (Anna University), Chennai, 600044, India
12. Polesie S, Gillstedt M, Kittler H, Rinner C, Tschandl P, Paoli J. Assessment of melanoma thickness based on dermoscopy images: an open, web-based, international, diagnostic study. J Eur Acad Dermatol Venereol 2022; 36:2002-2007. [PMID: 35841304] [PMCID: PMC9796258] [DOI: 10.1111/jdv.18436]
Abstract
BACKGROUND Preoperative assessment of whether a melanoma is invasive or in situ (MIS) is a common task that might have important implications for triage, prognosis and the selection of surgical margins. Several dermoscopic features suggestive of melanoma have been described, but only a few of these are useful in differentiating MIS from invasive melanoma. OBJECTIVE The primary aim of this study was to evaluate how accurately a large number of international readers, individually as well as collectively, were able to discriminate between MIS and invasive melanomas, and to estimate the Breslow thickness of invasive melanomas, based on dermoscopy images. The secondary aim was to compare the accuracy of two machine learning convolutional neural networks (CNNs) and the collective reader response. METHODS We conducted an open, web-based, international, diagnostic reader study using an online platform. The online challenge opened on 10 May 2021 and closed on 19 July 2021 (71 days) and was advertised through several social media channels. The investigation included 1456 dermoscopy images of melanomas (788 MIS; 474 melanomas ≤1.0 mm and 194 >1.0 mm). A test set comprising 277 MIS and 246 invasive melanomas was used to compare readers and CNNs. RESULTS We analysed 22,314 readings by 438 international readers. The overall accuracy (95% confidence interval) for melanoma thickness was 56.4% (55.7%-57.0%), 63.4% (62.5%-64.2%) for MIS and 71.0% (70.3%-72.1%) for invasive melanoma. Readers accurately predicted the thickness in 85.9% (85.4%-86.4%) of melanomas ≤1.0 mm (including MIS) and in 70.8% (69.2%-72.5%) of melanomas >1.0 mm. The reader collective outperformed a de novo CNN but not a pretrained CNN in differentiating MIS from invasive melanoma. CONCLUSIONS Using dermoscopy images, readers and CNNs predict melanoma thickness with fair to moderate accuracy. Readers most accurately discriminated between thin (≤1.0 mm including MIS) and thick melanomas (>1.0 mm).
Affiliation(s)
- S. Polesie
- Department of Dermatology and Venereology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Dermatology and Venereology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- M. Gillstedt
- Department of Dermatology and Venereology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Dermatology and Venereology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- H. Kittler
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- C. Rinner
- Center of Medical Statistics, Informatics and Intelligent Systems (CeMSIIS), Medical University of Vienna, Vienna, Austria
- P. Tschandl
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- J. Paoli
- Department of Dermatology and Venereology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Dermatology and Venereology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
13
Skin lesion classification of dermoscopic images using machine learning and convolutional neural network. Sci Rep 2022; 12:18134. PMID: 36307467; PMCID: PMC9616944; DOI: 10.1038/s41598-022-22644-9.
Abstract
Identifying pigmented skin lesions is essential for detecting dangerous skin diseases, particularly malignancies. Image detection techniques and computer classification capabilities can boost skin cancer detection accuracy. This work is based on the HAM10000 dataset, which consists of 10015 images; a subset of the dataset was chosen and augmented. A model trained with data augmentation tends to learn more distinguishing characteristics and features than one trained without it, so augmentation can improve model accuracy. However, such a model cannot give reliable results on the testing data unless it is robust; the k-fold cross-validation technique implemented in the proposed work makes the model robust. We analyzed the classification accuracy of machine learning algorithms and convolutional neural network (CNN) models and concluded that the CNN provides better accuracy than the other machine learning algorithms implemented in the proposed work, achieving a highest accuracy of 95.18%. The proposed work supports early identification of seven classes of skin disease, which can then be validated and treated appropriately by medical practitioners.
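The k-fold protocol this abstract relies on can be sketched as follows. This is a minimal illustration with a toy nearest-centroid classifier and synthetic 2-D data, not the paper's actual CNN-on-HAM10000 setup; the helper names (`kfold_indices`, `cross_validate`) are assumptions for illustration. It shows only how averaging held-out accuracy over k folds gives a more robust estimate than a single split.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte):
    """Stand-in classifier: assign each test point to the closest class centroid."""
    classes = np.unique(ytr)
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == yte).mean()

def cross_validate(X, y, k=5):
    """Average held-out accuracy over k folds: each fold is the test set once."""
    folds = kfold_indices(len(X), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(nearest_centroid_accuracy(X[train], y[train], X[test], y[test]))
    return float(np.mean(scores))

# Two well-separated synthetic classes: cross-validated accuracy is near 1.0.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print(round(cross_validate(X, y, k=5), 2))
```

Because every sample is scored exactly once as held-out data, the averaged accuracy reflects generalization rather than memorization of one particular split.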
14
Aldhyani THH, Verma A, Al-Adhaileh MH, Koundal D. Multi-Class Skin Lesion Classification Using a Lightweight Dynamic Kernel Deep-Learning-Based Convolutional Neural Network. Diagnostics (Basel) 2022; 12:2048. PMID: 36140447; PMCID: PMC9497471; DOI: 10.3390/diagnostics12092048.
Abstract
Skin is the primary protective layer of the internal organs of the body. Nowadays, due to increasing pollution and multiple other factors, various types of skin diseases are growing globally. With their variable shapes and multiple types, the classification of skin lesions is a challenging task. Motivated by this growing health burden, a lightweight and efficient model is proposed for highly accurate classification of skin lesions. Dynamic-sized kernels are used in the layers to obtain the best results with very few trainable parameters, and both ReLU and leaky ReLU activation functions are purposefully used in the proposed model. The model accurately classified all classes of the HAM10000 dataset, achieving an overall accuracy of 97.85%, which is much better than multiple state-of-the-art heavyweight models. Our work is also compared with several popular state-of-the-art and recent existing models.
Affiliation(s)
- Theyazn H. H. Aldhyani
- Applied College in Abqaiq, King Faisal University, P.O. Box 400, Al-Ahsa 31982, Saudi Arabia
- Correspondence:
- Amit Verma
- School of Computer Science, University of Petroleum & Energy Studies, Dehradun 248007, India
- Mosleh Hmoud Al-Adhaileh
- Deanship of E-Learning and Distance Education, King Faisal University, P.O. Box 4000, Al-Ahsa 31982, Saudi Arabia
- Deepika Koundal
- School of Computer Science, University of Petroleum & Energy Studies, Dehradun 248007, India
15
Chu Y, Guo S, Cui D, Fu X, Ma Y. DeephageTP: a convolutional neural network framework for identifying phage-specific proteins from metagenomic sequencing data. PeerJ 2022; 10:e13404. PMID: 35698617; PMCID: PMC9188312; DOI: 10.7717/peerj.13404.
Abstract
Bacteriophages (phages) are the most abundant and diverse biological entities on Earth. Due to the lack of universal gene markers and database representatives, about 50-90% of phage genes cannot be assigned functions. This makes it challenging to identify phage genomes and annotate the functions of phage genes efficiently by homology search on a large scale, especially for newly discovered phages. Portal (portal protein), TerL (large terminase subunit protein), and TerS (small terminase subunit protein) are three proteins specific to Caudovirales phages. Here, we developed a convolutional neural network (CNN)-based framework, DeephageTP, to identify these three proteins from metagenomic data. The framework takes one-hot encodings of the original protein sequences as input and automatically extracts predictive features during modeling. To overcome the false-positive problem, a cutoff-loss-value strategy is introduced based on the distributions of the loss values of protein sequences within the same category. The proposed model with a set of cutoff loss values demonstrates high precision in identifying TerL and Portal sequences (94% and 90%, respectively) from a mimic metagenomic dataset. Finally, we tested the efficacy of the framework on three real metagenomic datasets; the results show that, compared to conventional alignment-based methods, our framework has a particular advantage in identifying novel Portal and TerL sequences with only remote homology to their counterparts in the training datasets. In summary, our study is the first to develop a CNN-based framework for identifying these phage-specific protein sequences, which exhibit high complexity and low conservation, and this framework will help find novel phages in metagenomic sequencing data. DeephageTP is available at https://github.com/chuym726/DeephageTP.
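The two ingredients named above, one-hot encoding of protein sequences and cutoff-loss filtering, can be illustrated with a minimal sketch. The fixed length, the helper names (`one_hot`, `cutoff_filter`) and the toy threshold are assumptions for illustration, not DeephageTP's actual implementation.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq, max_len=50):
    """Encode a protein sequence as a (max_len, 20) one-hot matrix,
    padding/truncating to a fixed length so it can serve as CNN input."""
    mat = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for i, aa in enumerate(seq[:max_len]):
        if aa in AA_INDEX:  # unknown residues stay all-zero
            mat[i, AA_INDEX[aa]] = 1.0
    return mat

def cutoff_filter(losses, cutoff):
    """Cutoff-loss-value strategy: keep only predictions whose loss falls
    below a per-category threshold, rejecting likely false positives."""
    return np.asarray(losses) < cutoff

x = one_hot("MKTAYIAKQR")          # 10-residue toy sequence
print(x.shape)                     # (50, 20)
print(int(x.sum()))                # 10 residues encoded -> 10 ones
print(cutoff_filter([0.1, 2.5, 0.3], cutoff=1.0).tolist())
```

The one-hot matrix carries no hand-crafted features; the convolutional layers are left to learn sequence motifs directly, which is what lets such a model generalize to remote homologs.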
Affiliation(s)
- Yunmeng Chu
- Shenzhen Key Laboratory of Synthetic Genomics, Guangdong Provincial Key Laboratory of Synthetic Genomics, CAS Key Laboratory of Quantitative Engineering Biology, Shenzhen Institute of Synthetic Biology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, P.R. China
- Department of Bioengineering and Biotechnology, Huaqiao University, Xiamen, Fujian, P.R. China
- Shun Guo
- Shenzhen Key Laboratory of Synthetic Genomics, Guangdong Provincial Key Laboratory of Synthetic Genomics, CAS Key Laboratory of Quantitative Engineering Biology, Shenzhen Institute of Synthetic Biology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, P.R. China
- Dachao Cui
- Shenzhen Key Laboratory of Synthetic Genomics, Guangdong Provincial Key Laboratory of Synthetic Genomics, CAS Key Laboratory of Quantitative Engineering Biology, Shenzhen Institute of Synthetic Biology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, P.R. China
- Xiongfei Fu
- Shenzhen Key Laboratory of Synthetic Genomics, Guangdong Provincial Key Laboratory of Synthetic Genomics, CAS Key Laboratory of Quantitative Engineering Biology, Shenzhen Institute of Synthetic Biology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, P.R. China
- Yingfei Ma
- Shenzhen Key Laboratory of Synthetic Genomics, Guangdong Provincial Key Laboratory of Synthetic Genomics, CAS Key Laboratory of Quantitative Engineering Biology, Shenzhen Institute of Synthetic Biology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, P.R. China
16
Patil R, Bellary S. Machine learning approach in melanoma cancer stage detection. J King Saud Univ Comput Inf Sci 2022. DOI: 10.1016/j.jksuci.2020.09.002.
17
A Dermoscopic Skin Lesion Classification Technique Using YOLO-CNN and Traditional Feature Model. Arab J Sci Eng 2021. DOI: 10.1007/s13369-021-05571-1.
18
Nies HW, Mohamad MS, Zakaria Z, Chan WH, Remli MA, Nies YH. Enhanced Directed Random Walk for the Identification of Breast Cancer Prognostic Markers from Multiclass Expression Data. Entropy (Basel) 2021; 23:1232. PMID: 34573857; PMCID: PMC8472068; DOI: 10.3390/e23091232.
Abstract
Artificial intelligence in healthcare can potentially identify the probability of contracting a particular disease more accurately. There are five common molecular subtypes of breast cancer: luminal A, luminal B, basal, ERBB2, and normal-like. Previous investigations showed that pathway-based microarray analysis can help identify prognostic markers from gene expression; for example, directed random walk (DRW) can infer greater reproducibility power of the pathway activity between two classes of samples with higher classification accuracy. However, most existing methods (including DRW) ignore the characteristics of different cancer subtypes and consider all pathways to contribute equally to the analysis. Therefore, an enhanced DRW (eDRW+) is proposed to identify breast cancer prognostic markers from multiclass expression data. An improved weighting strategy using one-way ANOVA (F-test) and pathway selection based on the greatest reproducibility power are proposed in eDRW+. The experimental results show that eDRW+ exceeds other methods in terms of AUC. In addition, eDRW+ identifies 294 gene markers and 45 pathway markers from the breast cancer datasets with better AUC. These prognostic markers (pathway markers and gene markers) can therefore be used to identify drug targets and to distinguish cancer subtypes with clinically distinct outcomes.
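The one-way ANOVA weighting idea mentioned above can be sketched as follows. `f_statistic` and `gene_weights` are hypothetical helpers operating on synthetic data, not the eDRW+ code, and the normalization to [0, 1] is an illustrative choice; the point is only that genes whose expression separates the subtypes receive large F-statistics and hence large weights.

```python
import numpy as np

def f_statistic(groups):
    """One-way ANOVA F-statistic: ratio of between-group to within-group
    mean squares over a list of 1-D sample arrays."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def gene_weights(expr, labels):
    """Per-gene F-statistic across subtypes, scaled to [0, 1] so the most
    discriminative gene gets weight 1 (illustrative normalization)."""
    classes = np.unique(labels)
    f = np.array([f_statistic([expr[labels == c, j] for c in classes])
                  for j in range(expr.shape[1])])
    return f / f.max()

# Synthetic expression data: gene 0 shifts with subtype, gene 1 is pure noise.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 30)          # three subtypes, 30 samples each
expr = rng.normal(0, 1, (90, 2))
expr[:, 0] += labels * 2.0                 # subtype-dependent shift on gene 0
w = gene_weights(expr, labels)
print(w)  # gene 0 dominates, gene 1 gets a small weight
```

Feeding such weights into the random walk makes subtype-discriminative genes steer the pathway activity scores, which is the departure from plain DRW that the abstract describes.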
Affiliation(s)
- Hui Wen Nies
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Mohd Saberi Mohamad
- Health Data Science Lab, Department of Genetics and Genomics, College of Medical and Health Sciences, United Arab Emirates University, Al Ain 17666, United Arab Emirates
- Zalmiyah Zakaria
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Weng Howe Chan
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Muhammad Akmal Remli
- Institute for Artificial Intelligence and Big Data, Universiti Malaysia Kelantan, Kota Bharu 16100, Malaysia
- Yong Hui Nies
- Department of Anatomy, Faculty of Medicine, Universiti Kebangsaan Malaysia Medical Centre, Cheras, Kuala Lumpur 56000, Malaysia
19
Liu Z, Ni S, Yang C, Sun W, Huang D, Su H, Shu J, Qin N. Axillary lymph node metastasis prediction by contrast-enhanced computed tomography images for breast cancer patients based on deep learning. Comput Biol Med 2021; 136:104715. PMID: 34388460; DOI: 10.1016/j.compbiomed.2021.104715.
Abstract
When doctors use contrast-enhanced computed tomography (CECT) images to predict the metastasis of axillary lymph nodes (ALN) for breast cancer patients, the prediction performance could be degraded by subjective factors such as experience, psychological factors, and degree of fatigue. This study aims to exploit efficient deep learning schemes to predict the metastasis of ALN automatically via CECT images. A new construction called deformable sampling module (DSM) was meticulously designed as a plug-and-play sampling module in the proposed deformable attention VGG19 (DA-VGG19). A dataset of 800 samples labeled from 800 CECT images of 401 breast cancer patients retrospectively enrolled in the last three years was adopted to train, validate, and test the deep convolutional neural network models. By comparing the accuracy, positive predictive value, negative predictive value, sensitivity and specificity indices, the performance of the proposed model is analyzed in detail. The best-performing DA-VGG19 model achieved an accuracy of 0.9088, which is higher than that of other classification neural networks. As such, the proposed intelligent diagnosis algorithm can provide doctors with daily diagnostic assistance and advice and reduce the workload of doctors. The source code mentioned in this article will be released later.
Affiliation(s)
- Ziyi Liu
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Sijie Ni
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Chunmei Yang
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China
- Weihao Sun
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Deqing Huang
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Hu Su
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Jian Shu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China
- Na Qin
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
20
Kassem MA, Hosny KM, Damaševičius R, Eltoukhy MM. Machine Learning and Deep Learning Methods for Skin Lesion Classification and Diagnosis: A Systematic Review. Diagnostics (Basel) 2021; 11:1390. PMID: 34441324; PMCID: PMC8391467; DOI: 10.3390/diagnostics11081390.
Abstract
Computer-aided skin lesion diagnosis is a growing area of research, and interest in developing computer-aided diagnosis systems has increased recently. This paper aims to review, synthesize and evaluate the quality of evidence for the diagnostic accuracy of computer-aided systems. It discusses papers published in the last five years in the ScienceDirect, IEEE, and SpringerLink databases: 53 articles using traditional machine learning methods and 49 using deep learning methods. The studies are compared based on their contributions, the methods used and the achieved results. The work identifies the main challenges in evaluating skin lesion segmentation and classification methods, such as small datasets, ad hoc image selection and racial bias.
Affiliation(s)
- Mohamed A. Kassem
- Department of Robotics and Intelligent Machines, Faculty of Artificial Intelligence, Kaferelshiekh University, Kaferelshiekh 33511, Egypt
- Khalid M. Hosny
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
- Mohamed Meselhy Eltoukhy
- Computer Science Department, Faculty of Computers and Informatics, Suez Canal University, Ismailia 41522, Egypt
21
Papadakis M, Paschos A, Manios A, Lehmann P, Manios G, Zirngibl H. Computer-aided clinical image analysis for non-invasive assessment of tumor thickness in cutaneous melanoma. BMC Res Notes 2021; 14:232. PMID: 34127072; PMCID: PMC8201878; DOI: 10.1186/s13104-021-05650-4.
Abstract
OBJECTIVE Computerized clinical image analysis has been shown to improve diagnostic accuracy for cutaneous melanoma, but its effectiveness in preoperative assessment of melanoma thickness has not been studied. The aim of this study is to explore how melanoma thickness correlates with objectively obtained, computer-assisted color and geometric variables. All patients diagnosed with cutaneous melanoma with clinical images available prior to tumor excision were included in the study. All images underwent digital processing with automated non-commercial software. The software provided measurements for geometric variables, i.e., overall lesion surface, maximum diameter, perimeter, circularity, eccentricity and mean radius, as well as for color variables, i.e., range, standard deviation, coefficient of variation and skewness in the red, green, and blue color space. RESULTS One hundred fifty-six lesions were included in the final analysis. The mean tumor thickness was 1.84 mm (range 0.2-25). Melanoma thickness was strongly correlated with overall surface area, maximum diameter, perimeter and mean lesion radius, and moderately correlated with eccentricity, green color and blue color. We conclude that geometric and color parameters, as objectively extracted by computer-aided clinical image processing, may correlate with tumor thickness in patients with cutaneous melanoma. However, these correlations are not strong enough to reliably predict tumor thickness.
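Of the geometric variables listed, circularity has a simple standard closed form, 4πA/P². A minimal sketch follows; the function name and example shapes are illustrative, not taken from the study's software.

```python
import numpy as np

def circularity(area, perimeter):
    """Circularity 4*pi*A / P^2: exactly 1 for a perfect circle,
    approaching 0 for elongated or highly irregular borders."""
    return 4.0 * np.pi * area / perimeter ** 2

# A circle of radius r has A = pi r^2 and P = 2 pi r -> circularity 1.
r = 3.0
print(round(circularity(np.pi * r ** 2, 2 * np.pi * r), 2))   # 1.0
# An elongated 1 x 10 rectangle: A = 10, P = 22 -> well below 1.
print(round(circularity(10.0, 22.0), 2))                      # 0.26
```

On real lesion masks the perimeter comes from a discrete boundary estimate, so measured circularity of even round lesions sits somewhat below 1.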
Affiliation(s)
- Marios Papadakis
- Division of Surgery II, University of Witten-Herdecke, Heusnerstr. 40, 42283 Wuppertal, Germany
- Alexandros Paschos
- Department of Dermatology, Helios St. Elisabeth Hospital Oberhausen, Oberhausen, Germany
- Andreas Manios
- Department of Surgical Oncology, School of Medicine, University Hospital Heraklion, Heraklion, Greece
- Percy Lehmann
- Department of Dermatology, Helios University Hospital, Wuppertal, Germany
- Georgios Manios
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Volos, Greece
- Hubert Zirngibl
- Department of Surgery, Helios University Hospital, Wuppertal, Germany
22
Optical Technologies for the Improvement of Skin Cancer Diagnosis: A Review. Sensors (Basel) 2021; 21:252. PMID: 33401739; PMCID: PMC7795742; DOI: 10.3390/s21010252.
Abstract
The worldwide incidence of skin cancer has risen rapidly in the last decades; it now accounts for one in three cancers. Currently, a person has a 4% chance of developing melanoma, the most aggressive form of skin cancer and the one that causes the greatest number of deaths. In the context of increasing incidence and mortality, skin cancer bears a heavy health and economic burden. Nevertheless, the 5-year survival rate for people with skin cancer improves significantly if the disease is detected and treated early. Accordingly, large research efforts have been devoted to achieving early detection and a better understanding of the disease, with the aim of reversing the trend of rising incidence and mortality, especially regarding melanoma. This paper reviews a variety of the optical modalities that have been used in recent years to improve non-invasive diagnosis of skin cancer, including confocal microscopy, multispectral imaging, three-dimensional topography, optical coherence tomography, polarimetry, self-mixing interferometry, and machine learning algorithms. The basics of each of these technologies and the most relevant achievements obtained are described, as well as some of the obstacles still to be resolved and milestones to be met.
23
Abstract
Segmenting brain tumors accurately and reliably is an essential part of cancer diagnosis and treatment planning. Brain tumor segmentation in glioma patients is challenging because of the wide variety of tumor sizes, shapes, positions, scanning modalities, and scanner acquisition protocols. Many convolutional neural network (CNN)-based methods have been proposed for brain tumor segmentation and have achieved great success. However, most previous studies do not fully account for multiscale tumors and often fail to segment small tumors, which may have a significant impact on finding early-stage cancers. This paper deals with segmentation of brain tumors of any size, but focuses especially on accurately identifying small tumors, thereby improving segmentation performance across all sizes. Instead of using heavyweight networks with multiple resolutions or kernel sizes, we propose a novel approach to better segment small tumors using dilated convolution and multi-task learning. Dilated convolution is used for multiscale feature extraction, but on its own it does not handle very small tumors well. To deal with small tumors, we use multi-task learning, where an auxiliary task of feature reconstruction retains the features of small tumors. Experiments show the effectiveness of the proposed method on small tumors. This paper contributes to the detection and segmentation of small tumors, which have seldom been considered before, and to the development of hierarchical analysis using multi-task learning.
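The parameter-free receptive-field growth that makes dilated convolution attractive for multiscale feature extraction can be sketched in one dimension. This toy `dilated_conv1d` is an illustration of the general operation, not the paper's network: the same three kernel taps cover a wider span as the dilation rate grows.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with holes: kernel taps are spaced
    `dilation` samples apart, so the effective receptive field is
    (k - 1) * dilation + 1 without adding any parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(10, dtype=float)
k3 = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k3, dilation=1))  # 3 taps span 3 samples
print(dilated_conv1d(x, k3, dilation=2))  # same 3 taps span 5 samples
```

Stacking layers with increasing dilation rates is how such networks see large structures cheaply; the abstract's caveat is that this spreading of the taps also dilutes the signal from very small structures, hence the auxiliary reconstruction task.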
24
Xie Y, Zhang J, Xia Y, Shen C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE Trans Med Imaging 2020; 39:2482-2493. PMID: 32070946; DOI: 10.1109/tmi.2020.2972964.
Abstract
Automated skin lesion segmentation and classification are two of the most essential and closely related tasks in the computer-aided diagnosis of skin cancer. Despite their prevalence, deep learning models are usually designed for only one task, ignoring the potential benefits of jointly performing both tasks. In this paper, we propose the mutual bootstrapping deep convolutional neural networks (MB-DCNN) model for simultaneous skin lesion segmentation and classification. This model consists of a coarse segmentation network (coarse-SN), a mask-guided classification network (mask-CN), and an enhanced segmentation network (enhanced-SN). On one hand, the coarse-SN generates coarse lesion masks that provide a prior bootstrapping for mask-CN to help it locate and classify skin lesions accurately. On the other hand, the lesion localization maps produced by mask-CN are then fed into enhanced-SN, aiming to transfer the localization information learned by mask-CN to enhanced-SN for accurate lesion segmentation. In this way, the segmentation and classification networks mutually transfer knowledge and facilitate each other in a bootstrapping way. Meanwhile, we also design a novel rank loss and jointly use it with the Dice loss in the segmentation networks to address the issues caused by class imbalance and hard-easy pixel imbalance. We evaluate the proposed MB-DCNN model on the ISIC-2017 and PH2 datasets, achieving a Jaccard index of 80.4% and 89.4% in skin lesion segmentation and an average AUC of 93.8% and 97.7% in skin lesion classification, which are superior to the performance of representative state-of-the-art skin lesion segmentation and classification methods. Our results suggest that it is possible to boost the performance of skin lesion segmentation and classification simultaneously by training a unified model to perform both tasks in a mutual bootstrapping way.
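Of the two losses combined in the segmentation networks, the Dice loss is standard and can be sketched directly; the rank loss is specific to the paper and is not reproduced here. `dice_loss` below is a generic soft-Dice sketch on binary masks, not the MB-DCNN code.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|). Because it is a ratio
    of overlap to total mass, it is far less dominated by the abundant
    background pixels than plain pixelwise cross-entropy, which is why
    it suits class-imbalanced lesion segmentation."""
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

target = np.zeros((4, 4))
target[1:3, 1:3] = 1                 # a 4-pixel "lesion" in a 16-pixel image
perfect = target.copy()              # perfect prediction
half = np.zeros((4, 4))
half[1:3, 1] = 1                     # prediction covering half the lesion
print(round(dice_loss(perfect, target), 3))  # 0.0
print(round(dice_loss(half, target), 3))     # ~0.333
```

With soft (probabilistic) predictions the same formula stays differentiable, so it can be minimized directly by gradient descent alongside an auxiliary loss.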
25
Artificial Neural Network and Cox Regression Models for Predicting Mortality after Hip Fracture Surgery: A Population-Based Comparison. Medicina (Kaunas) 2020; 56:243. PMID: 32438724; PMCID: PMC7279348; DOI: 10.3390/medicina56050243.
Abstract
This study aimed to validate the accuracy of an artificial neural network (ANN) model for predicting mortality after hip fracture surgery during the study period, and to compare performance indices between the ANN model and a Cox regression model. A total of 10,534 hip fracture surgery patients during 1996–2010 were recruited in the study. Three datasets were used: a training dataset (n = 7374) for model development, a testing dataset (n = 1580) for internal validation, and a validation dataset (n = 1580) for external validation. Global sensitivity analysis was also performed to evaluate the relative importance of input predictors in the ANN model. Mortality after hip fracture surgery was significantly associated with referral system, age, gender, urbanization of residence area, socioeconomic status, Charlson comorbidity index (CCI) score, intracapsular fracture, hospital volume, and surgeon volume (p < 0.05). For predicting mortality after hip fracture surgery, the ANN model had higher prediction accuracy and overall performance indices than the Cox model. Global sensitivity analysis of the ANN model showed that referral to lower-level medical institutions was the most important variable affecting mortality, followed by surgeon volume, hospital volume, and CCI score. Compared with the Cox regression model, the ANN model was more accurate in predicting postoperative mortality after a hip fracture. The predictors associated with postoperative mortality identified in this study can also be used to educate candidates for hip fracture surgery with respect to the course of recovery and health outcomes.
26
Tan TY, Zhang L, Lim CP. Intelligent skin cancer diagnosis using improved particle swarm optimization and deep learning models. Appl Soft Comput 2019. DOI: 10.1016/j.asoc.2019.105725.
27
Chatterjee S, Dey D, Munshi S. Integration of morphological preprocessing and fractal based feature extraction with recursive feature elimination for skin lesion types classification. Comput Methods Programs Biomed 2019; 178:201-218. PMID: 31416550; DOI: 10.1016/j.cmpb.2019.06.018.
Abstract
BACKGROUND AND OBJECTIVE Skin cancer is the most common form of cancer worldwide. Non-invasive and non-contact imaging modalities are being used for the screening of melanoma and other cutaneous malignancies to support early detection and prevention of the disease. Traditionally, it has been difficult for medical personnel to differentiate melanoma, dysplastic nevi and basal cell carcinoma (BCC) from one another due to the confusing appearance and similar characteristics of the pigmented lesions. This paper reports an integrated method developed for identifying these skin diseases from dermoscopic images. METHODS The proposed integrated computer-aided method identifies each of these diseases using a recursive feature elimination (RFE)-based, layered-structure multiclass image classification technique. Prior to classification, different quantitative features were extracted by analyzing the shape, border irregularity, texture and color of the skin lesions using different image processing tools. A combination of the gray level co-occurrence matrix (GLCM) and a proposed fractal-based regional texture analysis (FRTA) algorithm was primarily used to quantify textural information. The performance of the framework was evaluated using a layered-structure classification model with a support vector machine (SVM) classifier using a radial basis function (RBF) kernel. RESULTS The morphological skin lesion segmentation algorithm achieved a pixel-level sensitivity (Sen) of 0.9172, specificity (Spec) of 0.9788 and accuracy (ACU) of 0.9521, along with image similarity indices of 0.8562 (Jaccard similarity index, JSI) and 0.9142 (Dice similarity coefficient, DSC) with respect to the corresponding ground truth (GT) images.
The quantitative features extracted by the proposed feature extraction algorithms were employed for multi-class skin disease identification. The proposed layered structure identifies all three classes of skin disease with highly acceptable classification accuracies of 98.99%, 97.54% and 99.65% for melanoma, dysplastic nevi and BCC, respectively. CONCLUSION To overcome the difficulties of diagnosing diseases by visual evaluation alone, the proposed integrated system plays an important role by quantifying effective features and identifying the diseases with a high degree of accuracy. This combined quantitative and qualitative analysis not only increases diagnostic accuracy but also provides important information not obtainable from qualitative assessment alone.
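The GLCM mentioned in METHODS follows a standard construction: count co-occurring gray-level pairs at a fixed pixel offset, normalize, then summarize with Haralick statistics. The sketch below, with a hypothetical `glcm` helper and the contrast statistic on a tiny quantized image, illustrates that construction, not the paper's FRTA algorithm.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray level co-occurrence matrix: count how often gray level i
    co-occurs with gray level j at offset (dy, dx), then normalize so
    the entries form a joint probability distribution."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Haralick contrast: sum of p(i, j) * (i - j)^2, large for images
    whose neighboring pixels differ strongly in gray level."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# A tiny 4-level image: 12 horizontal pixel pairs in total.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, dx=1, dy=0, levels=4)
print(round(glcm_contrast(p), 3))  # 7/12 ≈ 0.583
```

In practice such statistics (contrast, homogeneity, correlation, energy) are computed at several offsets and angles and concatenated into the texture feature vector that feeds the classifier.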
Affiliation(s)
- Debangshu Dey, Electrical Engineering Department, Jadavpur University, Kolkata-700032, India
- Sugata Munshi, Electrical Engineering Department, Jadavpur University, Kolkata-700032, India
28
Chatterjee S, Dey D, Munshi S, Gorai S. Extraction of features from cross correlation in space and frequency domains for classification of skin lesions. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101581] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
29
Barata C, Celebi ME, Marques JS. A Survey of Feature Extraction in Dermoscopy Image Analysis of Skin Cancer. IEEE J Biomed Health Inform 2019; 23:1096-1109. [DOI: 10.1109/jbhi.2018.2845939] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
30
Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK. Medical Image Analysis using Convolutional Neural Networks: A Review. J Med Syst 2018; 42:226. [DOI: 10.1007/s10916-018-1088-1] [Citation(s) in RCA: 247] [Impact Index Per Article: 35.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2018] [Accepted: 09/25/2018] [Indexed: 01/03/2023]
31
Dermoscopic assisted diagnosis in melanoma: Reviewing results, optimizing methodologies and quantifying empirical guidelines. Knowl Based Syst 2018. [DOI: 10.1016/j.knosys.2018.05.016] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
32
Sánchez-Monedero J, Pérez-Ortiz M, Sáez A, Gutiérrez PA, Hervás-Martínez C. Partial order label decomposition approaches for melanoma diagnosis. Appl Soft Comput 2018. [DOI: 10.1016/j.asoc.2017.11.042] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]