1
Maheswari M, Ahamed Ayoobkhan MU, Shirley CP, Lakshmi TRV. Optimized attention-induced multihead convolutional neural network with EfficientNetV2-fostered melanoma classification using dermoscopic images. Med Biol Eng Comput 2024. [PMID: 38833025] [DOI: 10.1007/s11517-024-03106-y]
Abstract
Melanoma is an uncommon and dangerous type of skin cancer. Dermoscopic imaging aids skilled dermatologists in detection, yet the subtle differences between melanoma and non-melanoma conditions complicate diagnosis. Early identification of melanoma is vital for successful treatment, but manual diagnosis is time-consuming and requires a trained dermatologist. To overcome this issue, this article proposes an Optimized Attention-Induced Multihead Convolutional Neural Network with EfficientNetV2-fostered melanoma classification using dermoscopic images (AIMCNN-ENetV2-MC). The input images are drawn from a dermoscopic image dataset. An Adaptive Distorted Gaussian Matched Filter (ADGMF) removes noise and improves the quality of the dermoscopic images. These pre-processed images are fed to the AIMCNN. The AIMCNN-ENetV2 classifies acral melanoma versus benign nevus, and the Boosted Chimp Optimization Algorithm (BCOA) optimizes the classifier for accurate classification. The proposed AIMCNN-ENetV2-MC is implemented in Python and attains an outstanding overall accuracy of 98.75% with a lower computation time of 98 s compared with existing models.
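The ADGMF preprocessing builds on the classical Gaussian matched filter. As a minimal illustrative sketch (the generic zero-mean Gaussian kernel only, not the paper's adaptive distorted variant; `sigma` and `half_width` are hypothetical parameters):

```python
import math

def gaussian_matched_kernel(sigma: float, half_width: int) -> list[float]:
    """Build a 1-D zero-mean Gaussian matched-filter kernel.

    The Gaussian profile is negated and mean-subtracted so the kernel
    responds to dark line-like structures on a brighter background,
    as in classical matched filtering for vessel/noise enhancement.
    """
    xs = range(-half_width, half_width + 1)
    profile = [-math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in xs]
    mean = sum(profile) / len(profile)
    return [p - mean for p in profile]

# Zero-mean by construction: convolving a constant region yields ~0 response.
kernel = gaussian_matched_kernel(sigma=2.0, half_width=6)
```

In practice such a kernel is rotated over several orientations and the maximum response per pixel is kept.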
Affiliation(s)
- M Maheswari
- Department of Information Technology, DMI College of Engineering, Chennai, Tamil Nadu, India
- C P Shirley
- Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- T R Vijaya Lakshmi
- Department of Electronics and Communication Engineering, Mahatma Gandhi Institute of Technology, Gandipet, Hyderabad, India
2
Bibi S, Khan MA, Shah JH, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection. Diagnostics (Basel) 2023; 13:3063. [PMID: 37835807] [PMCID: PMC10572512] [DOI: 10.3390/diagnostics13193063]
Abstract
Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence. The considerable death rate linked with melanoma requires early detection so that patients receive immediate and successful treatment. Lesion detection and classification are challenging due to many forms of artifacts, such as hairs and noise, and the irregularity of lesion shape, color, and texture. In this work, we propose a deep-learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on the image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified with a residual block at the end and trained through transfer learning, with a genetic algorithm used to select hyperparameters during learning. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases classification accuracy, but some irrelevant information is also retained; therefore, a best-feature selection algorithm, marine predators algorithm (MPA)-controlled Rényi entropy, is developed. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, were selected for the experiments, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. A detailed comparison with several recent techniques shows that the proposed framework outperforms them.
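The abstract does not spell out the "serial-harmonic mean" fusion; one plausible reading, sketched here purely as an illustration (the function names and the exact combination rule are assumptions, not the paper's implementation), is a serial concatenation of both feature vectors plus an element-wise harmonic-mean channel:

```python
def harmonic_mean_fusion(f1: list[float], f2: list[float]) -> list[float]:
    """Element-wise harmonic mean of two equal-length feature vectors."""
    assert len(f1) == len(f2)
    eps = 1e-12  # guard against division by zero for zero-valued features
    return [2.0 * a * b / (a + b + eps) for a, b in zip(f1, f2)]

def serial_harmonic_fusion(f1: list[float], f2: list[float]) -> list[float]:
    """Hypothetical two-step fusion: serial concatenation of both feature
    vectors followed by an appended harmonic-mean channel."""
    return f1 + f2 + harmonic_mean_fusion(f1, f2)

# Fusing two 2-D feature vectors yields a 6-D fused descriptor.
fused = serial_harmonic_fusion([0.5, 1.0], [1.0, 1.0])
```

The harmonic mean emphasizes features that are strong in *both* networks, which is one motivation for mean-based fusion schemes.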
Affiliation(s)
- Sobia Bibi
- Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut 1102-2801, Lebanon
- Department of CS, HITEC University, Taxila 47080, Pakistan
- Jamal Hussain Shah
- Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Robertas Damaševičius
- Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Areej Alasiry
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
- Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
3
Elshahawy M, Elnemr A, Oproescu M, Schiopu AG, Elgarayhi A, Elmogy MM, Sallah M. Early Melanoma Detection Based on a Hybrid YOLOv5 and ResNet Technique. Diagnostics (Basel) 2023; 13:2804. [PMID: 37685342] [PMCID: PMC10486497] [DOI: 10.3390/diagnostics13172804]
Abstract
Skin cancer, specifically melanoma, is a serious health issue that arises from melanocytes, the cells that produce melanin, the pigment responsible for skin color. With skin cancer on the rise, the timely identification of skin lesions is crucial for effective treatment. However, the similarity between some skin lesions can result in misclassification, which is a significant problem. It is important to note that benign skin lesions are more prevalent than malignant ones, which can lead to overly cautious algorithms and incorrect results. As a solution, researchers are developing computer-assisted diagnostic tools to detect malignant tumors early. First, a new model based on the combination of "you only look once" (YOLOv5) and ResNet50 is proposed for melanoma detection and grading on the "Human Against Machine with 10000 training images" (HAM10000) dataset. Second, feature maps integrate gradient change, which allows rapid inference, boosts precision, and reduces the number of hyperparameters in the model, making it smaller. Finally, the current YOLOv5 model is modified to obtain the desired outcomes by adding new classes for dermatoscopic images of typical pigmented skin lesions. The proposed approach improves melanoma detection with a real-time non-maximum suppression (NMS) cost of 0.4 ms per image. The average performance metrics are 99.0%, 98.6%, 98.8%, 99.5%, 98.3%, and 98.7% for precision, recall, dice similarity coefficient (DSC), accuracy, mean average precision (mAP) at IoU 0.5, and mAP from 0.5 to 0.95, respectively. Compared to current melanoma detection approaches, the proposed approach makes more efficient use of deep features.
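The NMS step quoted above is standard in detectors like YOLOv5: among overlapping candidate boxes, keep only the highest-scoring one. A minimal pure-Python sketch of greedy NMS (not the paper's code; boxes are `(x1, y1, x2, y2)` tuples by assumption):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # Keep a box only if it does not overlap a higher-scoring kept box.
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep
```

For two near-duplicate lesion boxes and one distant box, only the higher-scoring duplicate and the distant box survive.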
Affiliation(s)
- Manar Elshahawy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnemr
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mihai Oproescu
- Faculty of Electronics, Communication, and Computer Science, University of Pitesti, 110040 Pitesti, Romania
- Adriana-Gabriela Schiopu
- Department of Manufacturing and Industrial Management, Faculty of Mechanics and Technology, University of Pitesti, 110040 Pitesti, Romania
- Ahmed Elgarayhi
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mohammed M. Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Mohammed Sallah
- Department of Physics, College of Sciences, University of Bisha, P.O. Box 344, Bisha 61922, Saudi Arabia
4
Ding Y, Yi Z, Li M, Long J, Lei S, Guo Y, Fan P, Zuo C, Wang Y. HI-MViT: A lightweight model for explainable skin disease classification based on modified MobileViT. Digit Health 2023; 9:20552076231207197. [PMID: 37846401] [PMCID: PMC10576942] [DOI: 10.1177/20552076231207197]
Abstract
Objective To develop an explainable, lightweight, high-precision skin disease classification model that can be deployed to mobile terminals. Methods In this study, we present HI-MViT, a lightweight network for explainable skin disease classification based on Modified MobileViT. HI-MViT is mainly composed of ordinary convolution, Improved-MV2, MobileViT blocks, global pooling, and fully connected layers. Improved-MV2 uses the combination of a shortcut and depthwise separable convolution to substantially decrease the amount of computation while ensuring efficient information interaction and memory use. The MobileViT block can efficiently encode local and global information. In addition, semantic feature dimensionality-reduction visualization and class activation mapping visualization are used to further understand the attention regions of HI-MViT when learning skin lesion images. Results The International Skin Imaging Collaboration has assembled and made available the ISIC series datasets. Experiments using HI-MViT on the ISIC-2018 dataset achieved scores of 0.931, 0.932, 0.961, and 0.977 on F1-Score, Accuracy, Average Precision (AP), and area under the curve (AUC). Compared with the top five algorithms of ISIC-2018 Task 3, the macro-average F1-Score, AP, and AUC increased by 6.9%, 6.8%, and 0.8% over the next-best model. Compared with ConvNeXt, the most competitive convolutional neural network architecture, our model is 5.0%, 3.4%, 2.3%, and 2.2% higher in F1-Score, Accuracy, AP, and AUC, respectively. Experiments on the ISIC-2017 dataset also achieved excellent results, with all indicators better than the top five algorithms of ISIC-2017 Task 3. Testing the trained model on the PH2 dataset likewise yields an excellent performance score, which shows good generalization.
Conclusions The skin disease classification model HI-MViT shows excellent classification and generalization performance in our experiments. It demonstrates how the classification outcomes can support dermatologists' computer-assisted diagnostics, enabling medical professionals to classify various dermoscopic images more rapidly and reliably.
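The macro-averaged F1 reported above weights every class equally regardless of class frequency, which matters on imbalanced dermoscopy datasets. A minimal reference implementation (plain Python, not the paper's evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2*TP / (2*TP + FP + FN); a class with no true positives scores 0.
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)
```

Because rare classes count as much as common ones, a model that ignores a minority lesion class is penalized heavily, unlike with plain accuracy.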
Affiliation(s)
- Yuhan Ding
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Zhenglin Yi
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Department of Urology, Xiangya Hospital, Central South University, Changsha, China
- Mengjuan Li
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Jianhong Long
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Shaorong Lei
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Yu Guo
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Pengju Fan
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Chenchen Zuo
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Yongjie Wang
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
5
Abdelhafeez A, Mohamed HK, Maher A, Khalil NA. A novel approach toward skin cancer classification through fused deep features and neutrosophic environment. Front Public Health 2023; 11:1123581. [PMID: 37139387] [PMCID: PMC10150637] [DOI: 10.3389/fpubh.2023.1123581]
Abstract
Variations in the size and texture of melanoma make the classification procedure more complex in a computer-aided diagnostic (CAD) system. This research proposes an innovative hybrid deep learning-based layer-fusion and neutrosophic-set technique for identifying skin lesions. Off-the-shelf networks are examined to categorize eight types of skin lesions using transfer learning on the International Skin Imaging Collaboration (ISIC) 2019 skin lesion dataset. The top two networks, GoogleNet and DarkNet, achieved accuracies of 77.41% and 82.42%, respectively. The proposed method works in two successive stages. The first boosts the classification accuracy of the trained networks individually: a suggested feature fusion methodology enriches the extracted features' descriptive power, which raises the accuracies to 79.2% and 84.5%, respectively. The second stage explores how to combine these networks for further improvement. The error-correcting output codes (ECOC) paradigm is utilized to construct a set of well-trained true and false support vector machine (SVM) classifiers from the fused DarkNet and GoogleNet feature maps, respectively. The ECOC coding matrices are designed to train each true classifier and its opponent in a one-versus-other fashion. Consequently, contradictions between true and false classifiers in terms of their classification scores create an ambiguity zone quantified by an indeterminacy set. Recent neutrosophic techniques resolve this ambiguity to tilt the balance toward the correct skin cancer class. As a result, the classification score increases to 85.74%, outperforming recent proposals by a clear margin. The trained models, alongside the implementation of the proposed single-valued neutrosophic sets (SVNSs), will be made publicly available to aid related research.
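The ECOC idea above can be reduced to a small decoding sketch: each class gets a codeword of ±1 bits, each bit is predicted by a binary classifier, and the class whose codeword is closest to the predicted bit pattern wins. This is the generic ECOC decoder, not the paper's SVM/neutrosophic pipeline; the one-vs-all code matrix is an illustrative assumption:

```python
def ecoc_decode(bit_scores, code_matrix):
    """Decode binary-classifier outputs against an ECOC code matrix.

    bit_scores:  signed outputs (e.g. +/-1 or SVM margins), one per classifier.
    code_matrix: one codeword (row of +/-1) per class.
    Returns the index of the class whose codeword disagrees with the
    predicted signs in the fewest positions (Hamming-style decoding).
    """
    def distance(codeword):
        return sum(0 if s * c > 0 else 1 for s, c in zip(bit_scores, codeword))
    return min(range(len(code_matrix)), key=lambda k: distance(code_matrix[k]))

# One-vs-all code matrix for 3 classes: row k is +1 at bit k, -1 elsewhere.
codes = [[+1, -1, -1], [-1, +1, -1], [-1, -1, +1]]
```

When two classifiers contradict each other (the "ambiguity zone" the abstract mentions), several codewords tie in distance; the paper's neutrosophic stage is what breaks such ties.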
Affiliation(s)
- Ahmed Abdelhafeez
- Faculty of Information Systems and Computer Science, October 6th University, Cairo, Egypt
- Correspondence: Ahmed Abdelhafeez
- Ali Maher
- Military Technical College, Cairo, Egypt
- Nariman A. Khalil
- Faculty of Information Systems and Computer Science, October 6th University, Cairo, Egypt
6
Nie Y, Sommella P, Carratù M, O’Nils M, Lundgren J. A Deep CNN Transformer Hybrid Model for Skin Lesion Classification of Dermoscopic Images Using Focal Loss. Diagnostics (Basel) 2022; 13:72. [PMID: 36611363] [PMCID: PMC9818899] [DOI: 10.3390/diagnostics13010072]
Abstract
Skin cancers are the most commonly diagnosed cancers worldwide, with an estimated >1.5 million new cases in 2020. The use of computer-aided diagnosis (CAD) systems for early detection and classification of skin lesions helps reduce skin cancer mortality rates. Inspired by the success of the transformer network in natural language processing (NLP) and the deep convolutional neural network (DCNN) in computer vision, we propose an end-to-end CNN transformer hybrid model with a focal loss (FL) function to classify skin lesion images. First, the CNN extracts low-level, local feature maps from the dermoscopic images. In the second stage, the vision transformer (ViT) globally models these features and extracts abstract, high-level semantic information, which is finally sent to the multi-layer perceptron (MLP) head for classification. Based on an evaluation of three different loss functions, the FL-based training is intended to mitigate the extreme class imbalance in the International Skin Imaging Collaboration (ISIC) 2018 dataset. The experimental analysis demonstrates that the hybrid model and FL strategy achieve impressive skin lesion classification results, with significantly high performance that outperforms existing work.
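The focal loss used here has a simple closed form: it is cross-entropy scaled by a factor that vanishes for well-classified examples. A per-sample sketch (the standard formula from the focal loss literature, not this paper's exact training code; `alpha` is the optional class-weighting term):

```python
import math

def focal_loss(p_true: float, gamma: float = 2.0, alpha: float = 1.0) -> float:
    """Focal loss for one sample: -alpha * (1 - p)^gamma * log(p),
    where p_true is the predicted probability of the true class.

    gamma=0 recovers ordinary cross-entropy; larger gamma down-weights
    easy, confidently-classified examples so that rare or hard classes
    contribute more to the gradient under class imbalance.
    """
    return -alpha * (1.0 - p_true) ** gamma * math.log(p_true)
```

For an easy example (p = 0.9), gamma = 2 shrinks the loss by a factor of 0.01, while a hard example (p = 0.1) is shrunk by only 0.81, which is exactly the imbalance-correcting behavior the abstract relies on.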
Affiliation(s)
- Yali Nie
- Department of Electronics Design, Mid Sweden University, 85170 Sundsvall, Sweden
- Paolo Sommella
- Department of Industrial Engineering, University of Salerno, 84084 Fisciano, SA, Italy
- Marco Carratù
- Department of Industrial Engineering, University of Salerno, 84084 Fisciano, SA, Italy
- Mattias O’Nils
- Department of Electronics Design, Mid Sweden University, 85170 Sundsvall, Sweden
- Jan Lundgren
- Department of Electronics Design, Mid Sweden University, 85170 Sundsvall, Sweden
- Correspondence: Tel.: +46-1014-28556
7
An Ensemble of Transfer Learning Models for the Prediction of Skin Cancers with Conditional Generative Adversarial Networks. Diagnostics (Basel) 2022; 12:3145. [PMID: 36553152] [PMCID: PMC9777332] [DOI: 10.3390/diagnostics12123145]
Abstract
Skin cancer is one of the most severe forms of the disease, and it can spread to other parts of the body if not detected early. Therefore, diagnosing and treating skin cancer patients at an early stage is crucial. Manual skin cancer diagnosis is both time-consuming and expensive, and the high similarity between the various skin cancers often leads to incorrect diagnoses. Improved categorization of multiclass skin cancers requires the development of automated diagnostic systems. Herein, we propose a fully automatic method for classifying several skin cancers by fine-tuning the deep learning models VGG16, ResNet50, and ResNet101. Prior to model creation, the training dataset undergoes data augmentation using traditional image transformation techniques and Generative Adversarial Networks (GANs) to prevent class imbalance issues that may lead to model overfitting. In this study, we investigate the feasibility of creating realistic-looking dermoscopic images using Conditional Generative Adversarial Network (CGAN) techniques. Thereafter, traditional augmentation methods are used to enlarge the existing training set and improve the performance of the pre-trained deep models on the skin cancer classification task; this improved performance is then compared to models developed on the unbalanced dataset. In addition, we formed an ensemble of the finely tuned transfer learning models, trained on both balanced and unbalanced datasets, and used these models for prediction. With appropriate data augmentation, the proposed models attained accuracies of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101, and the ensemble of these models increased the accuracy to 93.5%. A comprehensive discussion of model performance concludes that this method likely enhances skin cancer categorization compared to past efforts.
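Combining the three fine-tuned models into an ensemble is typically done by soft voting: average the per-class probabilities and take the arg-max. A minimal sketch of that aggregation step (the abstract does not specify the combination rule, so soft voting is an assumption):

```python
def ensemble_predict(prob_lists):
    """Soft-voting ensemble: average the softmax outputs of several models
    (one probability list per model) and return the arg-max class index."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Three models disagree individually, but the averaged vote picks class 1.
winner = ensemble_predict([[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]])
```

Soft voting lets a confident model outvote two lukewarm ones, which is often why an ensemble beats each member's individual accuracy, as reported above.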
8
Liu X, Xu B, Gu W, Yin Y, Wang H. Plant leaf veins coupling feature representation and measurement method based on DeepLabV3+. Front Plant Sci 2022; 13:1043884. [PMID: 36507417] [PMCID: PMC9730334] [DOI: 10.3389/fpls.2022.1043884]
Abstract
A plant leaf vein coupling feature representation and measurement method based on DeepLabV3+ is proposed to solve the problems of slow segmentation, partial occlusion of leaf veins, and low measurement accuracy of leaf-vein parameters. First, to address slow segmentation, the lightweight MobileNetV2 is selected as the feature-extraction network for DeepLabV3+. On this basis, the Convex Hull-Scan method is applied to repair leaf veins. Subsequently, a refinement algorithm, Floodfill MorphologyEx Medianblur Morphological Skeleton (F-3MS), is proposed to reduce burring in the leaf veins' skeleton lines. Finally, the leaf veins' related parameters are measured. In this study, mean intersection over union (mIoU) and mean pixel accuracy (mPA) reach 81.50% and 92.89%, respectively, and the average segmentation speed reaches 9.81 frames per second. Furthermore, the network model parameters are compressed by 89.375%, down to 5.813M. Meanwhile, leaf vein length and width are measured with accuracies of 96.3642% and 96.1358%, respectively.
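The mIoU figure reported above is the standard segmentation metric: per-class intersection over union, averaged over classes present. A minimal sketch over flat label lists (illustrative only, not the paper's evaluation code):

```python
def mean_iou(y_true, y_pred, n_classes):
    """Mean intersection-over-union across classes, from flat label lists.

    For each class c: IoU = |pred==c AND true==c| / |pred==c OR true==c|;
    classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(n_classes):
        inter = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        union = sum(t == c or p == c for t, p in zip(y_true, y_pred))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

In real use the label lists are the flattened segmentation masks, so background and vein pixels each contribute one IoU term.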
9
Wan L, Ai Z, Chen J, Jiang Q, Chen H, Li Q, Lu Y, Chen L. Detection algorithm for pigmented skin disease based on classifier-level and feature-level fusion. Front Public Health 2022; 10:1034772. [PMID: 36339204] [PMCID: PMC9632750] [DOI: 10.3389/fpubh.2022.1034772]
Abstract
Pigmented skin disease is caused by abnormal melanocyte and melanin production, which can be induced by genetic and environmental factors, and is common among the various types of skin diseases. Timely and accurate diagnosis of pigmented skin disease is important for reducing mortality. Patients with pigmented dermatosis are generally diagnosed by a dermatologist through dermatoscopy. However, due to the current shortage of experts, this approach cannot meet the needs of the population, so a computer-aided system would help to diagnose skin lesions in remote areas that lack experts. This paper proposes an algorithm based on a fusion network for the detection of pigmented skin disease. First, we preprocess the images in the acquired dataset, and then we perform image flipping and image style transfer to augment the images and alleviate the imbalance between the various categories in the dataset. Finally, two feature-level fusion optimization schemes based on deep features are compared with a classifier-level fusion scheme based on a classification layer to determine the best fusion strategy for pigmented skin disease detection. Gradient-weighted Class Activation Mapping (Grad-CAM) and Grad-CAM++ are used for visualization to verify the effectiveness of the proposed fusion network. The results show that, compared with traditional detection algorithms for pigmented skin disease, the accuracy and Area Under the Curve (AUC) of our method reach 92.1% and 95.3%, respectively. The evaluation indices are greatly improved, proving the adaptability and accuracy of the proposed method, which can assist clinicians in screening and diagnosing pigmented skin disease and is suitable for real-world applications.
Affiliation(s)
- Li Wan
- Dermatology Department, Wuhan No.1 Hospital, Hubei, China; Dermatology Hospital of Southern Medical University, Guangzhou, China
- Zhuang Ai
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China
- Jinbo Chen
- Dermatology Department, Wuhan No.1 Hospital, Hubei, China
- Qian Jiang
- Dermatology Department, Wuhan No.1 Hospital, Hubei, China
- Hongying Chen
- Dermatology Department, Wuhan No.1 Hospital, Hubei, China
- Qi Li
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China
- Yaping Lu
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China; Correspondence: Yaping Lu
- Liuqing Chen
- Dermatology Department, Wuhan No.1 Hospital, Hubei, China; Correspondence: Liuqing Chen
10
Lee JRH, Pavlova M, Famouri M, Wong A. Cancer-Net SCa: tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med Imaging 2022; 22:143. [PMID: 35945505] [PMCID: PMC9364616] [DOI: 10.1186/s12880-022-00871-w]
Abstract
Background Skin cancer continues to be the most frequently diagnosed form of cancer in the U.S., with not only significant effects on health and well-being but also significant economic costs associated with treatment. A crucial step to the treatment and management of skin cancer is effective early detection with key screening approaches such as dermoscopy examinations, leading to stronger recovery prognoses. Motivated by the advances of deep learning and inspired by the open source initiatives in the research community, in this study we introduce Cancer-Net SCa, a suite of deep neural network designs tailored for the detection of skin cancer from dermoscopy images that is open source and available to the general public. To the best of the authors’ knowledge, Cancer-Net SCa comprises the first machine-driven design of deep neural network architectures tailored specifically for skin cancer detection, one of which leverages attention condensers for an efficient self-attention design. Results We investigate and audit the behaviour of Cancer-Net SCa in a responsible and transparent manner through explainability-driven performance validation. All the proposed designs achieved improved accuracy when compared to the ResNet-50 architecture while also achieving significantly reduced architectural and computational complexity. In addition, when evaluating the decision making process of the networks, it can be seen that diagnostically relevant critical factors are leveraged rather than irrelevant visual indicators and imaging artifacts. Conclusion The proposed Cancer-Net SCa designs achieve strong skin cancer detection performance on the International Skin Imaging Collaboration (ISIC) dataset, while providing a strong balance between computation and architectural efficiency and accuracy. 
While Cancer-Net SCa is not a production-ready screening solution, the hope is that the release of Cancer-Net SCa in open source, open access form will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
Affiliation(s)
- James Ren Hou Lee
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada
- Maya Pavlova
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada; DarwinAI Corp, Waterloo, Canada
- Alexander Wong
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, Canada; DarwinAI Corp, Waterloo, Canada
11
Deep Learning-Based Classification for Melanoma Detection Using XceptionNet. J Healthc Eng 2022; 2022:2196096. [PMID: 35360474] [PMCID: PMC8964214] [DOI: 10.1155/2022/2196096]
Abstract
Skin cancer is one of the most common types of cancer in the world, accounting for at least 40% of all cancers. Melanoma is considered the 19th most commonly occurring cancer worldwide, with about 300,000 new cases found in 2018. While cancer treatment is based on interventional methods such as surgery, radiotherapy, and chemotherapy, studies show that new computer technologies, such as image processing mechanisms for early diagnosis, can help physicians treat this cancer. This paper proposes an automatic method for diagnosing skin cancer from dermoscopy images. The proposed model is based on an improved XceptionNet, which utilizes the swish activation function and depthwise separable convolutions. This system improves classification accuracy compared to the original Xception and other deep architectures. Simulations of the proposed method are compared with related state-of-the-art skin cancer diagnosis solutions, and the results show that the suggested method achieves higher accuracy than the comparative methods.
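The two ingredients named above are easy to make concrete. Swish is `x * sigmoid(x)`, and a depthwise separable convolution factors a standard k x k convolution into a per-channel depthwise convolution plus a 1x1 pointwise mix, sharply cutting parameters. A small sketch (standard formulas, not the paper's network code; bias terms are ignored by assumption):

```python
import math

def swish(x: float, beta: float = 1.0) -> float:
    """Swish activation: x * sigmoid(beta * x)."""
    return x / (1.0 + math.exp(-beta * x))

def separable_conv_params(c_in: int, c_out: int, k: int) -> tuple[int, int]:
    """Parameter counts (no biases) for a standard k x k convolution versus
    its depthwise-separable factorization, as used in Xception."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out  # depthwise + 1x1 pointwise
    return standard, separable
```

For a 3x3 layer mapping 64 to 128 channels, the standard convolution needs 73,728 weights while the separable version needs 8,768, roughly an 8x reduction, which is where Xception-style models get their efficiency.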
12
Tiwari S, Jain A. A lightweight capsule network architecture for detection of COVID-19 from lung CT scans. Int J Imaging Syst Technol 2022; 32:419-434. [PMID: 35465213] [PMCID: PMC9015631] [DOI: 10.1002/ima.22706]
Abstract
COVID-19, a novel coronavirus disease, has spread quickly and produced a worldwide respiratory illness outbreak. Large-scale screening is needed to prevent the spread of the disease. Compared with the reverse transcription polymerase chain reaction (RT-PCR) test, computed tomography (CT) is far more consistent, concrete, and precise in detecting COVID-19 patients through clinical diagnosis. A deep learning architecture has been proposed that integrates a capsule network with different variants of convolutional neural networks: DenseNet, ResNet, VGGNet, and MobileNet are each combined with CapsNet to detect COVID-19 cases from lung CT scans. All four models provide adequate accuracy, with VGGCapsNet, DenseCapsNet, and MobileCapsNet achieving the highest accuracy of 99%. An Android-based app can be deployed using the MobileCapsNet model, as it is lightweight and best suited to handheld devices such as mobile phones.
Affiliation(s)
- Shamik Tiwari
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India
- Anurag Jain
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India
13
InSiNet: a deep convolutional approach to skin cancer detection and segmentation. Med Biol Eng Comput 2022; 60:643-662. [PMID: 35028864] [DOI: 10.1007/s11517-021-02473-0]
Abstract
Cancer is among the most common causes of death around the world, and skin cancer is one of its most lethal types. Early diagnosis and treatment are vital in skin cancer. In addition to traditional methods, methods such as deep learning are frequently used to diagnose and classify the disease. Because expert experience plays a major role in diagnosing skin cancer, deep learning algorithms can support more reliable diagnosis of skin lesions. In this study, we propose InSiNet, a deep learning-based convolutional neural network to detect benign and malignant lesions. The performance of the method is tested on International Skin Imaging Collaboration HAM10000 images (ISIC 2018), ISIC 2019, and ISIC 2020 under the same conditions. Computation time and accuracy were compared between the proposed algorithm and other machine learning techniques (GoogleNet, DenseNet-201, ResNet152V2, EfficientNetB0, RBF support vector machine, logistic regression, and random forest). The results show that the developed InSiNet architecture outperforms the other methods, achieving accuracies of 94.59%, 91.89%, and 90.54% on the ISIC 2018, 2019, and 2020 datasets, respectively. Since deep learning algorithms eliminate the human factor during diagnosis, they can give reliable results in addition to traditional methods.
14
Jain S, Singhania U, Tripathy B, Nasr EA, Aboudaif MK, Kamrani AK. Deep Learning-Based Transfer Learning for Classification of Skin Cancer. Sensors (Basel) 2021; 21:8142. [PMID: 34884146] [PMCID: PMC8662405] [DOI: 10.3390/s21238142]
Abstract
One of the major health concerns for human society is skin cancer, which arises when the pigment-producing cells that give skin its color become cancerous. Diagnosis is a challenging process for dermatologists, as many skin cancer pigments appear similar. Hence, early detection of the lesions that form the basis of skin cancer is critical for completely curing affected patients. Significant progress has been made in developing automated tools to assist dermatologists in diagnosing skin cancer, and the worldwide acceptance of artificial intelligence-supported tools has permitted use of large collections of images of malignant and benign lesions confirmed by histopathology. This paper performs a comparative analysis of six transfer learning networks for multi-class skin cancer classification on the HAM10000 dataset. We used replication of images in low-frequency classes to counter the imbalance in the dataset. The transfer learning networks used in the analysis were VGG19, InceptionV3, InceptionResNetV2, ResNet50, Xception, and MobileNet. Results demonstrate that replication is suitable for this task, achieving high classification accuracies and F-measures with lower false negatives. Xception outperforms the other transfer learning networks in this study, with an accuracy of 90.48%, and also has the highest recall, precision, and F-measure values.
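The replication strategy for class imbalance mentioned in this abstract can be sketched as follows. This is a minimal illustration assuming simple random duplication of minority-class samples up to the majority-class count; the helper name `replicate_minority` and the toy class labels are hypothetical, not from the paper:

```python
import random
from collections import Counter

def replicate_minority(samples, labels, seed=0):
    """Balance a dataset by replicating samples of under-represented
    classes until every class matches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        extra = target - len(group)  # how many duplicates this class needs
        out_samples.extend(group + rng.choices(group, k=extra))
        out_labels.extend([y] * target)
    return out_samples, out_labels

# Toy imbalanced dataset (class names loosely follow HAM10000 labels):
X = ["img_a1", "img_a2", "img_a3", "img_a4", "img_b1", "img_c1", "img_c2"]
Y = ["nv"] * 4 + ["df"] + ["mel"] * 2
Xb, Yb = replicate_minority(X, Y)
print(Counter(Yb))  # every class now has 4 samples
```

Replication (random oversampling) is the simplest rebalancing choice; augmented copies or class-weighted losses are common alternatives.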
Collapse
Affiliation(s)
- Satin Jain
- Department of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, Tamil Nadu, India
- Udit Singhania
- Department of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, Tamil Nadu, India
- Balakrushna Tripathy
- Department of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, Tamil Nadu, India
- Emad Abouel Nasr
- Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
- Mohamed K. Aboudaif
- Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
- Ali K. Kamrani
- Industrial Engineering Department, College of Engineering, University of Houston, Houston, TX 77204-4008, USA
15
Attique Khan M, Sharif M, Akram T, Kadry S, Hsu C. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. Int J Intell Syst 2021. [DOI: 10.1002/int.22691]
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Tallha Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Seifedine Kadry
- Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway
- Ching-Hsien Hsu
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Mathematics and Big Data, Foshan University, Foshan, China
- Department of Computer Science and Information Engineering, Asia University, Taichung, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
16
Turbé V, Herbst C, Mngomezulu T, Meshkinfamfard S, Dlamini N, Mhlongo T, Smit T, Cherepanova V, Shimada K, Budd J, Arsenov N, Gray S, Pillay D, Herbst K, Shahmanesh M, McKendry RA. Deep learning of HIV field-based rapid tests. Nat Med 2021; 27:1165-1170. [PMID: 34140702] [PMCID: PMC7611654] [DOI: 10.1038/s41591-021-01384-9]
Abstract
Although deep learning algorithms show increasing promise for disease diagnosis, their use with rapid diagnostic tests performed in the field has not been extensively tested. Here we use deep learning to classify images of rapid human immunodeficiency virus (HIV) tests acquired in rural South Africa. Using newly developed image capture protocols with the Samsung SM-P585 tablet, 60 fieldworkers routinely collected images of HIV lateral flow tests. From a library of 11,374 images, deep learning algorithms were trained to classify tests as positive or negative. A pilot field study of the algorithms deployed as a mobile application demonstrated high levels of sensitivity (97.8%) and specificity (100%) compared with traditional visual interpretation by humans (experienced nurses and newly trained community health workers), and reduced the number of false positives and false negatives. Our findings lay the foundations for a new paradigm of deep learning-enabled diagnostics in low- and middle-income countries, termed REASSURED diagnostics: an acronym for real-time connectivity, ease of specimen collection, affordable, sensitive, specific, user-friendly, rapid, equipment-free and deliverable. Such diagnostics have the potential to provide a platform for workforce training, quality assurance, decision support and mobile connectivity to inform disease control strategies, strengthen healthcare system efficiency and improve patient outcomes and outbreak management in emerging infections.
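The sensitivity and specificity figures reported above follow the standard confusion-matrix definitions. A minimal sketch, using illustrative counts chosen only to reproduce similar rates (the raw counts below are assumptions, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only; the paper reports rates, not raw counts.
sens, spec = sensitivity_specificity(tp=88, fn=2, tn=100, fp=0)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
```

Sensitivity penalizes missed positives (false negatives), while specificity penalizes false alarms; a screening tool in the field typically needs both to be high before it can replace visual reading.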
Collapse
Affiliation(s)
- Valérian Turbé
- London Centre for Nanotechnology, University College London, London, UK.
- Carina Herbst
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- Thobeka Mngomezulu
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- Nondumiso Dlamini
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- Thembani Mhlongo
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- Theresa Smit
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- Koki Shimada
- Department of Computer Science, University College London, London, UK
- Jobie Budd
- London Centre for Nanotechnology, University College London, London, UK
- Division of Medicine, University College London, London, UK
- Nestor Arsenov
- London Centre for Nanotechnology, University College London, London, UK
- Steven Gray
- UCL Centre for Advanced Spatial Analysis, London, UK
- Deenan Pillay
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- Division of Infection and Immunity, University College London, London, UK
- Kobus Herbst
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- DSI-MRC South African Population Research Infrastructure Network, Durban, South Africa
- Maryam Shahmanesh
- Africa Health Research Institute, Nelson R. Mandela Medical School, Durban, South Africa
- Institute for Global Health, University College London, London, UK
- Rachel A McKendry
- London Centre for Nanotechnology, University College London, London, UK
- Division of Medicine, University College London, London, UK
17
Khan MA, Sharif M, Akram T, Damaševičius R, Maskeliūnas R. Skin Lesion Segmentation and Multiclass Classification Using Deep Learning Features and Improved Moth Flame Optimization. Diagnostics (Basel) 2021; 11:811. [PMID: 33947117] [PMCID: PMC8145295] [DOI: 10.3390/diagnostics11050811]
Abstract
Manual diagnosis of skin cancer is time-consuming and expensive; therefore, it is essential to develop automated diagnostic methods that can classify multiclass skin lesions with greater accuracy. We propose a fully automated approach for multiclass skin lesion segmentation and classification using the most discriminant deep features. First, the input images are enhanced using local color-controlled histogram intensity values (LCcHIV). Next, saliency is estimated using a novel deep saliency segmentation method based on a custom ten-layer convolutional neural network (CNN). The generated heat map is converted into a binary image using a thresholding function. The segmented color lesion images are then used for feature extraction by a deep pre-trained CNN model. To avoid the curse of dimensionality, we implement an improved moth flame optimization (IMFO) algorithm to select the most discriminant features. The resultant features are fused using multiset maximum correlation analysis (MMCA) and classified using a kernel extreme learning machine (KELM) classifier. The segmentation performance of the proposed methodology is analyzed on the ISBI 2016, ISBI 2017, ISIC 2018, and PH2 datasets, achieving accuracies of 95.38%, 95.79%, 92.69%, and 98.70%, respectively. The classification performance, evaluated on the HAM10000 dataset, reaches an accuracy of 90.67%. To prove the effectiveness of the proposed methods, we present a comparison with state-of-the-art techniques.
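The heat-map-to-binary-image step in this pipeline can be illustrated with a simple thresholding function; the helper name and the threshold value here are assumptions for illustration, not the paper's exact procedure:

```python
def threshold_mask(heatmap, t=0.5):
    """Convert a saliency heat map (values in [0, 1]) into a binary
    segmentation mask: pixels at or above the threshold become lesion (1),
    the rest background (0)."""
    return [[1 if v >= t else 0 for v in row] for row in heatmap]

hm = [[0.1, 0.7, 0.9],
      [0.4, 0.6, 0.2]]
print(threshold_mask(hm))  # [[0, 1, 1], [0, 1, 0]]
```

In practice the threshold is often chosen adaptively (e.g. Otsu's method) rather than fixed, since lesion contrast varies across dermoscopy images.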
Collapse
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, Wah Campus, COMSATS University Islamabad, Wah Cantonment 47040, Pakistan
- Muhammad Sharif
- Department of Computer Science, Wah Campus, COMSATS University Islamabad, Wah Cantonment 47040, Pakistan
- Tallha Akram
- Department of Electrical Engineering, Wah Campus, COMSATS University Islamabad, Islamabad 45550, Pakistan
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Rytis Maskeliūnas
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
18
Khan MA, Akram T, Zhang YD, Sharif M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2020.12.015]
19
Afza F, Sharif M, Mittal M, Khan MA, Jude Hemanth D. A hierarchical three-step superpixels and deep learning framework for skin lesion classification. Methods 2021; 202:88-102. [PMID: 33610692] [DOI: 10.1016/j.ymeth.2021.02.013]
Abstract
Skin cancer is one of the most common and dangerous cancers worldwide. Malignant melanoma, one of the most dangerous skin cancer types, has a high mortality rate; an estimated 196,060 melanoma cases will be diagnosed in 2020 in the USA. Many computerized techniques have been presented in the past to diagnose skin lesions, but they still fail to achieve significant accuracy. To improve on existing accuracy, we propose a hierarchical framework based on two-dimensional superpixels and deep learning. First, we enhance the contrast of the original dermoscopy images by fusing locally and globally enhanced images. The enhanced images are then used to segment skin lesions through a three-step superpixel lesion segmentation. The segmented lesions are mapped over the enhanced dermoscopy images to obtain segmented color images. A deep learning model (ResNet-50) is then applied to these mapped images, and features are learned through transfer learning. The extracted features are further optimized using an improved grasshopper optimization algorithm and classified with a Naïve Bayes classifier. The proposed hierarchical method has been evaluated on three datasets (PH2, ISBI 2016, and HAM10000), consisting of three, two, and seven skin cancer classes, and achieved accuracies of 95.40%, 91.1%, and 85.50%, respectively. The results show that this method can help classify skin cancer with improved accuracy.
Collapse
Affiliation(s)
- Farhat Afza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mamta Mittal
- Department of Computer Science and Engineering, G. B. Pant Government Engineering College, Okhla, New Delhi, India
- D Jude Hemanth
- Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India