1
Ray A, Sarkar S, Schwenker F, Sarkar R. Decoding skin cancer classification: perspectives, insights, and advances through researchers' lens. Sci Rep 2024; 14:30542. [PMID: 39695157] [DOI: 10.1038/s41598-024-81961-3]
Abstract
Skin cancer is a significant global health concern, with timely and accurate diagnosis playing a critical role in improving patient outcomes. In recent years, computer-aided diagnosis systems have emerged as powerful tools for automated skin cancer classification, revolutionizing the field of dermatology. This survey analyzes 107 research papers published over the last 18 years, providing a thorough evaluation of advancements in classification techniques, with a focus on the growing integration of computer vision and artificial intelligence (AI) in enhancing diagnostic accuracy and reliability. The paper begins by presenting an overview of the fundamental concepts of skin cancer, addressing underlying challenges in accurate classification, and highlighting the limitations of traditional diagnostic methods. Extensive examination is devoted to a range of datasets, including the HAM10000 and the ISIC archive, among others, commonly employed by researchers. The exploration then delves into machine learning techniques coupled with handcrafted features, emphasizing their inherent limitations. Subsequent sections provide a comprehensive investigation into deep learning-based approaches, encompassing convolutional neural networks, transfer learning, attention mechanisms, ensemble techniques, generative adversarial networks, vision transformers, and segmentation-guided classification strategies, detailing various architectures tailored for skin lesion analysis. The survey also sheds light on the various hybrid and multimodal techniques employed for classification. By critically analyzing each approach and highlighting its limitations, this survey provides researchers with valuable insights into the latest advancements, trends, and gaps in skin cancer classification. Moreover, it offers clinicians practical knowledge on the integration of AI tools to enhance diagnostic decision-making processes. This comprehensive analysis aims to bridge the gap between research and clinical practice, serving as a guide for the AI community to further advance the state-of-the-art in skin cancer classification systems.
Affiliation(s)
- Amartya Ray
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Sujan Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89081 Ulm, Germany
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
2
Vardasca R, Mendes JG, Magalhaes C. Skin Cancer Image Classification Using Artificial Intelligence Strategies: A Systematic Review. J Imaging 2024; 10:265. [PMID: 39590729] [PMCID: PMC11595075] [DOI: 10.3390/jimaging10110265]
Abstract
The increasing incidence of, and resulting deaths associated with, malignant skin tumors are a public health problem that can be minimized if detection strategies are improved. Currently, diagnosis is heavily based on physicians' judgment and experience, which can occasionally lead to the worsening of the lesion or needless biopsies. Several non-invasive imaging modalities, e.g., confocal scanning laser microscopy or multiphoton laser scanning microscopy, have been explored for skin cancer assessment, which have been aligned with different artificial intelligence (AI) strategies to assist in the diagnostic task, based on several image features, thus making the process more reliable and faster. This systematic review concerns the implementation of AI methods for skin tumor classification with different imaging modalities, following the PRISMA guidelines. In total, 206 records were retrieved and qualitatively analyzed. Diagnostic potential was found for several techniques, particularly for dermoscopy images, with strategies yielding classification results close to perfection. Learning approaches based on support vector machines and artificial neural networks seem to be preferred, with a recent focus on convolutional neural networks. Still, detailed descriptions of training/testing conditions are lacking in some reports, hampering reproducibility. The use of AI methods in skin cancer diagnosis is an expanding field, with future work aiming to construct optimal learning approaches and strategies. Ultimately, early detection could be optimized, improving patient outcomes, even in areas where healthcare is scarce.
Affiliation(s)
- Ricardo Vardasca
- ISLA Santarem, Rua Teixeira Guedes 31, 2000-029 Santarem, Portugal
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal
- Joaquim Gabriel Mendes
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal
- Faculdade de Engenharia, Universidade do Porto, 4099-002 Porto, Portugal
- Carolina Magalhaes
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal
- Faculdade de Engenharia, Universidade do Porto, 4099-002 Porto, Portugal
3
Attallah O. Skin cancer classification leveraging multi-directional compact convolutional neural network ensembles and Gabor wavelets. Sci Rep 2024; 14:20637. [PMID: 39232043] [PMCID: PMC11375051] [DOI: 10.1038/s41598-024-69954-8]
Abstract
Skin cancer (SC) is an important medical condition that necessitates prompt identification to ensure timely treatment. Although visual evaluation by dermatologists is considered the most reliable method, it is subjective and laborious. Deep learning-based computer-aided diagnostic (CAD) platforms have become valuable tools for supporting dermatologists. Nevertheless, current CAD tools frequently depend on convolutional neural networks (CNNs) with large numbers of deep layers and hyperparameters, single-CNN methodologies, and large feature spaces, and they exclusively utilise spatial image information, which restricts their effectiveness. This study presents SCaLiNG, an innovative CAD tool specifically developed to address and surpass these constraints. SCaLiNG leverages a collection of three compact CNNs and Gabor wavelets (GW) to acquire a comprehensive feature vector consisting of spatial-textural-frequency attributes. SCaLiNG gathers a wide range of image details by decomposing the input images into multiple directional sub-bands using GW, then training several CNNs on those sub-bands and the original image. SCaLiNG then fuses the attributes taken from the various CNNs trained with the original images and the GW sub-bands. This fusion correspondingly improves diagnostic accuracy owing to the thorough representation of attributes. Furthermore, SCaLiNG applies a feature selection approach which further enhances the model's performance by choosing the most distinguishing features. Experimental findings indicate that SCaLiNG achieves a classification accuracy of 0.9170 in categorising SC subcategories, surpassing conventional single-CNN models. The outstanding performance of SCaLiNG underlines its ability to aid dermatologists in swiftly and precisely recognising and classifying SC, thereby enhancing patient outcomes.
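The directional Gabor decomposition this abstract describes can be sketched in a few lines. The kernel parameters, helper names, and filter-bank size below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def conv2_same(img, kern):
    """'Same'-size 2-D convolution via zero-padded FFTs (no external deps)."""
    H, W = img.shape
    kh, kw = kern.shape
    fh, fw = H + kh - 1, W + kw - 1
    full = np.fft.irfft2(np.fft.rfft2(img, (fh, fw)) *
                         np.fft.rfft2(kern, (fh, fw)), (fh, fw))
    return full[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def directional_subbands(image, n_orientations=4):
    """Filter one grayscale image with a bank of Gabor kernels, one per direction."""
    return [conv2_same(image, gabor_kernel(theta=k * np.pi / n_orientations))
            for k in range(n_orientations)]
```

Each sub-band (plus the original image) would then be fed to its own compact CNN, as the abstract outlines.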
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
4
Attallah O. Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning. Comput Biol Med 2024; 178:108798. [PMID: 38925085] [DOI: 10.1016/j.compbiomed.2024.108798]
Abstract
Skin cancer (SC) significantly impacts many individuals' health all over the globe. Hence, it is imperative to promptly identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNNs), can effectively address this issue with outstanding outcomes. Nevertheless, such black-box methodologies lead to a deficiency in confidence, as dermatologists are incapable of comprehending and verifying the predictions made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD", which is utilized for the classification of dermoscopic photographs of SC. The system accurately categorises the photographs into two categories, benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and deep layers. It gathers features from a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies the principal component analysis (PCA) dimensionality reduction approach to minimise the dimensions of the pooling-layer features. This also reduces the complexity of the training procedure compared to using deep features from a CNN of substantial size. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of entirely depending on the features of a single CNN architecture. In the end, it utilizes a feature selection step to determine the most important deep attributes, which helps to decrease the overall size of the feature set and streamline the classification process. Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach. This method is used to create visual interpretations that align with an already existing viewpoint and adhere to recommended standards for general clarifications. Two benchmark datasets, the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, are employed to validate the efficiency of Skin-CAD. The maximum accuracy achieved using Skin-CAD is 97.2% and 96.5% for the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, respectively. The findings of Skin-CAD demonstrate its potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
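The PCA-then-fuse step described here can be sketched with stand-in feature matrices; the dimensions, component count, and variable names are illustrative assumptions, not Skin-CAD's actual configuration:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto its top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
n_samples = 100
# Stand-ins for dual-layer deep features from two hypothetical CNNs: a wide
# pooling-layer vector (reduced with PCA) and a compact fully connected vector.
pool_a, fc_a = rng.random((n_samples, 512)), rng.random((n_samples, 64))
pool_b, fc_b = rng.random((n_samples, 2048)), rng.random((n_samples, 128))

# Reduce each pooling feature set, then concatenate everything per sample.
fused = np.hstack([pca_reduce(pool_a, 32), fc_a,
                   pca_reduce(pool_b, 32), fc_b])
```

A feature selection step (as in the abstract) would then prune `fused` before classification.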
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt; Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
5
Monica KM, Shreeharsha J, Falkowski-Gilski P, Falkowska-Gilska B, Awasthy M, Phadke R. Melanoma skin cancer detection using mask-RCNN with modified GRU model. Front Physiol 2024; 14:1324042. [PMID: 38292449] [PMCID: PMC10825805] [DOI: 10.3389/fphys.2023.1324042]
Abstract
Introduction: Melanoma Skin Cancer (MSC) is a type of cancer in the human body; therefore, early disease diagnosis is essential for reducing the mortality rate. However, dermoscopic image analysis poses challenges due to factors such as color illumination, light reflections, and the varying sizes and shapes of lesions. To overcome these challenges, an automated framework is proposed in this manuscript. Methods: Initially, dermoscopic images are acquired from two online benchmark datasets: International Skin Imaging Collaboration (ISIC) 2020 and Human Against Machine (HAM) 10000. Subsequently, a normalization technique is employed on the dermoscopic images to decrease the impact of noise, outliers, and pixel variations. Furthermore, cancerous regions in the pre-processed images are segmented utilizing the Mask-Faster Region-based Convolutional Neural Network (RCNN) model, which offers precise pixel-level segmentation by accurately delineating object boundaries. From the partitioned cancerous regions, discriminative feature vectors are extracted by applying three pre-trained CNN models, namely ResNeXt101, Xception, and InceptionV3. These feature vectors are passed into the modified Gated Recurrent Unit (GRU) model for MSC classification. In the modified GRU model, a swish-Rectified Linear Unit (ReLU) activation function is incorporated that efficiently stabilizes the learning process with a better convergence rate during training. Results and discussion: The empirical investigation demonstrates that the modified GRU model attained an accuracy of 99.95% and 99.98% on the ISIC 2020 and HAM 10000 datasets, respectively, surpassing conventional detection models.
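The "modified GRU" idea, swapping the candidate-state activation for a swish-style function, can be sketched as a single recurrent step. The exact form of the paper's swish-ReLU combination is not specified here, so plain swish is shown as an assumption, with illustrative dimensions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swish(x, beta=1.0):
    """Swish activation, x * sigmoid(beta * x): smooth and non-monotonic."""
    return x * sigmoid(beta * x)

def gru_step(x, h, params, act=swish):
    """One GRU update with the candidate-state activation swapped for swish."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)            # update gate
    r = sigmoid(x @ Wr + h @ Ur)            # reset gate
    h_tilde = act(x @ Wh + (r * h) @ Uh)    # candidate state (tanh in a stock GRU)
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 8, 4                            # toy sizes, not the paper's
params = [rng.normal(size=s) * 0.1 for s in [(d_in, d_h), (d_h, d_h)] * 3]
h_next = gru_step(rng.normal(size=d_in), np.zeros(d_h), params)
```

In the paper's pipeline, `x` would be a CNN feature vector rather than random data.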
Affiliation(s)
- K. M. Monica
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- J. Shreeharsha
- Department of Computer Science and Engineering, Rao Bahadur Y. Mahabaleswarappa Engineering College, Ballari, Karnataka, India
- Mohan Awasthy
- Department of Engineering and Technology, Bharati Vidyapeeth Deemed to be University, Navi Mumbai, Maharashtra, India
- Rekha Phadke
- Department of Electronics and Communication Engineering, Nitte Meenakshi Institute of Technology, Bangalore, Karnataka, India
6
Sanga P, Singh J, Dubey AK, Khanna NN, Laird JR, Faa G, Singh IM, Tsoulfas G, Kalra MK, Teji JS, Al-Maini M, Rathore V, Agarwal V, Ahluwalia P, Fouda MM, Saba L, Suri JS. DermAI 1.0: A Robust, Generalized, and Novel Attention-Enabled Ensemble-Based Transfer Learning Paradigm for Multiclass Classification of Skin Lesion Images. Diagnostics (Basel) 2023; 13:3159. [PMID: 37835902] [PMCID: PMC10573070] [DOI: 10.3390/diagnostics13193159]
Abstract
Skin lesion classification plays a crucial role in dermatology, aiding in the early detection, diagnosis, and management of life-threatening malignant lesions. However, standalone transfer learning (TL) models failed to deliver optimal performance. In this study, we present an attention-enabled ensemble-based deep learning (DL) technique, a powerful, novel, and generalized method for extracting features for the classification of skin lesions. This technique holds significant promise in enhancing diagnostic accuracy by using seven pre-trained TL models for classification. Six ensemble-based DL (EBDL) models were created using stacking, softmax voting, and weighted average techniques. Furthermore, we investigated the attention mechanism as an effective paradigm and created seven attention-enabled transfer learning (aeTL) models before branching out to construct three attention-enabled ensemble-based DL (aeEBDL) models to create a reliable, adaptive, and generalized paradigm. The mean accuracy of the TL models is 95.30%, and the use of an ensemble-based paradigm increased it by 4.22%, to 99.52%. The aeTL models' accuracy was superior to that of the TL models by 3.01%, and the aeEBDL models outperformed the aeTL models by 1.29%. Statistical tests show a significant p-value and Kappa coefficient, along with a 99.6% reliability index, for the aeEBDL models. The approach is highly effective and generalized for the classification of skin lesions.
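The softmax-voting and weighted-average ensembling named in this abstract reduce to averaging member probability outputs. A minimal sketch, with random logits standing in for the seven TL backbones (member count and shapes are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_list, weights=None):
    """Weighted average of member softmax outputs; uniform weights reproduce
    plain softmax voting."""
    probs = np.stack([softmax(l) for l in logits_list])  # (models, samples, classes)
    w = np.ones(len(logits_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=1)                # (samples, classes)

rng = np.random.default_rng(0)
members = [rng.normal(size=(5, 7)) for _ in range(3)]    # 3 stand-in models
avg = ensemble_predict(members)                          # uniform softmax voting
```

In practice the weights would come from each member's validation performance rather than being uniform.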
Affiliation(s)
- Prabhav Sanga
- Department of Information Technology, Bharati Vidyapeeth’s College of Engineering, New Delhi 110063, India
- Global Biomedical Technologies, Inc., Roseville, CA 95661, USA
- Jaskaran Singh
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Arun Kumar Dubey
- Department of Information Technology, Bharati Vidyapeeth’s College of Engineering, New Delhi 110063, India
- Narendra N. Khanna
- Department of Cardiology, Indraprastha Apollo Hospitals, New Delhi 110076, India
- John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Gavino Faa
- Department of Pathology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Inder M. Singh
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Georgios Tsoulfas
- Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece
- Mannudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Jagjit S. Teji
- Department of Pediatrics, Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
- Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
- Vijay Rathore
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Vikas Agarwal
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Puneet Ahluwalia
- Department of Uro Oncology, Medanta the Medicity, Gurugram 122001, India
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Jasjit S. Suri
- Global Biomedical Technologies, Inc., Roseville, CA 95661, USA
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Department of Computer Science and Engineering, Graphic Era University (G.E.U.), Dehradun 248002, India
7
Akram T, Junejo R, Alsuhaibani A, Rafiullah M, Akram A, Almujally NA. Precision in Dermatology: Developing an Optimal Feature Selection Framework for Skin Lesion Classification. Diagnostics (Basel) 2023; 13:2848. [PMID: 37685386] [PMCID: PMC10486423] [DOI: 10.3390/diagnostics13172848]
Abstract
Melanoma is widely recognized as one of the most lethal forms of skin cancer, with its incidence showing an upward trend in recent years. Nonetheless, the timely detection of this malignancy substantially enhances the likelihood of patients' long-term survival. Several computer-based methods have recently been proposed in the pursuit of diagnosing skin lesions at their early stages. Despite achieving some level of success, there still remains a margin of error that the machine learning community considers to be an unresolved research challenge. The primary objective of this study was to maximize the input feature information by combining multiple deep models in the first phase, and then to avoid noisy and redundant information by downsampling the feature set, using a novel evolutionary feature selection technique, in the second phase. By maintaining the integrity of the original feature space, the proposed idea generated highly discriminant feature information. Recent deep models, including Darknet53, DenseNet201, InceptionV3, and InceptionResNetV2, were employed in our study, for the purpose of feature extraction. Additionally, transfer learning was leveraged, to enhance the performance of our approach. In the subsequent phase, the extracted feature information from the chosen pre-existing models was combined, with the aim of preserving maximum information, prior to undergoing the process of feature selection, using a novel entropy-controlled gray wolf optimization (ECGWO) algorithm. The integration of fusion and selection techniques was employed, initially to incorporate the feature vector with a high level of information and, subsequently, to eliminate redundant and irrelevant feature information. The effectiveness of our concept is supported by an assessment conducted on three benchmark dermoscopic datasets: PH2, ISIC-MSK, and ISIC-UDA. In order to validate the proposed methodology, a comprehensive evaluation was conducted, including a rigorous comparison to established techniques in the field.
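The fuse-then-select pattern described here can be illustrated with a much simpler stand-in: concatenating features from several backbones, then keeping the highest-entropy columns. This filter is only shown in the spirit of the paper's selection stage; the actual ECGWO is an evolutionary (gray wolf) search, and all names and sizes below are illustrative:

```python
import numpy as np

def feature_entropy(col, bins=8):
    """Shannon entropy of one feature column, estimated from a histogram."""
    hist, _ = np.histogram(col, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_top_k(features, k):
    """Keep the k highest-entropy columns (a simple filter, not ECGWO)."""
    scores = np.array([feature_entropy(features[:, j])
                       for j in range(features.shape[1])])
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return features[:, keep], keep

rng = np.random.default_rng(0)
# Fusion step: concatenate stand-in feature vectors from four backbones.
fused = np.hstack([rng.random((50, 16)) for _ in range(4)])
reduced, kept = select_top_k(fused, k=20)
```

A real wrapper method would score feature subsets by classifier performance instead of a per-column statistic.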
Affiliation(s)
- Tallha Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Cantt Campus, Islamabad 45040, Pakistan
- Riaz Junejo
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Cantt Campus, Islamabad 45040, Pakistan
- Anas Alsuhaibani
- Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Muhammad Rafiullah
- Department of Mathematics, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan
- Adeel Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Cantt Campus, Islamabad 45040, Pakistan
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
8
Hasan MK, Ahamad MA, Yap CH, Yang G. A survey, review, and future trends of skin lesion segmentation and classification. Comput Biol Med 2023; 155:106624. [PMID: 36774890] [DOI: 10.1016/j.compbiomed.2023.106624]
Abstract
The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently indicated increasing interest in developing such CAD systems, with the intention of providing a user-friendly tool to dermatologists to reduce the challenges encountered or associated with manual inspection. This article aims to provide a comprehensive literature survey and review of a total of 594 publications (356 for skin lesion segmentation and 238 for skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the methods for the development of CAD systems. These ways include: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentations, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We intend to investigate a variety of performance-enhancing approaches, including ensemble and post-processing. We also discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems using minimal datasets, as well as the potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
Affiliation(s)
- Md Kamrul Hasan
- Department of Bioengineering, Imperial College London, UK; Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
- Md Asif Ahamad
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, UK
- Guang Yang
- National Heart and Lung Institute, Imperial College London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, UK
9
Olaniyi EO, Komolafe TE, Oyedotun OK, Oyemakinde TT, Abdelaziz M, Khashman A. Eye Melanoma Diagnosis System using Statistical Texture Feature Extraction and Soft Computing Techniques. J Biomed Phys Eng 2023; 13:77-88. [PMID: 36818006] [PMCID: PMC9923246] [DOI: 10.31661/jbpe.v0i0.2101-1268]
Abstract
BACKGROUND Eye melanoma is a malignancy that grows and develops in the tissues of the middle layer of the eyeball, resulting in dark spots in the iris, changes in the size and shape of the pupil, and changes in vision. OBJECTIVE The current study aims to diagnose eye melanoma using a gray-level co-occurrence matrix (GLCM) for texture feature extraction together with soft computing techniques, making diagnosis faster, saving time, and preventing the misdiagnosis that can result from the physician's manual approach. MATERIAL AND METHODS In this experimental study, two models are proposed for the diagnosis of eye melanoma: a backpropagation neural network (BPNN) and a radial basis function network (RBFN). The images used for training and validation were obtained from the eye-cancer database. RESULTS Based on our experiments, the proposed models achieve recognition rates of 92.31% and 94.70% for GLCM+BPNN and GLCM+RBFN, respectively. CONCLUSION Based on a comparison with other proposed models, the models used in the current study outperform them.
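The GLCM texture features this abstract relies on can be computed in a few lines. The offset, gray-level count, and descriptor choice below are illustrative, not the paper's exact configuration:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    H, W = q.shape
    M = np.zeros((levels, levels))
    for yy in range(H - dy):
        for xx in range(W - dx):
            M[q[yy, xx], q[yy + dy, xx + dx]] += 1   # count co-occurring level pairs
    return M / M.sum()

def texture_stats(P):
    """Three classic Haralick-style descriptors of a GLCM."""
    i, j = np.indices(P.shape)
    return {"contrast": float(((i - j) ** 2 * P).sum()),
            "energy": float((P ** 2).sum()),
            "homogeneity": float((P / (1.0 + np.abs(i - j))).sum())}
```

A small vector of such statistics per image would then feed the BPNN or RBFN classifier.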
Affiliation(s)
- Ebenezer Obaloluwa Olaniyi
- Center for Quantum Computational System, Department of Electrical and Electronics Engineering, Adeleke University, Osun State, Nigeria
- European Centre for Research and Academic Affairs, Lefkosa, Turkey
- Temitope Emmanuel Komolafe
- Department of Medical Imaging, Suzhou Institute of Biomedical and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Oyebade Kayode Oyedotun
- Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg
- Mohamed Abdelaziz
- Department of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Adnan Khashman
- European Centre for Research and Academic Affairs, Turkey
10
Al Shoura T, Leung H, Balaji B. An Adaptive Kernels Layer for Deep Neural Networks Based on Spectral Analysis for Image Applications. Sensors (Basel) 2023; 23:1527. [PMID: 36772565] [PMCID: PMC9921880] [DOI: 10.3390/s23031527]
Abstract
As the pixel resolution of imaging equipment has grown, image sizes and the number of pixels used to represent objects in images have increased accordingly. This exposes an issue when dealing with larger images using traditional deep learning models and methods, which typically employ mechanisms such as increasing the models' depth; while suitable for applications that need to be spatially invariant, such as image classification, this causes issues for applications that rely on the location of different features within the images, such as object localization and change detection. This paper proposes an adaptive convolutional kernels layer (AKL), an architecture that adjusts dynamically to image sizes in order to extract comparable spectral information from images of different sizes, improving the features' spatial resolution without sacrificing the local receptive field (LRF) for various image applications, specifically those that are sensitive to object and feature locations, using the definition of the Fourier transform and the relation between spectral analysis and convolution kernels. The proposed method is then tested using a Monte Carlo simulation to evaluate its performance in spectral information coverage across images of various sizes, validating its ability to maintain coverage of a ratio of the spectral domain with a variation of around 20% of the desired coverage ratio. Finally, the AKL is validated for various image applications and compared to other architectures such as Inception and VGG, demonstrating its capability to match Inception v4 in image classification applications and to outperform it as images grow larger, with up to a 30% increase in accuracy in object localization for the same number of parameters.
Affiliation(s)
- Tariq Al Shoura
- Department of Electrical and Software Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
- Henry Leung
- Department of Electrical and Software Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
- Bhashyam Balaji
- Radar Sensing and Exploitation Section, Defence Research and Development Canada, Ottawa, ON K1A 0Z4, Canada
11
GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:171. [PMID: 36672981] [PMCID: PMC9857608] [DOI: 10.3390/diagnostics13020171]
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-assisted diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images. Unfortunately, learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can reveal significant texture information that can help artificial intelligence (AI) based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool, namely GabROP, based on GW and multiple deep learning (DL) models is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these sets of images, along with the original fundus images, are used to train three convolutional neural network (CNN) models independently. Using the discrete wavelet transform (DWT), texture features retrieved from every CNN trained with the various sets of GW images are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are fused using the discrete cosine transform (DCT) to lessen the size of the features caused by the fusion process. The outcomes of GabROP show that it is accurate and efficient for ophthalmologists. Additionally, the effectiveness of GabROP is compared to recently developed ROP diagnostic techniques. Due to GabROP's superior performance compared to competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, which could result in a reduction in diagnostic effort and examination time.
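The DCT step used here to shrink fused features can be sketched with an explicit orthonormal DCT-II matrix; the sizes and function names are illustrative assumptions, not GabROP's configuration:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)        # rescale the DC row for orthonormality
    return C

def dct_compact(features, keep):
    """Transform each feature vector with the DCT and keep only the first
    `keep` coefficients, where the energy of correlated features concentrates."""
    C = dct_matrix(features.shape[1])
    return features @ C.T[:, :keep]
```

Because the transform is orthonormal, truncating coefficients discards the least possible energy for smooth, correlated feature vectors.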
|
12
|
Yue G, Wei P, Zhou T, Jiang Q, Yan W, Wang T. Toward Multicenter Skin Lesion Classification Using Deep Neural Network With Adaptively Weighted Balance Loss. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:119-131. [PMID: 36063522 DOI: 10.1109/tmi.2022.3204646] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Recently, deep neural network-based methods have shown promising advantages in accurately recognizing skin lesions from dermoscopic images. However, most existing works focus on improving the network framework for better feature representation while ignoring the data imbalance issue, limiting their flexibility and accuracy across the multiple scenarios of multi-center clinics. Different clinical centers generally have different data distributions, which places challenging requirements on the network's flexibility and accuracy. In this paper, we divert attention from framework improvement to the data imbalance issue and propose a new solution for multi-center skin lesion classification by introducing a novel adaptively weighted balance (AWB) loss into a conventional classification network. Benefiting from AWB, the proposed solution has the following advantages: 1) it can satisfy different practical requirements by changing only the backbone; 2) it is user-friendly, with no hyperparameter tuning; and 3) it adaptively enforces small intraclass compactness and pays more attention to the minority class. Extensive experiments demonstrate that, compared with solutions equipped with state-of-the-art loss functions, the proposed solution is more flexible and more competent at tackling the multi-center imbalanced skin lesion classification task, with considerable performance on two benchmark datasets. In addition, the proposed solution proves effective for the imbalanced gastrointestinal disease classification task and the imbalanced DR grading task. Code is available at https://github.com/Weipeishan2021.
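The AWB loss itself is defined in the paper; as a hedged illustration of the general direction it takes (re-weighting so the minority class contributes more, with weights derived from the data rather than hand-tuned), a plain inverse-frequency weighted cross-entropy looks like this:

```python
import numpy as np

def class_weights(labels, n_classes):
    """Weights inversely proportional to class frequency, normalised so they
    average to 1; rare classes receive larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    w = counts.sum() / np.maximum(counts, 1.0) / n_classes
    return w / w.mean()

def weighted_cross_entropy(probs, labels, weights):
    """Mean per-sample cross-entropy, each sample scaled by its class weight."""
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]  # predicted prob of true class
    return float(np.mean(-weights[labels] * np.log(picked + eps)))
```

Unlike this static scheme, the AWB loss adapts its weighting during training; the sketch only shows where such a term plugs into a standard classification objective.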
|
13
|
Attention Cost-Sensitive Deep Learning-Based Approach for Skin Cancer Detection and Classification. Cancers (Basel) 2022; 14:cancers14235872. [PMID: 36497355 PMCID: PMC9735681 DOI: 10.3390/cancers14235872] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 11/21/2022] [Accepted: 11/23/2022] [Indexed: 12/03/2022] Open
Abstract
Deep learning-based models have been employed for the detection and classification of skin diseases through medical imaging. However, such models are not effective for detecting and classifying rare skin diseases, mainly because rare diseases have very few data samples. The resulting dataset is highly imbalanced, and learning becomes biased toward the majority classes. Deep learning models are also ineffective at detecting the tiny affected portions of skin disease within the overall image. This paper presents an attention-based, cost-sensitive deep learning feature-fusion ensemble meta-classifier approach for skin cancer detection and classification. Cost weights are included in the deep learning models to handle the data imbalance during training. To effectively learn optimal features from the tiny affected portions of skin image samples, attention is integrated into the deep learning models. Features are extracted from the fine-tuned models, and their dimensionality is further reduced using kernel principal component analysis (KPCA). The reduced features of the fine-tuned deep learning models are fused and passed into ensemble meta-classifiers for skin disease detection and classification. The ensemble meta-classifier is a two-stage model: the first stage predicts the presence of skin disease, and the second stage performs classification using the first stage's predictions as features. Detailed analysis of the proposed approach is presented for both skin disease detection and classification. The proposed approach achieved 99% accuracy on skin disease detection and 99% on skin disease classification.
In all experimental settings, the proposed approach outperformed existing methods, with improvements of 4% in detection accuracy and 9% in classification accuracy. It can serve as a computer-aided diagnosis (CAD) tool for the early diagnosis of skin cancer in healthcare and medical environments, accurately detecting skin diseases and classifying each into its skin disease family.
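The KPCA reduction step admits a compact sketch. The RBF kernel and the gamma value below are illustrative assumptions; the paper does not fix these in its abstract.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Reduce features with RBF-kernel PCA: build the kernel matrix,
    double-center it in feature space, and project the samples onto the
    leading eigenvectors."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # centering
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]    # keep the largest
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                             # projected samples
```

In the paper's pipeline, `X` would hold the deep features extracted from a fine-tuned model before fusion.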
|
14
|
Umar Ibrahim A, Al-Turjman F, Ozsoz M, Serte S. Computer aided detection of tuberculosis using two classifiers. BIOMED ENG-BIOMED TE 2022; 67:513-524. [PMID: 36165698 DOI: 10.1515/bmt-2021-0310] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Accepted: 09/13/2022] [Indexed: 11/15/2022]
Abstract
OBJECTIVES Tuberculosis, caused by Mycobacterium tuberculosis, has been a major challenge for the medical and healthcare sectors in many underdeveloped countries with limited diagnostic tools. Tuberculosis can be detected from microscopic slides and chest X-rays, but given the high number of cases, this method can be tedious for both microbiologists and radiologists and can lead to misdiagnosis. The main objective of this study is to address these challenges by employing computer-aided detection (CAD) using artificial intelligence-driven models that learn features through convolution and produce output with high accuracy. METHOD We describe the automated discrimination of X-ray and microscopic slide images of tuberculosis into positive and negative cases using pretrained AlexNet models. The study employed a chest X-ray dataset made available in the Kaggle repository and microscopic slide images from both Near East University Hospital and the Kaggle repository. RESULTS For classifying tuberculosis versus healthy microscopic slides, AlexNet+Softmax achieved 98.14% accuracy and AlexNet+SVM achieved 98.73%. For classifying tuberculosis versus healthy chest X-ray images, AlexNet+Softmax achieved 98.19% accuracy and AlexNet+SVM achieved 98.38%. CONCLUSION The results outperform several studies in the current literature. Future studies will attempt to integrate the Internet of Medical Things (IoMT) to design an IoMT/AI-enabled platform for detecting tuberculosis from both X-ray and microscopic slide images.
Affiliation(s)
- Fadi Al-Turjman, Department of Artificial Intelligence, Research Center for AI and IoT, Near East University, Nicosia, Turkey
- Mehmet Ozsoz, Department of Biomedical Engineering, Near East University, Nicosia, Turkey
- Sertan Serte, Department of Electrical and Electronics Engineering, Near East University, Nicosia, Turkey
|
15
|
Fraiwan M, Faouri E. On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning. SENSORS 2022; 22:s22134963. [PMID: 35808463 PMCID: PMC9269808 DOI: 10.3390/s22134963] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/04/2022] [Revised: 06/22/2022] [Accepted: 06/29/2022] [Indexed: 12/15/2022]
Abstract
Skin cancer (melanoma and non-melanoma) is one of the most common cancer types and leads to hundreds of thousands of deaths worldwide each year. It manifests itself through abnormal growth of skin cells. Early diagnosis drastically increases the chances of recovery; moreover, it may render surgical, radiographic, or chemical therapies unnecessary or lessen their overall usage, thereby reducing healthcare costs. The process of diagnosing skin cancer starts with dermoscopy, which inspects the general shape, size, and color characteristics of skin lesions; suspected lesions then undergo further sampling and lab tests for confirmation. Image-based diagnosis has advanced greatly in recent years due to the rise of deep learning. The work in this paper examines the applicability of raw deep transfer learning in classifying images of skin lesions into seven possible categories. Using the HAM10000 dataset of dermoscopy images, a system that accepts these images as input without explicit feature extraction or preprocessing was developed using 13 deep transfer learning models. Extensive evaluation revealed the advantages and shortcomings of such a method. Although some cancer types were correctly classified with high accuracy, the imbalance of the dataset, the small number of images in some categories, and the large number of classes reduced the best overall accuracy to 82.9%.
|
16
|
Ozturk S, Cukur T. Deep Clustering via Center-Oriented Margin Free-Triplet Loss for Skin Lesion Detection in Highly Imbalanced Datasets. IEEE J Biomed Health Inform 2022; 26:4679-4690. [PMID: 35767499 DOI: 10.1109/jbhi.2022.3187215] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Melanoma is a dangerous skin cancer, yet it is curable, with dramatically higher survival rates when diagnosed at an early stage. Learning-based methods hold significant promise for the detection of melanoma from dermoscopic images. However, since melanoma is a rare disease, existing databases of skin lesions predominantly contain highly imbalanced numbers of benign versus malignant samples. In turn, this imbalance introduces substantial bias in classification models due to the statistical dominance of the majority class. To address this issue, we introduce a deep clustering approach based on the latent-space embedding of dermoscopic images. Clustering is achieved using a novel center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone. The proposed method aims to form maximally separated cluster centers rather than to minimize classification error, so it is less sensitive to class imbalance. To avoid the need for labeled data, we further propose to implement COM-Triplet based on pseudo-labels generated by a Gaussian mixture model (GMM). Comprehensive experiments show that deep clustering with the COM-Triplet loss outperforms both clustering with the triplet loss and competing classifiers in supervised and unsupervised settings.
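The exact COM-Triplet formulation is given in the paper; the schematic below captures only the stated idea, a margin-free objective that pulls an embedding toward its own (pseudo-)class center and away from the other centers. The softplus surrogate and all names here are our assumptions.

```python
import numpy as np

def center_oriented_loss(emb, labels, centers):
    """Schematic margin-free loss: pull each embedding toward its own class
    center and push it from the nearest other center. logaddexp(0, x) is a
    smooth, overflow-safe softplus with no margin hyperparameter."""
    total = 0.0
    for z, y in zip(emb, labels):
        d_pos = np.sum((z - centers[y]) ** 2)          # distance to own center
        d_neg = min(np.sum((z - c) ** 2)               # nearest other center
                    for k, c in enumerate(centers) if k != y)
        total += np.logaddexp(0.0, d_pos - d_neg)
    return total / len(emb)
```

With GMM-generated pseudo-labels, `labels` would come from the mixture assignments rather than annotations, as the abstract describes.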
|
17
|
Abstract
Healthcare is one of the crucial aspects of the Internet of Things. Connected machine learning-based systems provide faster healthcare services, and doctors and radiologists can use these systems to collaborate and better help patients. The recently emerged coronavirus (COVID-19) is known to be highly infectious. Reverse transcription-polymerase chain reaction (RT-PCR) is recognised as one of the primary diagnostic tools; however, RT-PCR tests might not be accurate. In contrast, doctors can employ artificial intelligence techniques to analyse X-ray and CT scans. Artificial intelligence methods need a large number of images, which might not be available during a pandemic. In this paper, a novel data-efficient deep network is proposed for the identification of COVID-19 on CT images. The method enlarges the small number of available CT scans by generating synthetic versions using a generative adversarial network (GAN). The parameters of the convolutional and fully connected layers of the deep networks are then estimated using synthetic and augmented data. The results show that the GAN-based deep learning model provides higher performance than classic deep learning models for COVID-19 detection. The performance evaluation is performed on the COVID19-CT and Mosmed datasets, on which the best performing models are ResNet-18 and MobileNetV2, respectively, with area under the curve values of 0.89 and 0.84.
|
18
|
Cai G, Zhu Y, Wu Y, Jiang X, Ye J, Yang D. A multimodal transformer to fuse images and metadata for skin disease classification. THE VISUAL COMPUTER 2022; 39:1-13. [PMID: 35540957 PMCID: PMC9070977 DOI: 10.1007/s00371-022-02492-4] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 04/04/2022] [Indexed: 06/14/2023]
Abstract
Skin disease cases are rising in prevalence, and the diagnosis of skin diseases is always a challenging task in the clinic. Utilizing deep learning to diagnose skin diseases could help to meet these challenges. In this study, a novel neural network is proposed for the classification of skin diseases. Since the datasets for the research consist of skin disease images and clinical metadata, we propose a novel multimodal Transformer, which consists of two encoders for both images and metadata and one decoder to fuse the multimodal information. In the proposed network, a suitable Vision Transformer (ViT) model is utilized as the backbone to extract image deep features. As for metadata, they are regarded as labels and a new Soft Label Encoder (SLE) is designed to embed them. Furthermore, in the decoder part, a novel Mutual Attention (MA) block is proposed to better fuse image features and metadata features. To evaluate the model's effectiveness, extensive experiments have been conducted on the private skin disease dataset and the benchmark dataset ISIC 2018. Compared with state-of-the-art methods, the proposed model shows better performance and represents an advancement in skin disease diagnosis.
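The paper's Mutual Attention block is its own design; a generic scaled dot-product cross-attention, in which metadata tokens query the image patch tokens, conveys the fusion idea. The shapes and token counts below are invented for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: tokens from one modality (queries)
    attend over key/value tokens from the other modality."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

# hypothetical shapes: 3 metadata tokens attend over 49 image patch tokens
rng = np.random.default_rng(0)
meta = rng.standard_normal((3, 16))
patches = rng.standard_normal((49, 16))
fused = cross_attention(meta, patches, patches)     # shape (3, 16)
```

In a full model the queries, keys, and values would each pass through learned projections; this sketch omits them to show only the fusion mechanism.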
Affiliation(s)
- Gan Cai, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Yu Zhu, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Yue Wu, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Xiaoben Jiang, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Jiongyao Ye, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Dawei Yang, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China; Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai 200032, China
|
19
|
Multiclass Skin Lesion Classification Using a Novel Lightweight Deep Learning Framework for Smart Healthcare. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12052677] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Skin lesion classification has recently attracted significant attention. Physicians regularly spend considerable time analyzing skin lesions because of the high similarity between lesion types. An automated classification system using deep learning can assist physicians in detecting the skin lesion type and enhance the patient's health. Skin lesion classification has become an active research area with the evolution of deep learning architectures. In this study, we propose a novel method using a new segmentation approach and wide-ShuffleNet for skin lesion classification. First, we calculate the entropy-based weighting and first-order cumulative moment (EW-FCM) of the skin image; these values are used to separate the lesion from the background. Then, we feed the segmentation result into a new deep learning structure, wide-ShuffleNet, to determine the skin lesion type. We evaluated the proposed method on two large datasets: HAM10000 and ISIC2019. Based on our numerical results, EW-FCM and wide-ShuffleNet achieve higher accuracy than state-of-the-art approaches. Additionally, the proposed method is extremely lightweight and suitable for small systems such as mobile healthcare devices.
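The exact EW-FCM criterion combines entropy weighting with a first-order cumulative moment; as a hedged stand-in built from the same histogram moments, an Otsu-style threshold can separate lesion from background like this (this is not the paper's method, only a related moment-based sketch):

```python
import numpy as np

def histogram_threshold(img, bins=256):
    """Otsu-style threshold from the histogram's zeroth and first cumulative
    moments; a stand-in for the paper's EW-FCM criterion."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                                 # zeroth moment
    mu = np.cumsum(p * (np.arange(bins) + 0.5) / bins)   # first moment
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                           # ignore degenerate splits
    sigma_b = (mu_t * omega - mu) ** 2 / denom           # between-class variance
    return edges[np.nanargmax(sigma_b) + 1]

def segment(img):
    """Boolean lesion mask for an intensity image scaled to [0, 1]."""
    return img > histogram_threshold(img)
```

The resulting mask would then crop or gate the lesion region before it is passed to the classifier.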
|
20
|
Nie Y, Sommella P, Carratu M, Ferro M, O'Nils M, Lundgren J. Recent Advances in Diagnosis of Skin Lesions Using Dermoscopic Images Based on Deep Learning. IEEE ACCESS 2022; 10:95716-95747. [DOI: 10.1109/access.2022.3199613] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
Affiliation(s)
- Yali Nie, Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
- Paolo Sommella, Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Marco Carratu, Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Matteo Ferro, Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Mattias O'Nils, Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
- Jan Lundgren, Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
|
21
|
Hasan MK, Elahi MTE, Alam MA, Jawad MT, Martí R. DermoExpert: Skin lesion classification using a hybrid convolutional neural network through segmentation, transfer learning, and augmentation. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2021.100819] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
|
22
|
A Dermoscopic Skin Lesion Classification Technique Using YOLO-CNN and Traditional Feature Model. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2021. [DOI: 10.1007/s13369-021-05571-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
23
|
Maniraj SP, Sardarmaran P. Classification of dermoscopic images using soft computing techniques. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05998-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
24
|
Sun MD, Halpern AC. Advances in the Etiology, Detection, and Clinical Management of Seborrheic Keratoses. Dermatology 2021; 238:205-217. [PMID: 34311463 DOI: 10.1159/000517070] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Accepted: 05/06/2021] [Indexed: 11/19/2022] Open
Abstract
Seborrheic keratoses (SKs) are ubiquitous, generally benign skin tumors that exhibit high clinical variability. While age is a known risk factor, the precise roles of UV exposure and immune abnormalities are currently unclear. The underlying mechanisms of this benign disorder are paradoxically driven by oncogenic mutations and may have profound implications for our understanding of the malignant state. Advances in molecular pathogenesis suggest that inhibition of Akt and APP, as well as existing treatments for skin cancer, may have therapeutic potential in SK. Dermoscopic criteria have also become increasingly important to the accurate detection of SK, and other noninvasive diagnostic methods, such as reflectance confocal microscopy and optical coherence tomography, are rapidly developing. Given their ability to mimic malignant tumors, SK cases are often used to train artificial intelligence-based algorithms in the computerized detection of skin disease. These technologies are becoming increasingly accurate and have the potential to significantly augment clinical practice. Current treatment options for SK cause discomfort and can lead to adverse post-treatment effects, especially in skin of color. In light of the discontinuation of ESKATA in late 2019, promising alternatives, such as nitric-zinc and trichloroacetic acid topicals, should be further developed. There is also a need for larger, head-to-head trials of emerging laser therapies to ensure that future treatment standards address diverse patient needs.
Affiliation(s)
- Mary D Sun, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Allan C Halpern, Dermatology Service, Memorial Sloan Kettering, New York, New York, USA
|
25
|
Hasan MK, Roy S, Mondal C, Alam MA, E Elahi MT, Dutta A, Uddin Raju ST, Jawad MT, Ahmad M. Dermo-DOCTOR: A framework for concurrent skin lesion detection and recognition using a deep convolutional neural network with end-to-end dual encoders. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102661] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
|
26
|
Almezhghwi K, Serte S, Al-Turjman F. Convolutional neural networks for the classification of chest X-rays in the IoT era. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:29051-29065. [PMID: 34155434 PMCID: PMC8210525 DOI: 10.1007/s11042-021-10907-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Revised: 03/19/2021] [Accepted: 04/01/2021] [Indexed: 05/08/2023]
Abstract
Chest X-ray imaging technology allows the diagnosis of many lung diseases. It is frequently used in hospitals and is among the most accurate ways of detecting most thorax diseases. Radiologists examine these images to identify lung diseases; however, this process can take time, whereas an automated artificial intelligence system could help radiologists detect lung diseases more accurately and faster. We therefore propose two artificial intelligence approaches for processing and identifying chest X-ray images to detect chest diseases: support vector machines based on the AlexNet model, and support vector machines based on the VGGNet16 model. Combining deep networks with a robust classifier, the proposed methods outperform the plain AlexNet and VGG16 deep learning approaches on chest X-ray classification tasks. The proposed AlexNet- and VGGNet-based SVMs provide average area under the curve values of 98% and 97%, respectively, for twelve chest X-ray diseases.
Affiliation(s)
- Khaled Almezhghwi, Electrical and Electronic Engineering, College of Electronic Technology, Tripoli, Libya
- Sertan Serte, Electrical and Electronic Engineering, Near East University, Nicosia, North Cyprus via Mersin 10, Turkey
- Fadi Al-Turjman, Artificial Intelligence Department and Research Center for AI and IoT, Near East University, Nicosia, North Cyprus via Mersin 10, Turkey
|
27
|
Serte S, Demirel H. Deep learning for diagnosis of COVID-19 using 3D CT scans. Comput Biol Med 2021; 132:104306. [PMID: 33780867 PMCID: PMC7943389 DOI: 10.1016/j.compbiomed.2021.104306] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 02/27/2021] [Accepted: 02/27/2021] [Indexed: 12/16/2022]
Abstract
A new pneumonia-type coronavirus, COVID-19, recently emerged in Wuhan, China, and has subsequently infected many people and caused many deaths worldwide. Isolating infected people is one method of preventing the spread of the virus. CT scans provide detailed imaging of the lungs and assist radiologists in diagnosing COVID-19 in hospitals. However, a person's CT scan contains hundreds of slices, and diagnosing COVID-19 from such scans can lead to delays in hospitals. Artificial intelligence techniques could assist radiologists in rapidly and accurately detecting COVID-19 infection from these scans. This paper proposes an artificial intelligence (AI) approach to classify COVID-19 and normal CT volumes. The proposed AI method uses the ResNet-50 deep learning model to predict COVID-19 on each image of a 3D CT scan, and then fuses the image-level predictions to diagnose COVID-19 on the 3D CT volume. We show that the proposed deep learning model provides a 96% AUC value for detecting COVID-19 on CT scans.
Affiliation(s)
- Sertan Serte, Department of Electrical and Electronic Engineering, Near East University, Nicosia, North Cyprus via Mersin 10, Turkey
- Hasan Demirel, Department of Electrical and Electronic Engineering, Near East University, Nicosia, North Cyprus via Mersin 10, Turkey
|
28
|
Turki T, Taguchi YH. Discriminating the single-cell gene regulatory networks of human pancreatic islets: A novel deep learning application. Comput Biol Med 2021; 132:104257. [PMID: 33740535 DOI: 10.1016/j.compbiomed.2021.104257] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 02/01/2021] [Accepted: 02/03/2021] [Indexed: 12/24/2022]
Abstract
Analysis of single-cell pancreatic data can play an important role in understanding various metabolic diseases and health conditions. Due to the sparsity and noise present in such single-cell gene expression data, inference of single-cell gene regulatory networks (SCGRNs) remains a challenge. Since recent studies have reported reliable inference of SCGRNs, the current study focused on discriminating the SCGRNs of type 2 diabetes (T2D) patients from those of healthy controls. By accurately distinguishing the SCGRNs of healthy pancreas from those of T2D pancreas, it would be possible to annotate, organize, visualize, and identify common patterns of SCGRNs in metabolic diseases. Such annotated SCGRNs could play an important role in accelerating the process of building large data repositories. This study aimed to contribute to the development of a novel deep learning (DL) application. First, we generated a dataset consisting of 224 SCGRNs belonging to both T2D and healthy pancreas and made it freely available. Next, we chose seven DL architectures, including VGG16, VGG19, Xception, ResNet50, ResNet101, DenseNet121, and DenseNet169, trained each of them on the dataset, and evaluated their predictions on a test set. Of note, we evaluated the DL architectures on a single NVIDIA GeForce RTX 2080Ti GPU. Experimental results on the whole dataset, using several performance measures, demonstrated the superiority of the VGG19 DL model in the automatic classification of SCGRNs derived from single-cell pancreatic data.
Affiliation(s)
- Turki Turki, Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Y-H Taguchi, Department of Physics, Chuo University, Tokyo 112-8551, Japan
|
29
|
Sevli O. A deep convolutional neural network-based pigmented skin lesion classification application and experts evaluation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05929-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
30
|
Mahbod A, Schaefer G, Wang C, Dorffner G, Ecker R, Ellinger I. Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 193:105475. [PMID: 32268255 DOI: 10.1016/j.cmpb.2020.105475] [Citation(s) in RCA: 87] [Impact Index Per Article: 17.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Revised: 02/15/2020] [Accepted: 03/20/2020] [Indexed: 05/27/2023]
Abstract
BACKGROUND AND OBJECTIVE Skin cancer is among the most common cancer types in the white population and consequently computer aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach for this uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed image resolution and these training images are usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This however may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. METHODS We investigate the effect of image size for skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450. The resulting classification performance of three well established CNNs, namely EfficientNetB0, EfficientNetB1 and SeReNeXt-50 is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images of various scales. RESULTS Our results show that image cropping is a better strategy compared to image resizing delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. 
On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2% making it the currently second ranked algorithm on the live leaderboard. CONCLUSIONS We confirm that the image size has an effect on skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance compared to image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
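At its simplest, the described three-level fusion reduces to averaging class probabilities across crop scales and then across networks; the sketch below shows that skeleton (the full MSM-CNN ensemble strategy in the paper is more elaborate):

```python
import numpy as np

def ensemble_predict(probs):
    """Fuse a (networks, scales, classes) stack of class probabilities:
    first average over the crop scales of each CNN, then average over the
    CNNs, and renormalise to a distribution."""
    per_network = probs.mean(axis=1)   # fuse the six scales per network
    fused = per_network.mean(axis=0)   # fuse the three networks
    return fused / fused.sum()
```

For the setting in the abstract, `probs` would have shape (3, 6, C): three fine-tuned CNNs, six crop scales, C lesion classes.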
Collapse
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria; Research and Development Department of TissueGnostics GmbH, Vienna, Austria.
- Gerald Schaefer
- Department of Computer Science, Loughborough University, Loughborough, United Kingdom.
- Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden.
- Georg Dorffner
- Section for Artificial Intelligence and Decision Support, Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Vienna, Austria.
- Rupert Ecker
- Research and Development Department of TissueGnostics GmbH, Vienna, Austria.
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria.
31
|
Pacheco AGC, Krohling RA. The impact of patient clinical information on automated skin cancer detection. Comput Biol Med 2019; 116:103545. [PMID: 31760271 DOI: 10.1016/j.compbiomed.2019.103545] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 11/14/2019] [Accepted: 11/14/2019] [Indexed: 01/08/2023]
Abstract
Skin cancer is one of the most common types of cancer worldwide. Over the past few years, different approaches have been proposed for automated skin cancer detection. Nonetheless, most of them are based only on dermoscopic images and do not take into account patient clinical information, an important clue towards clinical diagnosis. In this work, we present an approach to fill this gap. First, we introduce a new dataset composed of clinical images, collected using smartphones, and clinical data related to the patient. Next, we propose a straightforward method that includes an aggregation mechanism in well-known deep learning models to combine features from images and clinical data. Last, we carry out experiments to compare the models' performance with and without this mechanism. The results show an improvement of approximately 7% in balanced accuracy when the aggregation method is applied. Overall, the impact of clinical data on model performance is significant and shows the importance of including these features in automated skin cancer detection.
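The aggregation mechanism described in the abstract above can be sketched minimally as feature concatenation: encoded clinical attributes are appended to the CNN image-feature vector before the final classifier. This is a hypothetical sketch, not the authors' implementation; the clinical fields, encodings, and feature values below are all assumptions for illustration.

```python
# Minimal sketch of combining image features with patient clinical data.
# Field names and encodings are hypothetical, not taken from the paper.

def encode_clinical(age, gender, lesion_region):
    """Toy encoding of clinical attributes: scaled age plus one-hot vectors."""
    gender_vec = [1.0, 0.0] if gender == "female" else [0.0, 1.0]
    region_vocab = ["arm", "face", "back"]  # assumed small vocabulary
    region_vec = [1.0 if lesion_region == r else 0.0 for r in region_vocab]
    return [age / 100.0] + gender_vec + region_vec

def aggregate(image_features, clinical_features):
    """Concatenate the two feature blocks into one classifier input vector."""
    return list(image_features) + list(clinical_features)

img_feats = [0.12, 0.55, 0.33]            # stand-in for CNN image features
clin = encode_clinical(62, "female", "face")
combined = aggregate(img_feats, clin)      # fed to the final classification layer
```

In a real model the concatenated vector would feed a trainable dense layer; the point of the sketch is only that the clinical metadata enters the network alongside, not instead of, the image representation.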
Affiliation(s)
- Andre G C Pacheco
- Graduate Program in Computer Science, PPGI, UFES - Federal University of Espírito Santo, Av. Fernando Ferrari 514, Vitória CEP: 29060-270, Brazil.
- Renato A Krohling
- Graduate Program in Computer Science, PPGI, UFES - Federal University of Espírito Santo, Av. Fernando Ferrari 514, Vitória CEP: 29060-270, Brazil; Production Engineering Department, UFES - Federal University of Espírito Santo, Av. Fernando Ferrari 514, Vitória CEP: 29060-270, Brazil.