1. Wang H, Ahn E, Bi L, Kim J. Self-supervised multi-modality learning for multi-label skin lesion classification. Comput Methods Programs Biomed 2025;265:108729. [PMID: 40184849] [DOI: 10.1016/j.cmpb.2025.108729]
Abstract
BACKGROUND The clinical diagnosis of skin lesions involves the analysis of dermoscopic and clinical modalities. Dermoscopic images provide detailed views of surface structures, while clinical images offer complementary macroscopic information. Clinicians frequently use the seven-point checklist as an auxiliary tool for melanoma diagnosis and identifying lesion attributes. Supervised deep learning approaches, such as convolutional neural networks, have performed well using dermoscopic and clinical modalities (multi-modality) and further enhanced classification by predicting seven skin lesion attributes (multi-label). However, the performance of these approaches relies on the availability of large-scale labeled data, which are costly and time-consuming to obtain, especially when annotating multiple attributes. METHODS To reduce the dependency on large labeled datasets, we propose a self-supervised learning (SSL) algorithm for multi-modality multi-label skin lesion classification. Compared with single-modality SSL, our algorithm enables multi-modality SSL by maximizing the similarities between paired dermoscopic and clinical images from different views. We introduce a novel multi-modal and multi-label SSL strategy that generates surrogate pseudo-multi-labels for seven skin lesion attributes through clustering analysis. A label-relation-aware module is proposed to refine each pseudo-label embedding, capturing the interrelationships between pseudo-multi-labels. We further illustrate the interrelationships of skin lesion attributes and their relationships with clinical diagnoses using an attention visualization technique. RESULTS The proposed algorithm was validated using the well-benchmarked seven-point skin lesion dataset. Our results demonstrate that our method outperforms state-of-the-art SSL counterparts. Improvements in the area under the receiver operating characteristic curve, precision, sensitivity, and specificity were observed across various lesion attributes and melanoma diagnoses. CONCLUSIONS Our self-supervised learning algorithm offers a robust and efficient solution for multi-modality multi-label skin lesion classification, reducing the reliance on large-scale labeled data. By effectively capturing and leveraging the complementary information between dermoscopic and clinical images and the interrelationships between lesion attributes, our approach holds potential for improving clinical diagnostic accuracy in dermatology.
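A minimal sketch of the kind of paired-modality objective described, assuming two encoder outputs for the same lesion (the symmetric InfoNCE form here is a common choice, not necessarily the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def multimodal_nt_xent(z_derm: torch.Tensor, z_clin: torch.Tensor, tau: float = 0.1):
    """z_derm, z_clin: (N, D) embeddings of paired dermoscopic/clinical views."""
    z_derm = F.normalize(z_derm, dim=1)
    z_clin = F.normalize(z_clin, dim=1)
    logits = z_derm @ z_clin.t() / tau                 # (N, N) cross-modal similarities
    targets = torch.arange(z_derm.size(0), device=z_derm.device)  # i-th pair is the positive
    # Symmetric InfoNCE: dermoscopic->clinical and clinical->dermoscopic directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```

The surrogate pseudo-multi-labels for the seven attributes could then come from clustering these embeddings, e.g., scikit-learn's KMeans over the concatenated modality features.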
Affiliation(s)
- Hao Wang
- School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia; Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China.
- Euijoon Ahn
- College of Science and Engineering, James Cook University, Cairns, QLD 4870, Australia.
- Lei Bi
- Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China.
- Jinman Kim
- School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia.
2. Khatun R, Chatterjee S, Bert C, Wadepohl M, Ott OJ, Semrau S, Fietkau R, Nürnberger A, Gaipl US, Frey B. Complex-valued neural networks to speed-up MR thermometry during hyperthermia using Fourier PD and PDUNet. Sci Rep 2025;15:11765. [PMID: 40189690] [PMCID: PMC11973158] [DOI: 10.1038/s41598-025-96071-x]
Abstract
Hyperthermia (HT) in combination with radio- and/or chemotherapy has become an accepted cancer treatment for distinct solid tumour entities. In HT, tumour tissue is exogenously heated to temperatures between 39 and 43 °C for 60 min. Temperature monitoring can be performed non-invasively using dynamic magnetic resonance imaging (MRI). However, the slow nature of MRI leads to motion artefacts in the images due to the movement of patients during image acquisition. By discarding parts of the data, the speed of the acquisition can be increased - known as undersampling. However, because the Nyquist criterion is then violated, the acquired images might be blurry and can also contain aliasing artefacts. The aim of this work was, therefore, to reconstruct highly undersampled MR thermometry acquisitions with better resolution and fewer artefacts than conventional methods. The use of deep learning in the medical field has grown in recent times, and various studies have shown that deep learning has the potential to solve inverse problems such as MR image reconstruction. However, most published work focuses only on the magnitude images and ignores the phase images, which are fundamental for MR thermometry. This work, for the first time, presents deep learning-based solutions for reconstructing undersampled MR thermometry data. Two deep learning models were employed, the Fourier Primal-Dual network and the Fourier Primal-Dual UNet, to reconstruct highly undersampled complex images for MR thermometry. MR images of 44 patients with different sarcoma types who received HT treatment in combination with radiotherapy and/or chemotherapy were used in this study. The method reduced the temperature difference between the undersampled and fully sampled MRIs from 1.3 to 0.6 °C over the full volume and from 0.49 to 0.06 °C in the tumour region for a theoretical acceleration factor of 10.
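The abstract stresses that phase images are fundamental for MR thermometry; the sketch below shows why, computing a temperature-change map from a phase difference via the standard proton resonance frequency (PRF) shift relation (field strength, echo time, and the thermal coefficient are illustrative defaults, not values from the paper):

```python
import numpy as np

GAMMA_HZ_PER_T = 42.576e6      # gyromagnetic ratio of 1H
ALPHA_PPM_PER_C = -0.01        # PRF thermal coefficient, ~ -0.01 ppm/degC

def prf_temperature_change(phase_t, phase_ref, b0=1.5, te=0.005):
    """Temperature change map (degC) from phase images (rad) at echo time te (s)."""
    dphi = np.angle(np.exp(1j * (phase_t - phase_ref)))   # wrap difference to (-pi, pi]
    return dphi / (2 * np.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6 * b0 * te)
```

Reconstruction errors in the phase therefore translate directly into temperature errors, which is why magnitude-only deep learning reconstructions are insufficient here.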
Affiliation(s)
- Rupali Khatun
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Soumick Chatterjee
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany.
- Genomics Research Centre, Human Technopole, Milan, Italy.
- Christoph Bert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Oliver J Ott
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Sabine Semrau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Rainer Fietkau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Andreas Nürnberger
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Udo S Gaipl
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
- Benjamin Frey
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Centre Erlangen-EMN, Erlangen, Germany
3. Zuo L, Wang Z, Wang Y. A multi-stage multi-modal learning algorithm with adaptive multimodal fusion for improving multi-label skin lesion classification. Artif Intell Med 2025;162:103091. [PMID: 40015211] [DOI: 10.1016/j.artmed.2025.103091]
Abstract
Skin cancer is frequently occurring and has become a major contributor to both cancer incidence and mortality. Accurate and timely diagnosis of skin cancer holds the potential to save lives. Deep learning-based methods have demonstrated significant advancements in the screening of skin cancers. However, most current approaches rely on a single modality input for diagnosis, thereby missing out on valuable complementary information that could enhance accuracy. Although some multimodal-based methods exist, they often lack adaptability and fail to fully leverage multimodal information. In this paper, we introduce a novel uncertainty-based hybrid fusion strategy for a multi-modal learning algorithm aimed at skin cancer diagnosis. Our approach specifically combines three different modalities: clinical images, dermoscopy images, and metadata, to make the final classification. For the fusion of the two image modalities, we employ an intermediate fusion strategy that considers the similarity between clinical and dermoscopy images to extract features containing both complementary and correlated information. To capture the correlated information, we utilize cosine similarity, and we employ concatenation as the means for integrating complementary information. In the fusion of image and metadata modalities, we leverage uncertainty to obtain confident late fusion results, allowing our method to adaptively combine the information from different modalities. We conducted comprehensive experiments using a popular publicly available skin disease diagnosis dataset, and the results of these experiments demonstrate the effectiveness of our proposed method. Our proposed fusion algorithm could enhance the clinical applicability of automated skin lesion classification, offering a more robust and adaptive way to make automatic diagnoses with the help of an uncertainty mechanism. Code is available at https://github.com/Zuo-Lihan/CosCatNet-Adaptive_Fusion_Algorithm.
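A minimal sketch of entropy-based adaptive late fusion between the image and metadata branches (the function name and the use of entropy as the uncertainty score are illustrative assumptions; the paper's uncertainty estimate may differ):

```python
import torch

def uncertainty_weighted_fusion(p_img, p_meta, eps=1e-8):
    """p_img, p_meta: (N, C) class probabilities from the two branches."""
    h_img = -(p_img * (p_img + eps).log()).sum(1, keepdim=True)    # predictive entropy
    h_meta = -(p_meta * (p_meta + eps).log()).sum(1, keepdim=True)
    # Lower entropy (higher confidence) earns the larger fusion weight.
    w = torch.softmax(torch.cat([-h_img, -h_meta], dim=1), dim=1)
    return w[:, :1] * p_img + w[:, 1:] * p_meta
```

The intermediate image-image fusion the abstract mentions would sit earlier in the network, combining cosine-similarity-aligned (correlated) features with concatenated (complementary) ones before this late-fusion step.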
Affiliation(s)
- Lihan Zuo
- School of Computer and Artificial Intelligence, Southwest Jiaotong University, Chengdu 610000, PR China
- Zizhou Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Yan Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore.
4. Nazari S, Garcia R. Going Smaller: Attention-based models for automated melanoma diagnosis. Comput Biol Med 2025;185:109492. [PMID: 39637458] [DOI: 10.1016/j.compbiomed.2024.109492]
Abstract
Computational approaches offer a valuable tool to aid the early diagnosis of melanoma by increasing both the speed and accuracy of doctors' decisions. The latest and best-performing approaches often rely on large ensemble models, with the number of trained parameters exceeding 600 million. However, this large parameter count presents considerable challenges in terms of computational demands and practical application. Addressing this gap, our work introduces a suite of attention-based convolutional neural network (CNN) architectures tailored to the nuanced classification of melanoma. These models, founded on the EfficientNet-B3 backbone, are characterized by their significantly reduced size. This study highlights the feasibility of deploying powerful yet compact diagnostic models in practical settings, such as smartphone-based dermoscopy, thereby advancing point-of-care diagnostics and extending the reach of advanced medical technologies to remote and under-resourced areas. It presents a comparative analysis of these models with the top three prize winners of the International Skin Imaging Collaboration (ISIC) 2020 challenge using two independent test sets. Our architectures outperformed the second- and third-placed winners and achieved results comparable to those of the first-placed winner. These models strike a delicate balance between efficiency and accuracy, holding their ground against larger models in performance metrics while using up to 98% fewer parameters, showcasing their potential for real-time application in resource-limited environments.
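A sketch of how a compact attention head might sit on an EfficientNet-B3 backbone, as the abstract describes (the spatial gate and head design are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b3, EfficientNet_B3_Weights

class AttnEffNetB3(nn.Module):
    def __init__(self, n_classes=2, c=1536):         # B3's final feature width is 1536
        super().__init__()
        self.backbone = efficientnet_b3(weights=EfficientNet_B3_Weights.DEFAULT).features
        self.attn = nn.Sequential(nn.Conv2d(c, 1, kernel_size=1), nn.Sigmoid())
        self.head = nn.Linear(c, n_classes)

    def forward(self, x):
        f = self.backbone(x)                          # (N, 1536, H', W')
        a = self.attn(f)                              # (N, 1, H', W') spatial attention
        f = (f * a).mean(dim=(2, 3))                  # attention-weighted pooling
        return self.head(f)
```

A model of this shape stays in the tens of millions of parameters, which is the regime the paper targets relative to 600M-parameter ensembles.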
Affiliation(s)
- Sana Nazari
- Computer Vision and Robotics Group, University of Girona, Plaça de Sant Domènec, 3, Girona, 17004, Spain.
- Rafael Garcia
- Computer Vision and Robotics Group, University of Girona, Plaça de Sant Domènec, 3, Girona, 17004, Spain
5. Zhu H, Liu W, Gao Z, Zhang H. Explainable Classification of Benign-Malignant Pulmonary Nodules With Neural Networks and Information Bottleneck. IEEE Trans Neural Netw Learn Syst 2025;36:2028-2039. [PMID: 37843998] [DOI: 10.1109/tnnls.2023.3303395]
Abstract
Computed tomography (CT) is the primary clinical technique for differentiating benign and malignant pulmonary nodules in lung cancer diagnosis. Early classification of pulmonary nodules is essential to slow down the degenerative process and reduce mortality. The interactive paradigm assisted by neural networks is considered an effective means for early lung cancer screening in large populations. However, some inherent characteristics of pulmonary nodules in high-resolution CT images, e.g., diverse shapes and sparse distribution over the lung fields, tend to induce inaccurate results. On the other hand, most existing neural-network methods are unsatisfactory because they lack transparency. To overcome these obstacles, a united framework is proposed, comprising classification and feature visualization stages, to learn distinctive features and provide visual results. Specifically, a bilateral scheme is employed to synchronously extract and aggregate global-local features in the classification stage, where the global branch is constructed to perceive deep-level features and the local branch is built to focus on refined details. Furthermore, an encoder is built to generate features and a decoder is constructed to simulate decision behavior, followed by an information bottleneck viewpoint to optimize the objective. Extensive experiments are performed to evaluate our framework on two publicly available datasets: 1) the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and 2) the Lung and Colon Histopathological Image Dataset (LC25000). For instance, our framework achieves 92.98% accuracy and presents additional visualizations on the LIDC. The experimental results show that our framework obtains outstanding performance and effectively facilitates explainability. They also demonstrate that this united framework is a serviceable tool with the scalability to be introduced into clinical research.
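For readers unfamiliar with the information bottleneck objective mentioned, a generic variational form looks like this (a sketch under standard VIB assumptions, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample the bottleneck code z ~ N(mu, exp(logvar)) differentiably."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def vib_loss(logits, targets, mu, logvar, beta=1e-3):
    """Cross-entropy keeps z predictive; the KL term compresses it."""
    ce = F.cross_entropy(logits, targets)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return ce + beta * kl
```

The compression term is what encourages the bottleneck code to retain only task-relevant evidence, which in turn supports the visualization stage.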
6. Feng Y, Ma B, Liu D, Zhang Y, Cai W, Xia Y. Contrastive Neuron Pruning for Backdoor Defense. IEEE Trans Image Process 2025;34:1234-1245. [PMID: 40031528] [DOI: 10.1109/tip.2025.3539466]
Abstract
Recent studies have revealed that deep neural networks (DNNs) are susceptible to backdoor attacks, in which attackers insert a pre-defined backdoor into a DNN model by poisoning a few training samples. A small subset of neurons in a DNN is responsible for activating this backdoor, and pruning these backdoor-associated neurons has been shown to mitigate the impact of such attacks. Current neuron pruning techniques often face challenges in accurately identifying these critical neurons, and they typically depend on the availability of labeled clean data, which is not always feasible. To address these challenges, we propose a novel defense strategy called Contrastive Neuron Pruning (CNP). This approach is based on the observation that poisoned samples tend to cluster together and are distinguishable from benign samples in the feature space of a backdoored model. Given a backdoored model, we initially apply a reversed trigger to benign samples, generating multiple positive (benign-benign) and negative (benign-poisoned) feature pairs from the backdoored model. We then employ contrastive learning on these pairs to improve the separation between benign and poisoned features. Subsequently, we identify and prune neurons in the Batch Normalization layers that show significant response differences to the generated pairs. By removing these backdoor-associated neurons, CNP effectively defends against backdoor attacks while requiring the pruning of only about 1% of the total neurons. Comprehensive experiments conducted on various benchmarks validate the efficacy of CNP, demonstrating its robustness and effectiveness in mitigating backdoor attacks compared to existing methods.
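A simplified sketch of the pruning step: compare per-channel BatchNorm responses on benign versus trigger-applied inputs and silence the most divergent channels (helper names and the response statistic are illustrative; CNP's contrastive refinement is omitted):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_suspicious_bn(model: nn.Module, benign, triggered, ratio=0.01):
    acts = {}

    def make_hook(name):
        def hook(_, __, out):
            acts[name] = out.detach().abs().mean(dim=(0, 2, 3))  # per-channel response
        return hook

    bns = {n: m for n, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
    hooks = [m.register_forward_hook(make_hook(n)) for n, m in bns.items()]
    model(benign)
    benign_acts = {n: a.clone() for n, a in acts.items()}
    model(triggered)                       # overwrites acts with triggered responses
    for h in hooks:
        h.remove()
    for n, m in bns.items():
        diff = (acts[n] - benign_acts[n]).abs()
        k = max(1, int(ratio * diff.numel()))
        idx = diff.topk(k).indices
        m.weight[idx] = 0.0                # silence the suspected backdoor channels
        m.bias[idx] = 0.0
```

The ~1% figure from the abstract corresponds to the `ratio` argument here.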
7. Xiao C, Zhu A, Xia C, Qiu Z, Liu Y, Zhao C, Ren W, Wang L, Dong L, Wang T, Guo L, Lei B. Attention-Guided Learning With Feature Reconstruction for Skin Lesion Diagnosis Using Clinical and Ultrasound Images. IEEE Trans Med Imaging 2025;44:543-555. [PMID: 39208042] [DOI: 10.1109/tmi.2024.3450682]
Abstract
Skin lesions are among the most common diseases, and many categories are highly similar in morphology and appearance. Deep learning models effectively reduce the variability between and within classes and improve diagnostic accuracy. However, existing multi-modal methods are limited to the surface information of lesions in clinical and dermatoscopic modalities, which hinders further improvement of skin lesion diagnostic accuracy. This motivates us to further study the depth information of lesions available in skin ultrasound. In this paper, we propose a novel skin lesion diagnosis network that combines clinical and ultrasound modalities to fuse the surface and depth information of the lesion and improve diagnostic accuracy. Specifically, we propose an attention-guided learning (AL) module that fuses clinical and ultrasound modalities from both local and global perspectives to enhance feature representation. The AL module consists of two parts: attention-guided local learning (ALL) computes the intra-modality and inter-modality correlations to fuse multi-scale information, making the network focus on the local information of each modality, and attention-guided global learning (AGL) fuses global information to further enhance the feature representation. In addition, we propose a feature reconstruction learning (FRL) strategy that encourages the network to extract more discriminative features and corrects the focus of the network to enhance the model's robustness and certainty. We conduct extensive experiments and the results confirm the superiority of our proposed method. Our code is available at: https://github.com/XCL-hub/AGFnet.
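Cross-modal attention between flattened clinical and ultrasound feature tokens could be sketched as follows (an illustrative stand-in for the AL module, using PyTorch's built-in multi-head attention; dimensions are assumptions):

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.clin_to_us = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.us_to_clin = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, clin_tokens, us_tokens):
        """clin_tokens, us_tokens: (N, L, dim) flattened feature maps."""
        clin_enriched, _ = self.clin_to_us(clin_tokens, us_tokens, us_tokens)
        us_enriched, _ = self.us_to_clin(us_tokens, clin_tokens, clin_tokens)
        # Fuse surface (clinical) and depth (ultrasound) information.
        return torch.cat([clin_enriched.mean(1), us_enriched.mean(1)], dim=-1)
```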
8. Ray A, Sarkar S, Schwenker F, Sarkar R. Decoding skin cancer classification: perspectives, insights, and advances through researchers' lens. Sci Rep 2024;14:30542. [PMID: 39695157] [DOI: 10.1038/s41598-024-81961-3]
Abstract
Skin cancer is a significant global health concern, with timely and accurate diagnosis playing a critical role in improving patient outcomes. In recent years, computer-aided diagnosis systems have emerged as powerful tools for automated skin cancer classification, revolutionizing the field of dermatology. This survey analyzes 107 research papers published over the last 18 years, providing a thorough evaluation of advancements in classification techniques, with a focus on the growing integration of computer vision and artificial intelligence (AI) in enhancing diagnostic accuracy and reliability. The paper begins by presenting an overview of the fundamental concepts of skin cancer, addressing underlying challenges in accurate classification, and highlighting the limitations of traditional diagnostic methods. Extensive examination is devoted to a range of datasets, including the HAM10000 and the ISIC archive, among others, commonly employed by researchers. The exploration then delves into machine learning techniques coupled with handcrafted features, emphasizing their inherent limitations. Subsequent sections provide a comprehensive investigation into deep learning-based approaches, encompassing convolutional neural networks, transfer learning, attention mechanisms, ensemble techniques, generative adversarial networks, vision transformers, and segmentation-guided classification strategies, detailing various architectures tailored for skin lesion analysis. The survey also sheds light on the various hybrid and multimodal techniques employed for classification. By critically analyzing each approach and highlighting its limitations, this survey provides researchers with valuable insights into the latest advancements, trends, and gaps in skin cancer classification. Moreover, it offers clinicians practical knowledge on the integration of AI tools to enhance diagnostic decision-making processes. This comprehensive analysis aims to bridge the gap between research and clinical practice, serving as a guide for the AI community to further advance the state-of-the-art in skin cancer classification systems.
Affiliation(s)
- Amartya Ray
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Sujan Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89081, Ulm, Germany.
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
9. Quan X, Ou X, Gao L, Yin W, Hou G, Zhang H. SCINet: A Segmentation and Classification Interaction CNN Method for Arteriosclerotic Retinopathy Grading. Interdiscip Sci 2024;16:926-935. [PMID: 39222258] [DOI: 10.1007/s12539-024-00650-x]
Abstract
Cardiovascular and cerebrovascular diseases are common and pose a serious threat to human health. Even with advanced and comprehensive treatment methods, there is still a high mortality rate. Because arteriosclerosis is an important factor reflecting the severity of cardiovascular and cerebrovascular diseases, it is imperative to detect arteriosclerotic retinopathy. However, detecting arteriosclerotic retinopathy requires expensive and time-consuming manual evaluation, while end-to-end deep learning detection methods also need interpretable designs that highlight task-related features. Considering the importance of automatic arteriosclerotic retinopathy grading, we propose a segmentation and classification interaction network (SCINet), built on a segmentation and classification interaction architecture. After IterNet segments the retinal vessels from the original fundus images, the backbone feature extractor roughly extracts features from the segmented and original fundus arteriosclerosis images and further enhances them through the vessel aware module. The final classifier module generates the fundus arteriosclerosis grading results. Specifically, the vessel aware module is designed to highlight the important areal vessel features segmented from the original images through an attention mechanism, thereby achieving information interaction. Under the proposed interactive architecture, the attention mechanism selectively learns the vessel features from the segmented region information, reweighting the extracted features and enhancing significant feature information. Extensive experiments have confirmed the effectiveness of our model. SCINet achieves the best performance on the task of arteriosclerotic retinopathy grading. Additionally, the CNN method is scalable to similar tasks by incorporating segmented images as auxiliary information.
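A minimal sketch of vessel-aware feature reweighting, where the segmented vessel map gates the backbone features (module and parameter names are hypothetical, not SCINet's exact design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VesselAwareGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(1, channels, kernel_size=1)

    def forward(self, feats, vessel_mask):
        """feats: (N, C, H, W) backbone features; vessel_mask: (N, 1, h, w) segmentation."""
        m = F.interpolate(vessel_mask, size=feats.shape[-2:], mode="bilinear",
                          align_corners=False)
        attn = torch.sigmoid(self.proj(m))   # per-channel spatial attention from vessels
        return feats + feats * attn          # residual reweighting of vessel regions
```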
Affiliation(s)
- Xiongwen Quan
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Xingyuan Ou
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Li Gao
- Ophthalmology, Tianjin Huanhu Hospital, Tianjin, 300000, China
- Wenya Yin
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Guangyao Hou
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Han Zhang
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China.
10. Cheng H, Lian J, Jiao W. Enhanced MobileNet for skin cancer image classification with fused spatial channel attention mechanism. Sci Rep 2024;14:28850. [PMID: 39572649] [PMCID: PMC11582717] [DOI: 10.1038/s41598-024-80087-w]
Abstract
Skin cancer, which leads to a large number of deaths annually, is widely regarded as one of the most lethal tumors worldwide. Accurate detection of skin cancer in its early stage can significantly raise the survival rate of patients and reduce the burden on public health. Currently, the diagnosis of skin cancer relies heavily on the human visual system for screening and on dermoscopy. However, manual inspection is laborious, time-consuming, and error-prone. Consequently, the development of an automatic machine vision algorithm for skin cancer classification becomes imperative. Various machine learning techniques have been presented over the last few years. Although these methods have yielded promising outcomes in skin cancer detection and recognition, there is still a certain gap between machine learning algorithms and clinical applications. To enhance classification performance, this study proposes a novel deep learning model for discriminating clinical skin cancer images. The proposed model incorporates a convolutional neural network for extracting local receptive field information and a novel attention mechanism for revealing the global associations within an image. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art algorithms on the publicly available International Skin Imaging Collaboration 2019 (ISIC-2019) dataset in terms of precision, recall, and F1-score. These results indicate that the proposed approach is a potentially valuable instrument for skin cancer classification.
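A compact fused spatial-channel attention block of the kind the title describes might look as follows (a generic CBAM-style sketch; the paper's exact design may differ):

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                                  # channel gating
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)     # avg+max descriptors
        return x * self.spatial(pooled)                          # spatial gating
```

Blocks like this can be inserted between MobileNet's inverted-residual stages at modest parameter cost.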
Affiliation(s)
- Hebin Cheng
- School of Intelligence Engineering, Shandong Management University, Jinan, 250357, China
- Jian Lian
- School of Intelligence Engineering, Shandong Management University, Jinan, 250357, China
- Wanzhen Jiao
- Department of Ophthalmology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, 250021, China.
11. Yuan L, Jin K, Shao A, Feng J, Shi C, Ye J, Grzybowski A. Analysis of international publication trends in artificial intelligence in skin cancer. Clin Dermatol 2024;42:570-584. [PMID: 39260460] [DOI: 10.1016/j.clindermatol.2024.09.012]
Abstract
Bibliometric methods were used to analyze publications on the use of artificial intelligence (AI) in skin cancer from 2010 to 2022, aiming to explore current publication trends and future directions. A comprehensive search using four terms, "artificial intelligence," "machine learning," "deep learning," and "skin cancer," was performed in the Web of Science database for original English language publications on AI in skin cancer from 2010 to 2022. We visually analyzed publication, citation, and coupling information, focusing on authors, countries and regions, publishing journals, institutions, and core keywords. The analysis of 989 publications revealed a consistent year-on-year increase in publications from 2010 to 2022 (0.51% versus 33.57%). The United States, India, and China emerged as the leading contributors. IEEE Access was identified as the most prolific journal in this area. Key journals and influential authors were highlighted. Examination of the top 10 most cited publications highlights the significant potential of AI in oncology. Co-citation network analysis identified four primary categories of classical literature on AI in skin tumors. Keyword analysis indicated that "melanoma," "classification," and "deep learning" were the most prevalent keywords, suggesting that deep learning for melanoma diagnosis and grading is the current research focus. The term "pigmented skin lesions" showed the strongest burst and longest duration, whereas "texture" was the latest emerging keyword. AI represents a rapidly growing area of research in skin cancer with the potential to significantly improve skin cancer management. Future research will likely focus on machine learning and deep learning technologies for screening and diagnostic purposes.
Affiliation(s)
- Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- An Shao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Jia Feng
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Caiping Shi
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
12. Saghir U, Singh SK, Hasan M. Skin Cancer Image Segmentation Based on Midpoint Analysis Approach. J Imaging Inform Med 2024;37:2581-2596. [PMID: 38627267] [PMCID: PMC11522265] [DOI: 10.1007/s10278-024-01106-w]
Abstract
Skin cancer affects people of all ages and is a common disease. The death toll from skin cancer rises with late diagnosis. An automated mechanism for early-stage skin cancer detection is required to diminish the mortality rate. Visual examination with scanning or imaging screening is a common mechanism for detecting this disease, but because skin cancer resembles other diseases, this mechanism has limited accuracy. This article introduces an innovative segmentation mechanism that operates on the ISIC dataset to divide skin images into critical and non-critical sections. The main objective of the research is to segment lesions from dermoscopic skin images. The suggested framework is completed in two steps. The first step pre-processes the image: a bottom-hat filter is applied for hair removal, and image enhancement is performed using the DCT and color coefficients. In the next phase, a background subtraction method with midpoint analysis is applied for segmentation to extract the region of interest, achieving an accuracy of 95.30%. Validation of the segmentation against the ground truth is accomplished by comparing the segmented images with the validation data provided with the ISIC dataset.
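The bottom-hat hair-removal step could be implemented with standard OpenCV morphology (kernel size and threshold here are illustrative, not the paper's parameters):

```python
import cv2
import numpy as np

def remove_hair(bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    # Bottom-hat (black-hat) morphology highlights thin dark structures such as hairs.
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    # Fill the masked hair pixels from the surrounding skin.
    return cv2.inpaint(bgr, hair_mask, 3, cv2.INPAINT_TELEA)
```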
Affiliation(s)
- Uzma Saghir
- Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India
- Shailendra Kumar Singh
- Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India.
- Moin Hasan
- Dept. of Computer Science & Engineering, Jain Deemed-to-be-University, Bengaluru, 562112, India
13. Arjun KP, Kumar KS, Dhanaraj RK, Ravi V, Kumar TG. Optimizing time prediction and error classification in early melanoma detection using a hybrid RCNN-LSTM model. Microsc Res Tech 2024;87:1789-1809. [PMID: 38515433] [DOI: 10.1002/jemt.24559]
Abstract
Skin cancer is a terrifying disorder that can affect anyone. Due to the significant increase in the rate of melanoma skin cancer, early detection of skin cancer is now more critical than ever before. Malignant melanoma is one of the most serious forms of skin cancer, and it is caused by abnormal melanocyte cell growth. In recent years, skin cancer predictive categorization has become more accurate owing to multiple deep learning algorithms. Malignant melanoma is diagnosed here using the Recurrent Convolution Neural Network-Long Short-Term Memory (RCNN-LSTM), one of the deep learning classification approaches. Using the International Skin Image Collection and the RCNN-LSTM, the data are categorized and analyzed to gain a better understanding of skin cancer. The method begins with data preprocessing, which prepares the dataset for classification. The RCNN is then employed to extract the features that are vital to the prediction process, and the LSTM is responsible for the final step, classification. The method achieves a precision of 94.60%, a sensitivity of 95.67%, and an F1-score of 95.13%. Other benefits of the suggested study include shorter prediction durations of 95.314, 122.530, and 131.205 s and lower model loss of 0.25%, 0.19%, and 0.15% for input sizes 10, 15, and 20, respectively. Across three datasets, the categorization error was reduced to 5.11% and accuracy reached 95.42%. In comparison to previous approaches, the work discussed here produces superior outcomes. RESEARCH HIGHLIGHTS:
- A recurrent convolutional neural network (RCNN) deep learning approach for optimizing time prediction and error classification in early melanoma detection. It extracts a large number of specific features from the skin disease image, making the classification process easier and more accurate.
- To reduce classification errors in accurately detecting melanoma, context dependency is considered in this work. By accounting for context dependency, the deprivation state is avoided, preventing performance degradation in the model.
- To minimize melanoma detection model loss, a skin disease image augmentation or regularization process is performed. This strategy improves the accuracy of the model when applied to fresh, previously unobserved data.
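A generic sketch of the CNN-features-into-LSTM idea behind the RCNN-LSTM classifier (layer sizes and the spatial-sequence ordering are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (N, 3, H, W)
        f = self.cnn(x)                         # (N, 64, H/4, W/4)
        seq = f.flatten(2).transpose(1, 2)      # spatial positions as a sequence
        _, (h, _) = self.lstm(seq)              # context dependency across positions
        return self.fc(h[-1])
```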
Affiliation(s)
- K P Arjun
- Department of Computer Science and Engineering, GITAM University, Bangalore, India
- K Sampath Kumar
- Department of Computer Science and Engineering, AMET University, Chennai, India
- Rajesh Kumar Dhanaraj
- Symbiosis Institute of Computer Studies and Research (SICSR), Symbiosis International (Deemed University), Pune, India
- Vinayakumar Ravi
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
- T Ganesh Kumar
- School of Computing Science and Engineering, Galgotias University, Greater Noida, India
14. Yin Y, Huang C, Bao X. ContrAttNet: Contribution and attention approach to multivariate time-series data imputation. Network 2024:1-24. [PMID: 38828665] [DOI: 10.1080/0954898x.2024.2360157]
Abstract
The imputation of missing values in multivariate time-series data is a basic and widely used data processing technique. Recently, some studies have exploited Recurrent Neural Networks (RNNs) and Generative Adversarial Networks (GANs) to impute/fill the missing values in multivariate time-series data. However, when faced with datasets with high missing rates, the imputation error of these methods increases dramatically. To this end, we propose a neural network model based on dynamic contribution and attention, denoted ContrAttNet. ContrAttNet consists of three novel modules: a feature attention module, an iLSTM (imputation Long Short-Term Memory) module, and a 1D-CNN (1-Dimensional Convolutional Neural Network) module. ContrAttNet exploits temporal information and spatial feature information to predict missing values, where the iLSTM attenuates the memory of the LSTM according to the characteristics of the missing values, to learn the contributions of different features. Moreover, the feature attention module introduces an attention mechanism based on contributions to calculate supervised weights. Furthermore, under the influence of these supervised weights, the 1D-CNN processes the time-series data by treating them as spatial features. Experimental results show that ContrAttNet outperforms other state-of-the-art models in the missing value imputation of multivariate time-series data, with an average MAPE of 6% and an average MAE of 9% on the benchmark datasets.
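A sketch of a decayed-memory input step in the spirit of iLSTM's memory attenuation (this follows the well-known GRU-D decay pattern; the paper's exact mechanism may differ, and all names here are illustrative):

```python
import torch
import torch.nn as nn

class DecayedInput(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_features))   # learned per-feature decay rates

    def forward(self, x, mask, delta, last_obs, feat_mean):
        """x: (N, F) values; mask: 1 if observed; delta: time since last observation."""
        gamma = torch.exp(-torch.relu(self.w) * delta)   # decay factor in (0, 1]
        # Fade the last observation toward the feature mean as the gap grows.
        x_hat = gamma * last_obs + (1.0 - gamma) * feat_mean
        return mask * x + (1.0 - mask) * x_hat
```

The output can feed an ordinary LSTM, so the recurrent memory is effectively attenuated where observations are missing.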
Affiliation(s)
- Yunfei Yin
- College of Computer Science, Chongqing University, Chongqing, China
- Caihao Huang
- College of Computer Science, Chongqing University, Chongqing, China
- Xianjian Bao
- Department of Computer Science, Maharishi University of Management, Fairfield, USA
15. Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024;250:108200. [PMID: 38677080] [DOI: 10.1016/j.cmpb.2024.108200]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes appearances to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search using PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities like digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
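As an example of the simplest harmonization family reviewed, grayscale normalization can be as small as a z-score remap onto a shared reference scale (the target statistics here are illustrative):

```python
import numpy as np

def zscore_harmonize(img: np.ndarray, target_mean=128.0, target_std=32.0):
    """Map an image's intensity distribution onto a shared reference scale."""
    z = (img.astype(np.float32) - img.mean()) / (img.std() + 1e-8)
    return np.clip(z * target_std + target_mean, 0, 255).astype(np.uint8)
```

Applying the same remap to every center's images removes first-order scanner and protocol offsets before pooling, which is the basic idea behind the accuracy gains the review reports.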
Affiliation(s)
- Silvia Seoni
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy.
16. Hu Z, Mei W, Chen H, Hou W. Multi-scale feature fusion and class weight loss for skin lesion classification. Comput Biol Med 2024;176:108594. [PMID: 38761501] [DOI: 10.1016/j.compbiomed.2024.108594]
Abstract
Skin cancer is one of the common types of cancer. It spreads quickly and is not easy to detect in its early stages, posing a major threat to human health. In recent years, deep learning methods have attracted widespread attention for skin cancer detection in dermoscopic images. However, training a practical classifier is highly challenging due to inter-class similarity and intra-class variation in skin lesion images. To address these problems, we propose a multi-scale fusion structure that combines shallow and deep features for more accurate classification. Simultaneously, we implement three approaches to the problem of class imbalance: class weighting, label smoothing, and resampling. In addition, the HAM10000_RE dataset strips out hair features to demonstrate the role of hair features in the classification process, and we demonstrate that the region of interest is the most critical classification feature for the HAM10000_SE dataset, which segments lesion regions. We evaluated the effectiveness of our model using the HAM10000 and ISIC2019 datasets. The results show that the method performs well in dermoscopic classification tasks, with an ACC of 94.0% and AUC of 99.3% on the HAM10000 dataset, and an ACC of 89.8% on the ISIC2019 dataset. The overall performance of our model is excellent in comparison to state-of-the-art models.
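Two of the three imbalance remedies named, class weighting and label smoothing, map to single arguments of PyTorch's cross-entropy loss; a sketch (the class counts follow HAM10000's published distribution but are shown only for illustration):

```python
import torch
import torch.nn as nn

# HAM10000-style class counts: nv, mel, bkl, bcc, akiec, vasc, df
counts = torch.tensor([6705., 1113., 1099., 514., 327., 142., 115.])
class_weights = counts.sum() / (len(counts) * counts)     # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=class_weights, label_smoothing=0.1)

logits = torch.randn(8, 7)                 # a batch of model outputs
targets = torch.randint(0, 7, (8,))
loss = criterion(logits, targets)          # rare classes now contribute more
```

Resampling, the third remedy, is typically handled on the data-loader side (e.g., a weighted random sampler) rather than in the loss.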
Affiliation(s)
- Zhentao Hu
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Weiqiang Mei
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China.
- Hongyu Chen
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Wei Hou
- College of Computer and Information Engineering, Henan University, Kaifeng, 475001, China
17. Kandhro IA, Manickam S, Fatima K, Uddin M, Malik U, Naz A, Dandoush A. Performance evaluation of E-VGG19 model: Enhancing real-time skin cancer detection and classification. Heliyon 2024;10:e31488. [PMID: 38826726] [PMCID: PMC11141372] [DOI: 10.1016/j.heliyon.2024.e31488]
Abstract
Skin cancer is a pervasive and potentially life-threatening disease. Early detection plays a crucial role in improving patient outcomes. Machine learning (ML) techniques, particularly when combined with pre-trained deep learning models, have shown promise in enhancing the accuracy of skin cancer detection. In this paper, we enhance the VGG19 pre-trained model with max pooling and a dense layer for the prediction of skin cancer. Moreover, we also explore pre-trained models such as Visual Geometry Group 19 (VGG19), Residual Network 152 version 2 (ResNet152v2), Inception-Residual Network version 2 (InceptionResNetV2), Dense Convolutional Network 201 (DenseNet201), Residual Network 50 (ResNet50), and Inception version 3 (InceptionV3). For training, a skin lesion dataset with malignant and benign cases is used. The models extract features and divide skin lesions into two categories: malignant and benign. The features are then fed into machine learning methods, including Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree (DT), and Logistic Regression (LR). Our results demonstrate that combining the E-VGG19 model with traditional classifiers significantly improves the overall classification accuracy for skin cancer detection and classification. Moreover, we also compare the performance of baseline classifiers and pre-trained models using the metrics recall, F1 score, precision, sensitivity, and accuracy. The experimental results provide valuable insights into the effectiveness of various models and classifiers for accurate and efficient skin cancer detection. This research contributes to ongoing efforts to create automated technologies for detecting skin cancer that can help healthcare professionals and individuals identify potential skin cancer cases at an early stage, ultimately leading to more timely and effective treatments.
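A sketch of the deep-features-plus-classical-classifier pipeline described (the pooling choice, input shape, and SVM kernel are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.svm import SVC

# Headless VGG19 with global max pooling, echoing the E-VGG19 max-pooling idea.
extractor = VGG19(weights="imagenet", include_top=False, pooling="max")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 224, 224, 3) RGB array in [0, 255] -> (N, 512) features."""
    return extractor.predict(preprocess_input(images.astype("float32")))

# X_train, y_train: lesion images and benign/malignant labels (assumed available)
# clf = SVC(kernel="linear").fit(extract_features(X_train), y_train)
```

Swapping `SVC` for `KNeighborsClassifier`, `DecisionTreeClassifier`, or `LogisticRegression` reproduces the classifier comparison the paper runs.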
Affiliation(s)
- Irfan Ali Kandhro
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Selvakumar Manickam
- National Advanced IPv6 Centre (NAv6), Universiti Sains Malaysia, Gelugor, Penang, 11800, Malaysia
- Kanwal Fatima
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Mueen Uddin
- College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar
- Urooj Malik
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Anum Naz
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Abdulhalim Dandoush
- College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar
18. Munuswamy Selvaraj K, Gnanagurusubbiah S, Roby Roy RR, John Peter JH, Balu S. Enhancing skin lesion classification with advanced deep learning ensemble models: a path towards accurate medical diagnostics. Curr Probl Cancer 2024;49:101077. [PMID: 38480028] [DOI: 10.1016/j.currproblcancer.2024.101077]
Abstract
Skin cancer, including the highly lethal malignant melanoma, poses a significant global health challenge with a rising incidence rate. Early detection plays a pivotal role in improving survival rates. This study aims to develop an advanced deep learning-based approach for accurate skin lesion classification, addressing challenges such as limited data availability, class imbalance, and noise. Modern deep neural network designs, such as ResNeXt101, SeResNeXt101, ResNet152V2, DenseNet201, GoogLeNet, and Xception, are used in the study and optimized using the SGD technique. The dataset comprises diverse skin lesion images from the HAM10000 and ISIC datasets. Noise and artifacts are tackled using image inpainting, and data augmentation techniques enhance the diversity of training samples. An ensemble technique is utilized, creating both average and weighted average ensemble models, with a grid search optimizing the model weight distribution. The individual models exhibit varying performance on metrics including recall, precision, F1 score, and MCC. The average ensemble model achieves a harmonious balance, emphasizing precision, F1 score, and recall, yielding high performance. The weighted ensemble model capitalizes on the individual models' strengths, showcasing heightened precision and MCC, yielding outstanding performance. The ensemble models consistently outperform the individual models, with the average ensemble model attaining a macro-average ROC-AUC score of 96% and the weighted ensemble model achieving a macro-average ROC-AUC score of 97%. This research demonstrates the efficacy of ensemble techniques in significantly improving skin lesion classification accuracy. By harnessing the strengths of individual models and addressing their limitations, the ensemble models exhibit robust and reliable performance across various metrics. The findings underscore the potential of ensemble techniques in enhancing medical diagnostics and contributing to improved patient outcomes in skin lesion diagnosis.
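A weighted-average ensemble with a grid search over the weights can be sketched as follows (two models shown for brevity; the study ensembles more networks, and its grid may differ):

```python
import numpy as np

def weighted_ensemble(probas, weights):
    """probas: list of (N, C) predicted probabilities; weights sum to 1."""
    return sum(w * p for w, p in zip(weights, probas))

def grid_search_weights(probas, y_val, step=0.05):
    best_w, best_acc = None, -1.0
    for w in np.arange(0.0, 1.0 + 1e-9, step):
        weights = np.array([w, 1.0 - w])
        acc = (weighted_ensemble(probas, weights).argmax(1) == y_val).mean()
        if acc > best_acc:
            best_w, best_acc = weights, acc
    return best_w, best_acc
```

Setting all weights equal recovers the plain average ensemble, so the two reported models differ only in this weight vector.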
Affiliation(s)
- Kavitha Munuswamy Selvaraj
- Department of Electronics and Communication Engineering, R.M.K. Engineering College, RSM Nagar, Chennai, Tamil Nadu, India.
- Sumathy Gnanagurusubbiah
- Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- Reena Roy Roby Roy
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Jasmine Hephzipah John Peter
- Department of Electronics and Communication Engineering, R.M.K. Engineering College, RSM Nagar, Chennai, Tamil Nadu, India
- Sarala Balu
- Department of Electronics and Communication Engineering, R.M.K. Engineering College, RSM Nagar, Chennai, Tamil Nadu, India
19. Malik FS, Yousaf MH, Sial HA, Viriri S. Exploring dermoscopic structures for melanoma lesions' classification. Front Big Data 2024;7:1366312. [PMID: 38590699] [PMCID: PMC10999676] [DOI: 10.3389/fdata.2024.1366312]
Abstract
Background: Melanoma is one of the deadliest skin cancers; it originates from melanocytes when sun exposure causes mutations. Early detection boosts the cure rate to 90%, but misclassification drops survival to 15-20%. Clinical variations challenge dermatologists in distinguishing benign nevi from melanomas. Current diagnostic methods, including visual analysis and dermoscopy, have limitations, emphasizing the need for artificial intelligence in dermatology.
Objectives: In this paper, we aim to explore dermoscopic structures for the classification of melanoma lesions. The training of AI models faces a challenge known as brittleness, where small changes in input images impact the classification. A study explored AI vulnerability in discerning melanoma from benign lesions using features of size, color, and shape. Tests with artificial and natural variations revealed a notable decline in accuracy, emphasizing the necessity for additional information, such as dermoscopic structures.
Methodology: The study utilizes datasets with clinically marked dermoscopic images examined by expert clinicians. Transformer and CNN-based models are employed to classify these images based on dermoscopic structures, and classification results are validated using feature visualization. To assess model susceptibility to image variations, classifiers are evaluated on test sets with original, duplicated, and digitally modified images, with additional testing on ISIC 2016 images. The study focuses on three dermoscopic structures crucial for melanoma detection: blue-white veil, dots/globules, and streaks.
Results: In evaluating model performance, adding convolutions to Vision Transformers proves highly effective, achieving up to 98% accuracy. CNN architectures like VGG-16 and DenseNet-121 reach 50-60% accuracy, performing best with features other than dermoscopic structures. Vision Transformers without convolutions exhibit reduced accuracy on diverse test sets, revealing their brittleness. OpenAI CLIP, a pre-trained model, consistently performs well across the various test sets. To address brittleness, a mitigation method involving extensive data augmentation during training and 23 transformed duplicates at test time sustains accuracy.
Conclusions: This paper proposes a melanoma classification scheme utilizing three dermoscopic structures across the Ph2 and Derm7pt datasets, and addresses AI susceptibility to image variations. Despite the small dataset, future work suggests collecting more annotated datasets and automatically computing dermoscopic structural features.
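The test-time mitigation described, averaging predictions over transformed duplicates, can be sketched as follows (the particular transforms and model interface are illustrative):

```python
import torch
import torchvision.transforms as T

tta_transforms = [T.RandomHorizontalFlip(p=1.0),
                  T.RandomRotation(degrees=(90, 90)),
                  T.ColorJitter(brightness=0.2)]

@torch.no_grad()
def tta_predict(model, image: torch.Tensor, n_copies=23):
    """image: (3, H, W) tensor; averages softmax over transformed duplicates."""
    views = [image] + [tta_transforms[i % len(tta_transforms)](image)
                       for i in range(n_copies)]
    logits = model(torch.stack(views))
    return logits.softmax(dim=1).mean(dim=0)   # consensus over all 24 views
```

Averaging over variations of the same lesion is what blunts the brittleness the paper measures on digitally modified test sets.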
Collapse
Affiliation(s)
- Fiza Saeed Malik
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
| | - Muhammad Haroon Yousaf
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa
| | | | - Serestina Viriri
- School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
| |
Collapse
|
20
|
Zhang K, Lin PC, Pan J, Shao R, Xu PX, Cao R, Wu CG, Crookes D, Hua L, Wang L. DeepmdQCT: A multitask network with domain invariant features and comprehensive attention mechanism for quantitative computer tomography diagnosis of osteoporosis. Comput Biol Med 2024; 170:107916. [PMID: 38237237 DOI: 10.1016/j.compbiomed.2023.107916] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 12/18/2023] [Accepted: 12/29/2023] [Indexed: 02/28/2024]
Abstract
In the medical field, the application of machine learning technology to the automatic diagnosis and monitoring of osteoporosis often faces challenges related to domain adaptation in drug therapy research. Existing neural networks used for the diagnosis of osteoporosis may experience a decrease in performance when applied to new data domains due to changes in radiation dose and equipment. To address this issue, in this study we propose a new method for multi-domain diagnosis on quantitative computed tomography (QCT) images, called DeepmdQCT. This method adopts a domain-invariant feature strategy and integrates a comprehensive attention mechanism to guide the fusion of global and local features, effectively improving the diagnostic performance on multi-domain CT images. We conducted experimental evaluations on a self-created OQCT dataset; the results showed that the average accuracy reached 91% for dose-domain images and 90.5% for device-domain images. Our method also successfully estimated bone density values, with a fit of 0.95 to the gold standard. It not only achieved high accuracy on CT images across the dose and equipment domains, but also estimated key bone density values, which is crucial for evaluating the effectiveness of osteoporosis drug treatment. In addition, we validated the effectiveness of our architecture in feature extraction using three publicly available datasets. We encourage the application of the DeepmdQCT method to a wider range of medical image analysis fields to improve performance on multi-domain images.
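The abstract does not spell out how the domain-invariant feature strategy is implemented. One common way to learn features that transfer across dose and device domains is domain-adversarial training with a gradient reversal layer, sketched below in PyTorch as an assumption for illustration rather than the paper's actual design.

```python
# Sketch (assumed approach): a gradient reversal layer plus a domain
# classifier head. Features that fool the domain classifier tend to be
# domain-invariant, e.g. across radiation doses or scanner devices.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None  # no gradient for the scalar lamb

class DomainAdversarialHead(nn.Module):
    def __init__(self, feat_dim, n_domains=2, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_domains))

    def forward(self, features):
        # Reversed gradients push the backbone toward domain-invariant features.
        return self.classifier(GradReverse.apply(features, self.lamb))
```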
Collapse
Affiliation(s)
- Kun Zhang
- School of Electrical Engineering, Nantong University, Nantong, Jiangsu, 226001, China; Nantong Key Laboratory of Intelligent Control and Intelligent Computing, Nantong, Jiangsu, 226001, China; Nantong Key Laboratory of Intelligent Medicine Innovation and Transformation, Nantong, Jiangsu, 226001, China
| | - Peng-Cheng Lin
- School of Electrical Engineering, Nantong University, Nantong, Jiangsu, 226001, China
| | - Jing Pan
- Department of Radiology, Affiliated Hospital 2 of Nantong University, Nantong, Jiangsu, 226001, China
| | - Rui Shao
- School of Electrical Engineering, Nantong University, Nantong, Jiangsu, 226001, China
| | - Pei-Xia Xu
- School of Electrical Engineering, Nantong University, Nantong, Jiangsu, 226001, China
| | - Rui Cao
- Department of Radiology, Affiliated Hospital 2 of Nantong University, Nantong, Jiangsu, 226001, China
| | - Cheng-Gang Wu
- School of Electrical Engineering, Nantong University, Nantong, Jiangsu, 226001, China
| | - Danny Crookes
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT7 1NN, UK
| | - Liang Hua
- School of Electrical Engineering, Nantong University, Nantong, Jiangsu, 226001, China.
| | - Lin Wang
- Department of Radiology, Affiliated Hospital 2 of Nantong University, Nantong, Jiangsu, 226001, China.
| |
Collapse
|
21
|
Zhang X, Li Q, Li W, Guo Y, Zhang J, Guo C, Chang K, Lovell NH. FD-Net: Feature Distillation Network for Oral Squamous Cell Carcinoma Lymph Node Segmentation in Hyperspectral Imagery. IEEE J Biomed Health Inform 2024; 28:1552-1563. [PMID: 38446656 DOI: 10.1109/jbhi.2024.3350245] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/08/2024]
Abstract
Oral squamous cell carcinoma (OSCC) is characterized by early regional lymph node metastasis. OSCC patients often have poor prognoses and low survival rates due to cervical lymph node metastases. It is therefore necessary to rely on a reasonable screening method to quickly assess the cervical lymph node metastatic status of OSCC patients and develop appropriate treatment plans. In this study, the widely used pathological sections with hematoxylin-eosin (H&E) staining are taken as the target and, combined with the advantages of hyperspectral imaging technology, a novel diagnostic method for identifying OSCC lymph node metastases is proposed. The method consists of a learning stage and a decision-making stage, focusing on cancerous and non-cancerous nuclei, gradually completing lesion segmentation from coarse to fine, and achieving high accuracy. In the learning stage, the proposed feature distillation network (FD-Net) is developed to segment the cancerous and non-cancerous nuclei. In the decision-making stage, the segmentation results are post-processed and the lesions are effectively distinguished based on prior knowledge. Experimental results demonstrate that the proposed FD-Net is very competitive in the OSCC hyperspectral medical image segmentation task. FD-Net performs best on all seven segmentation evaluation indicators: MIoU, OA, AA, SE, CSI, GDR, and DICE. On these seven indicators, it is 1.75%, 1.27%, 0.35%, 1.9%, 0.88%, 4.45%, and 1.98% higher, respectively, than the second-ranked DeepLab V3 method. In addition, the proposed diagnostic method for OSCC lymph node metastasis can effectively assist pathologists in disease screening and reduce their workload.
Collapse
|
22
|
De A, Mishra N, Chang HT. An approach to the dermatological classification of histopathological skin images using a hybridized CNN-DenseNet model. PeerJ Comput Sci 2024; 10:e1884. [PMID: 38435616 PMCID: PMC10909212 DOI: 10.7717/peerj-cs.1884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 01/29/2024] [Indexed: 03/05/2024]
Abstract
This research addresses the challenge of automating skin disease diagnosis using dermatoscopic images. The primary issue lies in accurately classifying pigmented skin lesions, which traditionally rely on manual assessment by dermatologists and are prone to subjectivity and time consumption. By integrating a hybrid CNN-DenseNet model, this study aimed to overcome the complexities of differentiating various skin diseases and automating the diagnostic process effectively. Our methodology involved rigorous data preprocessing, exploratory data analysis, normalization, and label encoding. Techniques such as model hybridization and batch normalization were employed to optimize the model architecture and data fitting. Initial iterations of our convolutional neural network (CNN) model achieved an accuracy of 76.22% on the test data and 75.69% on the validation data. Recognizing the need for improvement, the model was hybridized with the DenseNet architecture, ResNet was implemented for feature extraction, and the network was then further trained on the HAM10000 and PAD-UFES-20 datasets. Overall, our efforts resulted in a hybrid model with an impressive accuracy of 95.7% on the HAM10000 dataset and 91.07% on the PAD-UFES-20 dataset. In comparison to recently published works, our model stands out for its ability to effectively diagnose skin diseases such as melanocytic nevi, melanoma, benign keratosis-like lesions, basal cell carcinoma, actinic keratoses, vascular lesions, and dermatofibroma: it not only rivals the diagnostic accuracy of real-world clinical specialists but also offers customization potential for more nuanced clinical uses.
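As a rough illustration of the fine-tuning workflow behind hybrid models of this kind (not the authors' exact architecture), the snippet below loads an ImageNet-pretrained DenseNet-121 from torchvision, freezes its feature extractor, and replaces the classifier head for a seven-class skin lesion task; the class count and freezing policy are assumptions.

```python
# Sketch: adapt a pretrained DenseNet-121 to skin lesion classes.
# n_classes=7 matches HAM10000's seven categories; freezing is optional.
import torch.nn as nn
from torchvision import models

def build_densenet_classifier(n_classes: int = 7, freeze_backbone: bool = True):
    model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.features.parameters():
            p.requires_grad = False  # train only the new classifier head
    model.classifier = nn.Linear(model.classifier.in_features, n_classes)
    return model
```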
Collapse
Affiliation(s)
- Anubhav De
- School of Computing Science & Engineering, VIT Bhopal University, Madhya Pradesh, India
| | - Nilamadhab Mishra
- School of Computing Science & Engineering, VIT Bhopal University, Madhya Pradesh, India
| | - Hsien-Tsung Chang
- Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, Taiwan
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Artificial Intelligence Research Center, Chang Gung University, Taoyuan, Taiwan
- Bachelor Program in Artificial Intelligence, Chang Gung University, Taoyuan, Taiwan
| |
Collapse
|
23
|
Farhatullah, Chen X, Zeng D, Xu J, Nawaz R, Ullah R. Classification of Skin Lesion With Features Extraction Using Quantum Chebyshev Polynomials and Autoencoder From Wavelet-Transformed Images. IEEE ACCESS 2024; 12:193923-193936. [DOI: 10.1109/access.2024.3502513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
Affiliation(s)
- Farhatullah
- School of Computer Science, China University of Geosciences, Wuhan, China
| | - Xin Chen
- School of Automation, China University of Geosciences, Wuhan, China
| | - Deze Zeng
- School of Computer Science, China University of Geosciences, Wuhan, China
| | - Jiafeng Xu
- School of Automation, China University of Geosciences, Wuhan, China
| | - Rab Nawaz
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, U.K
| | - Rahmat Ullah
- School of Computer Science, China University of Geosciences, Wuhan, China
| |
Collapse
|
24
|
Zhang D, Li A, Wu W, Yu L, Kang X, Huo X. CR-Conformer: a fusion network for clinical skin lesion classification. Med Biol Eng Comput 2024; 62:85-94. [PMID: 37653185 DOI: 10.1007/s11517-023-02904-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Accepted: 08/03/2023] [Indexed: 09/02/2023]
Abstract
Deep convolutional neural network (DCNN) models have been widely used to diagnose skin lesions, and some of them have achieved diagnostic results comparable to or even better than dermatologists. Most publicly available skin lesion datasets used to train DCNNs consist of dermoscopic images. Expensive dermoscopic equipment is rarely available in rural clinics or small hospitals in remote areas. Therefore, it is of great significance to rely on clinical images for computer-aided diagnosis of skin lesions. This paper proposes an improved dual-branch fusion network called CR-Conformer. It integrates a DCNN branch that can effectively extract local features and a Transformer branch that can extract global features to capture more valuable features in clinical skin lesion images. In addition, we improved the DCNN branch to extract enhanced features in four directions through a convolutional rotation operation, further improving the classification performance on clinical skin lesion images. To verify the effectiveness of our proposed method, we conducted comprehensive tests on a private dataset named XJUSL, which contains ten types of clinical skin lesions. The test results indicate that our proposed method reduced the number of parameters by 11.17 M and improved the accuracy of clinical skin lesion image classification by 1.08%. It has the potential to realize automatic diagnosis of skin lesions on mobile devices.
Collapse
Affiliation(s)
- Dezhi Zhang
- Department of Dermatology and Venereology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, 830000, China
- Xinjiang Clinical Research Center for Dermatologic Diseases, Urumqi, China
- Xinjiang Key Laboratory of Dermatology Research (XJYS1707), Urumqi, China
| | - Aolun Li
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Weidong Wu
- Department of Dermatology and Venereology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, 830000, China.
- Xinjiang Clinical Research Center for Dermatologic Diseases, Urumqi, China.
- Xinjiang Key Laboratory of Dermatology Research (XJYS1707), Urumqi, China.
| | - Long Yu
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Xiaojing Kang
- Department of Dermatology and Venereology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, 830000, China
- Xinjiang Clinical Research Center for Dermatologic Diseases, Urumqi, China
- Xinjiang Key Laboratory of Dermatology Research (XJYS1707), Urumqi, China
| | - Xiangzuo Huo
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
| |
Collapse
|
25
|
Khan MA, Muhammad K, Sharif M, Akram T, Kadry S. Intelligent fusion-assisted skin lesion localization and classification for smart healthcare. Neural Comput Appl 2024; 36:37-52. [DOI: 10.1007/s00521-021-06490-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Accepted: 08/30/2021] [Indexed: 12/28/2022]
|
26
|
Khan S, Khan A. SkinViT: A transformer based method for Melanoma and Nonmelanoma classification. PLoS One 2023; 18:e0295151. [PMID: 38150449 PMCID: PMC10752524 DOI: 10.1371/journal.pone.0295151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Accepted: 11/14/2023] [Indexed: 12/29/2023] Open
Abstract
Over the past few decades, skin cancer has emerged as a major global health concern. The efficacy of skin cancer management greatly depends upon early diagnosis and effective treatment. The automated classification of Melanoma and Nonmelanoma is a challenging task due to high visual similarity across different classes and variability within each class. To the best of our knowledge, this study is the first to classify Melanoma and Nonmelanoma with Basal Cell Carcinoma (BCC) and Squamous Cell Carcinoma (SCC) grouped under the Nonmelanoma class. This research therefore focuses on automated detection of different skin cancer types to assist dermatologists in the timely diagnosis and treatment of Melanoma and Nonmelanoma patients. Recently, artificial intelligence (AI) methods have gained popularity, with Convolutional Neural Networks (CNNs) employed to accurately classify various skin diseases. However, CNNs are limited in their ability to capture global contextual information, which may cause important information to be missed. To address this issue, this research explores the outlook attention mechanism inspired by the Vision Outlooker, which enhances important features while suppressing noisy ones. The proposed SkinViT architecture integrates an outlooker block, a transformer block, and an MLP head block to efficiently capture both fine-level and global features and thereby enhance the accuracy of Melanoma and Nonmelanoma classification. The proposed SkinViT method is assessed with performance metrics including recall, precision, classification accuracy, and F1 score. We performed extensive experiments on three datasets: Dataset1, extracted from ISIC2019; Dataset2, collected from various online dermatological databases; and Dataset3, which combines both. The proposed SkinViT achieved 0.9109 accuracy on Dataset1, 0.8911 accuracy on Dataset3, and 0.8611 accuracy on Dataset2. Moreover, SkinViT outperformed other SOTA models and displayed higher accuracy than previous work in the literature. The proposed method demonstrated higher efficiency in the classification of Melanoma and Nonmelanoma dermoscopic images. This work is expected to inspire further research toward a skin cancer detection system that can assist dermatologists in the timely diagnosis of Melanoma and Nonmelanoma patients.
Collapse
Affiliation(s)
- Somaiya Khan
- School of Electronics Engineering, Beijing University of Posts and Telecommunications, Beijing, China
| | - Ali Khan
- School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| |
Collapse
|
27
|
Knoedler L, Knoedler S, Allam O, Remy K, Miragall M, Safi AF, Alfertshofer M, Pomahac B, Kauke-Navarro M. Application possibilities of artificial intelligence in facial vascularized composite allotransplantation-a narrative review. Front Surg 2023; 10:1266399. [PMID: 38026484 PMCID: PMC10646214 DOI: 10.3389/fsurg.2023.1266399] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 09/26/2023] [Indexed: 12/01/2023] Open
Abstract
Facial vascularized composite allotransplantation (FVCA) is an emerging field of reconstructive surgery that represents a paradigm shift in the surgical treatment of patients with severe facial disfigurements. While conventional reconstructive strategies were previously considered the gold standard for patients with devastating facial trauma, FVCA has demonstrated promising short- and long-term outcomes. Yet, several obstacles remain that complicate the integration of FVCA procedures into the standard workflow for facial trauma patients. Artificial intelligence (AI) has been shown to provide targeted and resource-effective solutions for persisting clinical challenges in various specialties. However, there is a paucity of studies elucidating the combination of FVCA and AI to overcome such hurdles. Here, we delineate the application possibilities of AI in the field of FVCA and discuss the use of AI technology for FVCA outcome simulation, diagnosis and prediction of rejection episodes, and malignancy screening. This line of research may serve as a foundation for future studies linking these two revolutionary biotechnologies.
Collapse
Affiliation(s)
- Leonard Knoedler
- Department of Plastic, Hand- and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Samuel Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Omar Allam
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Katya Remy
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, Regensburg, Germany
| | - Maximilian Miragall
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, Regensburg, Germany
| | - Ali-Farid Safi
- Craniologicum, Center for Cranio-Maxillo-Facial Surgery, Bern, Switzerland
- Faculty of Medicine, University of Bern, Bern, Switzerland
| | - Michael Alfertshofer
- Division of Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilians University Munich, Munich, Germany
| | - Bohdan Pomahac
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Martin Kauke-Navarro
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| |
Collapse
|
28
|
Hussain M, Khan MA, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. SkinNet-INIO: Multiclass Skin Lesion Localization and Classification Using Fusion-Assisted Deep Neural Networks and Improved Nature-Inspired Optimization Algorithm. Diagnostics (Basel) 2023; 13:2869. [PMID: 37761236 PMCID: PMC10527569 DOI: 10.3390/diagnostics13182869] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 08/30/2023] [Accepted: 09/01/2023] [Indexed: 09/29/2023] Open
Abstract
Background: Using artificial intelligence (AI) with the concept of a deep learning-based automated computer-aided diagnosis (CAD) system has shown improved performance for skin lesion classification. Although deep convolutional neural networks (DCNNs) have significantly improved many image classification tasks, it is still difficult to accurately classify skin lesions because of a lack of training data, inter-class similarity, intra-class variation, and the inability to concentrate on semantically significant lesion parts. Innovations: To address these issues, we proposed an automated deep learning and best feature selection framework for multiclass skin lesion classification in dermoscopy images. The proposed framework begins with a preprocessing step for contrast enhancement using a new technique based on dark channel haze and top-bottom filtering. Three pre-trained deep learning models are then fine-tuned and trained using the transfer learning concept. In the fine-tuning process, we added and removed a few layers to lessen the parameters, and selected the hyperparameters using a genetic algorithm (GA) instead of manual assignment, the purpose being to improve the learning performance. After that, the deeper layer is selected for each network and deep features are extracted. The extracted deep features are fused using a novel serial correlation-based approach. Compared with a plain serial approach, this technique reduces the feature vector length, although a little redundant information remains; to address this issue, we proposed an improved ant lion optimization algorithm for best feature selection. The selected features are finally classified using machine learning algorithms. Main Results: The experimental process was conducted using two publicly available datasets, ISIC2018 and ISIC2019, on which we obtained accuracies of 96.1% and 99.9%, respectively. Comparison with state-of-the-art techniques shows that the proposed framework improves accuracy. Conclusions: The proposed framework successfully enhances the contrast of the cancer region. Moreover, automated hyperparameter selection improved the learning process, and the proposed fusion and improved selection steps maintain the best accuracy while shortening the computational time.
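The GA-based hyperparameter selection mentioned above can be illustrated with a toy genetic loop over a discrete search space. Everything below is an assumption for illustration, not the paper's configuration: the search space, truncation selection, and the fitness_fn interface, which in practice would wrap a training-and-validation run returning validation accuracy.

```python
# Sketch: a toy genetic algorithm for hyperparameter selection.
# SEARCH_SPACE and fitness_fn are hypothetical placeholders.
import random

SEARCH_SPACE = {
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [16, 32, 64],
    "dropout": [0.2, 0.3, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.3):
    child = dict(ind)
    for k, options in SEARCH_SPACE.items():
        if random.random() < rate:
            child[k] = random.choice(options)
    return child

def ga_search(fitness_fn, pop_size=8, generations=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness_fn, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness_fn)
```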
Collapse
Affiliation(s)
| | - Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut 13-5053, Lebanon
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
| | - Robertas Damaševičius
- Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania;
| | - Areej Alasiry
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia; (A.A.); (M.M.)
| | - Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia; (A.A.); (M.M.)
| | - Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia;
| | - Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
| |
Collapse
|
29
|
Huang Z, Wu J, Wang T, Li Z, Ioannou A. Class-Specific Distribution Alignment for semi-supervised medical image classification. Comput Biol Med 2023; 164:107280. [PMID: 37517324 DOI: 10.1016/j.compbiomed.2023.107280] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 07/11/2023] [Accepted: 07/16/2023] [Indexed: 08/01/2023]
Abstract
Despite the success of deep neural networks in medical image classification, the problem remains challenging as data annotation is time-consuming, and the class distribution is imbalanced due to the relative scarcity of diseases. To address this problem, we propose Class-Specific Distribution Alignment (CSDA), a semi-supervised learning framework based on self-training that is suited to learning from highly imbalanced datasets. Specifically, we first provide a new perspective on distribution alignment by considering the process as a change of basis in the vector space spanned by marginal predictions, and then derive CSDA to capture class-dependent marginal predictions on both labeled and unlabeled data, in order to avoid bias towards majority classes. Furthermore, we propose a Variable Condition Queue (VCQ) module to maintain a proportionately balanced number of unlabeled samples for each class. Experiments on three public datasets, HAM10000, CheXpert and Kvasir, show that our method provides competitive performance on semi-supervised skin disease, thoracic disease, and endoscopic image classification tasks.
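As background for the alignment idea, the snippet below shows the standard distribution alignment baseline in NumPy: batch predictions are rescaled so their empirical marginal matches a target class prior, then renormalized. CSDA's class-specific, change-of-basis formulation extends this; the target_marginal here is an assumed input, e.g. the labeled-set class frequencies.

```python
# Sketch: standard distribution alignment for semi-supervised learning.
# target_marginal is a hypothetical class prior supplied by the caller.
import numpy as np

def align_distribution(probs, target_marginal, eps=1e-8):
    """probs: (N, C) softmax outputs; target_marginal: (C,) desired class prior."""
    current = probs.mean(axis=0) + eps              # empirical marginal on the batch
    aligned = probs * (np.asarray(target_marginal) / current)
    return aligned / aligned.sum(axis=1, keepdims=True)  # renormalize rows
```

On an imbalanced dataset this counteracts the drift of pseudo-labels toward majority classes, which is the failure mode the abstract highlights.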
Collapse
Affiliation(s)
- Zhongzheng Huang
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China; College of Computer and Data Science, Fuzhou University, Fuzhou, China
| | - Jiawei Wu
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China; College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou, China
| | - Tao Wang
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China; International Digital Economy College, Minjiang University, Fuzhou, China.
| | - Zuoyong Li
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China.
| | - Anastasia Ioannou
- International Digital Economy College, Minjiang University, Fuzhou, China; Department of Computer Science and Engineering, European University Cyprus, Nicosia, Cyprus
| |
Collapse
|
30
|
Radhika V, Chandana BS. MSCDNet-based multi-class classification of skin cancer using dermoscopy images. PeerJ Comput Sci 2023; 9:e1520. [PMID: 37705664 PMCID: PMC10495937 DOI: 10.7717/peerj-cs.1520] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 07/18/2023] [Indexed: 09/15/2023]
Abstract
Background Skin cancer is a life-threatening disease, and early detection of skin cancer improves the chances of recovery. Skin cancer detection based on deep learning algorithms has recently grown popular. In this research, a new deep learning-based network model for the multiple skin cancer classification including melanoma, benign keratosis, melanocytic nevi, and basal cell carcinoma is presented. We propose an automatic Multi-class Skin Cancer Detection Network (MSCD-Net) model in this research. Methods The study proposes an efficient semantic segmentation deep learning model "DenseUNet" for skin lesion segmentation. The semantic skin lesions are segmented by using the DenseUNet model with a substantially deeper network and fewer trainable parameters. Some of the most relevant features are selected using Binary Dragonfly Algorithm (BDA). SqueezeNet-based classification can be made in the selected features. Results The performance of the proposed model is evaluated using the ISIC 2019 dataset. The DenseNet connections and UNet links are used by the proposed DenseUNet segmentation model, which produces low-level features and provides better segmentation results. The performance results of the proposed MSCD-Net model are superior to previous research in terms of effectiveness and efficiency on the standard ISIC 2019 dataset.
Collapse
Affiliation(s)
| | - B. Sai Chandana
- School of Computer Science Engineering, VIT-AP University, Amaravathi, India
| |
Collapse
|
31
|
Zhang Z, Ye S, Liu Z, Wang H, Ding W. Deep Hyperspherical Clustering for Skin Lesion Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:3770-3781. [PMID: 37022227 DOI: 10.1109/jbhi.2023.3240297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Diagnosis of skin lesions based on imaging techniques remains a challenging task because data (knowledge) uncertainty may reduce accuracy and lead to imprecise results. This paper investigates a new deep hyperspherical clustering (DHC) method for skin lesion medical image segmentation by combining deep convolutional neural networks and the theory of belief functions (TBF). The proposed DHC aims to eliminate the dependence on labeled data, improve segmentation performance, and characterize the imprecision caused by data (knowledge) uncertainty. First, the SLIC superpixel algorithm is employed to group the image into multiple meaningful superpixels, aiming to maximize the use of context without destroying boundary information. Second, an autoencoder network is designed to transform the superpixels' information into potential features. Third, a hypersphere loss is developed to train the autoencoder network. The loss is defined to map the input to a pair of hyperspheres so that the network can perceive tiny differences. Finally, the result is redistributed to characterize the imprecision caused by data (knowledge) uncertainty based on the TBF. The proposed DHC method can well characterize the imprecision between skin lesions and non-lesions, which is particularly important for medical procedures. A series of experiments on four dermoscopic benchmark datasets demonstrate that the proposed DHC yields better segmentation performance, increasing the accuracy of predictions while perceiving imprecise regions, compared with other typical methods.
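The first stage (grouping the image into meaningful superpixels) maps directly onto scikit-image's SLIC implementation. The sketch below computes superpixels and a trivial mean-color descriptor per superpixel as a stand-in for the autoencoder's learned features; the file name and parameter values are hypothetical.

```python
# Sketch: SLIC superpixels plus a simple per-superpixel descriptor.
# "lesion.jpg" is a hypothetical input; the mean colour stands in for the
# autoencoder features learned in the paper.
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic

image = imread("lesion.jpg")                  # RGB dermoscopic image
segments = slic(image, n_segments=200, compactness=10, start_label=0)

features = np.stack([image[segments == s].mean(axis=0)
                     for s in np.unique(segments)])
print(features.shape)                         # (n_superpixels, 3)
```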
Collapse
|
32
|
Zou X, Zhai J, Qian S, Li A, Tian F, Cao X, Wang R. Improved breast ultrasound tumor classification using dual-input CNN with GAP-guided attention loss. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:15244-15264. [PMID: 37679179 DOI: 10.3934/mbe.2023682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/09/2023]
Abstract
Ultrasonography is a widely used medical imaging technique for detecting breast cancer. While manual diagnostic methods are subject to variability and are time-consuming, computer-aided diagnostic (CAD) methods have proven to be more efficient. However, current CAD approaches neglect the impact of noise and artifacts on the accuracy of image analysis. To enhance the precision of breast ultrasound image analysis for identifying tissues, organs and lesions, we propose a novel approach for improved tumor classification through a dual-input model and a global average pooling (GAP)-guided attention loss function. Our approach leverages a convolutional neural network with transformer architecture and modifies the single-input model for dual input. This technique employs a fusion module and a GAP operation-guided attention loss function simultaneously to supervise the extraction of effective features from the target region and mitigate the effect of information loss or redundancy on misclassification. Our proposed method has three key features: (i) ResNet and MobileViT are combined to enhance local and global information extraction. In addition, a dual-input channel is designed to include both attention images and original breast ultrasound images, mitigating the impact of noise and artifacts in ultrasound images. (ii) A fusion module and GAP operation-guided attention loss function are proposed to improve the fusion of dual-channel feature information, as well as to supervise and constrain the weight of the attention mechanism on the fused focus region. (iii) Using a collected uterine fibroid ultrasound dataset to pre-train ResNet18 and loading the pre-trained weights, our experiments on the BUSI and BUSC public datasets demonstrate that the proposed method outperforms some state-of-the-art methods. The code will be publicly released at https://github.com/425877/Improved-Breast-Ultrasound-Tumor-Classification.
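The GAP-guided attention loss is described only at a high level above; one plausible reading, sketched below in PyTorch, applies global average pooling to features inside versus outside the attended region and penalizes the model when background statistics dominate. This is an assumed form for illustration, not the authors' exact loss.

```python
# Sketch (assumed form) of a GAP-guided attention loss: encourage pooled
# features inside the attention map to dominate those outside it.
import torch
import torch.nn.functional as F

def gap_attention_loss(feature_map, attention_map):
    """feature_map: (N, C, H, W); attention_map: (N, 1, h, w) with values in [0, 1]."""
    attn = F.interpolate(attention_map, size=feature_map.shape[-2:],
                         mode="bilinear", align_corners=False)
    fg = (feature_map * attn).mean(dim=(2, 3))        # GAP over attended region
    bg = (feature_map * (1 - attn)).mean(dim=(2, 3))  # GAP over the background
    # Hinge: penalize batches where background energy exceeds foreground energy.
    return F.relu(bg.norm(dim=1) - fg.norm(dim=1)).mean()
```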
Collapse
Affiliation(s)
- Xiao Zou
- School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
| | - Jintao Zhai
- School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
| | - Shengyou Qian
- School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
| | - Ang Li
- School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
| | - Feng Tian
- School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
| | - Xiaofei Cao
- College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
| | - Runmin Wang
- College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
| |
Collapse
|
33
|
FixMatch-LS: Semi-supervised skin lesion classification with label smoothing. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104709] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
|
34
|
Fu Y, Xue P, Zhang Z, Dong E. PKA 2-Net: Prior Knowledge-Based Active Attention Network for Accurate Pneumonia Diagnosis on Chest X-Ray Images. IEEE J Biomed Health Inform 2023; 27:3513-3524. [PMID: 37058372 DOI: 10.1109/jbhi.2023.3267057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/15/2023]
Abstract
To accurately diagnose pneumonia patients on a limited annotated chest X-ray image dataset, a prior knowledge-based active attention network (PKA2-Net) was constructed. The PKA2-Net uses an improved ResNet as the backbone network and consists of residual blocks, novel subject enhancement and background suppression (SEBS) blocks, and candidate template generators, where the template generators are designed to generate candidate templates for characterizing the importance of different spatial locations in feature maps. The core of PKA2-Net is the SEBS block, which is proposed based on the prior knowledge that highlighting distinctive features and suppressing irrelevant features can improve the recognition effect. The purpose of the SEBS block is to generate active attention features without any high-level features and enhance the ability of the model to localize lung lesions. In the SEBS block, first, a series of candidate templates T with different spatial energy distributions are generated, and the controllability of the energy distribution in T enables active attention features to maintain the continuity and integrity of the feature space distributions. Second, Top-n templates are selected from T according to certain learning rules, which are then operated on by a convolution layer to generate supervision information that guides the inputs of the SEBS block to form active attention features. We evaluated PKA2-Net on the binary classification problem of identifying pneumonia versus healthy controls on a dataset containing 5856 chest X-ray images (ChestXRay2017); the results showed that our method can achieve 97.63% accuracy and 0.9872 sensitivity.
Collapse
|
35
|
Li Q, Chen M, Geng J, Adamu MJ, Guan X. High-Resolution Network with Dynamic Convolution and Coordinate Attention for Classification of Chest X-ray Images. Diagnostics (Basel) 2023; 13:2165. [PMID: 37443559 DOI: 10.3390/diagnostics13132165] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 06/12/2023] [Accepted: 06/21/2023] [Indexed: 07/15/2023] Open
Abstract
The development of automatic chest X-ray (CXR) disease classification algorithms is significant for diagnosing thoracic diseases. Owing to the characteristics of lesions in CXR images, including high similarity in disease appearance, varied sizes, and different occurrence locations, most existing convolutional neural network-based methods extract insufficient features for thoracic lesions and struggle to adapt to changes in lesion size and location. To address these issues, this study proposes a high-resolution classification network with dynamic convolution and coordinate attention (HRCC-Net). In the method, this study proposes a parallel multi-resolution network in which a high-resolution branch acquires essential detailed features of the lesion, and multi-resolution feature swapping and fusion provide multiple receptive fields to adequately extract complicated disease features. Furthermore, this study proposes dynamic convolution to enhance the network's ability to represent multi-scale information and accommodate lesions of diverse scales. In addition, this study introduces a coordinate attention mechanism, which enables automatic focus on pathologically relevant regions and captures variations in lesion location. The proposed method is evaluated on the ChestX-ray14 and CheXpert datasets. The average AUC (area under the ROC curve) values reach 0.845 and 0.913, respectively, indicating this method's advantages compared with currently available methods. Meanwhile, judged by the specificity and sensitivity used to measure medical diagnostic systems, the network can improve diagnostic efficiency while reducing the misdiagnosis rate. The proposed algorithm has great potential for thoracic disease diagnosis and treatment.
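Coordinate attention is a published module (Hou et al., CVPR 2021), so its structure can be sketched concretely: global pooling is factorized into two 1D poolings along height and width, so the resulting attention weights retain positional information. The reduction ratio and layer choices below are typical defaults, not necessarily HRCC-Net's exact settings.

```python
# Sketch of a coordinate attention block (after Hou et al., CVPR 2021).
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([pool_h, pool_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w  # position-aware channel reweighting
```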
Collapse
Affiliation(s)
- Qiang Li
- School of Microelectronics, Tianjin University, Tianjin 300072, China
| | - Mingyu Chen
- School of Microelectronics, Tianjin University, Tianjin 300072, China
| | - Jingjing Geng
- School of Microelectronics, Tianjin University, Tianjin 300072, China
| | | | - Xin Guan
- School of Microelectronics, Tianjin University, Tianjin 300072, China
| |
Collapse
|
36
|
Wu H, Huang X, Guo X, Wen Z, Qin J. Cross-Image Dependency Modeling for Breast Ultrasound Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1619-1631. [PMID: 37018315 DOI: 10.1109/tmi.2022.3233648] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
We present a novel deep network (namely BUSSeg) equipped with both within- and cross-image long-range dependency modeling for automated lesions segmentation from breast ultrasound images, which is a quite daunting task due to (1) the large variation of breast lesions, (2) the ambiguous lesion boundaries, and (3) the existence of speckle noise and artifacts in ultrasound images. Our work is motivated by the fact that most existing methods only focus on modeling the within-image dependencies while neglecting the cross-image dependencies, which are essential for this task under limited training data and noise. We first propose a novel cross-image dependency module (CDM) with a cross-image contextual modeling scheme and a cross-image dependency loss (CDL) to capture more consistent feature expression and alleviate noise interference. Compared with existing cross-image methods, the proposed CDM has two merits. First, we utilize more complete spatial features instead of commonly used discrete pixel vectors to capture the semantic dependencies between images, mitigating the negative effects of speckle noise and making the acquired features more representative. Second, the proposed CDM includes both intra- and inter-class contextual modeling rather than just extracting homogeneous contextual dependencies. Furthermore, we develop a parallel bi-encoder architecture (PBA) to tame a Transformer and a convolutional neural network to enhance BUSSeg's capability in capturing within-image long-range dependencies and hence offer richer features for CDM. We conducted extensive experiments on two representative public breast ultrasound datasets, and the results demonstrate that the proposed BUSSeg consistently outperforms state-of-the-art approaches in most metrics.
Collapse
|
37
|
Ran B, Huang B, Liang S, Hou Y. Surgical Instrument Detection Algorithm Based on Improved YOLOv7x. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23115037. [PMID: 37299761 DOI: 10.3390/s23115037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 05/19/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
The counting of surgical instruments is an important task to ensure surgical safety and patient health. However, due to the uncertainty of manual operations, there is a risk of missing or miscounting instruments. Applying computer vision technology to the instrument counting process can not only improve efficiency, but also reduce medical disputes and promote the development of medical informatization. However, during the counting process, surgical instruments may be densely arranged or obstruct each other, and they may be affected by different lighting environments, all of which can affect the accuracy of instrument recognition. In addition, similar instruments may have only minor differences in appearance and shape, which increases the difficulty of identification. To address these issues, this paper improves the YOLOv7x object detection algorithm and applies it to the surgical instrument detection task. First, the RepLK Block module is introduced into the YOLOv7x backbone network, which can increase the effective receptive field and guide the network to learn more shape features. Second, the ODConv structure is introduced into the neck module of the network, which can significantly enhance the feature extraction ability of the basic convolution operation of the CNN and capture more rich contextual information. At the same time, we created the OSI26 data set, which contains 452 images and 26 surgical instruments, for model training and evaluation. The experimental results show that our improved algorithm exhibits higher accuracy and robustness in surgical instrument detection tasks, with F1, AP, AP50, and AP75 reaching 94.7%, 91.5%, 99.1%, and 98.2%, respectively, which are 4.6%, 3.1%, 3.6%, and 3.9% higher than the baseline. Compared to other mainstream object detection algorithms, our method has significant advantages. These results demonstrate that our method can more accurately identify surgical instruments, thereby improving surgical safety and patient health.
Collapse
Affiliation(s)
- Boping Ran
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
| | - Bo Huang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
| | - Shunpan Liang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
| | - Yulei Hou
- School of Mechanical Engineering, Yanshan University, Qinhuangdao 066000, China
| |
Collapse
|
38
|
Zhang Y, Xie F, Chen J. TFormer: A throughout fusion transformer for multi-modal skin lesion diagnosis. Comput Biol Med 2023; 157:106712. [PMID: 36907033 DOI: 10.1016/j.compbiomed.2023.106712] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 01/27/2023] [Accepted: 02/26/2023] [Indexed: 03/04/2023]
Abstract
Multi-modal skin lesion diagnosis (MSLD) has achieved remarkable success with modern computer-aided diagnosis (CAD) technology based on deep convolutions. However, information aggregation across modalities in MSLD remains challenging due to severely unaligned spatial resolutions (e.g., dermoscopic image and clinical image) and heterogeneous data (e.g., dermoscopic image and patients' meta-data). Limited by their intrinsically local attention, most recent MSLD pipelines using pure convolutions struggle to capture representative features in shallow layers, so fusion across different modalities is usually done at the end of the pipelines, even at the last layer, leading to insufficient information aggregation. To tackle the issue, we introduce a pure transformer-based method, which we refer to as the "Throughout Fusion Transformer (TFormer)", for sufficient information integration in MSLD. Different from the existing approaches with convolutions, the proposed network leverages a transformer as the feature extraction backbone, bringing more representative shallow features. We then carefully design a stack of dual-branch hierarchical multi-modal transformer (HMT) blocks to fuse information across different image modalities in a stage-by-stage way. With the aggregated information of image modalities, a multi-modal transformer post-fusion (MTP) block is designed to integrate features across image and non-image data. Such a strategy, in which the image modalities are fused first and the heterogeneous data afterwards, enables us to better divide and conquer the two major challenges while ensuring inter-modality dynamics are effectively modeled. Experiments conducted on the public Derm7pt dataset validate the superiority of the proposed method. Our TFormer achieves an average accuracy of 77.99% and a diagnostic accuracy of 80.03%, outperforming other state-of-the-art methods. Ablation experiments also suggest the effectiveness of our designs. The code is publicly available at https://github.com/zylbuaa/TFormer.git.
Collapse
Affiliation(s)
- Yilan Zhang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
| | - Fengying Xie
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China.
| | - Jianqi Chen
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
| |
Collapse
|
39
|
Yang T, He Q, Huang L. OM-NAS: pigmented skin lesion image classification based on a neural architecture search. BIOMEDICAL OPTICS EXPRESS 2023; 14:2153-2165. [PMID: 37206141 PMCID: PMC10191671 DOI: 10.1364/boe.483828] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 03/07/2023] [Accepted: 04/05/2023] [Indexed: 05/21/2023]
Abstract
Because pigmented skin lesion image classification based on manually designed convolutional neural networks (CNNs) requires abundant experience in neural network design and considerable parameter tuning, we proposed the macro operation mutation-based neural architecture search (OM-NAS) approach to automatically build a CNN for image classification of pigmented skin lesions. We first used an improved search space that was oriented toward cells and contained micro and macro operations. The macro operations include InceptionV1, Fire, and other well-designed neural network modules. During the search process, an evolutionary algorithm based on macro operation mutation was employed to iteratively change the operation type and connection mode of parent cells so that a macro operation was inserted into the child cell, similar to the injection of a virus into host DNA. Ultimately, the best searched cells were stacked to build a CNN for the image classification of pigmented skin lesions, which was then assessed on the HAM10000 and ISIC2017 datasets. The test results showed that the CNN built with this approach was more accurate than or almost as accurate as state-of-the-art (SOTA) approaches such as AmoebaNet, InceptionV3 + Attention, and ARL-CNN in terms of image classification. The average sensitivity of this method on the HAM10000 and ISIC2017 datasets was 72.4% and 58.5%, respectively.
Collapse
Affiliation(s)
- Tiejun Yang
- College of Intelligent Medicine and Biotechnology,
Guilin Medical University, Guilin, 541199 Guangxi, China
| | - Qing He
- Guangxi Key Laboratory of Embedded Technology and Intelligent System,
Guilin University of Technology, Guilin, 541006 Guangxi, China
| | - Lin Huang
- Guangxi Key Laboratory of Embedded Technology and Intelligent System,
Guilin University of Technology, Guilin, 541006 Guangxi, China
| |
Collapse
|
40
|
Wang L, Zhang L, Shu X, Yi Z. Intra-class consistency and inter-class discrimination feature learning for automatic skin lesion classification. Med Image Anal 2023; 85:102746. [PMID: 36638748 DOI: 10.1016/j.media.2023.102746] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 10/24/2022] [Accepted: 01/05/2023] [Indexed: 01/09/2023]
Abstract
Automated skin lesion classification has been proved to be capable of improving the diagnostic performance for dermoscopic images. Although many successes have been achieved, accurate classification remains challenging due to the significant intra-class variation and inter-class similarity. In this article, a deep learning method is proposed to increase the intra-class consistency as well as the inter-class discrimination of learned features in the automatic skin lesion classification. To enhance the inter-class discriminative feature learning, a CAM-based (class activation mapping) global-lesion localization module is proposed by optimizing the distance of CAMs for the same dermoscopic image generated by different skin lesion tasks. Then, a global features guided intra-class similarity learning module is proposed to generate the class center according to the deep features of all samples in one class and the history feature of one sample during the learning process. In this way, the performance can be improved with the collaboration of CAM-based inter-class feature discriminating and global features guided intra-class feature concentrating. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on the ISIC-2017 and ISIC-2018 datasets. Experimental results with different backbones have demonstrated that the proposed method has good generalizability and can adaptively focus on more discriminative regions of the skin lesion.
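The CAM-based localization idea, optimizing the distance between CAMs produced for the same image by different tasks, can be captured by a simple consistency loss. The cosine-distance form below is one plausible instantiation for illustration, not necessarily the distance the authors use.

```python
# Sketch: a CAM consistency loss between two task heads' class activation
# maps of the same image, nudging both to attend to the same lesion region.
import torch
import torch.nn.functional as F

def cam_consistency_loss(cam_a: torch.Tensor, cam_b: torch.Tensor) -> torch.Tensor:
    """cam_a, cam_b: (N, H, W) class activation maps for the same images."""
    cam_a = F.normalize(cam_a.flatten(1), dim=1)  # unit-norm flattened maps
    cam_b = F.normalize(cam_b.flatten(1), dim=1)
    return (1 - (cam_a * cam_b).sum(dim=1)).mean()  # mean cosine distance
```

Adding such a term to the usual classification losses is the standard way to make two heads agree on where the discriminative evidence lies.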
Collapse
Affiliation(s)
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| | - Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China.
| | - Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| | - Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| |
Collapse
|
41
|
Baskaran D, Nagamani Y, Merugula S, Premnath SP. MSRFNet for skin lesion segmentation and deep learning with hybrid optimization for skin cancer detection. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2187518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/29/2023]
|
42
|
Bharathi G, Malleswaran M, Muthupriya V. Detection and diagnosis of melanoma skin cancers in dermoscopic images using pipelined internal module architecture (PIMA) method. Microsc Res Tech 2023; 86:701-713. [PMID: 36860140 DOI: 10.1002/jemt.24307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Revised: 02/10/2023] [Accepted: 02/13/2023] [Indexed: 03/03/2023]
Abstract
Detection and diagnosis of melanoma skin cancer are important for saving human lives. The main objective of this article is to perform both detection and diagnosis of skin cancers in dermoscopy images, with both systems using deep learning architectures for effective performance improvement. The detection process identifies cancer-affected dermoscopy images, and the diagnosis process estimates the severity level of the segmented cancer regions. This article proposes a parallel CNN architecture for the classification of skin images as either melanoma or healthy. Initially, a color map histogram equalization (CMHE) method is proposed to enhance the source skin images, and then thick and thin edges are detected from the enhanced skin image using a fuzzy system. Gray-level co-occurrence matrix (GLCM) and Law's texture features are extracted from the edge-detected images, and these features are optimized using a genetic algorithm (GA) approach. Further, the optimized features are classified by the developed pipelined internal module architecture (PIMA) deep learning structure. The cancer regions in the classified melanoma skin images are segmented using a mathematical morphological process, and these segmented regions are diagnosed as either mild or severe using the proposed PIMA structure. The proposed PIMA-based skin cancer classification system is applied and tested on the ISIC and HAM10000 skin image datasets. RESEARCH HIGHLIGHTS: Melanoma skin cancer is detected and classified using dermoscopy images. The skin dermoscopy images are enhanced using color map histogram equalization. GLCM and Law's texture features are extracted from the enhanced skin images. A pipelined internal module architecture (PIMA) is proposed for the classification of skin images.
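The GLCM texture descriptors mentioned above are available directly in scikit-image. The sketch below extracts four Haralick-style statistics at four orientations from a grayscale lesion image; the distance and angle settings are common defaults, not the paper's stated configuration.

```python
# Sketch: GLCM texture features with scikit-image.
# gray_image must be a 2-D uint8 array (e.g., a grayscale lesion crop).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image: np.ndarray, levels: int = 256) -> np.ndarray:
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # One value per (distance, angle) pair and property: 16 features here.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```

A feature vector like this would then be the input to the GA-based selection step the abstract describes.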
Collapse
Affiliation(s)
- G Bharathi
- Faculty of Department of Electronics and Communication Engineering, Ranippettai Engineering College, Ranipet, Tamil Nadu, India
| | - M Malleswaran
- Department of Electronics and Communication, Anna University, Chennai, India
| | - V Muthupriya
- Department of Computer Science Engineering, B.S AbdurRahman Crescent Institute of Science and Technology, Chennai, India
| |
Collapse
|
43
|
Hasan MK, Ahamad MA, Yap CH, Yang G. A survey, review, and future trends of skin lesion segmentation and classification. Comput Biol Med 2023; 155:106624. [PMID: 36774890 DOI: 10.1016/j.compbiomed.2023.106624] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 01/04/2023] [Accepted: 01/28/2023] [Indexed: 02/03/2023]
Abstract
The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently indicated increasing interest in developing such CAD systems, with the intention of providing a user-friendly tool to dermatologists to reduce the challenges encountered or associated with manual inspection. This article aims to provide a comprehensive literature survey and review of a total of 594 publications (356 for skin lesion segmentation and 238 for skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the methods for the development of CAD systems. These ways include: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentations, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We intend to investigate a variety of performance-enhancing approaches, including ensemble and post-processing. We also discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems using minimal datasets, as well as the potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
Affiliation(s)
- Md Kamrul Hasan
- Department of Bioengineering, Imperial College London, UK; Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh.
- Md Asif Ahamad
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh.
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, UK.
- Guang Yang
- National Heart and Lung Institute, Imperial College London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, UK.
44
Liu Z, Xiong R, Jiang T. CI-Net: Clinical-Inspired Network for Automated Skin Lesion Recognition. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:619-632. [PMID: 36279355 DOI: 10.1109/tmi.2022.3215547] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The lesion recognition of dermoscopy images is significant for automated skin cancer diagnosis. Most existing methods ignore the medical perspective, which is crucial since this task requires substantial medical knowledge. A few methods are designed around medical knowledge, but they do not fully follow doctors' entire learning and diagnosis process, which involves specific strategies and steps in practice. We therefore propose the Clinical-Inspired Network (CI-Net), which incorporates doctors' learning strategy and diagnosis process for better analysis. The diagnostic process contains three main steps: zoom, observe, and compare. To simulate these, we introduce three corresponding modules: a lesion area attention module, a feature extraction module, and a lesion feature attention module. To simulate the distinguishing strategy commonly used by doctors, we introduce a distinguish module. We evaluate CI-Net on six challenging datasets (ISIC 2016, ISIC 2017, ISIC 2018, ISIC 2019, ISIC 2020, and PH2), and the results indicate that CI-Net outperforms existing work. The code is publicly available at https://github.com/lzh19961031/Dermoscopy_classification.
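To make the "zoom" idea concrete, here is a generic spatial-attention gate that re-weights feature maps toward salient regions. This is an assumption-laden stand-in, not CI-Net's actual lesion area attention module; the authors' repository linked above has the real implementation.

```python
# Rough illustration of the "zoom" idea only: a generic spatial-attention
# gate that re-weights feature maps toward salient (lesion) regions. This
# is a stand-in, not CI-Net's actual module.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel score

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.score(feats))  # (B, 1, H, W) in [0, 1]
        return feats * attn                      # emphasize attended regions

x = torch.randn(2, 64, 32, 32)        # dummy backbone features
print(SpatialAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```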
45
Wang Y, Su J, Xu Q, Zhong Y. A Collaborative Learning Model for Skin Lesion Segmentation and Classification. Diagnostics (Basel) 2023; 13:diagnostics13050912. [PMID: 36900056 PMCID: PMC10001355 DOI: 10.3390/diagnostics13050912] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 02/19/2023] [Accepted: 02/24/2023] [Indexed: 03/06/2023] Open
Abstract
The automatic segmentation and classification of skin lesions are two essential tasks in computer-aided skin cancer diagnosis. Segmentation detects the location and boundary of the skin lesion, while classification evaluates its type. The location and contour information provided by segmentation is essential for classifying skin lesions, while the disease class helps generate target localization maps that assist the segmentation task. Although segmentation and classification are usually studied independently, meaningful information can be extracted from the correlation between the two tasks, especially when sample data are insufficient. In this paper, we propose a collaborative learning deep convolutional neural network (CL-DCNN) model based on teacher-student learning for dermatological segmentation and classification. To generate high-quality pseudo-labels, we provide a self-training method: the segmentation network is selectively retrained on pseudo-labels screened by the classification network. Specifically, a reliability measure ensures that only high-quality pseudo-labels reach the segmentation network. We also employ class activation maps to improve the localization ability of the segmentation network, and feed lesion contour information from the segmentation masks back to the classification network to improve its recognition ability. Experiments were carried out on the ISIC 2017 and ISIC Archive datasets. The CL-DCNN model achieved a Jaccard index of 79.1% on skin lesion segmentation and an average AUC of 93.7% on skin disease classification, outperforming advanced skin lesion segmentation and classification methods.
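The pseudo-label screening step can be illustrated with a toy confidence filter. The paper's reliability measure is more involved than this; the threshold below is an invented assumption.

```python
# Hedged sketch of the screening idea: keep a segmentation pseudo-mask only
# when the classification network is confident about that image. The paper's
# reliability measure is richer; the confidence threshold here is invented.
import numpy as np

def screen_pseudo_labels(class_probs: np.ndarray, masks: list, tau: float = 0.9):
    """class_probs: (N, C) softmax outputs; masks: N candidate pseudo-masks."""
    keep = class_probs.max(axis=1) >= tau
    return [m for m, k in zip(masks, keep) if k]

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.08, 0.92]])
masks = ["mask_a", "mask_b", "mask_c"]     # stand-ins for mask arrays
print(screen_pseudo_labels(probs, masks))  # ['mask_a', 'mask_c']
```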
Affiliation(s)
- Ying Wang
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Jie Su
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Correspondence: ; Tel.: +86-15054125550
- Qiuyu Xu
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Yixin Zhong
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Artificial Intelligence Research Institute, University of Jinan, Jinan 250022, China
46
Mandal A, Priyam S, Chan HH, Gouveia BM, Guitera P, Song Y, Baker MAB, Vafaee F. Computer-Aided Diagnosis of Melanoma Subtypes Using Reflectance Confocal Images. Cancers (Basel) 2023; 15:1428. [PMID: 36900219 PMCID: PMC10000703 DOI: 10.3390/cancers15051428] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 02/16/2023] [Accepted: 02/20/2023] [Indexed: 03/03/2023] Open
Abstract
Lentigo maligna (LM) is an early form of pre-invasive melanoma that predominantly affects sun-exposed areas such as the face. LM is highly treatable when identified early, but it has an ill-defined clinical border and a high rate of recurrence. Atypical intraepidermal melanocytic proliferation (AIMP), also known as atypical melanocytic hyperplasia (AMH), is a histological description that indicates melanocytic proliferation with uncertain malignant potential. Clinically and histologically, AIMP can be difficult to distinguish from LM, and AIMP may, in some cases, progress to LM. Early diagnosis and distinction of LM from AIMP are important, since LM requires definitive treatment. Reflectance confocal microscopy (RCM) is an imaging technique often used to investigate these lesions non-invasively, without biopsy. However, RCM equipment is often not readily available, nor is the associated expertise for RCM image interpretation easy to find. Here, we implemented a machine learning classifier using popular convolutional neural network (CNN) architectures and demonstrated that it could correctly classify lesions as LM or AIMP on biopsy-confirmed RCM image stacks. We used local z-projection (LZP), a recent, fast approach for projecting a 3D image into 2D while preserving information, and achieved high-accuracy machine classification with minimal computational requirements.
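LZP is a published projection method with its own implementation; as a simpler stand-in, the sketch below shows the generic step it replaces: collapsing an RCM z-stack into a single 2D image before 2D CNN classification. The stack here is synthetic.

```python
# LZP is a published projection method; this shows only the generic idea of
# projecting an RCM z-stack (Z, H, W) into one 2D image for a 2D CNN.
import numpy as np

def max_intensity_projection(stack: np.ndarray) -> np.ndarray:
    """Global max projection along z; LZP instead projects within local
    neighborhoods to better preserve in-focus structure."""
    return stack.max(axis=0)

stack = np.random.rand(30, 128, 128)          # synthetic 30-slice stack
print(max_intensity_projection(stack).shape)  # (128, 128)
```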
Affiliation(s)
- Ankita Mandal
- School of Biotechnology and Biomolecular Sciences, University of New South Wales (UNSW Sydney), Sydney 2052, Australia
- Department of Mechanical Engineering, Indian Institute of Technology (IIT Delhi), Delhi 110016, India
- Siddhaant Priyam
- School of Biotechnology and Biomolecular Sciences, University of New South Wales (UNSW Sydney), Sydney 2052, Australia
- Department of Electrical Engineering, Indian Institute of Technology (IIT Delhi), Delhi 110016, India
- Hsien Herbert Chan
- Department of Dermatology, Princess Alexandra Hospital, Brisbane 4102, Australia
- Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, Sydney 2006, Australia
- Melanoma Institute Australia, The University of Sydney, Sydney 2006, Australia
- Bruna Melhoranse Gouveia
- Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, Sydney 2006, Australia
- Melanoma Institute Australia, The University of Sydney, Sydney 2006, Australia
- Pascale Guitera
- Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, Sydney 2006, Australia
- Melanoma Institute Australia, The University of Sydney, Sydney 2006, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales (UNSW Sydney), Sydney 2052, Australia
- Fatemeh Vafaee
- School of Biotechnology and Biomolecular Sciences, University of New South Wales (UNSW Sydney), Sydney 2052, Australia
- UNSW Data Science Hub, University of New South Wales (UNSW Sydney), Sydney 2052, Australia
47
Song X, Li J, Qian X. Diagnosis of Glioblastoma Multiforme Progression via Interpretable Structure-Constrained Graph Neural Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:380-390. [PMID: 36018877 DOI: 10.1109/tmi.2022.3202037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Glioblastoma multiforme (GBM) is the most common type of brain tumor, with high recurrence and mortality rates. After chemotherapy, GBM patients show a high rate of pseudoprogression (PsP), which is often confused with true tumor progression (TTP) due to high phenotypical similarity. Thus, it is crucial to construct an automated diagnosis model for differentiating between these two types of glioma progression. However, attaining this goal is impeded by limited data availability and the high demand for interpretability in clinical settings. In this work, we propose an interpretable structure-constrained graph neural network (ISGNN) with enhanced features to automatically discriminate between PsP and TTP. The network employs a metric-based meta-learning strategy to aggregate class-specific graph nodes and focus on meta-tasks associated with various small graphs, improving classification performance on small-scale datasets. Specifically, a node feature enhancement module accounts for the relative importance of node features and enhances their distinguishability through inductive learning, while a graph generation constraint module learns reasonable graph structures that improve the efficiency of information diffusion while avoiding propagation errors. Furthermore, model interpretability is naturally enhanced through the learned node features and graph structures, which are closely related to the classification results. Comprehensive experimental evaluation of our method demonstrated excellent interpretable results in the diagnosis of glioma progression. In general, our work provides a novel systematic GNN approach for dealing with data scarcity and enhancing decision interpretability. Our source code will be released at https://github.com/SJTUBME-QianLab/GBM-GNN.
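For readers unfamiliar with graph networks, a single generic graph-convolution step makes the "information diffusion over graph structures" idea concrete. The sketch below is textbook mean aggregation with self-loops, not the ISGNN architecture, which is considerably richer.

```python
# Generic single graph-convolution step (mean aggregation over neighbors
# plus self-loops); shown only to illustrate information diffusion, not
# the ISGNN itself.
import torch

def gcn_layer(X: torch.Tensor, A: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """X: (N, F) node features; A: (N, N) adjacency; W: (F, F') weights."""
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    deg = A_hat.sum(dim=1, keepdim=True)      # node degrees (>= 1)
    return torch.relu((A_hat / deg) @ X @ W)  # normalized neighbor average

X = torch.randn(5, 8)                 # 5 nodes, 8 features each
A = (torch.rand(5, 5) > 0.5).float()
A = ((A + A.t()) > 0).float()         # symmetrize adjacency
W = torch.randn(8, 4)
print(gcn_layer(X, A, W).shape)       # torch.Size([5, 4])
```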
48
Yanagisawa Y, Shido K, Kojima K, Yamasaki K. Convolutional neural network-based skin image segmentation model to improve classification of skin diseases in conventional and non-standardized picture images. J Dermatol Sci 2023; 109:30-36. [PMID: 36658056 DOI: 10.1016/j.jdermsci.2023.01.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 12/07/2022] [Accepted: 01/10/2023] [Indexed: 01/13/2023]
Abstract
BACKGROUND In dermatological practice, non-standardized conventional photographs are collected with variable fields of view, ranging from close-ups of designated lesions to long shots that include normal skin and body-surface background. Computer-aided detection/diagnosis (CAD) models trained on such non-standardized photographs perform worse than CAD models that detect lesions in a localized small area, such as dermoscopic images. OBJECTIVE We aimed to develop a convolutional neural network (CNN) model for skin image segmentation to generate a skin disease image dataset suitable for CAD of multiple skin disease classification. METHODS We trained a DeepLabv3+-based CNN segmentation model to detect skin and lesion areas, and kept segmented crops satisfying two conditions: more than 80% of the crop is skin area, and more than 10% is lesion area. RESULTS The generated CNN-segmented image database was examined using CAD of skin disease classification and achieved approximately 90% sensitivity and specificity in differentiating atopic dermatitis from malignant diseases and complications such as mycosis fungoides, impetigo, and herpesvirus infection. The accuracy of skin disease classification on the CNN-segmented image dataset was almost equal to that of the manually cropped image dataset and higher than that of the original image dataset. CONCLUSION Our CNN segmentation model, which automatically extracts lesion and skin areas regardless of the field of view, will reduce the burden of physician annotation and improve CAD performance.
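The METHODS keep/discard rule is concrete enough to transcribe directly; the sketch below encodes it over boolean masks. The mask contents are toy placeholders, and the segmentation model that would produce them is out of scope here.

```python
# Direct transcription of the stated keep/discard rule: retain a segmented
# crop only if skin covers more than 80% of it and lesion more than 10%.
# Mask contents below are toy placeholders.
import numpy as np

def keep_crop(skin_mask: np.ndarray, lesion_mask: np.ndarray) -> bool:
    total = skin_mask.size
    return (skin_mask.sum() / total > 0.80) and (lesion_mask.sum() / total > 0.10)

skin = np.ones((100, 100), dtype=bool)  # whole crop is skin
lesion = np.zeros((100, 100), dtype=bool)
lesion[:40, :40] = True                 # lesion covers 16% of the crop
print(keep_crop(skin, lesion))          # True
```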
Affiliation(s)
- Kosuke Shido
- Department of Dermatology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Kaname Kojima
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan.
- Kenshi Yamasaki
- Department of Dermatology, Tohoku University Graduate School of Medicine, Sendai, Japan.
49
Zafar M, Sharif MI, Sharif MI, Kadry S, Bukhari SAC, Rauf HT. Skin Lesion Analysis and Cancer Detection Based on Machine/Deep Learning Techniques: A Comprehensive Survey. Life (Basel) 2023; 13:146. [PMID: 36676093 PMCID: PMC9864434 DOI: 10.3390/life13010146] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 12/25/2022] [Accepted: 12/28/2022] [Indexed: 01/06/2023] Open
Abstract
The skin is the human body's largest organ, and skin cancer is considered among the most dangerous kinds of cancer. Various pathological variations in the human body can cause abnormal cell growth due to genetic disorders, and such changes in skin cells are very dangerous. Skin cancer gradually spreads to other parts of the body, and because of its high mortality rate, early diagnosis is essential. Visual checkup and manual examination of skin lesions are unreliable means of determining skin cancer. Considering these concerns, numerous early recognition approaches have been proposed for skin cancer. With the fast progression of computer-aided diagnosis systems, a variety of deep learning, machine learning, and computer vision approaches have been combined for the analysis of medical samples and uncommon skin lesion samples. This research provides an extensive literature review of the methodologies, techniques, and approaches applied to the examination of skin lesions to date, covering preprocessing, segmentation, feature extraction, feature selection, and classification approaches for skin cancer recognition. The results of these approaches are impressive, but challenges remain in the analysis of skin lesions because of complex and rare features. Hence, the main objective is to examine the existing techniques for skin cancer detection and to identify the remaining obstacles, helping researchers contribute to future research.
Affiliation(s)
- Mehwish Zafar
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Muhammad Irfan Sharif
- Department of Computer Science, University of Education, Jauharabad Campus, Khushāb 41200, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
- Syed Ahmad Chan Bukhari
- Division of Computer Science, Mathematics and Science, Collins College of Professional Studies, St. John's University, Queens, NY 11439, USA
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
50
Jalaboi R, Faye F, Orbes-Arteaga M, Jørgensen D, Winther O, Galimzianova A. DermX: An end-to-end framework for explainable automated dermatological diagnosis. Med Image Anal 2023; 83:102647. [PMID: 36272237 DOI: 10.1016/j.media.2022.102647] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 08/17/2022] [Accepted: 09/27/2022] [Indexed: 11/06/2022]
Abstract
Dermatological diagnosis automation is essential for addressing the high prevalence of skin diseases and the critical shortage of dermatologists. Although convolutional neural networks (ConvNets) approach expert-level diagnostic performance, their adoption in clinical practice is impeded by limited explainability and by subjective, expensive explainability validation. We introduce DermX, an end-to-end framework for explainable automated dermatological diagnosis. DermX is a clinically inspired, explainable dermatological diagnosis ConvNet trained using DermXDB, a 554-image dataset annotated by eight dermatologists with diagnoses, supporting explanations, and explanation attention maps. DermX+ extends DermX with guided attention training for explanation attention maps. Both methods achieve near-expert diagnosis performance, with DermX, DermX+, and dermatologist F1 scores of 0.79, 0.79, and 0.87, respectively. We assess explanation performance in terms of identification and localization: identification compares model-selected with dermatologist-selected explanations, while localization compares gradient-weighted class-activation maps with dermatologist explanation maps. DermX obtained an identification F1 score of 0.77, while DermX+ obtained 0.79; the localization F1 score is 0.39 for DermX and 0.35 for DermX+. These results show that explainability does not necessarily come at the expense of predictive power, as our high-performance models provide expert-inspired explanations for their diagnoses without lowering diagnosis performance.
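The identification metric described above, an F1 score between explanation tag sets, can be sketched in a few lines. The tag names below are invented placeholders, not the DermXDB vocabulary, and the reported scores come from far larger comparisons.

```python
# Sketch of the identification metric: an F1 score between the sets of
# explanation tags selected by the model and by a dermatologist. The tag
# names are invented placeholders, not the DermXDB vocabulary.
def explanation_f1(predicted: set, reference: set) -> float:
    if not predicted or not reference:
        return 0.0
    tp = len(predicted & reference)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)

model_tags = {"scale", "erythema", "well-demarcated"}
expert_tags = {"scale", "erythema", "plaque"}
print(round(explanation_f1(model_tags, expert_tags), 2))  # 0.67
```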
Affiliation(s)
- Raluca Jalaboi
- Department of Applied Mathematics and Computer Science at the Technical University of Denmark, Richard Petersens Plads, Building 324, DK-2800 Kongens Lyngby, Denmark; Omhu A/S, Silkegade 8 st, DK-1113 Copenhagen C, Denmark.
- Frederik Faye
- Omhu A/S, Silkegade 8 st, DK-1113 Copenhagen C, Denmark
- Dan Jørgensen
- Omhu A/S, Silkegade 8 st, DK-1113 Copenhagen C, Denmark
- Ole Winther
- Department of Applied Mathematics and Computer Science at the Technical University of Denmark, Richard Petersens Plads, Building 324, DK-2800 Kongens Lyngby, Denmark; Bioinformatics Centre, Department of Biology, University of Copenhagen, Copenhagen, Denmark; Center for Genomic Medicine, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark