1. Liu Y, Li C, Li F, Lin R, Zhang D, Lian Y. Advances in computer vision and deep learning-facilitated early detection of melanoma. Brief Funct Genomics 2025; 24:elaf002. PMID: 40139223; PMCID: PMC11942789; DOI: 10.1093/bfgp/elaf002.
Abstract
Melanoma is characterized by its rapid progression and high mortality rates, making early and accurate detection essential for improving patient outcomes. This paper presents a comprehensive review of significant advancements in early melanoma detection, with a focus on integrating computer vision and deep learning techniques. This study investigates cutting-edge neural networks, including YOLO, GAN, Mask R-CNN, ResNet, and DenseNet, and explores their application in enhancing early melanoma detection and diagnosis. These models were critically evaluated for their capacity to enhance dermatological imaging and diagnostic accuracy, crucial for effective melanoma treatment. Our research demonstrates that these AI technologies refine image analysis and feature extraction and enhance processing capabilities in various clinical settings. Additionally, we emphasize the importance of comprehensive dermatological datasets such as PH2, ISIC, DERMQUEST, and MED-NODE, which are crucial for training and validating these sophisticated models. Integrating these datasets ensures that the AI systems are robust, versatile, and perform well under diverse conditions. The results of this study suggest that the integration of AI into melanoma detection marks a significant advancement in medical diagnostics and has the potential to improve patient outcomes through more accurate and earlier detection methods. Future research should focus on enhancing these technologies further, integrating multimodal data, and improving AI decision interpretability to facilitate clinical adoption, thus transforming melanoma diagnostics into a more precise, personalized, and preventive healthcare service.
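As a concrete illustration of the transfer-learning pattern behind backbones such as ResNet and DenseNet discussed above, the minimal sketch below replaces the ImageNet head of a DenseNet-121 with a two-class melanoma/benign output. The function name, class count, and torchvision backbone are illustrative assumptions, not details taken from the review.

```python
# Minimal transfer-learning sketch (assumed setup, not the review's own model):
# a DenseNet-121 pretrained on ImageNet with its classifier head replaced.
import torch
import torch.nn as nn
from torchvision import models

def build_melanoma_classifier(num_classes: int = 2) -> nn.Module:
    model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
    # Replace the 1000-way ImageNet head with a small dermoscopy head.
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_melanoma_classifier()
logits = model(torch.randn(1, 3, 224, 224))    # one RGB dermoscopy image, 224x224
probabilities = logits.softmax(dim=1)          # melanoma vs. benign scores
```

In practice such a backbone would be fine-tuned on dermoscopy datasets such as ISIC or PH2 with appropriate augmentation and class balancing.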
Affiliation(s)
- Yantong Liu
- Department of Gastroenterology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, 201 Hubin South Road, Siming District, Xiamen 361005, China
- Department of Computer and Information Engineering, Kunsan National University, 558 Daehak Road, Miryong District, Gunsan 54150, Republic of Korea
- Chuang Li
- Department of Biological Sciences, Purdue University, 610 Purdue Mall, West Lafayette, IN 47906, United States
- Feifei Li
- Department of Biochemistry and Molecular Biology, Mayo Clinic, MN 55905, United States
- Rubin Lin
- Department of Orthopedics, Shenzhen Children's Hospital, 7019 Yitian Road, Futian District, Shenzhen 518000, China
- Dongdong Zhang
- Department of Gastroenterology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, 201 Hubin South Road, Siming District, Xiamen 361005, China
- Yifan Lian
- Department of Gastroenterology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, 201 Hubin South Road, Siming District, Xiamen 361005, China
2. Liu A, Ma H, Zhu Y, Wu Q, Xu S, Feng W, Liang H, Ma J, Wang X, Ye X, Liu Y, Wang C, Sun X, Xiang S, Yang Q. Development of a Deep Learning-Based Model for Pressure Injury Surface Assessment. J Clin Nurs 2025. PMID: 39809598; DOI: 10.1111/jocn.17645.
Abstract
AIM: To develop a deep learning-based smart assessment model for pressure injury surface. DESIGN: Exploratory analysis study. METHODS: Pressure injury images from four Guangzhou hospitals were labelled and used to train a neural network model. Evaluation metrics included mean intersection over union (MIoU), pixel accuracy (PA), and accuracy. Model performance was tested by comparing wound number, maximum dimensions and area extent. RESULTS: From 1063 images, the model achieved 74% IoU, 88% PA and 83% accuracy for wound bed segmentation. Cohen's kappa coefficient for wound number was 0.810. Correlation coefficients were 0.900 for maximum length (mean difference 0.068 cm), 0.814 for maximum width (mean difference 0.108 cm) and 0.930 for regional extent (mean difference 0.527 cm²). CONCLUSION: The model demonstrated exceptional automated estimation capabilities, potentially serving as a crucial tool for informed decision-making in wound assessment. IMPLICATIONS AND IMPACT: This study promotes precision nursing and equitable resource use. The AI-based assessment model serves clinical work by assisting healthcare professionals in decision-making and facilitating wound assessment resource sharing. REPORTING METHOD: The STROBE checklist guided study reporting. PATIENT OR PUBLIC CONTRIBUTION: Patients provided image resources for model training.
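For readers unfamiliar with the reported segmentation metrics, the sketch below computes per-image IoU and pixel accuracy for a binary wound-bed mask. The study reports MIoU (IoU averaged over classes); the array names and toy example here are placeholders rather than code from the paper.

```python
# Per-image IoU and pixel accuracy for a binary wound-bed mask (illustrative names).
import numpy as np

def iou_and_pixel_accuracy(pred_mask: np.ndarray, true_mask: np.ndarray):
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = intersection / union if union else 1.0   # both masks empty counts as perfect
    pixel_accuracy = (pred == true).mean()         # fraction of pixels that agree
    return float(iou), float(pixel_accuracy)

# Toy 4x4 masks: the prediction covers two of the three true wound columns.
pred = np.array([[0, 1, 1, 0]] * 4)
true = np.array([[0, 1, 1, 1]] * 4)
print(iou_and_pixel_accuracy(pred, true))          # -> (0.666..., 0.75)
```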
Affiliation(s)
- Ankang Liu
- School of Nursing, Jinan University, Guangzhou, Guangdong, China
- Hualong Ma
- School of Nursing, Jinan University, Guangzhou, Guangdong, China
- Yanying Zhu
- Department of Continuing Care Services, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Qinyang Wu
- School of Nursing, Jinan University, Guangzhou, Guangdong, China
- Shihai Xu
- Emergency Department, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Wei Feng
- College of Cyber Security, Jinan University, Guangzhou, Guangdong, China
- Haobin Liang
- School of Nursing, Jinan University, Guangzhou, Guangdong, China
- Jian Ma
- School of Nursing, Jinan University, Guangzhou, Guangdong, China
- Xinwei Wang
- School of Nursing, Jinan University, Guangzhou, Guangdong, China
- Xuemei Ye
- Burn and Wound Repair Center, Guangzhou Red Cross Hospital, Guangzhou, Guangdong, China
- Yanxiong Liu
- Department of Burns, Plastic and Reconstructive Surgery and Wound Repair, Guangzhou First People's Hospital, Guangzhou, Guangdong, China
- Chao Wang
- Emergency Department, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Xu Sun
- Guangzhou Life Science Center, Guangzhou, Guangdong, China
- Shijun Xiang
- College of Cyber Security, Jinan University, Guangzhou, Guangdong, China
- Qiaohong Yang
- School of Nursing, Jinan University, Guangzhou, Guangdong, China
3. Behara K, Bhero E, Agee JT. AI in dermatology: a comprehensive review into skin cancer detection. PeerJ Comput Sci 2024; 10:e2530. PMID: 39896358; PMCID: PMC11784784; DOI: 10.7717/peerj-cs.2530.
Abstract
Background: Artificial Intelligence (AI) is significantly transforming dermatology, particularly in early skin cancer detection and diagnosis. This technological advancement addresses a crucial public health issue by enhancing diagnostic accuracy, efficiency, and accessibility. AI integration in medical imaging and diagnostic procedures offers promising solutions to the limitations of traditional methods, which often rely on subjective clinical evaluations and histopathological analyses. This study systematically reviews current AI applications in skin cancer classification, providing a comprehensive overview of their advantages, challenges, methodologies, and functionalities. Methodology: In this study, we conducted a comprehensive analysis of AI applications in the classification of skin cancer. We evaluated publications from three prominent journal databases: Scopus, IEEE, and MDPI. We conducted a thorough selection process using the PRISMA guidelines, collecting 1,156 scientific articles. Our methodology included evaluating the titles and abstracts and thoroughly examining the full text to determine their relevance and quality. Consequently, we included a total of 95 publications in the final study. We analyzed and categorized the articles based on four key dimensions: advantages, difficulties, methodologies, and functionalities. Results: AI-based models exhibit remarkable performance in skin cancer detection by leveraging advanced deep learning algorithms, image processing techniques, and feature extraction methods. The advantages of AI integration include significantly improved diagnostic accuracy, faster turnaround times, and increased accessibility to dermatological expertise, particularly benefiting underserved areas. However, several challenges remain, such as concerns over data privacy, complexities in integrating AI systems into existing workflows, and the need for large, high-quality datasets. AI-based methods for skin cancer detection, including CNNs, SVMs, and ensemble learning techniques, aim to improve lesion classification accuracy and increase early detection. AI systems enhance healthcare by enabling remote consultations, continuous patient monitoring, and supporting clinical decision-making, leading to more efficient care and better patient outcomes. Conclusions: This comprehensive review highlights the transformative potential of AI in dermatology, particularly in skin cancer detection and diagnosis. While AI technologies have significantly improved diagnostic accuracy, efficiency, and accessibility, several challenges remain. Future research should focus on ensuring data privacy, developing robust AI systems that can generalize across diverse populations, and creating large, high-quality datasets. Integrating AI tools into clinical workflows is critical to maximizing their utility and effectiveness. Continuous innovation and interdisciplinary collaboration will be essential for fully realizing the benefits of AI in skin cancer detection and diagnosis.
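As a hedged illustration of the ensemble learning techniques this review surveys, the sketch below soft-votes an SVM, a random forest, and a logistic regression over pre-extracted lesion features. The synthetic data and scikit-learn setup are assumptions for demonstration only, not a method from any reviewed study.

```python
# Soft-voting ensemble over stand-in lesion feature vectors (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))        # stand-in for CNN-derived feature vectors
labels = rng.integers(0, 2, size=200)        # benign vs. malignant (synthetic labels)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",                           # average predicted class probabilities
)
ensemble.fit(features, labels)
print(ensemble.predict(features[:5]))
```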
Affiliation(s)
- Kavita Behara
- Department of Electrical Engineering, Mangosuthu University of Technology, Durban, KwaZulu-Natal, South Africa
- Ernest Bhero
- Discipline of Computer Engineering, University of KwaZulu-Natal, Durban, KwaZulu-Natal, South Africa
- John Terhile Agee
- Discipline of Computer Engineering, University of KwaZulu-Natal, Durban, KwaZulu-Natal, South Africa
4. Alharbi H, Sampedro GA, Juanatas RA, Lim SJ. Enhanced skin cancer diagnosis: a deep feature extraction-based framework for the multi-classification of skin cancer utilizing dermoscopy images. Front Med (Lausanne) 2024; 11:1495576. PMID: 39606634; PMCID: PMC11601079; DOI: 10.3389/fmed.2024.1495576.
Abstract
Skin cancer is one of the most common, deadly, and widespread cancers worldwide. Early detection of skin cancer can lead to reduced death rates. A dermatologist or primary care physician can use a dermatoscope to inspect a patient to diagnose skin disorders visually. Early detection of skin cancer is essential, and to confirm the diagnosis and determine the most appropriate course of therapy, patients should undergo a biopsy and a histological evaluation. Significant advancements have been made recently as the accuracy of skin cancer categorization by automated deep learning systems matches that of dermatologists. Though progress has been made, there is still a lack of a widely accepted, clinically reliable method for diagnosing skin cancer. This article presents four variants of the Convolutional Neural Network (CNN) model (i.e., original CNN, no batch normalization CNN, few filters CNN, and strided CNN) for the classification and prediction of skin cancer in lesion images with the aim of helping physicians in their diagnosis. Further, it presents the hybrid models CNN-Support Vector Machine (CNNSVM), CNN-Random Forest (CNNRF), and CNN-Logistic Regression (CNNLR), using a grid search for the best parameters. Exploratory Data Analysis (EDA) and random oversampling are performed to normalize and balance the data. The CNN models (original CNN, strided, and CNNSVM) obtained an accuracy rate of 98%. In contrast, CNNRF and CNNLR obtained an accuracy rate of 99% for skin cancer prediction on the HAM10000 dataset of 10,015 dermoscopic images. The encouraging outcomes demonstrate the effectiveness of the proposed method and show that improving the performance of skin cancer diagnosis requires including the patient's metadata with the lesion image.
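A loose sketch of the hybrid CNN-classifier idea described above: a pretrained CNN used as a fixed feature extractor, random oversampling to balance classes, and a grid-searched SVM head. The ResNet-18 backbone, parameter grid, and synthetic tensors are stand-ins for illustration, not the paper's actual pipeline (which trains its own CNN variants).

```python
# Assumed stand-in for a hybrid CNN + SVM pipeline: frozen ResNet-18 features,
# random oversampling, and a grid-searched SVM. Not the paper's implementation.
import torch
from torchvision import models
from imblearn.over_sampling import RandomOverSampler
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def extract_features(images: torch.Tensor) -> torch.Tensor:
    """images: [N, 3, 224, 224] preprocessed lesion crops -> [N, 512] feature vectors."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()          # drop the ImageNet classification head
    backbone.eval()
    with torch.no_grad():
        return backbone(images)

# Hypothetical tensors standing in for HAM10000 crops and their 7-class labels.
images, labels = torch.randn(64, 3, 224, 224), torch.randint(0, 7, (64,))
features = extract_features(images).numpy()

balanced_x, balanced_y = RandomOverSampler(random_state=0).fit_resample(features, labels.numpy())
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}, cv=3)
grid.fit(balanced_x, balanced_y)
print(grid.best_params_, grid.best_score_)
```

Note that oversampling before cross-validation, as done here for brevity, can leak duplicated samples across folds; a fuller pipeline would oversample inside each training fold.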
Affiliation(s)
- Hadeel Alharbi
- College of Computer Science and Engineering, University of Hail, Ha'il, Saudi Arabia
- Roben A. Juanatas
- College of Computing and Information Technologies, National University, Manila, Philippines
- Se-jung Lim
- School of Electrical and Computer Engineering, Yeosu Campus, Chonnam National University, Gwangju, Republic of Korea
5. Li Y, Chiu PW, Tam V, Lee A, Lam EY. Dual-Mode Imaging System for Early Detection and Monitoring of Ocular Surface Diseases. IEEE Trans Biomed Circuits Syst 2024; 18:783-798. PMID: 38875082; DOI: 10.1109/tbcas.2024.3411713.
Abstract
The global prevalence of ocular surface diseases (OSDs), such as dry eyes, conjunctivitis, and subconjunctival hemorrhage (SCH), is steadily increasing due to factors such as aging populations, environmental influences, and lifestyle changes. These diseases affect millions of individuals worldwide, emphasizing the importance of early diagnosis and continuous monitoring for effective treatment. Therefore, we present a deep learning-enhanced imaging system for the automated, objective, and reliable assessment of these three representative OSDs. Our comprehensive pipeline incorporates processing techniques derived from dual-mode infrared (IR) and visible (RGB) images. It employs a multi-stage deep learning model to enable accurate and consistent measurement of OSDs. The proposed method achieved 98.7% accuracy with an F1 score of 0.980 in class classification and 96.2% accuracy with an F1 score of 0.956 in SCH region identification. Furthermore, our system aims to facilitate early diagnosis of meibomian gland dysfunction (MGD), a primary factor causing dry eyes, by quantitatively analyzing the meibomian gland (MG) area ratio and detecting gland morphological irregularities with an accuracy of 88.1% and an F1 score of 0.781. To enhance convenience and timely OSD management, we are integrating a portable IR camera for obtaining meibography during home inspections. Our system demonstrates notable improvements in expanding dual-mode image-based diagnosis for broader applicability, effectively enhancing patient care efficiency. With its automation, accuracy, and compact design, this system is well-suited for early detection and ongoing assessment of OSDs, contributing to improved eye healthcare in an accessible and comprehensible manner.
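To make the meibomian gland (MG) area ratio mentioned above concrete, the sketch below computes the fraction of a tarsal region covered by segmented glands from two binary masks. The mask names and toy geometry are assumptions for illustration, not the paper's definitions.

```python
# Illustrative MG area ratio: gland pixels inside the tarsal region / tarsal pixels.
import numpy as np

def mg_area_ratio(gland_mask: np.ndarray, tarsal_mask: np.ndarray) -> float:
    """Both inputs are binary masks of the same shape from a segmentation stage."""
    tarsal = tarsal_mask.astype(bool)
    if tarsal.sum() == 0:
        return 0.0
    glands_in_tarsal = np.logical_and(gland_mask.astype(bool), tarsal).sum()
    return float(glands_in_tarsal / tarsal.sum())

# Toy masks: glands cover 30 of 100 tarsal pixels -> ratio 0.3.
tarsal = np.zeros((20, 20), dtype=np.uint8)
tarsal[5:15, 5:15] = 1
glands = np.zeros_like(tarsal)
glands[5:15, 5:8] = 1
print(mg_area_ratio(glands, tarsal))
```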
6. Zhang D, Li H, Shi J, Shen Y, Zhu L, Chen N, Wei Z, Lv J, Chen Y, Hao F. Advancements in acne detection: application of the CenterNet network in smart dermatology. Front Med (Lausanne) 2024; 11:1344314. PMID: 38596788; PMCID: PMC11003269; DOI: 10.3389/fmed.2024.1344314.
Abstract
Introduction: Acne detection is critical in dermatology, focusing on quality control of acne imagery, precise segmentation, and grading. Traditional research has been limited, typically concentrating on singular aspects of acne detection. Methods: We propose a multi-task acne detection method, employing a CenterNet-based training paradigm to develop an advanced detection system. This system collects acne images via smartphones and features multi-task capabilities for detecting image quality and identifying various acne types. It differentiates between noninflammatory acne, papules, pustules, and nodules, and provides detailed delineation for cysts and post-acne scars. Results: The implementation of this multi-task learning-based framework in clinical diagnostics demonstrated an 83% accuracy in lesion categorization, surpassing ResNet18 models by 12%. Furthermore, it achieved a 76% precision in lesion stratification, outperforming dermatologists by 16%. Discussion: Our framework represents an advancement in acne detection, offering a comprehensive tool for classification, localization, counting, and precise segmentation. It not only enhances the accuracy of remote acne lesion identification by doctors but also clarifies grading logic and criteria, facilitating easier grading judgments.
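For context on the CenterNet-based paradigm named above, the sketch below shows the standard CenterNet-style decoding step: treating local maxima of a per-class center heatmap as detections via a 3x3 max-pool. The tensor shapes, score threshold, and class count are illustrative assumptions, not values from the paper.

```python
# CenterNet-style peak extraction from a per-class center heatmap (illustrative only).
# Assumes a trained network already outputs a [C, H, W] sigmoid heatmap.
import torch
import torch.nn.functional as F

def decode_centers(heatmap: torch.Tensor, k: int = 100, threshold: float = 0.3):
    """Return up to k lesion detections as (class, x, y, score) records."""
    num_classes, height, width = heatmap.shape
    # Keep only local maxima: a 3x3 max-pool acts as a simple non-maximum suppression.
    pooled = F.max_pool2d(heatmap.unsqueeze(0), kernel_size=3, stride=1, padding=1).squeeze(0)
    peaks = heatmap * (heatmap == pooled).float()
    # Take the k highest-scoring peaks across all classes.
    scores, indices = torch.topk(peaks.flatten(), k)
    classes = indices // (height * width)
    ys = (indices % (height * width)) // width
    xs = indices % width
    keep = scores > threshold
    return [
        {"class_id": int(c), "x": int(x), "y": int(y), "score": float(s)}
        for c, x, y, s in zip(classes[keep], xs[keep], ys[keep], scores[keep])
    ]

# Example with random data standing in for real network output (5 lesion classes).
detections = decode_centers(torch.rand(5, 128, 128))
```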
Affiliation(s)
- Daojun Zhang
- The Third Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Huanyu Li
- Shanghai Beforteen AI Lab, Shanghai, China
- Jiajia Shi
- Shanghai Beforteen AI Lab, Shanghai, China
- Yue Shen
- Simulation of Complex Systems Lab, Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, Tokyo, Japan
- Ling Zhu
- Shanghai Beforteen AI Lab, Shanghai, China
- Zikun Wei
- Shanghai Beforteen AI Lab, Shanghai, China
- Junwei Lv
- Shanghai Beforteen AI Lab, Shanghai, China
- Yu Chen
- Simulation of Complex Systems Lab, Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, Tokyo, Japan
- Fei Hao
- The Third Affiliated Hospital of Chongqing Medical University, Chongqing, China