1.
Salinas MP, Sepúlveda J, Hidalgo L, Peirano D, Morel M, Uribe P, Rotemberg V, Briones J, Mery D, Navarrete-Dechent C. A systematic review and meta-analysis of artificial intelligence versus clinicians for skin cancer diagnosis. NPJ Digit Med 2024; 7:125. PMID: 38744955; PMCID: PMC11094047; DOI: 10.1038/s41746-024-01103-x.
Abstract
Scientific research on artificial intelligence (AI) in dermatology has increased exponentially. The objective of this study was to perform a systematic review and meta-analysis evaluating the performance of AI algorithms for skin cancer classification in comparison to clinicians with different levels of expertise. Following PRISMA guidelines, 3 electronic databases (PubMed, Embase, and Cochrane Library) were screened for relevant articles up to August 2022. Study quality was assessed using QUADAS-2. A meta-analysis of sensitivity and specificity was performed for the accuracy of AI and clinicians. Fifty-three studies were included in the systematic review, and 19 met the inclusion criteria for the meta-analysis. Considering all studies and all subgroups of clinicians, AI algorithms achieved a sensitivity (Sn) of 87.0% and a specificity (Sp) of 77.1%, versus an Sn of 79.78% and an Sp of 73.6% for all clinicians overall; the differences were statistically significant for both Sn and Sp. The gap between AI performance (Sn 92.5%, Sp 66.5%) and generalist clinicians (Sn 64.6%, Sp 72.8%) was larger than that between AI and expert clinicians. Performance of AI algorithms (Sn 86.3%, Sp 78.4%) versus expert dermatologists (Sn 84.2%, Sp 74.4%) was clinically comparable. Limitations of AI algorithms in clinical practice should be considered, and future studies should focus on real-world settings and on AI assistance.
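The pooled figures above reduce to two standard confusion-matrix quantities. A minimal sketch for readers who want to reproduce the arithmetic; the 2x2 counts below are hypothetical, not taken from the meta-analysis:

```python
def sensitivity(tp, fn):
    # Sn = TP / (TP + FN): share of true skin cancers the classifier flags
    return tp / (tp + fn)

def specificity(tn, fp):
    # Sp = TN / (TN + FP): share of benign lesions the classifier clears
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for one reader on one test set
tp, fn, tn, fp = 87, 13, 77, 23
print(f"Sn={sensitivity(tp, fn):.3f}, Sp={specificity(tn, fp):.3f}")  # Sn=0.870, Sp=0.770
```

Meta-analytic pooling then combines such per-study pairs, typically weighting by study size.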
Affiliation(s)
- Maria Paz Salinas
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Javiera Sepúlveda
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Leonel Hidalgo
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Dominga Peirano
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Macarena Morel
- Universidad Catolica-Evidence Center, Cochrane Chile Associated Center, Pontificia Universidad Católica de Chile, Santiago, Chile
- Pablo Uribe
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Melanoma and Skin Cancer Unit, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Veronica Rotemberg
- Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Juan Briones
- Department of Oncology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Domingo Mery
- Department of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile
- Cristian Navarrete-Dechent
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Melanoma and Skin Cancer Unit, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
2.
Zhong F, He K, Ji M, Chen J, Gao T, Li S, Zhang J, Li C. Optimizing vitiligo diagnosis with ResNet and Swin transformer deep learning models: a study on performance and interpretability. Sci Rep 2024; 14:9127. PMID: 38644396; PMCID: PMC11033269; DOI: 10.1038/s41598-024-59436-2.
Abstract
Vitiligo is a hypopigmented skin disease characterized by the loss of melanin. Its progressive nature and widespread incidence necessitate timely and accurate detection. A single diagnostic test often falls short of definitively confirming the condition, necessitating assessment by dermatologists who specialize in vitiligo; the current scarcity of such specialists presents a significant challenge. To mitigate this issue and enhance diagnostic accuracy, deep learning models that support and expedite the detection process are needed. This study establishes a deep learning framework to enhance the diagnostic accuracy of vitiligo. To this end, a comparative analysis of five models, comprising the ResNet series (ResNet34, ResNet50, and ResNet101) and the Swin Transformer series (Swin Transformer Base and Swin Transformer Large), was conducted under uniform conditions to identify the model with superior classification capability. The study also sought to augment the interpretability of these models by selecting one that not only provides accurate diagnostic outcomes but also offers visual cues highlighting the regions pertinent to vitiligo. The empirical findings reveal that the Swin Transformer Large model achieved the best classification performance, with an AUC of 0.94, accuracy of 93.82%, sensitivity of 94.02%, and specificity of 93.5%. In terms of interpretability, the highlighted regions in the class activation map correspond to the lesion regions of the vitiligo images, indicating that the model attends to the category-specific regions relevant to dermatological diagnosis. Additionally, visualization of feature maps from the model's intermediate layers provides insight into its internal mechanisms, which is valuable for improving interpretability, tuning performance, and enhancing clinical applicability. These outcomes underscore the potential of deep learning models to improve diagnostic accuracy and operational efficiency, and highlight the need for continued exploration to fully leverage deep learning in medical diagnostics.
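The AUC reported above can be computed directly from raw model scores via the Mann-Whitney formulation: the probability that a randomly chosen positive image outscores a randomly chosen negative one. A self-contained sketch with invented scores (nothing below comes from the study):

```python
def auc(pos_scores, neg_scores):
    """AUC as P(score of a random positive > score of a random negative);
    ties count as 0.5. O(n*m), fine for illustration."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.7, 0.6]  # hypothetical model scores on vitiligo images
neg = [0.5, 0.4, 0.8, 0.2]  # hypothetical model scores on non-vitiligo images
print(auc(pos, neg))  # 0.84375
```

Production code would use a rank-based O(n log n) implementation, but the quantity computed is the same.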
Affiliation(s)
- Fan Zhong
- College of Electrical Engineering, Sichuan University, Chengdu, China
- Kaiqiao He
- Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Mengqi Ji
- College of Electrical Engineering, Sichuan University, Chengdu, China
- Jianru Chen
- Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Tianwen Gao
- Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Shuli Li
- Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Junpeng Zhang
- College of Electrical Engineering, Sichuan University, Chengdu, China
- Chunying Li
- Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
3.
Lindholm V, Annala L, Koskenmies S, Pitkänen S, Isoherranen K, Järvinen A, Jeskanen L, Pölönen I, Ranki A, Raita‐Hakola A, Salmivuori M. Discriminating basal cell carcinoma and Bowen's disease from benign skin lesions with a 3D hyperspectral imaging system and convolutional neural networks. Skin Res Technol 2024; 30:e13677. PMID: 38558486; PMCID: PMC10982671; DOI: 10.1111/srt.13677.
Affiliation(s)
- Vivian Lindholm
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Leevi Annala
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Department of Food and Nutrition, University of Helsinki, Helsinki, Finland
- Department of Computer Science, University of Helsinki, Helsinki, Finland
- Sari Koskenmies
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Sari Pitkänen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Kirsi Isoherranen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Anna Järvinen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Leila Jeskanen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Ilkka Pölönen
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Annamari Ranki
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Mari Salmivuori
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
4.
Brancaccio G, Balato A, Malvehy J, Puig S, Argenziano G, Kittler H. Artificial intelligence in skin cancer diagnosis: a reality check. J Invest Dermatol 2024; 144:492-499. PMID: 37978982; DOI: 10.1016/j.jid.2023.10.004.
Abstract
The field of skin cancer detection offers a compelling use case for the application of artificial intelligence (AI) within the realm of image-based diagnostic medicine. Through the analysis of large datasets, AI algorithms have the capacity to classify clinical or dermoscopic images with remarkable accuracy. Although these AI-based applications can operate both autonomously and under human supervision, the best results are achieved through a collaborative approach that leverages the expertise of both AI and human experts. However, it is important to note that most studies focus on assessing the diagnostic accuracy of AI in artificial settings rather than in real-world scenarios. Consequently, the practical utility of AI-assisted diagnosis in a clinical environment is still largely unknown. Furthermore, there exists a knowledge gap concerning the optimal use cases and deployment settings for these AI systems as well as the practical challenges that may arise from widespread implementation. This review explores the advantages and limitations of AI in a variety of real-world contexts, with a specific focus on its value to consumers, general practitioners, and dermatologists.
Affiliation(s)
- Anna Balato
- Dermatology Unit, University of Campania "Luigi Vanvitelli", Naples, Italy
- Josep Malvehy
- Melanoma Unit, Dermatology Department, Hospital Clínic de Barcelona, Instituto de Investigaciones Biomédicas August Pi i Sunyer, Universitat de Barcelona, Barcelona, Spain; Centro de Investigación Biomédica en Red de Enfermedades Raras (CIBERER), Instituto de Salud Carlos III, Barcelona, Spain
- Susana Puig
- Melanoma Unit, Dermatology Department, Hospital Clínic de Barcelona, Instituto de Investigaciones Biomédicas August Pi i Sunyer, Universitat de Barcelona, Barcelona, Spain; Centro de Investigación Biomédica en Red de Enfermedades Raras (CIBERER), Instituto de Salud Carlos III, Barcelona, Spain
- Harald Kittler
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
5.
Foltz EA, Witkowski A, Becker AL, Latour E, Lim JY, Hamilton A, Ludzik J. Artificial intelligence applied to non-invasive imaging modalities in identification of nonmelanoma skin cancer: a systematic review. Cancers (Basel) 2024; 16:629. PMID: 38339380; PMCID: PMC10854803; DOI: 10.3390/cancers16030629.
Abstract
BACKGROUND The objective of this study was to systematically analyze the current literature on novel artificial intelligence (AI) machine learning models applied to non-invasive imaging for the early detection of nonmelanoma skin cancers, and to assess their potential clinical relevance by evaluating the accuracy, sensitivity, and specificity of each algorithm and the risk of bias. METHODS Two reviewers screened the MEDLINE, Cochrane, PubMed, and Embase databases for peer-reviewed studies published between 2018 and 2023 that focused on AI-based classification of nonmelanoma skin cancers. Search terms included skin neoplasms, nonmelanoma, basal-cell carcinoma, squamous-cell carcinoma, diagnostic techniques and procedures, artificial intelligence, algorithms, computer systems, dermoscopy, reflectance confocal microscopy, and optical coherence tomography. Only studies that directly addressed the review objectives were included, and the efficacy measures for each were recorded. A QUADAS-2 risk-of-bias assessment of the included studies was then conducted. RESULTS A total of 44 studies were included: 40 utilizing dermoscopy, 3 reflectance confocal microscopy (RCM), and 1 hyperspectral epidermal imaging (HEI). The average accuracy of AI algorithms across all imaging modalities combined was 86.80%, the same as the average for dermoscopy alone. Only one of the three RCM studies reported accuracy, at 87%; accuracy was not reported for AI-based HEI interpretation. CONCLUSION AI algorithms exhibited an overall favorable performance in the diagnosis of nonmelanoma skin cancer via noninvasive imaging. Further research is needed to isolate pooled diagnostic accuracy for nonmelanoma skin cancers, as many testing datasets also include melanoma and other pigmented lesions.
Affiliation(s)
- Emilie A. Foltz
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- Elson S. Floyd College of Medicine, Washington State University, Spokane, WA 99202, USA
- Alexander Witkowski
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- Alyssa L. Becker
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- John A. Burns School of Medicine, University of Hawai’i at Manoa, Honolulu, HI 96813, USA
- Emile Latour
- Biostatistics Shared Resource, Knight Cancer Institute, Oregon Health & Science University, Portland, OR 97201, USA
- Jeong Youn Lim
- Biostatistics Shared Resource, Knight Cancer Institute, Oregon Health & Science University, Portland, OR 97201, USA
- Andrew Hamilton
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- Joanna Ludzik
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
6.
Koumaki D, Manios G, Papadakis M, Doxastaki A, Zacharopoulos GV, Katoulis A, Manios A. Color analysis of Merkel cell carcinoma: a comparative study with cherry angiomas, hemangiomas, basal cell carcinomas, and squamous cell carcinomas. Diagnostics (Basel) 2024; 14:230. PMID: 38275477; PMCID: PMC10814937; DOI: 10.3390/diagnostics14020230.
Abstract
Merkel cell carcinoma (MCC) is recognized as one of the most malignant skin tumors, and its rarity may explain the limited exploration of digital color analysis in this area. The objective of this study was to delineate color alterations in MCCs compared with benign lesions resembling MCC, such as cherry angiomas and hemangiomas, and with other non-melanoma skin cancers such as basal cell carcinoma (BCC) and squamous cell carcinoma (SCC), using computer-aided digital color analysis. In this retrospective study, clinical images of lesion color and adjacent normal skin from 11 patients with primary MCC, 11 with cherry angiomas, 12 with hemangiomas, and 12 with BCC/SCC (46 patients in total) were analyzed using the RGB (red, green, blue) and CIE Lab color systems. The Lab color system was also used to estimate the Individual Typology Angle (ITA) of the skin. Estimation of color components can assist in the differential diagnosis of these lesion types, as significant differences in color parameters were found between MCC and the other lesion categories (hemangiomas, common skin carcinomas, and cherry angiomas). Significant differences were observed in the blue channel of RGB (p = 0.003) and the b* parameter of Lab color (p < 0.0001) for MCC versus cherry angiomas. Similarly, the mean a* value of MCC differed significantly from that of BCC and SCC (p < 0.0001). Larger prospective studies are warranted to further validate the clinical application of these findings.
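The Individual Typology Angle mentioned above is a standard function of the CIE Lab coordinates, ITA = arctan((L* - 50) / b*) * 180 / pi. A minimal sketch; the Lab values below are illustrative, not measurements from the study:

```python
import math

def ita_degrees(L_star, b_star):
    """Individual Typology Angle in degrees from CIE L* (lightness) and b*
    (yellow-blue axis). atan2 avoids a division-by-zero error when b* = 0."""
    return math.degrees(math.atan2(L_star - 50.0, b_star))

print(round(ita_degrees(70.0, 20.0), 1))  # 45.0  (lighter skin, higher ITA)
print(round(ita_degrees(50.0, 15.0), 1))  # 0.0
```

Higher ITA values correspond to lighter skin; dermatological studies commonly bin ITA into phototype categories.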
Affiliation(s)
- Dimitra Koumaki
- Dermatology Department, University Hospital of Heraklion, 71110 Heraklion, Greece
- Georgios Manios
- Department of Computer Science and Biomedical Informatics, University of Thessaly, 35100 Lamia, Greece
- Marios Papadakis
- Department of Surgery II, Witten/Herdecke University, Heusnerstrasse 40, 42283 Witten, Germany
- Aikaterini Doxastaki
- Dermatology Department, University Hospital of Heraklion, 71110 Heraklion, Greece
- Alexander Katoulis
- 2nd Department of Dermatology and Venereology, “Attikon” General University Hospital, Medical School, National and Kapodistrian University of Athens, Rimini 1, Haidari, 12462 Athens, Greece
- Andreas Manios
- Plastic Surgery Unit, Surgical Oncology Department, University Hospital of Heraklion, 71110 Heraklion, Greece
7.
Zhu AQ, Wang Q, Shi YL, Ren WW, Cao X, Ren TT, Wang J, Zhang YQ, Sun YK, Chen XW, Lai YX, Ni N, Chen YC, Hu JL, Mou LC, Zhao YJ, Liu YQ, Sun LP, Zhu XX, Xu HX, Guo LH. A deep learning fusion network trained with clinical and high-frequency ultrasound images in the multi-classification of skin diseases in comparison with dermatologists: a prospective and multicenter study. EClinicalMedicine 2024; 67:102391. PMID: 38274117; PMCID: PMC10808933; DOI: 10.1016/j.eclinm.2023.102391.
Abstract
Background Clinical appearance and high-frequency ultrasound (HFUS) are indispensable for diagnosing skin diseases, providing external and internal information, respectively. However, their combination is complex to interpret, posing challenges for primary care physicians and dermatologists. We therefore developed a deep multimodal fusion network (DMFN) model combining clinical close-up and HFUS images for binary and multiclass classification of skin diseases. Methods Between Jan 10, 2017, and Dec 31, 2020, the DMFN model was trained and validated using 1269 close-ups and 11,852 HFUS images from 1351 skin lesions. A monomodal convolutional neural network (CNN) model was trained and validated with the same close-up images for comparison. Subsequently, we conducted a prospective, multicenter study in China: both models were tested prospectively on 422 cases from 4 hospitals and compared with human raters (general practitioners, general dermatologists, and dermatologists specialized in HFUS). Performance in binary classification (benign vs. malignant) and multiclass classification (specific diagnoses across 17 types of skin diseases), measured by the area under the receiver operating characteristic curve (AUC), was evaluated. This study is registered with www.chictr.org.cn (ChiCTR2300074765). Findings In binary classification, the DMFN model (AUC, 0.876) outperformed the monomodal CNN model (AUC, 0.697; P = 0.0063), the general practitioners (AUC, 0.651; P = 0.0025), and the general dermatologists (AUC, 0.838; P = 0.0038). By integrating close-up and HFUS images, the DMFN model attained almost identical performance to that of dermatologists (AUC, 0.876 vs. 0.891; P = 0.0080). In multiclass classification, the DMFN model (AUC, 0.707) exhibited superior prediction performance compared with general dermatologists (AUC, 0.514; P = 0.0043) and dermatologists specialized in HFUS (AUC, 0.640; P = 0.0083), and showed better or comparable performance to the HFUS specialists in diagnosing 9 of the 17 skin diseases. Interpretation The DMFN model combining clinical close-up and HFUS images performed satisfactorily in binary and multiclass classification compared with dermatologists and may be a valuable tool for general dermatologists and primary care providers. Funding This work was supported in part by the National Natural Science Foundation of China and the Clinical Research Project of Shanghai Skin Disease Hospital.
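The multimodal idea can be illustrated with the simplest form of fusion, a weighted average of per-class probabilities from the two branches (late fusion). This is a hedged sketch with invented numbers; the actual DMFN fuses learned feature maps inside the network rather than output probabilities:

```python
def late_fuse(p_closeup, p_hfus, w=0.5):
    # Weighted average of per-class probabilities from the two modalities;
    # w is the weight given to the clinical close-up branch.
    return [w * a + (1.0 - w) * b for a, b in zip(p_closeup, p_hfus)]

p_closeup = [0.6, 0.4]  # hypothetical [benign, malignant] from the close-up branch
p_hfus = [0.2, 0.8]     # hypothetical [benign, malignant] from the HFUS branch
fused = late_fuse(p_closeup, p_hfus)
print([round(p, 2) for p in fused])  # [0.4, 0.6]
```

Feature-level fusion, as in the DMFN, lets the network learn cross-modal interactions instead of fixing a single mixing weight.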
Affiliation(s)
- An-Qi Zhu
- Department of Medical Ultrasound, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, China
- Qiao Wang
- Department of Medical Ultrasound, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, Shanghai, China
- Yi-Lei Shi
- MedAI Technology (Wuxi) Co., Ltd., Wuxi, China
- Wei-Wei Ren
- Department of Medical Ultrasound, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, Shanghai, China
- Xu Cao
- MedAI Technology (Wuxi) Co., Ltd., Wuxi, China
- Tian-Tian Ren
- Department of Medical Ultrasound, Ma'anshan People's Hospital, Ma'anshan, China
- Jing Wang
- Department of Ultrasound, Jiading District Central Hospital Affiliated Shanghai University of Medicine & Health Sciences, Shanghai, China
- Ya-Qin Zhang
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, China
- Yi-Kang Sun
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, China
- Xue-Wen Chen
- Department of Dermatological Surgery, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Yong-Xian Lai
- Department of Dermatological Surgery, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Na Ni
- Department of Dermatological Surgery, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Yu-Chong Chen
- Department of Dermatological Surgery, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Li-Chao Mou
- MedAI Technology (Wuxi) Co., Ltd., Wuxi, China
- Yu-Jing Zhao
- Department of Medical Ultrasound, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Ye-Qiang Liu
- Department of Pathology, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Li-Ping Sun
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, Shanghai, China
- Xiao-Xiang Zhu
- Chair of Data Science in Earth Observation, Technical University of Munich, Munich, Germany
- Hui-Xiong Xu
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, China
- Le-Hang Guo
- Department of Medical Ultrasound, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, Shanghai, China
- China Alliance of Multi-Center Clinical Study for Ultrasound (Ultra-Chance)
- Department of Medical Ultrasound, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, Shanghai, China
- MedAI Technology (Wuxi) Co., Ltd., Wuxi, China
- Department of Medical Ultrasound, Ma'anshan People's Hospital, Ma'anshan, China
- Department of Ultrasound, Jiading District Central Hospital Affiliated Shanghai University of Medicine & Health Sciences, Shanghai, China
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, China
- Department of Dermatological Surgery, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Pathology, Shanghai Skin Disease Hospital, School of Medicine, Tongji University, Shanghai, China
- Chair of Data Science in Earth Observation, Technical University of Munich, Munich, Germany
8.
Kolasa K, Admassu B, Hołownia-Voloskova M, Kędzior KJ, Poirrier JE, Perni S. Systematic reviews of machine learning in healthcare: a literature review. Expert Rev Pharmacoecon Outcomes Res 2024; 24:63-115. PMID: 37955147; DOI: 10.1080/14737167.2023.2279107.
Abstract
INTRODUCTION The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. METHODS A systematic literature review (SLR) of published SLRs evaluating ML applications in healthcare settings, published between 1 January 2010 and 27 March 2023, was conducted. RESULTS In total, 220 SLRs covering 10,462 ML algorithms were reviewed. The main applications of ML in medicine related to clinical prediction and disease prognosis in oncology and neurology using imaging data. Accuracy, specificity, and sensitivity were reported in 56%, 28%, and 25% of SLRs, respectively. Internal validation was reported in 53% of cases and external validation in less than 1%. The most common modeling approach was neural networks (2,454 ML algorithms), followed by support vector machines and random forests/decision trees (1,578 and 1,522 ML algorithms, respectively). EXPERT OPINION The review indicated considerable reporting gaps in ML performance, particularly internal and external validation. Greater accessibility of healthcare data for developers could speed the adoption of ML algorithms into clinical practice.
Affiliation(s)
- Katarzyna Kolasa
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
- Bisrat Admassu
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
9.
Bakay OSK, Kacar N, Gonulal M, Demirkan NC, Cenk H, Goksin S, Gural Y. Dermoscopic features of cutaneous vasculitis. Dermatol Pract Concept 2024; 14:dpc.1401a51. PMID: 38364381; PMCID: PMC10868889; DOI: 10.5826/dpc.1401a51.
Abstract
INTRODUCTION Dermoscopy has become widespread in the diagnosis of inflammatory skin diseases. Cutaneous vasculitis (CV) is characterized by inflammation of vessels, and a rapid, reliable technique is needed for its diagnosis. OBJECTIVES We aimed to define the dermoscopic features of CV and to increase the diagnostic accuracy of dermoscopy using machine learning (ML) methods. METHODS Eighty-nine patients with clinically suspected CV were included in the study. Dermoscopic images were obtained before biopsy using polarized dermoscopy and were evaluated independently; interobserver variability was calculated. Decision Tree, Random Forest, and K-Nearest Neighbors were used as ML classification models. RESULTS The histopathological diagnosis was CV in 58 patients. Three patterns were observed: homogeneous, mottled, and meshy. Background color differed significantly between the CV and non-CV groups (P = 0.001). Milky red and livedoid background colors were specific markers in the differential diagnosis of CV (milky red: sensitivity 56.7%, specificity 96.3%; livedoid: sensitivity 29.4%, specificity 99.2%). Red blotches were significantly more common in CV lesions (P = 0.038), whereas red dots, comma vessels, and scales were more common in the non-CV group (P = 0.002, P = 0.002, and P = 0.003, respectively). Interobserver agreement was very good for both pattern (κ = 0.869) and background color analysis (κ = 0.846) (P < 0.001). According to the ML classifiers, background color and the absence of scales were the most significant dermoscopic features of CV. CONCLUSIONS Dermoscopy may serve as a rapid and reliable aid in CV diagnosis, and the high accuracy rates obtained with ML methods may increase its success.
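The ML finding above (background color plus absence of scales as the dominant features) can be caricatured as a single decision rule. The toy rule and its feature encoding below are an illustration only, not the trained classifiers from the study:

```python
def suggests_cv(background_color, has_scales):
    # Milky red or livedoid background with no scales favors cutaneous
    # vasculitis, per the reported specificity of those background colors
    return background_color in {"milky red", "livedoid"} and not has_scales

print(suggests_cv("livedoid", has_scales=False))   # True
print(suggests_cv("milky red", has_scales=True))   # False
```

A real decision tree learns such thresholds and feature splits from labeled cases rather than having them hand-written.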
Affiliation(s)
- Nida Kacar
- Pamukkale University Faculty of Medicine, Department of Dermatology, Denizli, Turkey
- Melis Gonulal
- Tepecik Education and Research Hospital, Department of Dermatology, University of Health Sciences Turkey, İzmir, Turkey
- Nese Calli Demirkan
- Department of Pathology, Medical Faculty, Pamukkale University, Denizli, Turkey
- Hülya Cenk
- Pamukkale University Faculty of Medicine, Department of Dermatology, Denizli, Turkey
- Sule Goksin
- Pamukkale University Faculty of Medicine, Department of Dermatology, Denizli, Turkey
- Yunus Gural
- Firat University Faculty of Science, Division of Statistics, Elazig, Turkey
Collapse
|
10
|
Kushimo OO, Salau AO, Adeleke OJ, Olaoye DS. Deep learning model to improve melanoma detection in people of color. Arab Journal of Basic and Applied Sciences 2023. [DOI: 10.1080/25765299.2023.2170066] [Indexed: 03/07/2023]
Affiliation(s)
- Oluwatobi O. Kushimo
- Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
- Ayodeji Olalekan Salau
- Department of Electrical/Electronics and Computer Engineering, Afe Babalola University, Ado-Ekiti, Nigeria
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Oladapo J. Adeleke
- Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
- Doyinsola S. Olaoye
- Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
11
Kim J, Lee C, Choi S, Sung DI, Seo J, Lee YN, Lee JH, Han EJ, Kim AY, Park HS, Jung HJ, Kim JH, Lee JH. Augmented decision-making in wound care: Evaluating the clinical utility of a deep-learning model for pressure injury staging. Int J Med Inform 2023; 180:105266. [PMID: 37866277 DOI: 10.1016/j.ijmedinf.2023.105266] [Received: 07/14/2023] [Revised: 09/25/2023] [Accepted: 10/16/2023] [Indexed: 10/24/2023]
Abstract
BACKGROUND Precise categorization of pressure injury (PI) stages is critical in determining the appropriate treatment for wound care. However, the expertise necessary for PI staging is frequently unavailable in residential care settings. OBJECTIVE This study aimed to develop a convolutional neural network (CNN) model for classifying PIs and to investigate whether its implementation allows physicians to make better PI staging decisions. METHODS Using 3,098 clinical images (2,614 and 484 from internal and external datasets, respectively), a CNN was trained and validated to classify PIs and other related dermatoses. A two-part survey was conducted with 24 dermatology residents, ward nurses, and medical students to determine whether implementation of the CNN improved initial PI classification decisions. RESULTS The top-1 accuracy of the model was 0.793 (95% confidence interval [CI], 0.778-0.808) on the internal test set and 0.717 (95% CI, 0.676-0.758) on the external test set. The accuracy of PI staging among participants was 0.501 (95% CI, 0.487-0.515) in Part I, improving by 17.1 percentage points to 0.672 (95% CI, 0.660-0.684) in Part II. Furthermore, concordance between participants increased significantly with use of the CNN model, with Fleiss' κ of 0.414 (95% CI, 0.410-0.417) in Part I and 0.641 (95% CI, 0.638-0.644) in Part II. CONCLUSIONS The proposed CNN model can help classify PIs and relevant dermatoses. In addition, augmented decision-making can improve consultation accuracy while ensuring concordance between the clinical decisions made by a diverse group of health professionals.
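Fleiss' κ, the concordance statistic reported in this abstract, can be computed directly from a subject-by-category count matrix; a small self-contained implementation (not the authors' code) is:

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for an (n_subjects, n_categories) count matrix,
    where ratings[i, j] = number of raters assigning subject i to category j.
    Assumes the same number of raters for every subject."""
    n_sub, _ = ratings.shape
    n_rat = ratings.sum(axis=1)[0]                    # raters per subject
    p_j = ratings.sum(axis=0) / (n_sub * n_rat)       # overall category shares
    # Per-subject observed agreement among rater pairs
    P_i = (np.square(ratings).sum(axis=1) - n_rat) / (n_rat * (n_rat - 1))
    P_bar = P_i.mean()                                # mean observed agreement
    P_e = np.square(p_j).sum()                        # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

For example, perfect agreement (every rater picks the same stage for each subject) yields κ = 1, while systematic disagreement drives κ toward negative values.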
Affiliation(s)
- Jemin Kim
- Department of Dermatology, Yongin Severance Hospital, Yonsei University College of Medicine, Gyeonggi-do, Republic of Korea
- Changyoon Lee
- Department of Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sungchul Choi
- Department of Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Da-In Sung
- Department of Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jeonga Seo
- Department of Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yun Na Lee
- Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Joo Hee Lee
- Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Eun Jin Han
- Department of Nursing, Severance Hospital, Seoul, Republic of Korea
- Ah Young Kim
- Department of Nursing, Severance Hospital, Seoul, Republic of Korea
- Hyun Suk Park
- Department of Nursing, Severance Hospital, Seoul, Republic of Korea
- Hye Jeong Jung
- Department of Nursing, Severance Hospital, Seoul, Republic of Korea
- Jong Hoon Kim
- Department of Dermatology and Cutaneous Biology Research Institute, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Ju Hee Lee
- Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
12
Liu Z, Wang X, Ma Y, Lin Y, Wang G. Artificial intelligence in psoriasis: Where we are and where we are going. Exp Dermatol 2023; 32:1884-1899. [PMID: 37740587 DOI: 10.1111/exd.14938] [Received: 06/15/2023] [Revised: 09/05/2023] [Accepted: 09/09/2023] [Indexed: 09/24/2023]
Abstract
Artificial intelligence (AI) is a field of computer science concerned with developing programs that replicate human cognitive processes and analyze complex data. In dermatology, a predominantly visual diagnostic field, AI has become increasingly important in improving professional processes, particularly the diagnosis of psoriasis. In this review, we summarize current AI applications in psoriasis: (i) diagnosis, including identification, classification, lesion segmentation, and lesion severity and area scoring; (ii) treatment, including prediction of treatment efficacy and of candidate drugs; and (iii) management, including e-health and preventive medicine. Key challenges and future directions of AI in psoriasis are also discussed, in the hope of providing potential directions for future studies.
Affiliation(s)
- Zhenhua Liu
- Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Department of Dermatology, Tangdu Hospital, Fourth Military Medical University, Xi'an, China
- Xinyu Wang
- Department of Economics, Finance and Healthcare Administration, Valdosta State University, Valdosta, Georgia, USA
- Yao Ma
- Student Brigade of Basic Medicine School, Fourth Military Medical University, Xi'an, China
- Yiting Lin
- Department of Dermatology, Tangdu Hospital, Fourth Military Medical University, Xi'an, China
- Gang Wang
- Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
13
Derekas P, Spyridonos P, Likas A, Zampeta A, Gaitanis G, Bassukas I. The Promise of Semantic Segmentation in Detecting Actinic Keratosis Using Clinical Photography in the Wild. Cancers (Basel) 2023; 15:4861. [PMID: 37835555 PMCID: PMC10571759 DOI: 10.3390/cancers15194861] [Received: 08/18/2023] [Revised: 10/01/2023] [Accepted: 10/02/2023] [Indexed: 10/15/2023]
Abstract
Actinic keratosis (AK) is a common precancerous skin condition that requires effective detection and treatment monitoring. To improve monitoring of the AK burden in clinical settings with enhanced automation and precision, the present study evaluates semantic segmentation based on the U-Net architecture (AKU-Net). AKU-Net employs transfer learning to compensate for the relatively small dataset of annotated images and integrates a recurrent process based on convLSTM to exploit contextual information and address the challenges posed by the low contrast and ambiguous boundaries of AK-affected skin regions. We used an annotated dataset of 569 clinical photographs from 115 patients with actinic keratosis to train and evaluate the model. From each photograph, patches of 512 × 512 pixels were extracted using translated lesion boxes that encompassed lesions in different positions and captured different contexts of perilesional skin. In total, 16,488 translation-augmented crops were used for training, and 403 lesion-center crops were used for testing. To demonstrate the improvements in AK detection, AKU-Net was compared with plain U-Net and U-Net++ architectures. The experimental results highlight the effectiveness of AKU-Net, improving on existing approaches in both automation and precision and paving the way for more effective and reliable evaluation of actinic keratosis in clinical settings.
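The translation-augmented cropping described in this abstract can be sketched roughly as follows; the shift values and helper name are illustrative assumptions, since the paper's exact offsets are not given here:

```python
import numpy as np

def translation_crops(image, box, crop=512, shifts=(-64, 0, 64)):
    """Extract crop x crop patches whose window is translated around a lesion
    bounding box, so the lesion appears at different positions and the patch
    captures different perilesional context. `box` = (x0, y0, x1, y1).
    Illustrative sketch only; offsets are arbitrary."""
    h, w = image.shape[:2]
    cx, cy = (box[0] + box[2]) // 2, (box[1] + box[3]) // 2   # lesion center
    patches = []
    for dx in shifts:
        for dy in shifts:
            # Clamp the window so it stays inside the photograph
            x = int(np.clip(cx + dx - crop // 2, 0, max(w - crop, 0)))
            y = int(np.clip(cy + dy - crop // 2, 0, max(h - crop, 0)))
            patches.append(image[y:y + crop, x:x + crop])
    return patches
```

With a 3 × 3 grid of shifts this produces nine crops per lesion box, which is how a few hundred photographs can expand into thousands of training patches.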
Affiliation(s)
- Panagiotis Derekas
- Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece
- Panagiota Spyridonos
- Department of Medical Physics, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Aristidis Likas
- Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece
- Athanasia Zampeta
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Georgios Gaitanis
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Ioannis Bassukas
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
14
Akram T, Junejo R, Alsuhaibani A, Rafiullah M, Akram A, Almujally NA. Precision in Dermatology: Developing an Optimal Feature Selection Framework for Skin Lesion Classification. Diagnostics (Basel) 2023; 13:2848. [PMID: 37685386 PMCID: PMC10486423 DOI: 10.3390/diagnostics13172848] [Received: 08/05/2023] [Revised: 08/30/2023] [Accepted: 08/31/2023] [Indexed: 09/10/2023]
Abstract
Melanoma is widely recognized as one of the most lethal forms of skin cancer, and its incidence has trended upward in recent years. Nonetheless, timely detection of this malignancy substantially enhances patients' likelihood of long-term survival. Several computer-based methods have recently been proposed for diagnosing skin lesions at their early stages. Despite some success, there remains a margin of error that the machine-learning community considers an unresolved research challenge. The primary objective of this study was to maximize the input feature information by combining multiple deep models in the first phase, and then to avoid noisy and redundant information by downsampling the feature set with a novel evolutionary feature selection technique in the second phase. By maintaining the integrity of the original feature space, the proposed approach generated highly discriminant feature information. Recent deep models, including Darknet53, DenseNet201, InceptionV3, and InceptionResNetV2, were employed for feature extraction, and transfer learning was leveraged to enhance performance. In the subsequent phase, the feature information extracted by the chosen pre-trained models was fused to preserve maximum information before feature selection using a novel entropy-controlled gray wolf optimization (ECGWO) algorithm. Fusion and selection were thus employed, first to build a feature vector with a high level of information and then to eliminate redundant and irrelevant feature information. The effectiveness of this approach is supported by an assessment on three benchmark dermoscopic datasets: PH2, ISIC-MSK, and ISIC-UDA. To validate the proposed methodology, a comprehensive evaluation was conducted, including a rigorous comparison with established techniques in the field.
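The fusion step described in this abstract (concatenating deep features from several backbones) is straightforward to sketch. The ECGWO selector itself is not reproduced here; a simple mutual-information filter stands in for it, and all dimensions are invented for illustration:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 200
# Stand-ins for deep features from four backbones (Darknet53, DenseNet201,
# InceptionV3, InceptionResNetV2); dimensions are illustrative, not the
# networks' actual embedding sizes.
feats = [rng.normal(size=(n, d)) for d in (64, 48, 32, 32)]
y = rng.integers(0, 2, size=n)

fused = np.concatenate(feats, axis=1)          # serial fusion: one long vector
# Stand-in for ECGWO: rank fused dimensions by mutual information with the
# label and keep the most informative subset.
mi = mutual_info_classif(fused, y, random_state=0)
keep = np.argsort(mi)[-50:]
selected = fused[:, keep]
```

The point of the two stages is visible in the shapes: fusion inflates the feature vector (here 176 dimensions), and selection shrinks it back to a compact, discriminant subset before classification.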
Affiliation(s)
- Tallha Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Cantt Campus, Islamabad 45040, Pakistan
- Riaz Junejo
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Cantt Campus, Islamabad 45040, Pakistan
- Anas Alsuhaibani
- Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Muhammad Rafiullah
- Department of Mathematics, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan
- Adeel Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Cantt Campus, Islamabad 45040, Pakistan
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
15
Yang TT, Ma CW, Jhou JW, Chen YT, Lan CCE. Response predictor for pigment reduction after one session of photo-based therapy using convolutional neural network: A proof of concept study. Photodermatology, Photoimmunology & Photomedicine 2023; 39:498-505. [PMID: 37306455 DOI: 10.1111/phpp.12891] [Received: 02/28/2023] [Revised: 05/19/2023] [Accepted: 05/26/2023] [Indexed: 06/13/2023]
Abstract
BACKGROUND Identifying treatment responders after a single session of a photo-based procedure for hyperpigmentary disorders can be difficult. OBJECTIVES We aimed to train a convolutional neural network (CNN) to test the hypothesis that pretreatment photographs contain discernible features that identify favorable responses to photo-based treatments for facial hyperpigmentation, and to develop a clinically applicable algorithm to predict treatment outcome. METHODS Two hundred sixty-four sets of pretreatment photographs of subjects receiving photo-based treatment for esthetic enhancement were obtained using the VISIA® skin analysis system. Preprocessing masked the facial features in the photographs. Each set consists of five types of images. Five independently trained CNNs with a ResNet50 backbone were developed from these images, and their results were combined to obtain the final prediction. RESULTS The developed CNN algorithm reached a prediction accuracy of 78.5%, with an area under the receiver operating characteristic curve of 0.839. CONCLUSION The efficacy of photo-based therapies on facial skin pigmentation can be predicted from pretreatment images.
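The abstract says only that the five per-image-type CNN outputs were "combined"; one common late-fusion choice, assumed here purely for illustration, is to average the predicted probabilities:

```python
import numpy as np

def late_fusion(prob_list):
    """Average per-view predicted probabilities and return the fused class.
    The combination rule is an assumption: probability averaging is one
    standard way to merge independently trained models."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg.argmax(axis=-1), avg

# Toy outputs from three per-image-type models for two subjects
# (columns: P(non-responder), P(responder)); values are made up.
views = [
    np.array([[0.3, 0.7], [0.6, 0.4]]),
    np.array([[0.2, 0.8], [0.55, 0.45]]),
    np.array([[0.4, 0.6], [0.7, 0.3]]),
]
labels, fused = late_fusion(views)
```

Because each model sees a different image modality, averaging lets a confident view compensate for an ambiguous one while keeping the fused output a valid probability distribution.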
Affiliation(s)
- Ting-Ting Yang
- Department of Dermatology, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Ching-Wen Ma
- College of Artificial Intelligence, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Jyun-Wei Jhou
- College of Artificial Intelligence, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Yu-Ting Chen
- College of Artificial Intelligence, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Cheng-Che E Lan
- Department of Dermatology, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Dermatology, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
16
Kim J, Oh I, Lee YN, Lee JH, Lee YI, Kim J, Lee JH. Predicting the severity of postoperative scars using artificial intelligence based on images and clinical data. Sci Rep 2023; 13:13448. [PMID: 37596459 PMCID: PMC10439171 DOI: 10.1038/s41598-023-40395-z] [Received: 04/15/2023] [Accepted: 08/09/2023] [Indexed: 08/20/2023]
Abstract
Evaluation of scar severity is crucial for determining proper treatment modalities; however, there is no gold standard for assessing scars. This study aimed to develop and evaluate an artificial intelligence model using images and clinical data to predict the severity of postoperative scars. Deep neural network models were trained and validated using images and clinical data from 1283 patients (main dataset: 1043; external dataset: 240) with post-thyroidectomy scars. Additionally, the performance of the model was tested against 16 dermatologists. In the internal test set, the area under the receiver operating characteristic curve (ROC-AUC) of the image-based model was 0.931 (95% confidence interval 0.910‒0.949), which increased to 0.938 (0.916‒0.955) when combined with clinical data. In the external test set, the ROC-AUC of the image-based and combined prediction models were 0.896 (0.874‒0.916) and 0.912 (0.892‒0.932), respectively. In addition, the performance of the tested algorithm with images from the internal test set was comparable with that of 16 dermatologists. This study revealed that a deep neural network model derived from image and clinical data could predict the severity of postoperative scars. The proposed model may be utilized in clinical practice for scar management, especially for determining severity and treatment initiation.
Affiliation(s)
- Jemin Kim
- Department of Dermatology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, South Korea
- Scar Laser and Plastic Surgery Center, Yonsei Cancer Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Inrok Oh
- LG Chem Ltd., Seoul, South Korea
- Yun Na Lee
- Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Joo Hee Lee
- Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Young In Lee
- Scar Laser and Plastic Surgery Center, Yonsei Cancer Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Jihee Kim
- Department of Dermatology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, South Korea
- Scar Laser and Plastic Surgery Center, Yonsei Cancer Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Ju Hee Lee
- Scar Laser and Plastic Surgery Center, Yonsei Cancer Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
17
Schuh S, Schiele S, Thamm J, Kranz S, Welzel J, Blum A. Implementation of a dermatoscopy curriculum during residency at Augsburg University Hospital in Germany. J Dtsch Dermatol Ges 2023; 21:872-879. [PMID: 37235503 DOI: 10.1111/ddg.15115] [Received: 12/18/2022] [Accepted: 04/04/2023] [Indexed: 05/28/2023]
Abstract
BACKGROUND AND OBJECTIVES To date, there is no structured program for dermatoscopy training during residency in Germany. Whether and how much dermatoscopy training is acquired is left to the initiative of each resident, although dermatoscopy is one of the core competencies of dermatological training and daily practice. The aim of the study was to establish a structured dermatoscopy curriculum during residency at the University Hospital Augsburg. PATIENTS AND METHODS An online platform with dermatoscopy modules was created, accessible regardless of time and place. Practical skills were acquired under the personal guidance of a dermatoscopy expert. Participants were tested on their level of knowledge before and after completing the modules. Test scores on management decisions and correct dermatoscopic diagnosis were analyzed. RESULTS Results of 28 participants showed improvements in management decisions from pre- to posttest (74.0% vs. 89.4%) and in dermatoscopic accuracy (65.0% vs. 85.6%). Pre- vs. posttest differences in test score (7.05/10 vs. 8.94/10 points) and correct diagnosis were significant (p < 0.001). CONCLUSIONS The dermatoscopy curriculum increases the number of correct management decisions and dermatoscopy diagnoses. This will result in more skin cancers being detected, and fewer benign lesions being excised. The curriculum can be offered to other dermatology training centers and medical professionals.
Affiliation(s)
- Sandra Schuh
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Stefan Schiele
- Institute of Mathematics, University of Augsburg, Augsburg, Germany
- Janis Thamm
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Stefanie Kranz
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Julia Welzel
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Andreas Blum
- Public, Private and Teaching Practice of Dermatology, Konstanz, Germany
18
Schuh S, Schiele S, Thamm J, Kranz S, Welzel J, Blum A. Implementierung eines Dermatoskopie-Curriculums in der Facharztausbildung am Universitätsklinikum Augsburg. J Dtsch Dermatol Ges 2023; 21:872-881. [PMID: 37574685 DOI: 10.1111/ddg.15115_g] [Received: 12/18/2022] [Accepted: 04/04/2023] [Indexed: 08/15/2023]
Abstract
BACKGROUND AND OBJECTIVES To date, there is no structured program for dermatoscopy training during residency in Germany. Whether and to what extent residents train in dermatoscopy is left to their own initiative, although dermatoscopy is one of the core competencies of dermatological training and daily practice. The aim of the study was to establish a structured dermatoscopy curriculum during dermatology residency at the University Hospital Augsburg. PATIENTS AND METHODS An online platform with dermatoscopy modules was created, accessible from anywhere at any time. Practical skills were acquired under the individual guidance of a dermatoscopy expert. Participants were tested on their level of knowledge before and after completing the modules. Test results on therapeutic management and correct dermatoscopic diagnosis were analyzed. RESULTS The results of the 28 participants improved from pre- to posttest in management decisions (74.0% vs. 89.4%) and in dermatoscopic accuracy (65.0% vs. 85.6%). Differences between pre- and posttest in total score (7.05/10 vs. 8.94/10 points) and in correct diagnosis were significant (p < 0.001). CONCLUSIONS The dermatoscopy curriculum improves participants' management decisions and dermatoscopic diagnoses. This will result in more skin cancers being detected and fewer benign lesions being excised. The curriculum can be offered to other dermatology training centers and health professionals.
Affiliation(s)
- Sandra Schuh
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
- Janis Thamm
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
- Stefanie Kranz
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
- Julia Welzel
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
- Andreas Blum
- Hautarzt- und Lehrpraxis für Dermatologie, Konstanz
19
Grossarth S, Mosley D, Madden C, Ike J, Smith I, Huo Y, Wheless L. Recent Advances in Melanoma Diagnosis and Prognosis Using Machine Learning Methods. Curr Oncol Rep 2023; 25:635-645. [PMID: 37000340 PMCID: PMC10339689 DOI: 10.1007/s11912-023-01407-3] [Accepted: 03/13/2023] [Indexed: 04/01/2023]
Abstract
PURPOSE OF REVIEW The purpose was to summarize the current role and state of artificial intelligence and machine learning in the diagnosis and management of melanoma. RECENT FINDINGS Deep learning algorithms can identify melanoma from clinical, dermoscopic, and whole slide pathology images with increasing accuracy. Efforts to provide more granular annotation to datasets and to identify new predictors are ongoing. There have been many incremental advances in both melanoma diagnostics and prognostic tools using artificial intelligence and machine learning. Higher quality input data will further improve these models' capabilities.
Affiliation(s)
- Sarah Grossarth
- Quillen College of Medicine, East Tennessee State University, Johnson City, TN, USA
- Christopher Madden
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA
- State University of New York Downstate College of Medicine, Brooklyn, NY, USA
- Jacqueline Ike
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA
- Meharry Medical College, Nashville, TN, USA
- Isabelle Smith
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University, Nashville, TN, USA
- Yuankai Huo
- Department of Computer Science and Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Lee Wheless
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Medicine, Division of Epidemiology, Vanderbilt University Medical Center, Nashville, TN, USA
- Tennessee Valley Healthcare System VA Medical Center, Nashville, TN, USA
20
Ain QU, Al-Sahaf H, Xue B, Zhang M. Automatically Diagnosing Skin Cancers From Multimodality Images Using Two-Stage Genetic Programming. IEEE Transactions on Cybernetics 2023; 53:2727-2740. [PMID: 35797327 DOI: 10.1109/tcyb.2022.3182474] [Indexed: 06/15/2023]
Abstract
Developing a computer-aided diagnostic system for detecting various skin malignancies from images has attracted many researchers. Unlike many machine-learning approaches, such as artificial neural networks, genetic programming (GP) automatically evolves models with a flexible representation. GP provides effective solutions through its intrinsic ability to select prominent features (feature selection) and build new ones (feature construction). Existing approaches have used GP to construct new features from the complete set of original features and the set of operators. However, the complete feature set may contain redundant or irrelevant features that provide no useful information for classification. This study develops a two-stage GP method in which the first stage selects prominent features and the second stage constructs new features from these selected features and operators, such as multiplication, in a wrapper approach to improve classification performance. To capture local, global, texture, color, and multiscale image properties of skin images, GP selects and constructs features extracted from local binary patterns and pyramid-structured wavelet decomposition. The accuracy of this GP method is assessed on two real-world skin image datasets captured with standard cameras and specialized instruments, and compared with commonly used classification algorithms, three state-of-the-art methods, and an existing embedded GP method. The results reveal that this new approach to feature selection and feature construction effectively improves the performance of machine-learning classification algorithms. Unlike black-box models, the models evolved by GP are interpretable; the proposed method can therefore assist dermatologists in identifying prominent features, as shown by further analysis of the evolved models.
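A rough sketch of the two-stage select-then-construct idea described above; the GP search itself is replaced by a simple univariate filter, and pairwise products stand in for GP-constructed features, so this is an analogy rather than the paper's method:

```python
import numpy as np
from itertools import combinations
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 20))    # stand-in for LBP/wavelet image features
y = rng.integers(0, 2, size=150)

# Stage 1: select prominent features. The paper evolves this with GP;
# a univariate F-test filter is used here as a simple stand-in.
selector = SelectKBest(f_classif, k=5).fit(X, y)
prominent = X[:, selector.get_support(indices=True)]

# Stage 2: construct new features from the selected features only, e.g.
# pairwise products, mirroring GP's use of operators such as multiplication.
constructed = np.column_stack(
    [prominent[:, i] * prominent[:, j] for i, j in combinations(range(5), 2)]
)
X_new = np.hstack([prominent, constructed])   # selected + constructed features
```

Restricting construction to the already-selected features is the key design choice: it keeps the constructed feature space small and built only from informative inputs, which is the motivation for the two-stage design.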
21
Steele L, Tan XL, Olabi B, Gao JM, Tanaka RJ, Williams HC. Determining the clinical applicability of machine learning models through assessment of reporting across skin phototypes and rarer skin cancer types: A systematic review. J Eur Acad Dermatol Venereol 2023; 37:657-665. [PMID: 36514990 DOI: 10.1111/jdv.18814] [Received: 05/13/2022] [Accepted: 11/09/2022] [Indexed: 12/15/2022]
Abstract
Machine learning (ML) models for skin cancer recognition may perform variably across skin phototypes and skin cancer types, and overall performance metrics alone are insufficient to detect poor subgroup performance. We aimed (1) to assess whether studies of ML models reported results separately for different skin phototypes and rarer skin cancers, and (2) to graphically represent the skin cancer training datasets used by current ML models. In this systematic review, we searched PubMed, Embase, and CENTRAL. We included all studies in medical journals assessing an ML technique for skin cancer diagnosis that used clinical or dermoscopic images from 1 January 2012 to 22 September 2021. No language restrictions were applied. We considered rarer skin cancers to be skin cancers other than pigmented melanoma, basal cell carcinoma, and squamous cell carcinoma. We identified 114 studies for inclusion. Rarer skin cancers were included by 8/114 studies (7.0%), and results for a rarer skin cancer were reported separately in 1/114 studies (0.9%). Performance was reported across all skin phototypes in 1/114 studies (0.9%), but performance in skin phototypes I and VI remained uncertain because these phototypes were minimally represented in the test dataset (9/3756 and 1/3756 images, respectively). Although public training datasets were used most frequently, with the International Skin Imaging Collaboration (ISIC) archive the most widely used (65/114 studies, 57.0%), the largest datasets were private. Our review found that most ML models did not report performance separately for rarer skin cancers or different skin phototypes. Some variability in ML model performance across subgroups is expected, but the current lack of transparency is not justifiable and risks models being used inappropriately in populations in whom accuracy is low.
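The subgroup reporting this review calls for amounts to stratifying metrics by phototype rather than pooling them; a toy sketch with made-up predictions:

```python
import pandas as pd

# Toy per-image predictions with the skin phototype recorded; reporting
# metrics per subgroup (not just overall) is the practice the review urges.
df = pd.DataFrame({
    "phototype": ["I", "I", "IV", "IV", "VI", "VI"],
    "truth":     [1, 0, 1, 0, 1, 0],   # 1 = skin cancer present
    "pred":      [1, 0, 1, 1, 0, 0],   # model output
})

def sensitivity(g):
    """Fraction of true positives detected within one phototype group."""
    pos = g[g.truth == 1]
    return (pos.pred == 1).mean()

per_group = df.groupby("phototype").apply(sensitivity)
support = df.groupby("phototype").size()   # tiny n => unreliable estimates
```

Reporting `support` alongside `per_group` makes the review's core point concrete: a subgroup sensitivity computed from a handful of images (such as 1/3756 for phototype VI) carries almost no evidential weight.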
Affiliation(s)
- Lloyd Steele
- Department of Dermatology, The Royal London Hospital, London, UK; Centre for Cell Biology and Cutaneous Research, Blizard Institute, Queen Mary University of London, London, UK
- Xiang Li Tan
- St George's University Hospitals NHS Foundation Trust, London, UK
- Bayanne Olabi
- Biosciences Institute, Newcastle University, Newcastle, UK
- Jing Mia Gao
- Department of Dermatology, The Royal London Hospital, London, UK
- Reiko J Tanaka
- Department of Bioengineering, Imperial College London, London, UK
- Hywel C Williams
- Centre of Evidence-Based Dermatology, School of Medicine, University of Nottingham, Nottingham, UK
22
Hasan MK, Ahamad MA, Yap CH, Yang G. A survey, review, and future trends of skin lesion segmentation and classification. Comput Biol Med 2023; 155:106624. [PMID: 36774890 DOI: 10.1016/j.compbiomed.2023.106624]
Abstract
The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently indicated increasing interest in developing such CAD systems, with the intention of providing a user-friendly tool to dermatologists to reduce the challenges encountered or associated with manual inspection. This article aims to provide a comprehensive literature survey and review of a total of 594 publications (356 for skin lesion segmentation and 238 for skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the methods for the development of CAD systems. These ways include: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentations, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We intend to investigate a variety of performance-enhancing approaches, including ensemble and post-processing. We also discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems using minimal datasets, as well as the potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
Affiliation(s)
- Md Kamrul Hasan
- Department of Bioengineering, Imperial College London, UK; Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
- Md Asif Ahamad
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, UK
- Guang Yang
- National Heart and Lung Institute, Imperial College London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, UK
23
Liu L, Liang C, Xue Y, Chen T, Chen Y, Lan Y, Wen J, Shao X, Chen J. An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input. Dermatol Ther (Heidelb) 2023; 13:569-579. [PMID: 36577888 PMCID: PMC9884721 DOI: 10.1007/s13555-022-00874-z]
Abstract
INTRODUCTION The diagnosis of melasma is often based on physicians' naked-eye judgment. This is challenging for inexperienced physicians and non-professionals, and incorrect treatment can have serious consequences, so an accurate method for melasma diagnosis is needed. The objective of this study was to develop and validate a deep learning-based intelligent diagnostic system for melasma images. METHODS A total of 8010 images from the VISIA system, comprising 4005 images of patients with melasma and 4005 images of patients without melasma, were collected for training and testing. Building on four high-performance architectures (DenseNet, ResNet, Swin Transformer, and MobileNet), we evaluated the performance of deep learning models as binary melasma/non-melasma classifiers. Furthermore, because VISIA captures five image modes for each shot, we fused these modes via multichannel image input in different combinations to explore whether multimode images could improve network performance. RESULTS The proposed network based on DenseNet121 achieved the best performance for the melasma classifier, with an accuracy of 93.68% and an area under the curve (AUC) of 97.86% on the test set. Gradient-weighted Class Activation Mapping showed that its predictions were interpretable. In further experiments on the five VISIA modes, the best-performing single mode was "BROWN SPOTS," and the combination of the "NORMAL," "BROWN SPOTS," and "UV SPOTS" modes significantly improved network performance, achieving the highest accuracy of 97.4% and AUC of 99.28%. CONCLUSIONS Deep learning is feasible for diagnosing melasma. The proposed network not only performs well on clinical images of melasma but also achieves high accuracy by using multiple VISIA image modes.
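The multimode input described in this abstract amounts to channel-wise fusion of co-registered images. A minimal sketch, assuming each VISIA mode arrives as an H x W x 3 RGB array (the array names and 224 x 224 size are illustrative, not from the paper):

```python
import numpy as np

def fuse_modes(*mode_images: np.ndarray) -> np.ndarray:
    """Stack co-registered image modes (each H x W x 3) along the channel
    axis, yielding one H x W x (3*k) multichannel network input."""
    assert all(img.shape == mode_images[0].shape for img in mode_images)
    return np.concatenate(mode_images, axis=-1)

# Three hypothetical modes of the same shot, each a 224 x 224 RGB array.
normal = np.zeros((224, 224, 3), dtype=np.float32)
brown_spots = np.zeros((224, 224, 3), dtype=np.float32)
uv_spots = np.zeros((224, 224, 3), dtype=np.float32)

fused = fuse_modes(normal, brown_spots, uv_spots)
print(fused.shape)  # (224, 224, 9)
```

A network's first convolution then simply expects 9 input channels instead of 3; the rest of the architecture is unchanged.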
Affiliation(s)
- Lin Liu
- Department of Dermatology, The First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, 400016, China
- Medical Data Science Academy, Chongqing Medical University, Chongqing, China
- Chen Liang
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Yuzhou Xue
- Department of Cardiology and Institute of Vascular Medicine, Peking University Third Hospital, Beijing, China
- Tingqiao Chen
- Department of Dermatology, The First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, 400016, China
- Yangmei Chen
- Department of Dermatology, The First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, 400016, China
- Yufan Lan
- Chongqing Medical University, Chongqing, China
- Jiamei Wen
- Department of Otolaryngology-Head and Neck Surgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Xinyi Shao
- Department of Dermatology, The First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, 400016, China
- Jin Chen
- Department of Dermatology, The First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, 400016, China
24
Spyridonos P, Gaitanis G, Likas A, Bassukas ID. A convolutional neural network based system for detection of actinic keratosis in clinical images of cutaneous field cancerization. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104059]
25
Jartarkar SR, Cockerell CJ, Patil A, Kassir M, Babaei M, Weidenthaler-Barth B, Grabbe S, Goldust M. Artificial intelligence in Dermatopathology. J Cosmet Dermatol 2022; 22:1163-1167. [PMID: 36548174 DOI: 10.1111/jocd.15565]
Abstract
INTRODUCTION Ever-evolving research in the medical field has reached an exciting stage with the advent of newer technologies. With the introduction of digital microscopy, pathology has become a more digitally oriented specialty. Artificial intelligence (AI) in dermatopathology has the potential to aid diagnosis, but it requires dermatopathologists' guidance to function efficiently. METHOD A comprehensive literature search was performed using the electronic online databases PubMed and Google Scholar. Articles published in English were considered for the review. RESULTS The convolutional neural network, a type of deep neural network, is considered an ideal tool for image recognition, processing, classification, and segmentation. In tumor pathology, AI is applied to diagnosis, grading, staging, and prognostic prediction, as well as to the identification of genetic and pathological features. In this review, we discuss the use of AI in dermatopathology, the attitudes of patients and clinicians, its challenges and limitations, and potential opportunities for future implementation.
Affiliation(s)
- Shishira R. Jartarkar
- Department of Dermatology, Vydehi Institute of Medical Sciences and Research Centre, University-RGUHS, Bengaluru, India
- Clay J. Cockerell
- Departments of Dermatology and Pathology, The University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Anant Patil
- Department of Pharmacology, Dr. DY Patil Medical College, Navi Mumbai, India
- Mahsa Babaei
- School of Medicine, Stanford University, California, USA
- Beate Weidenthaler-Barth
- Department of Dermatology, University Medical Center of the Johannes Gutenberg University, Mainz, Germany
- Stephan Grabbe
- Department of Dermatology, University Medical Center of the Johannes Gutenberg University, Mainz, Germany
- Mohamad Goldust
- Department of Dermatology, University Medical Center Mainz, Mainz, Germany
26
Weng W, Imaizumi M, Murono S, Zhu X. Expert-level aspiration and penetration detection during flexible endoscopic evaluation of swallowing with artificial intelligence-assisted diagnosis. Sci Rep 2022; 12:21689. [PMID: 36522385 PMCID: PMC9753025 DOI: 10.1038/s41598-022-25618-z]
Abstract
Flexible endoscopic evaluation of swallowing (FEES) is considered the gold standard in diagnosing oropharyngeal dysphagia. Recent advances in deep learning have led to a resurgence of artificial intelligence-assisted computer-aided diagnosis (AI-assisted CAD) for a variety of applications. AI-assisted CAD would be a remarkable benefit in providing medical services to populations with inadequate access to dysphagia experts, especially in aging societies. This paper presents an AI-assisted CAD named FEES-CAD for aspiration and penetration detection on video recordings during FEES. FEES-CAD segments the input FEES video and classifies penetration, aspiration, residue in the vallecula, and residue in the hypopharynx based on the segmented FEES video. We collected and annotated FEES videos from 199 patients to train the network and tested the performance of FEES-CAD using FEES videos from another 40 patients. These patients consecutively underwent FEES between December 2016 and August 2019 at Fukushima Medical University Hospital. FEES videos were deidentified, randomized, and rated by FEES-CAD and laryngologists with over 15 years of experience in performing FEES. FEES-CAD achieved an average Dice similarity coefficient of 98.6%. FEES-CAD achieved expert-level accuracy on the penetration (92.5%), aspiration (92.5%), residue in the vallecula (100%), and residue in the hypopharynx (87.5%) classification tasks. To the best of our knowledge, FEES-CAD is the first CNN-based system that achieves expert-level performance in detecting aspiration and penetration.
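The Dice similarity coefficient reported above is a standard overlap measure between a predicted segmentation mask and the ground-truth mask. A minimal sketch on binary masks (the toy masks are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy example: the prediction recovers 3 of the 4 true foreground pixels.
truth = np.array([[1, 1], [1, 1]])
pred = np.array([[1, 1], [1, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 0.857
```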
Affiliation(s)
- Weihao Weng
- Graduate School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, 965-8580, Japan
- Mitsuyoshi Imaizumi
- Department of Otolaryngology, Fukushima Medical University, Fukushima, 960-1295, Japan
- Shigeyuki Murono
- Department of Otolaryngology, Fukushima Medical University, Fukushima, 960-1295, Japan
- Xin Zhu
- Graduate School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, 965-8580, Japan
27
Jartarkar SR. Artificial intelligence: Its role in dermatopathology. Indian J Dermatol Venereol Leprol 2022:1-4. [PMID: 36688886 DOI: 10.25259/ijdvl_725_2021]
Abstract
Artificial intelligence (AI), a major frontier in medical research, can potentially lead to a paradigm shift in clinical practice. One type of AI system, the convolutional neural network, points to the possible utility of deep learning in dermatopathology. Though pathology has traditionally been restricted to microscopes and glass slides, recent advances in digital pathology imaging have made it a promising branch for the implementation of artificial intelligence. The current application of artificial intelligence in dermatopathology is to complement diagnosis, and it requires a well-trained dermatopathologist's guidance for better design and development of deep learning algorithms. Here we review recent advances of artificial intelligence in dermatopathology and its applications in disease diagnosis and research, along with its limitations and future potential.
Affiliation(s)
- Shishira R Jartarkar
- Department of Dermatology, Venereology and Leprosy, Vydehi Institute of Medical Sciences and Research Centre, Whitefield, Bengaluru, Karnataka, India
28
Liopyris K, Gregoriou S, Dias J, Stratigos AJ. Artificial Intelligence in Dermatology: Challenges and Perspectives. Dermatol Ther (Heidelb) 2022; 12:2637-2651. [PMID: 36306100 PMCID: PMC9674813 DOI: 10.1007/s13555-022-00833-8]
Abstract
Artificial intelligence (AI) based on machine learning and convolutional neural networks (CNNs) is rapidly becoming a realistic prospect in dermatology. Non-melanoma skin cancer is the most common cancer worldwide, and melanoma is one of the deadliest forms of cancer. Dermoscopy has improved physicians' diagnostic accuracy for skin cancer recognition, but that accuracy unfortunately remains comparatively low. AI could provide invaluable aid in the early evaluation and diagnosis of skin cancer. In the last decade, there has been a breakthrough of new research and publications in the field of AI. Studies have shown that CNN algorithms can classify skin lesions from dermoscopic images with performance superior, or at least equivalent, to that of clinicians. Even though AI algorithms have shown very promising results for the diagnosis of skin cancer in reader studies, their generalizability and applicability in everyday clinical practice remain elusive. Herein we summarize the potential pitfalls and challenges of AI underlined in reader studies and pinpoint strategies to overcome these limitations in future studies. Finally, we analyze the advantages and opportunities that lie ahead for dermatology and patients with the potential use of AI in clinical practice.
Affiliation(s)
- Konstantinos Liopyris
- 1st Department of Dermatology-Venereology, Andreas Sygros Hospital, National and Kapodistrian University of Athens, 5 Ionos Dragoumi Str, 16121, Athens, Greece
- Dermatology Department, Memorial Sloan Kettering Cancer Center, New York, NY, 10021, USA
- Stamatios Gregoriou
- 1st Department of Dermatology-Venereology, Andreas Sygros Hospital, National and Kapodistrian University of Athens, 5 Ionos Dragoumi Str, 16121, Athens, Greece
- Julia Dias
- 1st Department of Dermatology-Venereology, Andreas Sygros Hospital, National and Kapodistrian University of Athens, 5 Ionos Dragoumi Str, 16121, Athens, Greece
- Alexandros J Stratigos
- 1st Department of Dermatology-Venereology, Andreas Sygros Hospital, National and Kapodistrian University of Athens, 5 Ionos Dragoumi Str, 16121, Athens, Greece
29
Puri P, Comfere N, Drage LA, Shamim H, Bezalel SA, Pittelkow MR, Davis MDP, Wang M, Mangold AR, Tollefson MM, Lehman JS, Meves A, Yiannias JA, Otley CC, Carter RE, Sokumbi O, Hall MR, Bridges AG, Murphree DH. Deep learning for dermatologists: Part II. Current applications. J Am Acad Dermatol 2022; 87:1352-1360. [PMID: 32428608 PMCID: PMC7669658 DOI: 10.1016/j.jaad.2020.05.053]
Abstract
Because of a convergence of the availability of large data sets, graphics-specific computer hardware, and important theoretical advancements, artificial intelligence has recently contributed to dramatic progress in medicine. One type of artificial intelligence known as deep learning has been particularly impactful for medical image analysis. Deep learning applications have shown promising results in dermatology and other specialties, including radiology, cardiology, and ophthalmology. The modern clinician will benefit from an understanding of the basic features of deep learning to effectively use new applications and to better gauge their utility and limitations. In this second article of a 2-part series, we review the existing and emerging clinical applications of deep learning in dermatology and discuss future opportunities and limitations. Part 1 of this series offered an introduction to the basic concepts of deep learning to facilitate effective communication between clinicians and technical experts.
Affiliation(s)
- Pranav Puri
- Mayo Clinic Alix School of Medicine, Scottsdale, Arizona; Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota
- Nneka Comfere
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota; Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Lisa A Drage
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota
- Huma Shamim
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota
- Spencer A Bezalel
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota
- Mark R Pittelkow
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Scottsdale, Arizona
- Mark D P Davis
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota
- Michael Wang
- Department of Dermatology, University of California San Francisco, San Francisco, California
- Aaron R Mangold
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Scottsdale, Arizona
- Megha M Tollefson
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota
- Julia S Lehman
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota; Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Alexander Meves
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota
- Clark C Otley
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota
- Rickey E Carter
- Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo Clinic, Jacksonville, Florida
- Olayemi Sokumbi
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Jacksonville, Florida; Department of Laboratory Medicine and Pathology, Mayo Clinic, Jacksonville, Florida
- Matthew R Hall
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Jacksonville, Florida
- Alina G Bridges
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Dermatology, Mayo Clinic, Rochester, Minnesota; Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Dennis H Murphree
- Mayo Clinic Office of Artificial Intelligence in Dermatology, Rochester, Minnesota; Department of Health Sciences Research, Division of Digital Health Sciences, Mayo Clinic, Rochester, Minnesota
30
Requa J, Godard T, Mandal R, Balzer B, Whittemore D, George E, Barcelona F, Lambert C, Lee J, Lambert A, Larson A, Osmond G. High-fidelity detection, subtyping, and localization of five skin neoplasms using supervised and semi-supervised learning. J Pathol Inform 2022; 14:100159. [PMID: 36506813 PMCID: PMC9731861 DOI: 10.1016/j.jpi.2022.100159]
Abstract
Background Skin cancers are the most common malignancies diagnosed worldwide. While the early detection and treatment of pre-cancerous and cancerous skin lesions can dramatically improve outcomes, factors such as a global shortage of pathologists, increased workloads, and high rates of diagnostic discordance underscore the need for techniques that improve pathology workflows. Although AI models are now being used to classify lesions from whole slide images (WSIs), diagnostic performance rarely surpasses that of expert pathologists. Objectives The objective of the present study was to create an AI model that detects and classifies skin lesions with a higher degree of sensitivity than previously demonstrated, with the potential to match and eventually surpass expert pathologists and improve clinical workflows. Methods We combined supervised learning (SL) with semi-supervised learning (SSL) to produce an end-to-end multi-level skin detection system that not only detects 5 main types of skin lesions with high sensitivity and specificity, but also subtypes, localizes, and provides margin status to evaluate the proximity of the lesion to non-epidermal margins. The Supervised Training Subset consisted of 2188 random WSIs collected by the PathologyWatch (PW) laboratory between 2013 and 2018, while the Weakly Supervised Subset consisted of 5161 WSIs from daily case specimens. The Validation Set consisted of 250 curated daily case WSIs obtained from the PW tissue archives and included 50 "mimickers". The Testing Set (3821 WSIs) was composed of non-curated daily case specimens collected from July 20, 2021 to August 20, 2021 from PW laboratories. Results The performance characteristics of our AI model (Mihm) were assessed retrospectively by running the Testing Set through the Mihm Evaluation Pipeline. The sensitivity of Mihm in classifying melanocytic lesions, basal cell carcinoma, atypical squamous lesions, verruca vulgaris, and seborrheic keratosis was 98.91% (95% CI: 98.27%, 99.55%), 97.24% (95% CI: 96.15%, 98.33%), 95.26% (95% CI: 93.79%, 96.73%), 93.50% (95% CI: 89.14%, 97.86%), and 86.91% (95% CI: 82.13%, 91.69%), respectively. Additionally, our multi-level (patch-level, ROI-level, and WSI-level) detection algorithm includes a qualitative feature that subtypes lesions, an AI overlay in the front-end digital display that localizes diagnostic ROIs, and a report on margin status based on detected overlap between lesions and non-epidermal tissue margins. Conclusions Our AI model, developed in collaboration with dermatopathologists, detects 5 skin lesion types with higher sensitivity than previously published AI models, and provides end users with subtyping, localization, and margin-status information in a front-end digital display. Our end-to-end system has the potential to improve pathology workflows by increasing diagnostic accuracy, expediting the course of patient care, and ultimately improving patient outcomes.
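Per-class sensitivities with 95% confidence intervals, as quoted above, are derived from per-class true-positive and false-negative counts. A minimal sketch using the normal-approximation (Wald) interval; the counts below are hypothetical, not the study's data, and the study does not state which interval method it used:

```python
import math

def sensitivity_with_ci(tp: int, fn: int, z: float = 1.96):
    """Sensitivity TP/(TP+FN) with a normal-approximation (Wald) 95% CI,
    clipped to [0, 1]."""
    n = tp + fn
    sn = tp / n
    half_width = z * math.sqrt(sn * (1 - sn) / n)
    return sn, max(0.0, sn - half_width), min(1.0, sn + half_width)

# Hypothetical counts: 980 lesions correctly flagged, 20 missed.
sn, lo, hi = sensitivity_with_ci(980, 20)
print(f"Sn = {sn:.2%} (95% CI {lo:.2%}, {hi:.2%})")
```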
Affiliation(s)
- James Requa
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Tuatini Godard
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Rajni Mandal
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Bonnie Balzer
- Cedars-Sinai Medical Center, 8700 Beverly Blvd, Los Angeles, CA 90048, USA
- Darren Whittemore
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Eva George
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Chalette Lambert
- Kirk Kerkorian School of Medicine at UNLV, University of Nevada, Las Vegas, Mail Stop: 3070, 2040 W Charleston Blvd., Las Vegas, NV 89102-2244, USA
- Jonathan Lee
- Bethesda Dermatopathology Laboratory, 1730 Elton Road, Silver Spring, MD 20903, USA
- Allison Lambert
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- April Larson
- Pathology Watch, 497 West 4800 South, Suite 201, Murray, UT 84123, USA
- Gregory Osmond
- Intermountain Healthcare, Saint George Regional Hospital, Department of Pathology, 1380 East Medical Center Drive, Saint George, Utah 84790, USA (corresponding author)
31
Distinguish the Value of the Benign Nevus and Melanomas Using Machine Learning: A Meta-Analysis and Systematic Review. Mediators Inflamm 2022; 2022:1734327. [PMID: 36274972 PMCID: PMC9586788 DOI: 10.1155/2022/1734327]
Abstract
Background Melanoma is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy, and histopathological examination. We aimed to systematically review the performance and quality of machine learning-based methods for distinguishing melanoma from benign nevi in the relevant literature. Method Four databases (Web of Science, PubMed, Embase, and the Cochrane Library) were searched for relevant studies published until March 26, 2022. The Prediction model Risk Of Bias ASsessment Tool (PROBAST) was used to assess the risk of bias of the included studies. Result This systematic review included thirty studies with 114,007 subjects and 71 machine learning models. The convolutional neural network was the main machine learning method. The pooled sensitivity was 85% (95% CI 82-87%), the specificity was 86% (82-88%), and the C-index was 0.87 (0.84-0.90). Conclusion Our findings showed that ML algorithms had high sensitivity and specificity for distinguishing between melanoma and benign nevi, suggesting that state-of-the-art ML-based algorithms for this task may be ready for clinical use. However, a large proportion of the earlier published studies had methodological flaws, such as lack of external validation and lack of clinician comparisons, so their results should be interpreted with caution.
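The pooled sensitivity and specificity quoted in this abstract come from combining each study's 2x2 diagnostic table. Published meta-analyses typically fit bivariate random-effects models, but a naive fixed pooling of summed counts conveys the basic idea; the two studies' counts below are hypothetical:

```python
def pooled_accuracy(studies):
    """Naively pool 2x2 diagnostic counts across studies.
    Each study is a (TP, FP, FN, TN) tuple; returns (sensitivity, specificity)."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)

# Two hypothetical melanoma-vs-nevus studies with (TP, FP, FN, TN) counts.
sn, sp = pooled_accuracy([(85, 14, 15, 86), (90, 12, 10, 88)])
print(f"pooled Sn {sn:.2%}, Sp {sp:.2%}")
```

Summed counts weight studies by size and ignore between-study heterogeneity, which is why the bivariate random-effects model is preferred in practice.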
32
Oloruntoba AI, Vestergaard T, Nguyen TD, Yu Z, Sashindranath M, Betz-Stablein B, Soyer HP, Ge Z, Mar V. Assessing the Generalizability of Deep Learning Models Trained on Standardized and Nonstandardized Images and Their Performance Against Teledermatologists: Retrospective Comparative Study. JMIR Dermatol 2022. [DOI: 10.2196/35150]
Abstract
Background
Convolutional neural networks (CNNs) are a type of artificial intelligence that shows promise as a diagnostic aid for skin cancer. However, the majority are trained using retrospective image data sets with varying image capture standardization.
Objective
The aim of our study was to use CNN models with the same architecture—trained on image sets acquired with either the same image capture device and technique (standardized) or with varied devices and capture techniques (nonstandardized)—and test variability in performance when classifying skin cancer images in different populations.
Methods
In all, 3 CNNs with the same architecture were trained. CNN nonstandardized (CNN-NS) was trained on 25,331 images taken from the International Skin Imaging Collaboration (ISIC) using different image capture devices. CNN standardized (CNN-S) was trained on 177,475 MoleMap images taken with the same capture device, and CNN standardized number 2 (CNN-S2) was trained on a subset of 25,331 standardized MoleMap images (matched for number and classes of training images to CNN-NS). These 3 models were then tested on 3 external test sets: 569 Danish images, the publicly available ISIC 2020 data set consisting of 33,126 images, and The University of Queensland (UQ) data set of 422 images. Primary outcome measures were sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). Teledermatology assessments available for the Danish data set were used to determine model performance compared to teledermatologists.
Results
When tested on the 569 Danish images, CNN-S achieved an AUROC of 0.861 (95% CI 0.830-0.889) and CNN-S2 achieved an AUROC of 0.831 (95% CI 0.798-0.861; standardized models), with both outperforming CNN-NS (nonstandardized model; P=.001 and P=.009, respectively), which achieved an AUROC of 0.759 (95% CI 0.722-0.794). When tested on 2 additional data sets (ISIC 2020 and UQ), CNN-S (P<.001 and P<.001, respectively) and CNN-S2 (P=.08 and P=.35, respectively) still outperformed CNN-NS. When the CNNs were matched to the mean sensitivity and specificity of the teledermatologists on the Danish data set, the models’ resultant sensitivities and specificities were surpassed by the teledermatologists. However, when compared to CNN-S, the differences were not statistically significant (sensitivity: P=.10; specificity: P=.053). Performance across all CNN models as well as teledermatologists was influenced by image quality.
Conclusions
CNNs trained on standardized images had improved performance and, therefore, greater generalizability in skin cancer classification when applied to unseen data sets. This finding is an important consideration for future algorithm development, regulation, and approval.
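The comparison in this entry rests on sensitivity, specificity, and AUROC. As a minimal sketch (pure NumPy, with hypothetical model scores, not the study's data), the snippet below computes sensitivity and specificity at a fixed threshold and the AUROC via the Mann-Whitney rank statistic, which equals the area under the ROC curve:

```python
import numpy as np

def sens_spec(y_true, scores, threshold):
    """Sensitivity and specificity of `scores >= threshold` against 0/1 labels."""
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0))
    return tp / np.sum(y_true == 1), tn / np.sum(y_true == 0)

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney statistic: the probability that a random
    positive scores higher than a random negative (ties count 0.5)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores for 4 malignant (1) and 4 benign (0) lesions.
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.3, 0.6, 0.4, 0.2, 0.1])

sens, spec = sens_spec(y, s, threshold=0.5)
print(sens, spec)   # 0.75 0.75
print(auroc(y, s))  # 0.875
```

Matching a model to a clinician's operating point, as done in the study, amounts to sliding the threshold until one of these two quantities equals the clinicians' mean value.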
33
Mahumud RA, Janda M, Soyer HP, Fernández‐Peñas P, Mar VJ, Morton RL. Assessing the value of precision medicine health technologies to detect and manage melanoma. Med J Aust 2022; 217:275-278. [PMID: 36057953 PMCID: PMC9826311 DOI: 10.5694/mja2.51696] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Affiliation(s)
- Monika Janda
- Centre for Health Services Research, University of Queensland, Brisbane, QLD
- H Peter Soyer
- The University of Queensland Diamantina Institute, University of Queensland, Brisbane, QLD
- Dermatology Research Centre, University of Queensland, Brisbane, QLD
- Department of Dermatology, Princess Alexandra Hospital, Brisbane, QLD
- Pablo Fernández‐Peñas
- Centre for Cancer Research, Westmead Institute for Medical Research, Sydney, NSW
- University of Sydney, Sydney, NSW
- Victoria J Mar
- Victorian Melanoma Service, Alfred Hospital, Melbourne, VIC
- Monash University, Melbourne, VIC
- Rachael L Morton
- NHMRC Clinical Trials Centre, University of Sydney, Sydney, NSW
- Melanoma Institute Australia, University of Sydney, Sydney, NSW
34
Yilmaz A, Gencoglan G, Varol R, Demircali AA, Keshavarz M, Uvet H. MobileSkin: Classification of Skin Lesion Images Acquired Using Mobile Phone-Attached Hand-Held Dermoscopes. J Clin Med 2022; 11:jcm11175102. [PMID: 36079042 PMCID: PMC9457478 DOI: 10.3390/jcm11175102] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/17/2022] [Accepted: 08/26/2022] [Indexed: 11/16/2022] Open
Abstract
Dermoscopy is the visual examination of the skin under a polarized or non-polarized light source. With dermoscopic equipment, many lesion patterns that are invisible under ordinary light can be clearly distinguished, enabling more accurate treatment decisions for skin lesions. Images collected from dermoscopes have both improved the performance of human examiners and enabled the development of deep learning models, and the availability of large-scale dermoscopic datasets has made it possible to train models that classify skin lesions with high accuracy. Most dermoscopic datasets, however, contain images collected from digital dermoscopic devices, as these are frequently used for clinical examination, whereas dermatologists also often use non-digital hand-held (optomechanical) dermoscopes. This study presents a dataset of dermoscopic images taken using a mobile phone-attached hand-held dermoscope. Four deep learning models based on the MobileNetV1, MobileNetV2, NASNetMobile, and Xception architectures were developed to classify eight different lesion types using this dataset. The number of images in the dataset was increased with different data augmentation methods. The models were initialized with weights pre-trained on the ImageNet dataset and then fine-tuned on the presented dataset. The most successful models on the unseen test data, MobileNetV2 and Xception, achieved performances of 89.18% and 89.64%, respectively. The results were evaluated and compared using the 5-fold cross-validation method. Our method allows for automated examination of dermoscopic images taken with mobile phone-attached hand-held dermoscopes.
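The evaluation above uses 5-fold cross-validation. A minimal stdlib sketch of stratified fold assignment (the class labels are hypothetical stand-ins for the eight lesion types; the paper's exact splitting procedure is not specified here):

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Assign each sample index to one of k folds, round-robin per class,
    so every fold keeps roughly the overall class proportions."""
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)
    return folds

# 10 samples of class 'nevus', 5 of 'melanoma' -> each fold gets 2 + 1.
labels = ['nevus'] * 10 + ['melanoma'] * 5
folds = stratified_kfold(labels, k=5)
print([len(f) for f in folds])  # [3, 3, 3, 3, 3]
```

Each fold in turn serves as the held-out test set while the remaining four are used for training, and the five scores are averaged.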
Affiliation(s)
- Abdurrahim Yilmaz
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
- Gulsum Gencoglan
- Department of Dermatology, Liv Hospital Vadistanbul, Istinye University, 34396 Istanbul, Turkey
- Rahmetullah Varol
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
- Ali Anil Demircali
- Department of Metabolism, Digestion and Reproduction, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
- Meysam Keshavarz
- Department of Electrical and Electronic Engineering, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
- Correspondence: (M.K.); (H.U.)
- Huseyin Uvet
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Correspondence: (M.K.); (H.U.)
35
Girdhar N, Sinha A, Gupta S. DenseNet-II: an improved deep convolutional neural network for melanoma cancer detection. Soft comput 2022; 27:1-20. [PMID: 36034768 PMCID: PMC9400005 DOI: 10.1007/s00500-022-07406-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/16/2022] [Indexed: 10/28/2022]
Abstract
Medical research shows that melanoma is one of the deadliest cancers; the condition develops from the uncontrolled growth of melanocytic cells. Current approaches to disease detection fall into two main categories: classical machine learning models and deep learning models. In addition, experimental analysis of melanoma requires visual records such as dermatological scans or ordinary camera images, which further accentuates the need for an accurate detection model. In this work, we aim to improve the accuracy of melanoma detection with a deep learning CNN framework, customizing the number of layers in the network architecture, the activation functions applied, and the dimension of the input array. Models such as ResNet, DenseNet, Inception, and VGG have yielded appreciable accuracy in melanoma detection; however, in most cases the dataset was classified into malignant and benign classes only. The dataset used in our research provides seven lesion classes: melanocytic nevi, melanoma, benign keratosis, basal cell carcinoma, actinic keratoses, vascular lesions, and dermatofibroma. Through the HAM10000 dataset and various deep learning models, we thus diversified both the precision factors and the input qualities. The obtained results are highly promising and establish the model's credibility.
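DenseNet-style blocks, which this entry builds on, concatenate each layer's output with all earlier feature maps, so the channel count grows linearly with depth; customizing the number of layers (as the authors do) changes this count directly. A small arithmetic sketch (the example values match DenseNet-121's first block, not necessarily the paper's configuration):

```python
def dense_block_channels(in_channels, num_layers, growth_rate):
    """Channels after a DenseNet dense block: each layer appends
    `growth_rate` new feature maps to the running concatenation."""
    return in_channels + num_layers * growth_rate

# e.g. 64 input channels, 6 layers, growth rate 32 (DenseNet-121's first block)
print(dense_block_channels(64, 6, 32))  # 256
```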
Affiliation(s)
- Nancy Girdhar
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida, UP India
- Aparna Sinha
- Amity School of Engineering and Technology, Amity University, Noida, UP India
- Shivang Gupta
- Amity School of Engineering and Technology, Amity University, Noida, UP India
36
Deep Learning in Dermatology: A Systematic Review of Current Approaches, Outcomes, and Limitations. JID INNOVATIONS 2022; 3:100150. [PMID: 36655135 PMCID: PMC9841357 DOI: 10.1016/j.xjidi.2022.100150] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 06/17/2022] [Accepted: 07/15/2022] [Indexed: 01/21/2023] Open
Abstract
Artificial intelligence (AI) has recently made great advances in image classification and malignancy prediction in dermatology. However, understanding the applicability of AI in clinical dermatology practice remains challenging owing to the variability of models, image data, database characteristics, and outcome metrics. This systematic review provides a comprehensive overview of the dermatology literature using convolutional neural networks, and summarizes the current landscape of image datasets, transfer learning approaches, challenges, and limitations within the AI literature, as well as current regulatory pathways for approval of models as clinical decision support tools.
37
Rezk E, Eltorki M, El-Dakhakhni W. Improving Skin Color Diversity in Cancer Detection: Deep Learning Approach. JMIR DERMATOLOGY 2022. [DOI: 10.2196/39143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Background
The lack of dark skin images in pathologic skin lesions in dermatology resources hinders the accurate diagnosis of skin lesions in people of color. Artificial intelligence applications have further disadvantaged people of color because those applications are mainly trained with light skin color images.
Objective
The aim of this study is to develop a deep learning approach that generates realistic images of darker skin colors to improve dermatology data diversity for various malignant and benign lesions.
Methods
We collected clinical images of common malignant and benign skin conditions from DermNet NZ, the International Skin Imaging Collaboration, and Dermatology Atlas. Two deep learning methods, style transfer (ST) and deep blending (DB), were utilized to generate images with darker skin colors from the lighter skin images. The generated images were evaluated quantitatively and qualitatively. Furthermore, a convolutional neural network (CNN) was trained using the generated images to assess their effect on skin lesion classification accuracy.
Results
Image quality assessment showed that the ST method outperformed DB: ST achieved a lower loss-of-realism score of 0.23 (95% CI 0.19-0.27) compared to 0.63 (95% CI 0.59-0.67) for DB, and a higher disease-presentation similarity score of 0.44 (95% CI 0.40-0.49) compared to 0.17 (95% CI 0.14-0.21). The qualitative assessment completed by masked participants indicated that ST-generated images exhibited high realism, with 62.2% (1511/2430) of votes classifying the generated images as real. Eight dermatologists correctly diagnosed the lesions in the generated images at an average rate of 0.75 (360 of 480 correct diagnoses) across several malignant and benign lesions. Finally, the classification accuracy and area under the curve (AUC) of the model when considering the generated images were 0.76 (95% CI 0.72-0.79) and 0.72 (95% CI 0.67-0.77), respectively, compared to an accuracy of 0.56 (95% CI 0.52-0.60) and AUC of 0.63 (95% CI 0.58-0.68) for the model without the generated images.
Conclusions
Deep learning approaches can generate realistic skin lesion images that improve the skin color diversity of dermatology atlases. The diversified image bank, utilized herein to train a CNN, demonstrates the potential of developing generalizable artificial intelligence skin cancer diagnosis applications.
International Registered Report Identifier (IRRID)
RR2-10.2196/34896
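The paper's ST and DB generators are far beyond a short sketch, but the underlying idea of re-targeting an image's color statistics can be illustrated with classical Reinhard-style moment matching. This is a simpler stand-in, not the paper's method, and it is run here per RGB channel on random arrays rather than real photographs (Reinhard et al. work in a decorrelated color space):

```python
import numpy as np

def match_color_stats(source, target):
    """Shift/scale each channel of `source` so its mean and std match `target`.
    A classical stand-in for learned style transfer."""
    src = source.astype(float)
    tgt = target.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return out

rng = np.random.default_rng(0)
light = rng.uniform(150, 230, size=(32, 32, 3))  # stand-in "light skin" patch
dark = rng.uniform(40, 120, size=(32, 32, 3))    # stand-in "dark skin" patch
recolored = match_color_stats(light, dark)
print(np.allclose(recolored.mean(axis=(0, 1)), dark.mean(axis=(0, 1))))  # True
```

The output keeps the source's spatial structure (the lesion) while adopting the target's per-channel color statistics.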
38
Zhou J, Wu Z, Jiang Z, Huang K, Guo K, Zhao S. Background selection schema on deep learning-based classification of dermatological disease. Comput Biol Med 2022; 149:105966. [PMID: 36029748 DOI: 10.1016/j.compbiomed.2022.105966] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 07/28/2022] [Accepted: 08/13/2022] [Indexed: 11/03/2022]
Abstract
Skin diseases are among the most common ailments affecting humans. Artificial intelligence based on deep learning can significantly improve the efficiency of identifying skin disorders and alleviate the scarcity of medical resources. However, the distribution of background information in dermatological datasets is imbalanced, causing generalized deep learning models to perform poorly in skin disease classification. We propose a deep learning schema that combines data preprocessing, data augmentation, and residual networks to study how color-based background selection influences a deep model's capacity to learn foreground lesion attributes in skin disease classification. First, clinical photographs are annotated by dermatologists, and the original background information is masked with unique colors to generate several subsets with distinct background colors. Sample-balanced training and test sets are generated using random over/undersampling and data augmentation techniques. Finally, the deep learning networks are trained independently on the subsets with different background colors to compare the performance of classifiers based on different background information. Extensive experiments demonstrate that color-based background information significantly affects the classification of skin diseases and that classifiers trained on the green subset achieve state-of-the-art performance for classifying black and red skin lesions.
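The core preprocessing step described above, masking the background with a unique solid color given a binary lesion mask, can be sketched in a few lines (toy arrays stand in for the annotated clinical photographs):

```python
import numpy as np

def mask_background(image, lesion_mask, color=(0, 255, 0)):
    """Replace every background pixel (mask == 0) with a solid color,
    keeping foreground lesion pixels untouched."""
    out = image.copy()
    out[lesion_mask == 0] = color
    return out

img = np.full((4, 4, 3), 100, dtype=np.uint8)  # toy "photo"
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                             # 2x2 "lesion" region
green_bg = mask_background(img, mask)          # green-background subset
print(green_bg[0, 0].tolist(), green_bg[1, 1].tolist())  # [0, 255, 0] [100, 100, 100]
```

Repeating this with different `color` values produces the distinct background-color subsets the study trains on.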
Affiliation(s)
- Jiancun Zhou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China; College of Information and Electronic Engineering, Hunan City University, Yiyang 413000, China
- Zheng Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Zixi Jiang
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China
- Kai Huang
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China
- Kehua Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Shuang Zhao
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China
39
Zheng T, Zheng S, Wang K, Quan H, Bai Q, Li S, Qi R, Zhao Y, Cui X, Gao X. Automatic CD30 scoring method for whole slide images of primary cutaneous CD30 + lymphoproliferative diseases. J Clin Pathol 2022; 76:jclinpath-2022-208344. [PMID: 35863885 DOI: 10.1136/jcp-2022-208344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 07/07/2022] [Indexed: 11/03/2022]
Abstract
AIMS Deep-learning methods for scoring biomarkers are an active research topic, but the superior performance of many studies relies on large datasets collected from clinical samples, and there are few studies on immunohistochemical marker assessment in dermatological diseases. Accordingly, we developed a convolutional neural network-based method for scoring CD30 in primary cutaneous CD30+ lymphoproliferative disorders, where samples are scarce, and used this method to evaluate other biomarkers. METHODS A multipatch spatial attention mechanism and a conditional random field algorithm were used to fully fuse tumour tissue characteristics on immunohistochemical slides and alleviate the feature deficit of having few samples. We trained and tested on 28 CD30+ immunohistochemical whole slide images (WSIs), evaluated the model with performance indices, and compared its output with the diagnoses of senior dermatologists. Finally, the model's performance was further demonstrated on the publicly available Yale HER2 cohort. RESULTS Compared with the diagnoses of senior dermatologists, this method can better locate the tumour area and reduce the misdiagnosis rate. Predictions for CD3 and Ki-67 validated the model's ability to identify other biomarkers. CONCLUSIONS Using a few immunohistochemical WSIs, our model can accurately identify CD30, CD3 and Ki-67 markers, and could be applied to additional tumour identification tasks to aid pathologists in diagnosis and benefit clinical evaluation.
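Multipatch WSI pipelines like the one above start by tiling the gigapixel slide into fixed-size patches before any attention or CRF stage. A minimal non-overlapping tiling sketch (the patch size and toy slide dimensions are illustrative, not the paper's):

```python
import numpy as np

def tile_patches(slide, patch=256):
    """Split a (H, W, C) array into non-overlapping patch x patch tiles,
    dropping any partial tiles at the right/bottom edges."""
    h, w = slide.shape[:2]
    return [slide[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

wsi = np.zeros((600, 520, 3))      # toy slide region
tiles = tile_patches(wsi, patch=256)
print(len(tiles), tiles[0].shape)  # 4 (256, 256, 3)
```

In practice a tissue mask is applied first so that mostly-blank background tiles are discarded before scoring.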
Affiliation(s)
- Tingting Zheng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Song Zheng
- Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China
- NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
- Ke Wang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Hao Quan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Qun Bai
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Shuqin Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Ruiqun Qi
- Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China
- NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Xinghua Gao
- Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China
- NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
40
Elashiri MA, Rajesh A, Nath Pandey S, Kumar Shukla S, Urooj S, Lay-Ekuakille A. Ensemble of weighted deep concatenated features for the skin disease classification model using modified long short term memory. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
41
Bhimavarapu U, Battineni G. Skin Lesion Analysis for Melanoma Detection Using the Novel Deep Learning Model Fuzzy GC-SCNN. Healthcare (Basel) 2022; 10:healthcare10050962. [PMID: 35628098 PMCID: PMC9141659 DOI: 10.3390/healthcare10050962] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 05/19/2022] [Accepted: 05/21/2022] [Indexed: 02/01/2023] Open
Abstract
Melanoma, the most severe type of skin cancer, affects the cells that make melanin and, because it occurs on the skin's surface, is detectable by visual examination. However, the lack of expert opinion increases the processing time and cost of computer-aided skin cancer detection. We therefore incorporated deep learning algorithms to perform automatic melanoma detection from dermoscopic images. The fuzzy-based GrabCut-stacked convolutional neural networks (GC-SCNN) model was applied for image training, and feature extraction and lesion classification were performed on different publicly available datasets. The fuzzy GC-SCNN coupled with a support vector machine (SVM) produced 99.75% classification accuracy and 100% sensitivity and specificity. Compared with existing techniques, the proposed model could detect and classify lesion segments with higher accuracy and lower processing time.
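Only the final stage of the pipeline above (features in, SVM decision out) lends itself to a short sketch; the fuzzy GrabCut segmentation and stacked-CNN feature extractor are omitted. The 2-D feature vectors below are synthetic stand-ins for CNN-extracted features:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical 2-D feature vectors for benign (0) and melanoma (1) lesions.
benign = rng.normal(loc=[0, 0], scale=0.3, size=(50, 2))
melanoma = rng.normal(loc=[2, 2], scale=0.3, size=(50, 2))
X = np.vstack([benign, melanoma])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf").fit(X, y)  # SVM stage of the pipeline
print(clf.score(X, y))             # training accuracy on well-separated toy data
```

Because the toy clusters are cleanly separated, the SVM fits them essentially perfectly; real CNN features overlap far more, which is where the reported accuracy figures become meaningful.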
Affiliation(s)
- Usharani Bhimavarapu
- School of Competitive Coding, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Vijayawada 522502, India;
- Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Correspondence: ; Tel.: +39-3331728206
42
Khan MS, Alam KN, Dhruba AR, Zunair H, Mohammed N. Knowledge distillation approach towards melanoma detection. Comput Biol Med 2022; 146:105581. [PMID: 35594685 DOI: 10.1016/j.compbiomed.2022.105581] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 04/04/2022] [Accepted: 04/30/2022] [Indexed: 11/29/2022]
Abstract
Melanoma is regarded as the most threatening of all skin cancers, and there is a pressing need for systems that aid its early detection and enable timely treatment. Recent methods pose the task as image recognition with machine learning: tagging dermoscopic images of skin lesions as melanoma or non-melanoma. Although these methods show promising accuracy, they are computationally expensive to train, which calls into question their deployability in clinical settings or on memory-constrained devices. To address this, we use knowledge distillation to build simple, performant models for detecting melanoma from dermoscopic images, with few layers (fewer than ten, compared to hundreds) and few learnable parameters (0.26 million (M), compared to 42.5 M). First, we train a teacher model, a ResNet-50, to detect melanoma. Using the teacher, we train the student model, the Distilled Student Network (DSNet), which has around 0.26 M parameters and achieves an accuracy of 91.7%. We compare against ImageNet pre-trained models such as MobileNet, VGG-16, Inception-V3, EfficientNet-B0, ResNet-50 and ResNet-101, and find that our approach compares well in inference runtime (2.57 s vs 14.55 s). DSNet (0.26 M parameters), roughly 15 times smaller, consistently outperforms EfficientNet-B0 (4 M parameters) on both melanoma and non-melanoma detection across precision, recall and F1 scores.
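The distillation objective behind teacher-student training blends soft teacher targets at a temperature T with the ordinary hard-label loss. A NumPy sketch of the Hinton-style loss (the temperature, weighting, and logits here are illustrative, not the paper's settings):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    """Hinton-style KD: alpha * T^2 * KL(teacher_T || student_T)
    + (1 - alpha) * cross-entropy with the ground-truth label."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))
    ce = -np.log(softmax(student_logits)[hard_label])
    return alpha * T**2 * kl + (1 - alpha) * ce

student = np.array([1.0, 2.0])  # logits: [non-melanoma, melanoma]
teacher = np.array([0.5, 3.0])
loss = distillation_loss(student, teacher, hard_label=1)
print(loss > 0)  # True
```

The T^2 factor keeps the gradient magnitudes of the soft term comparable across temperatures; when the student matches the teacher exactly, the KL term vanishes and only the hard-label loss remains.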
43
The Application of Differing Machine Learning Algorithms and Their Related Performance in Detecting Skin Cancers and Melanomas. J Skin Cancer 2022; 2022:2839162. [PMID: 35573163 PMCID: PMC9095410 DOI: 10.1155/2022/2839162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 03/04/2022] [Accepted: 03/15/2022] [Indexed: 11/17/2022] Open
Abstract
Skin cancer, and its less common form melanoma, is a disease affecting a wide variety of people. Since it is usually detected initially by visual inspection, it is a good candidate for the application of machine learning, and with early detection being key to good outcomes, any method that can enhance the diagnostic accuracy of dermatologists and oncologists is of significant interest. By comparing different existing machine learning implementations on public datasets, as well as several datasets we created, we attempted to build a more accurate model that can be readily adapted for use in clinical settings. We tested combinations of models, including convolutional neural networks (CNNs), with various layers of data manipulation, such as the application of Gaussian functions and trimming of images, to improve accuracy. We also created more traditional data models, including support vector classification, K-nearest neighbor, Naïve Bayes, random forest, and gradient boosting algorithms, and compared them to our CNN-based models. Results indicated that CNN-based algorithms significantly outperformed the other data models. Partial results of this work were presented at the CSET Presentations for Research Month at Minnesota State University, Mankato.
44
Bishop KW, Maitland KC, Rajadhyaksha M, Liu JTC. In vivo microscopy as an adjunctive tool to guide detection, diagnosis, and treatment. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:JBO-220032-PER. [PMID: 35478042 PMCID: PMC9043840 DOI: 10.1117/1.jbo.27.4.040601] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 04/05/2022] [Indexed: 05/05/2023]
Abstract
SIGNIFICANCE There have been numerous academic and commercial efforts to develop high-resolution in vivo microscopes for a variety of clinical use cases, including early disease detection and surgical guidance. While many high-profile studies, commercialized products, and publications have resulted from these efforts, mainstream clinical adoption has been relatively slow other than for a few clinical applications (e.g., dermatology). AIM Here, our goals are threefold: (1) to introduce and motivate the need for in vivo microscopy (IVM) as an adjunctive tool for clinical detection, diagnosis, and treatment, (2) to discuss the key translational challenges facing the field, and (3) to propose best practices and recommendations to facilitate clinical adoption. APPROACH We will provide concrete examples from various clinical domains, such as dermatology, oral/gastrointestinal oncology, and neurosurgery, to reinforce our observations and recommendations. RESULTS While the incremental improvement and optimization of IVM technologies should and will continue to occur, future translational efforts would benefit from the following: (1) integrating clinical and industry partners upfront to define and maintain a compelling value proposition, (2) identifying multimodal/multiscale imaging workflows, which are necessary for success in most clinical scenarios, and (3) developing effective artificial intelligence tools for clinical decision support, tempered by a realization that complete adoption of such tools will be slow. CONCLUSIONS The convergence of imaging modalities, academic-industry-clinician partnerships, and new computational capabilities has the potential to catalyze rapid progress and adoption of IVM in the next few decades.
Affiliation(s)
- Kevin W. Bishop
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- Kristen C. Maitland
- Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
- Milind Rajadhyaksha
- Memorial Sloan Kettering Cancer Center, Dermatology Service, New York, New York, United States
- Jonathan T. C. Liu
- University of Washington, Department of Bioengineering, Seattle, Washington, United States
- University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States
- University of Washington, Department of Laboratory Medicine and Pathology, Seattle, Washington, United States
- Address all correspondence to Jonathan T.C. Liu,
45
Lindholm V, Raita-Hakola AM, Annala L, Salmivuori M, Jeskanen L, Saari H, Koskenmies S, Pitkänen S, Pölönen I, Isoherranen K, Ranki A. Differentiating Malignant from Benign Pigmented or Non-Pigmented Skin Tumours-A Pilot Study on 3D Hyperspectral Imaging of Complex Skin Surfaces and Convolutional Neural Networks. J Clin Med 2022; 11:jcm11071914. [PMID: 35407522 PMCID: PMC8999463 DOI: 10.3390/jcm11071914] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Accepted: 03/28/2022] [Indexed: 02/08/2023] Open
Abstract
Several optical imaging techniques have been developed to ease the burden of skin cancer disease on our health care system. Hyperspectral images can be used to identify biological tissues by their diffuse reflected spectra. In this second part of a three-phase pilot study, we used a novel hand-held SICSURFIS Spectral Imager with an adaptable field of view and target-wise selectable wavelength channels to provide detailed spectral and spatial data for lesions on complex surfaces. The hyperspectral images (33 wavelengths, 477–891 nm) provided photometric data through individually controlled illumination modules, enabling convolutional networks to utilise spectral, spatial, and skin-surface models for the analyses. In total, 42 lesions were studied: 7 melanomas, 13 pigmented and 7 intradermal nevi, 10 basal cell carcinomas, and 5 squamous cell carcinomas. All lesions were excised for histological analyses. A pixel-wise analysis provided map-like images and classified pigmented lesions with a sensitivity of 87% and a specificity of 93%, and 79% and 91%, respectively, for non-pigmented lesions. A majority voting analysis, which provided the most probable lesion diagnosis, diagnosed 41 of 42 lesions correctly. This pilot study indicates that our non-invasive hyperspectral imaging system, which involves shape and depth data analysed by convolutional neural networks, is feasible for differentiating between malignant and benign pigmented and non-pigmented skin tumours, even on complex skin surfaces.
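The entry's two-level scheme, pixel-wise classification followed by a majority-vote rollup to a single lesion diagnosis, reduces to a one-liner for the voting step (the per-pixel labels below are hypothetical):

```python
from collections import Counter

def majority_vote(pixel_preds):
    """Most common pixel-wise class label wins the lesion-level diagnosis."""
    return Counter(pixel_preds).most_common(1)[0][0]

# Hypothetical per-pixel labels inside one segmented lesion.
preds = ['melanoma'] * 60 + ['nevus'] * 30 + ['bcc'] * 10
print(majority_vote(preds))  # melanoma
```

Aggregating noisy per-pixel calls this way is what lets the study report 41 of 42 lesions diagnosed correctly even though pixel-level sensitivity and specificity are lower.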
Affiliation(s)
- Vivian Lindholm
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, 00290 Helsinki, Finland
- Correspondence: (V.L.); (A.-M.R.-H.); Tel.: +358-9471-86355 (V.L.)
- Anna-Maria Raita-Hakola
- Faculty of Information Technology, University of Jyväskylä, 40100 Jyväskylä, Finland
- Correspondence: (V.L.); (A.-M.R.-H.); Tel.: +358-9471-86355 (V.L.)
- Leevi Annala
- Faculty of Information Technology, University of Jyväskylä, 40100 Jyväskylä, Finland
- Mari Salmivuori
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, 00290 Helsinki, Finland
- Leila Jeskanen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, 00290 Helsinki, Finland
- Heikki Saari
- VTT Technical Research Centre of Finland, 02150 Espoo, Finland
- Sari Koskenmies
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, 00290 Helsinki, Finland
- Sari Pitkänen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, 00290 Helsinki, Finland
- Ilkka Pölönen
- Faculty of Information Technology, University of Jyväskylä, 40100 Jyväskylä, Finland
- Kirsi Isoherranen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, 00290 Helsinki, Finland
- Annamari Ranki
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, 00290 Helsinki, Finland
Collapse
|
46
Real-time high-resolution millimeter-wave imaging for in-vivo skin cancer diagnosis. Sci Rep 2022; 12:4971. [PMID: 35322133 PMCID: PMC8943071 DOI: 10.1038/s41598-022-09047-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 03/16/2022] [Indexed: 12/26/2022] Open
Abstract
High-resolution millimeter-wave imaging (HR-MMWI), with its high discrimination contrast and sufficient penetration depth, can potentially provide affordable tissue diagnostic information noninvasively. In this study, we evaluate the application of a real-time HR-MMWI system for in-vivo skin cancer diagnosis. In total, 136 benign and malignant skin lesions from 71 patients were measured, including melanoma, basal cell carcinoma, squamous cell carcinoma, actinic keratosis, melanocytic nevi, angiokeratoma, dermatofibroma, solar lentigo, and seborrheic keratosis. Lesions were classified using a 3-D principal component analysis (PCA) followed by five classifiers: linear discriminant analysis (LDA), K-nearest neighbors (KNN) with different K-values, linear and Gaussian support vector machines (LSVM and GSVM) with different margin factors, and a multilayer perceptron (MLP). Our results suggest that the best classification was achieved using five PCA components followed by an MLP, with 97% sensitivity and 98% specificity. Our findings establish that real-time millimeter-wave imaging can distinguish malignant tissues from benign skin lesions with diagnostic accuracy comparable to clinical examination and other methods.
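The best-performing pipeline named in the abstract (five PCA components followed by a multilayer perceptron) can be sketched with scikit-learn. The data below are synthetic stand-ins, not the authors' millimeter-wave measurements, and the layer size is an arbitrary choice:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-lesion reflection signatures (40 features each).
X = np.vstack([rng.normal(0.0, 1.0, size=(60, 40)),   # "benign" cluster
               rng.normal(1.5, 1.0, size=(60, 40))])  # "malignant" cluster
y = np.array([0] * 60 + [1] * 60)

# Dimensionality reduction to five principal components, then an MLP,
# mirroring the pipeline reported in the abstract.
model = make_pipeline(PCA(n_components=5),
                      MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0))
model.fit(X, y)
print(model.score(X, y))
```

In the study itself, such a pipeline would of course be evaluated on held-out lesions rather than on the training set.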
47
Rezk E, Eltorki M, El-Dakhakhni W. Leveraging Artificial Intelligence to Improve the Diversity of Dermatological Skin Color Pathology: Protocol for an Algorithm Development and Validation Study. JMIR Res Protoc 2022; 11:e34896. [PMID: 34983017 PMCID: PMC8941446 DOI: 10.2196/34896] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Revised: 12/23/2021] [Accepted: 01/04/2022] [Indexed: 01/26/2023] Open
Abstract
Background The paucity of dark skin images in dermatological textbooks and atlases is a reflection of racial injustice in medicine. The underrepresentation of dark skin images makes diagnosing skin pathology in people of color challenging. For conditions such as skin cancer, in which early diagnosis makes the difference between life and death, people of color have worse prognoses and lower survival rates than people with lighter skin tones as a result of delayed or incorrect diagnoses. Recent advances in artificial intelligence, such as deep learning, offer a potential solution: diversifying the mostly light-skin image repositories by generating images for darker skin tones, thus facilitating the development of inclusive early cancer diagnosis systems that are trained and tested on diverse images that truly represent human skin tones. Objective We aim to develop and evaluate an artificial intelligence–based skin cancer early detection system for all skin tones using clinical images. Methods This study consists of four phases: (1) publicly available skin image repositories will be analyzed to quantify the underrepresentation of darker skin tones; (2) images will be generated for the underrepresented skin tones; (3) generated images will be extensively evaluated for realism and disease presentation, with quantitative image quality assessment as well as qualitative human expert and nonexpert ratings; and (4) the images will be combined with available light-skin images to develop a robust skin cancer early detection model. Results This study started in September 2020. The first phase of quantifying the underrepresentation of darker skin tones was completed in March 2021. The second phase of generating the images is in progress and will be completed by March 2022. The third phase is expected to be completed by May 2022, and the final phase is expected to be completed by September 2022.
Conclusions This work is the first step toward expanding skin tone diversity in existing image databases to address the current gap in the underrepresentation of darker skin tones. Once validated, the image bank will be a valuable resource that can potentially be utilized in physician education and in research applications. Furthermore, generated images are expected to improve the generalizability of skin cancer detection. When completed, the model will assist family physicians and general practitioners in evaluating skin lesion severity and in efficient triaging for referral to expert dermatologists. In addition, the model can assist dermatologists in diagnosing skin lesions. International Registered Report Identifier (IRRID) DERR1-10.2196/34896
Affiliation(s)
- Eman Rezk
- School of Computational Science and Engineering, McMaster University, Hamilton, ON, Canada
- Mohamed Eltorki
- Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Wael El-Dakhakhni
- School of Computational Science and Engineering, McMaster University, Hamilton, ON, Canada
48
Lucieri A, Bajwa MN, Braun SA, Malik MI, Dengel A, Ahmed S. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106620. [PMID: 35033756 DOI: 10.1016/j.cmpb.2022.106620] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 12/01/2021] [Accepted: 01/03/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVES One principal impediment to the successful deployment of Artificial Intelligence (AI) based Computer-Aided Diagnosis (CAD) systems in everyday clinical workflows is their lack of transparent decision-making. Although commonly used eXplainable AI (XAI) methods provide insights into these largely opaque algorithms, such explanations are usually convoluted and not readily comprehensible. The explanation of decisions regarding the malignancy of skin lesions from dermoscopic images demands particular clarity, as the underlying medical problem definition is itself ambiguous. This work presents ExAID (Explainable AI for Dermatology), a novel XAI framework for biomedical image analysis that provides multi-modal concept-based explanations, consisting of easy-to-understand textual explanations and visual maps, to justify its predictions. METHODS Our framework relies on Concept Activation Vectors to map human-understandable concepts to those learned by an arbitrary Deep Learning (DL) based algorithm, and on Concept Localisation Maps to highlight those concepts in the input space. The identified concepts are then used to construct fine-grained textual explanations supplemented by concept-wise location information, yielding comprehensive and coherent multi-modal explanations. All decision-related information is presented in a diagnostic interface for use in clinical routine. Moreover, the framework includes an educational mode providing dataset-level explanation statistics as well as tools for data and model exploration to aid medical research and education. RESULTS Through rigorous quantitative and qualitative evaluation of our framework on a range of publicly available dermoscopic image datasets, we show the utility of multi-modal explanations for CAD-assisted scenarios, even in cases of incorrect disease predictions. We demonstrate that concept detectors for the explanation of pre-trained networks reach accuracies of up to 81.46%, which is comparable to supervised networks trained end-to-end. CONCLUSIONS We present a new end-to-end framework for the multi-modal explanation of DL-based biomedical image analysis in melanoma classification and evaluate its utility on an array of datasets. Since perspicuous explanation is one of the cornerstones of any CAD system, we believe that ExAID will accelerate the transition from AI research to practice by providing dermatologists and researchers with an effective tool that they can both understand and trust. ExAID can also serve as the basis for similar applications in other biomedical fields.
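A Concept Activation Vector of the kind ExAID relies on is, in essence, the unit normal of a linear classifier that separates network activations computed on images with and without a given concept. A minimal sketch with synthetic activations follows; the layer width, sample counts, and the "pigment network" concept are illustrative placeholders, not details from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic penultimate-layer activations for images showing a dermoscopic
# concept (e.g. "pigment network") and for images without it.
acts_with = rng.normal(0.5, 1.0, size=(100, 64))
acts_without = rng.normal(-0.5, 1.0, size=(100, 64))
X = np.vstack([acts_with, acts_without])
y = np.array([1] * 100 + [0] * 100)

# The concept detector is a linear classifier in activation space;
# its normalised weight vector is the Concept Activation Vector (CAV).
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Projecting a new activation onto the CAV measures its alignment
# with the concept direction.
print(clf.score(X, y), float(np.dot(acts_with[0], cav)))
```

The detector accuracy printed here plays the role of the concept-detector accuracies (up to 81.46%) reported in the abstract.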
Affiliation(s)
- Adriano Lucieri
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany
- Muhammad Naseer Bajwa
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany
- Stephan Alexander Braun
- University Hospital Münster, Albert-Schweitzer-Campus 1, 48149 Münster, Germany; University Hospital of Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Muhammad Imran Malik
- School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan; Deep Learning Laboratory, National Center of Artificial Intelligence, Islamabad, Pakistan
- Andreas Dengel
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany
- Sheraz Ahmed
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany
49
Yu Z, Nguyen J, Nguyen TD, Kelly J, Mclean C, Bonnington P, Zhang L, Mar V, Ge Z. Early Melanoma Diagnosis With Sequential Dermoscopic Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:633-646. [PMID: 34648437 DOI: 10.1109/tmi.2021.3120091] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Dermatologists often diagnose or rule out early melanoma by evaluating follow-up dermoscopic images of skin lesions. However, existing algorithms for early melanoma diagnosis are developed using single time-point images of lesions. Ignoring the temporal, morphological changes of lesions can lead to misdiagnosis in borderline cases. In this study, we propose a framework for automated early melanoma diagnosis using sequential dermoscopic images. We construct our method in three steps. First, we align sequential dermoscopic images of skin lesions using estimated Euclidean transformations and extract the lesion growth region by computing image differences among the consecutive images. We then propose a spatio-temporal network to capture the dermoscopic changes from the aligned lesion images and the corresponding difference images. Finally, we develop an early diagnosis module to compute probability scores of malignancy for lesion images over time. We collected 179 serial dermoscopic image sequences from 122 patients to validate our method. Extensive experiments show that the proposed model outperforms other commonly used sequence models. We also compared the diagnostic results of our model with those of seven experienced dermatologists and five registrars. Our model achieved higher diagnostic accuracy than the clinicians (63.69% vs. 54.33%) and provided an earlier diagnosis of melanoma (60.7% vs. 32.7% of melanomas correctly diagnosed on the first follow-up images). These results demonstrate that our model can identify melanocytic lesions at high risk of malignant transformation earlier in the disease process, and thereby redefine what is possible in the early detection of melanoma.
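The difference-image step in the pipeline above (pixel-wise change between consecutive, already-aligned frames) reduces to a simple array operation. The toy frames below stand in for registered dermoscopic images; the function name is a placeholder, not the authors' code:

```python
import numpy as np

def difference_images(frames):
    """Pixel-wise differences between consecutive, already-aligned frames."""
    frames = np.asarray(frames, dtype=float)
    return frames[1:] - frames[:-1]

# Three toy 4x4 grayscale frames of a lesion that darkens and grows.
t0 = np.zeros((4, 4))
t1 = t0.copy(); t1[1:3, 1:3] = 0.5   # pigment appears
t2 = t1.copy(); t2[0:3, 0:3] = 0.8   # region grows and darkens
diffs = difference_images([t0, t1, t2])
print(diffs.shape)  # (2, 4, 4)
```

In the paper these difference images are fed, together with the aligned frames, into the spatio-temporal network; the Euclidean alignment itself would be estimated beforehand with standard image-registration tooling.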
50
Sies K, Winkler JK, Fink C, Bardehle F, Toberer F, Buhl T, Enk A, Blum A, Stolz W, Rosenberger A, Haenssle HA. Does sex matter? Analysis of sex-related differences in the diagnostic performance of a market-approved convolutional neural network for skin cancer detection. Eur J Cancer 2022; 164:88-94. [PMID: 35182926 DOI: 10.1016/j.ejca.2021.12.034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 12/17/2021] [Accepted: 12/29/2021] [Indexed: 11/03/2022]
Abstract
BACKGROUND Advances in biomedical artificial intelligence may introduce or perpetuate sex and gender discrimination. Convolutional neural networks (CNNs) have demonstrated dermatologist-level performance in image classification tasks but have not been assessed for sex and gender biases that may affect training data and diagnostic performance. In this study, we investigated sex-related imbalances in the training data and diagnostic performance of a market-approved CNN for skin cancer classification (Moleanalyzer Pro®, Fotofinder Systems GmbH, Bad Birnbach, Germany). METHODS We screened open-access dermoscopic image repositories widely used for CNN training for the distribution of sex. Moreover, the sex-related diagnostic performance of the market-approved CNN was tested on 1549 dermoscopic images stratified by sex (female n = 773; male n = 776). RESULTS Most open-access repositories showed a marked under-representation of images originating from female (40%) versus male (60%) patients. Despite these imbalances and well-known sex-related differences in skin anatomy and skin-directed behaviour, the tested CNN achieved a comparable sensitivity of 87.0% [80.9%-91.3%] versus 87.1% [81.1%-91.4%], specificity of 98.7% [97.4%-99.3%] versus 96.9% [95.2%-98.0%], and ROC-AUC of 0.984 [0.975-0.993] versus 0.979 [0.969-0.988] in dermoscopic images of female versus male origin, respectively. In the sample at hand, sex-related differences in ROC-AUCs were statistically significant neither in the per-image analysis nor in an additional per-individual analysis (p ≥ 0.59). CONCLUSION The design and training of artificial intelligence algorithms for medical applications should generally acknowledge sex and gender dimensions. Despite sex-related imbalances in open-access training data, the diagnostic performance of the tested CNN showed no sex-related bias in the classification of skin lesions.
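The per-sex performance comparison reported above amounts to computing metrics on subgroup-stratified predictions. A sketch with synthetic malignancy scores follows; the signal strength, group assignment, and sample size are arbitrary choices, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 400
sex = rng.choice(["female", "male"], size=n)           # patient attribute
y_true = rng.integers(0, 2, size=n)                    # 1 = malignant
scores = y_true * 0.6 + rng.normal(0.0, 0.3, size=n)   # CNN-style malignancy scores

# ROC-AUC stratified by sex, analogous to the per-image analysis above.
for group in ("female", "male"):
    mask = sex == group
    print(group, round(roc_auc_score(y_true[mask], scores[mask]), 3))
```

A formal bias assessment, as in the study, would additionally compare the subgroup AUCs with a statistical test rather than by inspection.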
Affiliation(s)
- Katharina Sies
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Julia K Winkler
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Christine Fink
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Felicitas Bardehle
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Ferdinand Toberer
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Timo Buhl
- Department of Dermatology, Venereology and Allergology, University Medical Center Göttingen, Göttingen, Germany
- Alexander Enk
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Andreas Blum
- Public, Private and Teaching Practice of Dermatology, Konstanz, Germany
- Wilhelm Stolz
- Department of Dermatology, Allergology and Environmental Medicine II, Hospital Thalkirchner Street, Munich, Germany
- Albert Rosenberger
- Department of Genetic Epidemiology, University of Goettingen, Goettingen, Germany
- Holger A Haenssle
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany