1
Chowdhury MAZ, Oehlschlaeger MA. Artificial Intelligence in Gas Sensing: A Review. ACS Sens 2025; 10:1538-1563. PMID: 40067186. DOI: 10.1021/acssensors.4c02272.
Abstract
The role of artificial intelligence (AI), machine learning (ML), and deep learning (DL) in enhancing and automating gas sensing methods, and the implications of these technologies for emergent gas sensor systems, are reviewed. Applications of AI-based intelligent gas sensors include environmental monitoring, industrial safety, remote sensing, and medical diagnostics. AI, ML, and DL methods can process and interpret complex sensor data, allowing for improved accuracy, sensitivity, and selectivity, enabling rapid gas detection and quantitative concentration measurements based on sophisticated multiband, multispecies sensor systems. These methods can discern subtle patterns in sensor signals, allowing sensors to readily distinguish between gases with similar sensor signatures, enabling adaptable, cross-sensitive sensor systems for multigas detection under various environmental conditions. Integrating AI in gas sensor technology represents a paradigm shift, enabling sensors to achieve unprecedented performance, selectivity, and adaptability. This review describes gas sensor technologies and AI while highlighting approaches to AI-sensor integration.
Affiliation(s)
- M A Z Chowdhury
- Department of Mechanical, Aerospace, and Nuclear Engineering, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, New York 12180, United States
- M A Oehlschlaeger
- Department of Mechanical, Aerospace, and Nuclear Engineering, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, New York 12180, United States
2
Barbosa da Cruz Junior L, Bernardo de Barros K, Eduardo Girasol C, Mendonça Quaranta Lobão R, Bachmann L. Absorption Coefficient Estimation of Pigmented Skin Phantoms Using Colorimetric Parameters. Appl Spectrosc 2025; 79:376-384. PMID: 39396522. DOI: 10.1177/00037028241281388.
Abstract
The increasing use of light-based treatments requires a better understanding of the light-tissue interaction for pigmented skin. To enhance comprehension in this area, this study proposes the use of skin phantoms mimicking pigmented skin to assess optical properties based on tone, represented by the individual typology angle (ITA) color scale. An epoxy resin matrix alongside compact facial powder and titanium dioxide was used to mimic the absorption, scattering, and shade properties of human skin. Eight phantoms covering the skin tones light (ITA = 45.2°), tan (ITA = 23.3°), brown (ITA = 6.9°, -5.7°, and -16.9°), and dark (ITA = -34.6°, -41.6°, and -48.6°) were crafted. The absorption and reduced scattering coefficients were obtained using integrating spheres and calibrated spectrometers in the 500-900 nm range, and tones were measured using a commercial colorimeter. The experimental fitting proposed in this study could estimate the optical properties as a function of skin tone through ITA values, using an exponential function with a second-order polynomial exponent. This investigation aligns with prior studies involving human skin samples, and the findings hold promise for future clinical and diagnostic applications, particularly for tailoring light-based treatments to individual dermatological corrections in pigmented skin.
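The fitted model form described above, an exponential with a second-order polynomial exponent in ITA, can be sketched as follows. The coefficients used here are hypothetical placeholders chosen only to illustrate the expected trend (darker tones, lower ITA, absorb more); they are not the paper's fitted values:

```python
import math

def absorption_coeff(ita_deg, a, b, c):
    """Exponential model with a second-order polynomial exponent,
    mu_a(ITA) = exp(a*ITA^2 + b*ITA + c), the functional form the
    abstract describes for mapping skin tone (ITA, in degrees) to an
    optical property. In practice a, b, c come from fitting measured
    coefficients against colorimeter ITA readings."""
    return math.exp(a * ita_deg ** 2 + b * ita_deg + c)

# Hypothetical coefficients for illustration only (not fitted values).
a, b, c = 1e-4, -0.02, -1.0
for ita in (45.2, 23.3, 6.9, -16.9, -48.6):
    print(f"ITA {ita:+6.1f} deg -> mu_a ~ {absorption_coeff(ita, a, b, c):.3f}")
```

The same form would be fitted separately for the absorption and reduced scattering coefficients, e.g. by least squares over the eight phantom measurements.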
Affiliation(s)
- Luismar Barbosa da Cruz Junior
- Laboratory of Biophotonics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, Brazil
- Engineering Department, Federal Institute of Education Science and Technology of São Paulo, Catanduva, Brazil
- São Carlos Institute of Physics, University of São Paulo, São Carlos, Brazil
- Kaio Bernardo de Barros
- Laboratory of Biophotonics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, Brazil
- Carlos Eduardo Girasol
- Laboratory of Biophotonics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, Brazil
- Raissa Mendonça Quaranta Lobão
- Laboratory of Biophotonics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, Brazil
- Luciano Bachmann
- Laboratory of Biophotonics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, Brazil
3
Halder A, Dalal A, Gharami S, Wozniak M, Ijaz MF, Singh PK. A fuzzy rank-based deep ensemble methodology for multi-class skin cancer classification. Sci Rep 2025; 15:6268. PMID: 39979375. PMCID: PMC11842842. DOI: 10.1038/s41598-025-90423-3.
Abstract
Skin cancer is widespread and can be potentially fatal. According to the World Health Organisation (WHO), it has been identified as a leading cause of mortality. It is essential to detect skin cancer early so that effective treatment can be provided at an initial stage. In this study, the widely used HAM10000 dataset, containing high-resolution images of various skin lesions, is employed for training and evaluation. Our methodology for the HAM10000 dataset involves balancing the imbalanced dataset by augmenting images, splitting the dataset into training, test, and validation sets, preprocessing the images, training the individual models Xception, InceptionResNetV2, and MobileNetV2, and then combining their outputs using fuzzy logic to generate a final prediction. We examined the performance of the ensemble using standard metrics such as classification accuracy and the confusion matrix, achieving an accuracy of 95.14%; this result demonstrates the effectiveness of our approach in accurately identifying skin cancer lesions. To further assess the efficiency of the model, additional tests were performed on the DermaMNIST dataset from the MedMNISTv2 collection. The model performs well on this dataset, surpassing the benchmark accuracy of 76.8% by achieving 78.25%. The model is thus effective for skin cancer classification, showcasing its potential for clinical applications.
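The fuzzy fusion step can be sketched as follows: each base CNN's softmax confidence for each class is passed through nonlinear penalty functions (lower penalty = more confident), penalties are summed across models, and the class with the smallest total wins. The two membership functions below are illustrative stand-ins; the paper's exact functions may differ:

```python
import math

def fuzzy_rank_fusion(prob_lists):
    """Fuse per-model softmax confidence vectors via fuzzy rank scores.
    prob_lists: one probability vector per base model (e.g. Xception,
    InceptionResNetV2, MobileNetV2). For each confidence p in [0, 1],
    two penalty functions are evaluated (both minimal at p = 1); their
    product is accumulated per class and the lowest total wins."""
    n_classes = len(prob_lists[0])
    fused = [0.0] * n_classes
    for probs in prob_lists:
        for k, p in enumerate(probs):
            r1 = 1.0 - math.exp(-((p - 1.0) ** 2) / 2.0)   # exponential penalty
            r2 = 1.0 - math.tanh(((p - 1.0) ** 2) / 2.0)   # tanh-based penalty
            fused[k] += r1 * r2
    return min(range(n_classes), key=lambda k: fused[k])

# Three models vote on a 3-class lesion; two lean toward class 0.
predicted = fuzzy_rank_fusion([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
```

Unlike plain confidence averaging, the nonlinear penalties reward models that are decisively confident, which is the motivation for fuzzy-rank ensembling.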
Affiliation(s)
- Arindam Halder
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Anogh Dalal
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Sanghita Gharami
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Marcin Wozniak
- Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, Gliwice, 44100, Poland
- Muhammad Fazal Ijaz
- School of IT and Engineering, Melbourne Institute of Technology, Melbourne, 3000, Australia
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Shinawatra University, 99, Moo 10, Bang Toei, Sam Khok, 12160, Pathum Thani, Thailand
4
Hamamoto R, Komatsu M, Yamada M, Kobayashi K, Takahashi M, Miyake M, Jinnai S, Koyama T, Kouno N, Machino H, Takahashi S, Asada K, Ueda N, Kaneko S. Current status and future direction of cancer research using artificial intelligence for clinical application. Cancer Sci 2025; 116:297-307. PMID: 39557634. PMCID: PMC11786316. DOI: 10.1111/cas.16395.
Abstract
The expectations for artificial intelligence (AI) technology have increased considerably in recent years, mainly due to the emergence of deep learning. At present, AI technology is being used for various purposes and has brought about change in society. In particular, the rapid development of generative AI technology, exemplified by ChatGPT, has amplified the societal impact of AI. The medical field is no exception, with a wide range of AI technologies being introduced for basic and applied research. Further, AI-equipped software as a medical device (AI-SaMD) is also being approved by regulatory bodies. Combined with the advent of big data, data-driven research utilizing AI is actively pursued. Nevertheless, while AI technology has great potential, it also presents many challenges that require careful consideration. In this review, we introduce the current status of AI-based cancer research, especially from the perspective of clinical application, and discuss the associated challenges and future directions, with the aim of helping to promote cancer research that utilizes effective AI technology.
Affiliation(s)
- Ryuji Hamamoto
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Masaaki Komatsu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Masayoshi Yamada
- Department of Endoscopy, National Cancer Center Hospital, Tokyo, Japan
- Kazuma Kobayashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Masamichi Takahashi
- Department of Neurosurgery and Neuro-Oncology, National Cancer Center Hospital, Tokyo, Japan
- Department of Neurosurgery, School of Medicine, Tokai University, Isehara, Kanagawa, Japan
- Mototaka Miyake
- Department of Diagnostic Radiology, National Cancer Center Hospital, Tokyo, Japan
- Shunichi Jinnai
- Department of Dermatologic Oncology, National Cancer Center Hospital East, Kashiwa, Japan
- Takafumi Koyama
- Department of Experimental Therapeutics, National Cancer Center Hospital, Tokyo, Japan
- Nobuji Kouno
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Department of Surgery, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Hidenori Machino
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Satoshi Takahashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Ken Asada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Naonori Ueda
- Disaster Resilience Science Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
5
Vidhyalakshmi AM, Kanchana M. Optimizing Skin Cancer Diagnosis: A Modified Ensemble Convolutional Neural Network for Classification. Microsc Res Tech 2025. PMID: 39888306. DOI: 10.1002/jemt.24792.
Abstract
Skin cancer is recognized as one of the most harmful cancers worldwide. Early detection of this cancer is an effective measure for treating the disease efficiently. Traditional skin cancer detection methods face scalability challenges and overfitting issues. To address these complexities, this study proposes a random cat swarm optimization (CSO) with an ensemble convolutional neural network (RCS-ECNN) method to categorize the different stages of skin cancer. In this study, two deep learning classifiers, a deep neural network (DNN) and a Keras DNN (KDNN), are utilized to identify the stages of skin cancer. In this method, an effective preprocessing phase is presented to simplify the classification process, and the optimal features are selected during the feature extraction phase. Then, the GrabCut algorithm is employed to carry out the segmentation process, and CSO is employed to enhance the effectiveness of the method. The HAM10000 and ISIC datasets are utilized to evaluate the RCS-ECNN method, which achieved an accuracy of 99.56%, a recall of 99.66%, a specificity of 99.254%, a precision of 99.18%, and an F1-score of 98.545%. The experimental results demonstrated that the RCS-ECNN method outperforms existing techniques.
Affiliation(s)
- A M Vidhyalakshmi
- Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- M Kanchana
- Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
6
Khullar V, Kaur P, Gargrish S, Mishra AM, Singh P, Diwakar M, Bijalwan A, Gupta I. Minimal sourced and lightweight federated transfer learning models for skin cancer detection. Sci Rep 2025; 15:2605. PMID: 39837883. PMCID: PMC11750969. DOI: 10.1038/s41598-024-82402-x.
Abstract
One of the most fatal diseases that affect people is skin cancer. Because nevus and melanoma lesions are so similar, there is a high likelihood of false-negative diagnoses in hospitals. The aim of this paper is to propose and develop a technique to classify types of skin cancer with high accuracy using minimal resources and lightweight federated transfer learning models. Here, minimal-resource pre-trained deep learning models, including EfficientNetV2S, EfficientNetB3, ResNet50, and NasNetMobile, have been used to apply transfer learning on data of shape [Formula: see text]. For comparison with the minimal-resource transfer learning, the same methodology has been applied using the best identified model, i.e., EfficientNetV2S, for images of shape [Formula: see text]. The identified minimal and lightweight resource-based EfficientNetV2S with images of shape [Formula: see text] has then been applied in a federated learning ecosystem. Both identically and non-identically distributed datasets of shape [Formula: see text] have been applied and analyzed through federated learning implementations. The results have been analyzed to show the impact of low-pixel images with non-identical distributions over clients using parameters such as accuracy, precision, recall, and categorical losses. The classification of skin cancer achieves an accuracy of 89.83% in the IID and 90.64% in the non-IID setting.
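The aggregation step underlying such a federated ecosystem can be sketched as FedAvg-style weighted parameter averaging. This simplified stand-in treats each client's model as a flat parameter vector; it is a sketch of the general technique, not the paper's implementation:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg-style aggregation: average per-client parameter vectors,
    weighting each client by its local dataset size. In a real federated
    round, clients train locally and only these parameters (never the
    raw skin images) are sent to the server for averaging."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with unequal data volumes (e.g. a non-IID split):
global_update = fed_avg([[0.0, 0.0], [4.0, 8.0]], client_sizes=[3, 1])
```

Weighting by dataset size is what makes the scheme robust to the non-identical client distributions the abstract analyzes: larger clients pull the global model proportionally harder.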
Affiliation(s)
- Vikas Khullar
- Chitkara University Institute of Engineering Technology, Chitkara University, Rajpura, Punjab, India
- Prabhjot Kaur
- Chitkara University Institute of Engineering Technology, Chitkara University, Rajpura, Punjab, India
- Shubham Gargrish
- Chitkara University Institute of Engineering Technology, Chitkara University, Rajpura, Punjab, India
- Anand Muni Mishra
- Chandigarh Engineering College, Chandigarh Group of Colleges, Jhanjeri, Mohali, India
- Prabhishek Singh
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida, Uttar Pradesh, India
- Manoj Diwakar
- CSE Department, Graphic Era Deemed to be University, Dehradun, Uttarakhand, India
- Graphic Era Hill University, Dehradun, Uttarakhand, India
- Anchit Bijalwan
- Faculty of Electrical and Computer Engineering, Arba Minch University, Arba Minch, Ethiopia
- Indrajeet Gupta
- School of Computer Science and AI, SR University, Warangal, Telangana, India
7
Natha P, Tera SP, Chinthaginjala R, Rab SO, Narasimhulu CV, Kim TH. Boosting skin cancer diagnosis accuracy with ensemble approach. Sci Rep 2025; 15:1290. PMID: 39779772. PMCID: PMC11711234. DOI: 10.1038/s41598-024-84864-5.
Abstract
Skin cancer is common and deadly, hence a correct diagnosis at an early stage is essential. Effective therapy depends on precise classification of the several skin cancer forms, each with special traits. Because dermoscopy and other sophisticated imaging methods produce detailed lesion images, early detection has been enhanced. It is still difficult to analyze the images to differentiate benign from malignant tumors, though. Better predictive modeling methods are needed, since the diagnostic procedures used now frequently produce inaccurate and inconsistent results. In dermatology, machine learning (ML) models are becoming essential for the automatic detection and classification of skin cancer lesions from image data. With ensemble models, which mix several ML approaches to exploit their advantages and lessen their disadvantages, this work seeks to improve skin cancer predictions. We introduce a new method, the Max Voting method, for optimizing skin cancer classification. On the HAM10000 and ISIC 2018 datasets, we trained and assessed three distinct ML models: Random Forest (RF), Multi-layer Perceptron Neural Network (MLPN), and Support Vector Machine (SVM). Overall performance was increased by combining predictions with the Max Voting technique. Moreover, feature vectors optimally produced from the image data by a Genetic Algorithm (GA) were given to the ML models. We demonstrate that the Max Voting method greatly improves predictive performance, reaching an accuracy of 94.70% and producing the best results for F1-measure, recall, and precision. Max Voting turned out to be the most dependable and robust approach, combining the benefits of numerous pre-trained ML models to provide a new and efficient method for classifying skin cancer lesions.
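The Max Voting step itself reduces to a hard vote over the base learners' class predictions; the sketch below shows only that fusion step, not the RF/MLPN/SVM base models or the GA feature selection the abstract describes:

```python
from collections import Counter

def max_vote(predictions):
    """Hard max voting for one sample: return the class label predicted
    by the most base classifiers (ties broken by first occurrence, a
    detail the abstract does not specify). `predictions` holds one
    label per base model, e.g. from RF, MLPN and SVM."""
    return Counter(predictions).most_common(1)[0][0]

# RF and SVM agree on "melanoma", MLPN dissents -> ensemble says "melanoma"
label = max_vote(["melanoma", "nevus", "melanoma"])
```

With an odd number of base models on a binary task, this vote never ties; for multi-class lesions a tie-breaking policy must be chosen explicitly.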
Affiliation(s)
- Priya Natha
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, 522302, India
- Sivarama Prasad Tera
- Department of Electronics and Electrical Engineering, Indian Institute of Technology, Guwahati, Assam, 781039, India
- Ravikumar Chinthaginjala
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, 632014, India
- Safia Obaidur Rab
- Department of Clinical Laboratory Sciences, College of Applied Medical Science, King Khalid University, Abha, Saudi Arabia
- C Venkata Narasimhulu
- Department of Electronics and Communication Engineering, Chaitanya Bharati Institute of Technology, Hyderabad, 500075, India
- Tae Hoon Kim
- School of Information and Electronic Engineering and Zhejiang Key Laboratory of Biomedical Intelligent Computing Technology, Zhejiang University of Science and Technology, No. 318, Hangzhou, Zhejiang, China
8
Charles C, Tulloch C, McNaughton M, Hosein P, Hambleton IR. Data journey map: a process for co-creating data requirements for health care artificial intelligence. Rev Panam Salud Publica 2024; 48:e107. PMID: 39687242. PMCID: PMC11648063. DOI: 10.26633/rpsp.2024.107.
Abstract
The Caribbean small island developing states have limited resources for comprehensive health care provision and are facing an increasing burden of noncommunicable diseases which is driven by an aging regional population. Artificial intelligence (AI) and other digital technologies offer promise for contributing to health care efficiencies, but themselves are dependent on the availability and accessibility of accurate health care data. A regional shortfall in data professionals continues to hamper legislative recognition and promotion of increased data production in Caribbean countries. Tackling the data shortfall will take time and will require a sustainably wider pool of data producers. The data journey map is one approach that can contribute to overcoming such challenges. A data journey map is a process for organizing the collection of health data that focuses on interactions between patient and health care provider. It introduces the idea that data collection is an integral part of the patient journey and that interactions between patient and provider can be enhanced by building data collection into daily health care. A carefully developed and enacted data journey map highlights key points in the care pathway for data collection. These so-called data hotspots can be used to plan - then eventually implement - appropriate AI health care solutions. In this article we introduce the idea of journey mapping, offer an example using cervical cancer prevention and treatment, and discuss the benefits and challenges to implementing such an approach.
Affiliation(s)
- Curtis Charles
- Five Islands Campus, University of the West Indies, Antigua and Barbuda
- Cherie Tulloch
- The Cervical Cancer Elimination Programme, Ministry of Health, Wellness, Social Transformation and the Environment, Antigua and Barbuda
- Maurice McNaughton
- Centre of Excellence and Innovation, Mona School of Business & Management, University of the West Indies, Mona Campus, Jamaica
- Patrick Hosein
- Department of Computing and Information Technology, University of the West Indies, St Augustine, Trinidad
- Ian R. Hambleton
- George Alleyne Chronic Disease Research Centre, Caribbean Institute for Health Research, University of the West Indies, Bridgetown, Barbados
9
Hsu BWY, Hsiao WW, Liu CY, Tseng VS, Lee CH. Rapid and noninvasive estimation of human arsenic exposure based on 4-photo-set of the hand and foot photos through artificial intelligence. J Hazard Mater 2024; 480:136003. PMID: 39378597. DOI: 10.1016/j.jhazmat.2024.136003.
Abstract
Chronic exposure to arsenic is linked to the development of cancers in the skin, lungs, and bladder. Arsenic exposure manifests as variegated pigmentation and characteristic pitted keratosis on the hands and feet, which often precede the onset of internal cancers. Traditionally, human arsenic exposure is estimated through arsenic levels in biological tissues; however, these methods are invasive and time-consuming. This study aims to develop a noninvasive approach to predict arsenic exposure using artificial intelligence (AI) to analyze photographs of hands and feet. By incorporating well water consumption data and arsenic concentration levels, we developed an AI algorithm trained on 9988 hand and foot photographs from 2497 subjects. This algorithm correlates visual features of palmoplantar hyperkeratosis with arsenic exposure levels. Four pictures per patient, capturing both ventral and dorsal aspects of the hands and feet, were analyzed. The AI model utilized existing arsenic exposure data, including arsenic concentration (AC) and cumulative arsenic exposure (CAE), to make binary predictions of high and low arsenic exposure. The AI model achieved optimal area under the curve (AUC) values of 0.813 for AC and 0.779 for CAE. Recall and precision metrics were 0.729 and 0.705 for CAE, and 0.750 and 0.763 for AC, respectively. While biomarkers have traditionally been used to assess arsenic exposure, efficient noninvasive methods are lacking. To our knowledge, this is the first study to leverage deep learning for noninvasive arsenic exposure assessment. Despite challenges with binary classification due to imbalanced and sparse data, this approach demonstrates the potential for noninvasive estimation of arsenic concentration. Future studies should focus on increasing data volume and categorizing arsenic concentration statistics to enhance model accuracy. This rapid estimation method could significantly contribute to epidemiological studies and aid physicians in diagnosis.
Affiliation(s)
- Benny Wei-Yun Hsu
- Department of Computer Science, National Yang Ming Chiao Tung University, Engineering Bldg 3, 1001 University Road, Hsinchu 300, Taiwan
- Wei-Wen Hsiao
- Department of Computer Science, National Yang Ming Chiao Tung University, Engineering Bldg 3, 1001 University Road, Hsinchu 300, Taiwan
- Ching-Yi Liu
- Department of Dermatology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, 123 Dapi Road, Niaosong District, Kaohsiung City 83301, Taiwan
- Vincent S Tseng
- Department of Computer Science, National Yang Ming Chiao Tung University, Engineering Bldg 3, 1001 University Road, Hsinchu 300, Taiwan
- Chih-Hung Lee
- Department of Dermatology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, 123 Dapi Road, Niaosong District, Kaohsiung City 83301, Taiwan
10
Yuan L, Jin K, Shao A, Feng J, Shi C, Ye J, Grzybowski A. Analysis of international publication trends in artificial intelligence in skin cancer. Clin Dermatol 2024; 42:570-584. PMID: 39260460. DOI: 10.1016/j.clindermatol.2024.09.012.
Abstract
Bibliometric methods were used to analyze publications on the use of artificial intelligence (AI) in skin cancer from 2010 to 2022, aiming to explore current publication trends and future directions. A comprehensive search using four terms, "artificial intelligence," "machine learning," "deep learning," and "skin cancer," was performed in the Web of Science database for original English language publications on AI in skin cancer from 2010 to 2022. We visually analyzed publication, citation, and coupling information, focusing on authors, countries and regions, publishing journals, institutions, and core keywords. The analysis of 989 publications revealed a consistent year-on-year increase in publications from 2010 to 2022 (0.51% versus 33.57%). The United States, India, and China emerged as the leading contributors. IEEE Access was identified as the most prolific journal in this area. Key journals and influential authors were highlighted. Examination of the top 10 most cited publications highlights the significant potential of AI in oncology. Co-citation network analysis identified four primary categories of classical literature on AI in skin tumors. Keyword analysis indicated that "melanoma," "classification," and "deep learning" were the most prevalent keywords, suggesting that deep learning for melanoma diagnosis and grading is the current research focus. The term "pigmented skin lesions" showed the strongest burst and longest duration, whereas "texture" was the latest emerging keyword. AI represents a rapidly growing area of research in skin cancer with the potential to significantly improve skin cancer management. Future research will likely focus on machine learning and deep learning technologies for screening and diagnostic purposes.
Affiliation(s)
- Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- An Shao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Jia Feng
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Caiping Shi
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
11
Gómez-Martínez V, Chushig-Muzo D, Veierød MB, Granja C, Soguero-Ruiz C. Ensemble feature selection and tabular data augmentation with generative adversarial networks to enhance cutaneous melanoma identification and interpretability. BioData Min 2024; 17:46. PMID: 39478549. PMCID: PMC11526724. DOI: 10.1186/s13040-024-00397-7.
Abstract
BACKGROUND Cutaneous melanoma is the most aggressive form of skin cancer, responsible for most skin cancer-related deaths. Recent advances in artificial intelligence, jointly with the availability of public dermoscopy image datasets, have made it possible to assist dermatologists in melanoma identification. While image feature extraction holds potential for melanoma detection, it often leads to high-dimensional data. Furthermore, most image datasets present the class imbalance problem, where a few classes have numerous samples, whereas others are under-represented. METHODS In this paper, we propose to combine ensemble feature selection (FS) methods and data augmentation with conditional tabular generative adversarial networks (CTGAN) to enhance melanoma identification in imbalanced datasets. We employed dermoscopy images from two public datasets, PH2 and Derm7pt, which contain melanoma and not-melanoma lesions. To capture intrinsic information from skin lesions, we conducted two feature extraction (FE) approaches, including handcrafted and embedding features. For the former, color, geometric and first-, second-, and higher-order texture features were extracted, whereas for the latter, embeddings were obtained using ResNet-based models. To alleviate the high dimensionality of the extracted features, ensemble FS with filter methods was used and evaluated. For data augmentation, we conducted a progressive analysis of the imbalance ratio (IR), related to the amount of synthetic samples created, and evaluated the impact on the predictive results. To gain interpretability of the predictive models, we used SHAP, bootstrap resampling statistical tests, and UMAP visualizations. RESULTS The combination of ensemble FS, CTGAN, and linear models achieved the best predictive results, achieving AUCROC values of 87% (with support vector machine and IR=0.9) and 76% (with LASSO and IR=1.0) for PH2 and Derm7pt, respectively. We also identified that melanoma lesions were mainly characterized by features related to color, while not-melanoma lesions were characterized by texture features. CONCLUSIONS Our results demonstrate the effectiveness of ensemble FS and synthetic data in the development of models that accurately identify melanoma. This research advances skin lesion analysis, contributing to both melanoma detection and the interpretation of the main features for its identification.
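As an illustrative aside (not code from the cited paper, and all counts are made up): the imbalance-ratio (IR) analysis described in the abstract reduces to simple bookkeeping — given minority and majority class counts, how many synthetic minority samples must a generator such as CTGAN produce to reach a target IR, taken here as n_minority/n_majority?

```python
# Hypothetical sketch of the imbalance-ratio (IR) bookkeeping behind
# minority-class augmentation (e.g., with CTGAN-generated samples).
# IR is taken as n_minority / n_majority, so IR = 1.0 means balanced.

def synthetic_samples_needed(n_minority: int, n_majority: int, target_ir: float) -> int:
    """Number of synthetic minority samples required to reach target_ir."""
    if not 0 < target_ir <= 1:
        raise ValueError("target_ir must be in (0, 1]")
    needed = int(round(target_ir * n_majority)) - n_minority
    return max(needed, 0)  # never remove real samples

# Illustrative example: 40 melanoma vs. 160 not-melanoma lesions, target IR = 0.9
print(synthetic_samples_needed(40, 160, 0.9))  # 104 synthetic melanoma samples
```

A progressive IR sweep, as in the study, would simply call this for a grid of target IR values and retrain the classifier at each step.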
Affiliation(s)
- Vanesa Gómez-Martínez: Department of Signal Theory and Communications, Telematics and Computing Systems, Rey Juan Carlos University, Madrid, 28943, Spain
- David Chushig-Muzo: Department of Signal Theory and Communications, Telematics and Computing Systems, Rey Juan Carlos University, Madrid, 28943, Spain
- Marit B Veierød: Oslo Centre for Biostatistics and Epidemiology, Department of Biostatistics, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Conceição Granja: Norwegian Centre for E-health Research, University Hospital of North Norway, Tromsø, 9019, Norway
- Cristina Soguero-Ruiz: Department of Signal Theory and Communications, Telematics and Computing Systems, Rey Juan Carlos University, Madrid, 28943, Spain
12
Alzakari SA, Ojo S, Wanliss J, Umer M, Alsubai S, Alasiry A, Marzougui M, Innab N. LesionNet: an automated approach for skin lesion classification using SIFT features with customized convolutional neural network. Front Med (Lausanne) 2024; 11:1487270. [PMID: 39497838] [PMCID: PMC11532583] [DOI: 10.3389/fmed.2024.1487270]
Abstract
Accurate detection of skin lesions through computer-aided diagnosis has emerged as a critical advancement in dermatology, addressing the inefficiencies and errors inherent in manual visual analysis. Despite the promise of automated diagnostic approaches, challenges such as image size variability, hair artifacts, color inconsistencies, ruler markers, low contrast, lesion dimension differences, and gel bubbles must be overcome. Researchers have made significant strides in binary classification problems, particularly in distinguishing melanocytic lesions from normal skin conditions. Leveraging the "MNIST HAM10000" dataset from the International Skin Imaging Collaboration, this study integrates Scale-Invariant Feature Transform (SIFT) features with a custom convolutional neural network model called LesionNet. The experimental results reveal the model's robustness, achieving an impressive accuracy of 99.28%. This high accuracy underscores the effectiveness of combining feature extraction techniques with advanced neural network models in enhancing the precision of skin lesion detection.
Affiliation(s)
- Sarah A. Alzakari: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Stephen Ojo: College of Engineering, Anderson University, Anderson, SC, United States
- James Wanliss: College of Engineering, Anderson University, Anderson, SC, United States
- Muhammad Umer: Department of Computer Science and Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Shtwai Alsubai: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Areej Alasiry: Department of Informatics and Computer Systems, College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Mehrez Marzougui: Department of Computer Engineering, College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Nisreen Innab: Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Riyadh, Saudi Arabia
13
Goyal M, Marotti JD, Workman AA, Tooker GM, Ramin SK, Kuhn EP, Chamberlin MD, diFlorio-Alexander RM, Hassanpour S. A multi-model approach integrating whole-slide imaging and clinicopathologic features to predict breast cancer recurrence risk. NPJ Breast Cancer 2024; 10:93. [PMID: 39426965] [PMCID: PMC11490577] [DOI: 10.1038/s41523-024-00700-z]
Abstract
Breast cancer is the most common malignancy affecting women worldwide and is notable for its morphologic and biologic diversity, with varying risks of recurrence following treatment. The Oncotype DX Breast Recurrence Score test is an important predictive and prognostic genomic assay for estrogen receptor positive/HER2 negative breast cancer that guides therapeutic strategies; however, such tests can be expensive, delay care, and are not widely available. The aim of this study was to develop a multi-model approach integrating the analysis of whole-slide images and clinicopathologic data to predict their associated breast cancer recurrence risks and categorize these patients into two risk groups according to the predicted score: low-risk and high-risk. The proposed novel methodology uses convolutional neural networks for feature extraction and vision transformers for contextual aggregation, complemented by a logistic regression model that analyzes clinicopathologic data for classification into two risk categories. This method was trained and tested on 956 hematoxylin and eosin-stained whole-slide images of 950 ER+/HER2- breast cancer patients with corresponding clinicopathological features that had prior Oncotype DX testing. The model's performance was evaluated using an internal test set of 192 patients from Dartmouth Health and an external test set of 405 patients from the University of Chicago. The multi-model approach achieved an AUC of 0.91 (95% CI: 0.87-0.95) on the internal set and an AUC of 0.84 (95% CI: 0.78-0.89) on the external cohort for predicting low- and high-breast cancer recurrence risk categories based on the Oncotype DX recurrence score. With further validation, the proposed methodology could provide an alternative to assist clinicians in personalizing treatment for breast cancer patients and potentially improving their outcomes.
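As a purely illustrative sketch (not the authors' code; all shapes, names, and values are invented): the general pattern of extracting per-tile CNN feature vectors from a whole-slide image and aggregating them into a single slide-level vector can be pictured as attention-weighted pooling, the basic operation behind transformer-style contextual aggregation.

```python
import numpy as np

# Illustrative sketch: pool per-tile CNN embeddings into one slide-level
# vector using softmax attention weights over tiles.
rng = np.random.default_rng(0)
n_tiles, d = 6, 4
tile_features = rng.normal(size=(n_tiles, d))  # stand-in for CNN embeddings
w = rng.normal(size=d)                         # scoring vector (learned in practice)

scores = tile_features @ w                     # one relevance score per tile
weights = np.exp(scores - scores.max())
weights /= weights.sum()                       # softmax over tiles

slide_vector = weights @ tile_features         # weighted average, shape (d,)
assert slide_vector.shape == (d,) and np.isclose(weights.sum(), 1.0)
```

In the multi-model setup described above, such a slide-level vector would then be combined with clinicopathologic variables in a downstream classifier (e.g., logistic regression).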
Affiliation(s)
- Manu Goyal: Department of Biomedical Data Science, Dartmouth College, Hanover, NH, USA
- Jonathan D Marotti: Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH, USA
- Adrienne A Workman: Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH, USA
- Seth K Ramin: Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
- Elaine P Kuhn: Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
- Saeed Hassanpour: Department of Biomedical Data Science; Department of Computer Science; and Department of Epidemiology, Dartmouth College, Hanover, NH, USA
14
Trager MH, Gordon ER, Breneman A, Weng C, Samie FH. Artificial intelligence for nonmelanoma skin cancer. Clin Dermatol 2024; 42:466-476. [PMID: 38925444] [DOI: 10.1016/j.clindermatol.2024.06.016]
Abstract
Nonmelanoma skin cancers (NMSCs) are among the top five most common cancers globally. NMSC is an area with great potential for novel application of diagnostic tools including artificial intelligence (AI). In this scoping review, we aimed to describe the applications of AI in the diagnosis and treatment of NMSC. Twenty-nine publications described AI applications to dermatopathology including lesion classification and margin assessment. Twenty-five publications discussed AI use in clinical image analysis, showing that algorithms are not superior to dermatologists and may rely on unbalanced, nonrepresentative, and nontransparent training data sets. Sixteen publications described the use of AI in cutaneous surgery for NMSC including use in margin assessment during excisions and Mohs surgery, as well as predicting procedural complexity. Eleven publications discussed spectroscopy, confocal microscopy, thermography, and the AI algorithms that analyze and interpret their data. Ten publications pertained to AI applications for the discovery and use of NMSC biomarkers. Eight publications discussed the use of smartphones and AI, specifically how they enable clinicians and patients to have increased access to instant dermatologic assessments but with varying accuracies. Five publications discussed large language models and NMSC, including how they may facilitate or hinder patient education and medical decision-making. Three publications pertaining to skin of color and AI for NMSC discussed concerns regarding limited diverse data sets for the training of convolutional neural networks. AI demonstrates tremendous potential to improve diagnosis, patient and clinician education, and management of NMSC. Despite excitement regarding AI, data sets are often not transparently reported, may include low-quality images, and may not include diverse skin types, limiting generalizability. AI may serve as a tool to increase access to dermatology services for patients in rural areas and save health care dollars. These benefits can only be achieved, however, with consideration of potential ethical costs.
Affiliation(s)
- Megan H Trager: Department of Dermatology, Columbia University Irving Medical Center, New York, NY, USA
- Emily R Gordon: Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA
- Alyssa Breneman: Department of Dermatology, Columbia University Irving Medical Center, New York, NY, USA
- Chunhua Weng: Department of Biomedical Informatics, Columbia University Irving Medical Center, New York, NY, USA
- Faramarz H Samie: Department of Dermatology, Columbia University Irving Medical Center, New York, NY, USA
15
Yeh HH, Hsu BWY, Chou SY, Hsu TJ, Tseng VS, Lee CH. Deep Deblurring in Teledermatology: Deep Learning Models Restore the Accuracy of Blurry Images' Classification. Telemed J E Health 2024; 30:2477-2482. [PMID: 38934135] [DOI: 10.1089/tmj.2023.0703]
Abstract
Background: Blurry images in teledermatology and consultation increase the diagnostic difficulty for both deep learning models and physicians. We aim to determine the extent of restoration in diagnostic accuracy after blurry images are deblurred by deep learning models. Methods: We used 19,191 skin images from a public skin image dataset that includes 23 skin disease categories, 54 skin images from a public dataset of blurry skin images, and 53 blurry dermatology consultation photos in a medical center to compare the diagnostic accuracy of trained diagnostic deep learning models and subjective sharpness between blurry and deblurred images. We evaluated five different deblurring models, including models for motion blur, Gaussian blur, Bokeh blur, mixed slight blur, and mixed strong blur. Main Outcomes and Measures: Diagnostic accuracy was measured as sensitivity and precision of correct model prediction of the skin disease category. Sharpness rating was performed by board-certified dermatologists on a 4-point scale, with 4 being the highest image clarity. Results: The sensitivity of diagnostic models dropped 0.15 and 0.22 on slightly and strongly blurred images, respectively, and deblurring models restored 0.14 and 0.17 for each group. The sharpness ratings perceived by dermatologists improved from 1.87 to 2.51 after deblurring. Activation maps showed the focus of diagnostic models was compromised by the blurriness but was restored after deblurring. Conclusions: Deep learning models can restore the diagnostic accuracy of diagnostic models for blurry images and increase image sharpness perceived by dermatologists. The model can be incorporated into teledermatology to help the diagnosis of blurry images.
Affiliation(s)
- Hsu-Hang Yeh: Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Benny Wei-Yun Hsu: Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Sheng-Yuan Chou: Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Ting-Jung Hsu: Department of Dermatology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung, Taiwan
- Vincent S Tseng: Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Chih-Hung Lee: Department of Dermatology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung, Taiwan; Institute for Translational Research in Biomedicine, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
16
Goyal M, Tafe LJ, Feng JX, Muller KE, Hondelink L, Bentz JL, Hassanpour S. Deep Learning for Grading Endometrial Cancer. Am J Pathol 2024; 194:1701-1711. [PMID: 38879079] [PMCID: PMC11373039] [DOI: 10.1016/j.ajpath.2024.05.003]
Abstract
Endometrial cancer is the fourth most common cancer in women in the United States, with a lifetime risk of approximately 2.8%. Precise histologic evaluation and molecular classification of endometrial cancer are important for effective patient management and determining the best treatment options. This study introduces EndoNet, which uses convolutional neural networks for extracting histologic features and a vision transformer for aggregating these features and classifying slides into high- and low-grade cases. The model was trained on 929 digitized hematoxylin and eosin-stained whole-slide images of endometrial cancer from hysterectomy cases at Dartmouth-Health. It classifies these slides into low-grade (endometrioid grades 1 and 2) and high-grade (endometrioid carcinoma International Federation of Gynecology and Obstetrics grade 3, uterine serous carcinoma, or carcinosarcoma) categories. EndoNet was evaluated on an internal test set of 110 patients and an external test set of 100 patients from The Cancer Genome Atlas database. The model achieved a weighted average F1 score of 0.91 (95% CI, 0.86 to 0.95) and an area under the curve of 0.95 (95% CI, 0.89 to 0.99) on the internal test, and 0.86 (95% CI, 0.80 to 0.94) for F1 score and 0.86 (95% CI, 0.75 to 0.93) for area under the curve on the external test. Pending further validation, EndoNet has the potential to support pathologists in classifying the grades of gynecologic pathology tumors without the need for manual annotations.
Affiliation(s)
- Manu Goyal: Department of Biomedical Data Science, Dartmouth College, Hanover, New Hampshire
- Laura J Tafe: Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, New Hampshire
- James X Feng: Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire
- Kristen E Muller: Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, New Hampshire
- Liesbeth Hondelink: Department of Pathology, Leiden University Medical Center, Leiden, the Netherlands
- Jessica L Bentz: Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, New Hampshire
- Saeed Hassanpour: Department of Biomedical Data Science, Dartmouth College, Hanover, New Hampshire
17
Malik SG, Jamil SS, Aziz A, Ullah S, Ullah I, Abohashrh M. High-Precision Skin Disease Diagnosis through Deep Learning on Dermoscopic Images. Bioengineering (Basel) 2024; 11:867. [PMID: 39329609] [PMCID: PMC11440112] [DOI: 10.3390/bioengineering11090867]
Abstract
Dermatological conditions are prevalent in humans and are caused primarily by environmental and climatic fluctuations, among other factors. Timely identification is the most effective remedy to prevent minor ailments from escalating into severe conditions. Diagnosing skin illnesses is consistently challenging for health practitioners. Presently, they rely on conventional methods, such as visual examination of the skin. State-of-the-art technologies can enhance the accuracy of skin disease diagnosis by utilizing data-driven approaches. This paper presents a Computer Assisted Diagnosis (CAD) framework developed to detect skin illnesses at an early stage. We suggest a computationally efficient and lightweight deep learning model that utilizes a CNN architecture. We then conduct thorough experiments to compare the performance of shallow and deep learning models. The CNN model under consideration consists of seven convolutional layers and obtained an accuracy of 87.64% when applied to three distinct disease categories. The studies were conducted using the International Skin Imaging Collaboration (ISIC) dataset, which exclusively consists of dermoscopic images. This study enhances the field of skin disease diagnostics by utilizing state-of-the-art technology, attaining high levels of accuracy, and striving for efficiency improvements. The unique features and future considerations of this technology create opportunities for further advancements in the automated diagnosis of skin diseases and tailored treatment.
Affiliation(s)
- Sadia Ghani Malik: School of Computing, National University of Computer & Emerging Sciences, Karachi 75030, Pakistan
- Syed Shahryar Jamil: College of Computing and Information Sciences, PAF Karachi Institute of Economics and Technology (PAFKIET), Karachi 74600, Pakistan
- Abdul Aziz: School of Computing, National University of Computer & Emerging Sciences, Karachi 75030, Pakistan
- Sana Ullah: Department of Software Engineering, University of Malakand, Malakand 18800, Pakistan
- Inam Ullah: Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
- Mohammed Abohashrh: Department of Basic Medical Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61421, Saudi Arabia
18
Mishra A, Tabassum N, Aggarwal A, Kim YM, Khan F. Artificial Intelligence-Driven Analysis of Antimicrobial-Resistant and Biofilm-Forming Pathogens on Biotic and Abiotic Surfaces. Antibiotics (Basel) 2024; 13:788. [PMID: 39200087] [PMCID: PMC11351874] [DOI: 10.3390/antibiotics13080788]
Abstract
The growing threat of antimicrobial-resistant (AMR) pathogens to human health worldwide emphasizes the need for more effective infection control strategies. Bacterial and fungal biofilms pose a major challenge in treating AMR pathogen infections. Biofilms are formed by pathogenic microbes encased in extracellular polymeric substances to confer protection from antimicrobials and the host immune system. Biofilms also promote the growth of antibiotic-resistant mutants and latent persister cells and thus complicate therapeutic approaches. Biofilms are ubiquitous and cause serious health risks due to their ability to colonize various surfaces, including human tissues, medical devices, and food-processing equipment. Detection and characterization of biofilms are crucial for prompt intervention and infection control. To this end, traditional approaches are often effective, yet they fail to identify the microbial species inside biofilms. Recent advances in artificial intelligence (AI) have provided new avenues to improve biofilm identification. Machine-learning algorithms and image-processing techniques have shown promise for the accurate and efficient detection of biofilm-forming microorganisms on biotic and abiotic surfaces. These advancements have the potential to transform biofilm research and clinical practice by allowing faster diagnosis and more tailored therapy. This comprehensive review focuses on the application of AI techniques for the identification of biofilm-forming pathogens in various industries, including healthcare, food safety, and agriculture. The review discusses the existing approaches, challenges, and potential applications of AI in biofilm research, with a particular focus on the role of AI in improving diagnostic capacities and guiding preventative actions. The synthesis of the current knowledge and future directions, as described in this review, will guide future research and development efforts in combating biofilm-associated infections.
Affiliation(s)
- Akanksha Mishra: School of Bioengineering and Biosciences, Lovely Professional University, Phagwara 144001, Punjab, India
- Nazia Tabassum: Marine Integrated Biomedical Technology Center, The National Key Research Institutes in Universities, Pukyong National University, Busan 48513, Republic of Korea; Research Center for Marine Integrated Bionics Technology, Pukyong National University, Busan 48513, Republic of Korea
- Ashish Aggarwal: School of Bioengineering and Biosciences, Lovely Professional University, Phagwara 144001, Punjab, India
- Young-Mog Kim: Marine Integrated Biomedical Technology Center, The National Key Research Institutes in Universities, Pukyong National University, Busan 48513, Republic of Korea; Research Center for Marine Integrated Bionics Technology, Pukyong National University, Busan 48513, Republic of Korea; Department of Food Science and Technology, Pukyong National University, Busan 48513, Republic of Korea
- Fazlurrahman Khan: Marine Integrated Biomedical Technology Center, The National Key Research Institutes in Universities, Pukyong National University, Busan 48513, Republic of Korea; Research Center for Marine Integrated Bionics Technology, Pukyong National University, Busan 48513, Republic of Korea; Institute of Fisheries Science, Pukyong National University, Busan 48513, Republic of Korea; International Graduate Program of Fisheries Science, Pukyong National University, Busan 48513, Republic of Korea
19
Ali OME, Wright B, Goodhead C, Hampton PJ. Patient-led skin cancer teledermatology without dermoscopy during the COVID-19 pandemic: important lessons for the development of future patient-facing teledermatology and artificial intelligence-assisted self-diagnosis. Clin Exp Dermatol 2024; 49:1056-1059. [PMID: 38589979] [DOI: 10.1093/ced/llae126]
Abstract
MySkinSelfie is a mobile phone application for skin self-monitoring, enabling secure sharing of patient-captured images with healthcare providers. This retrospective study assessed MySkinSelfie's role in remote skin cancer assessment at two centres for urgent (melanoma and squamous cell carcinoma) and nonurgent skin cancer referrals, investigating the feasibility of using patient-captured images without dermoscopy for remote diagnosis. A total of 814 lesions were submitted via MySkinSelfie, with a mean patient age of 63 years. Remote consultations reduced face-to-face appointments by 90% for basal cell carcinoma and by 63% for referrals on a 2-week waiting list. Diagnostic concordance (consultant vs. histological diagnosis) rates of 72% and 83% were observed for basal cell carcinoma (n = 107) and urgent skin cancers (n = 704), respectively. Challenges included image quality, workflow integration and lack of dermoscopy. Higher sensitivities were observed in recent artificial intelligence algorithms employing dermoscopy. While patient-captured images proved useful during the COVID-19 pandemic, further research is needed to explore the feasibility of widespread patient-led dermoscopy to enable direct patient-to-artificial intelligence diagnostic assessment.
Affiliation(s)
- Omar M E Ali: Department of Medicine, Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Beth Wright: Department of Dermatology, North Bristol NHS Trust, Westbury on Trym, UK
- Charlotte Goodhead: Department of Dermatology, Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Philip J Hampton: Department of Dermatology, Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK; National Institute for Health and Care Research Newcastle Biomedical Research Centre, Newcastle upon Tyne, UK
20
Lee ST, Lee JH. Review of neuromorphic computing based on NAND flash memory. Nanoscale Horiz 2024; 9:1475-1492. [PMID: 39015048] [DOI: 10.1039/d3nh00532a]
Abstract
The proliferation of data has facilitated global accessibility, which demands escalating amounts of power for data storage and processing purposes. In recent years, there has been a rise in research in the field of neuromorphic electronics, which draws inspiration from biological neurons and synapses. These electronics possess the ability to perform in-memory computing, which helps alleviate the limitations imposed by the 'von Neumann bottleneck' that exists between the memory and processor in the traditional von Neumann architecture. By leveraging their multi-bit non-volatility, characteristics that mimic biology, and Kirchhoff's law, neuromorphic electronics offer a promising solution to reduce the power consumption in processing vector-matrix multiplication tasks. Among all the existing nonvolatile memory technologies, NAND flash memory is one of the most competitive integrated solutions for the storage of large volumes of data. This work provides a comprehensive overview of the recent developments in neuromorphic computing based on NAND flash memory. Neuromorphic architectures using NAND flash memory for off-chip learning are presented with various quantization levels of input and weight. Next, neuromorphic architectures for on-chip learning are presented using standard backpropagation and feedback alignment algorithms. The array architecture, operation scheme, and electrical characteristics of NAND flash memory are discussed with a focus on the use of NAND flash memory in various neural network structures. Furthermore, the discrepancy of array architecture between on-chip learning and off-chip learning is addressed. This review article provides a foundation for understanding the neuromorphic computing based on the NAND flash memory and methods to utilize it based on application requirements.
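The in-memory vector-matrix multiplication mentioned above can be pictured as currents summing on bit lines: stored cell conductances G hold the weights, input voltages V drive the word lines, and Kirchhoff's current law yields I_j = Σ_i V_i · G_ij. A toy numerical sketch (all conductance and voltage values are illustrative, not device data from the review):

```python
import numpy as np

# Toy picture of in-memory vector-matrix multiplication: conductances G store
# the weight matrix, voltages V are the inputs, and Kirchhoff's current law
# sums I_j = sum_i V_i * G_ij on each bit line.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.0, 1.5]])      # 3 word lines x 2 bit lines, arbitrary units
V = np.array([0.1, 0.2, 0.3])   # input voltages on the word lines

I = V @ G                       # bit-line currents = analog VMM result
print(I)                        # [0.14 0.66]
```

This is the operation that neuromorphic arrays perform in one analog step, rather than shuttling weights across the von Neumann memory-processor boundary.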
Affiliation(s)
- Sung-Tae Lee: School of Electronic and Electrical Engineering, Hongik University, Seoul 04066, Republic of Korea
- Jong-Ho Lee: The Inter-University Semiconductor Research Center, Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, Republic of Korea; Ministry of Science and ICT, Sejong, Republic of Korea
21
Maurya R, Mahapatra S, Dutta MK, Singh VP, Karnati M, Sahu G, Pandey NN. Skin cancer detection through attention guided dual autoencoder approach with extreme learning machine. Sci Rep 2024; 14:17785. [PMID: 39090261] [PMCID: PMC11294626] [DOI: 10.1038/s41598-024-68749-1]
Abstract
Skin cancer is a lethal disease, and its early detection plays a pivotal role in preventing its spread to other body organs and tissues. Artificial Intelligence (AI)-based automated methods can play a significant role in its early detection. This study presents an AI-based novel approach, termed 'DualAutoELM', for the effective identification of various types of skin cancers. The proposed method leverages a network of autoencoders, comprising two distinct autoencoders: the spatial autoencoder and the FFT (Fast Fourier Transform) autoencoder. The spatial autoencoder specializes in learning spatial features within input lesion images, whereas the FFT autoencoder learns to capture textural and distinguishing frequency patterns within transformed input skin lesion images through the reconstruction process. The use of attention modules at various levels within the encoder part of these autoencoders significantly improves their discriminative feature learning capabilities. An Extreme Learning Machine (ELM), a single-hidden-layer feedforward network, is trained to classify skin malignancies using the features recovered from the bottleneck layers of these autoencoders. The 'HAM10000' and 'ISIC-2017' are two publicly available datasets used to thoroughly assess the suggested approach. The experimental findings demonstrate the accuracy and robustness of the proposed technique, with AUC, precision, and accuracy values for the 'HAM10000' dataset being 0.98, 97.68% and 97.66%, and for the 'ISIC-2017' dataset being 0.95, 86.75% and 86.68%, respectively. This study highlights the potential of the suggested approach for accurate detection of skin cancer.
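For readers unfamiliar with ELMs, a minimal sketch of the general idea (not the paper's implementation; the data, sizes, and activation here are invented): the input-to-hidden weights are random and frozen, and only the output weights are fitted, via a single least-squares solve.

```python
import numpy as np

# Minimal Extreme Learning Machine sketch: random fixed hidden layer plus a
# least-squares solve for the output weights (the only trained parameters).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))              # stand-in for autoencoder features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary labels

n_hidden = 64
W = rng.normal(size=(8, n_hidden))         # random input->hidden weights (frozen)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                     # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit output weights only
acc = np.mean((H @ beta > 0.5) == (y > 0.5))
print(f"training accuracy: {acc:.2f}")
```

Because training reduces to one linear solve, ELMs are fast to fit, which is part of their appeal as a classifier head on top of learned features.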
Affiliation(s)
- Ritesh Maurya: Amity Centre for Artificial Intelligence, Amity University, Noida, India
- Satyajit Mahapatra: Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
- Vibhav Prakash Singh: Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology Allahabad, Allahabad, India
- Mohan Karnati: Computer Science and Engineering Department, National Institute of Technology Raipur, Chhattisgarh, 492010, India
- Geet Sahu: Amity Centre for Artificial Intelligence, Amity University, Noida, India
- Nageshwar Nath Pandey: Department of Computer Science and Engineering, ITER, Siksha 'O' Anusandhan, Bhubaneswar, Odisha, India
22
Lyakhova UA, Lyakhov PA. Systematic review of approaches to detection and classification of skin cancer using artificial intelligence: Development and prospects. Comput Biol Med 2024; 178:108742. [PMID: 38875908] [DOI: 10.1016/j.compbiomed.2024.108742]
Abstract
In recent years, there has been a significant improvement in the accuracy of pigmented skin lesion classification using artificial intelligence algorithms, and intelligent analysis and classification systems can significantly outperform the visual diagnostic methods used by dermatologists and oncologists. However, the application of such systems in clinical practice is severely limited by a lack of generalizability and the risk of misclassification. Successful implementation of artificial intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as of promising areas for further research. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. For the study, 10,589 scientific research and review articles were retrieved from electronic scientific publishers, of which 171 articles were included in the presented systematic review. All selected articles are organized by the neural network algorithms they propose, from machine learning to multimodal intelligent architectures, and are described in the corresponding sections of the manuscript. The review explores automated skin cancer recognition systems ranging from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, vision transformers (ViT), and generative and spiking neural networks. In addition, as a result of the analysis, future research directions, prospects, and the potential for further development of automated neural network systems for classifying pigmented skin lesions are discussed.
Affiliation(s)
- U A Lyakhova
- Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia
- P A Lyakhov
- Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia; North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355017, Stavropol, Russia

23
Atwal K. Artificial intelligence in clinical nutrition and dietetics: A brief overview of current evidence. Nutr Clin Pract 2024; 39:736-742. [PMID: 38591653] [DOI: 10.1002/ncp.11150]
Abstract
The rapid surge in artificial intelligence (AI) has dominated technological innovation in today's society. As experts begin to understand its potential, a spectrum of opportunities could yield a remarkable revolution. In healthcare, this upsurge could transform clinical interventions and outcomes, but it risks dehumanization and increased unethical practices. The field of clinical nutrition and dietetics is no exception. This article finds a multitude of developments underway, including the use of AI for malnutrition screening; predicting clinical outcomes, such as disease onset, and clinical risks, such as drug interactions; aiding interventions, such as estimating nutrient intake; applying precision nutrition, such as measuring postprandial glycemic response; and supporting workflow through chatbots trained on natural language models. Although the opportunity and scalability of AI are incalculably attractive, especially in the face of scarce healthcare resources, the threats cannot be ignored. The risk of malpractice and lack of accountability are among the main concerns; as such, the healthcare professional's responsibility remains paramount. The data used to train AI models could be biased, which could compromise the quality of care for vulnerable or minority patient groups. Standardized AI-development protocols, benchmarked to care recommendations and subjected to rigorous large-scale validation, are required to maximize application across different settings. AI could overturn the healthcare landscape, and this article skims the surface of its potential in clinical nutrition and dietetics.
Affiliation(s)
- Kiranjit Atwal
- Department of Nutritional Sciences, King's College London, London, UK
- School of Health Professions, University of Plymouth, Plymouth, UK

24
Imran M, Akram MU, Salam AA. Transformer-Based Skin Carcinoma Classification using Histopathology Images via Incremental Learning. 2024 14th International Conference on Pattern Recognition Systems (ICPRS); 2024:1-7. [DOI: 10.1109/icprs62101.2024.10677812]
Affiliation(s)
- Muhammad Imran
- National University of Sciences and Technology, Dept. of Mechatronics Engr., Islamabad, Pakistan
- Muhammad Usman Akram
- National University of Sciences and Technology, Dept. of Comp. & Software Engr., Islamabad, Pakistan
- Anum Abdul Salam
- National University of Sciences and Technology, Dept. of Comp. & Software Engr., Islamabad, Pakistan

25
Semerci ZM, Toru HS, Çobankent Aytekin E, Tercanlı H, Chiorean DM, Albayrak Y, Cotoi OS. The Role of Artificial Intelligence in Early Diagnosis and Molecular Classification of Head and Neck Skin Cancers: A Multidisciplinary Approach. Diagnostics (Basel) 2024; 14:1477. [PMID: 39061614] [PMCID: PMC11276319] [DOI: 10.3390/diagnostics14141477]
Abstract
Cancer remains a significant global health concern, with increasing genetic and metabolic irregularities linked to its onset. Among various forms of cancer, skin cancer, including squamous cell carcinoma, basal cell carcinoma, and melanoma, is on the rise worldwide, often triggered by ultraviolet (UV) radiation. The propensity of skin cancer to metastasize highlights the importance of early detection for successful treatment. This narrative review explores the evolving role of artificial intelligence (AI) in diagnosing head and neck skin cancers from both radiological and pathological perspectives. In the past two decades, AI has made remarkable progress in skin cancer research, driven by advances in computational capabilities, digitalization of medical images, and radiomics data. AI has shown significant promise in image-based diagnosis across various medical domains. In dermatology, AI has played a pivotal role in refining diagnostic and treatment strategies, including genomic risk assessment. This technology offers substantial potential to aid primary clinicians in improving patient outcomes. Studies have demonstrated AI's effectiveness in identifying skin lesions, categorizing them, and assessing their malignancy, contributing to earlier interventions and better prognosis. The rising incidence and mortality rates of skin cancer, coupled with the high cost of treatment, emphasize the need for early diagnosis. Further research and integration of AI into clinical practice are warranted to maximize its benefits in skin cancer diagnosis and treatment.
Affiliation(s)
- Zeliha Merve Semerci
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Akdeniz University, 07070 Antalya, Turkey
- Havva Serap Toru
- Department of Pathology, Faculty of Medicine, Akdeniz University, 07070 Antalya, Turkey
- Hümeyra Tercanlı
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Akdeniz University, 07070 Antalya, Turkey
- Diana Maria Chiorean
- Department of Pathology, County Clinical Hospital of Targu Mures, 540072 Targu Mures, Romania
- Department of Pathophysiology, "George Emil Palade" University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 38 Gheorghe Marinescu Street, 540142 Targu Mures, Romania
- Yalçın Albayrak
- Department of Electric and Electronic Engineering, Faculty of Engineering, Akdeniz University, 07010 Antalya, Turkey
- Ovidiu Simion Cotoi
- Department of Pathology, County Clinical Hospital of Targu Mures, 540072 Targu Mures, Romania
- Department of Pathophysiology, "George Emil Palade" University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 38 Gheorghe Marinescu Street, 540142 Targu Mures, Romania

26
Xu C, Wu J, Zhang F, Freer J, Zhang Z, Cheng Y. A deep image classification model based on prior feature knowledge embedding and application in medical diagnosis. Sci Rep 2024; 14:13244. [PMID: 38853158] [PMCID: PMC11163012] [DOI: 10.1038/s41598-024-63818-x]
Abstract
To address image classification problems with insignificant morphological structure, strong target correlation, and low signal-to-noise ratio, a deep learning model combining ResNet with a Radial Basis Probabilistic Neural Network (RBPNN) and prior feature knowledge embedding is proposed. ResNet50 serves as the visual modeling network; a feature pyramid and self-attention mechanism extract appearance and semantic features at multiple scales and associate and enhance local and global features. To account for the diversity of category features, channel cosine-similarity attention and a dynamic C-means clustering algorithm select representative sample features from subsets of each category to implicitly express prior category feature knowledge, which is then used as the kernel centers of radial basis probabilistic neurons (RBPN), realizing the embedding of diverse prior feature knowledge. In the RBPNN pattern aggregation layer, the outputs of the RBPN are selectively summed according to the category of their kernel centers, that is, subcategory features are combined into category features, and the image is finally classified with Softmax. The functional modules of the proposed method are designed specifically for these image characteristics: they highlight the significance of local and structural features, form non-convex decision regions, and reduce the requirements on the completeness of the sample set. Applying the method to medical image classification, experiments on a public brain tumor MRI dataset and a clinical cardiac ultrasound dataset reached accuracies of 85.82% and 83.92%, respectively, a significant improvement over three mainstream image classification models.
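The core RBPNN idea in this abstract, per-class kernel centers selected by clustering with RBF responses aggregated per class, can be sketched as follows. Plain k-means stands in for the paper's dynamic C-means and similarity-based selection, and all data, dimensions, and the kernel width are synthetic illustrative choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_centers(X, k, iters=10):
    """Plain Lloyd's k-means; returns k cluster centers (a simplified
    stand-in for the paper's dynamic C-means center selection)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

# Toy two-class data standing in for deep feature vectors.
X0 = rng.normal(loc=-2.0, size=(100, 8))
X1 = rng.normal(loc=+2.0, size=(100, 8))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Per-class kernel centers act as radial basis probabilistic neurons.
k, sigma = 3, 2.0
centers = [kmeans_centers(X[y == c], k) for c in (0, 1)]

def class_scores(x):
    # Sum RBF responses within each class (the "pattern aggregation layer"),
    # then normalize with a softmax over the class aggregates.
    sums = [np.exp(-((x - C) ** 2).sum(-1) / (2 * sigma**2)).sum() for C in centers]
    e = np.exp(sums - np.max(sums))
    return e / e.sum()

pred = np.array([np.argmax(class_scores(x)) for x in X])
print(f"accuracy on toy data: {np.mean(pred == y):.2f}")
```

Summing kernel responses per class before the softmax is what lets subcategory structure (multiple centers per class) combine into a single category score, mirroring the aggregation step the abstract describes.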
Affiliation(s)
- Chen Xu
- School of Computer Science, Fudan University, Shanghai, China
- Jiangxing Wu
- School of Computer Science, Fudan University, Shanghai, China
- Fan Zhang
- School of Computer Science, Fudan University, Shanghai, China
- Jonathan Freer
- School of Computer Science, University of Birmingham, Birmingham, UK
- Zhongqun Zhang
- School of Computer Science, University of Birmingham, Birmingham, UK
- Yihua Cheng
- School of Computer Science, University of Birmingham, Birmingham, UK

27
Primiero CA, Rezze GG, Caffery LJ, Carrera C, Podlipnik S, Espinosa N, Puig S, Janda M, Soyer HP, Malvehy J. A Narrative Review: Opportunities and Challenges in Artificial Intelligence Skin Image Analyses Using Total Body Photography. J Invest Dermatol 2024; 144:1200-1207. [PMID: 38231164] [DOI: 10.1016/j.jid.2023.11.007]
Abstract
Artificial intelligence (AI) algorithms for skin lesion classification have reported accuracy on par with, and in some cases exceeding, that of expert dermatologists in experimental settings. However, the majority of algorithms do not reflect the real-world clinical approach, in which skin phenotype and clinical background information are considered. We review the current state of AI for skin lesion classification and present opportunities and challenges when it is applied to total body photography (TBP). AI in TBP analysis presents opportunities for intrapatient assessment of skin phenotype and holistic risk assessment by incorporating patient-level metadata, although challenges remain for protecting patient privacy during algorithm development and improving explainable AI methods.
Affiliation(s)
- Clare A Primiero
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Australia
- Gisele Gargantini Rezze
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Liam J Caffery
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Australia; Centre of Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Centre for Online Health, The University of Queensland, Brisbane, Australia
- Cristina Carrera
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Medicine Department, University of Barcelona, Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain
- Sebastian Podlipnik
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain
- Natalia Espinosa
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Susana Puig
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Medicine Department, University of Barcelona, Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain
- Monika Janda
- Centre of Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia
- H Peter Soyer
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Australia; Dermatology Department, Princess Alexandra Hospital, Brisbane, Australia
- Josep Malvehy
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Medicine Department, University of Barcelona, Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain

28
Andersson E, Hult J, Troein C, Stridh M, Sjögren B, Pekar-Lukacs A, Hernandez-Palacios J, Edén P, Persson B, Olariu V, Malmsjö M, Merdasa A. Facilitating clinically relevant skin tumor diagnostics with spectroscopy-driven machine learning. iScience 2024; 27:109653. [PMID: 38680659] [PMCID: PMC11053315] [DOI: 10.1016/j.isci.2024.109653]
Abstract
In the dawning era of artificial intelligence (AI), health care stands to undergo a significant transformation with the increasing digitalization of patient data. Digital imaging, in particular, will serve as an important platform for AI to aid decision making and diagnostics. A growing number of studies demonstrate the potential of automatic pre-surgical skin tumor delineation, which could have tremendous impact on clinical practice. However, current methods rely on having ground truth images in which tumor borders are already identified, which is not clinically possible. We report a novel approach where hyperspectral images provide spectra from small regions representing healthy tissue and tumor, which are used to generate prediction maps using artificial neural networks (ANNs), after which a segmentation algorithm automatically identifies the tumor borders. This circumvents the need for ground truth images, since an ANN model is trained with data from each individual patient, representing a more clinically relevant approach.
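The per-patient workflow described above (train a small ANN on spectra from known healthy and tumor regions, then generate a prediction map over the whole image) can be sketched as follows. The hyperspectral cube, patch locations, and single-neuron model are synthetic stand-ins under stated assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hyperspectral cube: 32x32 pixels, 20 spectral bands. A disc in the
# centre stands in for the tumor (shifted spectra); all values are synthetic.
H, W, B = 32, 32, 20
yy, xx = np.mgrid[:H, :W]
tumor_mask = (yy - 16) ** 2 + (xx - 16) ** 2 < 8 ** 2
cube = rng.normal(size=(H, W, B)) + tumor_mask[..., None] * 1.5

# Per-patient training data: small patches known to be healthy vs. tumor.
healthy = cube[:4, :4].reshape(-1, B)        # corner patch, healthy
tumor = cube[14:18, 14:18].reshape(-1, B)    # centre patch, tumor
X = np.vstack([healthy, tumor])
y = np.array([0] * len(healthy) + [1] * len(tumor))

# Minimal single-neuron "ANN" (logistic regression) trained by gradient descent.
w, b = np.zeros(B), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

# Prediction map over the whole image; thresholding yields the tumor region,
# whose boundary would then be refined by a segmentation algorithm.
prob_map = (1 / (1 + np.exp(-(cube.reshape(-1, B) @ w + b)))).reshape(H, W)
seg = prob_map > 0.5
print(f"pixel agreement with synthetic ground truth: {np.mean(seg == tumor_mask):.2f}")
```

Because the model is trained only on this patient's own spectra, no cross-patient ground-truth delineations are needed, which is the clinical point the abstract makes.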
Affiliation(s)
- Emil Andersson
- Centre for Environmental and Climate Science, Lund University, Lund, Sweden
- Jenny Hult
- Department of Clinical Sciences Lund, Ophthalmology, Lund University, Lund, Sweden
- Carl Troein
- Centre for Environmental and Climate Science, Lund University, Lund, Sweden
- Magne Stridh
- Department of Clinical Sciences Lund, Ophthalmology, Lund University, Lund, Sweden
- Benjamin Sjögren
- Department of Clinical Sciences Lund, Ophthalmology, Lund University, Lund, Sweden
- Patrik Edén
- Centre for Environmental and Climate Science, Lund University, Lund, Sweden
- Bertil Persson
- Department of Dermatology, Skåne University Hospital, Lund, Sweden
- Victor Olariu
- Centre for Environmental and Climate Science, Lund University, Lund, Sweden
- Malin Malmsjö
- Department of Clinical Sciences Lund, Ophthalmology, Lund University, Lund, Sweden
- Aboma Merdasa
- Department of Clinical Sciences Lund, Ophthalmology, Lund University, Lund, Sweden

29
Winkler JK, Kommoss KS, Toberer F, Enk A, Maul LV, Navarini AA, Hudson J, Salerni G, Rosenberger A, Haenssle HA. Performance of an automated total body mapping algorithm to detect melanocytic lesions of clinical relevance. Eur J Cancer 2024; 202:114026. [PMID: 38547776] [DOI: 10.1016/j.ejca.2024.114026]
Abstract
IMPORTANCE: Total body photography for skin cancer screening is a well-established tool allowing documentation and follow-up of the entire skin surface. Artificial intelligence-based systems are increasingly applied for automated lesion detection and diagnosis.
DESIGN AND PATIENTS: In this prospective observational international multicentre study, experienced dermatologists performed skin cancer screenings and identified clinically relevant melanocytic lesions (CRML, requiring biopsy or observation). Additionally, patients received 2D automated total body mapping (ATBM) with automated lesion detection (ATBM master, FotoFinder Systems GmbH). The primary endpoint was the percentage of CRML detected by the bodyscan software. Secondary endpoints included the percentage of correctly identified "new" and "changed" lesions during follow-up examinations.
RESULTS: At baseline, dermatologists identified 1075 CRML in 236 patients, of which 999 (92.9%) were also detected by the automated software. During follow-up examinations, dermatologists identified 334 CRML in 55 patients, with 323 (96.7%) also detected by ATBM with automated lesion detection. Moreover, all new (n = 13) or changed CRML (n = 24) during follow-up were detected by the software. The average time requirement per baseline examination was 14.1 min (95% CI [12.8-15.5]). Subgroup analysis of undetected lesions revealed either technical reasons (e.g. covering by clothing or hair) or lesion-specific reasons (e.g. hypopigmentation, palmoplantar sites).
CONCLUSIONS: ATBM with lesion detection software correctly detected the vast majority of CRML, and all new or changed CRML during follow-up examinations, in a favourable amount of time. Our prospective international study underlines that automated lesion detection in TBP images is feasible, which is of relevance for developing AI-based skin cancer screening.
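The detection rates reported in the study summary follow directly from the stated counts; a quick check:

```python
# Counts taken from the study summary above.
baseline_detected, baseline_total = 999, 1075
followup_detected, followup_total = 323, 334

baseline_rate = 100 * baseline_detected / baseline_total
followup_rate = 100 * followup_detected / followup_total
print(f"baseline detection rate:  {baseline_rate:.1f}%")   # 92.9%
print(f"follow-up detection rate: {followup_rate:.1f}%")   # 96.7%
```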
Affiliation(s)
- Julia K Winkler
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Ferdinand Toberer
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Alexander Enk
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Lara V Maul
- Department of Dermatology, University Hospital of Basel, Basel, Switzerland
- Jeremy Hudson
- North Queensland Skin Centre, Townsville, Queensland, Australia
- Gabriel Salerni
- Department of Dermatology, Hospital Provincial del Centenario de Rosario - Universidad Nacional de Rosario, Rosario, Argentina
- Albert Rosenberger
- Institute of Genetic Epidemiology, University Medical Center, Georg-August University of Goettingen, Goettingen, Germany
- Holger A Haenssle
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany

30
Mahmoud NM, Soliman AM. Early automated detection system for skin cancer diagnosis using artificial intelligent techniques. Sci Rep 2024; 14:9749. [PMID: 38679633] [PMCID: PMC11056372] [DOI: 10.1038/s41598-024-59783-0]
Abstract
Skin cancer is among the most widespread and dangerous cancers worldwide, and early detection can reduce mortality. Traditional methods for skin cancer detection are painful, time-consuming, and expensive, and may cause the disease to spread. Dermoscopy is used for noninvasive diagnosis of skin cancer. Artificial intelligence (AI) plays a vital role in disease diagnosis, especially in the biomedical engineering field, and AI-based automated detection systems reduce the complications of traditional methods and can improve the skin cancer diagnosis rate. In this paper, an automated early detection system for skin cancer dermoscopic images using artificial intelligence is presented. Adaptive snake (AS) and region growing (RG) algorithms are used for automated segmentation and compared with each other; the results show that AS (accuracy = 96%) is more accurate and efficient than the RG algorithm (accuracy = 90%). Artificial neural network (ANN) and support vector machine (SVM) algorithms are used for automated classification and compared with each other. The proposed system with the ANN algorithm shows high accuracy (94%), precision (96%), specificity (95.83%), sensitivity (recall) (92.30%), and F1-score (0.94). The proposed system is easy to use and time-saving, enables early detection of skin cancer for patients, and has high efficiency.
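A minimal sketch of the region-growing (RG) segmentation idea compared in this paper: flood-fill from a seed pixel, absorbing neighbours whose intensity is close to the seed's. The 4-connectivity, tolerance, and toy image are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.2):
    """Minimal region-growing segmentation: flood-fill from `seed`,
    absorbing 4-connected neighbours within `tol` of the seed intensity."""
    h, w = img.shape
    seed_val = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(img[nr, nc] - seed_val) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Synthetic "lesion": a dark 7x7 square on a bright background.
img = np.ones((20, 20))
img[5:12, 5:12] = 0.1
lesion = region_grow(img, seed=(8, 8))
print(f"segmented pixels: {lesion.sum()}")  # 49 (the 7x7 lesion)
```

Active-contour ("snake") methods instead evolve a boundary curve toward image edges, which is why the two approaches can differ in accuracy on fuzzy lesion borders.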
Affiliation(s)
- Nourelhoda M Mahmoud
- Biomedical Engineering Department, Faculty of Engineering, Minia University, Minya, Egypt
- Ahmed M Soliman
- Biomedical Engineering Department, Faculty of Engineering, Helwan University, Cairo, Egypt

31
Patel NC. How might the rapid development of artificial intelligence affect the delivery of UK Defence healthcare? BMJ Mil Health 2024:e002682. [PMID: 38604755] [DOI: 10.1136/military-2024-002682]
Abstract
Artificial intelligence (AI) has developed greatly and is now at the centre of technological advancements. Current and recent military conflicts have highlighted the evolving complexity of warfare with rapid technological change at the heart of it. AI aims to understand and design systems that show signs of intelligence and are able to learn by deriving knowledge from data. There have been multiple AI-related developments in the medical field in areas such as diagnostics, triage, wearable technology and training with direct translations that may benefit UK Defence healthcare. With the increasing use of AI in society and medical practice, it is important to consider whether AI can be trustworthy and has any legal implications, and evaluate its use through an ethical lens. In conclusion, the rapid development of AI presents exciting opportunities for UK Defence to enhance its healthcare delivery. This paper was selected as the BMJ Military Health Essay Prize winner at the Royal Society of Medicine Colt Foundation Meeting 2023.
32
Lindholm V, Annala L, Koskenmies S, Pitkänen S, Isoherranen K, Järvinen A, Jeskanen L, Pölönen I, Ranki A, Raita-Hakola A, Salmivuori M. Discriminating basal cell carcinoma and Bowen's disease from benign skin lesions with a 3D hyperspectral imaging system and convolutional neural networks. Skin Res Technol 2024; 30:e13677. [PMID: 38558486] [PMCID: PMC10982671] [DOI: 10.1111/srt.13677]
Affiliation(s)
- Vivian Lindholm
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Leevi Annala
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Department of Food and Nutrition, University of Helsinki, Helsinki, Finland
- Department of Computer Science, University of Helsinki, Helsinki, Finland
- Sari Koskenmies
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Sari Pitkänen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Kirsi Isoherranen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Anna Järvinen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Leila Jeskanen
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Ilkka Pölönen
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Annamari Ranki
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Mari Salmivuori
- Department of Dermatology and Allergology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland

33
Fliorent R, Fardman B, Podwojniak A, Javaid K, Tan IJ, Ghani H, Truong TM, Rao B, Heath C. Artificial intelligence in dermatology: advancements and challenges in skin of color. Int J Dermatol 2024; 63:455-461. [PMID: 38444331] [DOI: 10.1111/ijd.17076]
Abstract
Artificial intelligence (AI) uses algorithms and large language models in computers to simulate human-like problem-solving and decision-making. AI programs have recently acquired widespread popularity in the field of dermatology through the application of online tools in the assessment, diagnosis, and treatment of skin conditions. A literature review was conducted using PubMed and Google Scholar analyzing recent literature (from the last 10 years through October 2023) to evaluate current AI programs in use for dermatologic purposes, identifying challenges in this technology when applied to skin of color (SOC), and proposing future steps to enhance the role of AI in dermatologic practice. Challenges surrounding AI and its application to SOC stem from the underrepresentation of SOC in datasets and issues with image quality and standardization. With these existing issues, current AI programs inevitably do worse at identifying lesions in SOC. Additionally, only 30% of the programs identified in this review had data reported on their use in dermatology, specifically in SOC. Significant development of these applications is required for the accurate depiction of darker skin tone images in datasets. More research is warranted in the future to better understand the efficacy of AI in aiding diagnosis and treatment options for SOC patients.
Affiliation(s)
- Brian Fardman
- Rowan-Virtua School of Osteopathic Medicine, Stratford, NJ, USA
- Kiran Javaid
- Rowan-Virtua School of Osteopathic Medicine, Stratford, NJ, USA
- Isabella J Tan
- Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Hira Ghani
- Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Thu M Truong
- Center for Dermatology, Rutgers Robert Wood Johnson, Somerset, NJ, USA
- Babar Rao
- Center for Dermatology, Rutgers Robert Wood Johnson, Somerset, NJ, USA
- Candrice Heath
- Lewis Katz School of Medicine at Temple University, Philadelphia, PA, USA

34
Naeem A, Anees T. DVFNet: A deep feature fusion-based model for the multiclassification of skin cancer utilizing dermoscopy images. PLoS One 2024; 19:e0297667. [PMID: 38507348] [PMCID: PMC10954125] [DOI: 10.1371/journal.pone.0297667]
Abstract
Skin cancer is a common cancer affecting millions of people annually. Skin cells that grow in unusual patterns are a sign of this invasive disease; the cells then spread to other organs and tissues through the lymph nodes and destroy them. Lifestyle changes and increased solar exposure contribute to the rising incidence of skin cancer, and early identification and staging are essential due to the high mortality rate associated with the disease. In this study, we present a deep learning-based method named DVFNet for the detection of skin cancer from dermoscopy images. Images are first pre-processed using anisotropic diffusion to remove artifacts and noise, which enhances image quality. A combination of the VGG19 architecture and the Histogram of Oriented Gradients (HOG) is used for discriminative feature extraction, and SMOTE-Tomek is used to resolve the class imbalance in the publicly available ISIC 2019 dataset. Segmentation is used to pinpoint areas of significantly damaged skin cells. A feature vector map is created by combining the HOG and VGG19 features, and multiclass classification is accomplished by a CNN operating on these feature vector maps. DVFNet achieves an accuracy of 98.32% on the ISIC 2019 dataset, and an analysis of variance (ANOVA) statistical test is used to validate the model's accuracy. Healthcare experts can utilize the DVFNet model to detect skin cancer at an early clinical stage.
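The feature-level fusion step, concatenating a HOG descriptor with deep CNN features into one vector, can be sketched as follows. The single-histogram "HOG" and the random stand-in for VGG19 features are illustrative assumptions (real HOG uses cells and blocks, and the deep features would come from the pretrained network, not random values).

```python
import numpy as np

rng = np.random.default_rng(3)

def simple_hog(img, n_bins=9):
    """Tiny HOG-style descriptor: one magnitude-weighted gradient-orientation
    histogram over the whole image, L2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

# Toy dermoscopy image and a stand-in for VGG19 deep features.
img = rng.random((64, 64))
hog_vec = simple_hog(img)
vgg_vec = rng.normal(size=128)

# Feature-level fusion: concatenate into one vector for the downstream classifier.
fused = np.concatenate([hog_vec, vgg_vec])
print(f"fused feature vector length: {fused.size}")  # 9 + 128 = 137
```

Fusing a hand-crafted texture descriptor with learned deep features is a common way to let the classifier see both local edge statistics and high-level semantics.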
Affiliation(s)
- Ahmad Naeem: Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Tayyaba Anees: Department of Software Engineering, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan

35
Myslicka M, Kawala-Sterniuk A, Bryniarska A, Sudol A, Podpora M, Gasz R, Martinek R, Kahankova Vilimkova R, Vilimek D, Pelc M, Mikolajewski D. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch Dermatol Res 2024; 316:99. [PMID: 38446274] [DOI: 10.1007/s00403-024-02828-1]
Abstract
This paper presents the most current and innovative solutions applying modern digital image processing methods for the purpose of skin cancer diagnostics. Skin cancer is one of the most common types of cancer: in the USA alone, an estimated one in five people will develop skin cancer, and this trend is constantly increasing. Implementation of new, non-invasive methods plays a crucial role in both the identification and the prevention of skin cancer, and early diagnosis and treatment are needed to decrease the number of deaths due to this disease. The paper also contains information on the most common skin cancer types and on mortality and epidemiological data for Poland, Europe, Canada, and the USA. It covers the most efficient and modern image recognition methods based on artificial intelligence currently applied for diagnostic purposes, presenting both professional, sophisticated solutions and inexpensive ones. This is a review paper covering the period from 2017 to 2022 for both solutions and statistics. The authors focused on the latest data, mostly because rapid technological development has increased the number of new methods, which positively affects diagnosis and prognosis.
Affiliation(s)
- Maria Myslicka: Faculty of Medicine, Wroclaw Medical University, J. Mikulicza-Radeckiego 5, 50-345 Wroclaw, Poland
- Aleksandra Kawala-Sterniuk: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758 Opole, Poland
- Anna Bryniarska: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758 Opole, Poland
- Adam Sudol: Faculty of Natural Sciences and Technology, University of Opole, Dmowskiego 7-9, 45-368 Opole, Poland
- Michal Podpora: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758 Opole, Poland
- Rafal Gasz: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758 Opole, Poland
- Radek Martinek: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758 Opole, Poland; Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 70800 Ostrava, Czech Republic
- Radana Kahankova Vilimkova: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758 Opole, Poland; Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 70800 Ostrava, Czech Republic
- Dominik Vilimek: Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 70800 Ostrava, Czech Republic
- Mariusz Pelc: Institute of Computer Science, University of Opole, Oleska 48, 45-052 Opole, Poland; School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, SE10 9LS, London, UK
- Dariusz Mikolajewski: Institute of Computer Science, Kazimierz Wielki University in Bydgoszcz, ul. Kopernika 1, 85-074 Bydgoszcz, Poland; Neuropsychological Research Unit, 2nd Clinic of the Psychiatry and Psychiatric Rehabilitation, Medical University in Lublin, Gluska 1, 20-439 Lublin, Poland

36
Lehner GM, Gockeln L, Naber BM, Thamm JR, Schuh S, Duttler G, Rottenkolber A, Hartmann D, Kramer F, Welzel J. Differences in the annotation between facial images and videos for training an artificial intelligence for skin type determination. Skin Res Technol 2024; 30:e13632. [PMID: 38407411] [PMCID: PMC10895547] [DOI: 10.1111/srt.13632]
Abstract
BACKGROUND The Grand-AID research project, a collaboration of GRANDEL-The Beautyness Company, the Department of Dermatology at Augsburg University Hospital, and the Chair of IT Infrastructure for Translational Medical Research at the University of Augsburg, is developing a digital skin consultation tool that uses artificial intelligence (AI) to analyze the user's skin, perform a personalized skin analysis, and recommend a customized skin care routine. Training the AI requires annotation of various skin features on facial images. The central question is whether videos are better suited than static images for assessing dynamic parameters such as wrinkles and elasticity. A pilot study was therefore carried out in which annotations on images and videos were compared. MATERIALS AND METHODS Standardized image sequences and a video with facial expressions were taken of 25 healthy volunteers. Four raters with dermatological expertise annotated eight features (wrinkles, redness, shine, pores, pigmentation spots, dark circles, skin sagging, and blemished skin) on a semi-quantitative and a linear scale in a cross-over design to evaluate differences between the image modalities and between the raters. RESULTS In the videos, most parameters tended to be assessed with higher scores than in the images, in some cases significantly so. Furthermore, there were significant differences between the raters. CONCLUSION The study shows significant differences between evaluation based on image analysis and evaluation based on video analysis. In addition, the skin assessments depend on subjective criteria. When training the AI, regular training of the annotating individuals and cross-validation of the annotations are therefore recommended.
Affiliation(s)
- Gabriele Maria Lehner: Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Laura Gockeln: Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Bettina Marie Naber: Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Janis Raphael Thamm: Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Sandra Schuh: Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
- Dennis Hartmann: IT Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
- Frank Kramer: IT Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
- Julia Welzel: Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany

37
Joly-Chevrier M, Nguyen AXL, Liang L, Lesko-Krleza M, Lefrançois P. The State of Artificial Intelligence in Skin Cancer Publications. J Cutan Med Surg 2024; 28:146-152. [PMID: 38323537] [PMCID: PMC11015717] [DOI: 10.1177/12034754241229361]
Abstract
BACKGROUND Artificial intelligence (AI) in skin cancer is a promising research field for assisting physicians and providing support to patients remotely. Physicians' awareness of new developments in AI research is important for defining best practices and the scope of integrating AI-enabled technologies within a clinical setting. OBJECTIVES To analyze the characteristics and trends of AI skin cancer publications from dermatology journals. METHODS AI skin cancer publications were retrieved in June 2022 from the Web of Science. Publications were screened by title, abstract, and keywords to assess eligibility, then fully reviewed and divided between nonmelanoma skin cancer (NMSC), melanoma, and skin cancer studies. The primary measured outcome was the number of citations; the secondary measured outcomes were the articles' general characteristics and features related to AI. RESULTS A total of 168 articles were included: 25 on NMSC, 77 on melanoma, and 66 on skin cancer. The most common types of skin cancers studied were melanoma (134, 79.8%), basal cell carcinoma (61, 36.3%), and squamous cell carcinoma (45, 26.9%). All articles were published between 2000 and 2022, with 49 (29.2%) published in 2021. Original studies that developed or assessed an algorithm predominantly used supervised learning (66, 97.0%) and deep neural networks (42, 67.7%). The most used imaging modalities were standard dermoscopy (76, 45.2%) and clinical images (39, 23.2%). CONCLUSIONS Most publications focused on developing or assessing screening technologies, mainly with deep neural network algorithms. This indicates the pressing need for dermatologists to label or annotate the images used by novel AI systems.
Affiliation(s)
- Laurence Liang: Faculty of Engineering, McGill University, Montreal, QC, Canada
- Michael Lesko-Krleza: Division of Computer Engineering, Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
- Philippe Lefrançois: Division of Dermatology, Department of Medicine, McGill University, Montreal, QC, Canada; Division of Dermatology, Department of Medicine, Jewish General Hospital, Montreal, QC, Canada; Lady Davis Institute for Medical Research, Montreal, QC, Canada

38
Moturi D, Surapaneni RK, Avanigadda VSG. Developing an efficient method for melanoma detection using CNN techniques. J Egypt Natl Canc Inst 2024; 36:6. [PMID: 38407684] [DOI: 10.1186/s43046-024-00210-w]
Abstract
BACKGROUND More and more genetic and metabolic abnormalities are now known to cause cancer, which is typically deadly. Any part of the body may be invaded by cancerous cells, which can be fatal. Skin cancer is one of the most prevalent types of cancer, and its incidence is rising across the globe. Its primary subtypes are squamous and basal cell carcinomas, as well as melanoma, which is clinically aggressive and causes the majority of deaths. Screening for skin cancer is therefore essential. METHODS Deep learning techniques offer a quick and precise way to detect skin cancer. In this research, deep learning models such as MobileNetV2 and DenseNet are used to detect and identify two main kinds of tumors: malignant and benign. The HAM10000 dataset is used, which consists of 10,000 skin lesion images comprising non-melanocytic and melanocytic tumors. The methods are compared and a conclusion is inferred from their performance. RESULTS After model evaluation, the accuracy of MobileNetV2 was 85% and that of the customized CNN was 95%. A web application was developed with a Python framework that provides a graphical user interface to the best-trained model. The interface allows the user to enter patient details and upload a lesion image, which is classified by the trained model as cancerous or non-cancerous. The application also displays the percentage affected by cancer. CONCLUSION Comparing the two techniques, the customized CNN gives higher accuracy for the detection of melanoma.
Affiliation(s)
- Devika Moturi: Department of Computer Science and Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, India
- Ravi Kishan Surapaneni: Department of Computer Science and Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, India

39
Koumaki D, Manios G, Papadakis M, Doxastaki A, Zacharopoulos GV, Katoulis A, Manios A. Color Analysis of Merkel Cell Carcinoma: A Comparative Study with Cherry Angiomas, Hemangiomas, Basal Cell Carcinomas, and Squamous Cell Carcinomas. Diagnostics (Basel) 2024; 14:230. [PMID: 38275477] [PMCID: PMC10814937] [DOI: 10.3390/diagnostics14020230]
Abstract
Merkel cell carcinoma (MCC) is recognized as one of the most malignant skin tumors, and its rarity might explain the limited exploration of digital color studies in this area. The objective of this study was to delineate color alterations in MCCs compared to benign lesions resembling MCC, such as cherry angiomas and hemangiomas, and to other non-melanoma skin cancer lesions, namely basal cell carcinoma (BCC) and squamous cell carcinoma (SCC), using computer-aided digital color analysis. In this retrospective study, clinical images of lesion color and adjacent normal skin from 11 patients with primary MCC, 11 patients with cherry angiomas, 12 patients with hemangiomas, and 12 patients with BCC/SCC (46 patients in total) were analyzed using the RGB (red, green, and blue) and CIE Lab color systems. The Lab color system aided in estimating the Individual Typology Angle (ITA) change in the skin, and these results are documented in the study. The estimation of color components was shown to assist in the differential diagnosis of these lesion types, as there were significant differences in color parameters between MCC and the other categories of skin lesions, namely hemangiomas, common skin carcinomas, and cherry angiomas. Significant differences were observed in the blue channel of RGB (p = 0.003) and the b* parameter of Lab color (p < 0.0001) for MCC versus cherry angiomas. Similarly, the mean a* value of MCC compared with basal cell carcinoma and squamous cell carcinoma showed a statistically significant difference (p < 0.0001). Larger prospective studies are warranted to further validate the clinical application of these findings.
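The Individual Typology Angle mentioned in this abstract is conventionally computed from CIE L*a*b* values as ITA = arctan((L* − 50)/b*) · 180/π. A minimal sketch of that conventional formula (not the authors' own code; `atan2` is used here so a zero b* does not divide by zero):

```python
import math

def individual_typology_angle(L_star, b_star):
    """Individual Typology Angle (degrees) from CIE L*a*b* values.

    Conventional definition: ITA = arctan((L* - 50) / b*) * 180 / pi.
    Higher ITA corresponds to lighter skin; lower or negative ITA to darker skin.
    """
    return math.degrees(math.atan2(L_star - 50.0, b_star))

# Example: a light-skin reading of L* = 70, b* = 20 gives ITA = 45 degrees.
```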
Affiliation(s)
- Dimitra Koumaki: Dermatology Department, University Hospital of Heraklion, 71110 Heraklion, Greece
- Georgios Manios: Department of Computer Science and Biomedical Informatics, University of Thessaly, 35100 Lamia, Greece
- Marios Papadakis: Department of Surgery II, Witten/Herdecke University, Heusnerstrasse 40, 42283 Witten, Germany
- Aikaterini Doxastaki: Dermatology Department, University Hospital of Heraklion, 71110 Heraklion, Greece
- Alexander Katoulis: 2nd Department of Dermatology and Venereology, “Attikon” General University Hospital, Medical School, National and Kapodistrian University of Athens, Rimini 1, Haidari, 12462 Athens, Greece
- Andreas Manios: Plastic Surgery Unit, Surgical Oncology Department, University Hospital of Heraklion, 71110 Heraklion, Greece

40
Jagemann I, Wensing O, Stegemann M, Hirschfeld G. Acceptance of Medical Artificial Intelligence in Skin Cancer Screening: Choice-Based Conjoint Survey. JMIR Form Res 2024; 8:e46402. [PMID: 38214959] [PMCID: PMC10818228] [DOI: 10.2196/46402]
Abstract
BACKGROUND There is great interest in using artificial intelligence (AI) to screen for skin cancer, fueled by a rising incidence of skin cancer and an increasing scarcity of trained dermatologists. AI systems capable of identifying melanoma could save lives, enable immediate access to screenings, and reduce unnecessary care and health care costs. While such AI-based systems are useful from a public health perspective, past research has shown that individual patients are very hesitant about being examined by an AI system. OBJECTIVE The aim of this study was two-fold: (1) to determine the relative importance of the provider (in-person physician, physician via teledermatology, AI, personalized AI), costs of screening (free, 10€, 25€, 40€; 1€=US $1.09), and waiting time (immediate, 1 day, 1 week, 4 weeks) as attributes contributing to patients' choices of a particular mode of skin cancer screening; and (2) to investigate whether sociodemographic characteristics, especially age, were systematically related to participants' individual choices. METHODS A choice-based conjoint analysis was used to examine the acceptance of medical AI for skin cancer screening from the patient's perspective. Participants responded to 12 choice sets, each containing three screening variants, where each variant was described through the attributes of provider, costs, and waiting time. The impacts of sociodemographic characteristics (age, gender, income, job status, and educational background) on the choices were also assessed. RESULTS Of the 383 clicks on the survey link, a total of 126 (32.9%) respondents completed the online survey. The conjoint analysis showed that the three attributes contributed roughly equally to the participants' choices, with provider being the most important attribute. Inspecting the individual part-worths of the conjoint attributes showed that treatment by a physician was the most preferred modality, followed by electronic consultation with a physician and personalized AI; the lowest scores were found for the three AI levels. Concerning the relationship between sociodemographic characteristics and relative importance, only age showed a significant positive association with the importance of the provider attribute (r=0.21, P=.02): younger participants put less importance on the provider than older participants. All other correlations were not significant. CONCLUSIONS This study adds to the growing body of research using choice-based experiments to investigate the acceptance of AI in health contexts. Future studies are needed to explore the reasons why AI is accepted or rejected and whether sociodemographic characteristics are associated with this decision.
Affiliation(s)
- Inga Jagemann: School of Business, University of Applied Sciences and Arts Bielefeld, Bielefeld, Germany
- Ole Wensing: School of Business, University of Applied Sciences and Arts Bielefeld, Bielefeld, Germany
- Manuel Stegemann: School of Business, University of Applied Sciences and Arts Bielefeld, Bielefeld, Germany
- Gerrit Hirschfeld: School of Business, University of Applied Sciences and Arts Bielefeld, Bielefeld, Germany

41
Furriel BCRS, Oliveira BD, Prôa R, Paiva JQ, Loureiro RM, Calixto WP, Reis MRC, Giavina-Bianchi M. Artificial intelligence for skin cancer detection and classification for clinical environment: a systematic review. Front Med (Lausanne) 2024; 10:1305954. [PMID: 38259845] [PMCID: PMC10800812] [DOI: 10.3389/fmed.2023.1305954]
Abstract
Background Skin cancer is one of the most common forms of cancer worldwide, with a significant increase in incidence over the last few decades. Early and accurate detection can result in better prognoses and less invasive treatments for patients. With advances in Artificial Intelligence (AI), tools have emerged that can facilitate diagnosis and classify dermatological images, complementing traditional clinical assessments and applicable where there is a shortage of specialists. Their adoption requires analysis of efficacy, safety, and ethical considerations, as well as consideration of the genetic and ethnic diversity of patients. Objective This systematic review examines research on the detection, classification, and assessment of skin cancer images in clinical settings. Methods We conducted a systematic literature search of PubMed, Scopus, Embase, and Web of Science, encompassing studies published until April 4, 2023. Study selection, data extraction, and critical appraisal were carried out by two independent reviewers, and results were presented through a narrative synthesis. Results The search identified 760 studies across the four databases, from which 18 were selected, focusing on developing, implementing, and validating systems to detect, diagnose, and classify skin cancer in clinical settings. The review covers descriptive analysis, data scenarios, data processing and techniques, study results and perspectives, and physician diversity, accessibility, and participation. Conclusion The application of artificial intelligence in dermatology has the potential to revolutionize early detection of skin cancer. However, it is imperative to validate such systems and collaborate with healthcare professionals to ensure their clinical effectiveness and safety.
Affiliation(s)
- Brunna C. R. S. Furriel: Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil; Electrical, Mechanical and Computer Engineering School, Federal University of Goiás, Goiânia, Brazil; Studies and Researches in Science and Technology Group (GCITE), Federal Institute of Goiás, Goiânia, Brazil
- Bruno D. Oliveira: Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Renata Prôa: Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Joselisa Q. Paiva: Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Rafael M. Loureiro: Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Wesley P. Calixto: Electrical, Mechanical and Computer Engineering School, Federal University of Goiás, Goiânia, Brazil; Studies and Researches in Science and Technology Group (GCITE), Federal Institute of Goiás, Goiânia, Brazil
- Márcio R. C. Reis: Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil; Studies and Researches in Science and Technology Group (GCITE), Federal Institute of Goiás, Goiânia, Brazil

42
Almufareh MF. Unveiling the Spectrum of UV-Induced DNA Damage in Melanoma: Insights From AI-Based Analysis of Environmental Factors, Repair Mechanisms, and Skin Pigment Interactions. IEEE Access 2024; 12:64837-64860. [DOI: 10.1109/access.2024.3395988]
Affiliation(s)
- Maram Fahaad Almufareh: Department of Information Systems, College of Computer and Information Sciences, Jouf University, Al Jouf, Saudi Arabia

43
Kushimo OO, Salau AO, Adeleke OJ, Olaoye DS. Deep learning model to improve melanoma detection in people of color. Arab Journal of Basic and Applied Sciences 2023. [DOI: 10.1080/25765299.2023.2170066]
Affiliation(s)
- Oluwatobi O. Kushimo: Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
- Ayodeji Olalekan Salau: Department of Electrical/Electronics and Computer Engineering, Afe Babalola University, Ado-Ekiti, Nigeria; Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Oladapo J. Adeleke: Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
- Doyinsola S. Olaoye: Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria

44
Hossain MM, Hossain MM, Arefin MB, Akhtar F, Blake J. Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble. Diagnostics (Basel) 2023; 14:89. [PMID: 38201399] [PMCID: PMC10795598] [DOI: 10.3390/diagnostics14010089]
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway to further enhance diagnostic accuracy. This study introduces an approach employing the max voting ensemble technique for robust skin cancer classification on the ISIC 2018 Task 1-2 dataset. We incorporate a range of state-of-the-art, pre-trained deep neural networks, including MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models have been extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages their synergistic capabilities by combining complementary features to further elevate classification performance. Input images undergo preprocessing for model compatibility, and the ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction; these are aggregated using the max voting ensemble technique to yield the final classification, with the majority-voted class serving as the conclusive prediction. Through comprehensive testing on a diverse dataset, our ensemble outperformed the individual models, attaining an accuracy of 93.18% and an AUC score of 0.9320, demonstrating superior diagnostic reliability and accuracy. To ensure generalizability, we also evaluated the proposed method on the HAM10000 dataset. Our ensemble method delivers a robust, reliable, and effective tool for the classification of skin cancer. By utilizing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes.
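The max (hard) voting step this abstract describes reduces to taking the majority class across the per-model predictions for each image. A minimal sketch of that aggregation, with illustrative labels that are not from the paper's code:

```python
from collections import Counter

def max_vote(predictions):
    """Majority (hard) vote over per-model class predictions for one image.

    predictions: list of class labels, one per ensemble member.
    Ties are broken by first occurrence, as Counter.most_common preserves
    insertion order among equal counts (Python 3.7+).
    """
    return Counter(predictions).most_common(1)[0][0]

# Five models vote on one lesion image; the majority class wins.
votes = ["malignant", "benign", "malignant", "malignant", "benign"]
final = max_vote(votes)  # majority class: "malignant"
```

In the paper's setting the ten pre-trained networks would each supply one label per image; weighted or soft (probability-averaging) voting are common variants of the same idea.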
Affiliation(s)
- Md. Mamun Hossain: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Md. Moazzem Hossain: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Most. Binoee Arefin: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Fahima Akhtar: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- John Blake: School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan

45
Zhang J, Zhong F, He K, Ji M, Li S, Li C. Recent Advancements and Perspectives in the Diagnosis of Skin Diseases Using Machine Learning and Deep Learning: A Review. Diagnostics (Basel) 2023; 13:3506. [PMID: 38066747] [PMCID: PMC10706240] [DOI: 10.3390/diagnostics13233506]
Abstract
OBJECTIVE Skin diseases constitute a widespread health concern, and the application of machine learning and deep learning algorithms has been instrumental in improving diagnostic accuracy and treatment effectiveness. This paper provides a comprehensive review of existing research on the use of machine learning and deep learning in skin disease diagnosis, with a particular focus on recently and widely used deep learning methods; present challenges and constraints are also analyzed and possible solutions proposed. METHODS We collected works from the literature, sourced from distinguished databases including IEEE, Springer, Web of Science, and PubMed, with a particular emphasis on advancements of the most recent five years. From the extensive corpus of available research, twenty-nine articles on the segmentation of dermatological images and forty-five articles on their classification were incorporated into this review. The articles were systematically categorized into two classes based on the computational algorithms utilized, traditional machine learning algorithms and deep learning algorithms, and an in-depth comparative analysis was carried out based on the methodologies employed and their corresponding outcomes. CONCLUSIONS Present research outcomes highlight the greater effectiveness of deep learning methods over traditional machine learning techniques in dermatological diagnosis. Nevertheless, there remains significant scope for improvement, especially in the accuracy of algorithms, and the challenges associated with the availability of diverse datasets, the generalizability of segmentation and classification models, and the interpretability of models continue to be pressing issues. Moreover, the focus of future research should shift appropriately: a significant amount of existing work concentrates on melanoma, so the field of pigmented dermatology research needs to be broadened. These insights emphasize the potential of deep learning in dermatological diagnosis and highlight directions on which future work should focus.
Affiliation(s)
- Junpeng Zhang: College of Electrical Engineering, Sichuan University, Chengdu 610017, China
- Fan Zhong: College of Electrical Engineering, Sichuan University, Chengdu 610017, China
- Kaiqiao He: Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi’an 710032, China
- Mengqi Ji: College of Electrical Engineering, Sichuan University, Chengdu 610017, China
- Shuli Li: Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi’an 710032, China
- Chunying Li: Department of Dermatology, Xijing Hospital, Fourth Military Medical University, Xi’an 710032, China

46
Kuehle R, Ringwald F, Bouffleur F, Hagen N, Schaufelberger M, Nahm W, Hoffmann J, Freudlsperger C, Engel M, Eisenmann U. The Use of Artificial Intelligence for the Classification of Craniofacial Deformities. J Clin Med 2023; 12:7082. [PMID: 38002694 PMCID: PMC10672418 DOI: 10.3390/jcm12227082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Revised: 10/27/2023] [Accepted: 11/02/2023] [Indexed: 11/26/2023] Open
Abstract
Positional cranial deformities are a common finding in toddlers, yet differentiation from craniosynostosis can be challenging. The aim of this study was to train convolutional neural networks (CNNs) to classify craniofacial deformities based on 2D images generated using photogrammetry as a radiation-free imaging technique. A total of 487 patients with photogrammetry scans were included in this retrospective cohort study: children with craniosynostosis (n = 227), positional deformities (n = 206), and healthy children (n = 54). Three two-dimensional images were extracted from each photogrammetry scan. The datasets were divided into training, validation, and test sets. Fine-tuned ResNet-152 networks were used for training. Performance was quantified using tenfold cross-validation. For the detection of craniosynostosis, sensitivity was 0.94 with a specificity of 0.85. Regarding the differentiation of the five classes (trigonocephaly, scaphocephaly, positional plagiocephaly left, positional plagiocephaly right, and healthy), sensitivity ranged from 0.45 (positional plagiocephaly left) to 0.95 (scaphocephaly), and specificity ranged from 0.87 (positional plagiocephaly right) to 0.97 (scaphocephaly). We present a CNN-based approach to classifying craniofacial deformities on two-dimensional images with promising results. A larger dataset would be required to identify rarer forms of craniosynostosis as well. The chosen 2D approach enables future applications on digital cameras or smartphones.
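The sensitivity and specificity figures above come from tenfold cross-validation. A minimal pure-Python sketch of both ingredients, using hypothetical labels rather than the study's data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    for binary labels, with 1 = positive (e.g. craniosynostosis present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def kfold_indices(n, k):
    """Indices of k contiguous, near-equal folds over n samples, as used in
    k-fold cross-validation (each fold serves once as the held-out set)."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 2), round(spec, 2))  # → 0.67 0.67
```

In practice the folds would be stratified by class so that rare categories (here, healthy children) appear in every fold.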
Collapse
Affiliation(s)
- Reinald Kuehle
- Department of Oral and Maxillofacial Surgery, University of Heidelberg, 69120 Heidelberg, Germany; (F.B.); (J.H.); (C.F.)
| | - Friedemann Ringwald
- Institute of Medical Informatics, University of Heidelberg, 69120 Heidelberg, Germany; (F.R.)
| | - Frederic Bouffleur
- Department of Oral and Maxillofacial Surgery, University of Heidelberg, 69120 Heidelberg, Germany; (F.B.); (J.H.); (C.F.)
| | - Niclas Hagen
- Institute of Medical Informatics, University of Heidelberg, 69120 Heidelberg, Germany; (F.R.)
| | - Matthias Schaufelberger
- Institute of Biomedical Engineering, Karlsruhe Institute for Technology, 76131 Karlsruhe, Germany
| | - Werner Nahm
- Institute of Biomedical Engineering, Karlsruhe Institute for Technology, 76131 Karlsruhe, Germany
| | - Jürgen Hoffmann
- Department of Oral and Maxillofacial Surgery, University of Heidelberg, 69120 Heidelberg, Germany; (F.B.); (J.H.); (C.F.)
| | - Christian Freudlsperger
- Department of Oral and Maxillofacial Surgery, University of Heidelberg, 69120 Heidelberg, Germany; (F.B.); (J.H.); (C.F.)
| | - Michael Engel
- Department of Oral and Maxillofacial Surgery, University of Heidelberg, 69120 Heidelberg, Germany; (F.B.); (J.H.); (C.F.)
| | - Urs Eisenmann
- Institute of Medical Informatics, University of Heidelberg, 69120 Heidelberg, Germany; (F.R.)
| |
Collapse
|
47
|
Li H, Zhang P, Wei Z, Qian T, Tang Y, Hu K, Huang X, Xia X, Zhang Y, Cheng H, Yu F, Zhang W, Dan K, Liu X, Ye S, He G, Jiang X, Liu L, Fan Y, Song T, Zhou G, Wang Z, Zhang D, Lv J. Deep skin diseases diagnostic system with Dual-channel Image and Extracted Text. Front Artif Intell 2023; 6:1213620. [PMID: 37928449 PMCID: PMC10620802 DOI: 10.3389/frai.2023.1213620] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 09/12/2023] [Indexed: 11/07/2023] Open
Abstract
Background Due to the lower reliability of laboratory tests, skin diseases are particularly well suited to diagnosis with AI models. There are few AI dermatology diagnostic models combining images and text; fewer still target Asian populations or cover the most common types of disease. Methods Leveraging a dataset sourced from Asia comprising over 200,000 images and 220,000 medical records, we developed DIET-AI, a deep learning-based system with a Dual-channel Image and Extracted Text (DIET) design, to diagnose 31 skin diseases, covering the majority of common skin diseases. From 1 September to 1 December 2021, we prospectively collected images from 6,043 cases and medical records from 15 hospitals in seven provinces in China. The performance of DIET-AI was then compared with that of six doctors of different seniorities on this clinical dataset. Results The average performance of DIET-AI across the 31 diseases was no worse than that of any of the doctors, regardless of seniority. By comparing the area under the curve, sensitivity, and specificity, we demonstrate that the DIET-AI model is effective in clinical scenarios. In addition, medical records affect the performance of DIET-AI and physicians to varying degrees. Conclusion This is the largest dermatological dataset for the Chinese demographic. For the first time, we built a dual-channel image classification model on a non-cancer dermatitis dataset with both images and medical records and achieved diagnostic performance comparable to that of senior doctors on common skin diseases. This provides a reference for evaluating the feasibility and performance of DIET-AI in subsequent clinical use.
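A dual-channel design produces per-class outputs from an image branch and a text branch, which must be combined into one prediction. The abstract does not specify the fusion mechanism, so the sketch below assumes simple late fusion by weighted averaging of the two branches' probability vectors; the class counts, weights, and probabilities are hypothetical:

```python
def fuse_predictions(img_probs, txt_probs, w_img=0.5):
    """Late fusion: weighted average of per-class probabilities from an
    image branch and a text branch, renormalised to sum to 1."""
    assert len(img_probs) == len(txt_probs)
    fused = [w_img * i + (1.0 - w_img) * t
             for i, t in zip(img_probs, txt_probs)]
    total = sum(fused)
    return [f / total for f in fused]

# Toy 3-class example (stand-ins for disease classes).
img = [0.7, 0.2, 0.1]  # image-branch probabilities
txt = [0.4, 0.5, 0.1]  # text-branch probabilities from record features
fused = fuse_predictions(img, txt)
print(max(range(len(fused)), key=fused.__getitem__))  # → 0 (predicted class)
```

The weight `w_img` would normally be tuned on a validation set; other fusion choices (feature concatenation before the classifier, learned attention) are equally common.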
Collapse
Affiliation(s)
- Huanyu Li
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
- Shanghai Botanee Bio-technology AI Lab, Shanghai, China
| | - Peng Zhang
- School of Medicine, Shanghai University, Shanghai, China
| | - Zikun Wei
- Shanghai Botanee Bio-technology AI Lab, Shanghai, China
| | - Tian Qian
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Yiqi Tang
- Shanghai Botanee Bio-technology AI Lab, Shanghai, China
| | - Kun Hu
- Shanghai Botanee Bio-technology AI Lab, Shanghai, China
| | - Xianqiong Huang
- Department of Dermatology, Army Medical Center, Chongqing, China
| | - Xinxin Xia
- School of Public Health, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Yishuang Zhang
- School of Pharmacy, East China University of Science and Technology, Shanghai, China
| | - Haixing Cheng
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Fubing Yu
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Wenjia Zhang
- Shanghai Botanee Bio-technology AI Lab, Shanghai, China
| | - Kena Dan
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Xuan Liu
- Faculty of Science, The University of Sydney, Sydney, NSW, Australia
| | - Shujun Ye
- Faculty of Science, The University of Melbourne, Parkville, VIC, Australia
| | - Guangqiao He
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Xia Jiang
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Liwei Liu
- Chongqing Shapingba District People's Hospital, Chongqing, China
| | - Yukun Fan
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Tingting Song
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Guomin Zhou
- Shanghai Medical College, Fudan University, Shanghai, China
| | - Ziyi Wang
- Huazhong Agricultural University, Wuhan, Hubei, China
| | - Daojun Zhang
- The Third Affiliated Hospital of Chongqing Medical University (CQMU), Chongqing, China
| | - Junwei Lv
- Shanghai Botanee Bio-technology AI Lab, Shanghai, China
| |
Collapse
|
48
|
Riaz S, Naeem A, Malik H, Naqvi RA, Loh WK. Federated and Transfer Learning Methods for the Classification of Melanoma and Nonmelanoma Skin Cancers: A Prospective Study. SENSORS (BASEL, SWITZERLAND) 2023; 23:8457. [PMID: 37896548 PMCID: PMC10611214 DOI: 10.3390/s23208457] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Revised: 10/09/2023] [Accepted: 10/12/2023] [Indexed: 10/29/2023]
Abstract
Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is a challenging and time-consuming process due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer, enabling treatment at an early stage, this systematic literature review (SLR) presents the various federated learning (FL) and transfer learning (TL) techniques that have been widely applied. The study evaluates FL and TL classifiers in terms of the performance metrics reported in research studies, including true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). The review was assembled and systematized from well-reputed studies published in eminent fora between January 2018 and July 2023, compiled through a systematic search of seven well-reputed databases. A total of 86 articles were included in this SLR, which covers the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the many malignant and non-malignant cancer classes. The results of this SLR highlight the limitations and challenges of recent research. Consequently, future directions and opportunities are outlined to help interested researchers advance the automated classification of melanoma and nonmelanoma skin cancers.
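Federated learning systems of the kind surveyed here commonly aggregate client models with federated averaging (FedAvg): each client trains locally, and the server averages parameters weighted by local dataset size. A minimal sketch over flat parameter vectors, illustrative rather than any reviewed system's implementation:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: combine clients' flat parameter vectors into one
    global vector, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            agg[j] += (n / total) * w[j]
    return agg

# Two hypothetical clients (e.g. hospitals) with 2-parameter models.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]  # client 2 holds three times as much data
print(fedavg(clients, sizes))  # → [2.5, 3.5]
```

The appeal for dermatology is that raw patient images never leave the client; only parameter updates are shared with the server.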
Collapse
Affiliation(s)
- Shafia Riaz
- Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan; (S.R.); (H.M.)
| | - Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan;
| | - Hassaan Malik
- Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan; (S.R.); (H.M.)
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan;
| | - Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
| | - Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
| |
Collapse
|
49
|
Di Biasi L, De Marco F, Auriemma Citarella A, Castrillón-Santana M, Barra P, Tortora G. Refactoring and performance analysis of the main CNN architectures: using false negative rate minimization to solve the clinical images melanoma detection problem. BMC Bioinformatics 2023; 24:386. [PMID: 37821815 PMCID: PMC10568761 DOI: 10.1186/s12859-023-05516-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 10/02/2023] [Indexed: 10/13/2023] Open
Abstract
BACKGROUND Melanoma is one of the deadliest tumors in the world. Early detection is critical for first-line therapy in this tumor pathology, and it remains challenging due to the need for histological analysis to ensure a correct diagnosis. Therefore, multiple computer-aided diagnosis (CAD) systems working on melanoma images have been proposed to mitigate the need for a biopsy. However, although high global accuracy is reported in the literature, CAD systems for the health field must target the lowest possible false negative rate (FNR) to qualify as diagnosis support systems. The final goal must be to avoid type 2 classification errors, which can lead to life-threatening situations. Another goal is a system that is easy to use for both physicians and patients. RESULTS To minimize type 2 errors, we performed a wide exploratory analysis of the principal convolutional neural network (CNN) architectures published for multi-class image classification, adapting these networks to the melanoma clinical image binary classification problem (MCIBCP). We collected and analyzed performance data to identify the best CNN architecture, in terms of FNR, for solving the MCIBCP. Then, to provide a starting point for an easy-to-use CAD system, we used a clinical image dataset (MED-NODE), because clinical images are easier to acquire: they can be taken with a smartphone or other hand-sized device. Despite their lower resolution compared with dermoscopic images, results in the literature suggest that high classification performance is achievable with clinical images. In this work, we used MED-NODE, which consists of 170 clinical images (70 images of melanoma and 100 images of naevi). We optimized the following CNNs for the MCIBCP: AlexNet, DenseNet, GoogleNet Inception V3, GoogleNet, MobileNet, ShuffleNet, SqueezeNet, and VGG16.
CONCLUSIONS The results suggest that CNNs built on the VGG and AlexNet structures ensure the lowest FNRs (0.07 and 0.13, respectively). In both cases, moderate overall performance is achieved: 73% accuracy, 82% sensitivity, and 59% specificity for VGG; 89% accuracy, 87% sensitivity, and 90% specificity for AlexNet.
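The FNR that this study minimizes is simply the fraction of positives a classifier misses, and at deployment it can also be controlled by the decision threshold. A pure-Python sketch with hypothetical scores (the paper compares architectures rather than thresholds, so this only illustrates the metric):

```python
def false_negative_rate(y_true, y_pred):
    """FNR = missed positives / all positives (type 2 error rate)."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return fn / positives if positives else 0.0

def pick_threshold_min_fnr(y_true, scores, thresholds):
    """Return (threshold, fnr) with the lowest FNR; ties keep the higher
    threshold so some specificity is preserved."""
    best = None
    for th in sorted(thresholds, reverse=True):
        pred = [1 if s >= th else 0 for s in scores]
        fnr = false_negative_rate(y_true, pred)
        if best is None or fnr < best[1]:
            best = (th, fnr)
    return best

# Hypothetical melanoma scores: two positives, two negatives.
y_true = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.1]
print(pick_threshold_min_fnr(y_true, scores, [0.3, 0.5, 0.7]))  # → (0.3, 0.0)
```

Lowering the threshold trades false positives (unnecessary referrals) for fewer false negatives (missed melanomas), which is the preference this paper argues for.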
Collapse
Affiliation(s)
- Luigi Di Biasi
- Department of Computer Science, University of Salerno, Fisciano, Italy.
| | - Fabiola De Marco
- Department of Computer Science, University of Salerno, Fisciano, Italy
| | | | | | - Paola Barra
- Department of Science and Technology, Parthenope University of Naples, Naples, Italy
| | - Genoveffa Tortora
- Department of Computer Science, University of Salerno, Fisciano, Italy
| |
Collapse
|
50
|
Juan CK, Su YH, Wu CY, Yang CS, Hsu CH, Hung CL, Chen YJ. Deep convolutional neural network with fusion strategy for skin cancer recognition: model development and validation. Sci Rep 2023; 13:17087. [PMID: 37816815 PMCID: PMC10564722 DOI: 10.1038/s41598-023-42693-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 09/13/2023] [Indexed: 10/12/2023] Open
Abstract
We aimed to develop an accurate and efficient skin cancer classification system using deep-learning technology with a relatively small dataset of clinical images. We proposed a novel skin cancer classification method, SkinFLNet, which utilizes model fusion and lifelong learning technologies. The SkinFLNet deep convolutional neural networks were trained on a dataset of 1215 clinical images of skin tumors diagnosed at Taichung and Taipei Veterans General Hospital between 2015 and 2020. The dataset comprised five categories: benign nevus, seborrheic keratosis, basal cell carcinoma, squamous cell carcinoma, and malignant melanoma. SkinFLNet's performance was evaluated using 463 clinical images collected between January and December 2021. SkinFLNet achieved an overall classification accuracy of 85%, precision of 85%, recall of 82%, F-score of 82%, sensitivity of 82%, and specificity of 93%, outperforming other deep convolutional neural network models. We also compared SkinFLNet's performance with that of three board-certified dermatologists; the average overall performance of SkinFLNet was comparable to, or even better than, the dermatologists'. Our study presents an efficient skin cancer classification system utilizing model fusion and lifelong learning technologies that can be trained on a relatively small dataset. This system can potentially improve skin cancer screening accuracy in clinical practice.
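The precision, recall, and F-score figures reported above are linked by the standard F-beta formula. A minimal sketch; note that a single F computed from the overall precision and recall need not match a paper's macro-averaged F-score, so the value below is purely illustrative:

```python
def f_score(precision, recall, beta=1.0):
    """F-beta score: weighted harmonic mean of precision and recall.
    beta = 1 gives the usual F1; beta > 1 weights recall more heavily."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Using the overall precision/recall quoted in the abstract.
print(round(f_score(0.85, 0.82), 3))  # → 0.835
```

For screening tasks like this one, an F2 score (`beta=2.0`) is sometimes preferred because missing a malignancy is costlier than a false alarm.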
Collapse
Affiliation(s)
- Chao-Kuei Juan
- Department of Dermatology, Taichung Veterans General Hospital, Taichung, Taiwan
- Department of Dermatology, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Yu-Hao Su
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Chen-Yi Wu
- Department of Dermatology, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Dermatology, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Chi-Shun Yang
- Department of Pathology, Taichung Veterans General Hospital, Taichung, Taiwan
| | - Chung-Hao Hsu
- Department of Dermatology, Taichung Veterans General Hospital, Taichung, Taiwan
| | - Che-Lun Hung
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan.
| | - Yi-Ju Chen
- Department of Dermatology, Taichung Veterans General Hospital, Taichung, Taiwan.
- Department of Dermatology, National Yang Ming Chiao Tung University, Taipei, Taiwan.
- Department of Post-Baccalaureate Medicine, Chung-Hsing University, Taichung, Taiwan.
| |
Collapse
|