1. Li J. Fusion feature-based hybrid methods for diagnosing oral squamous cell carcinoma in histopathological images. Front Oncol 2025; 15:1551876. PMID: 40265007; PMCID: PMC12011784; DOI: 10.3389/fonc.2025.1551876.
Abstract
Objective This study is experimental in nature and assesses the effectiveness of the Cross-Attention Vision Transformer (CrossViT) in the early detection of Oral Squamous Cell Carcinoma (OSCC) and proposes a hybrid model that combines CrossViT features with manually extracted features to improve the accuracy and robustness of OSCC diagnosis. Methods We employed the CrossViT architecture, which utilizes a dual attention mechanism to process multi-scale features, in combination with Convolutional Neural Network (CNN) technology for the effective analysis of image patches. Simultaneously, features were manually extracted by experts from OSCC pathological images and subsequently fused with the features extracted by CrossViT to enhance diagnostic performance. The classification task was performed using an Artificial Neural Network (ANN) to further improve diagnostic accuracy. Model performance was evaluated based on classification accuracy on two independent OSCC datasets. Results The proposed hybrid feature model demonstrated excellent performance in pathological diagnosis, achieving accuracies of 99.36% and 99.59% on the two datasets, respectively. Compared to CNN and Vision Transformer (ViT) models, the hybrid model was more effective in distinguishing between malignant and benign lesions, significantly improving diagnostic accuracy. Conclusion By combining CrossViT with expert features, diagnostic accuracy for OSCC was significantly enhanced, thereby validating the potential of hybrid artificial intelligence models in clinical pathology. Future research will expand the dataset and explore the model's interpretability to facilitate its practical application in clinical settings.
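The fusion step described in this abstract follows a common pattern: deep features from a transformer backbone are concatenated with handcrafted descriptors and passed to a small ANN. The sketch below illustrates that general pattern in PyTorch; it is not the authors' CrossViT pipeline, and the feature dimensions, layer sizes, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Concatenate deep (e.g. transformer) features with handcrafted features,
    then classify with a small ANN. All dimensions below are illustrative."""
    def __init__(self, deep_dim=768, handcrafted_dim=64, hidden=256, n_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(deep_dim + handcrafted_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, deep_feats, handcrafted_feats):
        # (B, deep_dim) + (B, handcrafted_dim) -> (B, deep_dim + handcrafted_dim)
        fused = torch.cat([deep_feats, handcrafted_feats], dim=1)
        return self.mlp(fused)

# toy usage with random tensors standing in for extracted features
model = FusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 2])
```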
Affiliation(s)
- Jiaxing Li
- Baoan Central Hospital of Shenzhen, Shenzhen, Guangdong, China
2. Di Fede O, La Mantia G, Parola M, Maniscalco L, Matranga D, Tozzo P, Campisi G, Cimino MGCA. Automated Detection of Oral Malignant Lesions Using Deep Learning: Scoping Review and Meta-Analysis. Oral Dis 2025; 31:1054-1064. PMID: 39489724; PMCID: PMC12022385; DOI: 10.1111/odi.15188.
Abstract
OBJECTIVE Oral diseases, specifically malignant lesions, are serious global health concerns requiring early diagnosis for effective treatment. In recent years, deep learning (DL) has emerged as a powerful tool for the automated detection and classification of oral lesions. This research, by conducting a scoping review and meta-analysis, aims to provide an overview of the progress and achievements in the field of automated detection of oral lesions using DL. MATERIALS AND METHODS A scoping review was conducted to identify relevant studies published in the last 5 years (2018-2023). A comprehensive search was conducted using several electronic databases, including PubMed, Web of Science, and Scopus. Two reviewers independently assessed the studies for eligibility and extracted data using a standardized form, and a meta-analysis was conducted to synthesize the findings. RESULTS Fourteen studies utilizing various DL algorithms for the detection and classification of oral lesions from clinical images were identified and included. Among these, three were included in the meta-analysis. The estimated pooled sensitivity and specificity were 0.86 (95% confidence interval [CI] = 0.80-0.91) and 0.67 (95% CI = 0.58-0.75), respectively. CONCLUSIONS The results of the meta-analysis indicate that DL algorithms improve the diagnosis of oral lesions. Future research should develop validated algorithms for automated diagnosis. TRIAL REGISTRATION Open Science Framework (https://osf.io/4n8sm).
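For readers unfamiliar with how pooled sensitivity and specificity are obtained, the sketch below shows a simplified inverse-variance pooling of logit-transformed proportions from per-study 2x2 counts. It is a minimal illustration only; the review's actual estimates come from a formal random-effects meta-analysis, and the per-study counts used here are made up.

```python
import math

def pooled_logit(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    events/totals: per-study counts, e.g. TP and (TP + FN) for sensitivity."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0            # continuity correction
        p = e / n
        logit = math.log(p / (1 - p))
        var = 1 / e + 1 / (n - e)          # variance of the logit
        logits.append(logit)
        weights.append(1 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))     # back-transform to a proportion

# hypothetical TP and (TP + FN) counts for three studies
print(round(pooled_logit([42, 88, 30], [50, 100, 35]), 2))  # ~0.86 for these toy counts
```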
Affiliation(s)
- Olga Di Fede
- Department of Precision Medicine in Medical, Surgical and Critical Care (Me.Pre.C.C.), University of Palermo, Palermo, Italy
- Gaetano La Mantia
- Department of Precision Medicine in Medical, Surgical and Critical Care (Me.Pre.C.C.), University of Palermo, Palermo, Italy
- Unit of Oral Medicine and Dentistry for Fragile Patients, Department of Rehabilitation, Fragility, and Continuity of Care, University Hospital Palermo, Palermo, Italy
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
- Marco Parola
- Department of Information Engineering, University of Pisa, Pisa, Italy
- Laura Maniscalco
- Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, University of Palermo, Palermo, Italy
- Domenica Matranga
- Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, University of Palermo, Palermo, Italy
- Pietro Tozzo
- Unit of Stomatology, Ospedali Riuniti “Villa Sofia‐Cervello” of Palermo, Palermo, Italy
- Giuseppina Campisi
- Unit of Oral Medicine and Dentistry for Fragile Patients, Department of Rehabilitation, Fragility, and Continuity of Care, University Hospital Palermo, Palermo, Italy
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
3. Nieri M, Serni L, Clauser T, Paoletti C, Franchi L. Diagnosis of Oral Cancer With Deep Learning. A Comparative Test Accuracy Systematic Review. Oral Dis 2025. PMID: 40163741; DOI: 10.1111/odi.15330.
Abstract
OBJECTIVE To directly compare the diagnostic accuracy of deep learning models with human experts and other diagnostic methods used for the clinical detection of oral cancer. METHODS Comparative diagnostic studies involving patients with photographic images of oral mucosal lesions (cancer or non-cancer) were included. Only studies using deep learning methods were eligible. Medline, EMBASE, Scopus, Google Scholar, and ClinicalTrials.gov were searched until September 2024. QUADAS-C assessed the risk of bias. A Bayesian meta-analysis compared diagnostic test accuracy. RESULTS Eight studies were included, none of which had a low risk of bias. Three studies compared deep learning versus human experts. The difference in sensitivity favored deep learning by 0.024 (95% CI: -0.093, 0.206), while the difference in specificity favored human experts by -0.041 (95% CI: -0.218, 0.038). Two studies compared deep learning versus postgraduate medical students. The differences in sensitivity and specificity favored deep learning by 0.108 (95% CI: -0.038, 0.324) and by 0.010 (95% CI: -0.119, 0.111), respectively. Both comparisons provided low-level evidence. CONCLUSIONS Deep learning models showed comparable sensitivity and specificity to human experts. These models outperformed postgraduate medical students in terms of sensitivity. Prospective clinical trials are needed to evaluate the real-world performance of deep learning models.
Affiliation(s)
- Michele Nieri
- Department of Experimental and Clinical Medicine, University of Florence, Italy
- Lapo Serni
- Department of Experimental and Clinical Medicine, University of Florence, Italy
- Lorenzo Franchi
- Department of Experimental and Clinical Medicine, University of Florence, Italy
4. Mirfendereski P, Li GY, Pearson AT, Kerr AR. Artificial intelligence and the diagnosis of oral cavity cancer and oral potentially malignant disorders from clinical photographs: a narrative review. Front Oral Health 2025; 6:1569567. PMID: 40130020; PMCID: PMC11931071; DOI: 10.3389/froh.2025.1569567.
Abstract
Oral cavity cancer is associated with high morbidity and mortality, particularly with advanced stage diagnosis. Oral cavity cancer, typically squamous cell carcinoma (OSCC), is often preceded by oral potentially malignant disorders (OPMDs), which comprise eleven disorders with variable risks for malignant transformation. While OPMDs are clinical diagnoses, conventional oral exam followed by biopsy and histopathological analysis is the gold standard for diagnosis of OSCC. There is vast heterogeneity in the clinical presentation of OPMDs, with possible visual similarities to early-stage OSCC or even to various benign oral mucosal abnormalities. The diagnostic challenge of OSCC/OPMDs is compounded in the non-specialist or primary care setting. There has been significant research interest in technology to assist in the diagnosis of OSCC/OPMDs. Artificial intelligence (AI), which enables machine performance of human tasks, has already shown promise in several domains of medical diagnostics. Computer vision, the field of AI dedicated to the analysis of visual data, has over the past decade been applied to clinical photographs for the diagnosis of OSCC/OPMDs. Various methodological concerns and limitations may be encountered in the literature on OSCC/OPMD image analysis. This narrative review delineates the current landscape of AI clinical photograph analysis in the diagnosis of OSCC/OPMDs and navigates the limitations, methodological issues, and clinical workflow implications of this field, providing context for future research considerations.
Affiliation(s)
- Payam Mirfendereski
- Department of Oral and Maxillofacial Pathology, Radiology, and Medicine, New York University College of Dentistry, New York, NY, United States
- Grace Y. Li
- Department of Medicine, Section of Hematology/Oncology, University of Chicago Medical Center, Chicago, IL, United States
- Alexander T. Pearson
- Department of Medicine, Section of Hematology/Oncology, University of Chicago Medical Center, Chicago, IL, United States
- Alexander Ross Kerr
- Department of Oral and Maxillofacial Pathology, Radiology, and Medicine, New York University College of Dentistry, New York, NY, United States
5. Song B, Liang R. Integrating artificial intelligence with smartphone-based imaging for cancer detection in vivo. Biosens Bioelectron 2025; 271:116982. PMID: 39616900; PMCID: PMC11789447; DOI: 10.1016/j.bios.2024.116982.
Abstract
Cancer is a major global health challenge, accounting for nearly one in six deaths worldwide. Early diagnosis significantly improves survival rates and patient outcomes, yet in resource-limited settings, the scarcity of medical resources often leads to late-stage diagnosis. Integrating artificial intelligence (AI) with smartphone-based imaging systems offers a promising solution by providing portable, cost-effective, and widely accessible tools for early cancer detection. This paper introduces advanced smartphone-based imaging systems that utilize various imaging modalities for in vivo detection of different cancer types and highlights the advancements of AI for in vivo cancer detection in smartphone-based imaging. However, these compact smartphone systems face challenges like low imaging quality and restricted computing power. The use of advanced AI algorithms to address the optical and computational limitations of smartphone-based imaging systems provides promising solutions. AI-based cancer detection also faces challenges. Transparency and reliability are critical factors in gaining the trust and acceptance of AI algorithms for clinical application; explainable and uncertainty-aware AI breaks open the black box and will shape future AI development in early cancer detection. The challenges and solutions for improving AI accuracy, transparency, and reliability are general issues in AI applications, and the AI technologies, limitations, and potentials discussed in this paper are applicable to a wide range of biomedical imaging diagnostics beyond smartphones or cancer-specific applications. Smartphone-based multimodal imaging systems and deep learning algorithms for multimodal data analysis are also growing trends, as this approach can provide comprehensive information about the tissue being examined. Future opportunities and perspectives of AI-integrated smartphone imaging systems will be to make cutting-edge diagnostic tools more affordable and accessible, ultimately enabling early cancer detection for a broader population.
Affiliation(s)
- Bofan Song
- Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ, 85721, USA.
- Rongguang Liang
- Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ, 85721, USA.
6. Yadav DP, Sharma B, Noonia A, Mehbodniya A. Explainable label guided lightweight network with axial transformer encoder for early detection of oral cancer. Sci Rep 2025; 15:6391. PMID: 39984521; PMCID: PMC11845714; DOI: 10.1038/s41598-025-87627-y.
Abstract
Oral cavity cancer exhibits high morbidity and mortality rates. Therefore, it is essential to diagnose the disease at an early stage. Machine learning and convolutional neural networks (CNN) are powerful tools for diagnosing mouth and oral cancer. In this study, we design a lightweight explainable network (LWENet) with label-guided attention (LGA) to provide a second opinion to the expert. The LWENet contains depth-wise separable convolution layers to reduce the computation costs. Moreover, the LGA module provides label consistency to the neighbor pixel and improves the spatial features. Furthermore, an AMSA (axial multi-head self-attention)-based ViT (vision transformer) encoder is incorporated into the model to provide global attention. Our ViT encoder is computationally efficient compared to the classical ViT encoder. We tested LWENet performance on the MOD (mouth and oral disease) and OCI (oral cancer image) datasets, and the results are compared with other CNN and ViT-based methods. The LWENet achieved precision and F1-scores of 96.97% and 98.90% on the MOD dataset, and 99.48% and 98.23% on the OCI dataset, respectively. By incorporating Grad-CAM, we visualize the decision-making process, enhancing model interpretability. This work demonstrates the potential of LWENet with LGA in facilitating early oral cancer detection.
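Depth-wise separable convolutions, named in this abstract as the source of the network's low computational cost, factor a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution. The block below is a generic PyTorch sketch of that building block, not the LWENet layer itself; channel counts are arbitrary.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# uses far fewer parameters than a full 3x3 convolution from 32 to 64 channels
x = torch.randn(1, 32, 224, 224)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 224, 224])
```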
Affiliation(s)
- Dhirendra Prasad Yadav
- Department of Computer Engineering & Applications, GLA University Mathura, Mathura, India
- Bhisham Sharma
- Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India.
- Ajit Noonia
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Abolfazl Mehbodniya
- Department of Electronics and Communication Engineering, Kuwait College of Science and Technology (KCST), Doha Area, 7th Ring Road, Kuwait City, Kuwait
7. Ramani RS, Tan I, Bussau L, O'Reilly LA, Silke J, Angel C, Celentano A, Whitehead L, McCullough M, Yap T. Convolutional neural networks for accurate real-time diagnosis of oral epithelial dysplasia and oral squamous cell carcinoma using high-resolution in vivo confocal microscopy. Sci Rep 2025; 15:2555. PMID: 39833362; PMCID: PMC11746977; DOI: 10.1038/s41598-025-86400-5.
Abstract
Oral cancer detection is based on biopsy histopathology; however, with digital microscopy imaging technology there is real potential for rapid multi-site imaging and simultaneous diagnostic analysis. Fifty-nine patients with oral mucosal abnormalities were imaged in vivo with a confocal laser endomicroscope using the contrast agents acriflavine and fluorescein for the detection of oral epithelial dysplasia and oral cancer. To analyse the 9168 image frames obtained, three pre-trained Inception-V3 convolutional neural network (CNN) models, applied in tandem, were developed using transfer learning in the PyTorch framework. The first CNN was used to filter for image quality, followed by image-specific diagnostic triage models for fluorescein and acriflavine, respectively. Images were categorised based on a histopathological diagnosis into 4 categories: no dysplasia, lichenoid lesions, low-grade dysplasia and high-grade dysplasia/oral squamous cell carcinoma (OSCC). The quality filtering model had an accuracy of 89.5%. The acriflavine diagnostic model performed well for identifying lichenoid (AUC = 0.94) and low-grade dysplasia (AUC = 0.91) but poorly for identifying no dysplasia (AUC = 0.44) or high-grade dysplasia/OSCC (AUC = 0.28). In contrast, the fluorescein diagnostic model had high classification performance for all diagnostic classes (AUC range = 0.90-0.96). These models had a rapid classification speed of less than 1/10th of a second per image. Our study suggests that tandem CNNs can provide highly accurate and rapid real-time diagnostic triage for in vivo assessment of high-risk oral mucosal disease.
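The abstract describes transfer learning with pre-trained Inception-V3 models in PyTorch. The snippet below is a generic sketch of that setup using torchvision (assuming a recent torchvision with the weights API); the four-class head mirrors the diagnostic categories named above, but none of the study's training details are reproduced.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # no dysplasia, lichenoid, low-grade dysplasia, high-grade/OSCC

# load ImageNet weights and freeze the backbone for feature extraction
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# replace the main and auxiliary classification heads with new, trainable layers
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

# only the new heads are optimized; Inception-V3 expects 299x299 inputs
trainable_params = [p for p in model.parameters() if p.requires_grad]
```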
Affiliation(s)
- Rishi S Ramani
- Melbourne Dental School, University of Melbourne, Level 5, 720 Swanston Street, Carlton, Melbourne, VIC, 3053, Australia.
- Ivy Tan
- Melbourne Dental School, University of Melbourne, Level 5, 720 Swanston Street, Carlton, Melbourne, VIC, 3053, Australia
- John Silke
- Walter and Eliza Hall Institute, Melbourne, VIC, Australia
- Antonio Celentano
- Melbourne Dental School, University of Melbourne, Level 5, 720 Swanston Street, Carlton, Melbourne, VIC, 3053, Australia
- Lachlan Whitehead
- Walter and Eliza Hall Institute, Melbourne, VIC, Australia
- Department of Medical Biology, University of Melbourne, Melbourne, VIC, Australia
- Michael McCullough
- Melbourne Dental School, University of Melbourne, Level 5, 720 Swanston Street, Carlton, Melbourne, VIC, 3053, Australia
- Tami Yap
- Melbourne Dental School, University of Melbourne, Level 5, 720 Swanston Street, Carlton, Melbourne, VIC, 3053, Australia
8. Wankhade D, Dhawale C, Meshram M. Advanced deep learning algorithms in oral cancer detection: Techniques and applications. J Environ Sci Health C Toxicol Carcinog 2025; 43:133-158. PMID: 39819195; DOI: 10.1080/26896583.2024.2445957.
Abstract
As the 16th most common cancer globally, oral cancer yearly accounts for some 355,000 new cases. This study underlines that an early diagnosis can improve the prognosis and cut down on mortality. It discloses a multifaceted approach to the detection of oral cancer, including clinical examination, biopsies, imaging techniques, and the incorporation of artificial intelligence and deep learning methods. This study is distinctive in that it provides a thorough analysis of the most recent AI-based methods for detecting oral cancer, including deep learning models and machine learning algorithms that use convolutional neural networks. By improving the precision and effectiveness of cancer cell detection, these models eventually make early diagnosis and therapy possible. This study also discusses the importance of techniques in image pre-processing and segmentation in improving image quality and feature extraction, an essential component of accurate diagnosis. These techniques have shown promising results, with classification accuracies reaching up to 97.66% in some models. Integrating the conventional methods with the cutting-edge AI technologies, this study seeks to advance early diagnosis of oral cancer, thus enhancing patient outcomes and cutting down on the burden this disease is imposing on healthcare systems.
Affiliation(s)
- Dipali Wankhade
- Research Scholar, Datta Meghe Institute of Higher Education and Research Wardha, Nagpur, India
- Chitra Dhawale
- Faculty of Science and Technology, Datta Meghe Institute of Higher Education and Research, (Declared as Deemed-to-be-University), Wardha, India
- Mrunal Meshram
- Department of Oral Medicine & Radiology, Sharad Pawar Dental College, Sawangi, Wardha, India
9. Patel A, Besombes C, Dillibabu T, Sharma M, Tamimi F, Ducret M, Chauvin P, Madathil S. Attention-guided convolutional network for bias-mitigated and interpretable oral lesion classification. Sci Rep 2024; 14:31700. PMID: 39738228; PMCID: PMC11685657; DOI: 10.1038/s41598-024-81724-0.
Abstract
Accurate diagnosis of oral lesions, early indicators of oral cancer, is a complex clinical challenge. Recent advances in deep learning have demonstrated potential in supporting clinical decisions. This paper introduces a deep learning model for classifying oral lesions, focusing on accuracy, interpretability, and reducing dataset bias. The model integrates three components: (i) a Classification Stream, utilizing a CNN to categorize images into 16 lesion types (baseline model), (ii) a Guidance Stream, which aligns class activation maps with clinically relevant areas using ground truth segmentation masks (GAIN model), and (iii) an Anatomical Site Prediction Stream, improving interpretability by predicting lesion location (GAIN+ASP model). The development dataset comprised 2765 intra-oral digital images of 16 lesion types from 1079 patients seen at an oral pathology clinic between 1999 and 2021. The GAIN model demonstrated a 7.2% relative improvement in accuracy over the baseline for 16-class classification, with superior class-specific balanced accuracy and AUC scores. Additionally, the GAIN model enhanced lesion localization and improved the alignment between attention maps and ground truth. The proposed models also exhibited greater robustness against dataset bias, as shown in ablation studies.
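The Guidance Stream described above penalizes disagreement between the network's class activation map and a clinician-drawn mask. As a rough illustration of that idea (not the GAIN implementation), the sketch below computes a CAM from the final convolutional features and adds a simple L1 attention-guidance term to the classification loss; the feature shapes and loss weight are assumptions.

```python
import torch
import torch.nn.functional as F

def cam_guidance_loss(feats, class_weights, target, mask, lam=0.5):
    """feats: (B, C, H, W) final conv features; class_weights: (num_classes, C)
    classifier weights; target: (B,) labels; mask: (B, 1, h, w) lesion mask."""
    w = class_weights[target]                        # (B, C) weights of the true class
    cam = F.relu(torch.einsum("bchw,bc->bhw", feats, w))
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-6)   # normalize to [0, 1]
    cam = F.interpolate(cam.unsqueeze(1), size=mask.shape[-2:],
                        mode="bilinear", align_corners=False)
    return lam * F.l1_loss(cam, mask)                # guidance term added to the CE loss

# toy shapes: 2 images, 8 channels, 7x7 features, 3 classes, 56x56 masks
loss = cam_guidance_loss(torch.rand(2, 8, 7, 7), torch.rand(3, 8),
                         torch.tensor([0, 2]), torch.rand(2, 1, 56, 56))
print(loss.item())
```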
Affiliation(s)
- Adeetya Patel
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada
- Camille Besombes
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada
- Theerthika Dillibabu
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada
- Mridul Sharma
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada
- Faleh Tamimi
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar
- Maxime Ducret
- Faculté d'Odontologie, Université Claude Bernard Lyon 1, Lyon, France
- Peter Chauvin
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada
- Sreenath Madathil
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada.
10. Thakuria T, Rahman T, Mahanta DR, Khataniar SK, Goswami RD, Rahman T, Mahanta LB. Deep learning for early diagnosis of oral cancer via smartphone and DSLR image analysis: a systematic review. Expert Rev Med Devices 2024; 21:1189-1204. PMID: 39587051; DOI: 10.1080/17434440.2024.2434732.
Abstract
INTRODUCTION Diagnosing oral cancer is crucial in healthcare, with technological advancements enhancing early detection and outcomes. This review examines the impact of handheld AI-based tools, focusing on Convolutional Neural Networks (CNNs) and their advanced architectures in oral cancer diagnosis. METHODS A comprehensive search across PubMed, Scopus, Google Scholar, and Web of Science identified papers on deep learning (DL) in oral cancer diagnosis using digital images. The review, registered with PROSPERO, employed PRISMA and QUADAS-2 for search and risk assessment, with data analyzed through bubble and bar charts. RESULTS Twenty-five papers were reviewed, highlighting classification, segmentation, and object detection as key areas. Despite challenges like limited annotated datasets and data imbalance, models such as DenseNet121, VGG19, and EfficientNet-B0 excelled in binary classification, while EfficientNet-B4, Inception-V4, and Faster R-CNN were effective for multiclass classification and object detection. Models achieved up to 100% precision, 99% specificity, and 97.5% accuracy, showcasing AI's potential to improve diagnostic accuracy. Combining datasets and leveraging transfer learning enhances detection, particularly in resource-limited settings. CONCLUSION Handheld AI tools are transforming oral cancer diagnosis, with ethical considerations guiding their integration into healthcare systems. DL offers explainability, builds trust in AI-driven diagnoses, and facilitates telemedicine integration.
Affiliation(s)
- Tapabrat Thakuria
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Taibur Rahman
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Deva Raj Mahanta
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Tashnin Rahman
- Department of Head & Neck Oncology, Dr. B Borooah Cancer Institute, Guwahati, India
- Lipi B Mahanta
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
11. Chen W, Dhawan M, Liu J, Ing D, Mehta K, Tran D, Lawrence D, Ganhewa M, Cirillo N. Mapping the Use of Artificial Intelligence-Based Image Analysis for Clinical Decision-Making in Dentistry: A Scoping Review. Clin Exp Dent Res 2024; 10:e70035. PMID: 39600121; PMCID: PMC11599430; DOI: 10.1002/cre2.70035.
Abstract
OBJECTIVES Artificial intelligence (AI) is an emerging field in dentistry. AI is gradually being integrated into dentistry to improve clinical dental practice. The aims of this scoping review were to investigate the application of AI in image analysis for decision-making in clinical dentistry and identify trends and research gaps in the current literature. MATERIAL AND METHODS This review followed the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). An electronic literature search was performed through PubMed and Scopus. After removing duplicates, a preliminary screening based on titles and abstracts was performed. A full-text review and analysis were performed according to predefined inclusion criteria, and data were extracted from eligible articles. RESULTS Of the 1334 articles returned, 276 met the inclusion criteria (consisting of 601,122 images in total) and were included in the qualitative synthesis. Most of the included studies utilized convolutional neural networks (CNNs) on dental radiographs such as orthopantomograms (OPGs) and intraoral radiographs (bitewings and periapicals). AI was applied across all fields of dentistry - particularly oral medicine, oral surgery, and orthodontics - for direct clinical inference and segmentation. AI-based image analysis was used in several components of the clinical decision-making process, including diagnosis, detection or classification, prediction, and management. CONCLUSIONS A variety of machine learning and deep learning techniques are being used for dental image analysis to assist clinicians in making accurate diagnoses and choosing appropriate interventions in a timely manner.
Affiliation(s)
- Wei Chen
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Monisha Dhawan
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Jonathan Liu
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Damie Ing
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Kruti Mehta
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Daniel Tran
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Max Ganhewa
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
- Nicola Cirillo
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
12. Sahoo RK, Sahoo KC, Dash GC, Kumar G, Baliarsingh SK, Panda B, Pati S. Diagnostic performance of artificial intelligence in detecting oral potentially malignant disorders and oral cancer using medical diagnostic imaging: a systematic review and meta-analysis. Front Oral Health 2024; 5:1494867. PMID: 39568787; PMCID: PMC11576460; DOI: 10.3389/froh.2024.1494867.
Abstract
Objective Oral cancer is a widespread global health problem characterised by high mortality rates, wherein early detection is critical for better survival outcomes and quality of life. While visual examination is the primary method for detecting oral cancer, it may not be practical in remote areas. AI algorithms have shown some promise in detecting cancer from medical images, but their effectiveness in oral cancer detection remains nascent. This systematic review aims to provide an extensive assessment of the existing evidence about the diagnostic accuracy of AI-driven approaches for detecting oral potentially malignant disorders (OPMDs) and oral cancer using medical diagnostic imaging. Methods Adhering to PRISMA guidelines, the review scrutinised literature from PubMed, Scopus, and IEEE databases, with a specific focus on evaluating the performance of AI architectures across diverse imaging modalities for the detection of these conditions. Results The performance of AI models, measured by sensitivity and specificity, was assessed using a hierarchical summary receiver operating characteristic (SROC) curve, with heterogeneity quantified through the I2 statistic. To account for inter-study variability, a random effects model was utilized. We screened 296 articles, included 55 studies for qualitative synthesis, and selected 18 studies for meta-analysis. Studies evaluating the diagnostic efficacy of AI-based methods reveal a high sensitivity of 0.87 and specificity of 0.81. The diagnostic odds ratio (DOR) of 131.63 indicates a high likelihood of accurate diagnosis of oral cancer and OPMDs. The SROC curve (AUC) of 0.9758 indicates the exceptional diagnostic performance of such models. The research showed that deep learning (DL) architectures, especially convolutional neural networks (CNNs), were the most effective at detecting OPMDs and oral cancer. Histopathological images exhibited the greatest sensitivity and specificity in these detections. Conclusion These findings suggest that AI algorithms have the potential to function as reliable tools for the early diagnosis of OPMDs and oral cancer, offering significant advantages, particularly in resource-constrained settings. Systematic Review Registration https://www.crd.york.ac.uk/, PROSPERO (CRD42023476706).
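The diagnostic odds ratio quoted above summarizes how much more likely a positive result is in diseased than in non-diseased cases. The snippet below shows how a single study's DOR is computed from its 2x2 counts (with a continuity correction); it is a didactic example with made-up counts, not a recalculation of the review's pooled estimate, which comes from a random-effects model.

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN); 0.5 added to each cell to avoid division by zero."""
    tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    return (tp / fn) / (fp / tn)

# hypothetical single-study confusion matrix
sensitivity = 90 / (90 + 10)       # 0.90
specificity = 160 / (160 + 40)     # 0.80
print(sensitivity, specificity, round(diagnostic_odds_ratio(tp=90, fp=40, fn=10, tn=160), 1))
```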
Affiliation(s)
- Rakesh Kumar Sahoo
- School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India
- Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
- Krushna Chandra Sahoo
- Health Technology Assessment in India (HTAIn), Department of Health Research, Ministry of Health & Family Welfare, Govt. of India, New Delhi, India
- Gunjan Kumar
- Kalinga Institute of Dental Sciences, KIIT Deemed to be University, Bhubaneswar, India
- Bhuputra Panda
- School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India
- Sanghamitra Pati
- Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
13. Alotaibi S, Deligianni E. AI in oral medicine: is the future already here? A literature review. Br Dent J 2024; 237:765-770. PMID: 39572810; PMCID: PMC11581975; DOI: 10.1038/s41415-024-8029-9.
Abstract
Objective Artificial intelligence (AI) is reshaping many healthcare disciplines, mainly with newly developed computer systems or machines that have the ability to mimic human intelligence. This paper aims to review the available evidence on the applications of AI in oral medicine. The review critically assesses current evidence, shedding light on AI's growing role in this field. Methods Around 20 applicable studies were included in this review from different databases like PubMed and Google Scholar. Studies included involved original research articles, mini-reviews, systematic reviews and meta-analyses. Results Existing papers on AI uses in oral medicine included fundamental areas such as oral cancer, lichen planus, bisphosphonate-related osteonecrosis of the jaw, odontogenic keratocysts and oral lesion classification. AI has demonstrated remarkable potential in terms of accuracy, sensitivity and specificity. Conclusion The outcomes of the papers suggest that AI holds major potential to help dental practitioners diagnose and manage oral diseases with superior precision. While acknowledging the encouraging results, this paper also underscores the necessity for further research and improvement to fully harness the abilities of AI in oral medicine. It calls attention to the fact that AI, although a valued tool, should supplement rather than replace healthcare professionals.
Affiliation(s)
- Sultan Alotaibi
- Year 5 BDS Student, Division of Dentistry, School of Medical Sciences, FBMH, University of Manchester, UK.
- Eleni Deligianni
- Clinical Lecturer in Oral Medicine, Division of Dentistry, School of Medical Sciences, FBMH, University of Manchester, UK
14. Kumar Y, Shrivastav S, Garg K, Modi N, Wiltos K, Woźniak M, Ijaz MF. Automating cancer diagnosis using advanced deep learning techniques for multi-cancer image classification. Sci Rep 2024; 14:25006. PMID: 39443621; PMCID: PMC11499884; DOI: 10.1038/s41598-024-75876-2.
Abstract
Cancer detection poses a significant challenge for researchers and clinical experts due to its status as the leading cause of global mortality. Early detection is crucial, but traditional cancer detection methods often rely on invasive procedures and time-consuming analyses, creating a demand for more efficient and accurate solutions. This paper addresses these challenges by utilizing automated cancer detection through AI-based techniques, specifically focusing on deep learning models. Convolutional Neural Networks (CNNs), including DenseNet121, DenseNet201, Xception, InceptionV3, MobileNetV2, NASNetLarge, NASNetMobile, InceptionResNetV2, VGG19, and ResNet152V2, are evaluated on image datasets for seven types of cancer: brain, oral, breast, kidney, Acute Lymphocytic Leukemia, lung and colon, and cervical cancer. Initially, images undergo segmentation techniques, followed by contour feature extraction where parameters such as perimeter, area, and epsilon are computed. The models are rigorously evaluated, with DenseNet121 achieving the highest validation accuracy of 99.94%, a loss of 0.0017, and the lowest Root Mean Square Error (RMSE) values of 0.036056 for training and 0.045826 for validation. These results reveal the capability of AI-based techniques to improve cancer detection accuracy, with DenseNet121 emerging as the most effective model in this study.
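The contour features named above (perimeter, area, and the epsilon used for polygon approximation) are standard OpenCV quantities. The sketch below shows how they are typically computed from a binary segmentation mask; it assumes OpenCV is available and is not tied to the paper's pipeline.

```python
import cv2
import numpy as np

# toy binary mask standing in for a segmented lesion
mask = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(mask, (64, 64), 30, 255, -1)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    epsilon = 0.01 * perimeter                    # tolerance for polygon simplification
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    print(f"area={area:.0f}, perimeter={perimeter:.1f}, "
          f"epsilon={epsilon:.2f}, vertices={len(approx)}")
```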
Affiliation(s)
- Yogesh Kumar
- Department of Computer Science and Engineering, School of Technology, PDEU, Gandhinagar, Gujarat, 382426, India
- Kinny Garg
- Department of ECE, AMC Engineering College, Bengaluru, Karnataka, India
- Nandini Modi
- Department of Computer Science and Engineering, School of Technology, PDEU, Gandhinagar, Gujarat, 382426, India.
- Katarzyna Wiltos
- Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, Gliwice, 44100, Poland
- Marcin Woźniak
- Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, Gliwice, 44100, Poland.
- Muhammad Fazal Ijaz
- School of IT and Engineering, Melbourne Institute of Technology, Melbourne, 3000, Australia.
15. Keser G, Pekiner FN, Bayrakdar İŞ, Çelik Ö, Orhan K. A deep learning approach to detection of oral cancer lesions from intra oral patient images: A preliminary retrospective study. J Stomatol Oral Maxillofac Surg 2024; 125:101975. PMID: 39043293; DOI: 10.1016/j.jormas.2024.101975.
Abstract
INTRODUCTION Oral squamous cell carcinomas (OSCC) seen in the oral cavity are a category of diseases that dentists may diagnose and even cure. This study evaluated the performance of diagnostic computer software developed to detect oral cancer lesions in retrospective intra-oral patient images. MATERIALS AND METHODS Oral cancer lesions were labeled with the CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) using the polygonal labeling method on a total of 65 anonymized retrospective intraoral images of oral mucosa from individuals in our clinic whose oral cancer had been diagnosed histopathologically by incisional biopsy. All images were rechecked and verified by experienced experts. This data set was divided into training (n = 53), validation (n = 6) and test (n = 6) sets. The artificial intelligence model was developed using the YOLOv5 architecture, a deep learning approach. Model success was evaluated with a confusion matrix. RESULTS When performance was evaluated on the test images withheld from training, the F1-score, sensitivity and precision of the artificial intelligence model obtained using the YOLOv5 architecture were found to be 0.667, 0.667 and 0.667, respectively. CONCLUSIONS Our study reveals that OSCC lesions carry discriminative visual appearances, which can be identified by a deep learning algorithm. Artificial intelligence shows promise in the prediagnosis of oral cancer lesions. Success rates will increase as training datasets are expanded with more images.
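The reported F1, sensitivity, and precision all equal 0.667, which is what one obtains on a small test set where, for example, 2 of 3 labeled lesions are detected and 1 detection is a false positive. The snippet below reproduces that arithmetic from confusion-matrix counts; the counts themselves are illustrative, not taken from the paper.

```python
def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # recall is the same as sensitivity here
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# illustrative counts that yield the 0.667 figures reported above
print([round(m, 3) for m in detection_metrics(tp=2, fp=1, fn=1)])  # [0.667, 0.667, 0.667]
```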
Affiliation(s)
- Gaye Keser
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Department of Oral Diagnosis and Başıbüyük Sağlık, Marmara University, Yerleşkesi Başıbüyük Yolu 9/3 34854, Maltepe, İstanbul, Turkey.
- Filiz Namdar Pekiner
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Department of Oral Diagnosis and Başıbüyük Sağlık, Marmara University, Yerleşkesi Başıbüyük Yolu 9/3 34854, Maltepe, İstanbul, Turkey
- İbrahim Şevki Bayrakdar
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, Eskişehir, Turkey
- Özer Çelik
- Department of Mathematics and Computer, Faculty of Science and Letters, Eskişehir Osmangazi University, Eskişehir, Turkey
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
16. Kapoor DU, Saini PK, Sharma N, Singh A, Prajapati BG, Elossaily GM, Rashid S. AI illuminates paths in oral cancer: transformative insights, diagnostic precision, and personalized strategies. EXCLI J 2024; 23:1091-1116. PMID: 39391057; PMCID: PMC11464865; DOI: 10.17179/excli2024-7253.
Abstract
Oral cancer has one of the lowest survival rates worldwide despite recent therapeutic advancements, signifying a tenacious challenge in healthcare. Artificial intelligence exhibits noteworthy potential in enhancing diagnostic and treatment procedures, offering promising advancements in healthcare. This review covers the traditional imaging techniques used in oral cancer treatment. The role of artificial intelligence in the prognosis of oral cancer, including predictive modeling, identification of prognostic factors, and risk stratification, is also discussed in this review. The review also encompasses the utilization of artificial intelligence, such as automated image analysis, computer-aided detection and diagnosis, and the integration of machine learning algorithms, for oral cancer diagnosis and treatment. Customizing treatment approaches for oral cancer through artificial intelligence-based personalized medicine is also part of this review. See also the graphical abstract (Fig. 1).
Affiliation(s)
- Devesh U. Kapoor
- Dr. Dayaram Patel Pharmacy College, Bardoli-394601, Gujarat, India
- Pushpendra Kumar Saini
- Department of Pharmaceutics, Sri Balaji College of Pharmacy, Jaipur, Rajasthan-302013, India
- Narendra Sharma
- Department of Pharmaceutics, Sri Balaji College of Pharmacy, Jaipur, Rajasthan-302013, India
- Ankul Singh
- Faculty of Pharmacy, Department of Pharmacology, Dr MGR Educational and Research Institute, Velapanchavadi, Chennai-77, Tamil Nadu, India
- Bhupendra G. Prajapati
- Shree S. K. Patel College of Pharmaceutical Education and Research, Ganpat University, Kherva-384012, Gujarat, India
- Faculty of Pharmacy, Silpakorn University, Nakhon Pathom 73000, Thailand
- Gehan M. Elossaily
- Department of Basic Medical Sciences, College of Medicine, AlMaarefa University, P.O. Box 71666, Riyadh, 11597, Saudi Arabia
- Summya Rashid
- Department of Pharmacology & Toxicology, College of Pharmacy, Prince Sattam Bin Abdulaziz University, P.O. Box 173, Al-Kharj 11942, Saudi Arabia
17. Li J, Kot WY, McGrath CP, Chan BWA, Ho JWK, Zheng LW. Diagnostic accuracy of artificial intelligence assisted clinical imaging in the detection of oral potentially malignant disorders and oral cancer: a systematic review and meta-analysis. Int J Surg 2024; 110:5034-5046. PMID: 38652301; PMCID: PMC11325952; DOI: 10.1097/js9.0000000000001469.
Abstract
BACKGROUND The objective of this study is to examine the application of artificial intelligence (AI) algorithms in detecting oral potentially malignant disorders (OPMD) and oral cancerous lesions, and to evaluate the accuracy variations among different imaging tools employed in these diagnostic processes. MATERIALS AND METHODS A systematic search was conducted in four databases: Embase, Web of Science, PubMed, and Scopus. The inclusion criteria included studies using machine learning algorithms to provide diagnostic information on specific oral lesions, prospective or retrospective design, and inclusion of OPMD. Sensitivity and specificity analyses were also required. Forest plots were generated to display overall diagnostic odds ratio (DOR), sensitivity, specificity, negative predictive values, and summary receiver operating characteristic (SROC) curves. Meta-regression analysis was conducted to examine potential differences among different imaging tools. RESULTS The overall DOR for AI-based screening of OPMD and oral mucosal cancerous lesions from normal mucosa was 68.438 (95% CI= [39.484-118.623], I2 =86%). The area under the SROC curve was 0.938, indicating excellent diagnostic performance. AI-assisted screening showed a sensitivity of 89.9% (95% CI= [0.866-0.925]; I2 =81%), specificity of 89.2% (95% CI= [0.851-0.922], I2 =79%), and a high negative predictive value of 89.5% (95% CI= [0.851-0.927], I2 =96%). Meta-regression analysis revealed no significant difference among the three image tools. After generating a GOSH plot, the DOR was calculated to be 49.30, and the area under the SROC curve was 0.877. Additionally, sensitivity, specificity, and negative predictive value were 90.5% (95% CI [0.873-0.929], I2 =4%), 87.0% (95% CI [0.813-0.912], I2 =49%) and 90.1% (95% CI [0.860-0.931], I2 =57%), respectively. Subgroup analysis showed that clinical photography had the highest diagnostic accuracy. CONCLUSIONS AI-based detection using clinical photography shows a high DOR and is easily accessible in the current era with billions of phone subscribers globally. This indicates that there is significant potential for AI to enhance the diagnostic capabilities of general practitioners to the level of specialists by utilizing clinical photographs, without the need for expensive specialized imaging equipment.
Affiliation(s)
- JingWen Li
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong
- Wai Ying Kot
- Faculty of Dentistry, The University of Hong Kong
- Colman Patrick McGrath
- Division of Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong
- Bik Wan Amy Chan
- Department of Anatomical and Cellular Pathology, Prince of Wales Hospital, The Chinese University of Hong Kong
- Joshua Wing Kei Ho
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong
- Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, People’s Republic of China
- Li Wu Zheng
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong
18. Hsu Y, Chou CY, Huang YC, Liu YC, Lin YL, Zhong ZP, Liao JK, Lee JC, Chen HY, Lee JJ, Chen SJ. Oral mucosal lesions triage via YOLOv7 models. J Formos Med Assoc 2024:S0929-6646(24)00313-9. PMID: 39003230; DOI: 10.1016/j.jfma.2024.07.010.
Abstract
BACKGROUND/PURPOSE The global incidence of lip and oral cavity cancer continues to rise, necessitating improved early detection methods. This study leverages the capabilities of computer vision and deep learning to enhance the early detection and classification of oral mucosal lesions. METHODS A dataset initially consisting of 6903 white-light macroscopic images collected from 2006 to 2013 was expanded to over 50,000 images to train the YOLOv7 deep learning model. Lesions were categorized into three referral grades: benign (green), potentially malignant (yellow), and malignant (red), facilitating efficient triage. RESULTS The YOLOv7 models, particularly the YOLOv7-E6, demonstrated high precision and recall across all lesion categories. The YOLOv7-D6 model excelled at identifying malignant lesions with notable precision, recall, and F1 scores. Enhancements, including the integration of coordinate attention in the YOLOv7-D6-CA model, significantly improved the accuracy of lesion classification. CONCLUSION The study provides a robust comparison of various YOLOv7 model configurations for classifying oral lesions into triage grades. The overall results highlight the potential of deep learning models to contribute to the early detection of oral cancers, offering valuable tools for both clinical settings and remote screening applications.
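Triage in this setting reduces to mapping whatever the detector finds in an image to the most urgent referral grade present. A minimal sketch of that post-processing step is shown below; the class names, confidence threshold, and detection format are assumptions, not the paper's implementation.

```python
# referral grades ordered by urgency
SEVERITY = {"benign": 0, "potentially_malignant": 1, "malignant": 2}
GRADE = {0: "green", 1: "yellow", 2: "red"}

def triage(detections, conf_threshold=0.25):
    """detections: list of (class_name, confidence) pairs from the detector."""
    kept = [SEVERITY[cls] for cls, conf in detections if conf >= conf_threshold]
    return GRADE[max(kept)] if kept else "green"   # no findings -> routine referral

print(triage([("benign", 0.8), ("potentially_malignant", 0.6)]))  # yellow
print(triage([("malignant", 0.4), ("benign", 0.9)]))              # red
```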
Affiliation(s)
- Yu Hsu
- Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan; Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Cheng-Ying Chou
- Department of Biomechatronics Engineering, National Taiwan University, Taipei, Taiwan
- Yu-Cheng Huang
- Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan
- Yu-Chieh Liu
- Department of Biomechatronics Engineering, National Taiwan University, Taipei, Taiwan
- Yong-Long Lin
- Department of Biomechatronics Engineering, National Taiwan University, Taipei, Taiwan
- Zi-Ping Zhong
- Department of Biomechatronics Engineering, National Taiwan University, Taipei, Taiwan
- Jun-Kai Liao
- Department of Biomechatronics Engineering, National Taiwan University, Taipei, Taiwan
- Jun-Ching Lee
- Department of Dentistry, National Taiwan University Hospital, Taipei, Taiwan
- Hsin-Yu Chen
- Department of Dentistry, National Taiwan University Hospital, Taipei, Taiwan
- Jang-Jaer Lee
- Department of Dentistry, National Taiwan University Hospital, Taipei, Taiwan; Department of Dentistry, College of Medicine, National Taiwan University, Taipei, Taiwan.
- Shyh-Jye Chen
- Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan; Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan; Department of Radiology, College of Medicine, National Taiwan University, Taipei, Taiwan.
19. Xie F, Xu P, Xi X, Gu X, Zhang P, Wang H, Shen X. Oral mucosal disease recognition based on dynamic self-attention and feature discriminant loss. Oral Dis 2024; 30:3094-3107. PMID: 37731172; DOI: 10.1111/odi.14732.
Abstract
OBJECTIVES To develop a dynamic self-attention and feature discrimination loss function (DSDF) model for identifying oral mucosal diseases, addressing the problems of data imbalance, complex image backgrounds, and the high similarity and variability of visual characteristics among different types of lesion areas. METHODS In DSDF, the dynamic self-attention network can fully mine the contextual information between adjacent areas, improve the visual representation of the network, and encourage the model to learn and locate the image regions of interest. Then, the feature discrimination loss function is used to constrain the diversity of channel characteristics, so as to enhance the ability to discriminate between locally similar areas. RESULTS The experimental results show that the recognition accuracy of the proposed method for oral mucosal disease is the highest, at 91.16%, about 6% ahead of other advanced methods. In addition, DSDF achieves a recall of 90.87% and an F1 of 90.60%. CONCLUSIONS Convolutional neural networks can effectively capture the visual features of oral mucosal lesions, and the distinguishing visual features of different oral lesions can be better extracted using dynamic self-attention and the feature discrimination loss function, which supports the auxiliary diagnosis of oral mucosal diseases.
Affiliation(s)
- Fei Xie
- Xi'an Key Laboratory of Human-Machine Integration and Control Technology for Intelligent Rehabilitation, Xijing University, Xi'an, China
- School of AOAIR, Xidian University, Xi'an, China
- Pengfei Xu
- School of Information Science and Technology, Northwest University, Xi'an, China
- Xinyi Xi
- School of Information Science and Technology, Northwest University, Xi'an, China
- Xiaokang Gu
- School of Information Science and Technology, Northwest University, Xi'an, China
- Panpan Zhang
- School of Information Science and Technology, Northwest University, Xi'an, China
- Hexu Wang
- Xi'an Key Laboratory of Human-Machine Integration and Control Technology for Intelligent Rehabilitation, Xijing University, Xi'an, China
- Xuemin Shen
- Department of Oral Mucosal Diseases, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
20. Zhang L, Shi R, Youssefi N. Oral cancer diagnosis based on gated recurrent unit networks optimized by an improved version of Northern Goshawk optimization algorithm. Heliyon 2024; 10:e32077. PMID: 38912510; PMCID: PMC11190545; DOI: 10.1016/j.heliyon.2024.e32077.
Abstract
Early diagnosis of oral cancer is a critical task in the field of medical science, and developing sound and effective strategies for early detection is essential. The current research investigates a new strategy for diagnosing oral cancer, based on a combination of effective learning methods and medical imaging, using Gated Recurrent Unit (GRU) networks optimized by an improved version of the Northern Goshawk Optimization (NGO) algorithm. The proposed approach has several advantages over existing methods, including its ability to analyze large and complex datasets, its high accuracy, and its capacity to detect oral cancer at a very early stage. The improved NGO algorithm is utilized to optimize the GRU network, which helps to improve the performance of the network and increase the accuracy of the diagnosis. The paper describes the proposed approach and evaluates its performance using a dataset of oral cancer patients. The findings of the study demonstrate the efficiency of the suggested approach in accurately diagnosing oral cancer.
Collapse
Affiliation(s)
- Lei Zhang
- Department of Stomatology, The Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, 250033, Shandong, China
| | - Rongji Shi
- Department of Stomatology, The Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, 250033, Shandong, China
| | - Naser Youssefi
- Islamic Azad University, Science and Research Branch, Tehran, Iran
- College of Technical Engineering, The Islamic University, Najaf, Iraq
| |
Collapse
|
21
|
Vinayahalingam S, van Nistelrooij N, Rothweiler R, Tel A, Verhoeven T, Tröltzsch D, Kesting M, Bergé S, Xi T, Heiland M, Flügge T. Advancements in diagnosing oral potentially malignant disorders: leveraging Vision transformers for multi-class detection. Clin Oral Investig 2024; 28:364. [PMID: 38849649 PMCID: PMC11161543 DOI: 10.1007/s00784-024-05762-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2024] [Accepted: 06/01/2024] [Indexed: 06/09/2024]
Abstract
OBJECTIVES Diagnosing oral potentially malignant disorders (OPMD) is critical to preventing oral cancer. This study aims to automatically detect and classify the most common pre-malignant oral lesions, such as leukoplakia and oral lichen planus (OLP), and distinguish them from oral squamous cell carcinomas (OSCC) and healthy oral mucosa on clinical photographs using vision transformers. METHODS 4,161 photographs of healthy mucosa, leukoplakia, OLP, and OSCC were included. Findings were annotated pixel-wise and reviewed by three clinicians. The photographs were divided into 3,337 for training and validation and 824 for testing. The training and validation images were further divided into five folds with stratification. A Mask R-CNN with a Swin Transformer was trained five times with cross-validation, and the held-out test split was used to evaluate model performance. Precision, F1-score, sensitivity, specificity, and accuracy were calculated. The area under the receiver operating characteristic curve (AUC) and the confusion matrix of the most effective model were presented. RESULTS The detection of OSCC with the employed model yielded an F1-score of 0.852 and an AUC of 0.974. The detection of OLP had an F1-score of 0.825 and an AUC of 0.948. For leukoplakia, the F1-score was 0.796 and the AUC was 0.938. CONCLUSIONS OSCC was effectively detected with the employed model, whereas the detection of OLP and leukoplakia was moderately effective. CLINICAL RELEVANCE Oral cancer is often detected in advanced stages. The demonstrated technology may support the detection and observation of OPMD to lower the disease burden and identify malignant oral cavity lesions earlier.
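The stratified five-fold split described in the methods can be sketched with scikit-learn as below; the label array is a placeholder standing in for the real annotations, not the study's data.
```python
# Minimal sketch of a stratified five-fold split over class-labeled photographs.
import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = np.random.randint(0, 4, size=3337)   # healthy, leukoplakia, OLP, OSCC (placeholder)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val images")
```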
Collapse
Affiliation(s)
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Nijmegen, the Netherlands
- Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
- Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
| | - Niels van Nistelrooij
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Nijmegen, the Netherlands
- Department of Oral and Maxillofacial Surgery, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt- Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
| | - René Rothweiler
- Department of Oral and Maxillofacial Surgery, Translational Implantology, Medical Center, Faculty of Medicine, University of Freiburg, University of Freiburg, Freiburg, Germany
| | - Alessandro Tel
- Clinic of Maxillofacial Surgery, Head&Neck and Neuroscience Department, University Hospital of Udine, Udine, Italy
| | - Tim Verhoeven
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Nijmegen, the Netherlands
| | - Daniel Tröltzsch
- Department of Oral and Maxillofacial Surgery, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt- Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Marco Kesting
- Department of Oral and Cranio-Maxillofacial Surgery, Friedrich-Alexander-University Erlangen- Nuremberg (FAU), Erlangen, Germany
| | - Stefaan Bergé
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Nijmegen, the Netherlands
| | - Tong Xi
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Nijmegen, the Netherlands
| | - Max Heiland
- Department of Oral and Maxillofacial Surgery, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt- Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Tabea Flügge
- Department of Oral and Maxillofacial Surgery, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt- Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany.
| |
Collapse
|
22
|
Li C, Chen X, Chen C, Gong Z, Pataer P, Liu X, Lv X. Application of deep learning radiomics in oral squamous cell carcinoma-Extracting more information from medical images using advanced feature analysis. J Stomatol Oral Maxillofac Surg 2024; 125:101840. [PMID: 38548062 DOI: 10.1016/j.jormas.2024.101840] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2024] [Revised: 03/07/2024] [Accepted: 03/20/2024] [Indexed: 04/02/2024]
Abstract
OBJECTIVE To conduct a systematic review with meta-analysis assessing the recent scientific literature on the application of deep learning radiomics in oral squamous cell carcinoma (OSCC). MATERIALS AND METHODS Electronic and manual literature retrieval was performed using the PubMed, Web of Science, Embase, Ovid-MEDLINE, and IEEE databases from 2012 to 2023. The ROBINS-I tool was used for quality evaluation, a random-effects model was applied, and results were reported according to the PRISMA statement. RESULTS A total of 26 studies involving 64,731 medical images were included in the quantitative synthesis. The meta-analysis showed pooled sensitivity and specificity of 0.88 (95% CI: 0.87-0.88) and 0.80 (95% CI: 0.80-0.81), respectively. Deeks' asymmetry test revealed slight publication bias (P = 0.03). CONCLUSIONS The advances in applying radiomics combined with deep learning algorithms to OSCC were reviewed, including diagnosis and differential diagnosis, efficacy assessment, and prognosis prediction. The current limitations of deep learning radiomics and its future directions for medical imaging diagnosis were also summarized and analyzed.
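For readers unfamiliar with pooling, the sketch below illustrates inverse-variance pooling of per-study sensitivities on the logit scale; it is a simplified fixed-effect illustration with made-up counts, not the review's actual random-effects analysis.
```python
# Illustrative sketch of inverse-variance pooling of sensitivities on the
# logit scale; the study counts below are hypothetical placeholders.
import numpy as np

tp = np.array([80, 45, 120])   # true positives per study (hypothetical)
fn = np.array([10,  8,  15])   # false negatives per study (hypothetical)

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                      # approximate variance of the logit
w = 1 / var                                # inverse-variance weights
pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"Pooled sensitivity = {pooled_sens:.2f}")
```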
Collapse
Affiliation(s)
- Chenxi Li
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School / Hospital of Stomatology. Urumqi 830054, PR China; Stomatological Research Institute of Xinjiang Uygur Autonomous Region. Urumqi 830054, PR China; Hubei Province Key Laboratory of Oral and Maxillofacial Development and Regeneration, School of Stomatology, Tongji Medical College, Union Hospital, Huazhong University of Science and Technology, Wuhan 430022, PR China.
| | - Xinya Chen
- College of Information Science and Engineering, Xinjiang University. Urumqi 830008, PR China
| | - Cheng Chen
- College of Software, Xinjiang University. Urumqi 830046, PR China
| | - Zhongcheng Gong
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School / Hospital of Stomatology. Urumqi 830054, PR China; Stomatological Research Institute of Xinjiang Uygur Autonomous Region. Urumqi 830054, PR China.
| | - Parekejiang Pataer
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School / Hospital of Stomatology. Urumqi 830054, PR China
| | - Xu Liu
- Department of Maxillofacial Surgery, Hospital of Stomatology, Key Laboratory of Dental-Maxillofacial Reconstruction and Biological Intelligence Manufacturing of Gansu Province, Faculty of Dentistry, Lanzhou University. Lanzhou 730013, PR China
| | - Xiaoyi Lv
- College of Information Science and Engineering, Xinjiang University. Urumqi 830008, PR China; College of Software, Xinjiang University. Urumqi 830046, PR China
| |
Collapse
|
23
|
Soni A, Sethy PK, Dewangan AK, Nanthaamornphong A, Behera SK, Devi B. Enhancing oral squamous cell carcinoma detection: a novel approach using improved EfficientNet architecture. BMC Oral Health 2024; 24:601. [PMID: 38783295 PMCID: PMC11112956 DOI: 10.1186/s12903-024-04307-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Accepted: 04/29/2024] [Indexed: 05/25/2024] Open
Abstract
PROBLEM Oral squamous cell carcinoma (OSCC) is the eighth most prevalent cancer globally, leading to the loss of structural integrity within the oral cavity layers and membranes. Given its high prevalence, early diagnosis is crucial for effective treatment. AIM This study aimed to utilize recent advancements in deep learning for medical image classification to automate the early diagnosis of oral histopathology images, thereby facilitating prompt and accurate detection of oral cancer. METHODS A deep learning convolutional neural network (CNN) model was developed to categorize benign and malignant oral biopsy histopathological images. By leveraging 17 pretrained DL-CNN models, a two-step statistical analysis identified the pretrained EfficientNetB0 model as the best performing. EfficientNetB0 was further enhanced by incorporating a dual attention network (DAN) into the model architecture. RESULTS The improved EfficientNetB0 model demonstrated impressive performance metrics, including an accuracy of 91.1%, sensitivity of 92.2%, specificity of 91.0%, precision of 91.3%, false-positive rate (FPR) of 1.12%, F1 score of 92.3%, Matthews correlation coefficient (MCC) of 90.1%, kappa of 88.8%, and computational time of 66.41%. Notably, this model surpasses the performance of state-of-the-art approaches in the field. CONCLUSION Integrating deep learning techniques, specifically the enhanced EfficientNetB0 model with DAN, shows promising results for the automated early diagnosis of oral cancer through oral histopathology image analysis. This advancement has significant potential for improving the efficacy of oral cancer treatment strategies.
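A simplified sketch of the general architecture idea, an EfficientNetB0 backbone followed by an attention block and a classification head, is shown below; the squeeze-and-excite gate stands in for the paper's dual attention network and is not the authors' exact design.
```python
# Simplified sketch: EfficientNetB0 backbone with a lightweight channel-attention
# block standing in for the dual attention network (DAN).
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    def __init__(self, channels=1280, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze-and-excite weights
        return x * w[:, :, None, None]

backbone = models.efficientnet_b0(weights=None)
model = nn.Sequential(backbone.features, ChannelAttention(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(1280, 2))       # benign vs. malignant logits
logits = model(torch.randn(1, 3, 224, 224))
```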
Collapse
Affiliation(s)
- Aradhana Soni
- Department of Information Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, India
| | | | - Amit Kumar Dewangan
- Department of Information Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, India
| | - Aziz Nanthaamornphong
- College of Computing, Prince of Songkla University, Phuket campus, Phuket, Thailand.
| | | | - Baishnu Devi
- Department of Computer Science and Engineering, VSSUT, Burla, India
| |
Collapse
|
24
|
Ju J, Zhang Q, Guan Z, Shen X, Shen Z, Xu P. NTSM: a non-salient target segmentation model for oral mucosal diseases. BMC Oral Health 2024; 24:521. [PMID: 38698377 PMCID: PMC11639699 DOI: 10.1186/s12903-024-04193-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2024] [Accepted: 03/27/2024] [Indexed: 05/05/2024] Open
Abstract
BACKGROUND Oral mucosal disease lesions resemble the surrounding normal tissue, i.e., they have many non-salient features, which poses a challenge for accurate lesion segmentation. Additionally, high-precision large models have too many parameters, which puts pressure on storage and makes them difficult to deploy on portable devices. METHODS To address these issues, we design a non-salient target segmentation model (NTSM) to improve segmentation performance while reducing the number of parameters. The NTSM includes a difference association (DA) module and multiple feature hierarchy pyramid attention (FHPA) modules. The DA module enhances feature differences at different levels to learn local context information and extend the segmentation mask to potentially similar areas. It also learns logical semantic relationship information through different receptive fields to determine the actual lesions, further improving the segmentation of non-salient lesions. The FHPA module extracts pathological information from different views by performing the Hadamard product attention (HPA) operation on input features, which reduces the number of parameters. RESULTS The experimental results on the oral mucosal diseases (OMD) dataset and the International Skin Imaging Collaboration (ISIC) dataset demonstrate that our model outperforms existing state-of-the-art methods. Compared with the nnU-Net backbone, our model has 43.20% fewer parameters while still achieving a 3.14% increase in the Dice score. CONCLUSIONS Our model has high segmentation accuracy on non-salient areas of oral mucosal diseases and can effectively reduce resource consumption.
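The Hadamard product attention operation can be illustrated with a tiny PyTorch module, where a learned gate is multiplied element-wise with the input features; this is an assumption-laden sketch, not the NTSM code.
```python
# Minimal sketch of a Hadamard-product attention step: a learned gate is
# multiplied element-wise (Hadamard product) with the feature map.
import torch
import torch.nn as nn

class HadamardAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)   # element-wise product with the learned gate

out = HadamardAttention(32)(torch.randn(2, 32, 64, 64))
```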
Collapse
Affiliation(s)
- Jianguo Ju
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| | - Qian Zhang
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| | - Ziyu Guan
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| | - Xuemin Shen
- Department of Oral Mucosal Diseases, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, No.639, Manufacturing Bureau Road, HuangpuShanghai, 200011, China
| | - Zhengyu Shen
- Department of Dermatology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, No.639, Manufacturing Bureau Road, HuangpuShanghai, 200011, China.
| | - Pengfei Xu
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| |
Collapse
|
25
|
Hassona Y. Applications of artificial intelligence in special care dentistry. Spec Care Dentist 2024; 44:952-953. [PMID: 37532677 DOI: 10.1111/scd.12911] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2023] [Revised: 07/19/2023] [Accepted: 07/23/2023] [Indexed: 08/04/2023]
Affiliation(s)
- Yazan Hassona
- School of Dentistry, The University of Jordan, Amman, Jordan
- School of Dentistry, Al Ahliyya Amman University, Amman, Jordan
| |
Collapse
|
26
|
Ramani RS. Revolutionizing oral pathology and medicine: The artificial intelligence advantage. J Oral Pathol Med 2024; 53:233-235. [PMID: 38604744 DOI: 10.1111/jop.13534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2024] [Accepted: 04/03/2024] [Indexed: 04/13/2024]
Affiliation(s)
- Rishi Sanjay Ramani
- Oral Medicine and Oral Cancer (OMOC) Group, Melbourne Dental School, University of Melbourne, Melbourne, Victoria, Australia
| |
Collapse
|
27
|
Warin K, Suebnukarn S. Deep learning in oral cancer- a systematic review. BMC Oral Health 2024; 24:212. [PMID: 38341571 PMCID: PMC10859022 DOI: 10.1186/s12903-024-03993-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Accepted: 02/06/2024] [Indexed: 02/12/2024] Open
Abstract
BACKGROUND Oral cancer is a life-threatening malignancy that affects patients' survival rate and quality of life. The aim of this systematic review was to examine deep learning (DL) studies on the diagnosis and prognostic prediction of oral cancer. METHODS This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies from January 2000 to June 2023. RESULTS Fifty-four studies qualified for inclusion, covering diagnosis (n = 51) and prognostic prediction (n = 3). Thirteen studies showed a low risk of bias in all domains, and 40 studies showed low concern regarding applicability. The reported performance of DL models was 85.0-100% accuracy for classification, 79.31-89.0% F1-score for object detection, 76.0-96.3% Dice coefficient for segmentation, and 0.78-0.95 concordance index for prognostic prediction. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for classification studies. CONCLUSIONS The number of DL studies in oral cancer is increasing, with diverse architectures. The reported accuracy indicated promising DL performance in oral cancer studies, with potential utility for improving informed clinical decision-making in oral cancer.
Collapse
Affiliation(s)
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand.
| | | |
Collapse
|
28
|
Lee SJ, Oh HJ, Son YD, Kim JH, Kwon IJ, Kim B, Lee JH, Kim HK. Enhancing deep learning classification performance of tongue lesions in imbalanced data: mosaic-based soft labeling with curriculum learning. BMC Oral Health 2024; 24:161. [PMID: 38302981 PMCID: PMC10832072 DOI: 10.1186/s12903-024-03898-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Accepted: 01/15/2024] [Indexed: 02/03/2024] Open
Abstract
BACKGROUND Oral potentially malignant disorders (OPMDs) are associated with an increased risk of cancer of the oral cavity, including the tongue. The early detection of oral cavity cancers and OPMDs is critical for reducing cancer-specific morbidity and mortality. Recently, studies have applied the rapidly advancing technology of deep learning to diagnosing oral cavity cancer and OPMDs. However, several challenging issues, such as class imbalance, must be resolved to effectively train a deep learning model for medical imaging classification tasks. The aim of this study was to evaluate a new artificial intelligence technique to improve classification performance on an imbalanced tongue lesion dataset. METHODS A total of 1,810 tongue images were used for the classification. The class-imbalanced dataset consisted of 372 instances of cancer, 141 instances of OPMDs, and 1,297 instances of noncancerous lesions. The EfficientNet model was used as the feature extraction model for classification. Mosaic data augmentation, soft labeling, and curriculum learning (CL) were employed to improve the classification performance of the convolutional neural network. RESULTS Utilizing a mosaic-augmented dataset in conjunction with CL, the final model achieved an accuracy rate of 0.9444, surpassing conventional oversampling and weight balancing methods. The relative precision improvement rate for the minority class OPMD was 21.2%, while the relative F1-score improvement rate for OPMD was 4.9%. CONCLUSIONS The present study demonstrates that the integration of mosaic-based soft labeling and curriculum learning improves the classification performance of tongue lesions compared to previous methods, establishing a foundation for future research on effectively learning from imbalanced data.
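Mosaic-based soft labeling can be sketched as follows: a 2x2 mosaic is assembled from four tiles and its label is the area-weighted mixture of the tile labels. The example assumes three classes and equal-sized tiles and is not the authors' implementation.
```python
# Illustrative sketch of mosaic augmentation with soft labels
# (three classes, four equal-sized tiles).
import numpy as np

def mosaic_with_soft_label(tiles, labels, num_classes=3):
    top = np.concatenate([tiles[0], tiles[1]], axis=1)
    bottom = np.concatenate([tiles[2], tiles[3]], axis=1)
    image = np.concatenate([top, bottom], axis=0)
    soft = np.zeros(num_classes)
    for lab in labels:                 # equal tile areas -> equal weights of 0.25
        soft[lab] += 0.25
    return image, soft

tiles = [np.random.rand(64, 64, 3) for _ in range(4)]
image, soft_label = mosaic_with_soft_label(tiles, labels=[0, 0, 1, 2])
print(soft_label)                      # e.g. [0.5, 0.25, 0.25]
```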
Collapse
Affiliation(s)
- Sung-Jae Lee
- Department of Biomedical Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea
| | - Hyun Jun Oh
- Oral Oncology Clinic, National Cancer Center, Goyang, Republic of Korea
| | - Young-Don Son
- Department of Biomedical Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea
- Neuroscience Research Institute, Gachon Advanced Institute for Health Science and Technology, Gachon University, Incheon, Republic of Korea
| | - Jong-Hoon Kim
- Neuroscience Research Institute, Gachon Advanced Institute for Health Science and Technology, Gachon University, Incheon, Republic of Korea
- Department of Psychiatry, Gachon University College of Medicine, Gil Medical Center, Incheon, Republic of Korea
| | - Ik-Jae Kwon
- Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital, Seoul, Republic of Korea
- Dental Research Institute, Seoul National University, Seoul, Republic of Korea
| | - Bongju Kim
- Dental Life Science Research Institute, Seoul National University Dental Hospital, Seoul, Republic of Korea
| | - Jong-Ho Lee
- Oral Oncology Clinic, National Cancer Center, Goyang, Republic of Korea.
- Dental Life Science Research Institute, Seoul National University Dental Hospital, Seoul, Republic of Korea.
| | - Hang-Keun Kim
- Department of Biomedical Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea.
- Neuroscience Research Institute, Gachon Advanced Institute for Health Science and Technology, Gachon University, Incheon, Republic of Korea.
| |
Collapse
|
29
|
Rokhshad R, Mohammad-Rahimi H, Price JB, Shoorgashti R, Abbasiparashkouh Z, Esmaeili M, Sarfaraz B, Rokhshad A, Motamedian SR, Soltani P, Schwendicke F. Artificial intelligence for classification and detection of oral mucosa lesions on photographs: a systematic review and meta-analysis. Clin Oral Investig 2024; 28:88. [PMID: 38217733 DOI: 10.1007/s00784-023-05475-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Accepted: 12/21/2023] [Indexed: 01/15/2024]
Abstract
OBJECTIVE This study aimed to review and synthesize studies using artificial intelligence (AI) for classifying, detecting, or segmenting oral mucosal lesions on photographs. MATERIALS AND METHODS Inclusion criteria were (1) studies employing AI to (2) classify, detect, or segment oral mucosa lesions, (3) on oral photographs of human subjects. Included studies were assessed for risk of bias using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2). A search of PubMed, Scopus, Embase, Web of Science, IEEE, arXiv, medRxiv, and grey literature (Google Scholar) was conducted until June 2023, without language limitation. RESULTS After initial searching, 36 eligible studies (from 8,734 identified records) were included. Based on QUADAS-2, only 7% of studies were at low risk of bias for all domains. Studies employed different AI models and reported a wide range of outcomes and metrics. The accuracy of AI for detecting oral mucosal lesions ranged from 74% to 100%, while that of clinicians unaided by AI ranged from 61% to 98%. The pooled diagnostic odds ratio for studies that evaluated AI for diagnosing or discriminating potentially malignant lesions was 155 (95% confidence interval 23-1019), while that for cancerous lesions was 114 (59-221). CONCLUSIONS AI may assist in oral mucosa lesion screening, although the expected accuracy gains and further health benefits remain unclear. CLINICAL RELEVANCE Artificial intelligence may assist oral mucosa lesion screening and foster more targeted testing and referral in the hands of non-specialist providers, for example. It remains unclear whether accuracy gains compared with specialist assessment can be realized.
Collapse
Affiliation(s)
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
| | - Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjoo Blvd, Evin, Shahid Chamran Highway, Tehran, Postal Code: 1983963113, Iran
| | - Jeffery B Price
- Department of Oncology and Diagnostic Sciences, University of Maryland, School of Dentistry, Baltimore, Maryland 650 W Baltimore St, Baltimore, MD, 21201, USA
| | - Reyhaneh Shoorgashti
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, 9Th Neyestan, Pasdaran, Tehran, Iran
| | | | - Mahdieh Esmaeili
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, 9Th Neyestan, Pasdaran, Tehran, Iran
| | - Bita Sarfaraz
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjoo Blvd, Evin, Shahid Chamran Highway, Tehran, Postal Code: 1983963113, Iran
| | - Arad Rokhshad
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, 9Th Neyestan, Pasdaran, Tehran, Iran
| | - Saeed Reza Motamedian
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjoo Blvd, Evin, Shahid Chamran Highway, Tehran, Postal Code: 1983963113, Iran.
| | - Parisa Soltani
- Department of Oral and Maxillofacial Radiology, Dental Implants Research Center, Dental Research Institute, School of Dentistry, Isfahan University of Medical Sciences, Salamat Blv, Isfahan Dental School, Isfahan, Iran
- Department of Neurosciences, Reproductive and Odontostomatological Sciences, University of Naples Federico II, Nepales, Italy
| | - Falk Schwendicke
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Charitépl. 1, 10117, Berlin, Germany
| |
Collapse
|
30
|
Hsieh ST, Cheng YA. Multimodal feature fusion in deep learning for comprehensive dental condition classification. J Xray Sci Technol 2024; 32:303-321. [PMID: 38217632 DOI: 10.3233/xst-230271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2024]
Abstract
BACKGROUND Dental health issues are on the rise, necessitating prompt and precise diagnosis. Automated dental condition classification can support this need. OBJECTIVE The study aims to evaluate the effectiveness of deep learning methods and multimodal feature fusion techniques in advancing the field of automated dental condition classification. METHODS AND MATERIALS A dataset of 11,653 clinically sourced images representing six prevalent dental conditions-caries, calculus, gingivitis, tooth discoloration, ulcers, and hypodontia-was utilized. Features were extracted using five Convolutional Neural Network (CNN) models, then fused into a matrix. Classification models were constructed using Support Vector Machines (SVM) and Naive Bayes classifiers. Evaluation metrics included accuracy, recall rate, precision, and Kappa index. RESULTS The SVM classifier integrated with feature fusion demonstrated superior performance with a Kappa index of 0.909 and accuracy of 0.925. This significantly surpassed individual CNN models such as EfficientNetB0, which achieved a Kappa of 0.814 and accuracy of 0.847. CONCLUSIONS The amalgamation of feature fusion with advanced machine learning algorithms can significantly bolster the precision and robustness of dental condition classification systems. Such a method presents a valuable tool for dental professionals, facilitating enhanced diagnostic accuracy and subsequently improved patient outcomes.
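Feature-level fusion of this kind can be sketched with scikit-learn by concatenating per-backbone feature matrices before classification; the random arrays below merely stand in for real CNN features and labels, and are not the study's data.
```python
# Simplified sketch of feature fusion followed by an SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

np.random.seed(0)
n_samples = 200
feats = [np.random.rand(n_samples, d) for d in (1280, 2048, 512)]  # per-CNN features (placeholders)
X = np.concatenate(feats, axis=1)          # fused feature matrix
y = np.random.randint(0, 6, n_samples)     # six dental condition labels (placeholders)

svm = SVC(kernel="rbf")
print(cross_val_score(svm, X, y, cv=3).mean())
```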
Collapse
Affiliation(s)
- Shang-Ting Hsieh
- Department of Health Beauty, Fooyin University, Kaohsiung City, Taiwan
| | - Ya-Ai Cheng
- Department of Healthcare Administration, I-Shou University, Kaohsiung City, Taiwan
| |
Collapse
|
31
|
Rochefort J, Radoi L, Campana F, Fricain JC, Lescaille G. [Oral cavity cancer: A distinct entity]. Med Sci (Paris) 2024; 40:57-63. [PMID: 38299904 DOI: 10.1051/medsci/2023196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2024] Open
Abstract
Oral squamous cell carcinoma represents the 17th most frequent cancer in the world. The main risk factors are alcohol and tobacco consumption, but dietary, familial, genetic, or oral disease factors may also be involved in oral carcinogenesis. Diagnosis is made on biopsy, but detection remains late, leading to a poor prognosis. New technologies, notably artificial intelligence and the quantitative evaluation of salivary biological markers, could reduce these delays. Currently, the management of oral cancer consists of surgery, which can be mutilating despite possible reconstruction. In the future, immunotherapies could become a therapeutic alternative, and the immune microenvironment could constitute a source of prognostic markers.
Collapse
Affiliation(s)
- Juliette Rochefort
- Assistance Publique-Hôpitaux de Paris (AP-HP), Groupe hospitalier Pitié-Salpêtrière, Service de médecine bucco-dentaire, Paris, France - Faculté d'odontologie, université Paris Cité, Paris, France - Sorbonne université, Inserm U.1135, Centre d'immunologie et des maladies infectieuses, CIMI-Paris, Paris, France
| | - Lorédana Radoi
- Faculté d'odontologie, université Paris Cité, Paris, France - Centre de recherche en épidémiologie et santé des populations, Inserm U1018, université Paris Saclay
| | - Fabrice Campana
- Aix Marseille Univ, Assistance Publique-Hôpitaux de Marseille (AP-HM), Timone Hospital, Oral Surgery Department, Marseille, France
| | - Jean-Christophe Fricain
- CHU Bordeaux, Dentistry and Oral Health Department, F-33404 Bordeaux, France - Inserm U1026, université de Bordeaux, Tissue Bioengineering (BioTis), F-33076 Bordeaux, France
| | - Géraldine Lescaille
- Assistance Publique-Hôpitaux de Paris (AP-HP), Groupe hospitalier Pitié-Salpêtrière, Service de médecine bucco-dentaire, Paris, France - Faculté d'odontologie, université Paris Cité, Paris, France - Sorbonne université, Inserm U.1135, Centre d'immunologie et des maladies infectieuses, CIMI-Paris, Paris, France
| |
Collapse
|
32
|
Ou-Yang S, Han S, Sun D, Wu H, Chen J, Cai Y, Yin D, Ou-Yang H, Liao L. The preliminary in vitro study and application of deep learning algorithm in cone beam computed tomography image implant recognition. Sci Rep 2023; 13:18467. [PMID: 37891408 PMCID: PMC10611753 DOI: 10.1038/s41598-023-45757-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2023] [Accepted: 10/23/2023] [Indexed: 10/29/2023] Open
Abstract
To properly repair and maintain dental implants, which replace natural tooth roots as bone tissue implants, it is crucial to accurately identify their brand and specification. Deep learning has demonstrated outstanding capabilities in image identification and classification by learning the inherent rules and levels of representation of data. The purpose of this study was to evaluate deep learning algorithms and their supporting application software for their ability to recognize and categorize three-dimensional (3D) Cone Beam Computed Tomography (CBCT) images of dental implants. Using CBCT technology, 3D imaging data of 27 implants of various sizes and brands were obtained. Following manual processing, the data were transformed into a dataset of 13,500 two-dimensional images. Nine deep learning algorithms, including GoogleNet, InceptionResNetV2, InceptionV3, ResNet50, ResNet50V2, ResNet101, ResNet101V2, ResNet152 and ResNet152V2, were applied to the data. Accuracy rates, confusion matrices, ROC curves, AUC values, number of model parameters, and training times were used to assess the efficacy of these algorithms. The nine algorithms achieved training accuracy rates of 100%, 99.3%, 89.3%, 99.2%, 99.1%, 99.5%, 99.4%, 99.5%, and 98.9%; test accuracy rates of 98.3%, 97.5%, 94.8%, 85.4%, 92.5%, 80.7%, 93.6%, 93.2%, and 99.3%; and area under the curve (AUC) values of 1.00 in all cases. When used to identify implants, all nine algorithms performed satisfactorily, with ResNet152V2 achieving the highest test accuracy, classification accuracy, and area under the receiver operating characteristic curve. The results show that ResNet152V2 has the best classification performance for identifying implants. The artificial intelligence identification system and application software based on this algorithm can efficiently and accurately identify the brands and specifications of the 27 classified implants from processed 3D CBCT images in vitro, with high stability and low recognition cost.
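A backbone comparison of this kind can be sketched with torchvision as below; this is illustrative only (the V2 pre-activation variants used in the study are not part of torchvision, so standard ResNets stand in here), and the forward pass is a stub rather than a training loop.
```python
# Illustrative sketch (not the study's code): comparing several torchvision
# ResNet variants with a 27-class implant head on placeholder input.
import torch
import torch.nn as nn
from torchvision import models

builders = {"resnet50": models.resnet50,
            "resnet101": models.resnet101,
            "resnet152": models.resnet152}

num_classes = 27                                   # 27 implant types
for name, build in builders.items():
    net = build(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # replace classifier head
    with torch.no_grad():
        logits = net(torch.randn(2, 3, 224, 224))  # stand-in for a CBCT slice batch
    print(name, logits.shape)                      # torch.Size([2, 27])
```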
Collapse
Affiliation(s)
- Shaobo Ou-Yang
- The Affiliated Stomatological Hospital of Nanchang University, The Key Laboratory of Oral Biomedicine, Jiangxi Province Clinical Research Centre for Oral Diseases, Nanchang, Jiangxi Province, China
| | - Shuqin Han
- The Affiliated Stomatological Hospital of Nanchang University, The Key Laboratory of Oral Biomedicine, Jiangxi Province Clinical Research Centre for Oral Diseases, Nanchang, Jiangxi Province, China
| | - Dan Sun
- Information Security Evaluation Section, Jiangxi Science and Technology Infrastructure Center, Nanchang, China
| | - Hongping Wu
- Vocational Teachers College, Jiangxi Agricultural University, Nanchang, China
| | - Jianping Chen
- Vocational Teachers College, Jiangxi Agricultural University, Nanchang, China
| | - Ying Cai
- The Affiliated Stomatological Hospital of Nanchang University, The Key Laboratory of Oral Biomedicine, Jiangxi Province Clinical Research Centre for Oral Diseases, Nanchang, Jiangxi Province, China
| | - Dongmei Yin
- The Affiliated Stomatological Hospital of Nanchang University, The Key Laboratory of Oral Biomedicine, Jiangxi Province Clinical Research Centre for Oral Diseases, Nanchang, Jiangxi Province, China
| | - Huidan Ou-Yang
- Vocational Teachers College, Jiangxi Agricultural University, Nanchang, China.
| | - Lan Liao
- The Affiliated Stomatological Hospital of Nanchang University, The Key Laboratory of Oral Biomedicine, Jiangxi Province Clinical Research Centre for Oral Diseases, Nanchang, Jiangxi Province, China.
- School of Stomatology, Nanchang University, The Key Laboratory of Oral Biomedicine, Jiangxi Province, Jiangxi Province Clinical Research Center for Oral Diseases, Nanchang, China.
- Clinical Medical Research Center, Affiliated Hospital of Jinggangshan University, Medical Department of Jinggangshan University, Ji'an, Jiangxi Province, People's Republic of China.
- The Key Laboratory of Oral Biomedicine, The Affiliated Stomatological Hospital of Nanchang University, The Affiliated Hospital of Jinggangshan University, Nanchang, Jiangxi Province, China.
| |
Collapse
|
33
|
Badawy M, Balaha HM, Maklad AS, Almars AM, Elhosseini MA. Revolutionizing Oral Cancer Detection: An Approach Using Aquila and Gorilla Algorithms Optimized Transfer Learning-Based CNNs. Biomimetics (Basel) 2023; 8:499. [PMID: 37887629 PMCID: PMC10604828 DOI: 10.3390/biomimetics8060499] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Revised: 10/11/2023] [Accepted: 10/17/2023] [Indexed: 10/28/2023] Open
Abstract
The early detection of oral cancer is pivotal for improving patient survival rates. However, the high cost of manual initial screenings poses a challenge, especially in resource-limited settings. Deep learning offers an enticing solution by enabling automated and cost-effective screening. This study introduces a groundbreaking empirical framework designed to revolutionize the accurate and automatic classification of oral cancer using microscopic histopathology slide images. This innovative system capitalizes on the power of convolutional neural networks (CNNs), strengthened by the synergy of transfer learning (TL), and further fine-tuned using the novel Aquila Optimizer (AO) and Gorilla Troops Optimizer (GTO), two cutting-edge metaheuristic optimization algorithms. This integration is a novel approach, addressing bias and unpredictability issues commonly encountered in the preprocessing and optimization phases. In the experiments, the capabilities of well-established pre-trained TL models, including VGG19, VGG16, MobileNet, MobileNetV3Small, MobileNetV2, MobileNetV3Large, NASNetMobile, and DenseNet201, all initialized with 'ImageNet' weights, were harnessed. The experimental dataset consisted of the Histopathologic Oral Cancer Detection dataset, which includes a 'normal' class with 2494 images and an 'OSCC' (oral squamous cell carcinoma) class with 2698 images. The results reveal a remarkable performance distinction between the AO and GTO, with the AO consistently outperforming the GTO across all models except for the Xception model. The DenseNet201 model stands out as the most accurate, achieving an astounding average accuracy rate of 99.25% with the AO and 97.27% with the GTO. This innovative framework signifies a significant leap forward in automating oral cancer detection, showcasing the tremendous potential of applying optimized deep learning models in the realm of healthcare diagnostics. The integration of the AO and GTO in our CNN-based system not only pushes the boundaries of classification accuracy but also underscores the transformative impact of metaheuristic optimization techniques in the field of medical image analysis.
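The role of the metaheuristic can be illustrated with a toy population-based hyperparameter search; the loop below is a generic random-search-with-elitism sketch standing in for the Aquila and Gorilla Troops optimizers, and the objective function is a placeholder for "train the CNN and return validation accuracy".
```python
# Toy sketch of population-based hyperparameter search (not the AO/GTO algorithms).
import random

def objective(lr, dropout):
    # placeholder for "train the transfer-learning CNN, return validation accuracy"
    return -((lr - 1e-3) ** 2) * 1e6 - (dropout - 0.3) ** 2

population = [(10 ** random.uniform(-5, -1), random.uniform(0.0, 0.7))
              for _ in range(20)]
for _ in range(30):                                   # simple evolution loop
    population.sort(key=lambda p: objective(*p), reverse=True)
    elite = population[:5]                            # keep the best candidates
    population = elite + [(lr * random.uniform(0.5, 1.5),
                           min(0.9, max(0.0, d + random.uniform(-0.1, 0.1))))
                          for lr, d in random.choices(elite, k=15)]
best = max(population, key=lambda p: objective(*p))
print(f"best lr={best[0]:.2e}, dropout={best[1]:.2f}")
```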
Collapse
Affiliation(s)
- Mahmoud Badawy
- Department of Computer Science and Informatics, Applied College, Taibah University, Al Madinah Al Munawwarah 41461, Saudi Arabia
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt (M.A.E.)
| | - Hossam Magdy Balaha
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt (M.A.E.)
- Department of Bioengineering, Speed School of Engineering, University of Louisville, Louisville, KY 40208, USA
| | - Ahmed S. Maklad
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia; (A.S.M.); (A.M.A.)
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suif 62521, Egypt
| | - Abdulqader M. Almars
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia; (A.S.M.); (A.M.A.)
| | - Mostafa A. Elhosseini
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt (M.A.E.)
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia; (A.S.M.); (A.M.A.)
| |
Collapse
|
34
|
Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023; 95:52-74. [PMID: 37473825 DOI: 10.1016/j.semcancer.2023.07.002] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Revised: 07/11/2023] [Accepted: 07/15/2023] [Indexed: 07/22/2023]
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs continues to remain subdued. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, inclusive of machine learning (ML), neural networks (NNs), and deep learning (DL), when amalgamated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Collapse
Affiliation(s)
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Han-Qi Wang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Xin-Yue Huang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
| | - Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
| |
Collapse
|
35
|
Araújo ALD, de Souza ESC, Faustino ISP, Saldivia-Siracusa C, Brito-Sarracino T, Lopes MA, Vargas PA, Pearson AT, Kowalski LP, de Carvalho ACPDLF, Santos-Silva AR. Clinicians' perception of oral potentially malignant disorders: a pitfall for image annotation in supervised learning. Oral Surg Oral Med Oral Pathol Oral Radiol 2023; 136:315-321. [PMID: 37037738 DOI: 10.1016/j.oooo.2023.02.018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 02/07/2023] [Accepted: 02/22/2023] [Indexed: 03/09/2023]
Abstract
OBJECTIVE The present study aimed to quantify clinicians' perceptions of oral potentially malignant disorders (OPMDs) when evaluating, classifying, and manually annotating clinical images, as well as to understand the sources of inter-observer variability when assessing these lesions. The hypothesis was that different interpretations could affect the quality of the annotations used to train a supervised learning model. STUDY DESIGN Forty-six clinical images from 37 patients were reviewed, classified, and manually annotated at the pixel level by 3 labelers. Inter-examiner assessment based on clinical criteria was compared using the κ statistic (Fleiss's kappa). The segmentations were also compared using the mean pixel-wise intersection over union (IoU). RESULTS The inter-observer agreement for the homogeneous/non-homogeneous criterion was substantial (κ = 0.63, 95% CI: 0.47-0.80). For the subclassification of non-homogeneous lesions, the inter-observer agreement was moderate (κ = 0.43, 95% CI: 0.34-0.53) (P < .001). The mean IoU of 0.53 (±0.22) was considered low. CONCLUSION Subjective clinical assessment (based on human visual observation, criteria that have been adjusted over the years, different educational backgrounds, and personal experience) may explain the inter-observer discordance in the classification and annotation of OPMD. There is therefore a strong probability of transferring the subjectivity of human analysis to artificial intelligence models. The use of large datasets and segmentation based on the union of all labelers' annotations holds the potential to overcome this limitation.
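Pixel-wise IoU between two annotators' masks, the agreement measure reported above, can be computed as in the following minimal sketch; random masks stand in for real annotations.
```python
# Minimal sketch of pixel-wise intersection over union (IoU) between two binary masks.
import numpy as np

def pixel_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0   # identical empty masks agree fully

a = np.random.rand(128, 128) > 0.5
b = np.random.rand(128, 128) > 0.5
print(f"IoU = {pixel_iou(a, b):.2f}")
```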
Collapse
Affiliation(s)
- Anna Luíza Damaceno Araújo
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil; Head and Neck Surgery Department, University of São Paulo Medical School (UFMUSP), São Paulo, São Paulo, Brazil
| | | | | | - Cristina Saldivia-Siracusa
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
| | - Tamires Brito-Sarracino
- Institute of Mathematics and Computer Sciences, University of São Paulo (ICMC-USP), São Carlos, São Paulo, Brazil
| | - Márcio Ajudarte Lopes
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
| | - Pablo Agustin Vargas
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
| | - Alexander T Pearson
- Section of Hemathology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA; University of Chicago Comprehensive Cancer Center, Chicago, IL, USA
| | - Luiz Paulo Kowalski
- Department of Head and Neck Surgery and Otorhinolaryngology, A.C. Camargo Cancer Center, São Paulo, São Paulo, Brazil; Head and Neck Surgery Department and LIM 28, University of São Paulo Medical School, São Paulo, São Paulo, Brazil
| | | | - Alan Roger Santos-Silva
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil.
| |
Collapse
|
36
|
Optimal deep learning neural network using ISSA for diagnosing the oral cancer. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/03/2023]
|
37
|
Ünsal G, Chaurasia A, Akkaya N, Chen N, Abdalla-Aslan R, Koca RB, Orhan K, Roganovic J, Reddy P, Wahjuningrum DA. Deep convolutional neural network algorithm for the automatic segmentation of oral potentially malignant disorders and oral cancers. Proc Inst Mech Eng H 2023; 237:719-726. [PMID: 37222098 DOI: 10.1177/09544119231176116] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
This study aimed to develop an algorithm to automatically segment oral potentially malignant disorders (OPMDs) and oral cancers (OCs) across all oral subsites using deep convolutional neural network applications. A total of 510 intraoral images of OPMDs and OCs were collected over 3 years (2006-2009). All images were confirmed against both patient records and histopathological reports. Following labeling of the lesions, the dataset was randomly split in Python into study (training), validation, and test datasets. Pixels belonging to OPMDs and OCs were assigned the OPMD/OC label and the remaining pixels were labeled as background. A U-Net architecture was used, and the model with the best validation loss across the 500 training epochs was chosen for testing. The Dice similarity coefficient (DSC) was recorded. The intra-observer ICC was 0.994, while the inter-observer reliability was 0.989. The calculated DSC and validation accuracy across all clinical images were 0.697 and 0.805, respectively. Our algorithm did not achieve an excellent DSC, for multiple reasons related to detecting both OCs and OPMDs across oral cavity sites. Better standardization of both 2D and 3D imaging (such as patient positioning) and a larger dataset are required to improve the quality of such studies. This is the first study aiming to segment OPMDs and OCs in all subsites of the oral cavity, which is crucial not only for early diagnosis but also for higher survival rates.
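The Dice similarity coefficient used to score the segmentations can be computed as in the short sketch below; the random masks are placeholders for a prediction and its reference annotation.
```python
# Minimal sketch of the Dice similarity coefficient (DSC) for binary masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.random.rand(256, 256) > 0.5
target = np.random.rand(256, 256) > 0.5
print(f"DSC = {dice(pred, target):.3f}")
```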
Collapse
Affiliation(s)
- Gürkan Ünsal
- Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Near East University, Nicosia, Cyprus
| | - Akhilanand Chaurasia
- Faculty of Dental Sciences, King George's Medical University, Lucknow, Uttar Pradesh, India
| | - Nurullah Akkaya
- Department of Computer Engineering, Artificial Intelligence Research Centre, Near East University, Nicosia, Cyprus
| | - Nadler Chen
- Department of Oral Medicine, Hadassah School of Dental Medicine, Hebrew University, Sedation and Maxillofacial Imaging, Hebrew, Israel
| | - Ragda Abdalla-Aslan
- Department of Oral Medicine, Hadassah School of Dental Medicine, Hebrew University, Sedation and Maxillofacial Imaging, Hebrew, Israel
| | - Revan Birke Koca
- Faculty of Dentistry, Department of Periodontology, University of Kyrenia, Kyrenia, Cyprus
| | - Kaan Orhan
- Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Ankara University, Ankara, Turkey
| | - Jelena Roganovic
- Department of Pharmacology in Dentistry, School of Dental Medicine, University of Belgrade, Belgrade, Serbia
| | - Prashanti Reddy
- Department of Oral Medicine and Radiology, Government Dental College, Indore, Madhya Pradesh, India
| | | |
Collapse
|
38
|
de Souza LL, Fonseca FP, Araújo ALD, Lopes MA, Vargas PA, Khurram SA, Kowalski LP, Dos Santos HT, Warnakulasuriya S, Dolezal J, Pearson AT, Santos-Silva AR. Machine learning for detection and classification of oral potentially malignant disorders: A conceptual review. J Oral Pathol Med 2023; 52:197-205. [PMID: 36792771 DOI: 10.1111/jop.13414] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Accepted: 12/09/2022] [Indexed: 02/17/2023]
Abstract
Oral potentially malignant disorders are precursor lesions that may undergo malignant transformation to oral cancer. Many known risk factors are associated with the development of oral potentially malignant disorders and contribute to the risk of malignant transformation. Although many advances have been reported in understanding the biological behavior of oral potentially malignant disorders, the clinical features that indicate malignant transformation are not well established. Early diagnosis of malignancy is the most important factor for improving patients' prognosis. The integration of machine learning into routine diagnosis has recently emerged as an adjunct to clinical examination. The increased performance of AI-assisted medical devices is claimed to exceed human capability in the clinical detection of early cancer. Therefore, the aim of this narrative review is to introduce the artificial intelligence terminology, concepts, and models currently used in oncology in order to familiarize oral medicine scientists with the language skills, best research practices, and knowledge needed to develop machine learning models applied to the clinical detection of oral potentially malignant disorders.
Collapse
Affiliation(s)
- Lucas Lacerda de Souza
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
| | - Felipe Paiva Fonseca
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Department of Oral Surgery and Pathology, School of Dentistry, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
| | | | - Marcio Ajudarte Lopes
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
| | - Pablo Agustin Vargas
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
| | - Syed Ali Khurram
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Luiz Paulo Kowalski
- Department of Head and Neck Surgery, University of Sao Paulo Medical School and Department of Head and Neck Surgery and Otorhinolaryngology, AC Camargo Cancer Center, Sao Paulo, Brazil
| | - Harim Tavares Dos Santos
- Department of Otolaryngology-Head and Neck Surgery, University of Missouri, Columbia, Missouri, USA
- Department of Bond Life Sciences Center, University of Missouri, Columbia, Missouri, USA
| | - Saman Warnakulasuriya
- King's College London, London, UK
- WHO Collaborating Centre for Oral Cancer, London, UK
| | - James Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, Illinois, USA
| | - Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, Illinois, USA
| | - Alan Roger Santos-Silva
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
| |
Collapse
|
39
|
Shanmugam DK, Anitha SC, Najimudeen RA, Saravanan M, Arockiaraj J, Belete MA. Conspectus on nanodiagnostics as an incipient platform for detection of oral potentially malignant disorders and oral squamous cell carcinoma. Int J Surg 2023; 109:542-544. [PMID: 36906784 PMCID: PMC10389231 DOI: 10.1097/js9.0000000000000021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Accepted: 12/06/2022] [Indexed: 03/13/2023]
Affiliation(s)
| | | | | | - Muthupandian Saravanan
- AMR and Nanomedicine Laboratory, Department of Pharmacology, Saveetha Dental College, Saveetha Institute of Medical and Technical Sciences (SIMATS), Chennai, India
| | - Jesu Arockiaraj
- Department of Biotechnology, College of Science and Humanities, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India
| | - Melaku A. Belete
- Department of Medical Laboratory Science, College of Medicine and Health Sciences, Wollo University, Dessie, Ethiopia
| |
Collapse
|
40
|
Araújo ALD, da Silva VM, Kudo MS, de Souza ESC, Saldivia-Siracusa C, Giraldo-Roldán D, Lopes MA, Vargas PA, Khurram SA, Pearson AT, Kowalski LP, de Carvalho ACPDLF, Santos-Silva AR, Moraes MC. Machine learning concepts applied to oral pathology and oral medicine: A convolutional neural networks' approach. J Oral Pathol Med 2023; 52:109-118. [PMID: 36599081 DOI: 10.1111/jop.13397] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 12/05/2022] [Accepted: 12/15/2022] [Indexed: 01/06/2023]
Abstract
INTRODUCTION Artificial intelligence models and networks can learn and process dense information in a short time, leading to efficient, objective, and accurate clinical and histopathological analysis, which can be useful for improving treatment modalities and prognostic outcomes. This paper targets oral pathologists, oral medicine specialists, and head and neck surgeons, providing them with a theoretical and conceptual foundation for artificial intelligence-based diagnostic approaches, with a special focus on convolutional neural networks, the state of the art in artificial intelligence and deep learning. METHODS The authors conducted a literature review, and the conceptual foundations and functionality of convolutional neural networks were illustrated from an interdisciplinary point of view. CONCLUSION The development of artificial intelligence-based models and computer vision methods for pattern recognition in clinical and histopathological image analysis of head and neck cancer has the potential to aid diagnosis and prognostic prediction.
Collapse
Affiliation(s)
- Anna Luíza Damaceno Araújo
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil; Head and Neck Surgery Department and LIM 28, University of São Paulo Medical School, São Paulo, São Paulo, Brazil
| | - Viviane Mariano da Silva
- Institute of Science and Technology, Federal University of São Paulo (ICT-Unifesp), São José dos Campos, São Paulo, Brazil
| | - Maíra Suzuka Kudo
- Institute of Science and Technology, Federal University of São Paulo (ICT-Unifesp), São José dos Campos, São Paulo, Brazil
| | | | - Cristina Saldivia-Siracusa
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| | - Daniela Giraldo-Roldán
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| | - Marcio Ajudarte Lopes
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| | - Pablo Agustin Vargas
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| | - Syed Ali Khurram
- Unit of Oral and Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, Illinois, USA; University of Chicago Comprehensive Cancer Center, Chicago, Illinois, USA
| | - Luiz Paulo Kowalski
- Head and Neck Surgery Department and LIM 28, University of São Paulo Medical School, São Paulo, São Paulo, Brazil; Department of Head and Neck Surgery and Otorhinolaryngology, A.C. Camargo Cancer Center, São Paulo, São Paulo, Brazil
| | | | - Alan Roger Santos-Silva
- Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| | - Matheus Cardoso Moraes
- Institute of Science and Technology, Federal University of São Paulo (ICT-Unifesp), São José dos Campos, São Paulo, Brazil
| |
Collapse
|
41
|
Almășan O, Leucuța DC, Hedeșiu M, Mureșanu S, Popa ȘL. Temporomandibular Joint Osteoarthritis Diagnosis Employing Artificial Intelligence: Systematic Review and Meta-Analysis. J Clin Med 2023; 12:942. [PMID: 36769590 PMCID: PMC9918072 DOI: 10.3390/jcm12030942] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 01/20/2023] [Accepted: 01/23/2023] [Indexed: 01/27/2023] Open
Abstract
The aim was to systematically synthesize the current research on, and the influence of, artificial intelligence (AI) models in temporomandibular joint (TMJ) osteoarthritis (OA) diagnosis using cone-beam computed tomography (CBCT) or panoramic radiography. Seven databases (PubMed, Embase, Scopus, Web of Science, LILACS, ProQuest, and SpringerLink) were searched for TMJ OA and AI articles. We used QUADAS-2 to assess the risk of bias and MI-CLAIM to check the minimum information about clinical artificial intelligence modeling. Two hundred and three records were identified, out of which seven were included, amounting to 10,077 TMJ images. Three studies, on which the meta-analysis was performed, focused on the diagnosis of TMJ OA from panoramic radiographs using various transfer learning models (ResNet). The pooled sensitivity was 0.76 (95% CI 0.35-0.95) and the specificity was 0.79 (95% CI 0.75-0.83). The remaining studies investigated the 3D shape of the condyle and disease classification on CBCT images, as well as radiomics features that can be combined with clinical and proteomic data to identify the most effective models and the most promising features for diagnosing TMJ OA. The accuracy of the methods was nearly equivalent across studies; it was higher when indeterminate diagnoses were excluded or when fine-tuning was used.
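To illustrate the kind of pooling behind figures such as the sensitivity and specificity above, here is a hedged sketch of univariate DerSimonian-Laird random-effects pooling of logit-transformed sensitivities. It is a simplification of the bivariate model typically used in diagnostic meta-analysis, and the TP/FN counts are invented placeholders, not data from the review.

```python
# Simplified univariate random-effects pooling of sensitivities (DerSimonian-Laird).
import numpy as np
from scipy.special import expit, logit

tp = np.array([45, 60, 38])   # hypothetical true positives per study
fn = np.array([12, 20, 9])    # hypothetical false negatives per study

y = logit(tp / (tp + fn))     # logit sensitivity per study
v = 1.0 / tp + 1.0 / fn       # approximate variance of the logit

w = 1.0 / v                                   # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance

w_re = 1.0 / (v + tau2)                       # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = expit([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled sensitivity = {expit(pooled):.2f}, 95% CI = {ci.round(2)}")
```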
Collapse
Affiliation(s)
- Oana Almășan
- Department of Prosthetic Dentistry and Dental Materials, Iuliu Hațieganu University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania
| | - Daniel-Corneliu Leucuța
- Department of Medical Informatics and Biostatistics, Iuliu Hațieganu University of Medicine and Pharmacy, 400349 Cluj-Napoca, Romania
| | - Mihaela Hedeșiu
- Department of Oral and Maxillofacial Surgery and Implantology, Iuliu Hațieganu University of Medicine and Pharmacy, 400029 Cluj-Napoca, Romania
| | - Sorana Mureșanu
- Department of Oral and Maxillofacial Surgery and Implantology, Iuliu Hațieganu University of Medicine and Pharmacy, 400029 Cluj-Napoca, Romania
| | - Ștefan Lucian Popa
- 2nd Medical Department, Iuliu Hațieganu University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania
| |
Collapse
|
42
|
Dholariya S, Singh RD, Sonagra A, Yadav D, Vajaria BN, Parchwani D. Integrating Cutting-Edge Methods to Oral Cancer Screening, Analysis, and Prognosis. Crit Rev Oncog 2023; 28:11-44. [PMID: 37830214 DOI: 10.1615/critrevoncog.2023047772] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2023]
Abstract
Oral cancer (OC) has become a significant global health burden due to its high morbidity and mortality rates. OC is among the most prevalent types of cancer that affect the head and neck region, and the overall survival rate at 5 years is still around 50%. Moreover, it is a multifactorial malignancy instigated by genetic and epigenetic variabilities, and its molecular heterogeneity makes it a complex malignancy. Oral potentially malignant disorders (OPMDs) are often the first warning signs of OC, although it is challenging to predict which cases will develop into malignancies. Visual oral examination and histological examination are still the standard initial steps in diagnosing oral lesions; however, these approaches have limitations that might lead to late diagnosis of OC or missed diagnosis of OPMDs in high-risk individuals. The objective of this review is to present a comprehensive overview of novel techniques currently in use, viz. liquid biopsy, next-generation sequencing (NGS), microarrays, nanotechnology, lab-on-a-chip (LOC) or microfluidics, and artificial intelligence (AI), for the clinical diagnostics and management of this malignancy. The potential of these novel techniques in expanding OC diagnostics and clinical management is also reviewed.
Collapse
Affiliation(s)
- Sagar Dholariya
- Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), Rajkot, Gujarat, India
| | - Ragini D Singh
- Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), Rajkot, Gujarat, India
| | - Amit Sonagra
- Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), Rajkot, Gujarat, India
| | | | | | - Deepak Parchwani
- Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), Rajkot, Gujarat, India
| |
Collapse
|
43
|
Artificial-Intelligence-Based Decision Making for Oral Potentially Malignant Disorder Diagnosis in Internet of Medical Things Environment. Healthcare (Basel) 2022; 11:healthcare11010113. [PMID: 36611573 PMCID: PMC9818760 DOI: 10.3390/healthcare11010113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Revised: 12/25/2022] [Accepted: 12/26/2022] [Indexed: 12/31/2022] Open
Abstract
Oral cancer is considered one of the most common cancer types in several countries. Early-stage identification is essential for better prognosis, treatment, and survival. To enhance precision medicine, Internet of Medical Things (IoMT) and deep learning (DL) models can be developed for automated oral cancer classification to improve the detection rate and decrease cancer-specific mortality. This article focuses on the design of an optimal Inception-Deep Convolution Neural Network for Oral Potentially Malignant Disorder Detection (OIDCNN-OPMDD) technique in the IoMT environment. The presented OIDCNN-OPMDD technique concentrates on identifying and classifying oral cancer using an IoMT device-based data collection process. In this study, feature extraction and classification are performed using the IDCNN model, which integrates the Inception module with a DCNN. To enhance the classification performance of the IDCNN model, the moth flame optimization (MFO) technique is employed. The experimental results of the OIDCNN-OPMDD technique were evaluated under several measures, and the outcomes pointed out the enhanced performance of the OIDCNN-OPMDD model over other DL models.
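Since the abstract leans on moth flame optimization (MFO) for improving the classifier, the sketch below shows the core MFO loop on a toy objective. It is a hypothetical illustration under assumed settings (population size, bounds, spiral behaviour); in the paper the objective would instead be a validation metric of the IDCNN model, which is omitted here.

```python
# Minimal moth flame optimization (MFO) loop on a toy objective.
import numpy as np

def objective(x):
    return np.sum(x ** 2)   # stand-in for a validation-loss function

rng = np.random.default_rng(0)
n_moths, dim, max_iter = 20, 3, 50
lb, ub = -5.0, 5.0
moths = rng.uniform(lb, ub, size=(n_moths, dim))

best_flames, best_fit = None, None
for it in range(max_iter):
    fitness = np.array([objective(m) for m in moths])
    if best_flames is None:
        order = np.argsort(fitness)
        best_flames, best_fit = moths[order], fitness[order]
    else:
        merged = np.vstack([best_flames, moths])
        merged_fit = np.concatenate([best_fit, fitness])
        order = np.argsort(merged_fit)[:n_moths]
        best_flames, best_fit = merged[order], merged_fit[order]

    # The number of flames shrinks over iterations (exploration -> exploitation).
    flame_no = round(n_moths - it * (n_moths - 1) / max_iter)
    a = -1.0 - it / max_iter                   # path constant moves from -1 toward -2
    for i in range(n_moths):
        j = min(i, flame_no - 1)               # surplus moths all follow the last flame
        d = np.abs(best_flames[j] - moths[i])  # distance to the assigned flame
        t = (a - 1.0) * rng.random(dim) + 1.0  # random point on the logarithmic spiral
        moths[i] = d * np.exp(t) * np.cos(2 * np.pi * t) + best_flames[j]
        moths[i] = np.clip(moths[i], lb, ub)

print("best solution:", best_flames[0], "objective:", best_fit[0])
```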
Collapse
|
44
|
Diagnosis of Oral Squamous Cell Carcinoma Using Deep Neural Networks and Binary Particle Swarm Optimization on Histopathological Images: An AIoMT Approach. Comput Intell Neurosci 2022; 2022:6364102. [PMID: 36210968 PMCID: PMC9546660 DOI: 10.1155/2022/6364102] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 07/04/2022] [Accepted: 08/17/2022] [Indexed: 11/24/2022]
Abstract
The overall prognosis of oral cavity squamous cell carcinoma (OCSCC) remains poor, as more than half of patients with oral cavity cancer are detected at later stages. It is generally accepted that the differential diagnosis of OCSCC is usually difficult and requires expertise and experience. Diagnosis from biopsy tissue is a complex process, and it is slow, costly, and prone to human error. To overcome these problems, a computer-aided diagnosis (CAD) approach was proposed in this work. A dataset comprising two categories, normal epithelium of the oral cavity (NEOR) and squamous cell carcinoma of the oral cavity (OSCC), was used. Features were extracted from this dataset using four deep learning (DL) models (VGG16, AlexNet, ResNet50, and Inception V3) to realize artificial intelligence of medical things (AIoMT). Binary Particle Swarm Optimization (BPSO) was used to select the best features. The effects of Reinhard stain normalization on performance were also investigated. After the best features were extracted and selected, they were classified using XGBoost. The best classification accuracy of 96.3% was obtained when using Inception V3 with BPSO. This approach significantly contributes to improving diagnostic efficiency for OCSCC patients using histopathological images while reducing diagnostic costs.
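As a concrete illustration of the feature-selection step described above, here is a hedged sketch of binary particle swarm optimization wrapping an XGBoost classifier. The synthetic feature matrix stands in for the deep features, and the swarm settings (particle count, inertia, acceleration coefficients) are illustrative assumptions, not the paper's configuration.

```python
# Sketch: BPSO selects a feature subset; fitness = cross-validated XGBoost accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=300, n_features=64, n_informative=10,
                           random_state=42)        # stand-in for deep features

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = XGBClassifier(n_estimators=50, max_depth=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_iter, dim = 10, 15, X.shape[1]
pos = rng.integers(0, 2, size=(n_particles, dim))
vel = rng.normal(0, 0.1, size=(n_particles, dim))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))              # sigmoid transfer function
    pos = (rng.random((n_particles, dim)) < prob).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print(f"selected {gbest.sum()} features, CV accuracy = {pbest_fit.max():.3f}")
```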
Collapse
|
45
|
Bansal K, Bathla RK, Kumar Y. Deep transfer learning techniques with hybrid optimization in early prediction and diagnosis of different types of oral cancer. Soft Comput 2022. [DOI: 10.1007/s00500-022-07246-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
46
|
Machine learning in point-of-care automated classification of oral potentially malignant and malignant disorders: a systematic review and meta-analysis. Sci Rep 2022; 12:13797. [PMID: 35963880 PMCID: PMC9376104 DOI: 10.1038/s41598-022-17489-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Accepted: 07/26/2022] [Indexed: 11/08/2022] Open
Abstract
Machine learning (ML) algorithms are becoming increasingly pervasive in the domains of medical diagnostics and prognostication, afforded by complex deep learning architectures that overcome the limitations of manual feature extraction. In this systematic review and meta-analysis, we provide an update on current progress of ML algorithms in point-of-care (POC) automated diagnostic classification systems for lesions of the oral cavity. Studies reporting performance metrics on ML algorithms used in automatic classification of oral regions of interest were identified and screened by 2 independent reviewers from 4 databases. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. 35 studies were suitable for qualitative synthesis, and 31 for quantitative analysis. Outcomes were assessed using a bivariate random-effects model following an assessment of bias and heterogeneity. 4 distinct methodologies were identified for POC diagnosis: (1) clinical photography; (2) optical imaging; (3) thermal imaging; (4) analysis of volatile organic compounds. Estimated AUROC across all studies was 0.935, and no difference in performance was identified between methodologies. We discuss the various classical and modern approaches to ML employed within identified studies, and highlight issues that will need to be addressed for implementation of automated classification systems in screening and early detection.
Collapse
|
47
|
Hegde S, Ajila V, Zhu W, Zeng C. Review of the Use of Artificial Intelligence in Early Diagnosis and Prevention of Oral Cancer. Asia Pac J Oncol Nurs 2022; 9:100133. [PMID: 36389623 PMCID: PMC9664349 DOI: 10.1016/j.apjon.2022.100133] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/12/2022] [Indexed: 11/30/2022] Open
Abstract
The global occurrence of oral cancer (OC) has increased in recent years. OC diagnosed at advanced stages results in substantial morbidity and mortality. The use of technology may be beneficial for early detection and diagnosis, and thus help the clinician with better patient management. The advent of artificial intelligence (AI) has the potential to improve OC screening. AI can precisely analyze enormous datasets from various imaging modalities and provide assistance in the field of oncology. This review focused on the applications of AI in the early diagnosis and prevention of OC. A literature search was conducted in the PubMed and Scopus databases using the search terms “oral cancer” and “artificial intelligence.” Further information regarding the topic was collected by scrutinizing the reference lists of selected articles. Based on the information obtained, this article reviews and discusses the applications and advantages of AI in OC screening, early diagnosis, disease prediction, treatment planning, and prognosis. Limitations and the future scope of AI in OC research are also highlighted.
Collapse
|
48
|
Kim JS, Kim BG, Hwang SH. Efficacy of Artificial Intelligence-Assisted Discrimination of Oral Cancerous Lesions from Normal Mucosa Based on the Oral Mucosal Image: A Systematic Review and Meta-Analysis. Cancers (Basel) 2022; 14:cancers14143499. [PMID: 35884560 PMCID: PMC9320189 DOI: 10.3390/cancers14143499] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 07/16/2022] [Accepted: 07/17/2022] [Indexed: 11/16/2022] Open
Abstract
Simple Summary: Early detection of oral cancer is important to increase the survival rate and reduce morbidity. For the past few years, the early detection of oral cancer using artificial intelligence (AI) technology based on autofluorescence imaging, photographic imaging, and optical coherence tomography imaging has been an important research area. In this study, diagnostic values including sensitivity and specificity were comprehensively assessed across studies that performed AI analysis of images. The diagnostic sensitivity of AI-assisted screening was 0.92. In subgroup analysis, there was no statistically significant difference in the diagnostic rate according to imaging tool. AI shows good diagnostic performance with high sensitivity for oral cancer. Image analysis using AI is expected to be used as a clinical tool for early detection and for evaluation of treatment efficacy in oral cancer.
Abstract: The accuracy of artificial intelligence (AI)-assisted discrimination of oral cancerous lesions from normal mucosa based on mucosal images was evaluated. Two authors independently reviewed the database until June 2022. Oral mucosal disorders, as recorded by photographic images, autofluorescence, and optical coherence tomography (OCT), were compared with reference histology findings. True-positive, true-negative, false-positive, and false-negative data were extracted. Seven studies were included for discriminating oral cancerous lesions from normal mucosa. The diagnostic odds ratio (DOR) of AI-assisted screening was 121.66 (95% confidence interval [CI], 29.60-500.05). Twelve studies were included for discriminating all oral precancerous lesions from normal mucosa. The DOR of screening was 63.02 (95% CI, 40.32-98.49). Subgroup analysis showed that OCT was more diagnostically accurate (324.33 vs. 66.81 and 27.63) and more negatively predictive (0.94 vs. 0.93 and 0.84) than photographic images and autofluorescence in screening for all oral precancerous lesions from normal mucosa. Automated detection of oral cancerous lesions by AI would be a rapid, non-invasive diagnostic tool that could provide immediate results during the diagnostic work-up of oral cancer. This method has the potential to be used as a clinical tool for the early diagnosis of pathological lesions.
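Because the review's summary statistics hinge on the diagnostic odds ratio, the short sketch below works through the standard DOR and 95% confidence-interval arithmetic for a single, hypothetical 2x2 table; the counts are invented and are not taken from any included study.

```python
# Worked example: diagnostic odds ratio (DOR) with a 95% CI on the log scale.
import math

tp, fp, fn, tn = 90, 12, 10, 88          # hypothetical confusion-matrix counts

dor = (tp * tn) / (fp * fn)              # DOR = (TP*TN) / (FP*FN)
se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
lo = math.exp(math.log(dor) - 1.96 * se_log)
hi = math.exp(math.log(dor) + 1.96 * se_log)

sens = tp / (tp + fn)                    # sensitivity
spec = tn / (tn + fp)                    # specificity
print(f"DOR = {dor:.1f} (95% CI {lo:.1f}-{hi:.1f}), "
      f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```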
Collapse
Affiliation(s)
- Ji-Sun Kim
- Department of Otolaryngology-Head and Neck Surgery, Eunpyeong St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Seoul 03312, Korea; (J.-S.K.); (B.G.K.)
| | - Byung Guk Kim
- Department of Otolaryngology-Head and Neck Surgery, Eunpyeong St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Seoul 03312, Korea; (J.-S.K.); (B.G.K.)
| | - Se Hwan Hwang
- Department of Otolaryngology-Head and Neck Surgery, Bucheon St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Bucheon 14647, Korea
- Correspondence: ; Tel.: +82-32-340-7044
| |
Collapse
|
49
|
Alabi RO, Almangush A, Elmusrati M, Leivo I, Mäkitie A. Measuring the Usability and Quality of Explanations of a Machine Learning Web-Based Tool for Oral Tongue Cancer Prognostication. Int J Environ Res Public Health 2022; 19:ijerph19148366. [PMID: 35886221 PMCID: PMC9322510 DOI: 10.3390/ijerph19148366] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 06/23/2022] [Accepted: 07/04/2022] [Indexed: 12/10/2022]
Abstract
Background: Machine learning models have been reported to assist in the proper management of cancer through accurate prognostication. Integrating such models as a web-based prognostic tool or calculator may help to improve cancer care and assist clinicians in making oral cancer management-related decisions. However, none of these models have been recommended in the daily practice of oral cancer care owing to concerns related to machine learning methodologies and clinical implementation challenges, one of which is explainability. Objectives: This study measures the usability and explainability of a machine learning-based web prognostic tool designed for oral tongue cancer prognostication. We used the System Usability Scale (SUS) and the System Causability Scale (SCS) to evaluate the usability and explainability, respectively, of the prognostic tool. In addition, we propose a framework for the evaluation of post hoc explainability of web-based prognostic tools. Methods: A SUS- and SCS-based questionnaire was administered to pathologists, radiologists, cancer and machine learning researchers, and surgeons (n = 11) to evaluate the quality of explanations offered by the machine learning-based web prognostic tool and to address the concerns of explainability and usability of these models for cancer management. The examined web-based tool was developed by our group and is freely available online. Results: In terms of usability measured with the SUS, 81.9% of participants (45.5% strongly agreed; 36.4% agreed) reported that neither the support of a technical person nor learning many things in advance was required to use the web-based tool. Furthermore, 81.8% agreed that the evaluated web-based tool was not cumbersome to use (usability). The average score for the SCS (explainability) was 0.74. A total of 91.0% of the participants strongly agreed that the web-based tool can assist in clinical decision-making. These scores indicate that the examined web-based tool offers a substantial level of usability and meaningful explanations about the outcome of interest. Conclusions: Integrating a trained, internally and externally validated model as a web-based tool or calculator offers an effective and accessible route toward the use and acceptance of these models in future daily practice, an approach that has received significant attention in recent years. It is therefore important to measure the usability and explainability of these models to achieve such touted benefits. A usable and well-explained web-based tool brings these models closer to everyday clinical practice, moving toward more personalized, precision oncology.
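For context on how the usability figures above are typically derived, the following is a small sketch of the standard SUS scoring rule (odd items contribute response − 1, even items contribute 5 − response, and the total is scaled by 2.5 to a 0–100 range). The responses are made-up examples, not the study's questionnaire data.

```python
# Standard SUS scoring for a 10-item, 5-point questionnaire.
import numpy as np

responses = np.array([
    [4, 2, 5, 1, 4, 2, 5, 1, 4, 2],   # hypothetical respondent 1 (items 1-10)
    [5, 1, 4, 2, 5, 2, 4, 1, 5, 1],   # hypothetical respondent 2
])

odd = responses[:, 0::2] - 1           # items 1, 3, 5, 7, 9: response - 1
even = 5 - responses[:, 1::2]          # items 2, 4, 6, 8, 10: 5 - response
sus = (odd.sum(axis=1) + even.sum(axis=1)) * 2.5
print("per-respondent SUS:", sus, "mean:", sus.mean())
```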
Collapse
Affiliation(s)
- Rasheed Omobolaji Alabi
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, 00100 Helsinki, Finland; (A.A.); (A.M.)
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, 65200 Vaasa, Finland;
- Correspondence:
| | - Alhadi Almangush
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, 00100 Helsinki, Finland; (A.A.); (A.M.)
- Department of Pathology, University of Helsinki, Haartmaninkatu 3 (P.O. Box 21), FIN-00014 Helsinki, Finland
- Institute of Biomedicine, University of Turku, Pathology, 20500 Turku, Finland;
- Faculty of Dentistry, Misurata University, Misurata 2478, Libya
| | - Mohammed Elmusrati
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, 65200 Vaasa, Finland;
| | - Ilmo Leivo
- Institute of Biomedicine, University of Turku, Pathology, 20500 Turku, Finland;
| | - Antti Mäkitie
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, 00100 Helsinki, Finland; (A.A.); (A.M.)
- Department of Otorhinolaryngology—Head and Neck Surgery, University of Helsinki, Helsinki University Hospital, 00029 HUS Helsinki, Finland
- Department of Clinical Sciences, Intervention and Technology, Division of Ear, Nose and Throat Diseases, Karolinska Institute, Karolinska University Hospital, 17177 Stockholm, Sweden
| |
Collapse
|
50
|
Li C, Zhang Q, Sun K, Jia H, Shen X, Tang G, Liu W, Shi L. Autofluorescence imaging as a noninvasive tool of risk stratification for malignant transformation of oral leukoplakia: A follow-up cohort study. Oral Oncol 2022; 130:105941. [DOI: 10.1016/j.oraloncology.2022.105941] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 05/24/2022] [Indexed: 01/30/2023]
|