1. Jiang Y, Sun HT, Luo Z, Wang J, Xiao RP. Efficacy of a deep learning system for automatic analysis of the comprehensive spatial relationship between the mandibular third molar and inferior alveolar canal on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2025;139:612-622. [PMID: 39915134] [DOI: 10.1016/j.oooo.2024.12.020]
Abstract
OBJECTIVE To develop and evaluate a deep learning (DL) system for predicting the contact and relative position relationships between the mandibular third molar (M3) and inferior alveolar canal (IAC) using panoramic radiographs (PRs) for preoperative assessment of patients for M3 surgery. STUDY DESIGN In total, 279 PRs with 441 M3s from individuals aged 18-32 years were collected, with one PR and cone beam computed tomography (CBCT) scan per individual. Six DL models were compared using 5-fold cross-validation. Model performance was evaluated using accuracy, precision, recall, specificity, F1-score, and area under the receiver operating characteristic (AUROC) curve. System performance was compared to that of experienced dentists. The diagnostic performance was investigated based on the reference standard for contact and relative position between M3 and IAC as determined by CBCT. RESULTS ResNet50 exhibited the best performance among all models tested. For contact prediction, ResNet50 achieved an accuracy of 0.748, F1-score of 0.759, and AUROC of 0.811. For relative position relationship prediction, ResNet50 yielded an accuracy of 0.611, F1-score of 0.548, and AUROC of 0.731. The DL system demonstrated advantages over experienced dentists in diagnostic outcomes. CONCLUSIONS The developed DL system shows broad application potential for comprehensive spatial relationship recognition between M3 and IAC. This system can assist dentists in treatment decision-making for M3 surgery and improve dentist training efficiency.
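The evaluation protocol this abstract describes (5-fold cross-validation scored with accuracy, F1, and AUROC) can be sketched in plain Python. The fold splitter and rank-based AUROC below are generic illustrations, not the study's code; the toy labels and scores are invented for the example.

```python
def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k (train, test) folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, folds, start = list(range(n)), [], 0
    for size in sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        folds.append((train, test))
        start += size
    return folds

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy example: perfectly separated scores give AUROC = 1.0.
print(auroc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # 1.0
print(len(kfold_indices(441, k=5)))               # 5
```

In a real run, each of the 5 folds would train a model (e.g. ResNet50) on the train indices and accumulate scores on the test indices before averaging the metrics.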
Affiliation(s)
- Yi Jiang
  - College of Future Technology, Peking University, Beijing, China
- Hai-Tao Sun
  - Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Zhengchao Luo
  - College of Future Technology, Peking University, Beijing, China
- Jinzhuo Wang
  - College of Future Technology, Peking University, Beijing, China
- Rui-Ping Xiao
  - College of Future Technology, Peking University, Beijing, China
2. Mitbander R, Brenes D, Coole JB, Kortum A, Vohra IS, Carns J, Schwarz RA, Varghese I, Durab S, Anderson S, Bass NE, Clayton AD, Badaoui H, Anandasivam L, Giese RA, Gillenwater AM, Vigneswaran N, Richards-Kortum R. Development and Evaluation of an Automated Multimodal Mobile Detection of Oral Cancer Imaging System to Aid in Risk-Based Management of Oral Mucosal Lesions. Cancer Prev Res (Phila) 2025;18:197-207. [PMID: 39817650] [PMCID: PMC11959271] [DOI: 10.1158/1940-6207.capr-24-0253]
Abstract
Oral cancer is a major global health problem. It is commonly diagnosed at an advanced stage, although often preceded by clinically visible oral mucosal lesions, termed oral potentially malignant disorders, which are associated with an increased risk of oral cancer development. There is an unmet clinical need for effective screening tools to assist front-line healthcare providers to determine which patients should be referred to an oral cancer specialist for evaluation. This study reports the development and evaluation of the mobile detection of oral cancer (mDOC) imaging system and an automated algorithm that generates a referral recommendation from mDOC images. mDOC is a smartphone-based autofluorescence and white light imaging tool that captures images of the oral cavity. Data were collected using mDOC from a total of 332 oral sites in a study of 29 healthy volunteers and 120 patients seeking care for an oral mucosal lesion. A multimodal image classification algorithm was developed to generate a recommendation of "refer" or "do not refer" from mDOC images using expert clinical referral decision as the ground truth label. A referral algorithm was developed using cross-validation methods on 80% of the dataset and then retrained and evaluated on a separate holdout test set. Referral decisions generated in the holdout test set had a sensitivity of 93.9% and a specificity of 79.3% with respect to expert clinical referral decisions. The mDOC system has the potential to be utilized in community physicians' and dentists' offices to help identify patients who need further evaluation by an oral cancer specialist. Prevention Relevance: Our research focuses on improving the early detection of oral precancers/cancers in primary dental care settings with a novel mobile platform that can be used by front-line providers to aid in assessing whether a patient has an oral mucosal condition that requires further follow-up with an oral cancer specialist.
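The headline numbers above come from comparing the algorithm's referral decisions against the expert's on the holdout set. A minimal sketch of that comparison, with invented labels (1 = refer, 0 = do not refer); this is generic bookkeeping, not the study's evaluation code:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    computed against the expert referral decision as ground truth."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented holdout labels, purely to show the calculation.
expert = [1, 1, 1, 1, 0, 0, 0, 0]
model  = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(expert, model)
print(sens, spec)  # 0.75 0.75
```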
Affiliation(s)
- David Brenes
  - Department of Bioengineering, Rice University, Houston, Texas
- Alex Kortum
  - Department of Bioengineering, Rice University, Houston, Texas
- Imran S. Vohra
  - Department of Bioengineering, Rice University, Houston, Texas
- Jennifer Carns
  - Department of Bioengineering, Rice University, Houston, Texas
- Ida Varghese
  - Department of Diagnostic and Biomedical Sciences, The University of Texas Health Science Center at Houston School of Dentistry, Houston, Texas
- Safia Durab
  - Department of Diagnostic and Biomedical Sciences, The University of Texas Health Science Center at Houston School of Dentistry, Houston, Texas
- Sean Anderson
  - Department of Diagnostic and Biomedical Sciences, The University of Texas Health Science Center at Houston School of Dentistry, Houston, Texas
- Nancy E. Bass
  - Department of Diagnostic and Biomedical Sciences, The University of Texas Health Science Center at Houston School of Dentistry, Houston, Texas
- Hawraa Badaoui
  - Department of Head and Neck Surgery, The University of Texas M.D. Anderson Cancer Center, Houston, Texas
- Rachel A. Giese
  - Department of Otolaryngology-Head and Neck Surgery, University of Texas Health Science Center San Antonio, San Antonio, Texas
- Ann M. Gillenwater
  - Department of Head and Neck Surgery, The University of Texas M.D. Anderson Cancer Center, Houston, Texas
- Nadarajah Vigneswaran
  - Department of Diagnostic and Biomedical Sciences, The University of Texas Health Science Center at Houston School of Dentistry, Houston, Texas
3. Di Fede O, La Mantia G, Parola M, Maniscalco L, Matranga D, Tozzo P, Campisi G, Cimino MGCA. Automated Detection of Oral Malignant Lesions Using Deep Learning: Scoping Review and Meta-Analysis. Oral Dis 2025;31:1054-1064. [PMID: 39489724] [PMCID: PMC12022385] [DOI: 10.1111/odi.15188]
Abstract
OBJECTIVE Oral diseases, specifically malignant lesions, are serious global health concerns requiring early diagnosis for effective treatment. In recent years, deep learning (DL) has emerged as a powerful tool for the automated detection and classification of oral lesions. This research, by conducting a scoping review and meta-analysis, aims to provide an overview of the progress and achievements in the field of automated detection of oral lesions using DL. MATERIALS AND METHODS A scoping review was conducted to identify relevant studies published in the last 5 years (2018-2023). A comprehensive search was conducted using several electronic databases, including PubMed, Web of Science, and Scopus. Two reviewers independently assessed the studies for eligibility and extracted data using a standardized form, and a meta-analysis was conducted to synthesize the findings. RESULTS Fourteen studies utilizing various DL algorithms for the detection and classification of oral lesions from clinical images were identified and included. Among these, three were included in the meta-analysis. The estimated pooled sensitivity and specificity were 0.86 (95% confidence interval [CI] = 0.80-0.91) and 0.67 (95% CI = 0.58-0.75), respectively. CONCLUSIONS The results of the meta-analysis indicate that DL algorithms improve the diagnosis of oral lesions. Future research should develop validated algorithms for automated diagnosis. TRIAL REGISTRATION Open Science Framework (https://osf.io/4n8sm).
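Pooled sensitivity and specificity like those above are obtained by combining per-study proportions. The snippet below shows a simplified inverse-variance pooling on the logit scale with invented study counts; it is a fixed-effect illustration only, whereas published meta-analyses such as this one typically fit random-effects or bivariate models.

```python
import math

def pool_proportions(events, totals):
    """Inverse-variance pooling of proportions on the logit scale.
    A 0.5 continuity correction keeps zero cells finite. Fixed-effect
    weighting, shown purely for illustration."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        a, b = e + 0.5, (n - e) + 0.5
        logits.append(math.log(a / b))
        weights.append(1.0 / (1.0 / a + 1.0 / b))  # inverse of logit variance
    pooled_logit = sum(w * x for w, x in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))   # back-transform to a proportion

# Invented counts: true positives / diseased cases in three studies.
print(round(pool_proportions([45, 30, 80], [50, 36, 95]), 3))
```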
Affiliation(s)
- Olga Di Fede
  - Department of Precision Medicine in Medical, Surgical and Critical Care (Me.Pre.C.C.), University of Palermo, Palermo, Italy
- Gaetano La Mantia
  - Department of Precision Medicine in Medical, Surgical and Critical Care (Me.Pre.C.C.), University of Palermo, Palermo, Italy
  - Unit of Oral Medicine and Dentistry for Fragile Patients, Department of Rehabilitation, Fragility, and Continuity of Care, University Hospital Palermo, Palermo, Italy
  - Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
- Marco Parola
  - Department of Information Engineering, University of Pisa, Pisa, Italy
- Laura Maniscalco
  - Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, University of Palermo, Palermo, Italy
- Domenica Matranga
  - Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, University of Palermo, Palermo, Italy
- Pietro Tozzo
  - Unit of Stomatology, Ospedali Riuniti "Villa Sofia-Cervello" of Palermo, Palermo, Italy
- Giuseppina Campisi
  - Unit of Oral Medicine and Dentistry for Fragile Patients, Department of Rehabilitation, Fragility, and Continuity of Care, University Hospital Palermo, Palermo, Italy
  - Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
4. Song B, Liang R. Integrating artificial intelligence with smartphone-based imaging for cancer detection in vivo. Biosens Bioelectron 2025;271:116982. [PMID: 39616900] [PMCID: PMC11789447] [DOI: 10.1016/j.bios.2024.116982]
Abstract
Cancer is a major global health challenge, accounting for nearly one in six deaths worldwide. Early diagnosis significantly improves survival rates and patient outcomes, yet in resource-limited settings the scarcity of medical resources often leads to late-stage diagnosis. Integrating artificial intelligence (AI) with smartphone-based imaging systems offers a promising solution by providing portable, cost-effective, and widely accessible tools for early cancer detection. This paper introduces advanced smartphone-based imaging systems that utilize various imaging modalities for in vivo detection of different cancer types and highlights the advances AI brings to in vivo cancer detection with such systems. These compact smartphone systems, however, face challenges such as low imaging quality and restricted computing power, and advanced AI algorithms offer promising ways to address these optical and computational limitations. AI-based cancer detection also faces its own challenges: transparency and reliability are critical to gaining the trust and acceptance needed for clinical application, and explainable, uncertainty-aware AI opens the black box and will shape future AI development in early cancer detection. Because the challenges of improving AI accuracy, transparency, and reliability are general to AI applications, the technologies, limitations, and potentials discussed in this paper apply to a wide range of biomedical imaging diagnostics beyond smartphones or cancer-specific applications. Smartphone-based multimodal imaging systems and deep learning algorithms for multimodal data analysis are also growing trends, as this approach can provide comprehensive information about the tissue being examined. Future opportunities for AI-integrated smartphone imaging systems lie in making cutting-edge diagnostic tools more affordable and accessible, ultimately enabling early cancer detection for a broader population.
Affiliation(s)
- Bofan Song
  - Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ 85721, USA
- Rongguang Liang
  - Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ 85721, USA
5. Yadav DP, Sharma B, Noonia A, Mehbodniya A. Explainable label guided lightweight network with axial transformer encoder for early detection of oral cancer. Sci Rep 2025;15:6391. [PMID: 39984521] [PMCID: PMC11845714] [DOI: 10.1038/s41598-025-87627-y]
Abstract
Oral cavity cancer exhibits high morbidity and mortality rates, so it is essential to diagnose the disease at an early stage. Machine learning and convolutional neural networks (CNNs) are powerful tools for diagnosing mouth and oral cancer. In this study, we design a lightweight explainable network (LWENet) with label-guided attention (LGA) to provide a second opinion to the expert. The LWENet contains depth-wise separable convolution layers to reduce computation costs. Moreover, the LGA module provides label consistency to neighboring pixels and improves the spatial features. Furthermore, an AMSA (axial multi-head self-attention)-based vision transformer (ViT) encoder is incorporated into the model to provide global attention; our ViT encoder is computationally efficient compared with the classical ViT encoder. We tested LWENet performance on the MOD (mouth and oral disease) and OCI (oral cancer image) datasets and compared the results with other CNN- and ViT-based methods. The LWENet achieved precision and F1-scores of 96.97% and 98.90% on the MOD dataset, and 99.48% and 98.23% on the OCI dataset, respectively. By incorporating Grad-CAM, we visualize the decision-making process, enhancing model interpretability. This work demonstrates the potential of LWENet with LGA in facilitating early oral cancer detection.
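The efficiency claim behind depth-wise separable convolutions is easy to quantify: a standard k×k convolution needs c_in·c_out·k² weights, while the depth-wise plus point-wise factorization needs c_in·k² + c_in·c_out. The parameter count below is generic arithmetic, not the actual LWENet configuration:

```python
def standard_conv_params(c_in, c_out, k):
    # One k x k kernel per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depth-wise: one k x k kernel per input channel,
    # then a 1 x 1 point-wise convolution to mix channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)        # 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 8768
print(std, sep, round(std / sep, 1))              # ~8.4x fewer weights
```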
Affiliation(s)
- Dhirendra Prasad Yadav
  - Department of Computer Engineering & Applications, GLA University Mathura, Mathura, India
- Bhisham Sharma
  - Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab 140401, India
- Ajit Noonia
  - Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Abolfazl Mehbodniya
  - Department of Electronics and Communication Engineering, Kuwait College of Science and Technology (KCST), Doha Area, 7th Ring Road, Kuwait City, Kuwait
6. Vinay V, Jodalli P, Chavan MS, Buddhikot CS, Luke AM, Ingafou MSH, Reda R, Pawar AM, Testarelli L. Artificial Intelligence in Oral Cancer: A Comprehensive Scoping Review of Diagnostic and Prognostic Applications. Diagnostics (Basel) 2025;15:280. [PMID: 39941210] [PMCID: PMC11816433] [DOI: 10.3390/diagnostics15030280]
Abstract
Background/Objectives: Oral cancer, the sixth most common cancer worldwide, is linked to smoking, alcohol, and HPV. This scoping review summarizes AI applications in early oral cancer diagnosis to address a gap in the literature. Methods: A scoping review identified, selected, and synthesized the literature on AI-based oral cancer diagnosis, screening, and prognosis. Study quality and relevance were verified using established frameworks and inclusion criteria. The search covered PubMed and used keywords and MeSH terms, and AI applications in oral cancer were examined through data extraction and synthesis. Results: AI outperforms traditional approaches to oral cancer screening, analysis, and prediction. Convolutional neural networks can diagnose oral cancer from medical images. Smartphone- and AI-enabled telemedicine makes screening affordable and accessible in resource-constrained areas. AI methods predict oral cancer risk from patient data and can support treatment planning from histopathology images. Remaining challenges include data heterogeneity, limited longitudinal research, integration into clinical practice, and ethical and legal difficulties. Future potential includes uniform standards, long-term investigations, ethical and regulatory frameworks, and healthcare professional training. Conclusions: AI may transform oral cancer diagnosis and treatment, advancing early detection, risk modelling, imaging of phenotypic change, and prognosis. AI approaches should be standardized and tested longitudinally, and ethical and practical issues related to real-world deployment should be addressed.
Affiliation(s)
- Vineet Vinay
  - Department of Public Health Dentistry, Manipal College of Dental Sciences Mangalore, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
  - Department of Public Health Dentistry, Sinhgad Dental College & Hospital, Pune 411041, Maharashtra, India
- Praveen Jodalli
  - Department of Public Health Dentistry, Manipal College of Dental Sciences Mangalore, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Mahesh S. Chavan
  - Department of Oral Medicine and Radiology, Sinhgad Dental College & Hospital, Pune 411041, Maharashtra, India
- Chaitanya S. Buddhikot
  - Department of Public Health Dentistry, Dr. D. Y. Patil Dental College and Hospital Pune, Dr. D. Y. Patil Vidyapeeth Pimpri Pune, Pune 411018, Maharashtra, India
- Alexander Maniangat Luke
  - Department of Clinical Science, College of Dentistry, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
  - Centre of Medical and Bio-Allied Health Science Research, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
- Mohamed Saleh Hamad Ingafou
  - Department of Clinical Science, College of Dentistry, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
  - Centre of Medical and Bio-Allied Health Science Research, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
- Rodolfo Reda
  - Department of Oral and Maxillo-Facial Sciences, Sapienza University of Rome, Via Caserta 06, 00161 Rome, Italy
- Ajinkya M. Pawar
  - Department of Conservative Dentistry and Endodontics, Nair Hospital Dental College, Mumbai 400034, Maharashtra, India
- Luca Testarelli
  - Department of Oral and Maxillo-Facial Sciences, Sapienza University of Rome, Via Caserta 06, 00161 Rome, Italy
7. Dang RR, Kadaikal B, Abbadi SE, Brar BR, Sethi A, Chigurupati R. The current landscape of artificial intelligence in oral and maxillofacial surgery - a narrative review. Oral Maxillofac Surg 2025;29:37. [PMID: 39820789] [DOI: 10.1007/s10006-025-01334-6]
Abstract
OBJECTIVE This narrative review aims to explore the current applications and future prospects of AI within the subfields of oral and maxillofacial surgery (OMS), emphasizing its potential benefits and anticipated challenges. METHODS A detailed review of the literature was conducted to evaluate the role of AI in oral and maxillofacial surgery. All domains within OMS were reviewed with a focus on diagnostic, therapeutic and prognostic interventions. RESULTS AI has been successfully integrated into surgical specialties to enhance clinical outcomes. In OMS, AI demonstrates potential to improve clinical and administrative workflows in both ambulatory and hospital-based settings. Notable applications include more accurate risk prediction, minimally invasive surgical techniques, and optimized postoperative management. CONCLUSION OMS stands to benefit enormously from the integration of AI. However, significant roadblocks, such as ethical concerns, data security, and integration challenges, must be addressed to ensure effective adoption. Further research and innovation are needed to fully realize the potential of AI in this specialty.
Affiliation(s)
- Rushil Rajiv Dang
  - Department of Oral and Maxillofacial Surgery, Boston University and Boston Medical Center, 635 Albany Street, Boston, MA 02118, USA
- Balram Kadaikal
  - Henry M. Goldman School of Dental Medicine, Boston University, Boston, MA, USA
- Sam El Abbadi
  - Consultant, Department of Plastic, Reconstructive and Aesthetic Surgery, University Hospital OWL, Campus Klinikum Bielefeld, Bielefeld, Germany
- Branden R Brar
  - Department of Oral and Maxillofacial Surgery, Boston University and Boston Medical Center, Boston, MA, USA
- Amit Sethi
  - Department of Oral and Maxillofacial Surgery, Boston University and Boston Medical Center, Boston, MA, USA
- Radhika Chigurupati
  - Department of Oral and Maxillofacial Surgery, Boston Medical Center, Boston, MA, USA
8. Surdu A, Budala DG, Luchian I, Foia LG, Botnariu GE, Scutariu MM. Using AI in Optimizing Oral and Dental Diagnoses - A Narrative Review. Diagnostics (Basel) 2024;14:2804. [PMID: 39767164] [PMCID: PMC11674583] [DOI: 10.3390/diagnostics14242804]
Abstract
Artificial intelligence (AI) is revolutionizing the field of oral and dental healthcare by offering innovative tools and techniques for optimizing diagnosis, treatment planning, and patient management. This narrative review explores the current applications of AI in dentistry, focusing on its role in enhancing diagnostic accuracy and efficiency. AI technologies, such as machine learning, deep learning, and computer vision, are increasingly being integrated into dental practice to analyze clinical images, identify pathological conditions, and predict disease progression. By utilizing AI algorithms, dental professionals can detect issues like caries, periodontal disease and oral cancer at an earlier stage, thus improving patient outcomes.
Affiliation(s)
- Amelia Surdu
  - Department of Oral Diagnosis, Faculty of Dental Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Dana Gabriela Budala
  - Department of Dentures, Faculty of Dental Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Ionut Luchian
  - Department of Periodontology, Faculty of Dental Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Liliana Georgeta Foia
  - Department of Biochemistry, Faculty of Dental Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 16 Universitătii Street, 700115 Iasi, Romania
  - St. Spiridon Emergency County Hospital, 700111 Iasi, Romania
- Gina Eosefina Botnariu
  - Department of Internal Medicine II, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 16 Universitătii Street, 700115 Iasi, Romania
  - Department of Diabetes, Nutrition and Metabolic Diseases, St. Spiridon Emergency County Hospital, 700111 Iasi, Romania
- Monica Mihaela Scutariu
  - Department of Oral Diagnosis, Faculty of Dental Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
9. Thakuria T, Rahman T, Mahanta DR, Khataniar SK, Goswami RD, Rahman T, Mahanta LB. Deep learning for early diagnosis of oral cancer via smartphone and DSLR image analysis: a systematic review. Expert Rev Med Devices 2024;21:1189-1204. [PMID: 39587051] [DOI: 10.1080/17434440.2024.2434732]
Abstract
INTRODUCTION Diagnosing oral cancer is crucial in healthcare, with technological advancements enhancing early detection and outcomes. This review examines the impact of handheld AI-based tools, focusing on Convolutional Neural Networks (CNNs) and their advanced architectures in oral cancer diagnosis. METHODS A comprehensive search across PubMed, Scopus, Google Scholar, and Web of Science identified papers on deep learning (DL) in oral cancer diagnosis using digital images. The review, registered with PROSPERO, employed PRISMA and QUADAS-2 for search and risk assessment, with data analyzed through bubble and bar charts. RESULTS Twenty-five papers were reviewed, highlighting classification, segmentation, and object detection as key areas. Despite challenges like limited annotated datasets and data imbalance, models such as DenseNet121, VGG19, and EfficientNet-B0 excelled in binary classification, while EfficientNet-B4, Inception-V4, and Faster R-CNN were effective for multiclass classification and object detection. Models achieved up to 100% precision, 99% specificity, and 97.5% accuracy, showcasing AI's potential to improve diagnostic accuracy. Combining datasets and leveraging transfer learning enhances detection, particularly in resource-limited settings. CONCLUSION Handheld AI tools are transforming oral cancer diagnosis, with ethical considerations guiding their integration into healthcare systems. DL offers explainability, builds trust in AI-driven diagnoses, and facilitates telemedicine integration.
Affiliation(s)
- Tapabrat Thakuria
  - Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
  - Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Taibur Rahman
  - Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
  - Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Deva Raj Mahanta
  - Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
  - Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Tashnin Rahman
  - Department of Head & Neck Oncology, Dr. B Borooah Cancer Institute, Guwahati, India
- Lipi B Mahanta
  - Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
  - Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
10. Chen Y, Du P, Zhang Y, Guo X, Song Y, Wang J, Yang LL, He W. Image-based multi-omics analysis for oral science: Recent progress and perspectives. J Dent 2024;151:105425. [PMID: 39427959] [DOI: 10.1016/j.jdent.2024.105425]
Abstract
OBJECTIVES The diagnosis and treatment of oral and dental diseases rely heavily on various types of medical imaging. Deep learning-mediated multi-omics analysis can extract more representative features than those identified through traditional diagnostic methods. This review aims to discuss the applications and recent advances in image-based multi-omics analysis in oral science and to highlight its potential to enhance traditional diagnostic approaches for oral diseases. STUDY SELECTION, DATA, AND SOURCES A systematic search was conducted in the PubMed, Web of Science, and Google Scholar databases, covering all available records. This search thoroughly examined and summarized advances in image-based multi-omics analysis in oral and maxillofacial medicine. CONCLUSIONS This review comprehensively summarizes recent advancements in image-based multi-omics analysis for oral science, including radiomics, pathomics, and photographic-based omics analysis. It also discusses the ongoing challenges and future perspectives that could provide new insights into exploiting the potential of image-based omics analysis in the field of oral science. CLINICAL SIGNIFICANCE This review article presents the state of image-based multi-omics analysis in stomatology, aiming to help oral clinicians recognize the utility of combining omics analyses with imaging during diagnosis and treatment, which can improve diagnostic accuracy, shorten times to diagnosis, save medical resources, and reduce disparity in professional knowledge among clinicians.
Affiliation(s)
- Yizhuo Chen
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Pengxi Du
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Yinyin Zhang
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Xin Guo
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Yujing Song
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Jianhua Wang
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Lei-Lei Yang
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Wei He
  - Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
11. Wei X, Chanjuan L, Ke J, Linyun Y, Jinxing G, Quanbing W. Convolutional neural network for oral cancer detection combined with improved tunicate swarm algorithm to detect oral cancer. Sci Rep 2024;14:28675. [PMID: 39562767] [PMCID: PMC11577024] [DOI: 10.1038/s41598-024-79250-0]
Abstract
Early diagnosis of oral cancer is critical and can prevent progression to advanced malignancy. However, although early detection supports rapid treatment and the preservation of patients' lives, several factors lead to poor or incorrect diagnosis of oral cancer. In recent years, computer-aided diagnosis tools used as auxiliary aids alongside clinicians have greatly improved the accurate identification of this malignancy. The current study proposes a new approach for identifying oral cancer patients based on image processing and deep learning. It employs a recently integrated, improved tunicate swarm algorithm to optimize a convolutional neural network and deliver an accurate cancer diagnostic system. The approach is implemented on an oral cancer image dataset and validated against other published methods using various performance metrics. The proposed model achieved an accuracy of 98.70% and a recall of 93.71% in detecting oral cancerous lesions from photographic images, along with an F1-score of 90.08% and a precision of 96.42%. The final results demonstrate that the offered approach can produce more exact results and can be used in conjunction with clinicians to help in diagnosing oral cancer.
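For reference, accuracy, precision, recall, and F1 are all derived from the confusion matrix, with F1 the harmonic mean of precision and recall. The helper below shows the relationship on invented counts; it is not the paper's evaluation code.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Invented confusion counts, purely illustrative.
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
print(p, r, f)  # all three are approximately 0.9 here
```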
Affiliation(s)
- Xiao Wei
- Zhejiang Provincial JianDe First People's Hospital, HangZhou, Zhejiang, China
- Liu Chanjuan
- Graduate School, Bengbu Medical College, Bengbu, AnHui, China
- Jiang Ke
- Graduate School, Bengbu Medical College, Bengbu, AnHui, China
- Ye Linyun
- Zhejiang Provincial JianDe First People's Hospital, HangZhou, Zhejiang, China
- Gao Jinxing
- Zhejiang Provincial JianDe First People's Hospital, HangZhou, Zhejiang, China
- Wang Quanbing
- Zhejiang Provincial JianDe First People's Hospital, HangZhou, Zhejiang, China
12
Sahoo RK, Sahoo KC, Dash GC, Kumar G, Baliarsingh SK, Panda B, Pati S. Diagnostic performance of artificial intelligence in detecting oral potentially malignant disorders and oral cancer using medical diagnostic imaging: a systematic review and meta-analysis. Frontiers in Oral Health 2024; 5:1494867. [PMID: 39568787] [PMCID: PMC11576460] [DOI: 10.3389/froh.2024.1494867] [Received: 09/11/2024] [Accepted: 10/22/2024] [Indexed: 11/22/2024]
Abstract
Objective Oral cancer is a widespread global health problem characterised by high mortality rates, wherein early detection is critical for better survival outcomes and quality of life. While visual examination is the primary method for detecting oral cancer, it may not be practical in remote areas. AI algorithms have shown some promise in detecting cancer from medical images, but their effectiveness in oral cancer detection remains underexplored. This systematic review aims to provide an extensive assessment of the existing evidence about the diagnostic accuracy of AI-driven approaches for detecting oral potentially malignant disorders (OPMDs) and oral cancer using medical diagnostic imaging. Methods Adhering to PRISMA guidelines, the review scrutinised literature from PubMed, Scopus, and IEEE databases, with a specific focus on evaluating the performance of AI architectures across diverse imaging modalities for the detection of these conditions. Results The performance of AI models, measured by sensitivity and specificity, was assessed using a hierarchical summary receiver operating characteristic (SROC) curve, with heterogeneity quantified through the I² statistic. To account for inter-study variability, a random-effects model was utilized. We screened 296 articles, included 55 studies for qualitative synthesis, and selected 18 studies for meta-analysis. Studies evaluating the diagnostic efficacy of AI-based methods reveal a high pooled sensitivity of 0.87 and specificity of 0.81. The diagnostic odds ratio (DOR) of 131.63 indicates a high likelihood of accurate diagnosis of oral cancer and OPMDs. The SROC area under the curve (AUC) of 0.9758 indicates the exceptional diagnostic performance of such models. The research showed that deep learning (DL) architectures, especially convolutional neural networks (CNNs), performed best in detecting OPMDs and oral cancer. Histopathological images exhibited the greatest sensitivity and specificity in these detections.
Conclusion These findings suggest that AI algorithms have the potential to function as reliable tools for the early diagnosis of OPMDs and oral cancer, offering significant advantages, particularly in resource-constrained settings. Systematic Review Registration https://www.crd.york.ac.uk/, PROSPERO (CRD42023476706).
Affiliation(s)
- Rakesh Kumar Sahoo
- School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India
- Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
- Krushna Chandra Sahoo
- Health Technology Assessment in India (HTAIn), Department of Health Research, Ministry of Health & Family Welfare, Govt. of India, New Delhi, India
- Gunjan Kumar
- Kalinga Institute of Dental Sciences, KIIT Deemed to be University, Bhubaneswar, India
- Bhuputra Panda
- School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India
- Sanghamitra Pati
- Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
13
Mehari M, Sibih Y, Dada A, Chang SM, Wen PY, Molinaro AM, Chukwueke UN, Budhu JA, Jackson S, McFaline-Figueroa JR, Porter A, Hervey-Jumper SL. Enhancing neuro-oncology care through equity-driven applications of artificial intelligence. Neuro Oncol 2024; 26:1951-1963. [PMID: 39159285] [PMCID: PMC11534320] [DOI: 10.1093/neuonc/noae127] [Indexed: 08/21/2024]
Abstract
The disease course and clinical outcome for brain tumor patients depend not only on the molecular and histological features of the tumor but also on the patient's demographics and social determinants of health. While current investigations in neuro-oncology have broadly utilized artificial intelligence (AI) to enrich tumor diagnosis and more accurately predict treatment response, postoperative complications, and survival, equity-driven applications of AI have been limited. However, AI applications to advance health equity in the broader medical field have the potential to serve as practical blueprints to address known disparities in neuro-oncologic care. In this consensus review, we will describe current applications of AI in neuro-oncology, postulate viable AI solutions for the most pressing inequities in neuro-oncology based on broader literature, propose a framework for the effective integration of equity into AI-based neuro-oncology research, and close with the limitations of AI.
Affiliation(s)
- Mulki Mehari
- Department of Neurosurgery, University of California, San Francisco, San Francisco, California, USA
- Youssef Sibih
- Department of Neurosurgery, University of California, San Francisco, San Francisco, California, USA
- Abraham Dada
- Department of Neurosurgery, University of California, San Francisco, San Francisco, California, USA
- Susan M Chang
- Division of Neuro-Oncology, University of California San Francisco and Weill Institute for Neurosciences, San Francisco, California, USA
- Patrick Y Wen
- Center for Neuro-Oncology, Department of Medical Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Annette M Molinaro
- Department of Neurosurgery, University of California, San Francisco, San Francisco, California, USA
- Ugonma N Chukwueke
- Center for Neuro-Oncology, Department of Medical Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Joshua A Budhu
- Department of Neurology, Memorial Sloan Kettering Cancer Center, Department of Neurology, Weill Cornell Medicine, Joan & Sanford I. Weill Medical College of Cornell University, New York, New York, USA
- Sadhana Jackson
- Surgical Neurology Branch, National Institute of Neurological Disorders and Stroke, Pediatric Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- J Ricardo McFaline-Figueroa
- Center for Neuro-Oncology, Department of Medical Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Alyx Porter
- Division of Neuro-Oncology, Department of Neurology, Mayo Clinic, Phoenix, Arizona, USA
- Shawn L Hervey-Jumper
- Department of Neurosurgery, University of California, San Francisco, San Francisco, California, USA
14
Alghamdi AS, Aldhaheri RW. A Low-Cost, Portable, Multi-Cancer Screening Device Based on a Ratio Fluorometry and Signal Correlation Technique. Biosensors 2024; 14:482. [PMID: 39451695] [PMCID: PMC11506725] [DOI: 10.3390/bios14100482] [Received: 08/14/2024] [Revised: 10/02/2024] [Accepted: 10/05/2024] [Indexed: 10/26/2024]
Abstract
The autofluorescence of erythrocyte porphyrins has emerged as a potential method for multi-cancer early detection (MCED). Given this method's dependence on research-grade spectrofluorometers, significant improvements in instrumentation are necessary to translate its potential into clinical practice, as with any promising medical technology. To fill this gap, in this paper, we present an automated ratio porphyrin analyzer for cancer screening (ARPA-CS), a low-cost, portable, and automated instrument for MCED via the ratio fluorometry of porphyrins. The ARPA-CS aims to facilitate cancer screening in an inexpensive, rapid, non-invasive, and reasonably accurate manner for use in primary clinics or at the point of care. To accomplish this, the ARPA-CS uses an ultraviolet-excited optical apparatus for ratio fluorometry that features two photodetectors for detection at 590 and 630 nm. Additionally, it incorporates a synchronous detector for the precision measurement of signals based on the Walsh-ordered Walsh-Hadamard transform (WHT)w and circular shift. To estimate its single-photodetector capability, we established a linear calibration curve for the ARPA-CS exceeding four orders of magnitude with a linearity of up to 0.992 and a low detection limit of 0.296 µg/mL for riboflavin. The ARPA-CS also exhibited excellent repeatability (0.21%) and stability (0.60%). Moreover, the ratio fluorometry of three serially diluted standard solutions of riboflavin yielded a ratio of 0.4, which agrees with that expected based on the known emission spectra of riboflavin. Additionally, the ratio fluorometry of the porphyrin solution yielded a ratio of 49.82, which was ascribed to the predominant concentration of protoporphyrin IX in the brown eggshells, as confirmed in several studies. This study validates this instrument for the ratio fluorometry of porphyrins as a biomarker for MCED. Nevertheless, large and well-designed clinical trials are necessary to further elaborate on this matter.
Affiliation(s)
- Rabah W. Aldhaheri
- Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
15
Oliver J, Alapati R, Lee J, Bur A. Artificial Intelligence in Head and Neck Surgery. Otolaryngol Clin North Am 2024; 57:803-820. [PMID: 38910064] [PMCID: PMC11374486] [DOI: 10.1016/j.otc.2024.05.001] [Indexed: 06/25/2024]
Abstract
This article explores artificial intelligence's (AI's) role in otolaryngology for head and neck cancer diagnosis and management. It highlights AI's potential in pattern recognition for early cancer detection, prognostication, and treatment planning, primarily through image analysis using clinical, endoscopic, and histopathologic images. Radiomics is also discussed at length, as well as the many ways that radiologic image analysis can be utilized, including for diagnosis, lymph node metastasis prediction, and evaluation of treatment response. The study highlights AI's promise and limitations, underlining the need for clinician-data scientist collaboration to enhance head and neck cancer care.
Affiliation(s)
- Jamie Oliver
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Rahul Alapati
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Jason Lee
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Andrés Bur
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
16
Khemtonglang K, Liu W, Lee H, Wang W, Li S, Li ZY, Shepherd S, Yang Y, Diel DG, Fang Y, Cunningham BT. Portable, smartphone-linked, and miniaturized photonic resonator absorption microscope (PRAM Mini) for point-of-care diagnostics. Biomedical Optics Express 2024; 15:5691-5705. [PMID: 39421766] [PMCID: PMC11482178] [DOI: 10.1364/boe.531388] [Received: 05/31/2024] [Revised: 08/13/2024] [Accepted: 08/25/2024] [Indexed: 10/19/2024]
Abstract
We report the design, development, and characterization of a miniaturized version of the photonic resonator absorption microscope (PRAM Mini), whose cost, size, and functionality are compatible with point-of-care (POC) diagnostic assay applications. Compared to previously reported versions of the PRAM instrument, the PRAM Mini components are integrated within an optical framework comprised of an acrylic breadboard and plastic alignment fixtures. The instrument incorporates a Raspberry Pi microprocessor and Bluetooth communication circuit board for wireless control and data connection to a linked smartphone. PRAM takes advantage of enhanced optical absorption of ∼80 nm diameter gold nanoparticles (AuNP) whose localized surface plasmon resonance overlaps with the ∼625 nm resonant reflection wavelength of a photonic crystal (PC) surface. When illuminated with wide-field low-intensity collimated light from a ∼617 nm wavelength red LED, each AuNP linked to the PC surface results in locally reduced reflection intensity, which is visualized by observing dark spots in the PC-reflected image with an inexpensive CMOS image sensor. Each AuNP in the image field of view can be easily counted with digital resolution. We report upon the selection of optical/electronic components, image processing algorithm, and contrast achieved for single AuNP detection. The instrument is operated via a wireless connection to a linked mobile device using a custom-developed software application that runs on an Android smartphone. As a representative POC application, we used the PRAM Mini as the detection instrument for an assay that measures the presence of antibodies against SARS-CoV-2 infection in cat serum samples, where each dark spot in the image represents a complex between one immobilized viral antigen, one antibody molecule, and one AuNP tag. With dimensions of 23 × 21 × 10 cm3, the PRAM Mini offers a compact detection instrument for POC diagnostics.
Affiliation(s)
- Kodchakorn Khemtonglang
- Nick Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Weinan Liu
- Nick Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Hankeun Lee
- Nick Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Weijing Wang
- Nick Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Siyan Li
- Department of Pathobiology, College of Veterinary Medicine, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Zhao Yuan Li
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Skye Shepherd
- Nick Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Yihong Yang
- Nick Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Zhejiang University-University of Illinois Urbana-Champaign Institute, Zhejiang, China
- Diego G. Diel
- Department of Population Medicine and Diagnostic Sciences, Cornell University, Ithaca, New York, USA
- Ying Fang
- Department of Pathobiology, College of Veterinary Medicine, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Brian T. Cunningham
- Nick Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Cancer Center at Illinois, Urbana, Illinois, USA
17
Keser G, Pekiner FN, Bayrakdar İŞ, Çelik Ö, Orhan K. A deep learning approach to detection of oral cancer lesions from intra oral patient images: A preliminary retrospective study. Journal of Stomatology, Oral and Maxillofacial Surgery 2024; 125:101975. [PMID: 39043293] [DOI: 10.1016/j.jormas.2024.101975] [Received: 05/21/2024] [Revised: 07/10/2024] [Accepted: 07/20/2024] [Indexed: 07/25/2024]
Abstract
INTRODUCTION Oral squamous cell carcinoma (OSCC) of the oral cavity is a category of disease that dentists may diagnose and even treat. This study evaluated the performance of diagnostic computer software developed to detect oral cancer lesions in retrospective intraoral patient images. MATERIALS AND METHODS Oral cancer lesions were labeled with the CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) using the polygonal labeling method on a total of 65 anonymized retrospective intraoral images of oral mucosa from individuals in our clinic whose oral cancer was diagnosed histopathologically by incisional biopsy. All images were rechecked and verified by experienced experts. The data set was divided into training (n = 53), validation (n = 6), and test (n = 6) sets. The artificial intelligence model was developed using the YOLOv5 architecture, a deep learning approach. Model success was evaluated with a confusion matrix. RESULTS On the test images, which were not used in training, the F1-score, sensitivity, and precision of the artificial intelligence model obtained using the YOLOv5 architecture were 0.667, 0.667, and 0.667, respectively. CONCLUSIONS Our study reveals that OSCC lesions carry discriminative visual appearances that can be identified by a deep learning algorithm. Artificial intelligence shows promise in the prediagnosis of oral cancer lesions. Success rates are expected to increase as training data sets are expanded with more images.
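Because model success above is read off a confusion matrix, the relation between detection counts and the reported metrics can be sketched as follows. The counts below are illustrative only, chosen to reproduce the 0.667 values; the study does not report raw counts, and the helper name is ours:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, sensitivity (recall), and F1 from detection counts.
    True negatives are not well defined for lesion detection, so they are omitted."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "sensitivity": recall, "f1": f1}

# Hypothetical counts: 2 correct detections, 1 false detection, 1 missed lesion
# give precision = sensitivity = F1 = 2/3 ≈ 0.667, as in the study.
print(detection_metrics(tp=2, fp=1, fn=1))
```

Equal precision and recall always imply an identical F1, which is consistent with the three matching 0.667 values reported.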
Affiliation(s)
- Gaye Keser
- Department of Oral Diagnosis and Dentomaxillofacial Radiology, Faculty of Dentistry, Marmara University, Başıbüyük Sağlık Yerleşkesi, Başıbüyük Yolu 9/3 34854, Maltepe, İstanbul, Turkey
- Filiz Namdar Pekiner
- Department of Oral Diagnosis and Dentomaxillofacial Radiology, Faculty of Dentistry, Marmara University, Başıbüyük Sağlık Yerleşkesi, Başıbüyük Yolu 9/3 34854, Maltepe, İstanbul, Turkey
- İbrahim Şevki Bayrakdar
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, Eskişehir, Turkey
- Özer Çelik
- Department of Mathematics and Computer, Faculty of Science and Letters, Eskişehir Osmangazi University, Eskişehir, Turkey
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
18
Yilmaz S, Tasyurek M, Amuk M, Celik M, Canger EM. Developing deep learning methods for classification of teeth in dental panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:118-127. [PMID: 37316425] [DOI: 10.1016/j.oooo.2023.02.021] [Received: 02/23/2022] [Revised: 09/13/2022] [Accepted: 02/10/2023] [Indexed: 06/16/2023]
Abstract
OBJECTIVES We aimed to develop an artificial intelligence-based clinical dental decision-support system using deep-learning methods to reduce diagnostic interpretation error and time and increase the effectiveness of dental treatment and classification. STUDY DESIGN We compared the performance of 2 deep-learning methods, You Only Look Once V4 (YOLO-V4) and Faster Regions with Convolutional Neural Networks (Faster R-CNN), for tooth classification in dental panoramic radiography to determine which is more successful in terms of accuracy, time, and detection ability. Using a method based on deep-learning models trained on a semantic segmentation task, we analyzed 1200 panoramic radiographs selected retrospectively. In the classification process, our model identified 36 classes, including 32 teeth and 4 impacted teeth. RESULTS The YOLO-V4 method achieved a mean 99.90% precision, 99.18% recall, and 99.54% F1 score. The Faster R-CNN method achieved a mean 93.67% precision, 90.79% recall, and 92.21% F1 score. Experimental evaluations showed that the YOLO-V4 method outperformed the Faster R-CNN method in terms of accuracy of predicted teeth in the tooth classification process, speed of tooth classification, and ability to detect impacted and erupted third molars. CONCLUSIONS The YOLO-V4 method outperforms the Faster R-CNN method in accuracy of tooth prediction, speed of detection, and ability to detect impacted and erupted third molars. The proposed deep-learning-based methods can assist dentists in clinical decision making, save time, and reduce the negative effects of stress and fatigue in daily practice.
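As a quick check, the reported F1 scores follow directly from the precision and recall values, since F1 is their harmonic mean. A minimal sketch in Python (the helper name `f1_score` is ours, not from the study):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Mean per-model values reported in the study above:
print(round(f1_score(0.9990, 0.9918), 4))  # YOLO-V4: 0.9954, matching the reported 99.54%
print(round(f1_score(0.9367, 0.9079), 4))  # Faster R-CNN: 0.9221, matching the reported 92.21%
```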
Affiliation(s)
- Serkan Yilmaz
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Murat Tasyurek
- Department of Computer Engineering, Kayseri University, Kayseri, Turkey
- Mehmet Amuk
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Mete Celik
- Department of Computer Engineering, Erciyes University, Kayseri, Turkey
- Emin Murat Canger
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
19
Shukla S, Deo BS, Vishwakarma C, Mishra S, Ahirwar S, Sah AN, Pandey K, Singh S, Prasad SN, Padhi AK, Pal M, Panigrahi PK, Pradhan A. A smartphone-based standalone fluorescence spectroscopy tool for cervical precancer diagnosis in clinical conditions. Journal of Biophotonics 2024; 17:e202300468. [PMID: 38494870] [DOI: 10.1002/jbio.202300468] [Received: 11/09/2023] [Revised: 02/07/2024] [Accepted: 02/07/2024] [Indexed: 03/19/2024]
Abstract
Real-time prediction of the severity of noncommunicable diseases such as cancer is a boon for early diagnosis and timely cure. Owing to their minimally invasive nature, optical techniques provide better alternatives in this context than conventional techniques. The present study describes a standalone, field-portable, smartphone-based device that can classify different grades of cervical cancer on the basis of spectral differences captured in their intrinsic fluorescence spectra with the help of AI/ML techniques. In this study, a total of 75 patients and volunteers from hospitals at different geographical locations in India were tested and classified with this device. A classification approach employing a hybrid mutual information long short-term memory model was applied to categorize the subject groups, resulting in an average accuracy, specificity, and sensitivity of 96.56%, 96.76%, and 94.37%, respectively, using 10-fold cross-validation. This exploratory study demonstrates the potential of combining smartphone-based technology with fluorescence spectroscopy and artificial intelligence as a diagnostic screening approach that could enhance the detection and screening of cervical cancer.
Affiliation(s)
- Shivam Shukla
- Center for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Bhaswati Singha Deo
- Center for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Chaitanya Vishwakarma
- Center for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Subrata Mishra
- Center for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Shikha Ahirwar
- PhotoSpIMeDx Pvt. Ltd., Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Amar Nath Sah
- Department of Biological Sciences and Bioengineering, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Kiran Pandey
- Obstetrics and Gynecology Department, GSVM Medical College Kanpur, Kanpur, Uttar Pradesh, India
- Sweta Singh
- Department of Obstetrics and Gynecology, AIIMS Bhubaneswar, Bhubaneswar, Odisha, India
- S N Prasad
- Radiation Oncology Department, J.K. Cancer Institute Kanpur, Kanpur, Uttar Pradesh, India
- Ashok Kumar Padhi
- Gynecologic Oncology Department, Acharya Harihar Regional Cancer Research Centre, Cuttack, Odisha, India
- Mayukha Pal
- ABB Ability Innovation Center, Asea Brown Boveri Company, Hyderabad, India
- Prasanta K Panigrahi
- Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur, West Bengal, India
- Centre for Quantum Science and Technology, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
- Asima Pradhan
- Center for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- PhotoSpIMeDx Pvt. Ltd., Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Department of Physics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
20
Li C, Chen X, Chen C, Gong Z, Pataer P, Liu X, Lv X. Application of deep learning radiomics in oral squamous cell carcinoma-Extracting more information from medical images using advanced feature analysis. Journal of Stomatology, Oral and Maxillofacial Surgery 2024; 125:101840. [PMID: 38548062] [DOI: 10.1016/j.jormas.2024.101840] [Received: 02/15/2024] [Revised: 03/07/2024] [Accepted: 03/20/2024] [Indexed: 04/02/2024]
Abstract
OBJECTIVE To conduct a systematic review with meta-analyses to assess the recent scientific literature addressing the application of deep learning radiomics in oral squamous cell carcinoma (OSCC). MATERIALS AND METHODS Electronic and manual literature retrieval was performed using PubMed, Web of Science, EMbase, Ovid-MEDLINE, and IEEE databases from 2012 to 2023. The ROBINS-I tool was used for quality evaluation, a random-effects model was applied, and results were reported according to the PRISMA statement. RESULTS A total of 26 studies involving 64,731 medical images were included in the quantitative synthesis. The meta-analysis showed pooled sensitivity and specificity of 0.88 (95% CI: 0.87-0.88) and 0.80 (95% CI: 0.80-0.81), respectively. Deeks' asymmetry test revealed slight publication bias (P = 0.03). CONCLUSIONS Advances in the application of radiomics combined with deep learning algorithms in OSCC are reviewed, including diagnosis and differential diagnosis of OSCC, efficacy assessment, and prognosis prediction. The limitations of deep learning radiomics at the current stage and its future development directions for medical imaging diagnosis are also summarized and analyzed at the end of the article.
Affiliation(s)
- Chenxi Li
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School/Hospital of Stomatology, Urumqi 830054, PR China
- Stomatological Research Institute of Xinjiang Uygur Autonomous Region, Urumqi 830054, PR China
- Hubei Province Key Laboratory of Oral and Maxillofacial Development and Regeneration, School of Stomatology, Tongji Medical College, Union Hospital, Huazhong University of Science and Technology, Wuhan 430022, PR China
- Xinya Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi 830008, PR China
- Cheng Chen
- College of Software, Xinjiang University, Urumqi 830046, PR China
- Zhongcheng Gong
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School/Hospital of Stomatology, Urumqi 830054, PR China
- Stomatological Research Institute of Xinjiang Uygur Autonomous Region, Urumqi 830054, PR China
- Parekejiang Pataer
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School/Hospital of Stomatology, Urumqi 830054, PR China
- Xu Liu
- Department of Maxillofacial Surgery, Hospital of Stomatology, Key Laboratory of Dental-Maxillofacial Reconstruction and Biological Intelligence Manufacturing of Gansu Province, Faculty of Dentistry, Lanzhou University, Lanzhou 730013, PR China
- Xiaoyi Lv
- College of Information Science and Engineering, Xinjiang University, Urumqi 830008, PR China
- College of Software, Xinjiang University, Urumqi 830046, PR China
21
Zayed SO, Abd-Rabou RYM, Abdelhameed GM, Abdelhamid Y, Khairy K, Abulnoor BA, Ibrahim SH, Khaled H. The innovation of AI-based software in oral diseases: clinical-histopathological correlation diagnostic accuracy primary study. BMC Oral Health 2024; 24:598. [PMID: 38778322] [PMCID: PMC11112957] [DOI: 10.1186/s12903-024-04347-x] [Received: 03/14/2024] [Accepted: 05/08/2024] [Indexed: 05/25/2024]
Abstract
BACKGROUND Machine learning (ML), a branch of artificial intelligence (AI), could help clinicians and oral pathologists address diagnostic problems in the field of potentially malignant lesions, oral cancer, periodontal diseases, salivary gland disease, oral infections, immune-mediated disease, and others. AI can detect micro-features beyond the reach of the human eye and provide solutions in critical diagnostic cases. OBJECTIVE The objective of this study was to develop software, with all the required training data, to act as an AI-based program for diagnosing oral diseases. Our research question was: can we develop computer-aided software for accurate diagnosis of oral diseases based on clinical and histopathological data inputs? METHOD The study sample included clinical images, patient symptoms, radiographic images, histopathological images, and texts for the oral diseases of interest (premalignant lesions, oral cancer, salivary gland neoplasms, immune-mediated oral mucosal lesions, and oral reactive lesions); in total, 28 diseases were enrolled, retrieved from the archives of the oral and maxillofacial pathology department. The dataset comprised 11,200 texts and 3,000 images (2,800 images were used to train the program, 100 images were used as test data, and 100 cases were used for calculating accuracy, sensitivity, and specificity). RESULTS The correct diagnosis rates for group 1 (software users), group 2 (microscopic users), and group 3 (hybrid) were 87%, 90.6%, and 95%, respectively. Inter-observer reliability was assessed with Cronbach's alpha and the intraclass correlation coefficient, yielding values of 0.934, 0.712, and 0.703 for groups 1, 2, and 3, respectively. All groups showed acceptable reliability, especially group 1, whose Diagnosis Oral Diseases Software (DODS) ratings were more reliable than those of the other groups. However, the accuracy, sensitivity, and specificity of this software were lower than those of oral pathologists (master's degree). CONCLUSION The correct diagnosis rate of DODS was comparable to that of oral pathologists using standard microscopic examination. The DODS program could be utilized as a diagnostic guidance tool with high reliability and accuracy.
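As a side note on the reliability analysis above: Cronbach's alpha over an inter-observer rating table takes only a few lines to compute. A minimal sketch with invented rating data (not the study's):

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a cases x raters matrix of scores."""
    k = len(ratings[0])              # number of raters (items)

    def var(xs):                     # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in ratings]) for j in range(k)]
    total_var = var([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Three hypothetical raters scoring five cases on a 1-5 scale.
scores = [
    [3, 3, 3],
    [4, 4, 5],
    [2, 2, 2],
    [5, 5, 5],
    [1, 1, 2],
]
alpha = cronbach_alpha(scores)
```

Values near 1, as reported for the software-user group (0.934), indicate highly consistent raters.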
Affiliation(s)
- Shaimaa O Zayed
- Department of Oral Maxillofacial Pathology, Faculty of Dentistry, Cairo University, Cairo, Egypt
- Department of Oral Pathology, Misr University for Science and Technology, P. O. Box 77, Giza, Egypt
- Rawan Y M Abd-Rabou
- Faculty of Oral Medicine & Dental Surgery, Misr University for Science and Technology, P. O. Box 77, Giza, Egypt
- Youssef Abdelhamid
- Philosophy & Interactive Media Minors, New York University, Abu Dhabi, United Arab Emirates
- Bassam A Abulnoor
- Fixed Prosthodontics, Faculty of Dentistry, Ain Shams University, Cairo, Egypt
- Heba Khaled
- Lecturer of Oral Maxillofacial Pathology, Faculty of Dentistry, Cairo University, Cairo, Egypt
22
Le LTP, Nguyen AHQ, Phan LMT, Ngo HTT, Wang X, Cunningham B, Valera E, Bashir R, Taylor-Robinson AW, Do CD. Current smartphone-assisted point-of-care cancer detection: Towards supporting personalized cancer monitoring. Trends Analyt Chem 2024; 174:117681. [DOI: 10.1016/j.trac.2024.117681] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/14/2025]
23
Mali SB. Screening of head neck cancer. ORAL ONCOLOGY REPORTS 2024; 9:100142. [DOI: 10.1016/j.oor.2023.100142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
24
Gomes RFT, Schmith J, de Figueiredo RM, Freitas SA, Machado GN, Romanini J, Almeida JD, Pereira CT, Rodrigues JDA, Carrard VC. Convolutional neural network misclassification analysis in oral lesions: an error evaluation criterion by image characteristics. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:243-252. [PMID: 38161085 DOI: 10.1016/j.oooo.2023.10.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 10/02/2023] [Accepted: 10/04/2023] [Indexed: 01/03/2024]
Abstract
OBJECTIVE This retrospective study analyzed the errors generated by a convolutional neural network (CNN) when performing automated classification of oral lesions according to their clinical characteristics, seeking to identify patterns in systematic errors in the intermediate layers of the CNN. STUDY DESIGN A cross-sectional analysis nested in a previous trial in which a CNN model performed automated classification of elementary lesions from clinical images of oral lesions. The resulting CNN classification errors formed the dataset for this study. A total of 116 real outputs were identified that diverged from the estimated outputs, representing 7.6% of the total images analyzed by the CNN. RESULTS The discrepancies between the real and estimated outputs were associated with problems relating to image sharpness, resolution, and focus; human errors; and the impact of data augmentation. CONCLUSIONS Qualitative analysis of errors in the automated classification of clinical images confirmed the impact of image quality and identified the strong impact of the data augmentation process. Knowledge of the factors that models evaluate to make decisions can increase confidence in the high classification potential of CNNs.
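The error set studied here (116 divergent outputs, 7.6% of images) is exactly the off-diagonal mass of a confusion matrix. A sketch of assembling such an error set, with a hypothetical three-class matrix (labels and counts invented for illustration):

```python
import numpy as np

def misclassification_summary(cm, labels):
    """Off-diagonal cells of a confusion matrix: the error set to review."""
    cm = np.asarray(cm)
    total = cm.sum()
    errors = [
        (labels[i], labels[j], int(cm[i, j]))   # (true, predicted, count)
        for i in range(len(labels))
        for j in range(len(labels))
        if i != j and cm[i, j] > 0
    ]
    error_rate = float((total - np.trace(cm)) / total)
    return errors, error_rate

# Hypothetical counts for three elementary-lesion classes.
labels = ["plaque", "ulcer", "nodule"]
cm = [[50, 3, 1],
      [4, 40, 2],
      [0, 2, 30]]
errors, rate = misclassification_summary(cm, labels)
```

Each `(true, predicted, count)` triple pinpoints a systematic confusion worth qualitative review, mirroring the study's error-evaluation approach.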
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Jean Schmith
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Rodrigo Marques de Figueiredo
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Samuel Armbrust Freitas
- Department of Applied Computing, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Juliana Romanini
- Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
- Janete Dias Almeida
- Department of Biosciences and Oral Diagnostics, São Paulo State University, Campus São José dos Campos, São Paulo, Brazil
- Jonas de Almeida Rodrigues
- Department of Surgery and Orthopaedics, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Vinicius Coelho Carrard
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil; TelessaudeRS-UFRGS, Federal University of Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil; Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
25
Song B, KC DR, Yang RY, Li S, Zhang C, Liang R. Classification of Mobile-Based Oral Cancer Images Using the Vision Transformer and the Swin Transformer. Cancers (Basel) 2024; 16:987. [PMID: 38473348 PMCID: PMC10931180 DOI: 10.3390/cancers16050987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2024] [Revised: 02/23/2024] [Accepted: 02/26/2024] [Indexed: 03/14/2024] Open
Abstract
Oral cancer, a pervasive and rapidly growing malignant disease, poses a significant global health concern. Early and accurate diagnosis is pivotal for improving patient outcomes. Automatic diagnosis methods based on artificial intelligence have shown promising results in the oral cancer field, but their accuracy still needs to be improved for realistic diagnostic scenarios. Vision Transformers (ViT) have recently outperformed CNN models in many computer vision benchmark tasks. This study explores the effectiveness of the Vision Transformer and the Swin Transformer, two cutting-edge variants of the transformer architecture, for mobile-based oral cancer image classification. The pre-trained Swin Transformer model achieved 88.7% accuracy in the binary classification task, outperforming the ViT model by 2.3%, while the conventional convolutional network models VGG19 and ResNet50 achieved 85.2% and 84.5% accuracy, respectively. Our experiments demonstrate that these transformer-based architectures outperform traditional convolutional neural networks in oral cancer image classification, and underscore the potential of the ViT and the Swin Transformer in advancing the state of the art in oral cancer image analysis.
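Both architectures compared here are built on the same scaled dot-product self-attention operation (Swin additionally restricts it to shifted local windows). A minimal single-head NumPy sketch of that core operation, not the authors' implementation:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Core operation shared by ViT and Swin blocks (single head)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (n_tokens, n_tokens)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v, weights

rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 8))   # 6 patch embeddings, dim 8
out, attn = scaled_dot_product_attention(tokens, tokens, tokens)
```

Each output token is a convex combination of all value vectors, which is what lets transformers relate distant image patches in a single layer.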
Affiliation(s)
- Bofan Song
- Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ 85721, USA
- Dharma Raj KC
- Computer Science Department, The University of Arizona, Tucson, AZ 85721, USA
- Rubin Yuchan Yang
- Computer Science Department, The University of Arizona, Tucson, AZ 85721, USA
- Shaobai Li
- Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ 85721, USA
- Chicheng Zhang
- Computer Science Department, The University of Arizona, Tucson, AZ 85721, USA
- Rongguang Liang
- Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ 85721, USA
26
Richards-Kortum R, Lorenzoni C, Bagnato VS, Schmeler K. Optical imaging for screening and early cancer diagnosis in low-resource settings. NATURE REVIEWS BIOENGINEERING 2024; 2:25-43. [PMID: 39301200 PMCID: PMC11412616 DOI: 10.1038/s44222-023-00135-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 10/05/2023] [Indexed: 09/22/2024]
Abstract
Low-cost optical imaging technologies have the potential to reduce inequalities in healthcare by improving the detection of pre-cancer or early cancer and enabling more effective and less invasive treatment. In this Review, we summarise technologies for in vivo widefield, multi-spectral, endoscopic, and high-resolution optical imaging that could offer affordable approaches to improve cancer screening and early detection at the point-of-care. Additionally, we discuss approaches to slide-free microscopy, including confocal imaging, lightsheet microscopy, and phase modulation techniques that can reduce the infrastructure and expertise needed for definitive cancer diagnosis. We also evaluate how machine learning-based algorithms can improve the accuracy and accessibility of optical imaging systems and provide real-time image analysis. To achieve the potential of optical technologies, developers must ensure that devices are easy to use; the optical technologies must be evaluated in multi-institutional, prospective clinical tests in the intended setting; and the barriers to commercial scale-up in under-resourced markets must be overcome. Therefore, test developers should view the production of simple and effective diagnostic tools that are accessible and affordable for all countries and settings as a central goal of their profession.
Affiliation(s)
- Rebecca Richards-Kortum
- Department of Bioengineering, Rice University, Houston, TX, USA
- Institute for Global Health Technologies, Rice University, Houston, TX, USA
- Cesaltina Lorenzoni
- National Cancer Control Program, Ministry of Health, Maputo, Mozambique
- Department of Pathology, Universidade Eduardo Mondlane (UEM), Maputo, Mozambique
- Maputo Central Hospital, Maputo, Mozambique
- Vanderlei S Bagnato
- São Carlos Institute of Physics, University of São Paulo, São Carlos, Brazil
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Kathleen Schmeler
- Department of Gynecologic Oncology and Reproductive Medicine, The University of Texas M.D. Anderson Cancer Center, Houston, TX, USA
27
Zuhair V, Babar A, Ali R, Oduoye MO, Noor Z, Chris K, Okon II, Rehman LU. Exploring the Impact of Artificial Intelligence on Global Health and Enhancing Healthcare in Developing Nations. J Prim Care Community Health 2024; 15:21501319241245847. [PMID: 38605668 PMCID: PMC11010755 DOI: 10.1177/21501319241245847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2023] [Revised: 03/19/2024] [Accepted: 03/21/2024] [Indexed: 04/13/2024] Open
Abstract
BACKGROUND Artificial intelligence (AI), which combines computer science with extensive datasets, seeks to mimic human-like intelligence. Subsets of AI are being applied in almost all fields of medicine and surgery. AIM This review focuses on the applications of AI in healthcare settings in developing countries, comprehensively outlining the advancements made thus far, the shortcomings encountered in AI applications, the present status of AI integration, persistent challenges, and innovative strategies to surmount them. METHODOLOGY Articles from PubMed, Google Scholar, and Cochrane were searched from 2000 to 2023 with keywords including AI and healthcare, focusing on multiple medical specialties. RESULTS The increasing role of AI in diagnosis, prognosis prediction, and patient management, as well as hospital management and community healthcare, has made the overall healthcare system more efficient, especially in high-patient-load setups and resource-limited areas of developing countries where patient care is often compromised. However, challenges, including low adoption rates, the absence of standardized guidelines, high installation and maintenance costs of equipment, and poor transportation and connectivity, hinder AI's full use in healthcare. CONCLUSION Despite these challenges, AI holds a promising future in healthcare. Adequate knowledge and expertise among healthcare professionals for the use of AI technology in healthcare is imperative in developing nations.
Affiliation(s)
- Varisha Zuhair
- Jinnah Sindh Medical University, Karachi, Sindh, Pakistan
- Areesha Babar
- Jinnah Sindh Medical University, Karachi, Sindh, Pakistan
- Rabbiya Ali
- Jinnah Sindh Medical University, Karachi, Sindh, Pakistan
- Malik Olatunde Oduoye
- The Medical Research Circle (MedReC), Gisenyi, Goma, Democratic Republic of the Congo
- Zainab Noor
- Institute of Dentistry, CMH Lahore Medical College, Lahore, Punjab, Pakistan
- Kitumaini Chris
- The Medical Research Circle (MedReC), Gisenyi, Goma, Democratic Republic of the Congo
- Université Libre des Pays des Grands-Lacs, Goma, North-Kivu, Democratic Republic of the Congo
- Inibehe Ime Okon
- The Medical Research Circle (MedReC), Gisenyi, Goma, Democratic Republic of the Congo
- NiMSA SCOPH, Uyo, Akwa-Ibom State, Nigeria
28
Temilola DO, Adeola HA, Grobbelaar J, Chetty M. Liquid Biopsy in Head and Neck Cancer: Its Present State and Future Role in Africa. Cells 2023; 12:2663. [PMID: 37998398 PMCID: PMC10670726 DOI: 10.3390/cells12222663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2023] [Revised: 11/12/2023] [Accepted: 11/17/2023] [Indexed: 11/25/2023] Open
Abstract
The rising mortality and morbidity rate of head and neck cancer (HNC) in Africa has been attributed to factors such as the poor state of health infrastructures, genetics, and late presentation resulting in the delayed diagnosis of these tumors. If well harnessed, emerging molecular and omics diagnostic technologies such as liquid biopsy can potentially play a major role in optimizing the management of HNC in Africa. However, to successfully apply liquid biopsy technology in the management of HNC in Africa, factors such as genetic, socioeconomic, environmental, and cultural acceptability of the technology must be given due consideration. This review outlines the role of circulating molecules such as tumor cells, tumor DNA, tumor RNA, proteins, and exosomes, in liquid biopsy technology for the management of HNC with a focus on studies conducted in Africa. The present state and the potential opportunities for the future use of liquid biopsy technology in the effective management of HNC in resource-limited settings such as Africa is further discussed.
Affiliation(s)
- Dada Oluwaseyi Temilola
- Department of Craniofacial Biology, Faculty of Dentistry, University of the Western Cape, Tygerberg Hospital, Cape Town 7505, South Africa
- Henry Ademola Adeola
- Department of Oral and Maxillofacial Pathology, Faculty of Dentistry, University of the Western Cape, Tygerberg Hospital, Cape Town 7505, South Africa
- Division of Dermatology, Department of Medicine, Faculty of Health Sciences and Groote Schuur Hospital, University of Cape Town, Cape Town 7925, South Africa
- Johan Grobbelaar
- Division of Otorhinolaryngology, Department of Surgical Sciences, Faculty of Medicine and Health Sciences, Stellenbosch University, Tygerberg Hospital, Cape Town 7505, South Africa
- Manogari Chetty
- Department of Craniofacial Biology, Faculty of Dentistry, University of the Western Cape, Tygerberg Hospital, Cape Town 7505, South Africa
29
Fonseca AU, Felix JP, Pinheiro H, Vieira GS, Mourão ÝC, Monteiro JCG, Soares F. An Intelligent System to Improve Diagnostic Support for Oral Squamous Cell Carcinoma. Healthcare (Basel) 2023; 11:2675. [PMID: 37830712 PMCID: PMC10572543 DOI: 10.3390/healthcare11192675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Revised: 09/12/2023] [Accepted: 09/25/2023] [Indexed: 10/14/2023] Open
Abstract
Oral squamous cell carcinoma (OSCC) is one of the most prevalent cancer types worldwide, and it poses a serious threat to public health due to its high mortality and morbidity rates. OSCC typically has a poor prognosis, significantly reducing the chances of patient survival. Therefore, early detection is crucial to achieving a favorable prognosis by providing prompt treatment and increasing the chances of remission. Salivary biomarkers have been established in numerous studies to be a trustworthy and non-invasive alternative for early cancer detection. In this sense, we propose an intelligent system that utilizes feed-forward artificial neural networks to classify carcinoma with salivary biomarkers extracted from control and OSCC patient samples. We conducted experiments using various numbers of salivary biomarkers, ranging from 1 to 51, to train the model, and we achieved excellent results, with precision, sensitivity, and specificity values of 98.53%, 96.30%, and 97.56%, respectively. Our system effectively classified the initial cases of OSCC with different numbers of biomarkers, aiding medical professionals in decision-making and providing a more accurate diagnosis. This could contribute to a higher chance of treatment success and patient survival. Furthermore, the minimalist configuration of our model presents the potential for incorporation into resource-limited devices or environments.
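The three metrics reported above follow directly from the confusion counts of a binary classifier. A minimal sketch with hypothetical predictions (values invented, not the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Precision, sensitivity (recall), and specificity from labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "precision": tp / (tp + fp),       # of predicted OSCC, fraction correct
        "sensitivity": tp / (tp + fn),     # of true OSCC, fraction found
        "specificity": tn / (tn + fp),     # of true controls, fraction found
    }

# Hypothetical OSCC (1) vs control (0) predictions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
m = binary_metrics(y_true, y_pred)
```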
Affiliation(s)
- Afonso U. Fonseca
- Institute of Informatics, Federal University of Goiás, Goiânia 74690-900, GO, Brazil
- Juliana P. Felix
- Institute of Informatics, Federal University of Goiás, Goiânia 74690-900, GO, Brazil
- Hedenir Pinheiro
- Institute of Informatics, Federal University of Goiás, Goiânia 74690-900, GO, Brazil
- Gabriel S. Vieira
- Institute of Informatics, Federal University of Goiás, Goiânia 74690-900, GO, Brazil
- Federal Institute Goiano, Computer Vision Lab, Urutaí 75790-000, GO, Brazil
- Fabrizzio Soares
- Institute of Informatics, Federal University of Goiás, Goiânia 74690-900, GO, Brazil
30
Liyanage V, Tao M, Park JS, Wang KN, Azimi S. Malignant and non-malignant oral lesions classification and diagnosis with deep neural networks. J Dent 2023; 137:104657. [PMID: 37574105 DOI: 10.1016/j.jdent.2023.104657] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Revised: 08/04/2023] [Accepted: 08/10/2023] [Indexed: 08/15/2023] Open
Abstract
OBJECTIVES Given the increasing incidence of oral cancer, it is essential to provide high-risk communities, especially in remote regions, with an affordable, user-friendly tool for visual lesion diagnosis. This proof-of-concept study explored the utility and feasibility of a smartphone application that can photograph and diagnose oral lesions. METHODS The images of oral lesions with confirmed diagnoses were sourced from oral and maxillofacial textbooks. In total, 342 images were extracted, encompassing lesions from various regions of the oral cavity such as the gingiva, palate, and labial mucosa. The lesions were segregated into three categories: Class 1 represented non-neoplastic lesions, Class 2 included benign neoplasms, and Class 3 contained premalignant/malignant lesions. The images were analysed using MobileNetV3 and EfficientNetV2 models, with the process producing an accuracy curve, confusion matrix, and receiver operating characteristic (ROC) curve. RESULTS The EfficientNetV2 model showed a steep increase in validation accuracy early in the iterations, plateauing at a score of 0.71. According to the confusion matrix, this model's testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions was 64% and 80%, respectively. Conversely, the MobileNetV3 model exhibited a more gradual increase, reaching a plateau at a validation accuracy of 0.70. The MobileNetV3 model's testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions, according to the confusion matrix, was 64% and 82%, respectively. CONCLUSIONS Our proof-of-concept study effectively demonstrated the potential accuracy of AI software in distinguishing malignant lesions. This could play a vital role in remote screenings for populations with limited access to dental practitioners. However, the discrepancies between the classification of images and the results for "non-malignant lesions" call for further refinement of the models and the classification system used.
CLINICAL SIGNIFICANCE The findings of this study indicate that AI software has the potential to aid in the identification or screening of malignant oral lesions. Further improvements are required to enhance accuracy in classifying non-malignant lesions.
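The ROC analysis used above reduces, for the area under the curve, to the probability that a randomly chosen positive case scores above a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with toy labels and scores:

```python
def auroc(y_true, scores):
    """AUROC as P(score_pos > score_neg), with ties counted half."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a classifier that ranks both malignant cases (1)
# above both benign cases (0) gets a perfect AUROC of 1.0.
perfect = auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```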
Affiliation(s)
- Viduni Liyanage
- International Research Collaborative - Oral Health and Equity, The University of Western Australia, Crawley, Western Australia, Australia; UWA Dental School, The University of Western Australia, Nedlands, Western Australia, Australia
- Mengqiu Tao
- School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, Australia
- Joon Soo Park
- International Research Collaborative - Oral Health and Equity, The University of Western Australia, Crawley, Western Australia, Australia; UWA Dental School, The University of Western Australia, Nedlands, Western Australia, Australia; School of Engineering, Information Technology and Physical Sciences, Federation University, Ballarat, Victoria, Australia
- Kate N Wang
- School of Biomedical and Health Sciences, Royal Melbourne Institute of Technology, Bundoora, Victoria, Australia
- Somayyeh Azimi
- International Research Collaborative - Oral Health and Equity, The University of Western Australia, Crawley, Western Australia, Australia
31
Zhou L, Jiang H, Li G, Ding J, Lv C, Duan M, Wang W, Chen K, Shen N, Huang X. Point-wise spatial network for identifying carcinoma at the upper digestive and respiratory tract. BMC Med Imaging 2023; 23:140. [PMID: 37749498 PMCID: PMC10521533 DOI: 10.1186/s12880-023-01076-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 08/07/2023] [Indexed: 09/27/2023] Open
Abstract
PROBLEM Artificial intelligence has been widely investigated for diagnosis and treatment strategy design, with some models proposed for detecting oral pharyngeal, nasopharyngeal, or laryngeal carcinoma. However, no comprehensive model has been established for these regions. AIM Our hypothesis was that a common pattern in the cancerous appearance of these regions could be recognized and integrated into a single model, thus improving the efficacy of deep learning models. METHODS We utilized a point-wise spatial attention network model to perform semantic segmentation in these regions. RESULTS Our study demonstrated an excellent outcome, with an average mIoU of 86.3%, and an average pixel accuracy of 96.3%. CONCLUSION The research confirmed that the mucosa of oral pharyngeal, nasopharyngeal, and laryngeal regions may share a common appearance, including the appearance of tumors, which can be recognized by a single artificial intelligence model. Therefore, a deep learning model could be constructed to effectively recognize these tumors.
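The two reported metrics, mean IoU and pixel accuracy, can be computed directly from predicted and ground-truth label maps. A minimal NumPy sketch with a toy 2x2 segmentation (not the study's data):

```python
import numpy as np

def segmentation_metrics(pred, gt, n_classes):
    """Mean IoU and pixel accuracy for integer label maps."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:                       # skip classes absent everywhere
            ious.append(inter / union)
    miou = float(np.mean(ious))
    pixel_acc = float((pred == gt).mean())
    return miou, pixel_acc

pred = [[0, 0],
        [1, 1]]      # predicted mask (0 = background, 1 = tumor)
gt = [[0, 1],
      [1, 1]]        # ground-truth mask
miou, acc = segmentation_metrics(pred, gt, 2)
```

mIoU penalizes class-wise overlap errors more strictly than pixel accuracy, which is why the paper's 86.3% mIoU sits below its 96.3% pixel accuracy.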
Affiliation(s)
- Lei Zhou
- Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, P. R. China
- Huaili Jiang
- Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, P. R. China
- Guangyao Li
- Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, P. R. China
- Jiaye Ding
- Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, P. R. China
- Cuicui Lv
- Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, P. R. China
- Maoli Duan
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
- Department of Otolaryngology Head and Neck Surgery, Karolinska University Hospital, 171 76, Stockholm, Sweden
- Wenfeng Wang
- Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou, 510006, P. R. China
- Kongyang Chen
- Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou, 510006, P. R. China
- Pazhou Lab, Guangzhou, 510330, P. R. China
- Na Shen
- Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, P. R. China
- Xinsheng Huang
- Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, P. R. China
32
Ramachandran RA, Barão VAR, Ozevin D, Sukotjo C, Srinivasa PP, Mathew M. Early Predicting Tribocorrosion Rate of Dental Implant Titanium Materials Using Random Forest Machine Learning Models. TRIBOLOGY INTERNATIONAL 2023; 187:108735. [PMID: 37720691 PMCID: PMC10503681 DOI: 10.1016/j.triboint.2023.108735] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/19/2023]
Abstract
Early detection and prediction of bio-tribocorrosion can avert unexpected damage that may lead to secondary revision surgery and the associated risks of implantable devices. Therefore, this study sought to develop a state-of-the-art prediction technique leveraging machine learning (ML) models to classify and predict the possibility of mechanical degradation in dental implant materials. Key features considered in the study, which involved pure titanium and titanium-zirconium (zirconium = 5, 10, and 15 wt%) alloys, include corrosion potential, acoustic emission (AE) absolute energy, hardness, and weight-loss estimates. The deployed ML prototype models confirm their suitability for tribocorrosion prediction, with an accuracy above 90%. The proposed system could evolve into a continuous structural-health monitoring approach as well as a reliable predictive modeling technique for dental implant monitoring.
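The study's random forest operates on tabular features (corrosion potential, AE absolute energy, hardness, weight loss). As an illustration of the bagging-and-voting idea only, not the authors' pipeline, with depth-1 "stumps" standing in for full trees and invented feature values, a pure-Python sketch:

```python
import random

def train_stump(X, y):
    """Best single-feature threshold classifier on (X, y)."""
    best = None
    for f in range(len(X[0])):
        values = sorted({row[f] for row in X})
        # Always include one threshold below the minimum so a constant
        # classifier exists even for degenerate bootstrap samples.
        thresholds = [values[0] - 1.0] + [
            (a + b) / 2 for a, b in zip(values, values[1:])
        ]
        for thr in thresholds:
            for flip in (0, 1):
                preds = [(1 if row[f] > thr else 0) ^ flip for row in X]
                err = sum(p != t for p, t in zip(preds, y))
                if best is None or err < best[0]:
                    best = (err, f, thr, flip)
    return best[1:]

def stump_predict(stump, row):
    f, thr, flip = stump
    return (1 if row[f] > thr else 0) ^ flip

def train_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump is fit on a bootstrap resample of (X, y)."""
    rng = random.Random(seed)
    n = len(X)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, row):
    votes = sum(stump_predict(s, row) for s in forest)
    return int(votes * 2 >= len(forest))   # majority vote

# Toy rows: [corrosion potential (V), AE absolute energy (aJ)];
# label 1 = high degradation. All values are invented for illustration.
X = [[-0.30, 12.0], [-0.28, 10.0], [-0.32, 11.0],
     [-0.55, 40.0], [-0.60, 38.0], [-0.58, 42.0]]
y = [0, 0, 0, 1, 1, 1]
forest = train_forest(X, y)
preds = [forest_predict(forest, row) for row in X]
```

A production version would use full decision trees and per-split feature subsampling (e.g. scikit-learn's `RandomForestClassifier`); the bootstrap-plus-vote structure is the same.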
Affiliation(s)
- Valentim A R Barão
- Department of Prosthodontics and Periodontology, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
- Didem Ozevin
- Department of Civil, Materials, and Environmental Engineering, University of Illinois at Chicago, IL, USA
- Cortino Sukotjo
- Department of Restorative Dentistry, College of Dentistry, University of Illinois at Chicago, IL, USA
- Pai P Srinivasa
- Department of Mechanical Engineering, NMAM IT, Nitte, Karnataka, India
- Mathew Mathew
- Department of Biomedical Engineering, University of Illinois at Chicago, IL, USA
- Department of Restorative Dentistry, College of Dentistry, University of Illinois at Chicago, IL, USA
33
Talwar V, Singh P, Mukhia N, Shetty A, Birur P, Desai KM, Sunkavalli C, Varma KS, Sethuraman R, Jawahar CV, Vinod PK. AI-Assisted Screening of Oral Potentially Malignant Disorders Using Smartphone-Based Photographic Images. Cancers (Basel) 2023; 15:4120. [PMID: 37627148 PMCID: PMC10452422 DOI: 10.3390/cancers15164120] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 08/07/2023] [Accepted: 08/09/2023] [Indexed: 08/27/2023] Open
Abstract
The prevalence of oral potentially malignant disorders (OPMDs) and oral cancer is surging in low- and middle-income countries. A lack of resources for population screening in remote locations delays the detection of these lesions in the early stages and contributes to higher mortality and a poor quality of life. Digital imaging and artificial intelligence (AI) are promising tools for cancer screening. This study aimed to evaluate the utility of AI-based techniques for detecting OPMDs in the Indian population using photographic images of oral cavities captured using a smartphone. A dataset comprising 1120 suspicious and 1058 non-suspicious oral cavity photographic images taken by trained front-line healthcare workers (FHWs) was used for evaluating the performance of different deep learning models based on convolution (DenseNets) and Transformer (Swin) architectures. The best-performing model was also tested on an additional independent test set comprising 440 photographic images taken by untrained FHWs (set I). DenseNet201 and Swin Transformer (base) models show high classification performance with an F1-score of 0.84 (CI 0.79-0.89) and 0.83 (CI 0.78-0.88) on the internal test set, respectively. However, the performance of models decreases on test set I, which has considerable variation in the image quality, with the best F1-score of 0.73 (CI 0.67-0.78) obtained using DenseNet201. The proposed AI model has the potential to identify suspicious and non-suspicious oral lesions using photographic images. This simplified image-based AI solution can assist in screening, early detection, and prompt referral for OPMDs.
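The confidence intervals quoted above are commonly obtained by a percentile bootstrap over the test set. A sketch with hypothetical labels (not the study's data):

```python
import random

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the F1-score."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        stats.append(f1_score([y_true[i] for i in idx],
                              [y_pred[i] for i in idx]))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical suspicious (1) vs non-suspicious (0) test labels.
y_true = [1, 0] * 50
y_pred = [1, 0] * 40 + [0, 1] * 10    # last 20 predictions are wrong
point = f1_score(y_true, y_pred)
lo, hi = bootstrap_ci(y_true, y_pred)
```

Wider intervals on the noisier field-worker test set (as seen in the paper's set I results) fall out naturally from the larger resample-to-resample variation.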
Affiliation(s)
- Vivek Talwar
- CVIT, International Institute of Information Technology, Hyderabad 500032, India
- Pragya Singh
- INAI, International Institute of Information Technology, Hyderabad 500032, India
- Nirza Mukhia
- Department of Oral Medicine and Radiology, KLE Society’s Institute of Dental Sciences, Bengaluru 560022, India
- Praveen Birur
- Department of Oral Medicine and Radiology, KLE Society’s Institute of Dental Sciences, Bengaluru 560022, India
- Karishma M. Desai
- iHUB-Data, International Institute of Information Technology, Hyderabad 500032, India
- Konala S. Varma
- INAI, International Institute of Information Technology, Hyderabad 500032, India
- Intel Technology India Private Limited, Bengaluru, India
- C. V. Jawahar
- CVIT, International Institute of Information Technology, Hyderabad 500032, India
- P. K. Vinod
- CCNSB, International Institute of Information Technology, Hyderabad 500032, India
34
Ruiz AJ, Allen R, Giallorenzi MK, Samkoe KS, Shane Chapman M, Pogue BW. Smartphone-based dual radiometric fluorescence and white-light imager for quantification of protoporphyrin IX in skin. JOURNAL OF BIOMEDICAL OPTICS 2023; 28:086003. [PMID: 37638107 PMCID: PMC10460113 DOI: 10.1117/1.jbo.28.8.086003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 07/02/2023] [Accepted: 07/06/2023] [Indexed: 08/29/2023]
Abstract
Significance The quantification of protoporphyrin IX (PpIX) in skin can be used to study photodynamic therapy (PDT) treatments, understand porphyrin mechanisms, and enhance preoperative mapping of non-melanoma skin cancers. Aim We aim to develop a smartphone-based imager for performing simultaneous radiometric fluorescence (FL) and white light (WL) imaging to study the baseline levels, accumulation, and photobleaching of PpIX in skin. Approach A smartphone-based dual FL and WL imager (sDUO) is introduced alongside new radiometric calibration methods for providing SI units of measurement in both pre-clinical and clinical settings. These radiometric measurements and corresponding PpIX concentration estimations are applied to clinical measurements to understand mechanistic differences between PDT treatments, accumulation differences between normal tissue and actinic keratosis lesions, and the correlation of photosensitizer concentrations to treatment outcomes. Results The sDUO, alongside the developed methods, provided radiometric FL measurements (nW/cm²) with demonstrated sub-nanomolar PpIX sensitivity in 1% intralipid phantoms. Patients undergoing PDT treatment of actinic keratosis (AK) lesions were imaged, capturing the increase and subsequent decrease in FL associated with the incubation and irradiation timepoints of lamp-based PDT. Furthermore, the clinical measurements showed mechanistic differences in new daylight-based treatment modalities alongside the selective accumulation of PpIX within AK lesions. The radiometric calibration enabled reporting of detected PpIX FL in nW/cm², and liquid phantom measurements allowed estimation of in vivo molar concentrations of skin PpIX. Conclusions The phantom, pre-clinical, and clinical measurements demonstrated the capability of the sDUO to provide quantitative measurements of PpIX FL, including quantification of PpIX accumulation and photobleaching in a clinical setting, with implications for improving the diagnosis and treatment of various skin conditions.
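A radiometric calibration of the kind described above can be sketched as a linear mapping from background-subtracted camera counts to irradiance, anchored by a source of known output. The function and numbers below are hypothetical illustrations, not the sDUO's actual pipeline:

```python
def counts_to_irradiance(counts, exposure_s, gain, cal_factor):
    """Convert background-subtracted camera counts to fluorescence irradiance
    (nW/cm^2), assuming a linear sensor response: irradiance is proportional to
    counts per unit exposure and gain, scaled by a calibration factor measured
    against a source of known irradiance."""
    return cal_factor * counts / (exposure_s * gain)

# Hypothetical calibration: 5000 counts at 0.1 s exposure, gain 2.0,
# when imaging a known 10 nW/cm^2 reference source.
cal_factor = 10.0 * (0.1 * 2.0) / 5000.0  # nW/cm^2 per (count / (s * gain))
print(counts_to_irradiance(2500.0, 0.1, 2.0, cal_factor))  # 5.0
```

Normalizing by exposure and gain is what lets measurements taken with different camera settings be compared in the same SI units.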
Affiliation(s)
- Alberto J. Ruiz
- Dartmouth College, Thayer School of Engineering, Hanover, New Hampshire, United States
- QUEL Imaging, LLC, White River Junction, Vermont, United States
- Richard Allen
- QUEL Imaging, LLC, White River Junction, Vermont, United States
- Mia K. Giallorenzi
- Dartmouth College, Thayer School of Engineering, Hanover, New Hampshire, United States
- Kimberley S. Samkoe
- Dartmouth College, Thayer School of Engineering, Hanover, New Hampshire, United States
- M. Shane Chapman
- Dartmouth Health, Department of Dermatology, Lebanon, New Hampshire, United States
- Brian W. Pogue
- Dartmouth College, Thayer School of Engineering, Hanover, New Hampshire, United States
- University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States

35
Dee EC, Ho FDV, Yee K, Lin VK. Survivorship Care for People With Cancer in the Indo-Pacific: The Imperative to Harness Political Determinants, International Exchange, and Technological Innovation. JCO Glob Oncol 2023; 9:e2300052. [PMID: 37290023] [PMCID: PMC10497291] [DOI: 10.1200/go.23.00052]
Affiliation(s)
- Edward Christopher Dee, MD, Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Frances Dominique V. Ho, BSc, College of Medicine, University of the Philippines, Manila, Philippines
- Kaisin Yee, BSc, Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA, USA, and SingHealth Duke-NUS Global Health Institute, Singapore, Singapore
- Vivian K. Lin, DrPH, MPH, LKS Faculty of Medicine, University of Hong Kong, Hong Kong, China

36
Gomes RFT, Schuch LF, Martins MD, Honório EF, de Figueiredo RM, Schmith J, Machado GN, Carrard VC. Use of Deep Neural Networks in the Detection and Automated Classification of Lesions Using Clinical Images in Ophthalmology, Dermatology, and Oral Medicine: A Systematic Review. J Digit Imaging 2023; 36:1060-1070. [PMID: 36650299] [PMCID: PMC10287602] [DOI: 10.1007/s10278-023-00775-3]
Abstract
Artificial neural networks (ANNs) are artificial intelligence (AI) techniques used in the automated recognition and classification of pathological changes from clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining notoriety for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANNs and deep learning in the recognition and automated classification of lesions from clinical images, compared with human performance. The PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) was used, searching four databases for studies that used AI to define the diagnosis of lesions in the areas of ophthalmology, dermatology, and oral medicine. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded 60 included studies. Interest in the topic has increased, especially in the last 3 years. The performance of AI models is promising, with high accuracy, sensitivity, and specificity; most had outcomes equivalent to human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have progressively improved. AI resources have the potential to contribute to several areas of health. In the coming years, they are likely to be incorporated into everyday life, contributing to diagnostic precision and reducing the time required by the diagnostic process.
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Lauren Frenzel Schuch
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Manoela Domingues Martins
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Rodrigo Marques de Figueiredo
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Jean Schmith
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Giovanna Nunes Machado
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Vinicius Coelho Carrard
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Epidemiology, School of Medicine, TelessaúdeRS-UFRGS, Federal University of Rio Grande Do Sul, Porto Alegre, RS, Brazil
- Department of Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, RS, Brazil

37
Nguyen J, Takesh T, Parsangi N, Song B, Liang R, Wilder-Smith P. Compliance with Specialist Referral for Increased Cancer Risk in Low-Resource Settings: In-Person vs. Telehealth Options. Cancers (Basel) 2023; 15:2775. [PMID: 37345112] [PMCID: PMC10216349] [DOI: 10.3390/cancers15102775]
Abstract
Efforts are underway to improve the accuracy of non-specialist screening for oral cancer (OC) risk, yet better screening will only translate into improved outcomes if at-risk individuals comply with specialist referral. Most individuals from low-resource, minority, and underserved (LRMU) populations fail to complete a specialist referral for OC risk. The goal was to evaluate the impact of a novel approach on specialist referral compliance in individuals with a positive OC risk screening outcome. A total of 60 LRMU subjects who had screened positive for increased OC risk were recruited and given the choice of referral for an in-person (20 subjects) or a telehealth (40 subjects) specialist visit. Referral compliance was tracked weekly over 6 months. Compliance was 30% in the in-person group, and 83% in the telehealth group. Approximately 83-85% of subjects from both groups who had complied with the first specialist referral complied with a second follow-up in-person specialist visit. Overall, 72.5% of subjects who had chosen a remote first specialist visit had entered into the continuum of care by the study end, vs. 25% of individuals in the in-person specialist group. A two-step approach that uses telehealth to overcome barriers may improve specialist referral compliance in LRMU individuals with increased OC risk.
Affiliation(s)
- James Nguyen
- Beckman Laser Institute and Medical Clinic, University of California Irvine School of Medicine, Irvine, CA 92612, USA
- Thair Takesh
- Beckman Laser Institute and Medical Clinic, University of California Irvine School of Medicine, Irvine, CA 92612, USA
- Negah Parsangi
- Beckman Laser Institute and Medical Clinic, University of California Irvine School of Medicine, Irvine, CA 92612, USA
- Bofan Song
- College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
- Rongguang Liang
- College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
- Petra Wilder-Smith
- Beckman Laser Institute and Medical Clinic, University of California Irvine School of Medicine, Irvine, CA 92612, USA

38
Raman S, Shafie AA, Tan BY, Abraham MT, Chen Kiong S, Cheong SC. Economic Evaluation of Oral Cancer Screening Programs: Review of Outcomes and Study Designs. Healthcare (Basel) 2023; 11:1198. [PMID: 37108032] [PMCID: PMC10138408] [DOI: 10.3390/healthcare11081198]
Abstract
A lack of guidance on economic evaluations for oral cancer screening programs makes it challenging for policymakers and researchers to fill the knowledge gap on their cost-effectiveness. This systematic review thus aims to compare the outcomes and designs of such evaluations. A search for economic evaluations of oral cancer screening was performed on Medline, CINAHL, Cochrane, PubMed, health technology assessment databases, and EBSCO Open Dissertations. The quality of studies was appraised using QHES and the Philips Checklist. Data abstraction was based on reported outcomes and study design characteristics. Of the 362 studies identified, 28 were evaluated for eligibility. The final six studies reviewed consisted of modeling approaches (n = 4), a randomized controlled trial (n = 1), and a retrospective observational study (n = 1). Screening initiatives were mostly shown to be cost-effective compared to non-screening. However, inter-study comparisons remained ambiguous due to large variations. The observational and randomized controlled trials provided considerably accurate evidence of implementation costs and outcomes. Modeling approaches, conversely, appeared more feasible for estimating long-term consequences and exploring strategy options. The current evidence of the cost-effectiveness of oral cancer screening remains heterogeneous and inadequate to support its institutionalization. Nevertheless, evaluations incorporating modeling methods may provide a practical and robust solution.
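Cost-effectiveness comparisons of the kind summarized above typically rest on the incremental cost-effectiveness ratio (ICER). A minimal sketch with hypothetical per-person numbers, not figures from any reviewed study:

```python
def icer(cost_screen, cost_no_screen, effect_screen, effect_no_screen):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    health effect (e.g. per QALY gained) of screening vs. no screening."""
    return (cost_screen - cost_no_screen) / (effect_screen - effect_no_screen)

# Hypothetical per-person values: screening costs more but yields more QALYs.
ratio = icer(cost_screen=1200.0, cost_no_screen=800.0,
             effect_screen=6.2, effect_no_screen=6.0)
print(ratio)  # cost per QALY gained

willingness_to_pay = 30000.0  # hypothetical threshold per QALY
print(ratio <= willingness_to_pay)  # cost-effective under this threshold
```

The decision rule compares the ICER to a willingness-to-pay threshold; both the threshold and the effect measure (QALYs, life-years, cases detected) vary across the studies the review compares, which is one source of the heterogeneity it reports.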
Affiliation(s)
- Sivaraj Raman
- Centre for Health Economics Research, Institute for Health Systems Research, National Institutes of Health, Shah Alam 40170, Malaysia
- Asrul Akmal Shafie
- Institutional Planning and Strategic Center, Universiti Sains Malaysia, Penang 11800, Malaysia
- Discipline of Social and Administrative Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia
- Bee Ying Tan
- Discipline of Social and Administrative Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia
- Mannil Thomas Abraham
- Oral and Maxillofacial Surgery Department, Hospital Tengku Ampuan Rahimah, Ministry of Health, Klang 41200, Malaysia
- Shim Chen Kiong
- Oral and Maxillofacial Surgery Department, Hospital Umum Sarawak, Ministry of Health, Kuching 93586, Malaysia
- Sok Ching Cheong
- Digital Health Research Unit, Cancer Research Malaysia, Subang Jaya 47500, Malaysia
- Department of Oral and Maxillofacial Clinical Sciences, Faculty of Dentistry, University of Malaya, Kuala Lumpur 50603, Malaysia

39
Dixit S, Kumar A, Srinivasan K. A Current Review of Machine Learning and Deep Learning Models in Oral Cancer Diagnosis: Recent Technologies, Open Challenges, and Future Research Directions. Diagnostics (Basel) 2023; 13:1353. [PMID: 37046571] [PMCID: PMC10093759] [DOI: 10.3390/diagnostics13071353]
Abstract
Cancer is a problematic global health issue with an extremely high fatality rate throughout the world. The application of various machine learning (ML) techniques that have appeared in the field of cancer diagnosis in recent years has provided meaningful insights into efficient and precise treatment decision-making. Due to rapid advancements in sequencing technologies, the detection of cancer based on gene expression data has improved over the years. Different types of cancer affect different parts of the body in different ways. Cancer that affects the mouth, lip, and upper throat is known as oral cancer (OC), the sixth most prevalent form of cancer worldwide. India, Bangladesh, China, the United States, and Pakistan are the top five countries with the highest rates of oral cavity and lip cancer. The major causes of oral cancer are excessive use of tobacco and cigarette smoking. Many lives can be saved if OC is detected early. Early identification and diagnosis can assist doctors in providing better patient care and effective treatment. OC screening may advance with the implementation of artificial intelligence (AI) techniques. AI can assist the oncology sector by accurately analyzing large datasets from several imaging modalities. This review deals with the implementation of AI during the early stages of cancer for the proper detection and treatment of OC. Furthermore, performance evaluations of several deep learning (DL) and ML models are presented to show that DL models can overcome the difficult challenges associated with early cancerous lesions in the mouth. For this review, we followed the PRISMA extension for scoping reviews (PRISMA-ScR) guidelines. Examining the reference lists of the chosen articles helped us gather further details on the subject. Additionally, we discussed AI's drawbacks and its potential use in oral cancer research. Risk factors can be reduced, for example by limiting tobacco and alcohol use and by immunization against HPV infection, to prevent oral cancer or lessen the burden of the disease. Effective methods for preventing oral diseases also include training programs for doctors and patients as well as facilitating early diagnosis by screening high-risk populations for the disease.
Affiliation(s)
- Shriniket Dixit
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Anant Kumar
- School of Bioscience and Technology, Vellore Institute of Technology, Vellore 632014, India
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India

40
de Souza LL, Fonseca FP, Araújo ALD, Lopes MA, Vargas PA, Khurram SA, Kowalski LP, Dos Santos HT, Warnakulasuriya S, Dolezal J, Pearson AT, Santos-Silva AR. Machine learning for detection and classification of oral potentially malignant disorders: A conceptual review. J Oral Pathol Med 2023; 52:197-205. [PMID: 36792771] [DOI: 10.1111/jop.13414]
Abstract
Oral potentially malignant disorders represent precursor lesions that may undergo malignant transformation to oral cancer. Many known risk factors are associated with the development of oral potentially malignant disorders and contribute to the risk of malignant transformation. Although many advances have been reported in understanding the biological behavior of oral potentially malignant disorders, the clinical features that indicate malignant transformation are not well established. Early diagnosis of malignancy is the most important factor in improving patients' prognosis. The integration of machine learning into routine diagnosis has recently emerged as an adjunct to aid clinical examination. Artificial intelligence (AI)-assisted medical devices are claimed to exceed human capability in the clinical detection of early cancer. Therefore, the aim of this narrative review is to introduce artificial intelligence terminology, concepts, and models currently used in oncology to familiarize oral medicine scientists with the language skills, best research practices, and knowledge for developing machine learning models applied to the clinical detection of oral potentially malignant disorders.
Affiliation(s)
- Lucas Lacerda de Souza
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Felipe Paiva Fonseca
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Department of Oral Surgery and Pathology, School of Dentistry, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Marcio Ajudarte Lopes
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Pablo Agustin Vargas
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Syed Ali Khurram
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Luiz Paulo Kowalski
- Department of Head and Neck Surgery, University of Sao Paulo Medical School and Department of Head and Neck Surgery and Otorhinolaryngology, AC Camargo Cancer Center, Sao Paulo, Brazil
- Harim Tavares Dos Santos
- Department of Otolaryngology-Head and Neck Surgery, University of Missouri, Columbia, Missouri, USA
- Department of Bond Life Sciences Center, University of Missouri, Columbia, Missouri, USA
- Saman Warnakulasuriya
- King's College London, London, UK
- WHO Collaborating Centre for Oral Cancer, London, UK
- James Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, Illinois, USA
- Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, Illinois, USA
- Alan Roger Santos-Silva
- Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil

41
Interpretable and Reliable Oral Cancer Classifier with Attention Mechanism and Expert Knowledge Embedding via Attention Map. Cancers (Basel) 2023; 15:1421. [PMID: 36900210] [PMCID: PMC10001266] [DOI: 10.3390/cancers15051421]
Abstract
Convolutional neural networks (CNNs) have demonstrated excellent performance in oral cancer detection and classification. However, the end-to-end learning strategy makes CNNs hard to interpret, and it can be challenging to fully understand the decision-making procedure. Additionally, reliability is a significant challenge for CNN-based approaches. In this study, we proposed a neural network called the attention branch network (ABN), which combines visual explanation and attention mechanisms to improve recognition performance and interpret decision-making simultaneously. We also embedded expert knowledge into the network by having human experts manually edit the attention maps for the attention mechanism. Our experiments showed that the ABN performs better than the original baseline network. By introducing Squeeze-and-Excitation (SE) blocks to the network, the cross-validation accuracy increased further. Furthermore, we observed that some previously misclassified cases were correctly recognized after the network was updated by manually editing the attention maps. The cross-validation accuracy increased from 0.846 to 0.875 with the ABN (ResNet18 as baseline), 0.877 with SE-ABN, and 0.903 after embedding expert knowledge. The proposed method provides an accurate, interpretable, and reliable oral cancer computer-aided diagnosis system through visual explanation, attention mechanisms, and expert knowledge embedding.
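The Squeeze-and-Excitation (SE) blocks mentioned above reweight feature channels with a learned gate: global average pooling per channel, a small bottleneck of two fully connected layers, and a sigmoid. A minimal numpy sketch of the mechanism, with random weights standing in for trained parameters:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a feature map x of shape (C, H, W):
    squeeze  - global average pool per channel -> (C,)
    excite   - bottleneck FC + ReLU, then FC + sigmoid -> gate in (0, 1) per channel
    rescale  - multiply each channel of x by its gate."""
    s = x.mean(axis=(1, 2))                  # squeeze
    z = np.maximum(w1 @ s, 0.0)              # excite: FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # excite: FC + sigmoid
    return x * gate[:, None, None]           # rescale channels

rng = np.random.default_rng(0)
channels, reduction = 8, 2
x = rng.standard_normal((channels, 4, 4))
w1 = rng.standard_normal((channels // reduction, channels)) * 0.1
w2 = rng.standard_normal((channels, channels // reduction)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # same shape as the input
```

Because the gate lies in (0, 1), each output channel is a damped copy of its input; in a trained network the gates learn to emphasize informative channels, which is the source of the accuracy gain the abstract reports for SE-ABN.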
42
Gomes RFT, Schmith J, de Figueiredo RM, Freitas SA, Machado GN, Romanini J, Carrard VC. Use of Artificial Intelligence in the Classification of Elementary Oral Lesions from Clinical Images. Int J Environ Res Public Health 2023; 20:3894. [PMID: 36900902] [PMCID: PMC10002140] [DOI: 10.3390/ijerph20053894]
Abstract
OBJECTIVES Artificial intelligence has generated a significant impact in the health field. The aim of this study was to train and validate a convolutional neural network (CNN)-based model to automatically classify oral lesion images into six clinical representation categories. METHOD The CNN model was developed to automatically classify images into six categories of elementary lesions: (1) papule/nodule; (2) macule/spot; (3) vesicle/bullous; (4) erosion; (5) ulcer; and (6) plaque. Four architectures were tested on our dataset: ResNet-50, VGG16, InceptionV3, and Xception. We used the confusion matrix as the main metric for CNN evaluation and discussion. RESULTS A total of 5069 images of oral mucosa lesions were used. Oral elementary lesion classification reached the best result using an architecture based on InceptionV3. After hyperparameter optimization, we reached more than 71% correct predictions in all six lesion classes. The classification achieved an average accuracy of 95.09% on our dataset. CONCLUSIONS We reported the development of an artificial intelligence model for the automated classification of elementary lesions from oral clinical images, achieving satisfactory performance. Future directions include studying the inclusion of trained layers to establish patterns of characteristics that determine benign, potentially malignant, and malignant lesions.
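The confusion-matrix evaluation used above can be reproduced generically for the six elementary-lesion classes; the labels below are hypothetical, not the study's data:

```python
CLASSES = ["papule/nodule", "macule/spot", "vesicle/bullous",
           "erosion", "ulcer", "plaque"]

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i][j] counts samples of true class i predicted as class j."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

def per_class_recall(cm):
    """Diagonal over row sums: fraction of each true class recovered,
    i.e. the 'correct predictions per lesion class' the abstract reports."""
    return [row[i] / sum(row) if sum(row) else 0.0
            for i, row in enumerate(cm)]

# Hypothetical predictions over the six classes.
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 4, 5, 5, 5]
cm = confusion_matrix(y_true, y_pred, len(CLASSES))
accuracy = sum(cm[i][i] for i in range(len(CLASSES))) / len(y_true)
print(round(accuracy, 3))   # overall accuracy
print(per_class_recall(cm))
```

The off-diagonal cells show which class pairs the model confuses (here, macule/spot with vesicle/bullous), which is why the confusion matrix is more informative than a single accuracy figure for multi-class lesion data.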
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Department of Oral Pathology, Faculdade de Odontologia, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre 90035-003, Brazil
- Jean Schmith
- Polytechnic School, University of Vale do Rio dos Sinos—UNISINOS, São Leopoldo 93022-750, Brazil
- Technology in Automation and Electronics Laboratory—TECAE Lab, University of Vale do Rio dos Sinos—UNISINOS, São Leopoldo 93022-750, Brazil
- Rodrigo Marques de Figueiredo
- Polytechnic School, University of Vale do Rio dos Sinos—UNISINOS, São Leopoldo 93022-750, Brazil
- Technology in Automation and Electronics Laboratory—TECAE Lab, University of Vale do Rio dos Sinos—UNISINOS, São Leopoldo 93022-750, Brazil
- Samuel Armbrust Freitas
- Department of Applied Computing, University of Vale do Rio dos Sinos—UNISINOS, São Leopoldo 93022-750, Brazil
- Giovanna Nunes Machado
- Polytechnic School, University of Vale do Rio dos Sinos—UNISINOS, São Leopoldo 93022-750, Brazil
- Juliana Romanini
- Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre 90035-003, Brazil
- Vinicius Coelho Carrard
- Department of Oral Pathology, Faculdade de Odontologia, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre 90035-003, Brazil
- Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre 90035-003, Brazil
- TelessaudeRS, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre 91501-970, Brazil

43
Kim HN, Kim K, Lee Y. Intra-Oral Photograph Analysis for Gingivitis Screening in Orthodontic Patients. Int J Environ Res Public Health 2023; 20:3705. [PMID: 36834398] [PMCID: PMC9967138] [DOI: 10.3390/ijerph20043705]
Abstract
This study aimed to confirm the presence of gingival inflammation through image analysis of the papillary gingiva using intra-oral photographs (IOPs) before and after orthodontic treatment and to confirm the possibility of using gingival image analysis for gingivitis screening. Five hundred and eighty-eight (n = 588) gingival sites from the IOPs of 98 patients were included. Twenty-five participants who had completed their orthodontic treatments and were aged between 20 and 37 were included. Six points on the papillary gingiva were selected in the maxillary and mandibular anterior incisors. The red/green (R/G) ratio values were obtained for the selected gingival images and the modified gingival index (GI) was compared. The change in the R/G values during the orthodontic treatment period appeared in the order of before orthodontic treatment (BO), mid-point of orthodontic treatment (MO), three-quarters of the way through orthodontic treatment (TO), and immediately after debonding (IDO), confirming that it was similar to the change in the GI. The R/G value of the gingiva in the image correlated with the GI. Therefore, it could be used as a major index for gingivitis diagnosis using images.
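The R/G value described above is a simple per-patch color statistic. A minimal sketch with hypothetical pixel values (not the study's data); inflamed gingiva tends toward a higher red-to-green ratio:

```python
def rg_ratio(patch):
    """Mean red divided by mean green over an RGB patch
    (a list of (R, G, B) tuples sampled at a gingival point)."""
    mean_r = sum(p[0] for p in patch) / len(patch)
    mean_g = sum(p[1] for p in patch) / len(patch)
    return mean_r / mean_g

# Hypothetical 2x2 patches at a papillary gingiva point.
healthy  = [(180, 120, 110), (178, 122, 108), (182, 118, 112), (180, 120, 110)]
inflamed = [(200, 90, 100), (202, 88, 98), (198, 92, 102), (200, 90, 100)]
print(round(rg_ratio(healthy), 2), round(rg_ratio(inflamed), 2))  # 1.5 2.22
```

Averaging over a patch rather than taking a single pixel damps sensor noise and specular highlights; in practice the ratio would also need consistent lighting or color calibration across visits, which the orthodontic follow-up setting makes non-trivial.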
Affiliation(s)
- Han-Na Kim
- Department of Dental Hygiene, College of Health and Medical Sciences, Cheongju University, Cheongju 28503, Republic of Korea
- Kyuseok Kim
- Department of Biomedical Engineering, Eulji University, Seongnam 13135, Republic of Korea
- Department of Radiological Science, College of Health Science, Gachon University, Incheon 21936, Republic of Korea
- Youngjin Lee
- Department of Radiological Science, College of Health Science, Gachon University, Incheon 21936, Republic of Korea

44
Istasy P, Lee WS, Iansavichene A, Upshur R, Gyawali B, Burkell J, Sadikovic B, Lazo-Langner A, Chin-Yee B. The Impact of Artificial Intelligence on Health Equity in Oncology: Scoping Review. J Med Internet Res 2022; 24:e39748. [PMID: 36005841] [PMCID: PMC9667381] [DOI: 10.2196/39748]
Abstract
BACKGROUND The field of oncology is at the forefront of advances in artificial intelligence (AI) in health care, providing an opportunity to examine the early integration of these technologies in clinical research and patient care. Hope that AI will revolutionize health care delivery and improve clinical outcomes has been accompanied by concerns about the impact of these technologies on health equity. OBJECTIVE We aimed to conduct a scoping review of the literature to address the question, "What are the current and potential impacts of AI technologies on health equity in oncology?" METHODS Following PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines for scoping reviews, we systematically searched MEDLINE and Embase electronic databases from January 2000 to August 2021 for records engaging with key concepts of AI, health equity, and oncology. We included all English-language articles that engaged with the 3 key concepts. Articles were analyzed qualitatively for themes pertaining to the influence of AI on health equity in oncology. RESULTS Of the 14,011 records, 133 (0.95%) identified from our review were included. We identified 3 general themes in the literature: the use of AI to reduce health care disparities (58/133, 43.6%), concerns surrounding AI technologies and bias (16/133, 12.1%), and the use of AI to examine biological and social determinants of health (55/133, 41.4%). A total of 3% (4/133) of articles focused on many of these themes. CONCLUSIONS Our scoping review revealed 3 main themes on the impact of AI on health equity in oncology, which relate to AI's ability to help address health disparities, its potential to mitigate or exacerbate bias, and its capability to help elucidate determinants of health. 
Gaps in the literature included a lack of discussion of ethical challenges with the application of AI technologies in low- and middle-income countries, lack of discussion of problems of bias in AI algorithms, and a lack of justification for the use of AI technologies over traditional statistical methods to address specific research questions in oncology. Our review highlights a need to address these gaps to ensure a more equitable integration of AI in cancer research and clinical practice. The limitations of our study include its exploratory nature, its focus on oncology as opposed to all health care sectors, and its analysis of solely English-language articles.
Affiliation(s)
- Paul Istasy
- Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Rotman Institute of Philosophy, Western University, London, ON, Canada
- Wen Shen Lee
- Department of Pathology & Laboratory Medicine, Schulich School of Medicine, Western University, London, ON, Canada
- Ross Upshur
- Division of Clinical Public Health, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Bridgepoint Collaboratory for Research and Innovation, Lunenfeld Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada
- Bishal Gyawali
- Division of Cancer Care and Epidemiology, Department of Oncology, Queen's University, Kingston, ON, Canada
- Division of Cancer Care and Epidemiology, Department of Public Health Sciences, Queen's University, Kingston, ON, Canada
- Jacquelyn Burkell
- Faculty of Information and Media Studies, Western University, London, ON, Canada
- Bekim Sadikovic
- Department of Pathology & Laboratory Medicine, Schulich School of Medicine, Western University, London, ON, Canada
- Alejandro Lazo-Langner
- Division of Hematology, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Benjamin Chin-Yee
- Rotman Institute of Philosophy, Western University, London, ON, Canada
- Division of Hematology, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Division of Hematology, Department of Medicine, London Health Sciences Centre, London, ON, Canada
|
45
|
Song B, Li S, Sunny S, Gurushanth K, Mendonca P, Mukhia N, Patrick S, Peterson T, Gurudath S, Raghavan S, Tsusennaro I, Leivon ST, Kolur T, Shetty V, Bushan V, Ramesh R, Pillai V, Wilder-Smith P, Suresh A, Kuriakose MA, Birur P, Liang R. Exploring uncertainty measures in convolutional neural network for semantic segmentation of oral cancer images. J Biomed Opt 2022; 27:115001. [PMID: 36329004 PMCID: PMC9630461 DOI: 10.1117/1.jbo.27.11.115001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/04/2022] [Accepted: 10/13/2022] [Indexed: 06/16/2023]
Abstract
SIGNIFICANCE Oral cancer is one of the most prevalent cancers, especially in middle- and low-income countries such as India. Automatic segmentation of oral cancer images can improve the diagnostic workflow and is a significant task in oral cancer image analysis. Despite the remarkable success of deep-learning networks in medical segmentation, they rarely provide uncertainty quantification for their output. AIM We aim to estimate uncertainty in a deep-learning approach to semantic segmentation of oral cancer images and to improve the accuracy and reliability of predictions. APPROACH This work introduced a UNet-based Bayesian deep-learning (BDL) model to segment potentially malignant and malignant lesion areas in the oral cavity; the model can quantify uncertainty in its predictions. We also developed an efficient variant that is almost six times smaller and about twice as fast at inference as the original UNet. The dataset in this study was collected using our customized screening platform and was annotated by oral oncology specialists. RESULTS The proposed approach achieved good segmentation performance as well as good uncertainty estimation performance. In the experiments, we observed an improvement in pixel accuracy and mean intersection over union when uncertain pixels were removed, reflecting that the model was less accurate in uncertain areas, which may need more attention and further inspection. The experiments also showed that, with some performance compromises, the efficient model reduced computation time and model size, which expands the potential for implementation on portable devices used in resource-limited settings. CONCLUSIONS Our study demonstrates that the UNet-based BDL model not only performs potentially malignant and malignant oral lesion segmentation but also provides informative pixel-level uncertainty estimation. With this extra uncertainty information, the accuracy and reliability of the model's predictions can be improved.
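The "remove uncertain pixels, accuracy improves" effect described in this abstract can be illustrated with a toy Monte Carlo dropout setup, a common practical approximation to Bayesian deep learning. Everything below is an illustrative assumption rather than the authors' implementation: the 8x8 synthetic "image", the simulated stochastic forward passes, and the entropy threshold of 0.3 are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for T stochastic forward passes of an MC-dropout UNet:
# each pass yields a per-pixel foreground probability map over an H x W image.
T, H, W = 20, 8, 8
truth = np.zeros((H, W), dtype=int)
truth[:, 4:] = 1                       # right half of the image is "lesion"

# The simulated model is confident away from the class boundary and
# genuinely confused (p ~ 0.5, high variance) at boundary columns 3-4.
base = truth.astype(float)
base[:, 3:5] = 0.5
noise_sd = np.where((np.arange(W) >= 3) & (np.arange(W) <= 4), 0.25, 0.05)
passes = np.clip(base + rng.normal(0.0, noise_sd, size=(T, H, W)), 0.0, 1.0)

p = passes.mean(axis=0)                # predictive mean per pixel
eps = 1e-12
# Predictive entropy of the binary foreground probability: high near 0.5.
entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

pred = (p > 0.5).astype(int)
acc_all = (pred == truth).mean()       # pixel accuracy over every pixel

keep = entropy < 0.3                   # discard high-uncertainty pixels
acc_certain = (pred[keep] == truth[keep]).mean()
```

With this construction the dropped pixels are exactly the boundary pixels where the simulated model guesses, so pixel accuracy on the retained (certain) pixels is at least as high as over the whole image, mirroring the result reported in the abstract.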
Affiliation(s)
- Bofan Song
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Shaobai Li
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Sumsum Sunny
- Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India
- Nirza Mukhia
- KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Tyler Peterson
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Shubha Gurudath
- KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Imchen Tsusennaro
- Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Shirley T. Leivon
- Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Trupti Kolur
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Vivek Shetty
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Vidya Bushan
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Rohan Ramesh
- Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Vijay Pillai
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Petra Wilder-Smith
- University of California, Beckman Laser Institute & Medical Clinic, Irvine, California, United States
- Amritha Suresh
- Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Praveen Birur
- KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Biocon Foundation, Bangalore, Karnataka, India
- Rongguang Liang
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
|
46
|
Dailah HG. Mobile Health (mHealth) Technology in Early Detection and Diagnosis of Oral Cancer-A Scoping Review of the Current Scenario and Feasibility. J Healthc Eng 2022; 2022:4383303. [PMID: 36312594 PMCID: PMC9605853 DOI: 10.1155/2022/4383303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 09/30/2022] [Indexed: 11/25/2022]
Abstract
Objective Oral cancer is one of the most common types of cancer, with dreadful consequences, yet it can be detected early without expensive equipment. Screening and early detection of oral cancer using mobile health (mHealth) technology have been reported, owing to the extensive network of mobile phones across populations. We therefore aimed to explore the existing literature on the feasibility of mHealth for the early detection of oral cancer. Materials and Methods An extensive search was conducted to explore the literature on the feasibility of mobile health for early detection of oral cancer. Clinical studies reporting kappa agreement between on-site dentists and off-site health care workers/dentists in the early detection of oral cancer were included in this review. Studies describing the development of a diagnostic device, app development, and qualitative interviews among practitioners trained in using mobile health were also included for a broader perspective on mHealth. Results While most of the studies described the diagnostic accuracy of mHealth for early detection of oral cancer, a few reported the development of mobile applications, novel device designs for mHealth applications, and the feasibility of mHealth programs for early oral cancer detection. Community health workers equipped with a mobile phone-based app could identify "abnormal" oral lesions. Overall, many studies reported high sensitivity, specificity, and kappa values of agreement. Effectiveness, advantages, and barriers of oral cancer screening using mHealth are also described. Conclusion Overall, remote diagnosis using mHealth for early detection of oral cancer was found to be useful in remote settings.
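The kappa agreement this review uses as its inclusion criterion is Cohen's kappa: agreement between two raters corrected for the agreement expected by chance. A minimal implementation follows; the example labels (on-site dentist vs. remote specialist calls) are hypothetical, chosen only to show the computation.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    po = (a == b).mean()  # observed proportion of agreement
    # Expected agreement under independence, from each rater's marginal frequencies.
    pe = sum((a == c).mean() * (b == c).mean() for c in labels)
    return (po - pe) / (1.0 - pe)

# Hypothetical lesion (1) / no-lesion (0) calls for four patients.
onsite = [0, 0, 1, 1]
remote = [0, 1, 1, 1]
k = cohens_kappa(onsite, remote)  # 0.5 for this toy data
```

Here observed agreement is 3/4 and chance agreement is 1/2, giving kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5, i.e. "moderate" agreement on the usual interpretive scale.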
Affiliation(s)
- Hamad Ghaleb Dailah
- Research and Scientific Studies Unit, College of Nursing, Jazan University, Jazan, Saudi Arabia
|
47
|
Diagnosis of Oral Squamous Cell Carcinoma Using Deep Neural Networks and Binary Particle Swarm Optimization on Histopathological Images: An AIoMT Approach. Comput Intell Neurosci 2022; 2022:6364102. [PMID: 36210968 PMCID: PMC9546660 DOI: 10.1155/2022/6364102] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 07/04/2022] [Accepted: 08/17/2022] [Indexed: 11/24/2022]
Abstract
The overall prognosis of oral cavity squamous cell carcinoma (OCSCC) remains poor, as more than half of patients with oral cavity cancer are detected at later stages. It is generally accepted that the differential diagnosis of OCSCC is difficult and requires expertise and experience. Diagnosis from biopsy tissue is a complex process that is slow, costly, and prone to human error. To overcome these problems, a computer-aided diagnosis (CAD) approach was proposed in this work. A dataset comprising two categories, normal epithelium of the oral cavity (NEOR) and squamous cell carcinoma of the oral cavity (OSCC), was used. Feature extraction was performed on this dataset using four deep learning (DL) models (VGG16, AlexNet, ResNet50, and Inception V3) to realize artificial intelligence of medical things (AIoMT). Binary Particle Swarm Optimization (BPSO) was used to select the best features. The effect of Reinhard stain normalization on performance was also investigated. After the best features were extracted and selected, they were classified using XGBoost. The best classification accuracy of 96.3% was obtained when using Inception V3 with BPSO. This approach significantly improves the diagnostic efficiency for OCSCC patients using histopathological images while reducing diagnostic costs.
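The BPSO feature-selection step can be sketched as follows: each particle is a binary mask over feature columns, velocities are updated by the standard PSO rule, and a sigmoid transfer function turns velocities into bit-flip probabilities. Everything here is a generic illustration, not the paper's pipeline: synthetic Gaussian features stand in for the deep CNN features, and a nearest-centroid classifier stands in for XGBoost; all parameters (swarm size, inertia, thresholds) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for extracted deep features: 2 informative + 18 noise dims.
n, d = 200, 20
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, d))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y
train, test = np.arange(0, 150), np.arange(150, n)

def fitness(mask):
    """Held-out accuracy of a nearest-centroid classifier on selected features."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0 = Xs[train][y[train] == 0].mean(axis=0)
    c1 = Xs[train][y[train] == 1].mean(axis=0)
    d0 = ((Xs[test] - c0) ** 2).sum(axis=1)
    d1 = ((Xs[test] - c1) ** 2).sum(axis=1)
    return ((d1 < d0).astype(int) == y[test]).mean()

# Binary PSO over feature masks.
P, iters = 12, 30
pos = (rng.random((P, d)) < 0.5).astype(float)
pos[0] = 1.0                            # seed one particle with "all features"
vel = np.zeros((P, d))
pbest, pbest_fit = pos.copy(), np.array([fitness(m) for m in pos])
g = pbest_fit.argmax()
gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

w, c1w, c2w = 0.7, 1.5, 1.5             # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((P, d)), rng.random((P, d))
    vel = w * vel + c1w * r1 * (pbest - pos) + c2w * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))   # sigmoid transfer: velocity -> bit prob
    pos = (rng.random((P, d)) < prob).astype(float)
    fit = np.array([fitness(m) for m in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    if pbest_fit.max() > gbest_fit:
        g = pbest_fit.argmax()
        gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
```

Because the all-features mask is evaluated at initialization and the global best only ever improves, the selected subset is guaranteed to score at least as well on the held-out split as using every feature.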
|
48
|
Batra P, Tagra H, Katyal S. Artificial Intelligence in Teledentistry. Discoveries (Craiova) 2022; 10:153. [PMID: 36530958 PMCID: PMC9748636 DOI: 10.15190/d.2022.12] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 08/23/2022] [Accepted: 09/12/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI) has grown tremendously in the past decade. The application of AI in teledentistry can reform the way dental care, dental education, research, and subsequent innovation happen remotely. Machine learning, including deep learning-based algorithms, can be used to create predictive models for risk assessment of oral health-related conditions, consequent complications, and patient stratification. Patients can be empowered to self-diagnose and apply preventive measures, or to self-manage some early stages of dental disease. Applications of AI in teledentistry can be beneficial for both the dental surgeon and the patient. AI enables better remote screening, diagnosis, record keeping, triaging, and monitoring of dental patients based on smart devices. This will take rudimentary cases requiring run-of-the-mill treatment away from dentists and enable them to concentrate on highly complex cases. It would also enable dentists to serve larger and deprived populations in inaccessible areas. The use of AI in teledentistry can bring a paradigm shift from a curative to a preventive, personalised approach in dentistry. A strong asset to teledentistry could be a robust and comprehensive feedback mechanism routed through the various channels proposed in this paper. This paper discusses the application of AI in teledentistry and proposes a feedback mechanism to enhance performance in teledentistry.
Affiliation(s)
- Panchali Batra
- Department of Orthodontics, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India. *Corresponding author: Dr. Panchali Batra, Professor, Department of Orthodontics, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India. Phone: +91-9999908022
- Sakshi Katyal
- Department of Orthodontics, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
|
49
|
Field validation of deep learning based Point-of-Care device for early detection of oral malignant and potentially malignant disorders. Sci Rep 2022; 12:14283. [PMID: 35995987 PMCID: PMC9395355 DOI: 10.1038/s41598-022-18249-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Accepted: 08/08/2022] [Indexed: 11/28/2022] Open
Abstract
Early detection of oral cancer in low-resource settings necessitates a point-of-care screening tool that empowers frontline health workers (FHWs). This study was conducted to validate the accuracy of a convolutional neural network (CNN)-enabled m(mobile)-Health device deployed with FHWs for delineation of suspicious oral lesions (malignant/potentially malignant disorders). The effectiveness of the device was tested in tertiary-care hospitals and low-resource settings in India. The subjects were screened independently, either by FHWs alone or along with specialists. All the subjects were also remotely evaluated by oral cancer specialists. The program screened 5025 subjects (images: 32,128), with 95% (n = 4728) receiving telediagnosis. Among the 16% (n = 752) assessed by onsite specialists, 20% (n = 102) underwent biopsy. Simple and complex CNNs were integrated into the mobile phone and the cloud, respectively. The onsite specialist diagnosis showed high sensitivity (94%) compared with histology, while telediagnosis showed high accuracy in comparison with onsite specialists (sensitivity: 95%; specificity: 84%). FHWs, however, when compared with telediagnosis, identified suspicious lesions with lower sensitivity (60%). The phone-integrated CNN (MobileNet) accurately delineated lesions (n = 1416; sensitivity: 82%), and the cloud-based CNN (VGG19) had higher accuracy (sensitivity: 87%) with telediagnosis as the reference standard. The results of the study suggest that an automated mHealth-enabled, dual-image system is a useful triaging tool that empowers FHWs for oral cancer screening in low-resource settings.
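Sensitivity and specificity figures like those reported above come directly from a 2x2 confusion table against the chosen reference standard (histology for specialists, telediagnosis for the CNNs). A minimal computation is shown here; the counts are hypothetical, picked only to make the arithmetic visible, not taken from the study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a screening test versus its reference standard:
# 94 true positives, 6 false negatives, 84 true negatives, 16 false positives.
sens, spec = sensitivity_specificity(tp=94, fn=6, tn=84, fp=16)
```

With these made-up counts the result is a sensitivity of 0.94 and a specificity of 0.84; note that sensitivity ignores the negatives entirely, which is why a tool can score high sensitivity while still generating many false referrals.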
|
50
|
Machine learning in point-of-care automated classification of oral potentially malignant and malignant disorders: a systematic review and meta-analysis. Sci Rep 2022; 12:13797. [PMID: 35963880 PMCID: PMC9376104 DOI: 10.1038/s41598-022-17489-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Accepted: 07/26/2022] [Indexed: 11/08/2022] Open
Abstract
Machine learning (ML) algorithms are becoming increasingly pervasive in the domains of medical diagnostics and prognostication, afforded by complex deep learning architectures that overcome the limitations of manual feature extraction. In this systematic review and meta-analysis, we provide an update on current progress of ML algorithms in point-of-care (POC) automated diagnostic classification systems for lesions of the oral cavity. Studies reporting performance metrics on ML algorithms used in automatic classification of oral regions of interest were identified and screened by 2 independent reviewers from 4 databases. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. 35 studies were suitable for qualitative synthesis, and 31 for quantitative analysis. Outcomes were assessed using a bivariate random-effects model following an assessment of bias and heterogeneity. 4 distinct methodologies were identified for POC diagnosis: (1) clinical photography; (2) optical imaging; (3) thermal imaging; (4) analysis of volatile organic compounds. Estimated AUROC across all studies was 0.935, and no difference in performance was identified between methodologies. We discuss the various classical and modern approaches to ML employed within identified studies, and highlight issues that will need to be addressed for implementation of automated classification systems in screening and early detection.
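The AUROC pooled in this meta-analysis has a simple rank interpretation via the Mann-Whitney U statistic: it is the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case, with ties counted as half. A minimal per-study computation is sketched below; the scores are invented for illustration.

```python
import numpy as np

def auroc(pos_scores, neg_scores):
    """AUROC as P(score_pos > score_neg), counting ties as 0.5 (Mann-Whitney)."""
    p = np.asarray(pos_scores, dtype=float)
    n = np.asarray(neg_scores, dtype=float)
    wins = (p[:, None] > n[None, :]).sum()   # correctly ranked pos/neg pairs
    ties = (p[:, None] == n[None, :]).sum()
    return (wins + 0.5 * ties) / (p.size * n.size)

# Hypothetical classifier scores for lesion (positive) vs. non-lesion images.
area = auroc([0.8, 0.4], [0.6, 0.2])   # 3 of 4 pairs correctly ranked -> 0.75
```

This pairwise form makes AUROC threshold-free, which is one reason reviews like this one use it to compare classifiers across methodologies with very different operating points.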
|