1
Estrella NF, Alexandra DS, Yun C, Palma-Fernández JC, Alejandro IL. AI-aided volumetric root resorption assessment following personalized forces in orthodontics: preliminary results of a randomized clinical trial. J Evid Based Dent Pract 2025; 25:102095. PMID: 40335201. DOI: 10.1016/j.jebdp.2025.102095.
Abstract
INTRODUCTION External apical root resorption (EARR) is an undesirable loss of hard tissue from the tooth root that frequently affects the maxillary incisors. The magnitude of orthodontic force is a major treatment-related factor associated with EARR. The aims of the present randomized clinical trial were (i) to quantify the impact of a sequence of personalized-force archwires on EARR compared with the conventional standard of care and (ii) to compare the 3D quantification of EARR using two methods (manual versus automated AI-aided segmentation). MATERIAL AND METHODS A superiority, two-arm, parallel randomized clinical trial (RCT) was conducted to quantify EARR under two force regimes, following CONSORT guidelines. A total of 18 of 43 patients were randomly assigned (block size: 4) to the control group (Ni-Ti archwire sequence) or the experimental group (selective individualized-force archwires). After 142 days, sectorial CBCT scans were obtained; the upper incisors were segmented manually and with AI, and root volume and length were quantified. Method error, descriptive statistics (mean, SD, range) and the Student t-test were used to assess differences between groups, with post hoc adjustment for confounders (95% CI; P < .05). RESULTS The total root volume loss detected by AI was 2.44 ± 6.59 mm3 versus 2.42 ± 4.75 mm3 (P > .05), and the mean root length loss was 0.20 ± 0.23 mm versus 0.42 ± 0.43 mm (P = .045) for the control and test groups, respectively. Although length loss was similar when quantified with the manual and automatic segmentation methods (P > .05), differences were observed for volume loss: manual segmentation detected greater volume loss than AI-aided segmentation at the global level, by thirds, and at 4 mm from the apex. Moving apically, however, the differences narrowed and then reversed, with greater loss detected by automatic segmentation at 1 mm from the apex in the experimental group (P = .011). CONCLUSIONS No directly force-dependent effect on EARR was observed at 6 months. Individualized forces induced slightly greater root resorption in the apical third, at 1-2 mm from the apex.
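The volumetric comparison described above can be illustrated with a minimal sketch: root volume is obtained by counting segmented voxels and scaling by the voxel volume, and the two groups are then compared with a Student t-test. This is a generic illustration, not the trial's actual pipeline; the voxel size and the per-patient volume-loss values below are invented.

```python
import numpy as np
from scipy import stats

def root_volume_mm3(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    """Volume of a binary root segmentation: voxel count times voxel volume."""
    voxel_volume = float(np.prod(voxel_size_mm))
    return float(mask.sum()) * voxel_volume

# Hypothetical per-patient volume losses (baseline minus follow-up), in mm^3.
control_loss = np.array([1.8, 2.1, 3.0, 0.9, 2.7, 1.5, 2.4, 3.2, 2.0])
test_loss = np.array([2.2, 2.9, 1.7, 2.6, 3.4, 2.1, 2.8, 1.9, 2.3])

# Two-sample Student t-test, as used in the trial to compare groups.
t_stat, p_value = stats.ttest_ind(control_loss, test_loss, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```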
Affiliation(s)
- Chen Yun
- School of Dentistry, Complutense University of Madrid, Madrid, Spain
- Iglesias-Linares Alejandro
- School of Dentistry, Complutense University of Madrid, Madrid, Spain; BIOCRAN, Craniofacial Biology and Orthodontics Research Group, School of Dentistry, Complutense University of Madrid, Madrid, Spain.
2
Fouad EM, Abu-Seida A, Alsheshtawi KA. An overview of the applications of AI for detecting anatomical configurations in endodontics. Ann Anat 2025; 260:152671. PMID: 40345561. DOI: 10.1016/j.aanat.2025.152671.
Abstract
BACKGROUND Artificial intelligence (AI), which uses algorithms to replicate human intellect, allows robots to learn from data and complete complex tasks on their own. STUDY DESIGN Mini-narrative review. OBJECTIVE This mini-narrative review evaluates AI's potential to improve the detection of anatomical features of root and canal systems, focusing on its benefits, challenges, and future applications. METHODS A comprehensive literature search was conducted using PubMed, Scopus, and Google Scholar to identify studies on AI applications in detecting anatomical features of root and root canal systems in endodontics. Inclusion criteria encompassed all relevant literature focused on anatomical feature detection, with no restrictions on time or language. Studies were excluded if they were unrelated to the topic or focused on pathological rather than anatomical feature detection. CONCLUSION AI has significantly improved the detection of root and canal anatomy, including minor constrictions, working length, second mesio-buccal canals, and complex systems like C-shaped canals, with diagnostic accuracy comparable to or surpassing experienced practitioners. While challenges remain in technology, ethics, and regulation, AI enhances precision, efficiency, and patient outcomes. Addressing these hurdles will further advance its integration into endodontic practice and shape its future positively.
Affiliation(s)
- Eman M Fouad
- Department of Endodontics, College of Oral and Dental Surgery, Misr University for Science and Technology (MUST), P.O. Box 77, Giza, Egypt
- Ashraf Abu-Seida
- Department of Surgery, Anesthesiology & Radiology, Faculty of Veterinary Medicine, Cairo University, Giza PO: 12211, Egypt; Faculty of Dentistry, Galala University, New Galala City, Suez 43511, Egypt.
- Khaled A Alsheshtawi
- Computer Science Department, Faculty of Informatics and Computer Science, The British University in Egypt, Cairo, Egypt
3
Ghasemi N, Rokhshad R, Zare Q, Shobeiri P, Schwendicke F. Artificial intelligence for osteoporosis detection on panoramic radiography: A systematic review and meta analysis. J Dent 2025; 156:105650. PMID: 40010536. DOI: 10.1016/j.jdent.2025.105650.
Abstract
INTRODUCTION Osteoporosis is a disease characterized by low bone mineral density and an increased risk of fractures. In dentistry, mandibular bone morphology, assessed for example on panoramic images, has been employed to detect osteoporosis. Artificial intelligence (AI) can aid in diagnosing bone diseases from radiographs. We aimed to systematically review, synthesize and appraise the available evidence supporting AI in detecting osteoporosis on panoramic radiographs. DATA Studies that used AI to detect osteoporosis on dental panoramic images were included. SOURCES On April 8, 2023, a first comprehensive search of electronic databases was conducted, including PubMed, Scopus, Embase, IEEE, arXiv, and Google Scholar (grey literature). This search was subsequently updated on October 6, 2024. STUDY SELECTION The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was employed to determine the risk of bias in the studies. Quantitative analyses involved meta-analyses of diagnostic accuracy measures, including sensitivity and specificity, yielding diagnostic odds ratios (DOR) and synthesized positive likelihood ratios (LR+). The certainty of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation system. RESULTS A total of 24 studies were included. Accuracy ranged from 50% to 99%, sensitivity from 50% to 100%, and specificity from 38% to 100%. A minority of studies (n=10) had a low risk of bias in all domains, while the majority (n=18) showed low risk of applicability concerns. Pooled sensitivity was 87.92% and specificity 81.93%. The DOR was 32.99 and the LR+ was 4.87. Meta-regression analysis indicated that sample size had only a marginal impact on heterogeneity (R² = 0.078, p = 0.052), suggesting other study-level factors may contribute to variability. Egger's test suggested potential small-study effects (p < 0.001), indicating a risk of publication bias. CONCLUSION AI, particularly deep learning, showed high diagnostic accuracy in detecting osteoporosis on panoramic radiographs. The results indicate a strong potential for AI to enhance osteoporosis screening in dental settings. However, significant heterogeneity across studies and potential small-study effects highlight the need for further validation, standardization, and larger, well-powered studies to improve model generalizability. CLINICAL SIGNIFICANCE The application of AI in analyzing panoramic radiographs could transform osteoporosis screening in routine dental practice by providing early and accurate diagnosis. This has the potential to integrate osteoporosis detection seamlessly into dental workflows, improving patient outcomes and enabling timely referrals for medical intervention. Addressing issues of model validation and comparability is critical to translating these findings into widespread clinical use.
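The reported likelihood ratios and diagnostic odds ratio follow directly from the pooled sensitivity and specificity; a minimal check, using the pooled values quoted in the abstract:

```python
# Pooled estimates reported in the review.
sens = 0.8792   # pooled sensitivity
spec = 0.8193   # pooled specificity

lr_pos = sens / (1 - spec)     # positive likelihood ratio
lr_neg = (1 - sens) / spec     # negative likelihood ratio
dor = lr_pos / lr_neg          # diagnostic odds ratio

print(f"LR+ = {lr_pos:.2f}")   # ~4.87, as reported
print(f"LR- = {lr_neg:.2f}")   # ~0.15
print(f"DOR = {dor:.2f}")      # ~33.0, matching the reported 32.99
```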
Affiliation(s)
- Nikoo Ghasemi
- Department of Orthodontics and Dentofacial Orthopedics, School of Dentistry, Zanjan University of Medical Sciences, Zanjan, Iran
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, WHO Focus Group AI on Health, Berlin, Germany.
- Qonche Zare
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Hormozgan University of Medical Sciences, Bandar Abbas, Iran
- Parnian Shobeiri
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Falk Schwendicke
- Clinic for Conservative Dentistry and Periodontology, LMU Klinikum, Munich, Germany
4
Gao S, Wang X, Xia Z, Zhang H, Yu J, Yang F. Artificial Intelligence in Dentistry: A Narrative Review of Diagnostic and Therapeutic Applications. Med Sci Monit 2025; 31:e946676. PMID: 40195079. PMCID: PMC11992950. DOI: 10.12659/msm.946676.
Abstract
Advancements in digital and precision medicine have fostered the rapid development of artificial intelligence (AI) applications, including machine learning, artificial neural networks (ANN), and deep learning, within the field of dentistry, particularly in imaging diagnosis and treatment. This review examines the progress of AI across various domains of dentistry, focusing on its role in enhancing diagnostics and optimizing treatment for oral diseases such as endodontic disease, periodontal disease, oral implantology, orthodontics, prosthodontic treatment, and oral and maxillofacial surgery. Additionally, it discusses the emerging opportunities and challenges associated with these technologies. The findings indicate that AI can be effectively utilized in numerous aspects of oral healthcare, including prevention, early screening, accurate diagnosis, treatment plan design assistance, treatment execution, follow-up monitoring, and prognosis assessment. However, notable challenges persist, including issues related to inaccurate data annotation, limited capability for fine-grained feature expression, a lack of universally applicable models, potential biases in learning algorithms, and legal risks pertaining to medical malpractice and data privacy breaches. Looking forward, future research is expected to concentrate on overcoming these challenges to enhance the accuracy and applicability of AI in diagnosing and treating oral diseases. This review aims to provide a comprehensive overview of the current state of AI in dentistry and to identify pathways for its effective integration into clinical practice.
Affiliation(s)
- Sizhe Gao
- Department of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, PR China
- Xianyun Wang
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, Zhejiang, PR China
- Zhuoheng Xia
- Department of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, PR China
- Huicong Zhang
- Center for Plastic and Reconstructive Surgery, Department of Stomatology, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, PR China
- Jun Yu
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, Zhejiang, PR China
- Fan Yang
- Department of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, PR China
5
Ali M, Irfan M, Ali T, Wei CR, Akilimali A. Artificial intelligence in dental radiology: a narrative review. Ann Med Surg (Lond) 2025; 87:2212-2217. PMID: 40212156. PMCID: PMC11981376. DOI: 10.1097/ms9.0000000000003127.
Abstract
This article examines how artificial intelligence (AI) is revolutionizing dental radiology, a vital aspect of dental diagnosis and treatment planning. Whereas traditional imaging techniques such as periapical and panoramic radiography have limitations, AI improves diagnostic accuracy through sophisticated applications like automated anomaly identification, image segmentation, and treatment planning. Clinical procedures are streamlined, and accurate diagnosis of dental conditions is made possible by methods such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs). AI also contributes to improved patient outcomes by lowering radiation exposure and improving image quality. Despite current obstacles, including data collection and the need for robust algorithm training, dental radiography has a bright future, and further study and cooperation are required to maximize AI's incorporation into clinical practice. Future directions include the creation of innovative imaging modalities, further research on AI applications, and cooperative efforts between scientists, clinicians, and industry participants. By creating clear criteria for its integration, the dental community can better utilize AI to enhance patient care and diagnostic capabilities. In the long run, AI has the potential to transform dental radiology, resulting in better treatment outcomes and more effective practice.
Affiliation(s)
- Muneeba Ali
- Karachi Medical and Dental College, Karachi, Pakistan
- Memoona Irfan
- Karachi Medical and Dental College, Karachi, Pakistan
- Tooba Ali
- Dow University of Health Sciences, Karachi, Pakistan
- Calvin R. Wei
- Department of Research and Development, Shing Huei Group, Taipei, Taiwan
- Aymar Akilimali
- Department of Research, Medical Research Circle (MedReC), Goma, DR Congo
- International Veterinary Vaccinology Network, The Roslin Institute, University of Edinburgh, Edinburgh, United Kingdom
6
Wang L, Xu Y, Wang W, Lu Y. Application of machine learning in dentistry: insights, prospects and challenges. Acta Odontol Scand 2025; 84:145-154. PMID: 40145687. PMCID: PMC11971948. DOI: 10.2340/aos.v84.43345.
Abstract
BACKGROUND Machine learning (ML) is transforming dentistry by setting new standards for precision and efficiency in clinical practice, while driving improvements in care delivery and quality. OBJECTIVES This review: (1) explains why ML is needed in dentistry to overcome the limitations of traditional dental technologies; (2) discusses the principles of ML-based models utilised in dental clinical practice and care; (3) outlines the areas of application of ML in dentistry; and (4) highlights the prospects and challenges to be addressed. DATA AND SOURCES For this narrative review, a comprehensive search was conducted in the PubMed/MEDLINE, Web of Science, ScienceDirect, and Institute of Electrical and Electronics Engineers (IEEE) Xplore databases. CONCLUSIONS Machine learning has demonstrated significant potential in dentistry as an intelligent assistive technology, improving diagnostic efficiency, personalised treatment planning, and related streamlined workflows. However, challenges related to data privacy, security, interpretability, and ethical considerations urgently need to be addressed, forming the backdrop for future research in this rapidly expanding arena. CLINICAL SIGNIFICANCE The development of ML has had a transformative impact on dentistry, from diagnosis and personalised treatment planning to dental care workflows. In particular, integrating ML-based models with diagnostic tools will significantly enhance diagnostic efficiency and precision in dental surgeries and treatments.
Affiliation(s)
- Lin Wang
- Hangzhou Stomatology Hospital, Hangzhou, China
- Yanyan Xu
- Health Service Center in Xiaoying Street Community, Hangzhou, China
- Yuanyuan Lu
- College of Environmental and Resources Sciences, Zhejiang University, Hangzhou, China.
7
Ren R, Liu J, Li S, Wu X, Peng X, Liao W, Zhao Z. Data-driven AI platform for dens evaginatus detection on orthodontic intraoral photographs. BMC Oral Health 2025; 25:328. PMID: 40025464. PMCID: PMC11872327. DOI: 10.1186/s12903-024-05231-4.
Abstract
BACKGROUND The aim of our study was to develop and evaluate a deep learning model (BiStageNet) for automatic detection of dens evaginatus (DE) premolars on orthodontic intraoral photographs. Based on the training results, we additionally developed a DE detection platform for orthodontic clinical applications. METHODS We manually selected the premolar areas for automatic premolar recognition training using a dataset of 1,400 high-quality intraoral photographs. Next, we labeled each premolar for DE detection training using a dataset of 2,128 images. We used the Dice coefficient, accuracy, sensitivity, specificity, F1-score, the ROC curve, and the area under the ROC curve to evaluate the learning results of our model. Finally, we constructed an automatic DE detection platform based on our trained model (BiStageNet) using PyTorch. RESULTS Our DE detection platform achieved a mean Dice coefficient of 0.961 in premolar recognition, with a diagnostic accuracy of 85.0%, sensitivity of 88.0%, specificity of 82.0%, F1-score of 0.854, and AUC of 0.93. Experimental results revealed that dental interns showed low specificity when manually identifying DE. With the tool's assistance, specificity significantly improved for all interns, effectively reducing false positives without sacrificing sensitivity; this enhanced diagnostic precision, as evidenced by improved PPV, NPV, and F1-scores. CONCLUSION Our BiStageNet was capable of recognizing premolars and detecting DE with high accuracy on intraoral photographs. In addition, our self-developed DE detection platform shows promise for clinical application and wider adoption.
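The evaluation metrics quoted above can be reproduced from a confusion matrix and from mask overlap. The sketch below is a generic illustration of those formulas, not the BiStageNet code itself.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (e.g., predicted vs. annotated premolar regions)."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def binary_classification_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, sensitivity, specificity and F1 from binary DE labels (1 = dens evaginatus)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }
```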
Affiliation(s)
- Ruiyang Ren
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, 610041, China
- Jialing Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, 610041, China
- Shihao Li
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, China
- Xiaoyue Wu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, 610041, China
- Xingchen Peng
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, China
- Wen Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, 610041, China.
- Zhihe Zhao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, 610041, China.
8
Flügge T, Vinayahalingam S, van Nistelrooij N, Kellner S, Xi T, van Ginneken B, Bergé S, Heiland M, Kernen F, Ludwig U, Odaka K. Automated tooth segmentation in magnetic resonance scans using deep learning - A pilot study. Dentomaxillofac Radiol 2025; 54:12-18. PMID: 39589897. PMCID: PMC11664100. DOI: 10.1093/dmfr/twae059.
Abstract
OBJECTIVES The main objective was to develop and evaluate an artificial intelligence model for tooth segmentation in magnetic resonance (MR) scans. METHODS MR scans of 20 patients, acquired with a commercial 64-channel head coil using a T1-weighted 3D-SPACE (Sampling Perfection with Application Optimized Contrasts using different flip angle Evolution) sequence, were included. Sixteen datasets were used for model training and 4 for accuracy evaluation. Two clinicians segmented and annotated the teeth in each dataset. A segmentation model was trained using the nnU-Net framework. The manual reference tooth segmentation and the inferred tooth segmentation were superimposed and compared by computing precision, sensitivity, and the Dice-Sørensen coefficient. Surface meshes were extracted from the segmentations, and the distances between points on each mesh and their closest counterparts on the other mesh were computed, of which the mean (average symmetric surface distance) and 95th percentile (Hausdorff distance 95%, HD95) were reported. RESULTS The model achieved an overall precision of 0.867, a sensitivity of 0.926, a Dice-Sørensen coefficient of 0.895, and a 95% Hausdorff distance of 0.91 mm. The model predictions were less accurate for datasets containing dental restorations due to image artefacts. CONCLUSIONS The current study developed an automated method for tooth segmentation in MR scans, with moderate effectiveness for scans with artefacts and high effectiveness for scans without artefacts.
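The surface-distance metrics reported above (average symmetric surface distance and HD95) can be computed from the two mesh vertex sets with nearest-neighbour queries; a minimal sketch, assuming the meshes have already been extracted from the segmentations:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(points_a: np.ndarray, points_b: np.ndarray) -> np.ndarray:
    """Symmetric point-to-closest-point distances between two surface meshes (N x 3 vertex arrays)."""
    d_ab = cKDTree(points_b).query(points_a)[0]   # each point of A to its nearest point on B
    d_ba = cKDTree(points_a).query(points_b)[0]   # each point of B to its nearest point on A
    return np.concatenate([d_ab, d_ba])

def assd_and_hd95(points_a: np.ndarray, points_b: np.ndarray) -> tuple:
    d = surface_distances(points_a, points_b)
    return d.mean(), np.percentile(d, 95)   # average symmetric surface distance, HD95
```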
Affiliation(s)
- Tabea Flügge
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
- Department of Artificial Intelligence, Radboud University, Thomas van Aquinostraat 4, Nijmegen, 6525 GD, the Netherlands
- Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Waldeyerstraße 30, 48149 Münster, Germany
- Niels van Nistelrooij
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
- Stefanie Kellner
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Tong Xi
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
- Bram van Ginneken
- Department of Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, Nijmegen, 6525 GA, the Netherlands
- Stefaan Bergé
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
- Max Heiland
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Florian Kernen
- Department of Oral and Maxillofacial Surgery, Translational Implantology, Medical Center, Faculty of Medicine, University of Freiburg, Hugstetter Straße 55, 79106 Freiburg, Germany
- Ute Ludwig
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, University Medical Center Freiburg, University of Freiburg, Kilianstraße 5a, 79106 Freiburg im Breisgau, Germany
- Kento Odaka
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Department of Oral and Maxillofacial Radiology, Tokyo Dental College, 2-9-18, Kandamisakicho, Chiyoda-ku, Tokyo, 101-0061, Japan
9
Kavousinejad S, Ameli-Mazandarani Z, Behnaz M, Ebadifar A. A Deep Learning Framework for Automated Classification and Archiving of Orthodontic Diagnostic Documents. Cureus 2024; 16:e76530. PMID: 39877794. PMCID: PMC11774544. DOI: 10.7759/cureus.76530.
Abstract
Background Orthodontic diagnostic workflows often rely on manual classification and archiving of large volumes of patient images, a process that is both time-consuming and prone to errors such as mislabeling and incomplete documentation. These challenges can compromise treatment accuracy and overall patient care. To address these issues, we propose an artificial intelligence (AI)-driven deep learning framework based on convolutional neural networks (CNNs) to automate the classification and archiving of orthodontic diagnostic images. Our AI-based framework enhances workflow efficiency and reduces human errors. This study is an initial step towards fully automating orthodontic diagnosis and treatment planning systems, specifically focusing on the automation of orthodontic diagnostic record classification using AI. Methods This study employed a dataset comprising 61,842 images collected from three dental clinics, distributed across 13 categories. A sequential classification approach was developed, starting with a primary model that categorized images into three main groups: extraoral, intraoral, and radiographic. Secondary models were applied within each group to perform the final classification. The proposed model, enhanced with attention modules, was trained and compared with pre-trained models such as ResNet50 (Microsoft Corporation, Redmond, Washington, United States) and InceptionV3 (Google LLC, Mountain View, California, United States). External validation was performed using 13,729 new samples to assess the artificial intelligence (AI) system's accuracy and generalizability compared to expert assessments. Results The deep learning framework achieved an accuracy of 99.24% on an external validation set, demonstrating performance almost on par with human experts. Additionally, the model demonstrated significantly faster processing times compared to manual methods. Gradient-weighted class activation mapping (Grad-CAM) visualizations confirmed that the model effectively focused on clinically relevant features during classification, further supporting its clinical applicability. Conclusion This study introduces a deep learning framework for automating the classification and archiving of orthodontic diagnostic images. The model achieved impressive accuracy and demonstrated clinically relevant feature focus through Grad-CAM visualizations. Beyond its high accuracy, the framework offers significant improvements in processing speed, making it a viable tool for real-time applications in orthodontics. This approach not only reduces the workload in healthcare settings but also lays the foundation for future automated diagnostic and treatment planning systems in digital orthodontics.
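A sequential (two-stage) classifier of this kind routes each image through a primary model and then through a group-specific secondary model. The sketch below illustrates that routing with plain torchvision ResNet-50 backbones; the class lists, preprocessing, and architecture details are assumptions for illustration and do not reproduce the authors' attention-augmented model.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical two-stage router: group and class names are illustrative only.
PRIMARY_CLASSES = ["extraoral", "intraoral", "radiographic"]
SECONDARY_CLASSES = {
    "extraoral": ["frontal", "frontal_smile", "profile"],
    "intraoral": ["upper_occlusal", "lower_occlusal", "frontal", "left", "right"],
    "radiographic": ["panoramic", "lateral_cephalogram", "periapical", "hand_wrist", "cbct_view"],
}

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def build_model(num_classes: int) -> torch.nn.Module:
    model = models.resnet50(weights=None)                # backbone; attention modules omitted here
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    return model.eval()

primary = build_model(len(PRIMARY_CLASSES))
secondary = {group: build_model(len(classes)) for group, classes in SECONDARY_CLASSES.items()}

@torch.no_grad()
def classify(path: str) -> str:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    group = PRIMARY_CLASSES[primary(x).argmax(1).item()]                         # stage 1: main group
    label = SECONDARY_CLASSES[group][secondary[group](x).argmax(1).item()]       # stage 2: final class
    return f"{group}/{label}"
```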
Affiliation(s)
- Shahab Kavousinejad
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, IRN
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, IRN
- Zahra Ameli-Mazandarani
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, IRN
- Mohammad Behnaz
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, IRN
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, IRN
- Asghar Ebadifar
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, IRN
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, IRN
10
Huang Y, Liu W, Yao C, Miao X, Guan X, Lu X, Liang X, Ma L, Tang S, Zhang Z, Zhan J. A multimodal dental dataset facilitating machine learning research and clinic services. Sci Data 2024; 11:1291. PMID: 39604495. PMCID: PMC11603170. DOI: 10.1038/s41597-024-04130-1.
Abstract
Oral diseases affect nearly 3.5 billion people, and medical resources are limited, which makes access to oral health services nontrivial. Imaging-based machine learning technology is one of the most promising technologies for improving oral medical services and reducing patient costs, and its development requires publicly accessible datasets. However, previous public dental datasets have several limitations: a small volume of computed tomography (CT) images, a lack of multimodal data, and a lack of complexity and diversity of data. These issues are detrimental to the development of the field of dentistry. To address these problems, this paper introduces a new dental dataset that contains 169 patients, three commonly used dental image modalities, and images of various health conditions of the oral cavity. The proposed dataset has good potential to facilitate research on oral medical services, such as reconstructing 3D structures to assist clinicians in diagnosis and treatment, image translation, and image segmentation.
Affiliation(s)
- Yunyou Huang
- Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China
- The International Open Benchmark Council, 19801, Delaware, USA
- Wenjing Liu
- Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China
- Guilin Medical University, Guilin, 541199, China
- Caiqin Yao
- The Second Nanning People's Hospital, Nanning, 530031, China
- Xiuxia Miao
- Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China
- Xianglong Guan
- Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China
- Xiangjiang Lu
- Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China
- Xiaoshuang Liang
- Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China
- Li Ma
- Guilin Medical University, Guilin, 541199, China.
- Suqin Tang
- Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China.
- Zhifei Zhang
- Department of Physiology and Pathophysiology, Capital Medical University, Beijing, 100069, China.
- Jianfeng Zhan
- The International Open Benchmark Council, 19801, Delaware, USA.
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100086, China.
- University of Chinese Academy of Sciences, Beijing, 100086, China.
11
Esmaeilyfard R, Esmaeeli N, Paknahad M. An artificial intelligence mechanism for detecting cystic lesions on CBCT images using deep learning. J Stomatol Oral Maxillofac Surg 2024; 126:102152. PMID: 39551180. DOI: 10.1016/j.jormas.2024.102152.
Abstract
INTRODUCTION The present study aimed to develop and evaluate the efficiency of an artificial intelligence mechanism for detecting cystic lesions on cone beam computed tomography (CBCT) scans. METHOD AND MATERIALS The CBCT image dataset consisted of 150 samples, including 50 cases without lesions, 50 dentigerous cysts (DC), and 50 periapical cysts (PC), based on both radiographic and histopathological diagnosis. The dataset was divided into a development set with 70% of samples for training and validation and a final test set with the remaining 30% of samples. Four images were obtained for each case: panoramic, manually segmented panoramic, axial, and manually segmented axial images. A deep convolutional neural network (CNN) architecture was used for automatic lesion detection and for diagnosing the type of cystic lesion. To increase the number of image samples and avoid overfitting, a data augmentation procedure was applied. Recall, precision, F1-score, and average precision (AP) values were measured for lesion detection performance, and sensitivity, specificity, and accuracy indicators from the confusion matrix were calculated for the lesion classification performance of the CNN model. RESULTS Mean average precision, recall, and F1-score for the detection of DCs and PCs were 0.87, 0.92, and 0.89, respectively, before data augmentation and 0.93, 0.95, and 0.93 after augmentation. For the classification of DCs with data augmentation, sensitivity, specificity, accuracy, and AUC values were 96.4%, 99.5%, 97.3%, and 0.98, respectively; for PCs with augmentation, these values were 89.6%, 98.9%, 98.1%, and 0.94, respectively. Lastly, for samples without lesions, sensitivity, specificity, accuracy, and AUC values were 100%, 99.1%, 99.4%, and 0.99, respectively, with data augmentation. CONCLUSION Our deep learning-based CNN algorithm showed high accuracy, sensitivity, and precision values (more than 90%) for detecting and classifying dentigerous and periapical cysts on CBCT images using data augmentation.
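Data augmentation of the kind mentioned above is typically a stack of random geometric and intensity transforms applied to the training slices; a minimal torchvision sketch (the specific transforms and parameters are illustrative, not those used in the study):

```python
from torchvision import transforms

# Illustrative augmentation pipeline for CBCT-derived panoramic/axial slices.
train_transforms = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```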
Affiliation(s)
- Rasool Esmaeilyfard
- Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran
- Nasim Esmaeeli
- Assistant Professor, Department of Oral and Maxillofacial Radiology, School of Dentistry, Qom University of Medical Sciences, Qom, Iran
- Maryam Paknahad
- Oral and Dental Disease Research Center, Oral and Maxillofacial Radiology, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, Iran.
12
Radaic A, Kamarajan P, Cho A, Wang S, Hung G, Najarzadegan F, Wong DT, Ton-That H, Wang C, Kapila YL. Biological biomarkers of oral cancer. Periodontol 2000 2024; 96:250-280. PMID: 38073011. PMCID: PMC11163022. DOI: 10.1111/prd.12542.
Abstract
The oral squamous cell carcinoma (OSCC) 5-year survival rate of 41% has marginally improved in the last few years, with less than a 1% improvement per year from 2005 to 2017, and survival is higher when the disease is detected at early stages. Based on histopathological grading of oral dysplasia, it is estimated that severe dysplasia has a malignant transformation rate of 7%-50%. Despite these numbers, oral dysplasia grading does not reliably predict its clinical behavior. Thus, more accurate markers predicting oral dysplasia progression to cancer would enable better targeting of these lesions for closer follow-up, especially in the early stages of the disease. In this context, molecular biomarkers derived from genetics, proteins, and metabolites play key roles in clinical oncology. These molecular signatures can help predict the likelihood of OSCC development and/or progression, have the potential to detect the disease at an early stage, support treatment decision-making, and predict treatment responsiveness. In addition, identifying reliable biomarkers for OSCC detection that can be obtained non-invasively would enhance the management of OSCC. This review discusses biomarkers for OSCC that have emerged from different biological areas, including genomics, transcriptomics, proteomics, metabolomics, immunomics, and microbiomics.
Affiliation(s)
- Allan Radaic
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- Pachiyappan Kamarajan
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- Alex Cho
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- Sandy Wang
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- Guo-Chin Hung
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- David T. Wong
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- Hung Ton-That
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- Cun-Yu Wang
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- Yvonne L. Kapila
- School of Dentistry, University of California, Los Angeles (UCLA), Los Angeles, California, USA
13
Liu Y, Xia K, Cen Y, Ying S, Zhao Z. Artificial intelligence for caries detection: a novel diagnostic tool using deep learning algorithms. Oral Radiol 2024; 40:375-384. PMID: 38498223. DOI: 10.1007/s11282-024-00741-x.
Abstract
OBJECTIVES The aim of this study was to develop an assessment tool for automatic detection of dental caries in periapical radiographs using a convolutional neural network (CNN) architecture. METHODS A novel diagnostic model named ResNet + SAM was established using numerous periapical radiographs (4278 images) annotated by medical experts to automatically detect dental caries. The performance of the model was compared with that of traditional CNNs (VGG19, ResNet-50) and of dentists. The gradient-weighted class activation mapping (Grad-CAM) technique was used to visualize the image regions on which the CNNs based their predictions. RESULTS ResNet + SAM demonstrated significantly improved performance compared with the modified ResNet-50 model, with an average F1 score of 0.886 (95% CI 0.855-0.918), accuracy of 0.885 (95% CI 0.862-0.901) and AUC of 0.954 (95% CI 0.924-0.980). The comparison between the model and the dentists revealed that the model achieved higher accuracy than the junior dentists. With the assistance of the tool, the dentists achieved superior metrics, with a mean F1 score of 0.827, and interobserver agreement for dental caries improved from 0.592/0.610 to 0.706/0.723. CONCLUSIONS According to the experimental results, the automatic assessment tool using the ResNet + SAM model shows remarkable performance and excellent potential for identifying dental caries. Use of the assessment tool in clinical practice could provide valuable support for clinical decision-making in dentistry and reduce the workload of dentists.
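Grad-CAM, used above to visualize what the network attends to, weights the last convolutional feature maps by their spatially pooled gradients and upsamples the result to image size. Below is a minimal plain-PyTorch sketch on a ResNet-50 backbone; the two-class head and the choice of target layer are assumptions, and the authors' ResNet + SAM architecture itself is not reproduced.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # assumed head: caries vs. no caries
model.eval()

feats, grads = {}, {}
layer = model.layer4[-1]                               # last conv block as the target layer
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a heatmap (H x W, values in [0, 1]) highlighting regions driving the class score."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)          # pooled gradients per channel
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8))[0, 0]

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=1)
```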
Affiliation(s)
- Yiliang Liu
- College of Computer Science, Sichuan University, No. 24 South Section 1, Yihuan Road, Chengdu, 610065, China
- State Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, 610064, Sichuan, China
- Kai Xia
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, No. 14, 3rd Section, South Renmin Road, Chengdu, 610041, Sichuan, China
- Yueyan Cen
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 14, 3rd Section, South Renmin Road, Chengdu, 610041, Sichuan, China
- Sancong Ying
- College of Computer Science, Sichuan University, No. 24 South Section 1, Yihuan Road, Chengdu, 610065, China.
- State Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, 610064, Sichuan, China.
- Zhihe Zhao
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, No. 14, 3rd Section, South Renmin Road, Chengdu, 610041, Sichuan, China
14
Wajer R, Wajer A, Kazimierczak N, Wilamowska J, Serafin Z. The Impact of AI on Metal Artifacts in CBCT Oral Cavity Imaging. Diagnostics (Basel) 2024; 14:1280. PMID: 38928694. PMCID: PMC11203150. DOI: 10.3390/diagnostics14121280.
Abstract
OBJECTIVE This study aimed to assess the impact of artificial intelligence (AI)-driven noise reduction algorithms on metal artifacts and image quality parameters in cone-beam computed tomography (CBCT) images of the oral cavity. MATERIALS AND METHODS This retrospective study included 70 patients, 61 of whom were analyzed after excluding those with severe motion artifacts. CBCT scans, performed using a Hyperion X9 PRO 13 × 10 CBCT machine, included images with dental implants, amalgam fillings, orthodontic appliances, root canal fillings, and crowns. Images were processed with the ClariCT.AI deep learning model (DLM) for noise reduction. Objective image quality was assessed using metrics such as the differentiation between voxel values (ΔVVs), the artifact index (AIx), and the contrast-to-noise ratio (CNR). Subjective assessments were performed by two experienced readers, who rated overall image quality and artifact intensity on predefined scales. RESULTS Compared with native images, DLM reconstructions significantly reduced the AIx and increased the CNR (p < 0.001), indicating improved image clarity and artifact reduction. Subjective assessments also favored DLM images, with higher ratings for overall image quality and lower artifact intensity (p < 0.001). However, the ΔVV values were similar between the native and DLM images, indicating that while the DLM reduced noise, it maintained the overall density distribution. Orthodontic appliances produced the most pronounced artifacts, while implants generated the least. CONCLUSIONS AI-based noise reduction using ClariCT.AI significantly enhances CBCT image quality by reducing noise and metal artifacts, thereby improving diagnostic accuracy and treatment planning. Further research with larger, multicenter cohorts is recommended to validate these findings.
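The objective image-quality metrics named above are simple region-of-interest statistics; the sketch below shows two common formulations, which may differ from the exact ROI placement and formulas used in the study.

```python
import numpy as np

def contrast_to_noise_ratio(roi: np.ndarray, background: np.ndarray) -> float:
    """CNR as |mean(ROI) - mean(background)| / SD(background); one common definition."""
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

def artifact_index(artifact_roi: np.ndarray, reference_roi: np.ndarray) -> float:
    """Artifact index as sqrt(SD_artifact^2 - SD_reference^2); a commonly used formulation."""
    diff = artifact_roi.std(ddof=1) ** 2 - reference_roi.std(ddof=1) ** 2
    return float(np.sqrt(max(diff, 0.0)))
```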
Affiliation(s)
- Róża Wajer
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Natalia Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Justyna Wilamowska
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Zbigniew Serafin
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
15
Fan FY, Lin WC, Huang HY, Shen YK, Chang YC, Li HY, Ruslin M, Lee SY. Applying machine learning to assess the morphology of sculpted teeth. J Dent Sci 2024; 19:542-549. PMID: 38303893. PMCID: PMC10829735. DOI: 10.1016/j.jds.2023.09.023.
Abstract
Background/purpose Producing tooth crowns through dental technology is a basic function of dentistry, and crown morphology is the most important parameter for evaluating its acceptability. This study established a procedure for assessing the morphology of sculpted teeth, divided into four steps: tooth collection, scanning, use of mathematical methods and software, and machine learning calculation. Materials and methods Dental plaster rods were prepared. The effective dataset comprised 121 teeth (tooth position 15), 342 teeth (tooth position 16), 69 teeth (tooth position 21), and 89 teeth (tooth position 43), for a total of 621 teeth, which were processed through the four steps described above. Results The area under the curve (AUC) values were 0, 0.5, and 0.72 in this study. The micro-averaged/macro-averaged precision and recall rates were 0.75/0.73 and 0.75/0.72, respectively. When a photograph of a newly carved tooth was entered into the program, the machine learning model evaluated the quality of its morphology with an effectiveness of about 70%-75%. The two averaging concepts (micro-average/macro-average) and the AUC yielded similar values. Conclusion This study established a set of procedures that can judge the quality of teeth hand-carved from plaster rods, with an accuracy of about 70%-75%. It is expected that this process can assist dental technicians in judging the quality of their hand-carved work and thus help them learn tooth morphology more effectively.
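Micro- and macro-averaged precision and recall, as used above, differ only in whether the per-class counts are pooled before averaging or the per-class scores are averaged afterwards; a small scikit-learn sketch with made-up quality labels:

```python
from sklearn.metrics import precision_score, recall_score

# Illustrative multi-class labels for carved-tooth quality grades; values are invented,
# only the micro-/macro-averaging mechanics mirror the paper's evaluation.
y_true = ["good", "good", "fair", "poor", "good", "fair", "poor", "good"]
y_pred = ["good", "fair", "fair", "poor", "good", "good", "poor", "good"]

for avg in ("micro", "macro"):
    p = precision_score(y_true, y_pred, average=avg, zero_division=0)
    r = recall_score(y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg}-averaged precision = {p:.2f}, recall = {r:.2f}")
```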
Affiliation(s)
- Fang-Yu Fan
- School of Dental Technology, College of Oral Medicine, Taipei Medical University, Taipei, Taiwan
- Wei-Chun Lin
- School of Dental Technology, College of Oral Medicine, Taipei Medical University, Taipei, Taiwan
- Department of Dentistry, Wan-Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Center for Tooth Bank and Dental Stem Cell Technology, Taipei Medical University, Taipei, Taiwan
- Huei-Yu Huang
- Department of Dentistry, Taipei Medical University-Shuang Ho Hospital, New Taipei City, Taiwan
- School of Dentistry, College of Oral Medicine, Taipei Medical University, Taipei, Taiwan
- Yung-Kang Shen
- School of Dental Technology, College of Oral Medicine, Taipei Medical University, Taipei, Taiwan
- Department of Oral Biology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Yung-Chun Chang
- Graduate Institute of Data Science, College of Management, Taipei Medical University, Taipei, Taiwan
- Heng-Yu Li
- School of Dental Technology, College of Oral Medicine, Taipei Medical University, Taipei, Taiwan
- Muhammad Ruslin
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Hasanuddin University, Makassar, Indonesia
- Sheng-Yang Lee
- Department of Dentistry, Wan-Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Center for Tooth Bank and Dental Stem Cell Technology, Taipei Medical University, Taipei, Taiwan
- School of Dentistry, College of Oral Medicine, Taipei Medical University, Taipei, Taiwan
16
Kapoor S, Shyagali TR, Kuraria A, Gupta A, Tiwari A, Goyal P. An artificial neural network approach for rational decision-making in borderline orthodontic cases: A preliminary analytical observational in silico study. J Orthod 2023; 50:439-448. PMID: 37148164. DOI: 10.1177/14653125231172527.
Abstract
INTRODUCTION Artificial intelligence (AI) technology has transformed the way healthcare functions in the present scenario. In orthodontics, expert systems and machine learning have aided clinicians in making complex, multifactorial decisions. One such scenario is the extraction decision in a borderline case. OBJECTIVE The present in silico study was planned with the intention of building an AI model for extraction decisions in borderline orthodontic cases. DESIGN An observational analytical study. SETTING Department of Orthodontics, Hitkarini Dental College and Hospital, Madhya Pradesh Medical University, Jabalpur, India. METHODS An artificial neural network (ANN) model for extraction or non-extraction decisions in borderline orthodontic cases was constructed based on a supervised learning algorithm using the Python (version 3.9) scikit-learn library and the feed-forward backpropagation method. Based on 40 borderline orthodontic cases, 20 experienced clinicians were asked to recommend extraction or non-extraction treatment. The orthodontists' decisions and the diagnostic records, including selected extraoral and intraoral features, model analysis and cephalometric analysis parameters, constituted the AI training dataset. The built model was then tested using a testing dataset of 20 borderline cases. After running the model on the testing dataset, the accuracy, F1 score, precision and recall were calculated. RESULTS The present AI model showed an accuracy of 97.97% for extraction and non-extraction decision-making. The receiver operating characteristic (ROC) curve and cumulative accuracy profile showed a near-perfect model, with precision, recall and F1 values of 0.80, 0.84 and 0.82 for non-extraction decisions and 0.90, 0.87 and 0.88 for extraction decisions. LIMITATION As the present study was preliminary in nature, the dataset was small and population-specific. CONCLUSION The present AI model gave accurate results for extraction and non-extraction treatment decisions in borderline orthodontic cases in the present population.
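A feed-forward network of the kind described above can be set up in scikit-learn with MLPClassifier; the sketch below uses random placeholder features purely to show the training and prediction flow, not the study's actual records or hyperparameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature vectors per borderline case (e.g., arch-length discrepancy, ANB,
# incisor inclination, lip protrusion, ...); values and feature set are illustrative only.
rng = np.random.default_rng(42)
X_train = rng.random((40, 8))
y_train = rng.integers(0, 2, size=40)     # 0 = non-extraction, 1 = extraction
X_test = rng.random((20, 8))

# Feed-forward network trained with backpropagation, mirroring the scikit-learn setup described.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), activation="relu",
                  solver="adam", max_iter=2000, random_state=42),
)
clf.fit(X_train, y_train)
decisions = clf.predict(X_test)
```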
Affiliation(s)
- Shanya Kapoor
- Department of Orthodontics and Dentofacial Orthopedics, Hitkarini Dental College and Hospital, Jabalpur, Madhya Pradesh, India
- Tarulatha R Shyagali
- Department of Orthodontics and Dentofacial Orthopedics, MR Ambedkar Dental College and Hospital, Bangalore, Karnataka, India
- Amit Kuraria
- Department of Computer Sciences, Rabindranath Tagore University, Bhopal, Madhya Pradesh, India
- Abhishek Gupta
- Department of Orthodontics and Dentofacial Orthopedics, Hitkarini Dental College and Hospital, Jabalpur, Madhya Pradesh, India
- Anil Tiwari
- Department of Orthodontics and Dentofacial Orthopedics, Hitkarini Dental College and Hospital, Jabalpur, Madhya Pradesh, India
- Payal Goyal
- Department of Orthodontics and Dentofacial Orthopedics, Hitkarini Dental College and Hospital, Jabalpur, Madhya Pradesh, India
17
Bonny T, Al Nassan W, Obaideen K, Al Mallahi MN, Mohammad Y, El-damanhoury HM. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023; 12:1179. PMID: 37942018. PMCID: PMC10630586. DOI: 10.12688/f1000research.140204.1.
Abstract
Artificial intelligence (AI) technologies play a significant role across sectors including healthcare, engineering, the sciences, and smart cities. AI has the potential to improve the quality of patient care and treatment outcomes while minimizing the risk of human error, and it is transforming the dental industry just as it is revolutionizing other sectors. In dentistry, it is used to diagnose dental diseases and provide treatment recommendations, and dental professionals increasingly rely on AI to assist in diagnosis, clinical decision-making, treatment planning, and prognosis prediction across ten dental specialties. One of the most significant advantages of AI in dentistry is its ability to analyze vast amounts of data quickly and accurately, providing dental professionals with valuable insights to enhance their decision-making. The purpose of this paper is to identify the AI algorithms most frequently used in dentistry and to assess how well they perform in diagnosis, clinical decision-making, treatment, and prognosis prediction in ten dental specialties: dental public health, endodontics, oral and maxillofacial surgery, oral medicine and pathology, oral and maxillofacial radiology, orthodontics and dentofacial orthopedics, pediatric dentistry, periodontics, prosthodontics, and digital dentistry in general. We also discuss the pros and cons of using AI in each of these specialties. Finally, we present the limitations of AI in dentistry, which prevent it from replacing dental personnel; dentists should consider AI a complementary benefit rather than a threat.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Khaled Obaideen
- Sustainable Energy and Power Systems Research Centre, RISE, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Maryam Nooman Al Mallahi
- Department of Mechanical and Aerospace Engineering, United Arab Emirates University, Al Ain City, Abu Dhabi, 27272, United Arab Emirates
- Yara Mohammad
- College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
- Hatem M. El-damanhoury
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates
18
|
Kantharimuthu M, M M, P S, G AAM, N KBS, K JD. Oral Cancer Prediction Using a Probability Neural Network (PNN). Asian Pac J Cancer Prev 2023; 24:2991-2995. [PMID: 37774049 PMCID: PMC10762769 DOI: 10.31557/apjcp.2023.24.9.2991] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Accepted: 09/10/2023] [Indexed: 10/01/2023] Open
Abstract
OBJECTIVE In India, oral cancer is usually identified at an advanced stage of malignancy. We are therefore motivated to identify oral cancer at an early stage, which can extend patient survival, although early detection is also more challenging. METHODS The proposed work uses a probabilistic neural network (PNN) combined with the discrete wavelet transform to predict oral malignancy. The PNN model achieved a classification accuracy of 80%, making this technique well suited to the prediction of oral cancer. RESULT Because oral lesions are heterogeneous in appearance, the cancerous region is difficult to identify; this work explores computer vision techniques that support the prediction of oral cancer. CONCLUSION Oral screening is important for making decisions about oral lesions and for avoiding delayed referrals, thereby reducing mortality rates.
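As a rough illustration only (the paper's exact pipeline and data are not described here), a PNN can be approximated as a Parzen-window classifier over wavelet features. The sketch below uses PyWavelets for the discrete wavelet transform and NumPy for the Gaussian-kernel class scores; the wavelet choice, smoothing parameter, and placeholder images are all assumptions.

```python
# Minimal sketch of a PNN over discrete-wavelet-transform features (illustrative only).
import numpy as np
import pywt

def dwt_features(image, wavelet="db2", level=2):
    """Flatten the approximation coefficients of a 2-D DWT into a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

def pnn_predict(X_train, y_train, x, sigma=1.0):
    """Parzen-window PNN: average Gaussian kernel per class, pick the largest score."""
    scores = {}
    for cls in np.unique(y_train):
        diffs = X_train[y_train == cls] - x
        scores[cls] = np.mean(np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2)))
    return max(scores, key=scores.get)

# Placeholder "images": random arrays standing in for lesion image patches.
rng = np.random.default_rng(1)
images = rng.random((30, 64, 64))
labels = rng.integers(0, 2, 30)          # 1 = malignant (illustrative labels)
X = np.array([dwt_features(im) for im in images])

query = dwt_features(rng.random((64, 64)))
print("predicted class:", pnn_predict(X, labels, query, sigma=2.0))
```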
Collapse
Affiliation(s)
| | - Malathi M
- Department of ECE, Rajalakshmi Institute of Technology, Chennai, India.
| | - Sinthia P
- Department of ECE, Saveetha Engineering College, Chennai, India.
| | - Aloy Anuja Mary G
- VelTech Rangarajan Dr.Sagunthala R&D Institute of Science and Technology, Chennai, India.
| | | | - Jalal Deen K
- Solamalai College of Engineering, Madurai, India.
| |
Collapse
|
19
|
Strunga M, Urban R, Surovková J, Thurzo A. Artificial Intelligence Systems Assisting in the Assessment of the Course and Retention of Orthodontic Treatment. Healthcare (Basel) 2023; 11:healthcare11050683. [PMID: 36900687 PMCID: PMC10000479 DOI: 10.3390/healthcare11050683] [Citation(s) in RCA: 31] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 02/17/2023] [Accepted: 02/23/2023] [Indexed: 03/03/2023] Open
Abstract
This scoping review examines contemporary applications of advanced artificial intelligence (AI) software in orthodontics, focusing on its potential to improve daily working protocols while also highlighting its limitations. The aim of the review was to evaluate the accuracy and efficiency of current AI-based systems, compared with conventional methods, in diagnosis, assessment of treatment progress, and follow-up stability. The researchers searched various online databases and identified diagnostic software and dental monitoring software as the most studied applications in contemporary orthodontics. The former can accurately identify the anatomical landmarks used for cephalometric analysis, while the latter enables orthodontists to thoroughly monitor each patient, define specific desired outcomes, track progress, and warn of potential changes in pre-existing pathology. However, there is limited evidence on the stability of treatment outcomes and relapse detection. The study concludes that AI is an effective tool for managing orthodontic treatment from diagnosis to retention, benefiting both patients and clinicians: patients find the software easy to use and feel better cared for, while clinicians can make diagnoses more easily and assess compliance and damage to braces or aligners more quickly and frequently.
Collapse
|
20
|
Pan F, Liu J, Cen Y, Chen Y, Cai R, Zhao Z, Liao W, Wang J. Accuracy of RGB-D camera-based and stereophotogrammetric facial scanners: a comparative study. J Dent 2022; 127:104302. [PMID: 36152954 DOI: 10.1016/j.jdent.2022.104302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 09/05/2022] [Accepted: 09/20/2022] [Indexed: 12/14/2022] Open
Abstract
OBJECTIVES This study aimed to evaluate and compare the accuracy and inter-operator reliability of a low-cost red-green-blue-depth (RGB-D) camera-based facial scanner (Bellus3D Arc7) with a stereophotogrammetry facial scanner (3dMD) and to explore the possibility of the former as a clinical substitute for the latter. METHODS A mannequin head was selected as the research object. In the RGB-D camera-based facial scanner group, the head was continuously scanned five times using an RGB-D camera-based facial scanner (Bellus3D Arc7), and the outcome data of each scan were then imported into CAD software (MeshLab) to reconstruct three-dimensional (3D) facial photographs. In the stereophotogrammetry facial scanner group, the mannequin head was scanned with a stereophotogrammetry facial scanner (3dMD). Selected parameters were measured directly on the reconstructed 3D virtual faces using CAD software. The same parameters were then measured directly on the mannequin head using the direct anthropometry (DA) method as the gold standard for later comparison. The accuracy of the facial scanners was evaluated in terms of trueness and precision. Trueness was evaluated by comparing the measurement results of the two groups with each other and with those of DA using equivalence tests and average absolute deviations, while precision and inter-operator reliability were assessed using the intraclass correlation coefficient (ICC). A 3D facial mesh deviation between the two groups was also calculated for further reference using 3D metrology software (GOM inspect pro). RESULTS In terms of trueness, the average absolute deviations between the RGB-D camera-based and stereophotogrammetry facial scanners, between the RGB-D camera-based facial scanner and DA, and between the stereophotogrammetry facial scanner and DA were statistically equivalent at 0.50 ± 0.27 mm, 0.61 ± 0.42 mm, and 0.28 ± 0.14 mm, respectively. Equivalence test results confirmed that their equivalence was within clinical requirements (<1 mm). The ICC for each parameter was approximately 0.999 in terms of precision and inter-operator reliability. A 3D facial mesh analysis suggested that the deviation between the two groups was 0.37 ± 0.01 mm. CONCLUSIONS For facial scanners, an accuracy of <1 mm is commonly considered clinically acceptable. Both the RGB-D camera-based and stereophotogrammetry facial scanners in this study showed acceptable trueness, high precision, and inter-operator reliability. A low-cost RGB-D camera-based facial scanner could be an eligible clinical substitute for traditional stereophotogrammetry. CLINICAL SIGNIFICANCE The low-cost RGB-D camera-based facial scanner showed clinically acceptable trueness, high precision, and inter-operator reliability; thus, it could be an eligible clinical substitute for traditional stereophotogrammetry.
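As an illustration of the trueness comparison described above (not the authors' actual analysis), the sketch below computes the average absolute deviation between scanner-derived and directly measured parameters and checks it against the commonly cited 1 mm clinical threshold; all measurement values are made up.

```python
# Illustrative trueness check: average absolute deviation vs. a 1 mm clinical threshold.
import numpy as np

# Made-up linear measurements (mm) for the same landmarks on a mannequin head.
direct_anthropometry = np.array([32.1, 45.8, 60.2, 28.4, 51.0])
rgbd_scanner         = np.array([32.7, 45.2, 60.9, 29.1, 50.4])
stereo_scanner       = np.array([32.3, 45.6, 60.4, 28.6, 50.8])

def avg_abs_dev(a, b):
    """Mean and SD of the absolute differences between two measurement sets."""
    d = np.abs(a - b)
    return d.mean(), d.std()

for name, values in [("RGB-D vs DA", rgbd_scanner),
                     ("3dMD vs DA", stereo_scanner)]:
    mean_dev, sd_dev = avg_abs_dev(values, direct_anthropometry)
    verdict = "clinically acceptable" if mean_dev < 1.0 else "exceeds 1 mm threshold"
    print(f"{name}: {mean_dev:.2f} +/- {sd_dev:.2f} mm, {verdict}")
```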
Collapse
Affiliation(s)
- Fangwei Pan
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Jialing Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Yueyan Cen
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Ye Chen
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Ruilie Cai
- Department of Epidemiology and Biostatistics, Arnold School of Public Health, University of South Carolina, South Carolina, United States
| | - Zhihe Zhao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Wen Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China.
| | - Jian Wang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China.
| |
Collapse
|
21
|
Updates and Original Case Studies Focused on the NMR-Linked Metabolomics Analysis of Human Oral Fluids Part II: Applications to the Diagnosis and Prognostic Monitoring of Oral and Systemic Cancers. Metabolites 2022; 12:metabo12090778. [PMID: 36144183 PMCID: PMC9505390 DOI: 10.3390/metabo12090778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2022] [Revised: 08/12/2022] [Accepted: 08/15/2022] [Indexed: 11/24/2022] Open
Abstract
Human saliva offers many advantages over other biofluids regarding its use and value as a bioanalytical medium for the identification and prognostic monitoring of human diseases, mainly because its collection is largely non-invasive, is relatively cheap, and does not require any major clinical supervision, nor supervisory input. Indeed, participants donating this biofluid for such purposes, including the identification, validation and quantification of surrogate biomarkers, may easily self-collect such samples in their homes following the provision of full collection details to them by researchers. In this report, the authors have focused on the applications of metabolomics technologies to the diagnosis and progressive severity monitoring of human cancer conditions, firstly oral cancers (e.g., oral cavity squamous cell carcinoma), and secondly extra-oral (systemic) cancers such as lung, breast and prostate cancers. For each publication reviewed, the authors provide a detailed evaluation and critical appraisal of the experimental design, sample size, ease of sample collection (usually but not exclusively as whole mouth saliva (WMS)), their transport, length of storage and preparation for analysis. Moreover, recommended protocols for the optimisation of NMR pulse sequences for analysis, along with the application of methods and techniques for verifying and resonance assignments and validating the quantification of biomolecules responsible, are critically considered. In view of the authors’ specialisms and research interests, the majority of these investigations were conducted using NMR-based metabolomics techniques. The extension of these studies to determinations of metabolic pathways which have been pathologically disturbed in these diseases is also assessed here and reviewed. Where available, data for the monitoring of patients’ responses to chemotherapeutic treatments, and in one case, radiotherapy, are also evaluated herein. Additionally, a novel case study featured evaluates the molecular nature, levels and diagnostic potential of 1H NMR-detectable salivary ‘acute-phase’ glycoprotein carbohydrate side chains, and/or their monomeric saccharide derivatives, as biomarkers for cancer and inflammatory conditions.
Collapse
|
22
|
Nanni L, Brahnam S, Paci M, Ghidoni S. Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22166129. [PMID: 36015898 PMCID: PMC9415767 DOI: 10.3390/s22166129] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 08/09/2022] [Accepted: 08/12/2022] [Indexed: 05/08/2023]
Abstract
CNNs and other deep learners are now state of the art in medical imaging research. However, the small sample size of many medical data sets dampens performance and results in overfitting. In some medical areas, it is simply too labor-intensive and expensive to amass images numbering in the hundreds of thousands. Building deep CNN ensembles of pre-trained CNNs is one powerful method for overcoming this problem. Ensembles combine the outputs of multiple classifiers to improve performance and rely on the introduction of diversity, which can be injected at many levels of the classification workflow. A recent ensembling method that has shown promise is to vary the activation functions across a set of CNNs or within different layers of a single CNN. This study examines the performance of both methods using a large set of twenty activation functions, six of which are presented here for the first time: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The proposed method was tested on fifteen medical data sets representing various classification tasks. The best-performing ensemble combined two well-known CNNs (VGG16 and ResNet50) whose standard ReLU activation layers were randomly replaced with another activation. The results demonstrate the superior performance of this approach.
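The sketch below illustrates the general idea of ensembling CNNs whose ReLU layers are randomly replaced with other activations. It uses standard PyTorch activations as stand-ins for the paper's custom functions (the MeLU/GaLU variants are not implemented here), and the backbone choice, ensemble size, and averaging scheme are assumptions.

```python
# Illustrative ensemble of CNNs with randomly swapped activation functions (PyTorch).
# Built-in activations stand in for the paper's custom MeLU/GaLU variants.
import random
import torch
import torch.nn as nn
from torchvision import models

CANDIDATE_ACTIVATIONS = [nn.ELU, nn.LeakyReLU, nn.SiLU, nn.GELU, nn.ReLU]

def randomize_relus(module, rng):
    """Recursively replace every nn.ReLU with a randomly chosen activation."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, rng.choice(CANDIDATE_ACTIVATIONS)())
        else:
            randomize_relus(child, rng)

rng = random.Random(0)
ensemble = []
for _ in range(3):                       # small ensemble for illustration
    net = models.vgg16(weights=None)     # untrained backbone; training is omitted here
    randomize_relus(net, rng)
    net.eval()
    ensemble.append(net)

x = torch.randn(1, 3, 224, 224)          # placeholder input image batch
with torch.no_grad():
    probs = torch.stack([torch.softmax(net(x), dim=1) for net in ensemble]).mean(0)
print("ensemble top class:", probs.argmax(dim=1).item())
```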
Collapse
Affiliation(s)
- Loris Nanni
- Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy
| | - Sheryl Brahnam
- Department of Information Technology and Cybersecurity, Missouri State University, 901 S. National Street, Springfield, MO 65804, USA
- Correspondence:
| | - Michelangelo Paci
- BioMediTech, Faculty of Medicine and Health Technology, Tampere University, Arvo Ylpön katu 34, D 219, FI-33520 Tampere, Finland
| | - Stefano Ghidoni
- Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy
| |
Collapse
|
23
|
Artificial intelligence-aided detection of ectopic eruption of maxillary first molars based on panoramic radiography. J Dent 2022; 125:104239. [PMID: 35863549 DOI: 10.1016/j.jdent.2022.104239] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Revised: 07/13/2022] [Accepted: 07/17/2022] [Indexed: 02/08/2023] Open
Abstract
OBJECTIVES Ectopic eruption (EE) of maxillary permanent first molars (PFMs) is among the most frequent ectopic eruptions and can lead to premature loss of the adjacent primary second molars, impaction of premolars, and a decrease in dental arch length. In addition to oral manifestations such as delayed eruption and discoloration of PFMs, EE of maxillary PFMs can be revealed on panoramic radiography. Because identifying eruption anomalies on radiographs is strongly experience-dependent, we developed an automatic model that can aid dentists in this task and allow timelier intervention. METHODS Panoramic X-ray images from 1480 patients aged 4-9 years were used to train an auto-screening model. Another 100 panoramic images were used to validate and test the model. RESULTS The positive and negative predictive values of this auto-screening system were 0.86 and 0.88, respectively, with a specificity of 0.90 and a sensitivity of 0.86. Using the model to aid dentists in detecting EE on the 100 panoramic images led to higher sensitivity and specificity than when three experienced pediatric dentists detected EE manually. CONCLUSIONS A deep learning-based automatic screening system is useful and promising for detecting EE of maxillary PFMs, with relatively high specificity. However, deep learning is not completely accurate in detecting EE; to minimize the effect of possible false-negative diagnoses, regular follow-up and re-evaluation are required. CLINICAL SIGNIFICANCE Identification of EE through a semi-automatic screening model can improve the efficacy and accuracy of clinical diagnosis compared with human experts alone. This method may allow earlier detection and timelier intervention and management.
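To make the reported screening metrics concrete, the sketch below shows how sensitivity, specificity, and positive/negative predictive values can be derived from a binary confusion matrix with scikit-learn; the labels here are synthetic, not the study's data.

```python
# Computing sensitivity, specificity, PPV and NPV from binary screening results.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 100)                     # 1 = ectopic eruption present
y_pred = np.where(rng.random(100) < 0.85, y_true,    # mostly correct predictions,
                  1 - y_true)                        # flipped otherwise (synthetic)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)      # recall on positives
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)              # positive predictive value
npv = tn / (tn + fn)              # negative predictive value
print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```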
Collapse
|
24
|
Ragodos R, Wang T, Padilla C, Hecht JT, Poletta FA, Orioli IM, Buxó CJ, Butali A, Valencia-Ramirez C, Restrepo Muñeton C, Wehby GL, Weinberg SM, Marazita ML, Moreno Uribe LM, Howe BJ. Dental anomaly detection using intraoral photos via deep learning. Sci Rep 2022; 12:11577. [PMID: 35804050 PMCID: PMC9270352 DOI: 10.1038/s41598-022-15788-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 06/29/2022] [Indexed: 11/08/2022] Open
Abstract
Children with orofacial clefting (OFC) present with a wide range of dental anomalies. Identifying these anomalies is vital to understanding their etiology and discerning the complex phenotypic spectrum of OFC. Such anomalies are currently identified using intra-oral exams by dentists, a costly and time-consuming process. We claim that automating anomaly detection with deep neural networks (DNNs) could increase efficiency and provide reliable anomaly detection while potentially increasing the speed of research discovery. This study characterizes the use of DNNs to identify dental anomalies by training a DNN model using intraoral photographs from the largest international cohort to date of children with nonsyndromic OFC and controls (OFC1). In this project, the intraoral images were submitted to a convolutional neural network model to perform multi-label, multi-class classification of 10 dental anomalies. The network predicts whether an individual exhibits any of the 10 anomalies and can do so significantly faster than a human rater. For all but three anomalies, F1 scores suggest that our model performs competitively at anomaly detection when compared to a dentist with 8 years of clinical experience. In addition, we use saliency maps to provide a post hoc interpretation of our model's predictions, enabling dentists to examine and verify them.
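A minimal sketch of a multi-label anomaly classifier of the kind described (a CNN backbone with a 10-unit sigmoid head trained with binary cross-entropy) is given below; the backbone, decision threshold, and data are assumptions, not the authors' architecture.

```python
# Illustrative multi-label head for 10 dental anomalies on a CNN backbone (PyTorch).
import torch
import torch.nn as nn
from torchvision import models

NUM_ANOMALIES = 10

backbone = models.resnet18(weights=None)              # assumption: any CNN backbone works
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ANOMALIES)

criterion = nn.BCEWithLogitsLoss()                    # one binary decision per anomaly
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# One synthetic training step on placeholder intraoral photos and label vectors.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, NUM_ANOMALIES)).float()

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# At inference, each anomaly is flagged independently by thresholding its sigmoid output.
with torch.no_grad():
    predictions = (torch.sigmoid(backbone(images)) > 0.5).int()
print(predictions.shape)   # (batch, 10): one 0/1 flag per anomaly
```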
Collapse
Affiliation(s)
- Ronilo Ragodos
- Department of Management Sciences, Tippie College of Business, University of Iowa, Iowa City, IA, USA
| | - Tong Wang
- Department of Management Sciences, Tippie College of Business, University of Iowa, Iowa City, IA, USA.
| | - Carmencita Padilla
- Department of Pediatrics, College of Medicine, University of the Philippines, Manila, Philippines
| | - Jacqueline T Hecht
- Department of Pediatrics, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Fernando A Poletta
- ECLAMC at Center for Medical Education and Clinical Research, CEMIC-CONICET, Buenos Aires, Argentina
| | - Iêda M Orioli
- ECLAMC at Department of Genetics, Institute of Biology, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
| | - Carmen J Buxó
- Dental and Craniofacial Genomics Core, School of Dental Medicine, University of Puerto Rico, San Juan, PR, USA
| | - Azeez Butali
- Department of Oral Pathology, Radiology, and Medicine, University of Iowa, Iowa City, IA, USA
- The Iowa Institute for Oral Health Research, College of Dentistry, University of Iowa, Iowa City, IA, USA
| | | | | | - George L Wehby
- Department of Health Management and Policy, College of Public Health, University of Iowa, Iowa City, IA, USA
| | - Seth M Weinberg
- Center for Craniofacial and Dental Genetics, School of Dental Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Mary L Marazita
- Center for Craniofacial and Dental Genetics, School of Dental Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Lina M Moreno Uribe
- The Iowa Institute for Oral Health Research, College of Dentistry, University of Iowa, Iowa City, IA, USA
- Department of Orthodontics, College of Dentistry, University of Iowa, Iowa City, IA, USA
| | - Brian J Howe
- The Iowa Institute for Oral Health Research, College of Dentistry, University of Iowa, Iowa City, IA, USA.
- Department of Family Dentistry, College of Dentistry, University of Iowa, Iowa City, IA, 52242, USA.
| |
Collapse
|
25
|
Panneerselvam K, Ishikawa S, Krishnan R, Sugimoto M. Salivary Metabolomics for Oral Cancer Detection: A Narrative Review. Metabolites 2022; 12:metabo12050436. [PMID: 35629940 PMCID: PMC9144467 DOI: 10.3390/metabo12050436] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 05/07/2022] [Accepted: 05/11/2022] [Indexed: 12/24/2022] Open
Abstract
The development of low- or non-invasive screening tests for cancer is crucial for early detection. Saliva is an ideal biofluid containing informative components for monitoring oral and systemic diseases. Metabolomics has frequently been used to identify and quantify numerous metabolites in saliva samples, which serve as novel biomarkers associated with various conditions, including cancers. This review summarizes recent applications of salivary metabolomics to biomarker discovery in oral cancers. We discuss the prevalence, epidemiologic characteristics, and risk factors of oral cancers, as well as the currently available screening programs, in India and Japan. These data imply that biomarker development by itself is not sufficient for cancer detection; current diagnostic methods and new technologies are both needed for efficient salivary metabolomics analysis. We also discuss the gap between biomarker discovery and nationwide screening for the early detection of oral cancer and its prevention.
Collapse
Affiliation(s)
- Karthika Panneerselvam
- Department of Oral Pathology and Microbiology, Karpaga Vinayaga Institute of Dental Sciences, GST Road, Chinna Kolambakkam, Palayanoor PO, Madurantagam Taluk, Kancheepuram 603308, Tamil Nadu, India;
| | - Shigeo Ishikawa
- Department of Dentistry, Oral and Maxillofacial Plastic and Reconstructive Surgery, Faculty of Medicine, Yamagata University, Yamagata 990-9585, Japan;
| | - Rajkumar Krishnan
- Department of Oral Pathology, SRM Dental College, Bharathi Salai, Ramapuram, Chennai 600089, Tamil Nadu, India;
| | - Masahiro Sugimoto
- Institute of Medical Research, Tokyo Medical University, Tokyo 160-0022, Japan
- Institute for Advanced Biosciences, Keio University, Yamagata 997-0811, Japan
- Correspondence: ; Tel.: +81-235-29-0528
| |
Collapse
|
26
|
Yang SY, Li SH, Liu JL, Sun XQ, Cen YY, Ren RY, Ying SC, Chen Y, Zhao ZH, Liao W. Histopathology-Based Diagnosis of Oral Squamous Cell Carcinoma Using Deep Learning. J Dent Res 2022; 101:1321-1327. [PMID: 35446176 DOI: 10.1177/00220345221089858] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
Abstract
Oral squamous cell carcinoma (OSCC) is prevalent around the world and is associated with poor prognosis. OSCC is typically diagnosed from tissue biopsy sections by pathologists who rely on their empirical experience. Deep learning models may improve the accuracy and speed of image classification, thus reducing human error and workload. Here we developed a custom-made deep learning model to assist pathologists in detecting OSCC from histopathology images. We collected and analyzed a total of 2,025 images, among which 1,925 images were included in the training set and 100 images were included in the testing set. Our model was able to automatically evaluate these images and arrive at a diagnosis with a sensitivity of 0.98, specificity of 0.92, positive predictive value of 0.924, negative predictive value of 0.978, and F1 score of 0.951. Using a subset of 100 images, we examined whether our model could improve the diagnostic performance of junior and senior pathologists. We found that junior pathologists were able to delineate OSCC in these images 6.26 min faster when assisted by the model than when working alone. When the clinicians were assisted by the model, their average F1 score improved from 0.9221 to 0.9566 in the case of junior pathologists and from 0.9361 to 0.9463 in the case of senior pathologists. Our findings indicate that deep learning can improve the accuracy and speed of OSCC diagnosis from histopathology images.
Collapse
Affiliation(s)
- S Y Yang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - S H Li
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - J L Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - X Q Sun
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Y Y Cen
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - R Y Ren
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - S C Ying
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Y Chen
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Z H Zhao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - W Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| |
Collapse
|
27
|
Li S, Liu J, Zhou Z, Zhou Z, Wu X, Li Y, Wang S, Liao W, Ying S, Zhao Z. Artificial intelligence for caries and periapical periodontitis detection. J Dent 2022; 122:104107. [PMID: 35341892 DOI: 10.1016/j.jdent.2022.104107] [Citation(s) in RCA: 63] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 03/16/2022] [Accepted: 03/22/2022] [Indexed: 02/05/2023] Open
Abstract
OBJECTIVES Periapical periodontitis and caries are common chronic oral diseases affecting most teenagers and adults worldwide. The purpose of this study was to develop an evaluation tool to automatically detect dental caries and periapical periodontitis on periapical radiographs using deep learning. METHODS A modified deep learning model was developed using a large dataset (4,129 images) with high-quality annotations to support the automatic detection of both dental caries and periapical periodontitis. The performance of the model was compared to the classification performance of dentists. RESULTS The deep learning model automatically distinguished dental caries with an F1-score of 0.829 and periapical periodontitis with an F1-score of 0.828. A comparison of model-only and expert-only performance showed that the accuracy of the fully automatic method was significantly higher than that of the junior dentists. With deep learning assistance, the experts not only reached higher diagnostic accuracy (average F1-scores of 0.7844 for dental caries and 0.8208 for periapical periodontitis, compared with expert-only performance) but also increased interobserver agreement from 0.585/0.590 to 0.726/0.713 for dental caries and from 0.623/0.563 to 0.752/0.740 for periapical periodontitis. CONCLUSIONS Based on the experimental results, deep learning can improve the accuracy and consistency of evaluating dental caries and periapical periodontitis on periapical radiographs. CLINICAL SIGNIFICANCE Deep learning models can improve accuracy and consistency and reduce the workload of dentists, making AI a powerful tool for clinical practice.
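Interobserver agreement of the kind reported above is typically quantified with Cohen's kappa; whether this study used that exact statistic is an assumption. A small illustrative computation on synthetic ratings (not the study's data) is shown below.

```python
# Illustrative Cohen's kappa for interobserver agreement on binary caries labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
observer_a = rng.integers(0, 2, 200)                        # 1 = caries present
observer_b = np.where(rng.random(200) < 0.8, observer_a,    # agrees ~80% of the time
                      1 - observer_a)

print("unassisted agreement (kappa):", round(cohen_kappa_score(observer_a, observer_b), 3))

# With AI assistance, both observers hypothetically lean toward the model output,
# which raises their pairwise agreement.
model_output = rng.integers(0, 2, 200)
assisted_a = np.where(rng.random(200) < 0.9, model_output, observer_a)
assisted_b = np.where(rng.random(200) < 0.9, model_output, observer_b)
print("assisted agreement (kappa):  ", round(cohen_kappa_score(assisted_a, assisted_b), 3))
```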
Collapse
Affiliation(s)
- Shihao Li
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, No. 24 South Section 1, Yihuan Road, Chengdu Sichuan, China, 610065.
| | - Jialing Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| | - Zirui Zhou
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| | - Zilin Zhou
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| | - Xiaoyue Wu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| | - Yazhen Li
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| | - Shida Wang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| | - Wen Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| | - Sancong Ying
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610041, China.
| | - Zhihe Zhao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, No. 17 People's South Road, 610041, Chengdu, Sichuan, China.
| |
Collapse
|
28
|
Artificial Intelligence for Classifying and Archiving Orthodontic Images. BIOMED RESEARCH INTERNATIONAL 2022; 2022:1473977. [PMID: 35127938 PMCID: PMC8813223 DOI: 10.1155/2022/1473977] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 12/17/2021] [Accepted: 01/04/2022] [Indexed: 01/03/2023]
Abstract
One of the main requirements for orthodontic treatment is continuous image acquisition. However, the conventional system of orthodontic image acquisition, which includes manual classification, archiving, and monitoring, is time-consuming and prone to errors caused by fatigue. This study aimed to develop an effective artificial intelligence tool for the automated classification and monitoring of orthodontic images. We comprehensively evaluated the ability of a deep learning model based on Deep hidden IDentity (DeepID) features to classify and archive photographs and radiographs, using a dataset of >14,000 images encompassing all 14 categories of orthodontic images. Our model automatically classified orthodontic images in an external dataset with an accuracy of 0.994 and a macro area under the curve of 1.00 in 0.08 min, 236 times faster than a human expert (18.93 min). Furthermore, human experts with deep learning assistance required an average of 8.10 min to classify the images in the external dataset, far less than the 18.93 min needed without assistance. We conclude that deep learning can improve the accuracy, speed, and efficiency of classification, archiving, and monitoring of orthodontic images.
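The macro area under the curve reported for the 14-category classifier can be computed in a one-vs-rest fashion; a brief, self-contained example with synthetic probabilities (not the study's data) is shown below.

```python
# Macro-averaged one-vs-rest AUC for a 14-category image classifier (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

NUM_CLASSES = 14
rng = np.random.default_rng(4)

y_true = rng.integers(0, NUM_CLASSES, 300)
# Synthetic probability matrix biased toward the true class, then normalized to sum to 1.
scores = rng.random((300, NUM_CLASSES))
scores[np.arange(300), y_true] += 2.0
scores /= scores.sum(axis=1, keepdims=True)

macro_auc = roc_auc_score(y_true, scores, multi_class="ovr", average="macro")
print(f"macro one-vs-rest AUC: {macro_auc:.3f}")
```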
Collapse
|
29
|
Alabi RO, Almangush A, Elmusrati M, Mäkitie AA. Deep Machine Learning for Oral Cancer: From Precise Diagnosis to Precision Medicine. FRONTIERS IN ORAL HEALTH 2022; 2:794248. [PMID: 35088057 PMCID: PMC8786902 DOI: 10.3389/froh.2021.794248] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 12/13/2021] [Indexed: 12/21/2022] Open
Abstract
Oral squamous cell carcinoma (OSCC) is one of the most prevalent cancers worldwide and its incidence is on the rise in many populations. The high incidence rate, late diagnosis, and improper treatment planning remain significant concerns. Diagnosis at an early stage is important for better prognosis, treatment, and survival. Despite recent improvements in the understanding of the molecular mechanisms, late diagnosis and the approach toward precision medicine for OSCC patients remain a challenge. To enhance precision medicine, deep machine learning techniques have been touted as a means to improve early detection and consequently reduce cancer-specific mortality and morbidity. These techniques have been reported to have made significant progress in extracting and analyzing vital information from medical imaging in recent years and therefore have the potential to assist in the early-stage detection of oral squamous cell carcinoma. Furthermore, automated image analysis can assist pathologists and clinicians in making informed decisions about cancer patients. This article discusses the technical background and algorithms of deep learning for OSCC. It examines the application of deep learning technology in cancer detection, image classification, segmentation and synthesis, and treatment planning. Finally, we discuss how this technique can assist in precision medicine and the future perspective of deep learning technology in oral squamous cell carcinoma.
Collapse
Affiliation(s)
- Rasheed Omobolaji Alabi
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, Vaasa, Finland
| | - Alhadi Almangush
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Pathology, University of Helsinki, Helsinki, Finland
- Institute of Biomedicine, Pathology, University of Turku, Turku, Finland
| | - Mohammed Elmusrati
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, Vaasa, Finland
| | - Antti A. Mäkitie
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Otorhinolaryngology – Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Division of Ear, Nose and Throat Diseases, Department of Clinical Sciences, Intervention and Technology, Karolinska Institute and Karolinska University Hospital, Stockholm, Sweden
| |
Collapse
|
30
|
Bernauer SA, Zitzmann NU, Joda T. The Use and Performance of Artificial Intelligence in Prosthodontics: A Systematic Review. SENSORS 2021; 21:s21196628. [PMID: 34640948 PMCID: PMC8512216 DOI: 10.3390/s21196628] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 10/01/2021] [Accepted: 10/01/2021] [Indexed: 12/15/2022]
Abstract
(1) Background: The rapid pace of digital development in everyday life is also reflected in dentistry, including the emergence of the first systems based on artificial intelligence (AI). This systematic review focused on the recent scientific literature and provides an overview of the application of AI in the dental discipline of prosthodontics. (2) Method: According to a modified PICO strategy, an electronic (MEDLINE, EMBASE, CENTRAL) and manual search up to 30 June 2021 was carried out for literature published in the last five years reporting the use of AI in the field of prosthodontics. (3) Results: 560 titles were screened, of which 30 abstracts and 16 full texts were selected for further review. Seven studies met the inclusion criteria and were analyzed. Most of the identified studies reported the training and application of an AI system (n = 6), while one explored the function of an AI system intrinsic to CAD software (n = 1). (4) Conclusions: Although the number of included studies was relatively low, their findings represent the latest AI developments in prosthodontics, demonstrating AI's application in automated diagnostics, as a predictive measure, and as a classification or identification tool. In the future, AI technologies will likely be used to collect, process, and organize patient-related datasets to provide patient-centered, individualized dental treatment.
Collapse
|