1.
Fang EP, Liew DJ, Chang YC, Fang CY. Enhancing wisdom teeth detection in panoramic radiographs using multi-channel convolutional neural network with clinical knowledge. Comput Biol Med 2025; 192:110368. [PMID: 40381475] [DOI: 10.1016/j.compbiomed.2025.110368]
Abstract
This study presents a novel artificial intelligence approach for detecting wisdom teeth in panoramic radiographs using a multi-channel convolutional neural network (CNN). First, a curated dataset of annotated panoramic dental images was collected, with bounding box annotations provided by a senior oral and maxillofacial surgeon. Each image was then preprocessed and split into three input channels (full, left-side, and right-side views) to replicate the diagnostic workflow of dental professionals. These channels were simultaneously fed into a classification-based CNN model designed to predict the presence or absence of wisdom teeth in each of the four quadrants. Unlike traditional segmentation or object detection approaches, our method avoids pixel-level labeling and offers a simpler, faster pipeline with reduced annotation overhead. The proposed model achieved an accuracy of 82.46%, with an AUROC of 0.8866 and an AUPRC of 0.8542, demonstrating reliable detection performance across diverse image conditions. This system supports consistent and objective diagnosis, particularly benefiting less experienced practitioners and enabling efficient screening in clinical settings.
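The three-view input scheme described above can be sketched as a preprocessing step. A minimal NumPy illustration, assuming simple half-width crops zero-padded to a common size; the paper's actual crop boundaries and channel encoding are not given here, so this is a hypothetical reconstruction:

```python
import numpy as np

def make_three_channel_views(image: np.ndarray) -> dict:
    """Split a panoramic radiograph into full, left-side, and right-side views.

    Hypothetical sketch of the paper's multi-channel input: each half is
    zero-padded back to the full width so all three inputs share the same
    spatial dimensions before being fed to the CNN branches.
    """
    h, w = image.shape
    mid = w // 2
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    left[:, :mid] = image[:, :mid]    # keep left half, blank the rest
    right[:, mid:] = image[:, mid:]   # keep right half, blank the rest
    return {"full": image, "left": left, "right": right}
```

The zero-padding choice keeps all three channels shape-compatible; an alternative would be to resize each crop, which changes aspect ratio.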
Affiliation(s)
- Emma Peng Fang
- Department of Psychology and Linguistics, University of British Columbia, Vancouver, Canada
- Di-Jie Liew
- Graduate Institute of Data Science, Taipei Medical University, New Taipei City, Taiwan
- Yung-Chun Chang
- Graduate Institute of Data Science, Taipei Medical University, New Taipei City, Taiwan; Clinical Big Data Research Center, Taipei Medical University Hospital, Taipei City, Taiwan
- Chih-Yuan Fang
- Department of Oral and Maxillofacial Surgery, Wan Fang Hospital, Taipei City, Taiwan; School of Dentistry, College of Oral Medicine, Taipei Medical University, Taipei City, Taiwan.
2.
Ver Berne J, Saadi SB, Oliveira-Santos N, Marinho-Vieira LE, Jacobs R. Automated classification of panoramic radiographs with inflammatory periapical lesions using a CNN-LSTM architecture. J Dent 2025; 156:105688. [PMID: 40101853] [DOI: 10.1016/j.jdent.2025.105688]
Abstract
OBJECTIVES Considering that Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network approaches have shown promising image classification performance, the aim of this study was to compare the performance of novel combined CNN-LSTM architectures with a classic CNN for classification of panoramic radiographs with inflammatory periapical lesions. METHODS A dataset of 356 panoramic radiographs with periapical lesions and 769 control images was retrospectively collected and divided into training, validation, and testing sets. Next, four different models were constructed: a classic CNN, a classic LSTM, a cascaded CNN-LSTM, and a parallel CNN-LSTM architecture. In each model, the CNN took the full panoramic radiograph as input while the LSTM network ran on the images divided into 6 sequential patches. Sensitivity, specificity, and Area Under the Receiver Operating Characteristic curve (AUC) were calculated. McNemar's test compared the sensitivity and specificity between the classic CNN and the other models. RESULTS The parallel CNN-LSTM had a significantly higher sensitivity than the classic CNN for detecting periapical lesions (95% vs. 81%, 95% confidence interval for the difference = 6%-22%, P = 0.002), while also exhibiting the best overall performance of the four models [AUC = 96% vs. 90% (classic CNN), 92% (classic LSTM), and 94% (cascaded CNN-LSTM)]. CONCLUSIONS The parallel CNN-LSTM architecture outperformed the classic CNN for classification of panoramic radiographs with inflammatory periapical lesions. CLINICAL SIGNIFICANCE Combining CNN and LSTM models improves the classification of panoramic radiographs with and without inflammatory periapical lesions.
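The LSTM branch's input described above (the radiograph divided into 6 sequential patches) can be sketched as follows; the vertical-strip orientation and equal-width split are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def sequential_patches(image: np.ndarray, n_patches: int = 6) -> list:
    """Cut a panoramic radiograph into n_patches vertical strips, left to
    right, to serve as the LSTM's input sequence. A sketch of the idea;
    the actual patching scheme (orientation, overlap) may differ.
    """
    h, w = image.shape
    bounds = np.linspace(0, w, n_patches + 1, dtype=int)  # strip boundaries
    return [image[:, bounds[i]:bounds[i + 1]] for i in range(n_patches)]
```

Concatenating the returned strips along the width axis reconstructs the original image, so no pixels are lost or duplicated.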
Affiliation(s)
- Jonas Ver Berne
- OMFS-IMPATH Research Group, Department of Imaging & Pathology, Catholic University Leuven, Belgium; Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Belgium
- Soroush Baseri Saadi
- Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Belgium
- Nicolly Oliveira-Santos
- OMFS-IMPATH Research Group, Department of Imaging & Pathology, Catholic University Leuven, Belgium; Department of Oral Surgery & Stomatology, Division of Oral Diagnostic Sciences, School of Dental Medicine, University of Bern, Switzerland
- Luiz Eduardo Marinho-Vieira
- OMFS-IMPATH Research Group, Department of Imaging & Pathology, Catholic University Leuven, Belgium; Division of Oral Radiology, Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Brazil
- Reinhilde Jacobs
- OMFS-IMPATH Research Group, Department of Imaging & Pathology, Catholic University Leuven, Belgium; Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Sweden.
3.
Shoorgashti R, Alimohammadi M, Baghizadeh S, Radmard B, Ebrahimi H, Lesan S. Artificial Intelligence Models Accuracy for Odontogenic Keratocyst Detection From Panoramic View Radiographs: A Systematic Review and Meta-Analysis. Health Sci Rep 2025; 8:e70614. [PMID: 40165928] [PMCID: PMC11956212] [DOI: 10.1002/hsr2.70614]
Abstract
Background and Aims Odontogenic keratocyst (OKC) is a radiolucent jaw lesion often mistaken for similar conditions such as ameloblastomas on panoramic radiographs. Accurate diagnosis is vital for effective management, but manual image interpretation can be inconsistent. While deep learning algorithms have shown promise in improving diagnostic accuracy for OKCs, their performance across studies is still unclear. This systematic review and meta-analysis aimed to evaluate the diagnostic accuracy of AI models in detecting OKC from panoramic radiographs. Methods A systematic search was performed across five databases. Studies were included if they examined the PICO question of whether AI models (I) could improve the diagnostic accuracy (O) of OKC in panoramic radiographs (P) compared to reference standards (C). Key performance metrics, including sensitivity, specificity, accuracy, and area under the curve (AUC), were extracted and pooled using random-effects models. Meta-regression and subgroup analyses were conducted to identify sources of heterogeneity. Publication bias was evaluated through funnel plots and Egger's test. Results Eight studies were included in the meta-analysis. The pooled sensitivity across all studies was 83.66% (95% CI: 73.75%-93.57%) and specificity was 82.89% (95% CI: 70.31%-95.47%). YOLO-based models demonstrated superior diagnostic performance, with a sensitivity of 96.4% and specificity of 96.0%, compared with other architectures. Meta-regression analysis indicated that model architecture was a significant predictor of diagnostic performance, accounting for a significant portion of the observed heterogeneity. However, the analysis also revealed publication bias and high variability across studies (Egger's test, p = 0.042). Conclusion AI models, particularly YOLO-based architectures, can improve the diagnostic accuracy of OKCs in panoramic radiographs. While AI shows strong capabilities in simple cases, it should complement, not replace, human expertise, especially in complex situations.
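The random-effects pooling used in the Methods can be illustrated with a minimal DerSimonian-Laird implementation; this is a generic sketch of the standard estimator, not the review's actual analysis code, and the inputs are illustrative:

```python
import math

def random_effects_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-study estimates
    (e.g. sensitivities). Returns (pooled estimate, 95% CI)."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                   # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When between-study heterogeneity is zero (tau2 = 0) this reduces to the inverse-variance fixed-effect pool; larger heterogeneity widens the confidence interval, which is why the review's pooled CIs are so wide.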
Affiliation(s)
- Reyhaneh Shoorgashti
- Department of Oral and Maxillofacial Medicine, School of Dentistry, Islamic Azad University of Medical Sciences, Tehran, Iran
- Sana Baghizadeh
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Bahareh Radmard
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hooman Ebrahimi
- Department of Oral and Maxillofacial Medicine, School of Dentistry, Islamic Azad University of Medical Sciences, Tehran, Iran
- Simin Lesan
- Department of Oral and Maxillofacial Medicine, School of Dentistry, Islamic Azad University of Medical Sciences, Tehran, Iran
4.
Chen W, Dhawan M, Liu J, Ing D, Mehta K, Tran D, Lawrence D, Ganhewa M, Cirillo N. Mapping the Use of Artificial Intelligence-Based Image Analysis for Clinical Decision-Making in Dentistry: A Scoping Review. Clin Exp Dent Res 2024; 10:e70035. [PMID: 39600121] [PMCID: PMC11599430] [DOI: 10.1002/cre2.70035]
Abstract
OBJECTIVES Artificial intelligence (AI) is an emerging field in dentistry and is gradually being integrated into clinical dental practice. The aims of this scoping review were to investigate the application of AI in image analysis for decision-making in clinical dentistry and to identify trends and research gaps in the current literature. MATERIAL AND METHODS This review followed the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). An electronic literature search was performed through PubMed and Scopus. After removing duplicates, a preliminary screening based on titles and abstracts was performed. A full-text review and analysis were performed according to predefined inclusion criteria, and data were extracted from eligible articles. RESULTS Of the 1334 articles returned, 276 met the inclusion criteria (comprising 601,122 images in total) and were included in the qualitative synthesis. Most of the included studies utilized convolutional neural networks (CNNs) on dental radiographs such as orthopantomograms (OPGs) and intraoral radiographs (bitewings and periapicals). AI was applied across all fields of dentistry, particularly oral medicine, oral surgery, and orthodontics, for direct clinical inference and segmentation. AI-based image analysis was used in several components of the clinical decision-making process, including diagnosis, detection or classification, prediction, and management. CONCLUSIONS A variety of machine learning and deep learning techniques are being used for dental image analysis to assist clinicians in making accurate diagnoses and choosing appropriate interventions in a timely manner.
Affiliation(s)
- Wei Chen
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Monisha Dhawan
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Jonathan Liu
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Damie Ing
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Kruti Mehta
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Daniel Tran
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Max Ganhewa
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
- Nicola Cirillo
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
5.
Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:641-655. [PMID: 38637235] [DOI: 10.1016/j.oooo.2023.12.790]
Abstract
BACKGROUND Artificial intelligence (AI) technology has been increasingly developed in oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms across different dentomaxillofacial imaging modalities. STUDY DESIGN A systematic search of the PubMed and Scopus databases was performed. The search strategy combined the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any mismatch was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, a total of 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. CONCLUSION Despite the AI models' limitations, they showed promising results. Further studies are needed to explore specific applications and real-world scenarios before confidently integrating these models into dental practice.
Affiliation(s)
- Serlie Hartoonian
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Matine Hosseini
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Iman Yousefi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mina Mahdian
- Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook University, Stony Brook, NY, USA
- Mitra Ghazizadeh Ahsaie
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
6.
Kwon KW, Kim J, Kang D. Automated detection of maxillary sinus opacifications compatible with sinusitis from CT images. Dentomaxillofac Radiol 2024; 53:549-557. [PMID: 39107903] [DOI: 10.1093/dmfr/twae042]
Abstract
BACKGROUND Sinusitis is a commonly encountered clinical condition that imposes a considerable burden on healthcare systems. A significant number of maxillary sinus opacifications are diagnosed as sinusitis, often without precise differentiation between cystic formations and inflammatory sinusitis, resulting in inappropriate clinical treatment. This study aims to improve diagnostic accuracy by investigating the feasibility of differentiating maxillary sinusitis, retention cysts, and normal sinuses. METHODS We developed a deep learning-based automatic detection model to diagnose maxillary sinusitis using ostiomeatal unit CT images. The 1080 randomly selected coronal-view CT images contained 2158 maxillary sinuses, comprising 1138 normal sinuses, 366 cysts, and 654 sinusitis cases based on radiographic findings, and were divided into training (n = 648 CT images), validation (n = 216), and test (n = 216) sets. We utilized a You Only Look Once (YOLO)-based model for object detection, enhanced by transfer learning. To address the insufficiency of training data, various data augmentation techniques were adopted, improving the model's robustness. RESULTS The trained YOLO version 8 nano model achieved an overall precision of 97.1%, with the following class precisions on the test set: normal = 96.9%, cyst = 95.2%, and sinusitis = 99.2%. The average F1-score was 95.4%, highest for normal sinuses, followed by sinusitis and cysts. When performance was evaluated by difficulty level, precision decreased to 92.4% on the challenging test dataset. CONCLUSIONS The developed model is feasible for assisting clinicians in screening maxillary sinus lesions.
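The per-class precision and F1-score figures reported above follow the standard definitions, which can be computed directly from detection counts; the counts in the usage below are illustrative, not the study's data:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Standard detection metrics from true-positive, false-positive, and
    false-negative counts, of the kind reported per class for the YOLO model."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, 90 true positives with 10 false positives and 10 false negatives yields precision, recall, and F1 all equal to 0.9; the guards avoid division by zero for empty classes.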
Affiliation(s)
- Kyung Won Kwon
- Department of Otolaryngology, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon 51353, Republic of Korea
- Jihun Kim
- School of Electronic and Electrical Engineering, Hongik University, Seoul 04066, Republic of Korea
- Dongwoo Kang
- School of Electronic and Electrical Engineering, Hongik University, Seoul 04066, Republic of Korea
7.
Mureșanu S, Hedeșiu M, Iacob L, Eftimie R, Olariu E, Dinu C, Jacobs R, on behalf of Team Project Group. Automating Dental Condition Detection on Panoramic Radiographs: Challenges, Pitfalls, and Opportunities. Diagnostics (Basel) 2024; 14:2336. [PMID: 39451659] [PMCID: PMC11507083] [DOI: 10.3390/diagnostics14202336]
Abstract
Background/Objectives: The integration of AI into dentistry holds promise for improving diagnostic workflows, particularly in the detection of dental pathologies and pre-radiotherapy screening for head and neck cancer patients. This study aimed to develop and validate an AI model for detecting various dental conditions, with a focus on identifying teeth at risk prior to radiotherapy. Methods: A YOLOv8 model was trained on a dataset of 1628 annotated panoramic radiographs and externally validated on 180 radiographs from multiple centers. The model was designed to detect a variety of dental conditions, including periapical lesions, impacted teeth, root fragments, prosthetic restorations, and orthodontic devices. Results: The model showed strong performance in detecting implants, endodontic treatments, and surgical devices, with precision and recall values exceeding 0.8 for several conditions. However, performance declined during external validation, highlighting the need for improvements in generalizability. Conclusions: YOLOv8 demonstrated robust detection capabilities for several dental conditions, especially on the training data. However, further refinement is needed to enhance generalizability to external datasets and improve performance for conditions like periapical lesions and bone loss.
Affiliation(s)
- Sorana Mureșanu
- Department of Oral and Maxillofacial Surgery and Radiology, Iuliu Hațieganu University of Medicine and Pharmacy, 32 Clinicilor Street, 400006 Cluj-Napoca, Romania
- Mihaela Hedeșiu
- Department of Oral and Maxillofacial Surgery and Radiology, Iuliu Hațieganu University of Medicine and Pharmacy, 32 Clinicilor Street, 400006 Cluj-Napoca, Romania
- Liviu Iacob
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Radu Eftimie
- Iuliu Hațieganu University of Medicine and Pharmacy, 32 Clinicilor Street, 400006 Cluj-Napoca, Romania
- Eliza Olariu
- Department of Electrical Engineering, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Cristian Dinu
- Department of Oral and Maxillofacial Surgery and Radiology, Iuliu Hațieganu University of Medicine and Pharmacy, 32 Clinicilor Street, 400006 Cluj-Napoca, Romania
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, Katholieke Universiteit Leuven, 3000 Louvain, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Louvain, Belgium
- Department of Dental Medicine, Karolinska Institute, 171 77 Stockholm, Sweden
8.
Fedato Tobias RS, Teodoro AB, Evangelista K, Leite AF, Valladares-Neto J, de Freitas Silva BS, Yamamoto-Silva FP, Almeida FT, Silva MAG. Diagnostic capability of artificial intelligence tools for detecting and classifying odontogenic cysts and tumors: a systematic review and meta-analysis. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:414-426. [PMID: 38845306] [DOI: 10.1016/j.oooo.2024.03.004]
Abstract
OBJECTIVE To evaluate the diagnostic capability of artificial intelligence (AI) for detecting and classifying odontogenic cysts and tumors, with special emphasis on odontogenic keratocyst (OKC) and ameloblastoma. STUDY DESIGN Nine electronic databases and the gray literature were examined. Human-based studies using AI algorithms to detect or classify odontogenic cysts and tumors on panoramic radiographs or cone beam computed tomography (CBCT) were included. Diagnostic tests were evaluated, and a meta-analysis was performed for classifying OKCs and ameloblastomas. Heterogeneity, risk of bias, and certainty of evidence were evaluated. RESULTS Twelve studies concluded that AI is a promising tool for the detection and/or classification of lesions, producing high diagnostic test values. Three articles assessed the sensitivity of convolutional neural networks in classifying similar lesions, specifically OKC and ameloblastoma, using panoramic radiographs; the accuracy was 0.893 (95% CI 0.832-0.954). AI applied to CBCT produced superior accuracy, based on only 4 studies. The results revealed heterogeneity in the models used, variations in imaging examinations, and discrepancies in the presentation of metrics. CONCLUSION AI tools exhibited a relatively high level of accuracy in detecting and classifying OKC and ameloblastoma. Panoramic radiography appears to be an accurate method for AI-based classification of these lesions, albeit with a low level of certainty. The accuracy of CBCT model data appears to be high and promising, although with limited available data.
Affiliation(s)
- Ana Beatriz Teodoro
- Graduate Program, School of Dentistry, Federal University of Goias, Goiânia, Goiás, Brazil
- Karine Evangelista
- Department of Orthodontics, School of Dentistry, Federal University of Goias, Goiânia, Goiás, Brazil
- André Ferreira Leite
- Oral and Maxillofacial Radiology, Department of Dentistry, Faculty of Health Sciences, Brasília-DF, Brazil
- José Valladares-Neto
- Department of Orthodontics, School of Dentistry, Federal University of Goias, Goiânia, Goiás, Brazil
- Fabiana T Almeida
- Oral and Maxillofacial Radiology, Faculty of Medicine and Dentistry, University of Alberta, Canada
9.
Shrivastava PK, Hasan S, Abid L, Injety R, Shrivastav AK, Sybil D. Accuracy of machine learning in the diagnosis of odontogenic cysts and tumors: a systematic review and meta-analysis. Oral Radiol 2024; 40:342-356. [PMID: 38530559] [DOI: 10.1007/s11282-024-00745-7]
Abstract
BACKGROUND The recent impact of artificial intelligence on diagnostic services has been enormous. Machine learning tools offer an innovative alternative for radiographically diagnosing cysts and tumors that pose certain challenges due to near-similar presentation, anatomical variations, and superimposition. It is crucial that the performance of these models be evaluated for clinical applicability in diagnosing cysts and tumors. METHODS A comprehensive literature search was carried out on eminent databases for studies published between January 2015 and December 2022. Studies utilizing machine learning models in the diagnosis of odontogenic cysts or tumors using orthopantomograms (OPG) or cone beam computed tomographic (CBCT) images were included. The QUADAS-2 tool was used to assess the risk of bias and applicability concerns. Meta-analysis was performed for studies reporting sufficient performance metrics, separately for OPG and CBCT. RESULTS Sixteen studies were included in the qualitative synthesis, covering a total of 10,872 odontogenic cysts and tumors. The sensitivity and specificity of machine learning in diagnosing cysts and tumors through OPG were 0.83 (95% CI 0.81-0.85) and 0.82 (95% CI 0.81-0.83), respectively. Studies utilizing CBCT noted a sensitivity of 0.88 (95% CI 0.87-0.88) and specificity of 0.88 (95% CI 0.87-0.89). The highest classification accuracy was 100%, noted for a Support Vector Machine classifier. CONCLUSION The results of the present review favor the use of machine learning models as a clinical adjunct in the radiographic diagnosis of odontogenic cysts and tumors, provided they undergo robust training with a large dataset. However, the arduous process, investment, and certain ethical concerns associated with total dependence on technology must be taken into account. Standardized reporting of outcomes for diagnostic studies utilizing machine learning methods is recommended to ensure homogeneity in assessment criteria, facilitate comparison between studies, and promote transparency in research findings.
Affiliation(s)
- Shamimul Hasan
- Department of Oral Medicine and Radiology, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
- Laraib Abid
- Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
- Ranjit Injety
- Department of Neurology, Christian Medical College & Hospital, Ludhiana, Punjab, India
- Ayush Kumar Shrivastav
- Computer Science and Engineering, Centre for Development of Advanced Computing, Noida, Uttar Pradesh, India
- Deborah Sybil
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India.
10.
Dai F, Liu Q, Guo Y, Xie R, Wu J, Deng T, Zhu H, Deng L, Song L. Convolutional neural networks combined with classification algorithms for the diagnosis of periodontitis. Oral Radiol 2024; 40:357-366. [PMID: 38393548] [DOI: 10.1007/s11282-024-00739-5]
Abstract
OBJECTIVES We aim to develop a deep learning model based on a convolutional neural network (CNN) combined with a classification algorithm (CA) to assist dentists in quickly and accurately diagnosing the stage of periodontitis. MATERIALS AND METHODS Periapical radiographs (PERs) and clinical data were collected. CNNs including Alexnet, VGG16, and ResNet18 were trained on PERs to establish PER-CNN models for periodontal bone loss (PBL) versus no PBL. CAs including random forest (RF), support vector machine (SVM), naive Bayes (NB), logistic regression (LR), and k-nearest neighbor (KNN) were added to the PER-CNN model to classify control, stage I, stage II, and stage III/IV periodontitis. A heat map was produced using a gradient-weighted class activation mapping method to visualize the regions of interest of the PER-Alexnet model. Clustering analysis was performed based on the ten PER-CNN scores and the clinical characteristics. RESULTS The accuracies of the PER-Alexnet and PER-VGG16 models, which showed the higher performance, were 0.872 and 0.853, respectively. The accuracy of the PER-Alexnet + RF model, which showed the highest performance, was 0.968, 0.960, 0.835, and 0.842 for control, stage I, stage II, and stage III/IV, respectively. The heat map showed that the regions of interest predicted by the model were periodontal bone lesions. Based on the PER-Alexnet scores, we found that age and smoking were significantly related to periodontitis. CONCLUSION The PER-Alexnet + RF model achieved high performance for whole-case periodontal diagnosis. CNN models combined with CAs can assist dentists in quickly and accurately diagnosing the stage of periodontitis.
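The two-stage idea of feeding CNN-derived scores into a classification algorithm can be sketched with a deliberately small stand-in classifier. k-nearest neighbor is one of the CAs the study tested (its best model used a random forest); the score vectors and labels below are hypothetical, chosen only to keep the sketch self-contained:

```python
from collections import Counter

def knn_predict(train_scores, train_labels, x, k=3):
    """Minimal k-nearest-neighbor classifier over CNN-derived score
    vectors: find the k training vectors closest to x (squared Euclidean
    distance) and return the majority label among them."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(s, x)), lbl)
        for s, lbl in zip(train_scores, train_labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]
```

The same interface generalizes to any CA: the CNN reduces each radiograph to a low-dimensional score vector, and the downstream classifier only ever sees those vectors, never the raw pixels.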
Affiliation(s)
- Fang Dai
- Center of Stomatology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, No.1, Minde Road, Nanchang, 330000, Jiangxi, China
- The Institute of Periodontal Disease, Nanchang University, Nanchang, China
- JXHC Key Laboratory of Periodontology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Qiangdong Liu
- Center of Stomatology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, No.1, Minde Road, Nanchang, 330000, Jiangxi, China
- The Second Clinical Medical School, Nanchang University, Nanchang, China
- The Institute of Periodontal Disease, Nanchang University, Nanchang, China
- JXHC Key Laboratory of Periodontology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Yuchen Guo
- The Second Clinical Medical School, Nanchang University, Nanchang, China
- Ruixiang Xie
- School of Life Sciences, Nanchang University, Nanchang, China
- Jingting Wu
- Center of Stomatology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, No.1, Minde Road, Nanchang, 330000, Jiangxi, China
- The Institute of Periodontal Disease, Nanchang University, Nanchang, China
- JXHC Key Laboratory of Periodontology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Tian Deng
- Center of Stomatology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, No.1, Minde Road, Nanchang, 330000, Jiangxi, China
- The Institute of Periodontal Disease, Nanchang University, Nanchang, China
- JXHC Key Laboratory of Periodontology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Hongbiao Zhu
- Center of Stomatology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, No.1, Minde Road, Nanchang, 330000, Jiangxi, China
- The Institute of Periodontal Disease, Nanchang University, Nanchang, China
- JXHC Key Laboratory of Periodontology, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- Libin Deng
- School of Public Health, Nanchang University, No.1299, Xuefu Avenue, Nanchang, 330000, Jiangxi, China.
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China.
- The Institute of Periodontal Disease, Nanchang University, Nanchang, China.
- JXHC Key Laboratory of Periodontology, The Second Affiliated Hospital of Nanchang University, Nanchang, China.
- Li Song
- Center of Stomatology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, No.1, Minde Road, Nanchang, 330000, Jiangxi, China.
- The Institute of Periodontal Disease, Nanchang University, Nanchang, China.
- JXHC Key Laboratory of Periodontology, The Second Affiliated Hospital of Nanchang University, Nanchang, China.
| |
Collapse
|
11
|
Lee HS, Yang S, Han JY, Kang JH, Kim JE, Huh KH, Yi WJ, Heo MS, Lee SS. Automatic detection and classification of nasopalatine duct cyst and periapical cyst on panoramic radiographs using deep convolutional neural networks. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:184-195. [PMID: 38158267] [DOI: 10.1016/j.oooo.2023.09.012] [Citations in RCA: 5]
Abstract
OBJECTIVE The aim of this study was to evaluate a deep convolutional neural network (DCNN) method for the detection and classification of nasopalatine duct cysts (NPDC) and periapical cysts (PAC) on panoramic radiographs. STUDY DESIGN A total of 1,209 panoramic radiographs with 606 NPDC and 603 PAC were labeled with bounding boxes and divided into training, validation, and test sets at an 8:1:1 ratio. The networks used were EfficientDet-D3, Faster R-CNN, YOLO v5, RetinaNet, and SSD, and mean average precision (mAP) was used to assess performance. Sixty images with no lesion in the anterior maxilla were then added to the test set, which was evaluated both by 2 dentists with no specialty training in radiology (GPs) and by EfficientDet-D3, and the performances were compared. RESULTS The mAP for each DCNN was EfficientDet-D3 93.8%, Faster R-CNN 90.8%, YOLO v5 89.5%, RetinaNet 79.4%, and SSD 60.9%. The classification performance of EfficientDet-D3 was higher than that of the GPs, with accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 94.4%, 94.4%, 97.2%, 94.6%, and 97.2%, respectively. CONCLUSIONS The proposed method achieved high performance for the detection and classification of NPDC and PAC compared with the GPs and presents promising prospects for clinical application.
Affiliation(s)
- Han-Sol Lee
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Su Yang
  - Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Ji-Yong Han
  - Interdisciplinary Program in Bioengineering, College of Engineering, Seoul National University, Seoul, South Korea
- Ju-Hee Kang
  - Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, South Korea
- Jo-Eun Kim
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Kyung-Hoe Huh
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Won-Jin Yi
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
  - Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
  - Interdisciplinary Program in Bioengineering, College of Engineering, Seoul National University, Seoul, South Korea
- Min-Suk Heo
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea

12
Shi YJ, Li JP, Wang Y, Ma RH, Wang YL, Guo Y, Li G. Deep learning in the diagnosis for cystic lesions of the jaws: a review of recent progress. Dentomaxillofac Radiol 2024; 53:271-280. [PMID: 38814810] [PMCID: PMC11211683] [DOI: 10.1093/dmfr/twae022] [Citations in RCA: 0]
Abstract
Cystic lesions of the gnathic bones present challenges in differential diagnosis. In recent years, artificial intelligence (AI), represented by deep learning (DL), has developed rapidly and emerged in the field of dental and maxillofacial radiology (DMFR). Dental radiography provides a rich resource for the study of diagnostic analysis methods for cystic lesions of the jaws and has attracted many researchers. The aim of the current study was to investigate the diagnostic performance of DL for cystic lesions of the jaws. Online searches were conducted in the Google Scholar, PubMed, and IEEE Xplore databases up to September 2023, with subsequent manual screening for confirmation. The initial search yielded 1862 titles, and 44 studies were ultimately included. All studies used DL methods or tools for the identification of a variable number of maxillofacial cysts, and the performance of the different models varied. Although most of the reviewed studies demonstrated that DL methods have better discriminative performance than clinicians, further development is still needed before routine clinical implementation because of several challenges and limitations, such as the lack of model interpretability and of multicentre data validation. Considering these limitations and challenges, future studies on the differential diagnosis of cystic lesions of the jaws should follow actual clinical diagnostic scenarios to coordinate study design and enhance the impact of AI in the diagnosis of oral and maxillofacial diseases.
Affiliation(s)
- Yu-Jie Shi
  - School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Ju-Peng Li
  - School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Yue Wang
  - School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Ruo-Han Ma
  - Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, 100081, China
- Yan-Lin Wang
  - Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, 100081, China
- Yong Guo
  - School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Gang Li
  - Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, 100081, China

13
Motmaen I, Xie K, Schönbrunn L, Berens J, Grunert K, Plum AM, Raufeisen J, Ferreira A, Hermans A, Egger J, Hölzle F, Truhn D, Puladi B. Insights into Predicting Tooth Extraction from Panoramic Dental Images: Artificial Intelligence vs. Dentists. Clin Oral Investig 2024; 28:381. [PMID: 38886242] [PMCID: PMC11182848] [DOI: 10.1007/s00784-024-05781-5] [Citations in RCA: 0]
Abstract
OBJECTIVES Tooth extraction is one of the most frequently performed medical procedures. The indication is based on a combination of clinical and radiological examination and individual patient parameters and should be made with great care. However, determining whether a tooth should be extracted is not always a straightforward decision, and visual and cognitive pitfalls in the analysis of radiographs may lead to incorrect decisions. Artificial intelligence (AI) could be used as a decision support tool to provide a score of tooth extractability. MATERIAL AND METHODS Using 26,956 single-tooth images from 1,184 panoramic radiographs (PANs), we trained a ResNet50 network to classify teeth as either extraction-worthy or preservable. For this purpose, teeth were cropped with different margins from PANs and annotated. The usefulness of the AI-based classification, as well as that of dentists, was evaluated on a test dataset. In addition, the explainability of the best AI model was visualized via class activation mapping using CAMERAS. RESULTS The ROC-AUC of the best AI model for discriminating teeth worthy of preservation was 0.901 with a 2% margin on dental images. In contrast, the average ROC-AUC for dentists was only 0.797. With a tooth extraction prevalence of 19.1%, the AI model's PR-AUC was 0.749, while the dentists' evaluation reached only 0.589. CONCLUSION AI models outperform dentists/specialists in predicting tooth extraction based solely on X-ray images, and AI performance improves with increasing contextual information. CLINICAL RELEVANCE AI could help monitor at-risk teeth and reduce errors in indications for extractions.
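The ROC-AUC figures quoted in this abstract can be reproduced for any set of scores with the rank-based (Mann-Whitney U) formulation of the AUC; a minimal illustrative sketch, with purely hypothetical toy labels and scores:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the rank-sum (Mann-Whitney U) formulation; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of positive/negative pairs in which the positive outscores the negative.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two positives, two negatives; one positive is ranked below a negative,
# so 3 of the 4 pairs are ordered correctly.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

This pairwise definition is equivalent to the area under the ROC curve and makes clear why an AUC of 0.5 corresponds to chance-level ranking.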
Affiliation(s)
- Ila Motmaen
  - Department of Oral and Maxillofacial Surgery, University Hospital Knappschaftskrankenhaus Bochum, 44892, Bochum, Germany
- Kunpeng Xie
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Leon Schönbrunn
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Jeff Berens
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Kim Grunert
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Anna Maria Plum
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Johannes Raufeisen
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- André Ferreira
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Centre Algoritmi / LASI, University of Minho, 4710-057, Braga, Portugal
  - Institute for Artificial Intelligence in Medicine, Essen University Hospital, 45147, Essen, Germany
- Alexander Hermans
  - Visual Computing Institute, Computer Science and Natural Sciences, RWTH Aachen University, 52074, Aachen, Germany
  - Department of Diagnostic and Interventional Radiology, RWTH Aachen University, 52074, Aachen, Germany
- Jan Egger
  - Institute for Artificial Intelligence in Medicine, Essen University Hospital, 45147, Essen, Germany
- Frank Hölzle
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Daniel Truhn
  - Department of Diagnostic and Interventional Radiology, RWTH Aachen University, 52074, Aachen, Germany
- Behrus Puladi
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany

14
Kise Y, Kuwada C, Mori M, Fukuda M, Ariji Y, Ariji E. Deep learning system for distinguishing between nasopalatine duct cysts and radicular cysts arising in the midline region of the anterior maxilla on panoramic radiographs. Imaging Sci Dent 2024; 54:33-41. [PMID: 38571775] [PMCID: PMC10985522] [DOI: 10.5624/isd.20230169] [Citations in RCA: 0]
Abstract
Purpose The aims of this study were to create a deep learning model to distinguish between nasopalatine duct cysts (NDCs), radicular cysts, and no-lesion (normal) cases in the midline region of the anterior maxilla on panoramic radiographs, and to compare its performance with that of dental residents. Materials and Methods One hundred patients with a confirmed diagnosis of NDC (53 men, 47 women; average age, 44.6±16.5 years), 100 with radicular cysts (49 men, 51 women; average age, 47.5±16.4 years), and 100 normal controls (56 men, 44 women; average age, 34.4±14.6 years) were enrolled in this study. Cases were randomly assigned to the training dataset (80%) and the test dataset (20%); 20% of the training data were then randomly assigned as validation data. A learning model was created using a customized DetectNet built in DIGITS version 5.0 (NVIDIA, Santa Clara, USA). The performance of the deep learning system was assessed and compared with that of two dental residents. Results The performance of the deep learning system was superior to that of the dental residents, except for the recall of radicular cysts. The areas under the curve (AUCs) for NDCs and radicular cysts in the deep learning system were significantly higher than those of the dental residents, whose results revealed a significant difference in AUC only between NDCs and the normal group. Conclusion The deep learning system showed superior performance in detecting NDCs and radicular cysts and in distinguishing these lesions from normal cases.
Affiliation(s)
- Yoshitaka Kise
  - Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Chiaki Kuwada
  - Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Mizuho Mori
  - Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda
  - Department of Oral Radiology, School of Dentistry, Osaka Dental University, Osaka, Japan
- Yoshiko Ariji
  - Department of Oral Radiology, School of Dentistry, Osaka Dental University, Osaka, Japan
- Eiichiro Ariji
  - Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan

15
Niu L, Zhong S, Yang Z, Tan B, Zhao J, Zhou W, Zhang P, Hua L, Sun W, Li H. Mask refinement network for tooth segmentation on panoramic radiographs. Dentomaxillofac Radiol 2024; 53:127-136. [PMID: 38166355] [DOI: 10.1093/dmfr/twad012] [Citations in RCA: 0]
Abstract
OBJECTIVES Instance-level tooth segmentation extracts abundant localization and shape information from panoramic radiographs (PRs). The aim of this study was to evaluate the performance of a mask refinement network that extracts precise tooth edges. METHODS A public dataset consisting of 543 PRs and 16,211 labelled teeth was utilized. The structure of a typical Mask Region-based Convolutional Neural Network (Mask R-CNN) was used as the baseline, and a novel loss function was designed to focus on producing accurate mask edges. In addition to the proposed method, 3 existing tooth segmentation methods were also implemented on the dataset for comparative analysis. The average precision (AP), mean intersection over union (mIoU), and mean Hausdorff distance (mHAU) were used to evaluate the performance of the network. RESULTS A novel mask refinement region-based convolutional neural network was designed based on the Mask R-CNN architecture to extract refined masks for individual teeth on PRs. A total of 3311 teeth were correctly detected out of 3382 tested teeth in 111 PRs. The AP, precision, and recall were 0.686, 0.979, and 0.952, respectively. Moreover, the mIoU and mHAU reached 0.941 and 9.7, respectively, significantly better than those of the other existing segmentation methods. CONCLUSIONS This study proposed an efficient deep learning algorithm for accurately extracting the mask of any individual tooth from PRs. Precise tooth masks can provide a valuable reference for clinical diagnosis and treatment, and the algorithm is a fundamental basis for further automated processing applications.
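The two mask-quality metrics named in this abstract, intersection over union and the Hausdorff distance, can both be computed directly from binary masks; a minimal NumPy sketch (the toy masks below are hypothetical, not from the study's dataset):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets
    of two masks (Euclidean distance, in pixels)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    # Pairwise distances between every foreground pixel of a and of b.
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy masks: the "prediction" is the ground-truth rectangle shifted down one pixel.
gt = np.zeros((10, 10), dtype=bool); gt[2:8, 2:6] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:9, 2:6] = True
print(round(mask_iou(gt, pred), 3))  # 20 shared pixels / 28 in the union ≈ 0.714
print(hausdorff(gt, pred))           # a one-pixel shift gives distance 1.0
```

IoU rewards overlap area, while the Hausdorff distance penalizes the single worst boundary error, which is why the paper reports both.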
Affiliation(s)
- Li Niu
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Shengwei Zhong
  - School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
- Zhiyu Yang
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Baochun Tan
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Junjie Zhao
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Wei Zhou
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Peng Zhang
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Lingchen Hua
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Weibin Sun
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China
- Houxuan Li
  - Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province 210008, China

16
Farajollahi M, Safarian MS, Hatami M, Esmaeil Nejad A, Peters OA. Applying artificial intelligence to detect and analyse oral and maxillofacial bone loss: a scoping review. Aust Endod J 2023; 49:720-734. [PMID: 37439465] [DOI: 10.1111/aej.12775] [Citations in RCA: 0]
Abstract
Radiographic evaluation of bone changes is one of the main tools in the diagnosis of many oral and maxillofacial diseases. However, this approach has limitations, including restricted accuracy, inconsistency, and comparatively low diagnostic efficiency. Recently, artificial intelligence (AI)-based algorithms such as deep learning networks have been introduced as a solution to overcome these challenges. Based on recent studies, AI can improve an expert clinician's detection accuracy for periapical pathology, periodontal diseases and their prognostication, as well as peri-implant bone loss. AI has also been successfully used to detect and diagnose oral and maxillofacial lesions with a high predictive value. This study aims to review the current evidence on artificial intelligence applications in the detection and analysis of bone loss in the oral and maxillofacial regions.
Affiliation(s)
- Mehran Farajollahi
  - Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Sadegh Safarian
  - Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Masoud Hatami
  - Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azadeh Esmaeil Nejad
  - Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ove A Peters
  - School of Dentistry, The University of Queensland, Herston, Queensland, Australia

17
Kuwada C, Ariji Y, Kise Y, Fukuda M, Ota J, Ohara H, Kojima N, Ariji E. Detection of unilateral and bilateral cleft alveolus on panoramic radiographs using a deep-learning system. Dentomaxillofac Radiol 2023; 52:20210436. [PMID: 35076259] [PMCID: PMC10968766] [DOI: 10.1259/dmfr.20210436] [Citations in RCA: 3]
Abstract
OBJECTIVES The purpose of this study was to evaluate the difference in performance of deep-learning (DL) models with respect to the image classes and amount of training data, in order to create an effective DL model for detecting both unilateral cleft alveoli (UCAs) and bilateral cleft alveoli (BCAs) on panoramic radiographs. METHODS Model U was created using UCA and normal images, and Model B was created using BCA and normal images. Models C1 and C2 were created using the combined UCA, BCA, and normal data. The same number of CAs was used for training Models U, B, and C1, whereas Model C2 was created with a larger amount of data. The performance of all four models was evaluated with the same test data and compared with that of two human observers. RESULTS The recall values were 0.60, 0.73, 0.80, and 0.88 for Models U, B, C1, and C2, respectively. Model C2 scored highest in precision and F-measure (0.98 and 0.92), almost the same as the human observers. Significant differences were found in the ratios of detected to undetected CAs between Models U and C1 (p = 0.01), Models U and C2 (p < 0.001), and Models B and C2 (p = 0.036). CONCLUSIONS The DL models trained using both UCA and BCA data (Models C1 and C2) achieved high detection performance. Moreover, the performance of a DL model may depend on the amount of training data.
Affiliation(s)
- Chiaki Kuwada
  - Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshiko Ariji
  - Department of Oral Radiology, Osaka Dental University, Osaka, Japan
- Yoshitaka Kise
  - Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda
  - Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Jun Ota
  - Department of General Dentistry, Aichi-Gakuin University School of Dentistry, Dental Hospital, Nagoya, Japan
- Hisanobu Ohara
  - Department of General Dentistry, Aichi-Gakuin University School of Dentistry, Dental Hospital, Nagoya, Japan
- Norinaga Kojima
  - Department of General Dentistry, Aichi-Gakuin University School of Dentistry, Dental Hospital, Nagoya, Japan
- Eiichiro Ariji
  - Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan

18
Icoz D, Terzioglu H, Ozel MA, Karakurt R. Evaluation of an artificial intelligence system for the diagnosis of apical periodontitis on digital panoramic images. Niger J Clin Pract 2023; 26:1085-1090. [PMID: 37635600] [DOI: 10.4103/njcp.njcp_624_22] [Citations in RCA: 0]
Abstract
Aims The aim of the present study was to evaluate the effectiveness of an artificial intelligence (AI) system in the detection of roots with apical periodontitis (AP) on digital panoramic radiographs. Materials and Methods Three hundred and six panoramic radiographs containing 400 roots with AP (an equal number for both jaws) were used to test the diagnostic performance of an AI system. The radiographs were selected from the archive using the terms 'apical lesion' and 'apical periodontitis' and then confirmed by the agreement of two oral and maxillofacial radiologists, who also carried out the grouping and determination of the lesion borders. A deep learning (DL) model was built, and its diagnostic performance was evaluated using recall, precision, and F-measure. Results The recall, precision, and F-measure scores were 0.98, 0.56, and 0.71, respectively. The model correctly detected 169 of 200 roots with AP in the mandible, but only 56 of 200 roots in the maxilla. Only four roots without AP were incorrectly identified as having AP. Conclusions The DL method developed for the automatic detection of AP on digital panoramic radiographs showed high recall, precision, and F-measure values for the mandible, but low values for the maxilla, especially for widened periodontal ligament (PL)/uncertain AP.
Affiliation(s)
- D Icoz
  - Oral and Maxillofacial Radiology, Faculty of Dentistry, Selcuk University, Turkey
- H Terzioglu
  - Electrical Electronics Engineering, Faculty of Technology, Selcuk University, Turkey
- M A Ozel
  - Private Practice, Department of Research and Development, Aydin Spare Parts Industry, Turkey
- R Karakurt
  - Beyhekim Oral and Dental Health Center, Department of Oral and Maxillofacial Radiology, Konya, Turkey

19
Sivari E, Senirkentli GB, Bostanci E, Guzel MS, Acici K, Asuroglu T. Deep Learning in Diagnosis of Dental Anomalies and Diseases: A Systematic Review. Diagnostics (Basel) 2023; 13:2512. [PMID: 37568875] [PMCID: PMC10416832] [DOI: 10.3390/diagnostics13152512] [Citations in RCA: 0]
Abstract
Deep learning and diagnostic applications in oral and dental health have received significant attention recently. In this review, studies applying deep learning to diagnose anomalies and diseases in dental image material were systematically compiled, and their datasets, methodologies, test processes, explainable artificial intelligence methods, and findings were analyzed. Tests and results in studies involving human-artificial intelligence comparisons are discussed in detail to draw attention to the clinical importance of deep learning. In addition, the review critically evaluates the literature to guide and further develop future studies in this field. An extensive literature search was conducted for the range 2019-May 2023 using the Medline (PubMed) and Google Scholar databases to identify eligible articles, and 101 studies were shortlisted, including applications for diagnosing dental anomalies (n = 22) and diseases (n = 79) using deep learning for classification, object detection, and segmentation tasks. According to the results, the most commonly used task type was classification (n = 51), the most commonly used dental image material was panoramic radiographs (n = 55), and the most frequently used performance metrics were sensitivity/recall/true positive rate (n = 87) and accuracy (n = 69). Dataset sizes ranged from 60 to 12,179 images. Although deep learning algorithms are used as individual or at least individualized architectures, standardized architectures such as pre-trained CNNs, Faster R-CNN, YOLO, and U-Net have been used in most studies. Few studies have used explainable AI methods (n = 22) or applied tests comparing human and artificial intelligence (n = 21). Deep learning is promising for better diagnosis and treatment planning in dentistry based on the high-performance results reported by the studies. Nevertheless, its safety should be demonstrated using a more reproducible and comparable methodology, including tests with information about clinical applicability, by defining a standard set of tests and performance metrics.
Affiliation(s)
- Esra Sivari
  - Department of Computer Engineering, Cankiri Karatekin University, Cankiri 18100, Turkey
- Erkan Bostanci
  - Department of Computer Engineering, Ankara University, Ankara 06830, Turkey
- Koray Acici
  - Department of Artificial Intelligence and Data Engineering, Ankara University, Ankara 06830, Turkey
- Tunc Asuroglu
  - Faculty of Medicine and Health Technology, Tampere University, 33720 Tampere, Finland

20
Zhu J, Chen Z, Zhao J, Yu Y, Li X, Shi K, Zhang F, Yu F, Shi K, Sun Z, Lin N, Zheng Y. Artificial intelligence in the diagnosis of dental diseases on panoramic radiographs: a preliminary study. BMC Oral Health 2023; 23:358. [PMID: 37270488] [DOI: 10.1186/s12903-023-03027-6] [Citations in RCA: 4]
Abstract
BACKGROUND Artificial intelligence (AI) has been introduced to interpret panoramic radiographs (PRs). The aim of this study was to develop an AI framework to diagnose multiple dental diseases on PRs and to initially evaluate its performance. METHODS The AI framework was developed based on 2 deep convolutional neural networks (CNNs), BDU-Net and nnU-Net. A total of 1,996 PRs were used for training, and diagnostic evaluation was performed on a separate evaluation dataset of 282 PRs. Sensitivity, specificity, Youden's index, the area under the curve (AUC), and diagnostic time were calculated. Dentists with 3 different levels of seniority (H: high, M: medium, L: low) diagnosed the same evaluation dataset independently. The Mann-Whitney U test and DeLong test were conducted for statistical analysis (α = 0.05). RESULTS Sensitivity, specificity, and Youden's index of the framework for diagnosing 5 diseases were 0.964, 0.996, 0.960 (impacted teeth); 0.953, 0.998, 0.951 (full crowns); 0.871, 0.999, 0.870 (residual roots); 0.885, 0.994, 0.879 (missing teeth); and 0.554, 0.990, 0.544 (caries), respectively. AUC of the framework for these diseases was 0.980 (95%CI: 0.976-0.983, impacted teeth), 0.975 (95%CI: 0.972-0.978, full crowns), 0.935 (95%CI: 0.929-0.940, residual roots), 0.939 (95%CI: 0.934-0.944, missing teeth), and 0.772 (95%CI: 0.764-0.781, caries), respectively. AUC of the AI framework was comparable to that of all dentists in diagnosing residual roots (p > 0.05), and its AUC values were similar to (p > 0.05) or better than (p < 0.05) those of M-level dentists for all 5 diseases. However, the AUC of the framework was statistically lower than that of some H-level dentists for diagnosing impacted teeth, missing teeth, and caries (p < 0.05). The mean diagnostic time of the framework was significantly shorter than that of all dentists (p < 0.001). CONCLUSIONS The AI framework based on BDU-Net and nnU-Net demonstrated high specificity in diagnosing impacted teeth, full crowns, missing teeth, residual roots, and caries with high efficiency. The clinical feasibility of the AI framework was preliminarily verified, since its performance was similar to or even better than that of dentists with 3-10 years of experience. However, the framework's caries diagnosis should be improved.
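The sensitivity/specificity/Youden's index triplets reported in this abstract follow directly from confusion-matrix counts; a small illustrative helper (the counts below are hypothetical, chosen only to show the arithmetic):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and Youden's index from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate: diseased cases flagged
    specificity = tn / (tn + fp)   # true-negative rate: healthy cases cleared
    youden = sensitivity + specificity - 1.0
    return sensitivity, specificity, youden

# Hypothetical counts for one finding on an evaluation set of panoramic radiographs.
sens, spec, j = diagnostic_metrics(tp=54, fp=2, fn=4, tn=222)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} Youden={j:.3f}")
# sensitivity=0.931 specificity=0.991 Youden=0.922
```

Youden's index ranges from 0 (no better than chance) to 1 (perfect), which is why the abstract's caries triplet (0.554, 0.990, 0.544) signals a weak detector despite its high specificity.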
Collapse
Affiliation(s)
- Junhua Zhu
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Zhi Chen
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Jing Zhao
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Yueyuan Yu
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Xiaojuan Li
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Kangjian Shi
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Fan Zhang
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Feifei Yu
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Keying Shi
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Zhe Sun
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Nengjie Lin
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Yuanna Zheng
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China.
| |
Collapse
|
21
|
Bu WQ, Guo YX, Zhang D, Du SY, Han MQ, Wu ZX, Tang Y, Chen T, Guo YC, Meng HT. Automatic sex estimation using deep convolutional neural network based on orthopantomogram images. Forensic Sci Int 2023; 348:111704. [PMID: 37094502 DOI: 10.1016/j.forsciint.2023.111704] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 04/09/2023] [Accepted: 04/19/2023] [Indexed: 04/26/2023]
Abstract
Sex estimation is very important in forensic applications as part of individual identification. Morphological sex estimation methods predominantly focus on anatomical measurements. Based on the close relationship between sex chromosome genes and facial characterization, craniofacial hard-tissue morphology shows sexual dimorphism. To establish a more labor-saving, rapid, and accurate reference for sex estimation, this study investigated a deep learning network-based artificial intelligence (AI) model using orthopantomograms (OPG) to estimate sex in northern Chinese subjects. In total, 10703 OPG images were divided into training (80%), validation (10%), and test (10%) sets. At the same time, different age thresholds were selected to compare the accuracy differences between adults and minors. The accuracy of sex estimation using the CNN (convolutional neural network) model was higher for adults (90.97%) than for minors (82.64%). This work demonstrated that the proposed model, trained with a large dataset, could be used for automatic morphological sex estimation with favorable performance and practical significance in forensic science for adults in northern China, while also providing a reference for minors to some extent.
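The 80/10/10 training/validation/test partition described above can be sketched in a few lines. The function name, fixed seed, and use of Python's standard library here are illustrative assumptions, not details from the paper:

```python
import random

def split_indices(n, seed=0, train_frac=0.8, val_frac=0.1):
    """Shuffle image indices and split them 80/10/10 (hypothetical sketch)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_set, val_set, test_set = split_indices(10703)
print(len(train_set), len(val_set), len(test_set))  # 8562 1070 1071
```

The leftover indices fall into the test set, so the three splits always cover all 10703 images exactly once with no overlap.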
Collapse
Affiliation(s)
- Wen-Qing Bu
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China; Department of Orthodontics, Stomatological Hospital of Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China
| | - Yu-Xin Guo
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China
| | - Dong Zhang
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, People's Republic of China
| | - Shao-Yi Du
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, People's Republic of China
| | - Meng-Qi Han
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China
| | - Zi-Xuan Wu
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China; Department of Orthodontics, Stomatological Hospital of Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China
| | - Yu Tang
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China
| | - Teng Chen
- College of Medicine and Forensics, Xi'an Jiaotong University Health Science Center, 76 West Yanta Road, Xi'an 710004, Shaanxi, People's Republic of China
| | - Yu-Cheng Guo
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China; Department of Orthodontics, Stomatological Hospital of Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China; National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, People's Republic of China.
| | - Hao-Tian Meng
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China.
| |
Collapse
|
22
|
Deep learning for preliminary profiling of panoramic images. Oral Radiol 2023; 39:275-281. [PMID: 35759114 DOI: 10.1007/s11282-022-00634-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 05/27/2022] [Indexed: 10/17/2022]
Abstract
OBJECTIVE This study explored the feasibility of using deep learning for profiling of panoramic radiographs. STUDY DESIGN Panoramic radiographs of 1000 patients were used. Patients were categorized using seven dental or physical characteristics: age, gender, mixed or permanent dentition, number of presenting teeth, impacted wisdom tooth status, implant status, and prosthetic treatment status. A Neural Network Console (Sony Network Communications Inc., Tokyo, Japan) deep learning system and the VGG-Net deep convolutional neural network were used for classification. RESULTS Dentition and prosthetic treatment status exhibited classification accuracies of 93.5% and 90.5%, respectively. Tooth number and implant status both exhibited 89.5% classification accuracy; impacted wisdom tooth status exhibited 69.0% classification accuracy. Age and gender exhibited classification accuracies of 56.0% and 75.5%, respectively. CONCLUSION Our proposed preliminary profiling method may be useful for preliminary interpretation of panoramic images and preprocessing before the application of additional artificial intelligence techniques.
Collapse
|
23
|
Kuwada C, Ariji Y, Kise Y, Fukuda M, Nishiyama M, Funakoshi T, Takeuchi R, Sana A, Kojima N, Ariji E. Deep-learning systems for diagnosing cleft palate on panoramic radiographs in patients with cleft alveolus. Oral Radiol 2023; 39:349-354. [PMID: 35984588 PMCID: PMC10017636 DOI: 10.1007/s11282-022-00644-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 07/15/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVES The aim of the present study was to create effective deep learning-based models for diagnosing the presence or absence of cleft palate (CP) in patients with unilateral or bilateral cleft alveolus (CA) on panoramic radiographs. METHODS The panoramic images of 491 patients with unilateral or bilateral cleft alveolus were used to create two models. Model A, which detects the upper incisor area on panoramic radiographs and classifies the areas by the presence or absence of CP, was created using both the object detection and classification functions of DetectNet. Using the same data, Model B, which directly classifies the presence or absence of CP on panoramic radiographs, was created using the classification function of VGG-16. The performances of both models were evaluated on the same test data and compared with those of two radiologists. RESULTS The recall, precision, and F-measure were all 1.00 for Model A. The area under the receiver operating characteristic curve (AUC) values were 0.95, 0.93, 0.70, and 0.63 for Model A, Model B, and the two radiologists, respectively. The AUCs of the models were significantly higher than those of the radiologists. CONCLUSIONS The deep learning-based models developed in the present study have potential for use in supporting observer interpretation of the presence of cleft palate on panoramic radiographs.
Collapse
Affiliation(s)
- Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan.
| | - Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, 5-17, Otemae 1-chome, Chuo-ku, Osaka, Japan
| | - Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| | - Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| | - Masako Nishiyama
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| | - Takuma Funakoshi
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| | - Rihoko Takeuchi
- Department of General Dentistry, Dental Hospital, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| | - Airi Sana
- Department of General Dentistry, Dental Hospital, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| | - Norinaga Kojima
- Department of General Dentistry, Dental Hospital, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, Japan
| |
Collapse
|
24
|
Deep Learning for Detection of Periapical Radiolucent Lesions: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy. J Endod 2023; 49:248-261.e3. [PMID: 36563779 DOI: 10.1016/j.joen.2022.12.007] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 12/11/2022] [Accepted: 12/12/2022] [Indexed: 12/25/2022]
Abstract
INTRODUCTION The aim of this systematic review and meta-analysis was to investigate the overall accuracy of deep learning models in detecting periapical (PA) radiolucent lesions in dental radiographs, compared to expert clinicians. METHODS The electronic databases Medline (via PubMed), Embase (via Ovid), Scopus, Google Scholar, and arXiv were searched. Quality of eligible studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Quantitative analyses were conducted using hierarchical logistic regression for meta-analyses of diagnostic accuracy. Subgroup analyses were conducted for different image modalities (PA radiographs, panoramic radiographs, and cone beam computed tomographic images) and for different deep learning tasks (classification, segmentation, object detection). Certainty of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system. RESULTS A total of 932 studies were screened. Eighteen studies were included in the systematic review, of which 6 were selected for quantitative analyses. Six studies had a low risk of bias; twelve studies had a risk of bias. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio of the included studies (all image modalities; all tasks) were 0.925 (95% confidence interval [CI], 0.862-0.960), 0.852 (95% CI, 0.810-0.885), 6.261 (95% CI, 4.717-8.311), 0.087 (95% CI, 0.045-0.168), and 71.692 (95% CI, 29.957-171.565), respectively. No publication bias was detected (Egger's test, P = .82). GRADE showed a "high" certainty of evidence for the studies included in the meta-analyses. CONCLUSION Compared to expert clinicians, deep learning showed highly accurate results in detecting PA radiolucent lesions in dental radiographs. Most studies had a risk of bias, and there was a lack of prospective studies.
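The pooled statistics above are internally consistent: both likelihood ratios and the diagnostic odds ratio follow from pooled sensitivity and specificity. A minimal Python sketch using the point estimates from the abstract (small discrepancies from the reported values arise because the meta-analysis pools these quantities at the study level):

```python
# Derive the likelihood ratios and diagnostic odds ratio (DOR)
# from the pooled sensitivity and specificity point estimates.
sensitivity = 0.925
specificity = 0.852

lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio, ~6.25
lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio, ~0.088
dor = lr_pos / lr_neg                     # diagnostic odds ratio, ~71

print(f"LR+ = {lr_pos:.3f}, LR- = {lr_neg:.3f}, DOR = {dor:.1f}")
```

These derived values (6.25, 0.088, 71.0) sit close to the reported pooled estimates of 6.261, 0.087, and 71.692.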
Collapse
|
25
|
Dai X, Jiang X, Jing Q, Zheng J, Zhu S, Mao T, Wang D. A one-stage deep learning method for fully automated mesiodens localization on panoramic radiographs. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
26
|
Hung KF, Ai QYH, Wong LM, Yeung AWK, Li DTS, Leung YY. Current Applications of Deep Learning and Radiomics on CT and CBCT for Maxillofacial Diseases. Diagnostics (Basel) 2022; 13:110. [PMID: 36611402 PMCID: PMC9818323 DOI: 10.3390/diagnostics13010110] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 12/23/2022] [Accepted: 12/24/2022] [Indexed: 12/31/2022] Open
Abstract
The increasing use of computed tomography (CT) and cone beam computed tomography (CBCT) in oral and maxillofacial imaging has driven the development of deep learning and radiomics applications to assist clinicians in early diagnosis, accurate prognosis prediction, and efficient treatment planning of maxillofacial diseases. This narrative review aimed to provide an up-to-date overview of the current applications of deep learning and radiomics on CT and CBCT for the diagnosis and management of maxillofacial diseases. Based on current evidence, a wide range of deep learning models on CT/CBCT images have been developed for automatic diagnosis, segmentation, and classification of jaw cysts and tumors, cervical lymph node metastasis, salivary gland diseases, temporomandibular joint (TMJ) disorders, maxillary sinus pathologies, mandibular fractures, and dentomaxillofacial deformities, while CT-/CBCT-derived radiomics applications mainly focused on occult lymph node metastasis in patients with oral cancer, malignant salivary gland tumors, and TMJ osteoarthritis. Most of these models showed high performance, and some of them even outperformed human experts. The models with performance on par with human experts have the potential to serve as clinically practicable tools to achieve the earliest possible diagnosis and treatment, leading to a more precise and personalized approach for the management of maxillofacial diseases. Challenges and issues, including the lack of generalizability and explainability of deep learning models and the uncertainty in the reproducibility and stability of radiomic features, should be overcome to gain the trust of patients, providers, and healthcare organizers for daily clinical use of these models.
Collapse
Affiliation(s)
- Kuo Feng Hung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
| | - Qi Yong H. Ai
- Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Lun M. Wong
- Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
| | - Dion Tik Shun Li
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
| | - Yiu Yan Leung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
| |
Collapse
|
27
|
Bayrakdar IS, Orhan K, Akarsu S, Çelik Ö, Atasoy S, Pekince A, Yasa Y, Bilgir E, Sağlam H, Aslan AF, Odabaş A. Deep-learning approach for caries detection and segmentation on dental bitewing radiographs. Oral Radiol 2022; 38:468-479. [PMID: 34807344 DOI: 10.1007/s11282-021-00577-9] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 11/09/2021] [Indexed: 02/05/2023]
Abstract
OBJECTIVES The aim of this study is to propose an automatic caries detection and segmentation model based on Convolutional Neural Network (CNN) algorithms for dental bitewing radiographs, using VGG-16 and U-Net architectures, and to evaluate the clinical performance of the model against a human observer. METHODS A total of 621 anonymized bitewing radiographs were used to develop the Artificial Intelligence (AI) system (CranioCatch, Eskisehir, Turkey) for the detection and segmentation of caries lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Ordu University. VGG-16 and U-Net models implemented with PyTorch were used for the detection and segmentation of caries lesions, respectively. RESULTS The sensitivity, precision, and F-measure rates were 0.84, 0.84, and 0.84 for caries detection, and 0.81, 0.86, and 0.84 for caries segmentation, respectively. Compared with 5 experienced observers on an external radiographic dataset, the AI models showed superiority to assistant specialists. CONCLUSION CNN-based AI algorithms have the potential to detect and segment dental caries accurately and effectively in bitewing radiographs. AI algorithms based on deep learning methods have the potential to assist clinicians in routine clinical practice by quickly and reliably detecting tooth caries. The use of these algorithms in clinical practice can provide an important benefit to physicians as a clinical decision support system in dentistry.
Collapse
Affiliation(s)
- Ibrahim Sevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, 26240, Eskisehir, Turkey.
- Eskisehir Osmangazi University Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir, Turkey.
| | - Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
| | - Serdar Akarsu
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
| | - Özer Çelik
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
| | - Samet Atasoy
- Department of Restorative Dentistry, Faculty of Dentistry, Ordu University, Ordu, Turkey
| | - Adem Pekince
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Karabuk University, Karabuk, Turkey
| | - Yasin Yasa
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ordu University, Ordu, Turkey
| | - Elif Bilgir
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, 26240, Eskisehir, Turkey
| | - Hande Sağlam
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, 26240, Eskisehir, Turkey
| | - Ahmet Faruk Aslan
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
| | - Alper Odabaş
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
| |
Collapse
|
28
|
Kotaki S, Nishiguchi T, Araragi M, Akiyama H, Fukuda M, Ariji E, Ariji Y. Transfer learning in diagnosis of maxillary sinusitis using panoramic radiography and conventional radiography. Oral Radiol 2022:10.1007/s11282-022-00658-3. [DOI: 10.1007/s11282-022-00658-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Accepted: 09/19/2022] [Indexed: 10/14/2022]
|
29
|
Ming Y, Dong X, Zhao J, Chen Z, Wang H, Wu N. Deep learning-based multimodal image analysis for cervical cancer detection. Methods 2022; 205:46-52. [DOI: 10.1016/j.ymeth.2022.05.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 03/25/2022] [Accepted: 05/16/2022] [Indexed: 10/18/2022] Open
|
30
|
Potential and impact of artificial intelligence algorithms in dento-maxillofacial radiology. Clin Oral Investig 2022; 26:5535-5555. [PMID: 35438326 DOI: 10.1007/s00784-022-04477-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 03/25/2022] [Indexed: 12/20/2022]
Abstract
OBJECTIVES Novel artificial intelligence (AI) learning algorithms in dento-maxillofacial radiology (DMFR) are continuously being developed and improved using advanced convolutional neural networks. This review provides an overview of the potential and impact of AI algorithms in DMFR. MATERIALS AND METHODS A narrative review was conducted on the literature on AI algorithms in DMFR. RESULTS In the field of DMFR, AI algorithms were mainly proposed for (1) automated detection of dental caries, periapical pathologies, root fracture, periodontal/peri-implant bone loss, and maxillofacial cysts/tumors; (2) classification of mandibular third molars, skeletal malocclusion, and dental implant systems; (3) localization of cephalometric landmarks; and (4) improvement of image quality. Data insufficiency, overfitting, and the lack of interpretability are the main issues in the development and use of image-based AI algorithms. Several strategies have been suggested to address these issues, such as data augmentation, transfer learning, semi-supervised training, few-shot learning, and gradient-weighted class activation mapping. CONCLUSIONS Further integration of relevant AI algorithms into one fully automatic end-to-end intelligent system for possible multi-disciplinary applications is very likely to be a field of increased interest in the future. CLINICAL RELEVANCE This review provides dental practitioners and researchers with a comprehensive understanding of the current development, performance, issues, and prospects of image-based AI algorithms in DMFR.
Collapse
|
31
|
Optimization of College English Classroom Teaching Efficiency by Deep Learning SDD Algorithm. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1014501. [PMID: 35096036 PMCID: PMC8799347 DOI: 10.1155/2022/1014501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 12/21/2021] [Accepted: 12/27/2021] [Indexed: 12/03/2022]
Abstract
In order to improve the teaching efficiency of English teachers in the classroom, a deep learning target detection algorithm is applied to classroom monitoring information: the Single Shot MultiBox Detector (SSD) is optimized into a MobileNet-Single Shot MultiBox Detector (MobileNet-SSD) design. Analysis of the MobileNet-SSD algorithm identified two shortcomings, a large number of backbone network parameters and poor small-target detection, which the optimized design addresses. In experiments on student behaviour analysis, the average detection accuracy of the optimized algorithm reached 82.13% and the detection speed reached 23.5 fps (frames per second); the algorithm achieved 81.11% accuracy in detecting students' writing behaviour. This shows that the proposed algorithm improves small-target recognition accuracy without changing the running speed of the traditional algorithm, and it offers better detection accuracy than previous detection algorithms. The optimized algorithm thus provides modern technical support for English teachers to understand the learning status of students and has strong practical significance for improving the efficiency of English classroom teaching.
Collapse
|
32
|
Automatic detection and segmentation of morphological changes of the maxillary sinus mucosa on cone-beam computed tomography images using a three-dimensional convolutional neural network. Clin Oral Investig 2022; 26:3987-3998. [DOI: 10.1007/s00784-021-04365-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 12/29/2021] [Indexed: 02/07/2023]
|
33
|
Kise Y, Ariji Y, Kuwada C, Fukuda M, Ariji E. Effect of deep transfer learning with a different kind of lesion on classification performance of pre-trained model: Verification with radiolucent lesions on panoramic radiographs. Imaging Sci Dent 2022; 53:27-34. [PMID: 37006785 PMCID: PMC10060760 DOI: 10.5624/isd.20220133] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 10/28/2022] [Accepted: 10/31/2022] [Indexed: 12/02/2022] Open
Abstract
Purpose The aim of this study was to clarify the influence of training with a different kind of lesion on the performance of a target model. Materials and Methods A total of 310 patients (211 men, 99 women; average age, 47.9±16.1 years) were selected and their panoramic images were used in this study. A source model was created using panoramic radiographs including mandibular radiolucent cyst-like lesions (radicular cyst, dentigerous cyst, odontogenic keratocyst, and ameloblastoma), and was then transferred and trained on images of Stafne's bone cavity. The learning model was created using a customized DetectNet built in DIGITS version 5.0 (NVIDIA, Santa Clara, CA). Two machines (Machines A and B) with identical specifications were used to simulate transfer learning: the source model was created on Machine A from data consisting of ameloblastoma, odontogenic keratocyst, dentigerous cyst, and radicular cyst, then transferred to Machine B and trained on additional data of Stafne's bone cavity to create target models. To investigate the effect of the number of cases, several target models were created with different numbers of Stafne's bone cavity cases. Results When the Stafne's bone cavity data were added to the training, both the detection and classification performances for this pathology improved. Even for lesions other than Stafne's bone cavity, detection sensitivities tended to increase with the number of Stafne's bone cavity cases. Conclusion This study showed that using different lesions for transfer learning improves the performance of the model.
Collapse
Affiliation(s)
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, Osaka, Japan
| | - Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| |
Collapse
|
34
|
Nozawa M, Ito H, Ariji Y, Fukuda M, Igarashi C, Nishiyama M, Ogi N, Katsumata A, Kobayashi K, Ariji E. Automatic segmentation of the temporomandibular joint disc on magnetic resonance images using a deep learning technique. Dentomaxillofac Radiol 2022; 51:20210185. [PMID: 34347537 PMCID: PMC8693319 DOI: 10.1259/dmfr.20210185] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
OBJECTIVES The aims of the present study were to construct a deep learning model for automatic segmentation of the temporomandibular joint (TMJ) disc on magnetic resonance (MR) images, and to evaluate its performance using internal and external test data. METHODS In total, 1200 MR images of closed and open mouth positions in patients with temporomandibular disorder (TMD) were collected from two hospitals (Hospitals A and B). The training and validation data comprised 1000 images from Hospital A, which were used to create the segmentation model. Performance was evaluated using 200 images from Hospital A (internal validity test) and 200 images from Hospital B (external validity test). RESULTS Although recall (sensitivity) was lower with data from Hospital B than with data from Hospital A, both performances were above 80%. Precision (positive predictive value) was lower for the position of anterior disc displacement when test data from Hospital A were used. According to the intra-articular TMD classification, the proportion of accurately assigned TMJs was higher with images from Hospital A than with images from Hospital B. CONCLUSION The segmentation deep learning model created in this study may be useful for identifying disc positions on MR images.
Collapse
Affiliation(s)
| | - Hirokazu Ito
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
| | - Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Chinami Igarashi
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
| | - Masako Nishiyama
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Nobumi Ogi
- Department of Oral and Maxillofacial Surgery, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
| | - Kaoru Kobayashi
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| |
|
35
|
Putra RH, Doi C, Yoda N, Astuti ER, Sasaki K. Current applications and development of artificial intelligence for digital dental radiography. Dentomaxillofac Radiol 2022; 51:20210197. [PMID: 34233515 PMCID: PMC8693331 DOI: 10.1259/dmfr.20210197] [Citation(s) in RCA: 68] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
In the last few years, artificial intelligence (AI) research has been rapidly developing and emerging in the field of dental and maxillofacial radiology. Dental radiography, which is commonly used in daily practice, provides an incredibly rich resource for AI development and has attracted many researchers to develop applications for various purposes. This study reviewed the applicability of AI to dental radiography based on current studies. Online searches of the PubMed and IEEE Xplore databases, up to December 2020, and subsequent manual searches were performed. We then categorized the applications of AI by purpose: diagnosis of dental caries, periapical pathologies, and periodontal bone loss; cyst and tumor classification; cephalometric analysis; screening of osteoporosis; tooth recognition and forensic odontology; dental implant system recognition; and image quality enhancement. The current development of AI methodology in each of these applications was subsequently discussed. Although most of the reviewed studies demonstrated great potential for AI application in dental radiography, further development is still needed before implementation in clinical routine, owing to several challenges and limitations such as a lack of dataset-size justification and unstandardized reporting formats. Considering these limitations and challenges, future AI research in dental radiography should follow standardized reporting formats in order to align research designs and enhance the global impact of AI development.
Affiliation(s)
| | - Chiaki Doi
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| | - Nobuhiro Yoda
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| | - Eha Renwi Astuti
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Jl. Mayjen Prof. Dr. Moestopo no 47, Surabaya, Indonesia
| | - Keiichi Sasaki
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| |
|
36
|
Detection and classification of unilateral cleft alveolus with and without cleft palate on panoramic radiographs using a deep learning system. Sci Rep 2021; 11:16044. [PMID: 34363000 PMCID: PMC8346464 DOI: 10.1038/s41598-021-95653-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 07/28/2021] [Indexed: 11/09/2022] Open
Abstract
Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of this study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique, with and without normal data in the learning process; to verify its performance in comparison with human observers; and to clarify characteristic appearances probably related to the performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP) or without cleft palate (CA only) and 210 patients without CA (normal) were used to create two models with DetectNet. Models 1 and 2 were developed based on the data without and with normal subjects, respectively, to detect CAs and classify them as with or without CP. Model 2 reduced the false positive rate (1/30) compared with Model 1 (12/30). The overall accuracy of Model 2 was higher than that of Model 1 and the human observers. The model created in this study appears to have the potential to detect and classify CAs on panoramic radiographs, and may be useful for assisting human observers.
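For reference, the false-positive comparison quoted above (Model 1 flagged 12 of 30 normal images; Model 2 only 1 of 30) reduces to a simple rate calculation; the sketch below merely restates the abstract's reported counts:

```python
def false_positive_rate(false_positives: int, negatives: int) -> float:
    """Fraction of truly negative (normal) images wrongly flagged as CA."""
    return false_positives / negatives

# Counts reported in the abstract for the 30 normal test images.
fpr_model1 = false_positive_rate(12, 30)  # trained without normal data -> 0.4
fpr_model2 = false_positive_rate(1, 30)   # trained with normal data included
```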
|
37
|
Machine Learning and Intelligent Diagnostics in Dental and Orofacial Pain Management: A Systematic Review. Pain Res Manag 2021; 2021:6659133. [PMID: 33986900 PMCID: PMC8093041 DOI: 10.1155/2021/6659133] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 03/11/2021] [Accepted: 04/17/2021] [Indexed: 02/07/2023]
Abstract
Purpose The study explored the clinical influence, effectiveness, limitations, and human-comparison outcomes of machine learning in diagnosing (1) dental diseases, (2) periodontal diseases, (3) trauma and neuralgias, (4) cysts and tumors, (5) glandular disorders, and (6) bone and temporomandibular joint disorders as possible causes of dental and orofacial pain. Method Scopus, PubMed, and Web of Science (all databases) were searched by 2 reviewers until 29th October 2020. Articles were screened and narratively synthesized according to PRISMA-DTA guidelines based on predefined eligibility criteria. Articles that made direct reference-test comparisons to human clinicians were evaluated using the MI-CLAIM checklist. Risk of bias was assessed by JBI-DTA critical appraisal, and certainty of the evidence was evaluated using the GRADE approach. Information was extracted on the quantification method of dental pain and disease, the conditional characteristics of both training and test data cohorts, diagnostic outcomes, and diagnostic test comparisons with clinicians, where applicable. Results 34 eligible articles were found for data synthesis, of which 8 made direct comparisons to human clinicians. 7 papers scored over 13 (out of 15) on the MI-CLAIM checklist, and all papers scored 5 or more (out of 7) in the JBI-DTA appraisals. The GRADE approach revealed serious risks of bias and inconsistencies, with most studies containing more positive cases than their true prevalence in order to facilitate machine learning. Patient-perceived symptoms and clinical history were generally found to be less reliable than radiographs or histology for training accurate machine learning models. A low agreement level between the clinicians training the models was suggested to have a negative impact on prediction accuracy. Reference comparisons found nonspecialized clinicians with less than 3 years of experience to be at a disadvantage against trained models. Conclusion Machine learning in dental and orofacial healthcare has shown respectable results in diagnosing diseases with symptomatic pain and, with improved future iterations, can be used as a diagnostic aid in clinics. The current review did not internally analyze the machine learning models and their respective algorithms, nor did it consider the confounding variables and factors responsible for shaping the orofacial disorders that elicit pain.
|
38
|
Nishiyama M, Ishibashi K, Ariji Y, Fukuda M, Nishiyama W, Umemura M, Katsumata A, Fujita H, Ariji E. Performance of deep learning models constructed using panoramic radiographs from two hospitals to diagnose fractures of the mandibular condyle. Dentomaxillofac Radiol 2021; 50:20200611. [PMID: 33769840 DOI: 10.1259/dmfr.20200611] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
OBJECTIVE The present study aimed to verify the classification performance of deep learning (DL) models for diagnosing fractures of the mandibular condyle on panoramic radiographs using data sets from two hospitals, and to compare their internal and external validities. METHODS Panoramic radiographs of 100 condyles with and without fractures were collected from two hospitals, and a fivefold cross-validation method was employed to construct and evaluate the DL models. The internal and external validities of classification performance were evaluated as accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). RESULTS For internal validity, high classification performance was obtained, with AUC values of >0.85. Conversely, external validity across the two hospitals' data sets was low. Using the combined data sets from both hospitals, the DL model exhibited high performance, slightly superior or equal to the internal validity but without a statistically significant difference. CONCLUSION The constructed DL model can be clinically employed for diagnosing fractures of the mandibular condyle using panoramic radiographs. However, the domain-shift phenomenon should be considered when generalizing DL systems.
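The fivefold cross-validation scheme described above can be sketched generically as follows; this is a minimal illustration of the splitting strategy, not the authors' pipeline:

```python
from typing import List, Tuple

def five_fold_splits(n_samples: int) -> List[Tuple[List[int], List[int]]]:
    """Round-robin partition of sample indices into 5 folds; each fold
    serves once as the held-out test set, the rest as training data."""
    splits = []
    for k in range(5):
        test = list(range(k, n_samples, 5))
        train = [i for i in range(n_samples) if i % 5 != k]
        splits.append((train, test))
    return splits

# 100 condyle images, as in the study, yield 5 train/test splits of 80/20.
splits = five_fold_splits(100)
```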
Affiliation(s)
- Masako Nishiyama
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Kenichiro Ishibashi
- Department of Oral and Maxillofacial Surgery, Ogaki Municipal Hospital, Ogaki, Japan
| | - Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Wataru Nishiyama
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
| | - Masahiro Umemura
- Department of Oral and Maxillofacial Surgery, Ogaki Municipal Hospital, Ogaki, Japan
| | - Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
| | - Hiroshi Fujita
- Department of Electrical, Electronic and Computer Engineering, Gifu University, Gifu, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| |
|
39
|
Differential diagnosis of ameloblastoma and odontogenic keratocyst by machine learning of panoramic radiographs. Int J Comput Assist Radiol Surg 2021; 16:415-422. [PMID: 33547985 PMCID: PMC7946691 DOI: 10.1007/s11548-021-02309-0] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Accepted: 01/03/2021] [Indexed: 12/13/2022]
Abstract
Purpose The differentiation of ameloblastoma from odontogenic keratocyst directly affects the formulation of surgical plans, yet the results of differential diagnosis by imaging alone are not satisfactory. This paper proposes an algorithm based on a convolutional neural network (CNN) structure to significantly improve the classification accuracy of these two tumors. Methods A total of 420 digital panoramic radiographs from 401 patients were acquired from the Shanghai Ninth People’s Hospital. Each was cropped by radiologists to a patch serving as the region of interest (ROI). Inverse logarithm transformation and histogram equalization were then employed to increase the contrast of the ROI. To alleviate overfitting, random rotation and flip transforms were applied to the training dataset as data augmentation. We propose a CNN structure based on transfer learning, consisting of two parallel branches. The output of the network is a two-dimensional vector representing the predicted scores for ameloblastoma and odontogenic keratocyst, respectively. Results The proposed network achieved an accuracy of 90.36% (AUC = 0.946), with sensitivity and specificity of 92.88% and 87.80%, respectively. Two other networks, VGG-19 and ResNet-50, and a network trained from scratch were also used in the experiment, achieving accuracies of 80.72%, 78.31%, and 69.88%, respectively. Conclusions We propose an algorithm that significantly improves the differential diagnosis accuracy of ameloblastoma and odontogenic keratocyst, and that can provide a reliable recommendation to oral and maxillofacial specialists before surgery.
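The two contrast-enhancement steps mentioned in the Methods (inverse logarithm transformation and histogram equalization) can be sketched as below; this is one common formulation on an image normalized to [0, 1], not necessarily the authors' exact implementation:

```python
import numpy as np

def inverse_log_transform(img: np.ndarray) -> np.ndarray:
    """Expand bright intensities of a [0, 1] image:
    s = (e^r - 1) / (e - 1), the inverse of the log transform."""
    return np.expm1(img) / np.expm1(1.0)

def histogram_equalization(img: np.ndarray, bins: int = 256) -> np.ndarray:
    """Spread intensities of a [0, 1] image via its empirical CDF."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                      # normalize CDF to [0, 1]
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

# A low-contrast ramp patch gets stretched toward the full intensity range.
roi = np.linspace(0.4, 0.6, 64).reshape(8, 8)
enhanced = histogram_equalization(inverse_log_transform(roi))
```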
|