1
Sohrabniya F, Hassanzadeh-Samani S, Ourang SA, Jafari B, Farzinnia G, Gorjinejad F, Ghalyanchi-Langeroudi A, Mohammad-Rahimi H, Tichy A, Motamedian SR, Schwendicke F. Exploring a decade of deep learning in dentistry: A comprehensive mapping review. Clin Oral Investig 2025; 29:143. PMID: 39969623. DOI: 10.1007/s00784-025-06216-5.
Abstract
OBJECTIVES Artificial Intelligence (AI), particularly deep learning, has significantly impacted healthcare, including dentistry, by improving diagnostics, treatment planning, and prognosis prediction. This systematic mapping review explores the current applications of deep learning in dentistry, offering a comprehensive overview of trends, models, and their clinical significance. MATERIALS AND METHODS Following a structured methodology, relevant studies published from January 2012 to September 2023 were identified through database searches in PubMed, Scopus, and Embase. Key data, including clinical purpose, deep learning tasks, model architectures, and data modalities, were extracted for qualitative synthesis. RESULTS From 21,242 screened studies, 1,007 were included. Of these, 63.5% targeted diagnostic tasks, primarily with convolutional neural networks (CNNs). Classification (43.7%) and segmentation (22.9%) were the main methods, and imaging data, such as cone-beam computed tomography and orthopantomograms, were used in 84.4% of cases. Most studies (95.2%) applied fully supervised learning, emphasizing the need for annotated data. Pathology (21.5%), radiology (17.5%), and orthodontics (10.2%) were prominent fields, with 24.9% of studies relating to more than one specialty. CONCLUSION This review explores the advancements in deep learning in dentistry, particularly for diagnostics, and identifies areas for further improvement. While CNNs have been used successfully, it is essential to explore emerging model architectures, learning approaches, and ways to obtain diverse and reliable data. Furthermore, fostering trust among all stakeholders by advancing explainable AI and addressing ethical considerations is crucial for transitioning AI from research to clinical practice. CLINICAL RELEVANCE This review offers a comprehensive overview of a decade of deep learning in dentistry, showcasing its significant growth in recent years.
By mapping its key applications and identifying research trends, it provides a valuable guide for future studies and highlights emerging opportunities for advancing AI-driven dental care.
Affiliation(s)
- Fatemeh Sohrabniya
- ITU/WHO/WIPO Global Initiative on Artificial Intelligence for Health - Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Sahel Hassanzadeh-Samani
- ITU/WHO/WIPO Global Initiative on Artificial Intelligence for Health - Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Seyed AmirHossein Ourang
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Bahare Jafari
- Division of Orthodontics, The Ohio State University, Columbus, OH, 43210, USA
- Fatemeh Gorjinejad
- ITU/WHO/WIPO Global Initiative on Artificial Intelligence for Health - Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Azadeh Ghalyanchi-Langeroudi
- Medical Physics & Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technology and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Hossein Mohammad-Rahimi
- Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, Aarhus C, 8000, Aarhus, Denmark
- Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
- Antonin Tichy
- Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
- Institute of Dental Medicine, First Faculty of Medicine of the Charles University and General University Hospital, Prague, Czech Republic
- Saeed Reza Motamedian
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
- Falk Schwendicke
- Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
2
Zhang R, Mo H, Hu W, Jie B, Xu L, He Y, Ke J, Wang J. Super-resolution landmark detection networks for medical images. Comput Biol Med 2024; 182:109095. PMID: 39236661. DOI: 10.1016/j.compbiomed.2024.109095.
Abstract
Craniomaxillofacial (CMF) and nasal landmark detection are fundamental components of computer-assisted surgery. Medical landmark detection methods fall into regression-based and heatmap-based approaches, with heatmap-based methods forming one of the main methodological branches. These methods rely on high-resolution (HR) features, which carry more location information, to reduce the network error caused by sub-pixel localization. Previous studies extracted HR patches around each landmark from downsampled images via object detection and subsequently fed them into the network to obtain HR features, but such complex multistage pipelines affect accuracy. Moreover, the network error introduced by the downsampling and upsampling operations used during training, which interpolate low-resolution features to generate HR features or the predicted heatmap, remains significant. We propose super-resolution landmark detection networks (SRLD-Net) and a super-resolution UNet (SR-UNet) to reduce network error effectively. SRLD-Net uses a pyramid pooling block, a pyramid fusion block, and a super-resolution fusion block to combine global prior knowledge with multi-scale local features; similarly, SR-UNet adopts a pyramid pooling block and a super-resolution block. These components markedly improve the representation learning ability of the proposed methods. A super-resolution upsampling layer is then used to generate a detailed predicted heatmap. Our proposed networks were compared with state-of-the-art methods on craniomaxillofacial, nasal, and mandibular molar datasets, demonstrating better performance. The mean errors of 18 CMF, 6 nasal, and 14 mandibular landmarks were 1.39 ± 1.04, 1.31 ± 1.09, and 2.01 ± 4.33 mm, respectively. These results indicate that super-resolution methods have great potential in medical landmark detection tasks. This paper provides two effective heatmap-based landmark detection networks; the code is released at https://github.com/Runshi-Zhang/SRLD-Net.
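For context on the heatmap-based branch this abstract describes, the final decoding step can be sketched in a few lines. This is an illustrative sketch of generic heatmap decoding with a common quarter-pixel refinement heuristic, not the SRLD-Net or SR-UNet implementation; the function name and the synthetic Gaussian heatmap are assumptions made for the example.

```python
import numpy as np

def decode_landmark(heatmap: np.ndarray) -> tuple[float, float]:
    """Decode one landmark (row, col) from a predicted heatmap.

    Takes the argmax as the integer peak, then shifts a quarter
    pixel towards the larger neighbour along each axis, a standard
    sub-pixel heuristic for heatmap-based landmark detection.
    """
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    fy, fx = float(y), float(x)
    if 0 < x < w - 1:
        fx += 0.25 * np.sign(heatmap[y, x + 1] - heatmap[y, x - 1])
    if 0 < y < h - 1:
        fy += 0.25 * np.sign(heatmap[y + 1, x] - heatmap[y - 1, x])
    return float(fy), float(fx)

# A synthetic Gaussian response centred exactly on pixel (12, 20):
yy, xx = np.mgrid[0:64, 0:64]
hm = np.exp(-((yy - 12.0) ** 2 + (xx - 20.0) ** 2) / (2 * 2.0 ** 2))
print(decode_landmark(hm))  # (12.0, 20.0)
```

The sub-pixel error that this refinement only partially removes is exactly the error the paper's super-resolution upsampling layer targets.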
Affiliation(s)
- Runshi Zhang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Hao Mo
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Weini Hu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Bimeng Jie
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Lin Xu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Yang He
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Jia Ke
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China.
3
Lee Y, Pyeon JH, Han SH, Kim NJ, Park WJ, Park JB. A Comparative Study of Deep Learning and Manual Methods for Identifying Anatomical Landmarks through Cephalometry and Cone-Beam Computed Tomography: A Systematic Review and Meta-Analysis. Applied Sciences 2024; 14:7342. DOI: 10.3390/app14167342.
Abstract
Background: Researchers have noted that the advent of artificial intelligence (AI) heralds a promising era, with potential to significantly enhance diagnostic and predictive abilities in clinical settings. The aim of this meta-analysis is to evaluate the discrepancies in identifying anatomical landmarks between AI and manual approaches. Methods: A comprehensive search strategy was employed, incorporating controlled vocabulary (MeSH) and free-text terms. This search was conducted by two reviewers to identify published systematic reviews. Three major electronic databases, namely, Medline via PubMed, the Cochrane database, and Embase, were searched up to May 2024. Results: Initially, 369 articles were identified. After conducting a comprehensive search and applying strict inclusion criteria, a total of ten studies were deemed eligible for inclusion in the meta-analysis. The results showed that the average difference in detecting anatomical landmarks between artificial intelligence and manual approaches was 0.35, with a 95% confidence interval (CI) ranging from −0.09 to 0.78. Additionally, the overall effect between the two groups was found to be insignificant. Upon further analysis of the subgroup of cephalometric radiographs, it was determined that there were no significant differences between the two groups in terms of detecting anatomical landmarks. Similarly, the subgroup of cone-beam computed tomography (CBCT) revealed no significant differences between the groups. Conclusions: In summary, the study concluded that the use of artificial intelligence is just as effective as the manual approach when it comes to detecting anatomical landmarks, both in general and in specific contexts such as cephalometric radiographs and CBCT evaluations.
Affiliation(s)
- Yoonji Lee
- Orthodontics, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Jeong-Hye Pyeon
- Orthodontics, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Sung-Hoon Han
- Department of Orthodontics, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Na Jin Kim
- Medical Library, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Won-Jong Park
- Department of Oral and Maxillofacial Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Jun-Beom Park
- Department of Periodontics, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Dental Implantology, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Department of Medicine, Graduate School, The Catholic University of Korea, Seoul 06591, Republic of Korea
4
Hendrickx J, Gracea RS, Vanheers M, Winderickx N, Preda F, Shujaat S, Jacobs R. Can artificial intelligence-driven cephalometric analysis replace manual tracing? A systematic review and meta-analysis. Eur J Orthod 2024; 46:cjae029. PMID: 38895901. PMCID: PMC11185929. DOI: 10.1093/ejo/cjae029.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to investigate the accuracy and efficiency of artificial intelligence (AI)-driven automated landmark detection for cephalometric analysis on two-dimensional (2D) lateral cephalograms and three-dimensional (3D) cone-beam computed tomographic (CBCT) images. SEARCH METHODS An electronic search was conducted in the following databases: PubMed, Web of Science, Embase, and grey literature, with a search timeline extending up to January 2024. SELECTION CRITERIA Studies that employed AI for 2D or 3D cephalometric landmark detection were included. DATA COLLECTION AND ANALYSIS The selection of studies, data extraction, and quality assessment of the included studies were performed independently by two reviewers. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. A meta-analysis was conducted to evaluate the accuracy of 2D landmark identification based on both mean radial error and standard error. RESULTS Following the removal of duplicates, title and abstract screening, and full-text reading, 34 publications were selected. Amongst these, 27 studies evaluated the accuracy of AI-driven automated landmarking on 2D lateral cephalograms, while 7 studies involved 3D-CBCT images. A meta-analysis, based on the success detection rate of landmark placement on 2D images, revealed that the error was below the clinically acceptable threshold of 2 mm (1.39 mm; 95% confidence interval: 0.85-1.92 mm). For 3D images, meta-analysis could not be conducted due to significant heterogeneity amongst the study designs. However, qualitative synthesis indicated that the mean error of landmark detection on 3D images ranged from 1.0 to 5.8 mm. Both automated 2D and 3D landmarking proved to be time-efficient, taking less than 1 min. Most studies exhibited a high risk of bias in data selection (n = 27) and reference standard (n = 29). CONCLUSION The performance of AI-driven cephalometric landmark detection on both 2D cephalograms and 3D-CBCT images showed potential in terms of accuracy and time efficiency. However, the generalizability and robustness of these AI systems could benefit from further improvement. REGISTRATION PROSPERO: CRD42022328800.
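The two headline statistics in this abstract, mean radial error and the success detection rate at the clinically acceptable 2 mm threshold, are simple to compute once predicted and reference coordinates are available. The sketch below illustrates them under assumed names and toy millimetre coordinates; it is not the meta-analysis code.

```python
import numpy as np

def landmark_accuracy(pred_mm, truth_mm, threshold_mm=2.0):
    """Mean radial error (MRE) and success detection rate (SDR)
    for predicted vs. reference landmarks given in millimetres.

    pred_mm / truth_mm: arrays of shape (n_landmarks, 2) for 2D
    cephalograms (or (n_landmarks, 3) for CBCT volumes).
    """
    radial = np.linalg.norm(pred_mm - truth_mm, axis=1)   # per-landmark error
    mre = float(radial.mean())
    sdr = float((radial <= threshold_mm).mean())          # fraction within threshold
    return mre, sdr

pred = np.array([[10.5, 20.0], [31.0, 40.0], [52.5, 60.0]])
truth = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
mre, sdr = landmark_accuracy(pred, truth)
print(round(mre, 2), round(sdr, 2))  # 1.33 0.67
```

With these definitions, the pooled 2D result above reads as an MRE of 1.39 mm, i.e. below the 2 mm threshold used for the SDR.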
Affiliation(s)
- Julie Hendrickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Rellyca Sola Gracea
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
- Michiel Vanheers
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Nicolas Winderickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Flavia Preda
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh 14611, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, 141 04 Stockholm, Sweden
5
Ayupova I, Makhota A, Kolsanov A, Popov N, Davidyuk M, Nekrasov I, Romanova P, Khamadeeva A. Capabilities of Cephalometric Methods to Study X-rays in Three-Dimensional Space (Review). Sovrem Tekhnologii Med 2024; 16:62-73. PMID: 39650278. PMCID: PMC11618529. DOI: 10.17691/stm2024.16.3.07.
Abstract
The aim of this study was to systematically review modern methods of three-dimensional cephalometric analysis and to assess their efficiency. Scientific papers describing modern diagnostic methods for maxillofacial anomalies (MFA) in dental practice were searched in the PubMed, Web of Science, and eLIBRARY.RU databases, as well as in Google Scholar, using the following key words: three-dimensional cephalometry, three-dimensional cephalometric analysis, orthodontics, asymmetric deformities, maxillofacial anomalies, 3D cephalometry, CBCT. The literature analysis showed that many methods of cephalometric analysis described as three-dimensional in fact use two-dimensional reformats for measurements. True three-dimensional methods are not yet applicable for practical purposes because the underlying studies are fragmentary. There is no consensus on the choice of landmarks and reference planes, which makes diagnosis difficult and costly. The major issue is the lack of uniform standards for three-dimensional measurements of the anatomical structures of the skull against which acquired data could be compared. In this regard, the use of artificial neural networks and deep learning technologies to process three-dimensional images and to determine standard indicators appears promising.
Affiliation(s)
- I.O. Ayupova
- MD, PhD, Associate Professor, Department of Pediatric Dentistry and Orthodontics; Samara State Medical University, 89 Chapayevskaya St., Samara, 443099, Russia
- A.Yu. Makhota
- Student, Institute of Dentistry; Samara State Medical University, 89 Chapayevskaya St., Samara, 443099, Russia
- A.V. Kolsanov
- MD, DSc, Professor of the Russian Academy of Sciences, Head of the Department of Operative Surgery and Clinical Anatomy with Innovation Technology Course, Rector; Samara State Medical University, 89 Chapayevskaya St., Samara, 443099, Russia
- N.V. Popov
- MD, DSc, Associate Professor, Department of Pediatric Dentistry and Orthodontics; Samara State Medical University, 89 Chapayevskaya St., Samara, 443099, Russia
- M.A. Davidyuk
- Bachelor of Computer Science; University of the People, 595 E. Colorado Boulevard, Suite 623, Pasadena, California, 91101, USA
- I.A. Nekrasov
- Student, Faculty of Dentistry; The Patrice Lumumba Peoples’ Friendship University of Russia, 6 Miklukho-Maklaya St., Moscow, 117198, Russia
- P.A. Romanova
- Student, Faculty of Dentistry; Tver State Medical University, 4 Sovetskaya St., Tver, 170100, Russia
- A.M. Khamadeeva
- MD, DSc, Professor, Department of Pediatric Dentistry and Orthodontics; Samara State Medical University, 89 Chapayevskaya St., Samara, 443099, Russia
6
Raj G, Raj M, Saigo L. Accuracy of conventional versus cone-beam CT-synthesised lateral cephalograms for cephalometric analysis: A systematic review. J Orthod 2024; 51:160-176. PMID: 37340975. DOI: 10.1177/14653125231178038.
Abstract
OBJECTIVE To assess the accuracy of cone-beam computed tomography (CBCT)-synthesised lateral cephalograms (CSLCs) compared with conventional lateral cephalograms for cephalometric analysis in human participants and skull models. METHODS The authors performed a search of the PubMed, Scopus, Google Scholar and Embase databases on 4 October 2021. Included studies met the following criteria: published in English; compared conventional lateral cephalograms and CSLCs; assessed hard- and soft-tissue landmarks; and were performed on human or skull models. Data extraction from eligible studies was performed by two independent reviewers. The quality of evidence was assessed using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for diagnostic accuracy studies. RESULTS A total of 20 eligible articles were included in this systematic review. Of these 20 studies, 17 presented with a low risk of bias, while three were found to have a moderate risk of bias. Hard- and soft-tissue analyses were evaluated for each imaging modality. The findings reveal that CSLCs are accurate and comparable to conventional lateral cephalograms for cephalometric analysis and demonstrate good inter-observer reliability. Four studies reported a higher accuracy with CSLCs. CONCLUSION Overall, the diagnostic accuracy and reproducibility of CSLCs were comparable to conventional lateral cephalograms in cephalometric analysis. Accordingly, patients who have an existing CBCT scan do not need an additional lateral cephalogram, minimising unnecessary radiation exposure, expenses and time for the patient. Larger voxel sizes and low-dose CBCT protocols can be considered to minimise radiation exposure. REGISTRATION This study was registered with PROSPERO (CRD42021282019).
Affiliation(s)
- Grace Raj
- National Dental Centre Singapore, Singapore
- Mary Raj
- National Dental Centre Singapore, Singapore
- Leonardo Saigo
- Department of Oral & Maxillofacial Surgery, National Dental Centre Singapore, Singapore
7
Yang S, Kim KD, Ariji E, Kise Y. Generative adversarial networks in dental imaging: a systematic review. Oral Radiol 2024; 40:93-108. PMID: 38001347. DOI: 10.1007/s11282-023-00719-1.
Abstract
OBJECTIVES This systematic review on generative adversarial network (GAN) architectures for dental image analysis provides readers with a comprehensive overview of current GAN trends in dental imagery and potential future applications. METHODS Electronic databases (PubMed/MEDLINE, Scopus, Embase, and Cochrane Library) were searched to identify studies involving GANs for dental image analysis. Eighteen full-text articles describing the applications of GANs in dental imagery were reviewed. Risk of bias and applicability concerns were assessed using the QUADAS-2 tool. RESULTS GANs were used for various imaging modalities, including two-dimensional and three-dimensional images. In dental imaging, GANs were utilized for tasks such as artifact reduction, denoising, super-resolution, domain transfer, image generation for augmentation, outcome prediction, and identification. The generated images were incorporated into tasks such as landmark detection, object detection, and classification. Because of heterogeneity among the studies, a meta-analysis could not be conducted. Most studies (72%) had a low risk of bias in all four domains. However, only three (17%) studies had a low risk of applicability concerns. CONCLUSIONS This extensive analysis of GANs in dental imaging highlighted their broad application potential within the dental field. Future studies should address limitations related to the stability, repeatability, and overall interpretability of GAN architectures. By overcoming these challenges, the applicability of GANs in dentistry can be enhanced, ultimately benefiting the dental field in its use of GANs and artificial intelligence.
Affiliation(s)
- Sujin Yang
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Kee-Deog Kim
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan.
8
Umer F, Adnan N. Generative artificial intelligence: synthetic datasets in dentistry. BDJ Open 2024; 10:13. PMID: 38429258. PMCID: PMC10907705. DOI: 10.1038/s41405-024-00198-4.
Abstract
INTRODUCTION Artificial Intelligence (AI) algorithms, particularly Deep Learning (DL) models, are known to be data intensive. This has increased the demand for digital data in all domains of healthcare, including dentistry. The main hindrance to the progress of AI is access to diverse datasets for training DL models that ensure optimal performance, comparable to subject experts. However, the administration of these traditionally acquired datasets is challenging due to privacy regulations and the extensive manual annotation required from subject experts. Biases such as ethical, socioeconomic, and class imbalances are also incorporated during the curation of these datasets, limiting their overall generalizability. These challenges prevent their accrual at a larger scale for training DL models. METHODS Generative AI techniques can be useful in producing Synthetic Datasets (SDs) that overcome the issues affecting traditionally acquired datasets. Variational autoencoders, generative adversarial networks, and diffusion models have been used to generate SDs. The following text reviews these generative AI techniques and their operation. It discusses the opportunities offered by SDs and their challenges, along with potential solutions, to improve the understanding of healthcare professionals working in AI research. CONCLUSION Synthetic data customized to the needs of researchers can be produced to train robust AI models. These models, having been trained on such diverse datasets, will be applicable for dissemination across countries. However, the limitations associated with SDs need to be better understood, and attempts should be made to address these concerns prior to their widespread use.
Affiliation(s)
- Fahad Umer
- Operative Dentistry and Endodontics, Department of Surgery, Aga Khan University Hospital, Karachi, Pakistan
- Niha Adnan
- Operative Dentistry and Endodontics, Department of Surgery, Aga Khan University Hospital, Karachi, Pakistan.
9
Yang S, Kim KD, Ariji E, Takata N, Kise Y. Evaluating the performance of generative adversarial network-synthesized periapical images in classifying C-shaped root canals. Sci Rep 2023; 13:18038. PMID: 37865655. PMCID: PMC10590373. DOI: 10.1038/s41598-023-45290-1.
Abstract
This study evaluated the performance of generative adversarial network (GAN)-synthesized periapical images for classifying C-shaped root canals, which are challenging to diagnose because of their complex morphology. GANs have emerged as a promising technique for generating realistic images, offering a potential solution for data augmentation in scenarios with limited training datasets. Periapical images were synthesized using the StyleGAN2-ADA framework, and their quality was evaluated based on the average Fréchet inception distance (FID) and the visual Turing test. The average FID was found to be 35.353 (± 4.386) for synthesized C-shaped canal images and 25.471 (± 2.779) for non-C-shaped canal images. The visual Turing test conducted by two radiologists on 100 randomly selected images revealed that distinguishing between real and synthetic images was difficult. These results indicate that GAN-synthesized images exhibit satisfactory visual quality. The classification performance of the neural network, when augmented with GAN data, showed improvements compared with using real data alone, and could be advantageous in addressing data conditions with class imbalance. GAN-generated images have proven to be an effective data augmentation method, addressing the limitations of limited training data and computational resources in diagnosing dental anomalies.
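The FID reported above is the Fréchet distance between two Gaussians fitted to Inception-network features of real and synthesized images. A minimal NumPy sketch of that distance, assuming the feature means and covariances have already been extracted (this is not the paper's or StyleGAN2-ADA's evaluation code):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    d^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)).

    Tr((S1 S2)^(1/2)) is obtained from the eigenvalues of S1 @ S2,
    which are real and non-negative when S1, S2 are covariance
    matrices, so no full matrix square root is needed.
    """
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2).real
    tr_covmean = np.sum(np.sqrt(np.maximum(eigvals, 0.0)))
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_covmean)

# Toy 2-D feature statistics: shifted mean, scaled covariance.
mu1, sigma1 = np.zeros(2), np.eye(2)
mu2, sigma2 = np.array([1.0, 0.0]), 4.0 * np.eye(2)
print(frechet_distance(mu1, sigma1, mu2, sigma2))  # 3.0
```

In practice the statistics come from thousands of Inception feature vectors, and a lower value (as for the non-C-shaped images above) indicates synthesized images closer to the real distribution.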
Affiliation(s)
- Sujin Yang
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Kee-Deog Kim
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Natsuho Takata
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan.
10
Romero-Tapiero N, Giraldo-Mejía A, Herrera-Rubio A, Aristizábal-Pérez JF. Concordance and reproducibility in the location of reference points for a volumetric craniofacial analysis: Cross-sectional study. J Dent Res Dent Clin Dent Prospects 2023; 17:87-95. PMID: 37649819. PMCID: PMC10462468. DOI: 10.34172/joddd.2023.37025.
Abstract
Background Considering the limitations of visualization that persist even with radiographs, cone-beam computed tomography (CBCT) has become more attractive for diagnosing and proposing an assertive treatment plan. This study aimed to evaluate the intra- and interobserver reproducibility and concordance of 31 reference points that we described, considering visualization tools and the three planes of space, in bimaxillary CBCT. Methods Three observers located the 31 reference points in triplicate in the CBCT scans of six healthy patients. The Friedman test was used to compare intraobserver paired samples, and interobserver concordance was determined by the intraclass correlation coefficient (ICC), with ranges of >0.75 (excellent), 0.60-0.74 (good), 0.40-0.59 (sufficient), and <0.40 (poor). The P value was set at <0.05. Results A high ICC (>0.75) was obtained when comparing the x, y, and z values at the location of the landmark points. In the interobserver evaluation, the ICC was excellent (>0.75) for 81.7% of measurements and poor (<0.40) for 7.5%. The data showed that 25 points had excellent concordance (ICC >0.75) on the x-plane, 25 on the y-plane, and 26 on the z-plane. Conclusion The intraobserver concordance analysis indicated that the location of anatomical reference points on bimaxillary CBCT is performed with great reproducibility when their location is interpreted with a clear description in the three planes of space. The complexity of achieving good precision in the manual marking of reference points, caused by the convexities of the anatomical structures involved, might explain the variability found. Systematized location of the reference points would help reduce this variability.
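The ICC bands this study applies can be captured in a small helper for readers reproducing the classification. The function name is an assumption, and treating the 0.75 boundary as excellent is also an assumption, since the stated ranges (>0.75 excellent, 0.60-0.74 good) leave that exact value unspecified.

```python
def icc_category(icc: float) -> str:
    """Classify an intraclass correlation coefficient using the
    bands applied in the study: >0.75 excellent, 0.60-0.74 good,
    0.40-0.59 sufficient, <0.40 poor (0.75 itself treated as
    excellent here by assumption)."""
    if icc >= 0.75:
        return "excellent"
    if icc >= 0.60:
        return "good"
    if icc >= 0.40:
        return "sufficient"
    return "poor"

for value in (0.81, 0.65, 0.45, 0.30):
    print(value, icc_category(value))
```

Under this banding, the study's interobserver result reads as 81.7% of measurements falling in the excellent band and 7.5% in the poor band.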
Affiliation(s)
- Natali Romero-Tapiero
- Department of Orthodontics, Faculty of Health, Universidad del Valle, Cali, Colombia
- Andrés Giraldo-Mejía
- Department of Orthodontics, Faculty of Health, Universidad CES, Medellín, Colombia
- Adriana Herrera-Rubio
- Department of Orthodontics, Faculty of Health, Universidad del Valle, Cali, Colombia
11
de Queiroz Tavares Borges Mesquita G, Vieira WA, Vidigal MTC, Travençolo BAN, Beaini TL, Spin-Neto R, Paranhos LR, de Brito Júnior RB. Artificial Intelligence for Detecting Cephalometric Landmarks: A Systematic Review and Meta-analysis. J Digit Imaging 2023; 36:1158-1179. [PMID: 36604364 PMCID: PMC10287619 DOI: 10.1007/s10278-022-00766-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 11/19/2022] [Accepted: 12/19/2022] [Indexed: 01/07/2023] Open
Abstract
Using computer vision through artificial intelligence (AI) is one of the main technological advances in dentistry. However, the existing literature on the practical application of AI for detecting cephalometric landmarks of orthodontic interest in digital images is heterogeneous, and there is no consensus regarding accuracy and precision. Thus, this review evaluated the use of artificial intelligence for detecting cephalometric landmarks in digital imaging examinations and compared it to manual annotation of landmarks. An electronic search was performed in nine databases to find studies that analyzed the detection of cephalometric landmarks in digital imaging examinations with AI and manual landmarking. Two reviewers selected the studies, extracted the data, and assessed the risk of bias using QUADAS-2. Random-effects meta-analyses determined the agreement and precision of AI compared to manual detection at a 95% confidence interval. The electronic search located 7410 studies, of which 40 were included. Only three studies presented a low risk of bias for all domains evaluated. The meta-analysis showed AI agreement rates of 79% (95% CI: 76-82%, I2 = 99%) and 90% (95% CI: 87-92%, I2 = 99%) for the thresholds of 2 and 3 mm, respectively, with a mean divergence of 2.05 (95% CI: 1.41-2.69, I2 = 10%) compared to manual landmarking. The menton cephalometric landmark showed the lowest divergence between both methods (SMD 1.17; 95% CI: 0.82 to 1.53; I2 = 0%). Based on very low certainty of evidence, the application of AI was promising for automatically detecting cephalometric landmarks, but further studies should focus on testing its strength and validity in different samples.
Affiliation(s)
- Walbert A Vieira
- Department of Restorative Dentistry, Endodontics Division, School of Dentistry of Piracicaba, State University of Campinas, Piracicaba, São Paulo, Brazil
- Thiago Leite Beaini
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil
- Rubens Spin-Neto
- Department of Dentistry and Oral Health, Section for Oral Radiology, Aarhus University, Aarhus C, Denmark
- Luiz Renato Paranhos
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil
12
Artificial intelligence models for clinical usage in dentistry with a focus on dentomaxillofacial CBCT: a systematic review. Oral Radiol 2023; 39:18-40. [PMID: 36269515 DOI: 10.1007/s11282-022-00660-9] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 09/29/2022] [Indexed: 01/05/2023]
Abstract
This study aimed at performing a systematic review of the literature on the application of artificial intelligence (AI) in dental and maxillofacial cone beam computed tomography (CBCT) and providing comprehensive descriptions of current technical innovations to assist future researchers and dental professionals. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA) Statement was followed, and the study's protocol was prospectively registered. The following databases were searched, based on MeSH and Emtree terms: PubMed/MEDLINE, Embase, and Web of Science. The search strategy yielded 1473 articles, of which 59 publications assessing the use of AI on CBCT images in dentistry were included. According to the PROBAST guidelines for study design, seven papers reported only external validation, 11 reported both model building and validation on an external dataset, and 40 studies focused exclusively on model development. The AI models employed were mainly deep learning models (42 studies), while the other 17 papers used conventional approaches, such as statistical-shape and active shape models, and traditional machine learning methods, such as thresholding-based methods, support vector machines, k-nearest neighbors, decision trees, and random forests. Supervised or semi-supervised learning was utilized in the majority (96.62%) of studies, and unsupervised learning was used in two (3.38%). Of the included studies, 52 had a high risk of bias (ROB), two had a low ROB, and four had an unclear rating. Applications based on AI have the potential to improve oral healthcare quality; promote personalized, predictive, preventative, and participatory dentistry; and expedite dental procedures.
13
Analysis of Deep Learning Techniques for Dental Informatics: A Systematic Literature Review. Healthcare (Basel) 2022; 10:healthcare10101892. [PMID: 36292339 PMCID: PMC9602147 DOI: 10.3390/healthcare10101892] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Revised: 08/30/2022] [Accepted: 08/31/2022] [Indexed: 12/04/2022] Open
Abstract
Within the ever-growing healthcare industry, dental informatics is a burgeoning field of study. One of the major obstacles to the healthcare system's transformation is obtaining knowledge and insightful data from complex, high-dimensional, and diverse sources. Modern biomedical research, for instance, has seen an increase in the use of complex, heterogeneous, poorly documented, and generally unstructured electronic health records, imaging, sensor data, and text. Certain restrictions remained even after many current techniques were used to extract more robust and useful features from the data for analysis. The most recent breakthroughs in deep learning technology provide new, effective paradigms for building end-to-end learning models from complex data. Therefore, the current study aims to examine the most recent research on the use of deep learning techniques for dental informatics problems and recommends creating comprehensive, meaningful, and interpretable structures that might benefit the healthcare industry. We also draw attention to some drawbacks, highlight the need for better technique development, and provide new perspectives on this exciting development in the field.
14
Schwendicke F, Chaurasia A, Arsiwala L, Lee JH, Elhennawy K, Jost-Brinkmann PG, Demarco F, Krois J. Deep learning for cephalometric landmark detection: systematic review and meta-analysis. Clin Oral Investig 2021; 25:4299-4309. [PMID: 34046742 PMCID: PMC8310492 DOI: 10.1007/s00784-021-03990-w] [Citation(s) in RCA: 72] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 05/14/2021] [Indexed: 10/31/2022]
Abstract
OBJECTIVES Deep learning (DL) has been increasingly employed for automated landmark detection, e.g., for cephalometric purposes. We performed a systematic review and meta-analysis to assess the accuracy of, and underlying evidence for, DL for cephalometric landmark detection on 2-D and 3-D radiographs. METHODS Diagnostic accuracy studies published in 2015-2020 in Medline/Embase/IEEE/arXiv and employing DL for cephalometric landmark detection were identified and extracted by two independent reviewers. Random-effects meta-analysis, subgroup analysis, and meta-regression were performed, and study quality was assessed using QUADAS-2. The review was registered (PROSPERO no. 227498). DATA From 321 identified records, 19 studies (published 2017-2020) were included, all employing convolutional neural networks, mainly on 2-D lateral radiographs (n=15), using data from publicly available datasets (n=12), and testing the detection of a mean of 30 (SD: 25; range: 7-93) landmarks. The reference test was established by two experts (n=11), one expert (n=4), three experts (n=3), or a set of annotators (n=1). Risk of bias was high, and applicability concerns were detected for most studies, mainly regarding data selection and conduct of the reference test. Landmark prediction error centered around a 2-mm error threshold (mean: -0.581 mm; 95% CI: -1.264 to 0.102 mm). The proportion of landmarks detected within this 2-mm threshold was 0.799 (95% CI: 0.770 to 0.824). CONCLUSIONS DL shows relatively high accuracy for detecting landmarks on cephalometric imagery. The overall body of evidence is consistent but suffers from a high risk of bias. Demonstrating the robustness and generalizability of DL for landmark detection is needed. CLINICAL SIGNIFICANCE Existing DL models show consistent and largely high accuracy for automated detection of cephalometric landmarks. The majority of studies so far focused on 2-D imagery; data on 3-D imagery are sparse but promising. Future studies should focus on demonstrating the generalizability, robustness, and clinical usefulness of DL for this objective.
Affiliation(s)
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Akhilanand Chaurasia
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Department of Oral Medicine and Radiology, King George's Medical University, Lucknow, India
- Lubaina Arsiwala
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Jae-Hong Lee
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Department of Periodontology, Daejeon Dental Hospital, Institute of Wonkwang Dental Research, Wonkwang University College of Dentistry, Daejeon, Korea
- Karim Elhennawy
- Department of Orthodontics, Dentofacial Orthopedics and Pedodontics, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Paul-Georg Jost-Brinkmann
- Department of Orthodontics, Dentofacial Orthopedics and Pedodontics, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Flavio Demarco
- Post-Graduate Program in Epidemiology, Federal University of Pelotas, Pelotas, Brazil
- Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany