1
Pul U, Schwendicke F. Artificial intelligence for detecting periapical radiolucencies: A systematic review and meta-analysis. J Dent 2024:105104. PMID: 38851523. DOI: 10.1016/j.jdent.2024.105104.
Abstract
OBJECTIVES Dentists' diagnostic accuracy in detecting periapical radiolucency varies considerably. This systematic review and meta-analysis aimed to investigate the accuracy of artificial intelligence (AI) for detecting periapical radiolucency. DATA Studies reporting diagnostic accuracy and utilizing AI for periapical radiolucency detection, published until November 2023, were eligible for inclusion. Meta-analysis was conducted using the online MetaDTA tool to calculate pooled sensitivity and specificity. Risk of bias was evaluated using QUADAS-2. SOURCES A comprehensive search was conducted in the PubMed/MEDLINE, ScienceDirect, and Institute of Electrical and Electronics Engineers (IEEE) Xplore databases. STUDY SELECTION We identified 210 articles, of which 24 met the criteria for inclusion in the review. All but one study used one type of convolutional neural network. The body of evidence comes with an overall unclear to high risk of bias and several applicability concerns. Four of the twenty-four studies were included in a meta-analysis. AI showed a pooled sensitivity and specificity of 0.94 (95% CI = 0.90-0.96) and 0.96 (95% CI = 0.91-0.98), respectively. CONCLUSIONS AI demonstrated high specificity and sensitivity for detecting periapical radiolucencies. However, the current landscape suggests a need for diverse study designs beyond traditional diagnostic accuracy studies. Prospective real-life randomized controlled trials using heterogeneous data are needed to demonstrate the true value of AI. CLINICAL SIGNIFICANCE Artificial intelligence tools seem to have the potential to support detecting periapical radiolucencies on imagery. Notably, nearly all studies did not test fully fledged software systems but measured the mere accuracy of AI models in diagnostic accuracy studies.
The true value of currently available AI-based software for lesion detection on both 2D and 3D radiographs remains uncertain.
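For context, the sensitivity and specificity pooled here are computed per study from each 2×2 diagnostic table before being combined in a bivariate model. A minimal sketch of the per-study quantities, with hypothetical counts chosen for illustration only:

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

# Hypothetical study counts: 90 TP, 4 FP, 6 FN, 96 TN
sens, spec = sens_spec(90, 4, 6, 96)
print(round(sens, 2), round(spec, 2))
```

The pooling step itself (done here with MetaDTA) fits a bivariate random-effects model across studies rather than simply averaging these per-study values.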
Affiliation(s)
- Utku Pul
- University for Digital Technologies in Medicine and Dentistry, Wiltz, Luxembourg
- Falk Schwendicke
- Conservative Dentistry and Periodontology, LMU Klinikum, Munich, Germany

2
Nordblom N, Büttner M, Schwendicke F. Artificial Intelligence in Orthodontics: Critical Review. J Dent Res 2024; 103:577-584. PMID: 38682436. PMCID: PMC11118788. DOI: 10.1177/00220345241235606.
Abstract
With increasing digitalization in orthodontics, certain orthodontic manufacturing processes such as the fabrication of indirect bonding trays, aligner production, or wire bending can be automated. However, orthodontic treatment planning and evaluation remains a specialist's task and responsibility. As the prediction of growth in orthodontic patients and of the response to orthodontic treatment is inherently complex and individual, orthodontists make use of features gathered from longitudinal, multimodal, and standardized orthodontic data sets. Currently, these data sets are used by the orthodontist to make informed, rule-based treatment decisions. In research, artificial intelligence (AI) has been successfully applied to assist orthodontists with the extraction of relevant data from such data sets. Here, AI has been applied to the analysis of clinical imagery, such as automated landmark detection in lateral cephalograms, but also to the evaluation of intraoral scans or photographic data. Furthermore, AI is applied to help orthodontists with decision support for treatment decisions such as the need for orthognathic surgery or for orthodontic tooth extractions. One major challenge in current AI research in orthodontics is limited generalizability, as most studies use unicentric data with high risks of bias. Moreover, comparing AI across different studies and tasks is virtually impossible, as both outcomes and outcome metrics vary widely and the underlying data sets are not standardized. Notably, only a few AI applications in orthodontics have reached full clinical maturity and regulatory approval, and researchers in the field are tasked with tackling real-world evaluation and implementation of AI into the orthodontic workflow.
Affiliation(s)
- N.F. Nordblom
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
- M. Büttner
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
- F. Schwendicke
- Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University of Munich, Munich, Germany

3
Aminoshariae A, Nosrat A, Nagendrababu V, Dianat O, Mohammad-Rahimi H, O'Keefe AW, Setzer FC. Artificial Intelligence in Endodontic Education. J Endod 2024; 50:562-578. PMID: 38387793. DOI: 10.1016/j.joen.2024.02.011.
Abstract
AIMS Future dental and endodontic education must adapt to the current digitalized healthcare system in a hyper-connected world. The purpose of this scoping review was to investigate the ways an endodontic education curriculum could benefit from the implementation of artificial intelligence (AI) and overcome the limitations of this technology in the delivery of healthcare to patients. METHODS An electronic search was carried out up to December 2023 using MEDLINE, Web of Science, the Cochrane Library, and a manual search of the reference literature. Grey literature and ongoing clinical trials were also searched using ClinicalTrials.gov. RESULTS The search identified 251 records, of which 35 were deemed relevant to AI and endodontic education. Areas in which AI might aid students with their didactic and clinical endodontic education were identified as follows: 1) radiographic interpretation; 2) differential diagnosis; 3) treatment planning and decision-making; 4) case difficulty assessment; 5) preclinical training; 6) advanced clinical simulation and case-based training; 7) real-time clinical guidance; 8) autonomous systems and robotics; 9) progress evaluation and personalized education; and 10) calibration and standardization. CONCLUSIONS AI in endodontic education will support clinical and didactic teaching through individualized feedback; enhanced, augmented, and virtually generated training aids; automated detection and diagnosis; treatment planning and decision support; and AI-based student progress evaluation and personalized education. Its implementation will inarguably change the current concept of teaching endodontics. Dental educators would benefit from introducing AI in clinical and didactic pedagogy; however, they must be aware of AI's limitations and the challenges to overcome.
Affiliation(s)
- Ali Nosrat
- Division of Endodontics, Department of Advanced Oral Sciences and Therapeutics, School of Dentistry, University of Maryland Baltimore, Baltimore, Maryland; Private Practice, Centreville Endodontics, Centreville, Virginia
- Venkateshbabu Nagendrababu
- Department of Preventive and Restorative Dentistry, University of Sharjah, College of Dental Medicine, Sharjah, United Arab Emirates
- Omid Dianat
- Division of Endodontics, Department of Advanced Oral Sciences and Therapeutics, School of Dentistry, University of Maryland Baltimore, Baltimore, Maryland; Private Practice, Centreville Endodontics, Centreville, Virginia
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Federal Republic of Germany
- Frank C Setzer
- Department of Endodontics, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA

4
Bürger VK, Amann J, Bui CKT, Fehr J, Madai VI. The unmet promise of trustworthy AI in healthcare: why we fail at clinical translation. Front Digit Health 2024; 6:1279629. PMID: 38698888. PMCID: PMC11063331. DOI: 10.3389/fdgth.2024.1279629.
Abstract
Artificial intelligence (AI) has the potential to revolutionize healthcare, for example via decision support systems, computer vision approaches, or AI-based prevention tools. Initial results from AI applications in healthcare show promise but are rarely translated into clinical practice successfully and ethically. This occurs despite an abundance of "Trustworthy AI" guidelines. How can we explain the translational gaps of AI in healthcare? This paper offers a fresh perspective on this problem, showing that the failing translation of healthcare AI arises markedly from the lack of an operational definition of "trust" and "trustworthiness". This leads to (a) unintentional misuse concerning what trust(worthiness) is and (b) the risk of intentional abuse by industry stakeholders engaging in ethics washing. By pointing out these issues, we aim to highlight the obstacles that hinder the translation of trustworthy medical AI to practice and prevent it from fulfilling its promise.
Affiliation(s)
- Valerie K. Bürger
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité—Universitätsmedizin Berlin, Berlin, Germany
- Julia Amann
- Strategy and Innovation, Careum Foundation, Zurich, Switzerland
- Cathrine K. T. Bui
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité—Universitätsmedizin Berlin, Berlin, Germany
- Jana Fehr
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité—Universitätsmedizin Berlin, Berlin, Germany
- Digital Health & Machine Learning, Hasso Plattner Institute for Digital Engineering, Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
- Vince I. Madai
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité—Universitätsmedizin Berlin, Berlin, Germany
- Faculty of Computing, Engineering, and the Built Environment, School of Computing and Digital Technology, Birmingham City University, Birmingham, United Kingdom

5
Zhang Y, Zhu T, Zheng Y, Xiong Y, Liu W, Zeng W, Tang W, Liu C. Machine learning-based medical imaging diagnosis in patients with temporomandibular disorders: a diagnostic test accuracy systematic review and meta-analysis. Clin Oral Investig 2024; 28:186. PMID: 38430334. DOI: 10.1007/s00784-024-05586-6.
Abstract
OBJECTIVES Temporomandibular disorders (TMDs) are the second most common musculoskeletal condition and a diagnostic challenge for most clinicians. Recent research has used machine learning (ML) algorithms to diagnose TMDs intelligently. This study aimed to systematically evaluate the quality of these studies and assess the diagnostic accuracy of existing models. MATERIALS AND METHODS Twelve databases (Europe PMC, Embase, etc.) and two registers were searched for published and unpublished studies using ML algorithms on medical images. Two reviewers independently extracted the characteristics of the studies and assessed methodological quality using the QUADAS-2 tool. RESULTS A total of 28 studies (29 reports) were included: one was at unclear risk of bias and the others were at high risk. Thus, the certainty of evidence was quite low. These studies used many types of algorithms, including 8 machine learning models (logistic regression, support vector machine, random forest, etc.) and 15 deep learning models (ResNet152, YOLOv5, Inception V3, etc.). The diagnostic accuracy of a few models was relatively satisfactory. The pooled sensitivity and specificity were 0.745 (0.660-0.814) and 0.770 (0.700-0.828) for random forest, 0.765 (0.686-0.829) and 0.766 (0.688-0.830) for XGBoost, and 0.781 (0.704-0.843) and 0.781 (0.704-0.843) for LightGBM. CONCLUSIONS Most studies had high risks of bias in patient selection and the index test. Some algorithms are relatively satisfactory and might be promising for intelligent diagnosis. Overall, more high-quality studies covering more types of algorithms should be conducted in the future. CLINICAL RELEVANCE We evaluated the diagnostic accuracy of the existing models and provide clinicians with advice on algorithm selection. This study outlines promising directions for future research, and we believe it will promote the intelligent diagnosis of TMDs.
Affiliation(s)
- Yunan Zhang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
- Tao Zhu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
- Yunhao Zheng
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
- Yutao Xiong
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
- Wei Liu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
- Wei Zeng
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
- Wei Tang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
- Chang Liu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China

6
Büttner M, Leser U, Schneider L, Schwendicke F. Natural Language Processing: Chances and Challenges in Dentistry. J Dent 2024; 141:104796. PMID: 38072335. DOI: 10.1016/j.jdent.2023.104796.
Abstract
INTRODUCTION Natural language processing (NLP) is an intersection of computer science and linguistics which aims to enable machines to process and understand human language. Here, we summarize the applications and limitations of NLP in dentistry. DATA AND SOURCES Narrative review. FINDINGS NLP has evolved increasingly fast. For the dental domain, relevant NLP applications are text classification (e.g., symptom classification) and natural language generation and understanding (e.g., clinical chatbots assisting professionals in office work and patient communication). Analyzing large quantities of text will allow a better understanding of diseases and their trajectories and support more precise and personalized care. Speech recognition systems may serve as virtual assistants and facilitate automated documentation. However, to date, NLP has rarely been applied in dentistry. Existing research focuses mainly on rule-based solutions for narrow tasks. Technologies such as recurrent neural networks and Transformers have been shown to surpass the language processing capabilities of such rule-based solutions in many fields, but are data-hungry (i.e., they rely on large amounts of training data), which limits their application in the dental domain at present. Technologies such as federated or transfer learning and data sharing concepts may help overcome this limitation, while challenges in terms of explainability, reproducibility, generalizability, and evaluation of NLP in dentistry remain to be resolved before such technologies can be approved in medical devices and services. CONCLUSIONS NLP will become a cornerstone of a number of applications in dentistry. The community is called to action to improve on the current limitations and foster reliable, high-quality dental NLP.
CLINICAL SIGNIFICANCE NLP for text classification (e.g., dental symptom classification) and language generation and understanding (e.g., clinical chatbots, speech recognition) will support administrative tasks in dentistry, provide deeper insights for clinicians and support research and education.
Affiliation(s)
- Martha Büttner
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Germany
- Ulf Leser
- Department of Computer Science, Humboldt-Universität zu Berlin, Berlin, Germany
- Lisa Schneider
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Germany
- Falk Schwendicke
- Clinic for Operative, Preventive and Pediatric Dentistry and Periodontology, Ludwig-Maximilians-University, Munich, Germany

7
Lashkaripour A, McIntyre DP, Calhoun SGK, Krauth K, Densmore DM, Fordyce PM. Design automation of microfluidic single and double emulsion droplets with machine learning. Nat Commun 2024; 15:83. PMID: 38167827. PMCID: PMC10761910. DOI: 10.1038/s41467-023-44068-3.
Abstract
Droplet microfluidics enables kHz screening of picoliter samples at a fraction of the cost of other high-throughput approaches. However, generating stable droplets with desired characteristics typically requires labor-intensive empirical optimization of device designs and flow conditions that limits adoption to specialist labs. Here, we compile a comprehensive droplet dataset and use it to train machine learning models capable of accurately predicting the device geometries and flow conditions required to generate stable aqueous-in-oil and oil-in-aqueous single and double emulsions from 15 to 250 μm at rates up to 12,000 Hz for different fluids commonly used in the life sciences. Blind predictions by our models for as-yet-unseen fluids, geometries, and device materials yield accurate results, establishing their generalizability. Finally, we generate an easy-to-use design automation tool that yields droplets within 3 μm (<8%) of the desired diameter, facilitating tailored droplet-based platforms and accelerating their utility in the life sciences.
Affiliation(s)
- Ali Lashkaripour
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Genetics, Stanford University, Stanford, CA, USA
- David P McIntyre
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Biological Design Center, Boston University, Boston, MA, USA
- Karl Krauth
- Department of Genetics, Stanford University, Stanford, CA, USA
- Douglas M Densmore
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Biological Design Center, Boston University, Boston, MA, USA
- Department of Electrical & Computer Engineering, Boston University, Boston, MA, USA
- Polly M Fordyce
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Genetics, Stanford University, Stanford, CA, USA
- Chan-Zuckerberg Biohub, San Francisco, CA, USA
- Sarafan ChEM-H Institute, Stanford University, Stanford, CA, USA

8
Çelik B, Savaştaer EF, Kaya HI, Çelik ME. The role of deep learning for periapical lesion detection on panoramic radiographs. Dentomaxillofac Radiol 2023; 52:20230118. PMID: 37641964. DOI: 10.1259/dmfr.20230118.
Abstract
OBJECTIVE This work aimed to automatically detect periapical lesions on panoramic radiographs (PRs) using deep learning. METHODS 454 objects in 357 PRs were anonymized and manually labeled. They were then pre-processed for image quality improvement and enhancement. The data were randomly assigned to training, validation, and test folders with ratios of 0.8, 0.1, and 0.1, respectively. Ten state-of-the-art deep learning-based detection frameworks, including various backbones, were applied to the periapical lesion detection problem. Model performances were evaluated by mean average precision, accuracy, precision, recall, F1 score, precision-recall curves, area under the curve, and several other Common Objects in Context (COCO) detection evaluation metrics. RESULTS Deep learning-based detection frameworks were generally successful in detecting periapical lesions on PRs. Detection performance, measured as mean average precision, varied between 0.832 and 0.953, while accuracy was between 0.673 and 0.812 for all models. F1 score was between 0.8 and 0.895. RetinaNet achieved the best detection performance; similarly, Adaptive Training Sample Selection provided the highest F1 score of 0.895. Testing with external data supported our findings. CONCLUSION This work showed that deep learning models can reliably detect periapical lesions on PRs. Deep learning-based artificial intelligence tools are revolutionizing dental healthcare and can help both clinicians and the dental healthcare system.
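The precision, recall, and F1 values reported above are derived from detections matched to ground-truth boxes. A minimal sketch of how these metrics follow from match counts; the counts below are hypothetical, not the study's data:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 for detections already matched to
    ground truth (e.g., by an IoU threshold, as COCO-style evaluation does)."""
    precision = tp / (tp + fp)  # fraction of detections that are real lesions
    recall = tp / (tp + fn)     # fraction of real lesions that were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical: 170 matched lesions, 20 false alarms, 15 missed lesions
p, r, f1 = detection_metrics(170, 20, 15)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```

Mean average precision additionally averages precision over recall levels (and, for COCO, over IoU thresholds), which is why it can differ markedly from the single-threshold F1.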
Affiliation(s)
- Berrin Çelik
- Oral and Maxillofacial Radiology Department, Faculty of Dentistry, Ankara Yıldırım Beyazıt University, Ankara, Turkey
- Ertugrul Furkan Savaştaer
- Electrical Electronics Engineering Department, Faculty of Engineering, Gazi University, Ankara, Turkey
- Halil Ibrahim Kaya
- Electrical Electronics Engineering Department, Faculty of Engineering, Gazi University, Ankara, Turkey
- Mahmut Emin Çelik
- Electrical Electronics Engineering Department, Faculty of Engineering, Gazi University, Ankara, Turkey
- Biomedical Calibration and Research Center, Gazi University Hospital, Gazi University, Ankara, Turkey

9
Arora U, Sengupta D, Kumar M, Tirupathi K, Sai MK, Hareesh A, Sai Chaithanya ES, Nikhila V, Bhavana N, Vigneshwar P, Rani A, Yadav R. Perceiving placental ultrasound image texture evolution during pregnancy with normal and adverse outcome through machine learning prism. Placenta 2023; 140:109-116. PMID: 37572594. DOI: 10.1016/j.placenta.2023.07.014.
Abstract
INTRODUCTION The objective was to perform placental ultrasound image texture analysis (UPIA) in the first (T1), second (T2), and third (T3) trimesters of pregnancy using machine learning (ML). METHODS In this prospective observational study, 2D placental ultrasound (US) images from 11-14 weeks, 20-24 weeks, and 28-32 weeks were taken. The image data were divided into training, validating, and testing subsets in the ratio of 80%, 10%, and 10%. Three different ML techniques, deep learning, transfer learning, and vision transformers, were used for UPIA. RESULTS Of the 1008 cases included in the study, 59.5% (600/1008) had a normal outcome. Image texture classification was compared between the T1 & T2, T2 & T3, and T1 & T3 pairs. Using the Inception v3 model to classify T1 & T2 images gave an accuracy and Cohen kappa score of 83.3% and 0.662, respectively. Image classification between T1 & T3 achieved the best results using the EfficientNetB0 model, with an accuracy, Cohen kappa score, sensitivity, and specificity of 87.5%, 0.749, 83.4%, and 88.9%, respectively. Comparison of placental image texture between cases with materno-fetal adverse outcomes and controls was done using EfficientNetB0. The F1 score was found to be 0.824, 0.820, and 0.892 in T1, T2, and T3, respectively. The sensitivity and specificity of the model were 77.4% and 80.2% at T1 but increased to 81.0% and 93.9% at T2 & T3, respectively. DISCUSSION The study presents a novel technique to classify placental ultrasound image texture using ML models and could differentiate first- and third-trimester normal placenta images, and normal and adverse pregnancy outcome images, with good accuracy.
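The Cohen kappa scores quoted above correct raw classification accuracy for chance agreement. A minimal self-contained sketch of the computation, using a tiny made-up label sequence for illustration:

```python
def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two label sequences, corrected for
    the agreement expected by chance from each rater's label frequencies."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    expected = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels
    )
    return (observed - expected) / (1 - expected)

# Toy example: trimester labels for 8 images (hypothetical)
truth = ["T1", "T1", "T1", "T2", "T2", "T2", "T2", "T1"]
pred  = ["T1", "T1", "T2", "T2", "T2", "T2", "T1", "T1"]
print(round(cohen_kappa(truth, pred), 3))
```

Here observed agreement is 75% but kappa is lower, because half of that agreement would be expected by chance alone.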
Affiliation(s)
- Urvashi Arora
- Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Debarka Sengupta
- Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Manisha Kumar
- Department of Obstetrics and Gynecology, Lady Hardinge Medical College, New Delhi, 110001, India
- Amuru Hareesh
- Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Nellore Bhavana
- Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Palani Vigneshwar
- Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Anjali Rani
- Lady Hardinge Medical College, New Delhi, 110001, India
- Reena Yadav
- Department of Obstetrics and Gynecology, Lady Hardinge Medical College, New Delhi, 110001, India

10
Rovira-Lastra B, Khoury-Ribas L, Flores-Orozco EI, Ayuso-Montero R, Chaurasia A, Martinez-Gomis J. Accuracy of digital and conventional systems in locating occlusal contacts: A clinical study. J Prosthet Dent 2023:S0022-3913(23)00481-X. PMID: 37612195. DOI: 10.1016/j.prosdent.2023.06.036.
Abstract
STATEMENT OF PROBLEM The accuracy of methods used for locating occlusal contacts throughout the entire clinical procedure has been poorly studied. PURPOSE The purpose of this clinical study was to determine the reproducibility and criterion validity for different methods of locating occlusal contacts. MATERIAL AND METHODS Thirty-two adults with natural dentitions participated in this cross-sectional test-retest study. In total, occlusal contacts at maximum intercuspation were recorded by using 15 methods: silicone transillumination with Occlufast Rock (40, 50, 100, and 200 µm) and Occlufast CAD (40 and 50 µm); virtual occlusion (100, 200, 300, and 400 µm); articulating film (12-, 40-, 100-, and 200-µm-thick); and T-Scan III. Images of the occlusal records were scaled and calibrated spatially, and the occlusal contacts of the right posterior mandibular teeth were delimited by using the FIJI software program. Reproducibility was expressed as 95% confidence intervals (95% CI) of the percentage of agreement in the location of the occlusal contacts between images from the test sessions against retest sessions using the same method. Criterion validity was expressed as 95% CI of the percentage of agreement in the location of the occlusal contacts between images from the test sessions against images from Occlufast Rock (criterion standard). RESULTS Occlufast Rock achieved 85% to 95% agreement in the location of the occlusal contacts between the 2 sessions, whereas Occlufast CAD, 200-µm articulating film, and T-Scan offered 79% to 86%, 68% to 75%, and 65% to 75% agreement, respectively. The most valid method was Occlufast CAD (74% to 80%) followed by the 200-µm articulating film (57% to 63%), 400-µm virtual occlusion (53% to 62%), 100-µm articulating film (52% to 60%), and T-Scan (48% to 56%). 
CONCLUSIONS Conventional methods, such as 100- and 200-µm articulating film and digital methods, including 400 µm virtual occlusion and T-Scan, offer sufficient accuracy in locating the occlusal contacts. However, strategies are needed to improve accuracy.
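The agreement percentages with 95% CIs reported above are proportions of matching contact locations between two recordings. A minimal sketch of such an interval using the normal approximation; note the study does not state which CI method it used, and the counts below are hypothetical:

```python
import math

def agreement_ci(agree, total, z=1.96):
    """Percentage agreement with a normal-approximation 95% CI
    (Wald interval); z=1.96 corresponds to 95% coverage."""
    p = agree / total
    half = z * math.sqrt(p * (1 - p) / total)  # half-width of the interval
    return p - half, p, p + half

# Hypothetical: 900 of 1000 contact locations agree between sessions
lo, p, hi = agreement_ci(900, 1000)
print(f"{p:.0%} (95% CI {lo:.1%}-{hi:.1%})")
```

For small samples or proportions near 0 or 1, a Wilson or exact interval would be preferable to this Wald approximation.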
Affiliation(s)
- Bernat Rovira-Lastra
- Assistant Professor, Department of Odontostomatology, Faculty of Medicine and Health Sciences, University of Barcelona, Barcelona, Catalonia, Spain
- Laura Khoury-Ribas
- Assistant Professor, Department of Odontostomatology, Faculty of Medicine and Health Sciences, University of Barcelona, Barcelona, Catalonia, Spain
- Elan-Ignacio Flores-Orozco
- Associate Professor, Department of Prosthodontics, Faculty of Dentistry, Autonomous University of Nayarit, Tepic, Mexico
- Raul Ayuso-Montero
- Associate Professor, Department of Odontostomatology, School of Dentistry, Faculty of Medicine and Health Sciences, University of Barcelona, Campus de Bellvitge 08907 L'Hospitalet de Llobregat, Barcelona, Catalonia, Spain
- Akhilanand Chaurasia
- Associate Professor, Department of Oral Medicine and Radiology, King George's Medical University, Lucknow, India
- Jordi Martinez-Gomis
- Associate Professor, Serra Hunter Fellow, Department of Odontostomatology, Faculty of Medicine and Health Sciences, University of Barcelona, Barcelona, Catalonia, Spain; and Researcher, Oral Health and Masticatory System Group (Bellvitge Biomedical Research Institute) IDIBELL, L'Hospitalet de Llobregat, Barcelona, Catalonia, Spain

11
Pfänder L, Schneider L, Büttner M, Krois J, Meyer-Lueckel H, Schwendicke F. Multi-modal deep learning for automated assembly of periapical radiographs. J Dent 2023; 135:104588. PMID: 37348642. DOI: 10.1016/j.jdent.2023.104588.
Abstract
OBJECTIVES Periapical radiographs are oftentimes taken in series to display all teeth present in the oral cavity. Our aim was to automatically assemble such a series of periapical radiographs into an anatomically correct status using a multi-modal deep learning model. METHODS 4,707 periapical images from 387 patients (on average, 12 images per patient) were used. Radiographs were labeled according to their field of view and the dataset split into a training, validation, and test set, stratified by patient. In addition to the radiograph the timestamp of image generation was extracted and abstracted as follows: A matrix, containing the normalized timestamps of all images of a patient was constructed, representing the order in which images were taken, providing temporal context information to the deep learning model. Using the image data together with the time sequence data a multi-modal deep learning model consisting of two residual convolutional neural networks (ResNet-152 for image data, ResNet-50 for time data) was trained. Additionally, two uni-modal models were trained on image data and time data, respectively. A custom scoring technique was used to measure model performance. RESULTS Multi-modal deep learning outperformed both uni-modal image-based learning (p<0.001) and time-based learning (p<0.05). The multi-modal deep learning model predicted tooth labels with an F1-score, sensitivity and precision of 0.79, respectively, and an accuracy of 0.99. 37 out of 77 patient datasets were fully correctly assembled by multi-modal learning; in the remaining ones, usually only one image was incorrectly labeled. CONCLUSIONS Multi-modal modeling allowed automated assembly of periapical radiographs and outperformed both uni-modal models. Dental machine learning models can benefit from additional data modalities. CLINICAL SIGNIFICANCE Like humans, deep learning models may profit from multiple data sources for decision-making. 
We demonstrate how multi-modal learning can assist assembling periapical radiographs into an anatomically correct status. Multi-modal learning should be considered for more complex tasks, as clinically a wealth of data is usually available and could be leveraged.
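The late-fusion idea described above can be sketched as follows. This is a minimal illustration with tiny stand-in encoders and assumed input sizes (a 64x64 grayscale crop and a 12x12 normalized-timestamp matrix; the study used full ResNet-152 and ResNet-50 branches), not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiModalAssembler(nn.Module):
    """Late-fusion sketch: an image branch and a time-matrix branch are
    encoded separately, concatenated, and passed to a classification head.
    Encoders here are hypothetical stand-ins for ResNet-152 / ResNet-50."""

    def __init__(self, n_labels=14):  # n_labels: assumed number of field-of-view classes
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 8)
        )
        self.time_branch = nn.Sequential(
            nn.Flatten(), nn.Linear(12 * 12, 8), nn.ReLU(),  # 12x12 timestamp matrix (assumed)
        )
        self.head = nn.Linear(8 + 8, n_labels)

    def forward(self, image, time_matrix):
        z = torch.cat([self.image_branch(image), self.time_branch(time_matrix)], dim=1)
        return self.head(z)

model = MultiModalAssembler()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 12, 12))
print(logits.shape)  # torch.Size([2, 14])
```

Keeping the branches separate until a single fusion head makes it easy to ablate each modality, which is how the uni-modal comparisons in the study are typically obtained.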
Affiliation(s)
- L Pfänder
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité-Universitätsmedizin Berlin, 14197 Berlin, Germany
- L Schneider
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité-Universitätsmedizin Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI4Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- M Büttner
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité-Universitätsmedizin Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI4Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- J Krois
- ITU/WHO Focus Group AI4Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- H Meyer-Lueckel
- Department of Restorative, Preventive and Pediatric Dentistry, zmk Bern, University of Bern, Switzerland
- F Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité-Universitätsmedizin Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI4Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland.
12
Schneider L, Rischke R, Krois J, Krasowski A, Büttner M, Mohammad-Rahimi H, Chaurasia A, Pereira NS, Lee JH, Uribe SE, Shahab S, Koca-Ünsal RB, Ünsal G, Martinez-Beneyto Y, Brinz J, Tryfonos O, Schwendicke F. Federated vs Local vs Central Deep Learning of Tooth Segmentation on Panoramic Radiographs. J Dent 2023; 135:104556. [PMID: 37209769 DOI: 10.1016/j.jdent.2023.104556] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Revised: 05/16/2023] [Accepted: 05/17/2023] [Indexed: 05/22/2023] Open
Abstract
OBJECTIVE Federated Learning (FL) enables collaborative training of artificial intelligence (AI) models from multiple data sources without directly sharing data. Given the large amount of sensitive data in dentistry, FL may be particularly relevant for oral and dental research and applications. This study, for the first time, employed FL for a dental task: automated tooth segmentation on panoramic radiographs. METHODS We employed a dataset of 4,177 panoramic radiographs collected from nine different centers (n = 143 to n = 1,881 per center) across the globe and used FL to train a machine learning model for tooth segmentation. FL performance was compared against Local Learning (LL), i.e., training models on isolated data from each center (assuming data sharing not to be an option). Further, the performance gap to Central Learning (CL), i.e., training on centrally pooled data (based on data sharing agreements), was quantified. Generalizability of the models was evaluated on a pooled test dataset from all centers. RESULTS For 8 out of 9 centers, FL outperformed LL with statistical significance (p<0.05); only for the center providing the largest amount of data did FL show no such advantage. For generalizability, FL outperformed LL across all centers. CL surpassed both FL and LL in performance and generalizability. CONCLUSION If data pooling (for CL) is not feasible, FL is shown to be a useful alternative for training performant and, more importantly, generalizable deep learning models in dentistry, where data protection barriers are high. CLINICAL SIGNIFICANCE This study demonstrates the validity and utility of FL in the field of dentistry, which encourages researchers to adopt this method to improve the generalizability of dental AI models and ease their transition to the clinical environment.
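The aggregation step at the heart of such a federated setup can be sketched as FedAvg-style weighted averaging of per-center model parameters. The sketch below uses plain lists of floats as stand-in "parameters" and hypothetical center sizes; a real FL system also handles per-round client sampling, communication, and privacy mechanisms.

```python
def fed_avg(center_params, center_sizes):
    """FedAvg-style aggregation: average each parameter across centers,
    weighted by the number of local training images at each center."""
    total = sum(center_sizes)
    n_params = len(center_params[0])
    return [
        sum(p[i] * n / total for p, n in zip(center_params, center_sizes))
        for i in range(n_params)
    ]

# Three hypothetical centers with unequal data volumes
# (cf. n = 143 to n = 1,881 per center in the study).
params = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
sizes = [100, 100, 200]
print(fed_avg(params, sizes))  # [0.5, 0.5]
```

Weighting by center size is what lets small centers benefit from large ones without any radiograph leaving its institution.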
Affiliation(s)
- Lisa Schneider
- Department of Oral Diagnostics, Digital Health, and Health Services Research, Charité - University Medicine Berlin, Berlin, Germany; ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Roman Rischke
- Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Joachim Krois
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Aleksander Krasowski
- Department of Oral Diagnostics, Digital Health, and Health Services Research, Charité - University Medicine Berlin, Berlin, Germany; ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Martha Büttner
- Department of Oral Diagnostics, Digital Health, and Health Services Research, Charité - University Medicine Berlin, Berlin, Germany; ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Hossein Mohammad-Rahimi
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Dental School, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Akhilanand Chaurasia
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Oral Medicine and Radiology, Faculty of Dental Sciences, King George's Medical University, Lucknow, India
- Nielsen S Pereira
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Private Practice in Oral and Maxillofacial Radiology, Rio de Janeiro, Brazil
- Jae-Hong Lee
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Periodontology, College of Dentistry and Institute of Oral Bioscience, Jeonbuk National University, Jeonju, Korea
- Sergio E Uribe
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Conservative Dentistry Oral Health, Riga Stradins University, Riga, Latvia; School of Dentistry, Universidad Austral de Chile, Valdivia, Chile; Baltic Biomaterials Centre of Excellence, Headquarters at Riga Technical University, Riga, Latvia
- Shahriar Shahab
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahed University of Medical Sciences, Tehran, Iran
- Revan Birke Koca-Ünsal
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Periodontology, Faculty of Dentistry, University of Kyrenia, Kyrenia, Cyprus
- Gürkan Ünsal
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Janet Brinz
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Conservative Dentistry and Periodontology, University Hospital, LMU Munich, Munich, Germany
- Olga Tryfonos
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Department of Periodontology and Oral Biochemistry, Academic Centre for Dentistry Amsterdam, Amsterdam, the Netherlands
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health, and Health Services Research, Charité - University Medicine Berlin, Berlin, Germany; ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland.
13
Shafi I, Fatima A, Afzal H, Díez IDLT, Lipari V, Breñosa J, Ashraf I. A Comprehensive Review of Recent Advances in Artificial Intelligence for Dentistry E-Health. Diagnostics (Basel) 2023; 13:2196. [PMID: 37443594 DOI: 10.3390/diagnostics13132196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 06/14/2023] [Accepted: 06/23/2023] [Indexed: 07/15/2023] Open
Abstract
Artificial intelligence has made substantial progress in medicine. Automated dental imaging interpretation is one of the most prolific areas of research using AI. X-ray and infrared imaging systems have enabled dental clinicians to identify dental diseases since the 1950s. However, the manual process of dental disease assessment is tedious and error-prone when diagnosed by inexperienced dentists. Thus, researchers have employed different advanced computer vision techniques, and machine- and deep-learning models for dental disease diagnoses using X-ray and near-infrared imagery. Despite the notable development of AI in dentistry, certain factors affect the performance of the proposed approaches, including limited data availability, imbalanced classes, and lack of transparency and interpretability. Hence, it is of utmost importance for the research community to formulate suitable approaches, considering the existing challenges and leveraging findings from the existing studies. Based on an extensive literature review, this survey provides a brief overview of X-ray and near-infrared imaging systems. Additionally, a comprehensive insight into challenges faced by researchers in the dental domain has been brought forth in this survey. The article further offers an amalgamative assessment of both performances and methods evaluated on public benchmarks and concludes with ethical considerations and future research avenues.
Affiliation(s)
- Imran Shafi
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Anum Fatima
- National Centre for Robotics, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Hammad Afzal
- Military College of Signals (MCS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Isabel de la Torre Díez
- Department of Signal Theory and Communications and Telematic Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Vivian Lipari
- Research Unit in Food Technologies, Agro-Food Industries and Nutrition, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Research Unit in Food Technologies, Agro-Food Industries and Nutrition, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Research Unit in Food Technologies, Agro-Food Industries and Nutrition, Fundación Universitaria Internacional de Colombia, Bogotá 111311, Colombia
- Jose Breñosa
- Research Unit in Food Technologies, Agro-Food Industries and Nutrition, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Universidade Internacional do Cuanza, Cuito EN250, Bié, Angola
- Research Unit in Food Technologies, Agro-Food Industries and Nutrition, Universidad Internacional Iberoamericana Arecibo, Puerto Rico, PR 00613, USA
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
14
Deep Learning for Detection of Periapical Radiolucent Lesions: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy. J Endod 2023; 49:248-261.e3. [PMID: 36563779 DOI: 10.1016/j.joen.2022.12.007] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 12/11/2022] [Accepted: 12/12/2022] [Indexed: 12/25/2022]
Abstract
INTRODUCTION The aim of this systematic review and meta-analysis was to investigate the overall accuracy of deep learning models in detecting periapical (PA) radiolucent lesions in dental radiographs, when compared to expert clinicians. METHODS Electronic databases of Medline (via PubMed), Embase (via Ovid), Scopus, Google Scholar, and arXiv were searched. Quality of eligible studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Quantitative analyses were conducted using hierarchical logistic regression for meta-analyses of diagnostic accuracy. Subgroup analyses on different image modalities (PA radiographs, panoramic radiographs, and cone beam computed tomographic images) and on different deep learning tasks (classification, segmentation, object detection) were conducted. Certainty of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system. RESULTS A total of 932 studies were screened. Eighteen studies were included in the systematic review, out of which 6 were selected for quantitative analyses. Six studies had a low risk of bias; twelve had a risk of bias. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio of included studies (all image modalities; all tasks) were 0.925 (95% confidence interval [CI], 0.862-0.960), 0.852 (95% CI, 0.810-0.885), 6.261 (95% CI, 4.717-8.311), 0.087 (95% CI, 0.045-0.168), and 71.692 (95% CI, 29.957-171.565), respectively. No publication bias was detected (Egger's test, P = .82). GRADE showed a "high" certainty of evidence for the studies included in the meta-analyses. CONCLUSION Compared to expert clinicians, deep learning showed highly accurate results in detecting PA radiolucent lesions in dental radiographs. Most studies had a risk of bias. There was a lack of prospective studies.
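The pooled likelihood ratios and diagnostic odds ratio above follow, to a close approximation, from the pooled sensitivity and specificity (the published values differ slightly because pooling used hierarchical regression rather than direct back-calculation). A quick check:

```python
def diagnostic_ratios(sens, spec):
    """Back-calculate likelihood ratios and the diagnostic odds ratio
    from a pooled sensitivity and specificity."""
    lr_pos = sens / (1 - spec)               # positive likelihood ratio
    lr_neg = (1 - sens) / spec               # negative likelihood ratio
    return lr_pos, lr_neg, lr_pos / lr_neg   # DOR = LR+ / LR-

# Pooled estimates from the meta-analysis: sens 0.925, spec 0.852.
lr_pos, lr_neg, dor = diagnostic_ratios(0.925, 0.852)
print(round(lr_pos, 2), round(lr_neg, 3), round(dor, 1))  # 6.25 0.088 71.0
```

These land near the reported 6.261, 0.087, and 71.692, confirming the internal consistency of the pooled figures.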
15
Shahnavazi M, Mohamadrahimi H. The application of artificial neural networks in the detection of mandibular fractures using panoramic radiography. Dent Res J (Isfahan) 2023; 20:27. [PMID: 36960025 PMCID: PMC10028573 DOI: 10.4103/1735-3327.369629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 11/06/2022] [Accepted: 12/20/2022] [Indexed: 03/25/2023] Open
Abstract
Background Panoramic radiography is a standard diagnostic imaging method for dentists. However, it is challenging to detect mandibular trauma and fractures in panoramic radiographs due to the superimposed structures of the facial skeleton. The objective of this study was to develop a deep learning algorithm capable of detecting mandibular fractures and trauma automatically and to compare its performance with that of general dentists. Materials and Methods This is a retrospective diagnostic test accuracy study using a two-stage deep learning framework. To train the model, 190 panoramic images were collected from four different sources. The mandible was first segmented using a U-Net model. Then, to detect fractures, a Faster region-based convolutional neural network (Faster R-CNN) model was applied. Finally, the accuracy, specificity, and sensitivity of the artificial intelligence were compared with those of general dentists in trauma diagnosis. Results The mAP50 and mAP75 for object detection were 98.66% and 57.90%, respectively. The classification accuracy of the model was 91.67%. The sensitivity and specificity of the model were 100% and 83.33%, respectively. In comparison, human-level diagnostic accuracy, sensitivity, and specificity were 87.22 ± 8.91%, 82.22 ± 16.39%, and 92.22 ± 6.33%, respectively. Conclusion Our framework can perform better than general dentists when it comes to diagnosing trauma or fractures.
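The reported model metrics are mutually consistent with, for example, a 12-image test set of 6 fracture and 6 non-fracture cases; the abstract does not state the split, so the confusion counts below are hypothetical, chosen only to reproduce the reported figures.

```python
def classification_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts consistent with the reported figures:
# all 6 fractures detected, 1 of 6 intact mandibles flagged falsely.
acc, sens, spec = classification_metrics(tp=6, fn=0, tn=5, fp=1)
print(f"{acc:.2%} {sens:.2%} {spec:.2%}")  # 91.67% 100.00% 83.33%
```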
Affiliation(s)
- Maryam Shahnavazi
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aja University of Medical Sciences, Tehran, Iran
- Address for correspondence: Dr. Maryam Shahnavazi, School of Dentistry, Aja University of Medical Sciences, Misaq Complex, 13th East Street, Ajoudanieh, Tehran, Iran. E-mail:
- Hosein Mohamadrahimi
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
16
Hung KF, Yeung AWK, Bornstein MM, Schwendicke F. Personalized dental medicine, artificial intelligence, and their relevance for dentomaxillofacial imaging. Dentomaxillofac Radiol 2023; 52:20220335. [PMID: 36472627 PMCID: PMC9793453 DOI: 10.1259/dmfr.20220335] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 11/08/2022] [Accepted: 11/11/2022] [Indexed: 12/12/2022] Open
Abstract
Personalized medicine refers to the tailoring of diagnostics and therapeutics to individuals based on one's biological, social, and behavioral characteristics. While personalized dental medicine is still far from being a reality, advanced artificial intelligence (AI) technologies with improved data analytic approaches are expected to integrate diverse data from the individual, setting, and system levels, which may facilitate a deeper understanding of the interaction of these multilevel data and therefore bring us closer to more personalized, predictive, preventive, and participatory dentistry, also known as P4 dentistry. In the field of dentomaxillofacial imaging, a wide range of AI applications, including several commercially available software options, have been proposed to assist dentists in the diagnosis and treatment planning of various dentomaxillofacial diseases, with performance similar or even superior to that of specialists. Notably, the impact of these dental AI applications on treatment decision, clinical and patient-reported outcomes, and cost-effectiveness has so far been assessed sparsely. Such information should be further investigated in future studies to provide patients, providers, and healthcare organizers a clearer picture of the true usefulness of AI in daily dental practice.
Affiliation(s)
- Kuo Feng Hung
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Division of Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Michael M. Bornstein
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, Basel, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
17
Hung KF, Ai QYH, Wong LM, Yeung AWK, Li DTS, Leung YY. Current Applications of Deep Learning and Radiomics on CT and CBCT for Maxillofacial Diseases. Diagnostics (Basel) 2022; 13:diagnostics13010110. [PMID: 36611402 PMCID: PMC9818323 DOI: 10.3390/diagnostics13010110] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 12/23/2022] [Accepted: 12/24/2022] [Indexed: 12/31/2022] Open
Abstract
The increasing use of computed tomography (CT) and cone beam computed tomography (CBCT) in oral and maxillofacial imaging has driven the development of deep learning and radiomics applications to assist clinicians in early diagnosis, accurate prognosis prediction, and efficient treatment planning of maxillofacial diseases. This narrative review aimed to provide an up-to-date overview of the current applications of deep learning and radiomics on CT and CBCT for the diagnosis and management of maxillofacial diseases. Based on current evidence, a wide range of deep learning models on CT/CBCT images have been developed for automatic diagnosis, segmentation, and classification of jaw cysts and tumors, cervical lymph node metastasis, salivary gland diseases, temporomandibular (TMJ) disorders, maxillary sinus pathologies, mandibular fractures, and dentomaxillofacial deformities, while CT-/CBCT-derived radiomics applications mainly focused on occult lymph node metastasis in patients with oral cancer, malignant salivary gland tumors, and TMJ osteoarthritis. Most of these models showed high performance, and some of them even outperformed human experts. The models with performance on par with human experts have the potential to serve as clinically practicable tools to achieve the earliest possible diagnosis and treatment, leading to a more precise and personalized approach for the management of maxillofacial diseases. Challenges and issues, including the lack of the generalizability and explainability of deep learning models and the uncertainty in the reproducibility and stability of radiomic features, should be overcome to gain the trust of patients, providers, and healthcare organizers for daily clinical use of these models.
Affiliation(s)
- Kuo Feng Hung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Qi Yong H. Ai
- Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Lun M. Wong
- Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Dion Tik Shun Li
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Yiu Yan Leung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
18
Chua M, Kim D, Choi J, Lee NG, Deshpande V, Schwab J, Lev MH, Gonzalez RG, Gee MS, Do S. Tackling prediction uncertainty in machine learning for healthcare. Nat Biomed Eng 2022:10.1038/s41551-022-00988-x. [PMID: 36581695 DOI: 10.1038/s41551-022-00988-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Accepted: 11/17/2022] [Indexed: 12/31/2022]
Abstract
Predictive machine-learning systems often do not convey the degree of confidence in the correctness of their outputs. To prevent unsafe prediction failures from machine-learning models, the users of the systems should be aware of the general accuracy of the model and understand the degree of confidence in each individual prediction. In this Perspective, we convey the need of prediction-uncertainty metrics in healthcare applications, with a focus on radiology. We outline the sources of prediction uncertainty, discuss how to implement prediction-uncertainty metrics in applications that require zero tolerance to errors and in applications that are error-tolerant, and provide a concise framework for understanding prediction uncertainty in healthcare contexts. For machine-learning-enabled automation to substantially impact healthcare, machine-learning models with zero tolerance for false-positive or false-negative errors must be developed intentionally.
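One common way to operationalize the prediction-uncertainty idea discussed above (a generic sketch, not the authors' specific framework) is to compute the predictive entropy of a model's output distribution and defer to a human reader whenever it exceeds a threshold; the threshold here is hypothetical and would be tuned per application, stricter in zero-error-tolerance settings.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def predict_or_defer(probs, threshold=0.3):
    """Return the argmax class, or None to defer to expert review
    when predictive entropy exceeds the (hypothetical) threshold."""
    if predictive_entropy(probs) > threshold:
        return None  # uncertain: route this case to a human reader
    return max(range(len(probs)), key=lambda i: probs[i])

print(predict_or_defer([0.98, 0.02]))  # 0    (confident prediction)
print(predict_or_defer([0.55, 0.45]))  # None (uncertain, deferred)
```

Exposing a "defer" outcome, rather than forcing a label, is exactly the kind of confidence signaling the Perspective argues machine-learning systems in healthcare should provide.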
Affiliation(s)
- Michelle Chua
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Doyun Kim
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Jongmun Choi
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Nahyoung G Lee
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
- Vikram Deshpande
- Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Joseph Schwab
- Department of Orthopedic Surgery, Massachusetts General Hospital, Boston, MA, USA
- Michael H Lev
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Ramon G Gonzalez
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Michael S Gee
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Synho Do
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Boston, MA, USA.
19
Fatima A, Shafi I, Afzal H, Díez IDLT, Lourdes DRSM, Breñosa J, Espinosa JCM, Ashraf I. Advancements in Dentistry with Artificial Intelligence: Current Clinical Applications and Future Perspectives. Healthcare (Basel) 2022; 10:2188. [PMID: 36360529 PMCID: PMC9690084 DOI: 10.3390/healthcare10112188] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Revised: 10/11/2022] [Accepted: 10/26/2022] [Indexed: 08/31/2023] Open
Abstract
Artificial intelligence has been widely used in the field of dentistry in recent years. The present study highlights current advances and limitations in integrating artificial intelligence, machine learning, and deep learning in subfields of dentistry including periodontology, endodontics, orthodontics, restorative dentistry, and oral pathology. This article aims to provide a systematic review of current clinical applications of artificial intelligence within different fields of dentistry. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was used as a formal guideline for data collection. Data were obtained from research studies published between 2009 and 2022. The analysis included a total of 55 papers from Google Scholar, IEEE, PubMed, and Scopus databases. Results show that artificial intelligence has the potential to improve dental care, disease diagnosis and prognosis, treatment planning, and risk assessment. Finally, this study highlights the limitations of the analyzed studies and provides future directions to improve dental care.
Affiliation(s)
- Anum Fatima
- National Centre for Robotics, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Imran Shafi
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Hammad Afzal
- Military College of Signals (MCS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Isabel De La Torre Díez
- Department of Signal Theory and Communications and Telematic Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- M. Lourdes Del Rio-Solá
- Department of Vascular Surgery, University Hospital of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Jose Breñosa
- Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
- Universidade Internacional do Cuanza, Estrada Nacional 250, Bairro Kaluapanda Cuito- Bié, Angola
- Julio César Martínez Espinosa
- Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Fundación Universitaria Internacional de Colombia, Calle 39A #19-18 Bogotá D.C, Colombia
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Korea
20
An Efficient Deep Learning-Based Skin Cancer Classifier for an Imbalanced Dataset. Diagnostics (Basel) 2022; 12:diagnostics12092115. [PMID: 36140516 PMCID: PMC9497837 DOI: 10.3390/diagnostics12092115] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 08/24/2022] [Accepted: 08/29/2022] [Indexed: 12/12/2022] Open
Abstract
Efficient skin cancer detection using images is a challenging task in the healthcare domain. In today's medical practices, skin cancer detection is a time-consuming procedure that may lead to a patient's death in later stages. Diagnosing skin cancer at an earlier stage is crucial for a complete cure. Moreover, the number of skilled dermatologists around the globe is not sufficient to meet today's healthcare demand. Large differences between class sizes in healthcare datasets lead to data imbalance problems; due to such imbalance, deep learning models are often trained on one class more than others. This study proposes a novel deep learning-based skin cancer detector using an imbalanced dataset. Data augmentation was used to balance the skin cancer classes. The Skin Cancer MNIST: HAM10000 dataset was employed, which consists of seven classes of skin lesions. Deep learning-based models (AlexNet, InceptionV3, and RegNetY-320) were employed to classify skin cancer, and the proposed framework was tuned with various combinations of hyperparameters. The results show that RegNetY-320 outperformed InceptionV3 and AlexNet in terms of accuracy, F1-score, and the receiver operating characteristic (ROC) curve on both the imbalanced and balanced datasets, and performed better than conventional methods. The accuracy, F1-score, and ROC curve value obtained with the proposed framework were 91%, 88.1%, and 0.95, respectively, significantly better than the state-of-the-art method, which achieved 85%, 69.3%, and 0.90. The proposed framework may assist in disease identification, which could save lives, reduce unnecessary biopsies, and reduce costs for patients, dermatologists, and healthcare professionals.
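The augmentation-based balancing the abstract describes can be sketched by computing, per class, how many augmented images are needed to match the majority class. The class counts below are the commonly cited HAM10000 figures (10,015 images across seven lesion classes); the balancing rule itself is a generic oversampling sketch, not necessarily the authors' exact procedure.

```python
def augmentation_plan(class_counts):
    """For each class, the number of augmented images needed to match
    the majority class (simple oversample-to-balance sketch)."""
    target = max(class_counts.values())
    return {c: target - n for c, n in class_counts.items()}

# HAM10000 lesion classes and approximate image counts.
ham10000 = {"nv": 6705, "mel": 1113, "bkl": 1099, "bcc": 514,
            "akiec": 327, "vasc": 142, "df": 115}
plan = augmentation_plan(ham10000)
print(plan["df"], plan["nv"])  # 6590 0
```

The nearly 60:1 ratio between the largest and smallest class is why unaugmented training tends to overfit the majority class.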
21
Velasquez R, Barja-Ore J, Salazar-Salvatierra E, Gutiérrez-Ilave M, Mauricio-Vilchez C, Mendoza R, Mayta-Tovalino F. Characteristics, Impact, and Visibility of Scientific Publications on Artificial Intelligence in Dentistry: A Scientometric Analysis. J Contemp Dent Pract 2022; 23:761-767. [PMID: 37283008 DOI: 10.5005/jp-journals-10024-3386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
AIM To analyze the bibliometric characteristics, impact, and visibility of scientific publications on artificial intelligence (AI) in dentistry in Scopus. MATERIALS AND METHODS Descriptive and cross-sectional bibliometric study, based on a systematic search of information in Scopus between 2017 and July 10, 2022. The search strategy was elaborated with Medical Subject Headings (MeSH) terms and Boolean operators. The analysis of bibliometric indicators was performed with Elsevier's SciVal program. RESULTS From 2017 to 2022, the number of publications in indexed scientific journals increased, especially in the Q1 (56.1%) and Q2 (30.6%) quartiles. Among the journals with the highest production, the majority were from the United States and the United Kingdom, and the Journal of Dental Research had the highest impact (14.9 citations per publication) and the most publications (31). In addition, Charité - Universitätsmedizin Berlin (FWCI: 8.24) and Joachim Krois (FWCI: 10.09), both from Germany, were the institution and author with the highest performance relative to the world average, respectively. The United States was the country with the highest number of published papers. CLINICAL SIGNIFICANCE Scientific production on artificial intelligence in dentistry is increasing, with a preference for publication in prestigious, high-impact scientific journals. There is a need to promote and consolidate strategies to develop collaborative research both nationally and internationally.
Affiliation(s)
- Ricardo Velasquez: Postgraduate Department, Faculty of Dentistry, Universidad Nacional Federico Villarreal, Lima, Peru
- John Barja-Ore: Research Direction, Universidad Privada del Norte, Lima, Peru
- Margot Gutiérrez-Ilave: Academic Department of Preventive and Social Stomatology, Faculty of Dentistry, Universidad Nacional Mayor de San Marcos, Lima, Peru
- Cesar Mauricio-Vilchez: Postgraduate Department, Faculty of Dentistry, Universidad Nacional Federico Villarreal, Lima, Peru
- Roman Mendoza: Postgraduate Department, Faculty of Dentistry, Universidad Nacional Federico Villarreal, Lima, Peru
- Frank Mayta-Tovalino: Vicerrectorado de Investigación, Universidad San Ignacio de Loyola, Av. la Fontana, La Molina, Lima, Peru
22
Chandrashekar G, AlQarni S, Bumann EE, Lee Y. Collaborative deep learning model for tooth segmentation and identification using panoramic radiographs. Comput Biol Med 2022;148:105829. [PMID: 35868047] [DOI: 10.1016/j.compbiomed.2022.105829]
Abstract
Panoramic radiographs are an integral part of effective dental treatment planning, supporting dentists in identifying impacted teeth, infections, malignancies, and other dental issues. However, screening for anomalies solely on the basis of a dentist's assessment may result in diagnostic inconsistency, posing difficulties in developing a successful treatment plan. Recent advancements in deep learning-based segmentation and object detection algorithms have enabled predictable and practical identification to assist in the evaluation of a patient's mineralized oral health, enabling dentists to construct a more successful treatment plan. However, there have been few efforts to develop collaborative models that enhance learning performance by leveraging individual models. This article describes a novel technique for enabling collaborative learning by incorporating tooth segmentation and identification models created independently from panoramic radiographs. This collaborative technique permits the aggregation of tooth segmentation and identification to produce enhanced results by recognizing and numbering existing teeth (up to 32 teeth). The experimental findings indicate that the proposed collaborative model is significantly more effective than individual learning models (e.g., 98.77% vs. 96% and 98.44% vs. 91% for tooth segmentation and recognition, respectively). Additionally, our models outperform the state-of-the-art segmentation and identification research. We demonstrated the effectiveness of collaborative learning in detecting and segmenting teeth in a variety of complex situations, including healthy dentition, missing teeth, orthodontic treatment in progress, and dentition with dental implants.
Affiliation(s)
- Geetha Chandrashekar: Department of Computer Science Electrical Engineering, University of Missouri, Kansas City, MO, USA
- Saeed AlQarni: Department of Computer Science Electrical Engineering, University of Missouri, Kansas City, MO, USA; Department of Computing and Informatics, Saudi Electronic University, Saudi Arabia
- Erin Ealba Bumann: Department of Oral and Craniofacial Sciences, University of Missouri, Kansas City, MO, USA
- Yugyung Lee: Department of Computer Science Electrical Engineering, University of Missouri, Kansas City, MO, USA
23
Engels P, Meyer O, Schönewolf J, Schlickenrieder A, Hickel R, Hesenius M, Gruhn V, Kühnisch J. Automated detection of posterior restorations in permanent teeth using artificial intelligence on intraoral photographs. J Dent 2022;121:104124. [PMID: 35395346] [DOI: 10.1016/j.jdent.2022.104124]
Abstract
OBJECTIVES Intraoral photographs might be considered the machine-readable equivalent of a clinical-based visual examination and can potentially be used to detect and categorize dental restorations. The first objective of this study was to develop a deep learning-based convolutional neural network (CNN) for automated detection and categorization of posterior composite, cement, amalgam, gold and ceramic restorations on clinical photographs. Second, this study aimed to determine the diagnostic accuracy for the developed CNN (test method) compared to that of an expert evaluation (reference standard). METHODS The whole image set of 1,761 images (483 of unrestored teeth, 570 of composite restorations, 213 of cements, 278 of amalgam restorations, 125 of gold restorations and 92 of ceramic restorations) was divided into a training set (N=1,407; 401, 447, 169, 231, 93, and 66 images, respectively) and a test set (N=354; 82, 123, 44, 47, 32, and 26 images, respectively). The expert diagnoses served as a reference standard for cyclic training and repeated evaluation of the CNN (ResNeXt-101-32x8d), which was trained by using image augmentation and transfer learning. Statistical analysis included the calculation of contingency tables, areas under the receiver operating characteristic curve and saliency maps. RESULTS After training was complete, the CNN was able to categorize restorations correctly with the following diagnostic accuracy values: 94.9% for unrestored teeth, 92.9% for composites, 98.3% for cements, 99.2% for amalgam restorations, 99.4% for gold restorations and 97.8% for ceramic restorations. CONCLUSIONS It was possible to categorize different types of posterior restorations on intraoral photographs automatically with good diagnostic accuracy. CLINICAL SIGNIFICANCE Dental diagnostics might be supported by artificial intelligence-based algorithms in the future. However, further improvements are needed to increase accuracy and practicability.
Affiliation(s)
- Paula Engels: Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University Munich, Germany
- Ole Meyer: Institute for Software Engineering, University of Duisburg-Essen, Essen, Germany
- Jule Schönewolf: Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University Munich, Germany
- Anne Schlickenrieder: Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University Munich, Germany
- Reinhard Hickel: Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University Munich, Germany
- Marc Hesenius: Institute for Software Engineering, University of Duisburg-Essen, Essen, Germany
- Volker Gruhn: Institute for Software Engineering, University of Duisburg-Essen, Essen, Germany
- Jan Kühnisch: Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University Munich, Germany
24
Precision dentistry—what it is, where it fails (yet), and how to get there. Clin Oral Investig 2022;26:3395-3403. [PMID: 35284954] [PMCID: PMC8918420] [DOI: 10.1007/s00784-022-04420-1]
Abstract
Objectives Dentistry is stuck between the one-size-fits-all approach towards diagnostics and therapy employed for a century and the era of stratified medicine. The present review presents the concept of precision dentistry, i.e., the next step beyond stratification into risk groups, and lays out where we stand, but also what challenges lie ahead for precision dentistry to come true. Material and methods Narrative literature review. Results Current approaches for enabling more precise diagnostics and therapies focus on stratification of individuals using clinical or social risk factors or indicators. Most research in dentistry does not focus on predictions — the key for precision dentistry — but on associations. We critically discuss why both approaches (a focus on a limited number of risk factors or indicators, and a focus on associations) are insufficient and elaborate on what we think may allow us to overcome the status quo. Conclusions Leveraging more diverse and broad data stemming from routine or unusual sources via advanced data analytics, and testing the resulting prediction models rigorously, may allow further steps towards more precise oral and dental care. Clinical significance Precision dentistry refers to tailoring diagnostics and therapy to an individual; it builds on modelling, prediction making and rigorous testing. Most studies in the dental domain focus on showing associations and do not attempt to make any predictions. Moreover, the datasets used are narrow and usually collected purposively following a clinical reasoning. Opening routine data silos and involving uncommon data sources to harvest broad data, and leveraging them using advanced analytics, could facilitate precision dentistry.
25
Abstract
Data are a key resource for modern societies and are expected to improve the quality, accessibility, affordability, safety, and equity of health care. Dental care and research are currently transforming into what we term data dentistry, with 3 main applications: 1) medical data analysis uses deep learning, allowing one to master unprecedented amounts of data (language, speech, imagery) and put them to productive use. 2) Data-enriched clinical care integrates data from the individual (e.g., demographic, social, clinical and omics data, consumer data), setting (e.g., geospatial, environmental, provider-related data), and systems level (payer or regulatory data to characterize input, throughput, output, and outcomes of health care) to provide a comprehensive and continuous real-time assessment of biologic perturbations, individual behaviors, and context. Such care may contribute to a deeper understanding of health and disease and a more precise, personalized, predictive, and preventive care. 3) Data for research include open research data and data sharing, allowing one to appraise, benchmark, pool, replicate, and reuse data. Concerns about, and limited confidence in, data-driven applications, stakeholders' and systems' capabilities, and a lack of data standardization and harmonization currently limit the development and implementation of data dentistry. Aspects of bias and data-user interaction require attention. Action items for the dental community circle around increasing data availability, refinement, and usage; demonstrating the safety, value, and usefulness of applications; educating the dental workforce and consumers; providing performant and standardized infrastructure and processes; and incentivizing and adopting open data and data sharing.
Affiliation(s)
- F Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité-Universitätsmedizin Berlin, Berlin, Germany
- J Krois: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité-Universitätsmedizin Berlin, Berlin, Germany
26
Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Panoptic Segmentation on Panoramic Radiographs: Deep Learning-Based Segmentation of Various Structures Including Maxillary Sinus and Mandibular Canal. J Clin Med 2021;10:2577. [PMID: 34208024] [PMCID: PMC8230590] [DOI: 10.3390/jcm10122577]
Abstract
Panoramic radiographs, also known as orthopantomograms, are routinely used in most dental clinics. However, it has been difficult to develop an automated method that detects the various structures present in these radiographs. One of the main reasons for this is that structures of various sizes and shapes are collectively shown in the image. In order to solve this problem, the recently proposed concept of panoptic segmentation, which integrates instance segmentation and semantic segmentation, was applied to panoramic radiographs. A state-of-the-art deep neural network model designed for panoptic segmentation was trained to segment the maxillary sinus, maxilla, mandible, mandibular canal, normal teeth, treated teeth, and dental implants on panoramic radiographs. Unlike conventional semantic segmentation, each object in the tooth and implant classes was individually classified. For evaluation, the panoptic quality, segmentation quality, recognition quality, intersection over union (IoU), and instance-level IoU were calculated. The evaluation and visualization results showed that the deep learning-based artificial intelligence model can perform panoptic segmentation of images, including those of the maxillary sinus and mandibular canal, on panoramic radiographs. This automatic machine learning method might assist dental practitioners in setting up treatment plans and diagnosing oral and maxillofacial diseases.
Affiliation(s)
- Jun-Young Cha: Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Hyung-In Yoon: Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- In-Sung Yeo: Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Kyung-Hoe Huh: Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Jung-Suk Han: Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
27
Cejudo JE, Chaurasia A, Feldberg B, Krois J, Schwendicke F. Classification of Dental Radiographs Using Deep Learning. J Clin Med 2021;10:1496. [PMID: 33916800] [PMCID: PMC8038360] [DOI: 10.3390/jcm10071496]
Abstract
OBJECTIVES To retrospectively assess radiographic data and to prospectively classify radiographs (namely, panoramic, bitewing, periapical, and cephalometric images), we compared three deep learning architectures for their classification performance. METHODS Our dataset consisted of 31,288 panoramic, 43,598 periapical, 14,326 bitewing, and 1176 cephalometric radiographs from two centers (Berlin/Germany; Lucknow/India). For a subset of images L (32,381 images), image classifications were available and manually validated by an expert. The remaining subset of images U was iteratively annotated using active learning, with ResNet-34 being trained on L, least-confidence informative sampling being performed on U, and the most uncertain image classifications from U being reviewed by a human expert and iteratively used for re-training. We then employed a baseline convolutional neural network (CNN), a residual network (another ResNet-34, pretrained on ImageNet), and a capsule network (CapsNet) for classification. Early stopping was used to prevent overfitting. Evaluation of the model performances followed stratified k-fold cross-validation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to provide visualizations of the weighted activation maps. RESULTS All three models showed high accuracy (>98%), with significantly higher accuracy, F1-score, precision, and sensitivity for ResNet than for the baseline CNN and CapsNet (p < 0.05). Specificity was not significantly different. ResNet achieved the best performance at small variance and fastest convergence. Misclassification was most common between bitewings and periapicals. Model activation was most notable in the inter-arch space for bitewings, interdentally for periapicals, on bony structures of the maxilla and mandible for panoramics, and on the viscerocranium for cephalometrics. CONCLUSIONS Regardless of the model, high classification accuracies were achieved. Image features considered for classification were consistent with expert reasoning.
Affiliation(s)
- Jose E. Cejudo: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 14197 Berlin, Germany
- Akhilanand Chaurasia: ITU/WHO Focus Group AI on Health, Topic Group Dentistry, 1211 Geneva, Switzerland; Department of Oral Medicine and Radiology, King George’s Medical University, Lucknow 226003, Uttar Pradesh, India
- Ben Feldberg: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 14197 Berlin, Germany
- Joachim Krois: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dentistry, 1211 Geneva, Switzerland
- Falk Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dentistry, 1211 Geneva, Switzerland