1
Zahoor Ul Huqh M, Abdullah JY, Husein A, AL-Rawas M, W. Ahmad WMA, Jamayet NB, Alam MK, Bin Yahya MR, Selvaraj S, Tabnjh AK. Development of artificial neural network model for predicting the rapid maxillary expansion technique in children with cleft lip and palate. Front Dent Med 2025; 6:1530372. [PMID: 40303983 PMCID: PMC12037576 DOI: 10.3389/fdmed.2025.1530372]
Abstract
Aim: The study aimed to determine mid-palatal suture (MPS) maturation stages and to develop a binary logistic regression (BLR) model to predict whether surgical or non-surgical rapid maxillary expansion (RME) is indicated in children with unilateral cleft lip and palate (UCLP). Methods: A retrospective case-control study of 100 subjects was conducted. Data were gathered from the databases of Hospital Universiti Sains Malaysia and Hospital Raja Perempuan Zainab II. Cone beam computed tomography scans of both cleft and non-cleft individuals were used to determine the MPS maturation stages; images were analyzed with Romexis software version 3.8.2. Results: The BLR model related the probability of the event of interest, P(Y = 1), to a linear combination of independent variables (Xs) through the logit link function. Candidate predictors (age, gender, cleft status, malocclusion category, and MPS stage) were chosen for their possible role in predicting the RME technique in children with and without UCLP. A subset of these variables was validated via a multilayer feed-forward neural network (MLFFNN). Conclusions: The hybrid biometric model developed in this work, which combines bootstrap resampling and BLR implemented in R syntax, was evaluated by how accurately it predicted a binary response variable. An MLFFNN-based validation method was used to assess the precision of the generated model and indicated good predictive performance.
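The BLR step described above can be sketched as follows. This is a minimal illustration of the logit link, not the study's fitted model: the intercept, coefficients, and patient vector below are hypothetical placeholders.

```python
import numpy as np

def logit_inverse(z):
    """Inverse of the logit link: maps a linear predictor to P(Y = 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_surgical_rme(x, beta0, betas):
    """P(surgical RME indicated) from a binary logistic regression, given a
    predictor vector x (e.g. age, gender, cleft status, malocclusion
    category, MPS stage). Coefficients are illustrative, not the paper's."""
    return logit_inverse(beta0 + float(np.dot(betas, x)))

beta0 = -2.0                                  # hypothetical intercept
betas = np.array([0.15, 0.4, 1.2, 0.3, 0.8])  # hypothetical coefficients
x = np.array([12, 1, 1, 2, 3])                # hypothetical patient vector
p = predict_surgical_rme(x, beta0, betas)     # a probability in (0, 1)
```

The logit link guarantees that any linear predictor value maps into (0, 1), which is why BLR is the standard choice for a binary surgical/non-surgical outcome.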
Affiliation(s)
- Mohamed Zahoor Ul Huqh
  - International Research Fellow, Faculty of Dentistry, SEGi University, Petaling Jaya, Selangor, Malaysia
- Johari Yap Abdullah
  - Craniofacial Imaging Laboratory, School of Dental Sciences, Health Campus, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Malaysia
  - Dental Research Unit, Center for Global Health Research, Saveetha Medical College and Hospital, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India
- Adam Husein
  - Prosthodontic Unit, School of Dental Sciences, Health Campus, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Malaysia
  - College of Dental Medicine, Department of Preventive and Restorative Dentistry, University of Sharjah, Sharjah, United Arab Emirates
- Matheel AL-Rawas
  - Prosthodontic Unit, School of Dental Sciences, Health Campus, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Malaysia
- Wan Muhamad Amir W. Ahmad
  - Department of Biostatistics, School of Dental Sciences, Health Campus, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Malaysia
- Nafij Bin Jamayet
  - Division of Clinical Dentistry (Prosthodontics), School of Dentistry, International Medical University, Bukit Jalil, Kuala Lumpur, Malaysia
- Mohd Rosli Bin Yahya
  - Oral & Maxillofacial Surgery Department, Hospital Raja Perempuan Zainab II, Kota Bharu, Malaysia
- Siddharthan Selvaraj
  - Department of Public Health Dentistry, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India
  - Department of Dental Research Cell, Dr. D. Y. Patil Dental College & Hospital, Pune, India
- Abedelmalek Kalefh Tabnjh
  - Dental Research Unit, Center for Global Health Research, Saveetha Medical College and Hospital, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India
  - Department of Cariology, Odontology School, Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
  - Department of Applied Dental Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, Jordan
2
Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:641-655. [PMID: 38637235 DOI: 10.1016/j.oooo.2023.12.790]
Abstract
BACKGROUND: Artificial intelligence (AI) technology has been increasingly developed for oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms across dentomaxillofacial imaging modalities. STUDY DESIGN: A systematic search of the PubMed and Scopus databases was performed using a combination of the keywords "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any mismatch was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS: The search returned 3,392 articles, of which 194 were included after evaluation of titles, abstracts, and full texts. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaw, and bone), and osteoporosis detection. CONCLUSION: Despite their limitations, the AI models showed promising results. Further studies are needed to explore specific applications and real-world scenarios before these models can be confidently integrated into dental practice.
Affiliation(s)
- Serlie Hartoonian
  - School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Matine Hosseini
  - School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Iman Yousefi
  - School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mina Mahdian
  - Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook University, Stony Brook, NY, USA
- Mitra Ghazizadeh Ahsaie
  - Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
3
Tang H, Liu S, Tan W, Fu L, Yan M, Feng H. Prediction of midpalatal suture maturation stage based on transfer learning and enhanced vision transformer. BMC Med Inform Decis Mak 2024; 24:232. [PMID: 39174951 PMCID: PMC11340164 DOI: 10.1186/s12911-024-02598-w]
Abstract
BACKGROUND: Maxillary expansion is an important treatment for maxillary transverse hypoplasia. The choice of expansion method depends on the midpalatal suture maturation stage, which orthodontists conventionally diagnose from palatal-plane cone beam computed tomography (CBCT) images, a process with low efficiency and strong subjectivity. This study develops and evaluates an enhanced vision transformer (ViT) to automatically classify CBCT images of midpalatal sutures at different maturation stages. METHODS: In recent years, convolutional neural networks (CNNs) used to classify midpalatal suture images at different maturation stages have positively informed the choice of clinical maxillary expansion method. However, a CNN cannot adequately learn the long-distance dependencies between image regions and features, which are also required for global recognition of midpalatal suture CBCT images. The self-attention mechanism of the ViT can capture relationships between distant pixels, but the ViT lacks the inductive bias of a CNN and needs more training data. To address this, a CNN-enhanced ViT model based on transfer learning is proposed to classify midpalatal suture CBCT images. In this study, 2,518 CBCT images of the palatal plane were collected and divided into a training set of 1,259 images, a validation set of 506 images, and a test set of 753 images. After preprocessing of the training images, the CNN-enhanced ViT model was trained and tuned, and its generalization ability was evaluated on the test set. RESULTS: On the test set, the proposed ViT model achieved a classification accuracy of 95.75%, with a macro-averaged area under the receiver operating characteristic curve (AUC) of 97.89% and a micro-averaged AUC of 98.36%. The best-performing CNN model, EfficientNetV2-S, achieved a classification accuracy of 93.76%, and the clinician achieved 89.10%. CONCLUSIONS: The experimental results show that this method can effectively classify CBCT images of midpalatal suture maturation stages, with performance better than that of a clinician. The model can therefore provide a valuable reference for orthodontists and assist them in making a correct diagnosis.
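The macro- and micro-averaged AUCs reported above differ only in how per-class results are pooled: macro averages one-vs-rest AUCs across classes, while micro computes a single AUC over the flattened class-indicator matrix. A minimal sketch on toy data (not the study's):

```python
import numpy as np

def binary_auc(y_true, y_score):
    """AUC as the probability a positive outranks a negative (ties 0.5)."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def macro_micro_auc(y_true, y_prob):
    """y_true: (n,) integer labels; y_prob: (n, k) per-class scores.
    Macro: mean of per-class one-vs-rest AUCs.
    Micro: one AUC over the flattened one-hot indicators and scores."""
    k = y_prob.shape[1]
    onehot = np.eye(k)[y_true]
    macro = np.mean([binary_auc(onehot[:, c], y_prob[:, c]) for c in range(k)])
    micro = binary_auc(onehot.ravel(), y_prob.ravel())
    return macro, micro

y_true = np.array([0, 1, 2, 1, 0])
y_prob = np.eye(3)[y_true]  # perfectly confident toy scores
macro, micro = macro_micro_auc(y_true, y_prob)  # both 1.0 for perfect scores
```

Micro-averaging weights every image equally, so with imbalanced maturation stages it can sit above the macro average, as in the figures reported here.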
Affiliation(s)
- Haomin Tang
  - College of Medicine, Guizhou University, Guiyang, China
- Shu Liu
  - Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Weijie Tan
  - Guizhou Big Data Academy, Guizhou University, Guiyang, 550025, China
- Lingling Fu
  - College of Medicine, Guizhou University, Guiyang, China
- Ming Yan
  - Department of Oral and Maxillofacial Surgery, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Hongchao Feng
  - Department of Oral and Maxillofacial Surgery, Guiyang Hospital of Stomatology, Guiyang, 550002, China
4
Altındağ A, Bahrilli S, Çelik Ö, Bayrakdar İŞ, Orhan K. Tooth numbering and classification on bitewing radiographs: an artificial intelligence pilot study. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:679-689. [PMID: 38632035 DOI: 10.1016/j.oooo.2024.02.012]
Abstract
OBJECTIVE: The aim of this study was to assess the efficacy of a deep learning methodology for the automated identification and enumeration of permanent teeth in bitewing radiographs. STUDY DESIGN: A total of 1,248 bitewing radiographs were annotated using the CranioCatch labeling program, developed in Eskişehir, Turkey. The dataset was partitioned into 3 subsets: training (n = 1,000; 80%), validation (n = 124; 10%), and test (n = 124; 10%) sets. The images were subjected to a 3 × 3 clash operation to enhance the clarity of the labeled regions. RESULTS: On the test set, the artificial intelligence model built on the YOLOv5 architecture achieved an F1 score of 0.9913, sensitivity of 0.9954, and precision of 0.9873. CONCLUSION: Numerical identification of teeth by deep learning-based artificial intelligence algorithms applied to bitewing radiographs demonstrated notable efficacy. Clinical decision support software augmented by artificial intelligence has the potential to enhance the efficiency and effectiveness of dental practitioners.
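As a sanity check on the reported metrics, F1 is the harmonic mean of precision and recall (sensitivity), so the three figures above should be mutually consistent:

```python
def f1_from(precision, recall):
    """F1 score: harmonic mean of precision and recall (sensitivity)."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_from(0.9873, 0.9954)
print(round(f1, 4))  # 0.9913, matching the reported F1
```

The recomputed value agrees with the reported 0.9913 to four decimal places.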
Affiliation(s)
- Ali Altındağ
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Necmettin Erbakan University, Konya, Turkey
- Serkan Bahrilli
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Necmettin Erbakan University, Konya, Turkey
- Özer Çelik
  - Department of Mathematics-Computer, Faculty of Science, Eskişehir Osmangazi University, Eskişehir, Turkey
- İbrahim Şevki Bayrakdar
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, Eskişehir, Turkey
- Kaan Orhan
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
5
da Silva RLB, Yang S, Kim D, Kim JH, Lim SH, Han J, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. Automatic segmentation and classification of frontal sinuses for sex determination from CBCT scans using a two-stage anatomy-guided attention network. Sci Rep 2024; 14:11750. [PMID: 38782964 PMCID: PMC11116511 DOI: 10.1038/s41598-024-62211-y]
Abstract
Sex determination is essential for identifying unidentified individuals, particularly in forensic contexts. Traditional methods for sex determination involve manual measurements of skeletal features on CBCT scans. However, these manual measurements are labor-intensive, time-consuming, and error-prone. The purpose of this study was to automatically and accurately determine sex on a CBCT scan using a two-stage anatomy-guided attention network (SDetNet). SDetNet consisted of a 2D frontal sinus segmentation network (FSNet) and a 3D anatomy-guided attention network (SDNet). FSNet segmented frontal sinus regions in the CBCT images and extracted regions of interest (ROIs) near them. Then, the ROIs were fed into SDNet to predict sex accurately. To improve sex determination performance, we proposed multi-channel inputs (MSIs) and an anatomy-guided attention module (AGAM), which encouraged SDetNet to learn differences in the anatomical context of the frontal sinus between males and females. SDetNet showed superior sex determination performance in the area under the receiver operating characteristic curve, accuracy, Brier score, and specificity compared with the other 3D CNNs. Moreover, the results of ablation studies showed a notable improvement in sex determination with the embedding of both MSI and AGAM. Consequently, SDetNet demonstrated automatic and accurate sex determination by learning the anatomical context information of the frontal sinus on CBCT scans.
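Of the metrics listed above, the Brier score is the least familiar: it is simply the mean squared difference between the predicted probability and the binary outcome, so lower is better. A minimal sketch (the predictions below are made-up numbers for illustration, not the study's):

```python
import numpy as np

def brier_score(y_true, y_prob):
    """Mean squared error between predicted probabilities and binary
    outcomes: 0 is perfect; 0.25 matches a constant, uninformative 0.5."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    return float(np.mean((y_prob - y_true) ** 2))

# Hypothetical predictions for four scans (1 = male, 0 = female):
score = brier_score([1, 0, 1, 0], [0.9, 0.2, 0.8, 0.1])  # 0.025
```

Unlike accuracy, the Brier score rewards well-calibrated probabilities, which is why it is a useful companion metric for a forensic classifier.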
Affiliation(s)
- Renan Lucio Berbel da Silva
  - Discipline of Oral Radiology, Department of Stomatology, School of Dentistry, University of São Paulo, São Paulo, SP, Brazil
- Su Yang
  - Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, South Korea
- DaEl Kim
  - Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, 08826, South Korea
- Jun Ho Kim
  - Discipline of Oral Radiology, Department of Stomatology, School of Dentistry, University of São Paulo, São Paulo, SP, Brazil
- Sang-Heon Lim
  - Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, 08826, South Korea
- Jiyong Han
  - Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, 08826, South Korea
- Jun-Min Kim
  - Department of Electronics and Information Engineering, Hansung University, Seoul, 02876, South Korea
- Jo-Eun Kim
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, South Korea
- Kyung-Hoe Huh
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, South Korea
- Sam-Sun Lee
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, South Korea
- Min-Suk Heo
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, South Korea
- Won-Jin Yi
  - Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, South Korea
  - Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, South Korea
6
Strunga M, Urban R, Surovková J, Thurzo A. Artificial Intelligence Systems Assisting in the Assessment of the Course and Retention of Orthodontic Treatment. Healthcare (Basel) 2023; 11:683. [PMID: 36900687 PMCID: PMC10000479 DOI: 10.3390/healthcare11050683]
Abstract
This scoping review examines contemporary applications of advanced artificial intelligence (AI) software in orthodontics, focusing on its potential to improve daily working protocols while also highlighting its limitations. The aim of the review was to evaluate the accuracy and efficiency of current AI-based systems compared with conventional methods in diagnosis, in assessing patients' treatment progress, and in follow-up stability. The researchers searched various online databases and identified diagnostic software and dental monitoring software as the most studied software in contemporary orthodontics. The former can accurately identify anatomical landmarks used for cephalometric analysis, while the latter enables orthodontists to monitor each patient thoroughly, define specific desired outcomes, track progress, and warn of potential changes in pre-existing pathology. However, there is limited evidence on the stability of treatment outcomes and on relapse detection. The study concludes that AI is an effective tool for managing orthodontic treatment from diagnosis to retention, benefiting both patients and clinicians: patients find the software easy to use and feel better cared for, while clinicians can make diagnoses more easily and assess compliance and damage to braces or aligners more quickly and frequently.
7
A Novel Deep Learning-Based Approach for Segmentation of Different Type Caries Lesions on Panoramic Radiographs. Diagnostics (Basel) 2023; 13:202. [PMID: 36673010 PMCID: PMC9858411 DOI: 10.3390/diagnostics13020202]
Abstract
The study aims to evaluate the diagnostic performance of a deep learning-based artificial intelligence system for the segmentation of occlusal, proximal, and cervical caries lesions on panoramic radiographs. The study included 504 anonymized panoramic radiographs obtained from the radiology archive of the Department of Oral and Maxillofacial Radiology, Inonu University Faculty of Dentistry, from January 2018 to January 2020. The study proposes the Dental Caries Detection Network (DCDNet) architecture for dental caries segmentation. The main difference between DCDNet and other segmentation architectures is that the last part of DCDNet contains a Multi-Predicted Output (MPO) structure, in which the final feature map is split into three separate paths for detecting occlusal, proximal, and cervical caries. Extensive experiments were run to analyze the performance of the DCDNet architecture: the proposed model achieved an average F1-score of 62.79%, exceeding the highest average F1-score achieved by the state-of-the-art segmentation models by 15.69%. These results show that the proposed artificial intelligence-based model, by enabling detection of carious lesions at different locations with high success, can be an indispensable auxiliary tool for dentists in the diagnosis and treatment planning of such lesions.
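The MPO idea, one shared feature map routed into separate per-lesion-type prediction paths, can be sketched in miniature. This is a toy stand-in using a channel split and mean reduction, not DCDNet's actual layers; all shapes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpo_heads(feature_map, n_heads=3):
    """Split the channel axis of a shared feature map into one path per
    caries type (occlusal, proximal, cervical), then reduce each path to a
    per-pixel score map. A toy stand-in for DCDNet's MPO structure."""
    paths = np.split(feature_map, n_heads, axis=0)  # (C/3, H, W) each
    return [p.mean(axis=0) for p in paths]          # one (H, W) map each

features = rng.standard_normal((12, 8, 8))          # toy backbone output
occlusal, proximal, cervical = mpo_heads(features)  # three (8, 8) score maps
```

Dedicating a path per lesion type lets each head specialize while the expensive backbone computation is shared, which is the design motivation the abstract describes.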
8
Ari T, Sağlam H, Öksüzoğlu H, Kazan O, Bayrakdar İŞ, Duman SB, Çelik Ö, Jagtap R, Futyma-Gąbka K, Różyło-Kalinowska I, Orhan K. Automatic Feature Segmentation in Dental Periapical Radiographs. Diagnostics (Basel) 2022; 12:3081. [PMID: 36553088 PMCID: PMC9777016 DOI: 10.3390/diagnostics12123081]
Abstract
The large number of archived digital images makes it easy for radiology to provide data for Artificial Intelligence (AI) evaluation, and AI algorithms are increasingly applied to disease detection. The aim of the study is to perform a diagnostic evaluation of periapical radiographs with an AI model based on Convolutional Neural Networks (CNNs). The dataset comprised 1,169 adult periapical radiographs labelled in the CranioCatch annotation software. Deep learning was performed using a U-Net model implemented with the PyTorch library. The deep learning-based AI models improved the segmentation of carious lesions, crowns, dental pulp, dental fillings, periapical lesions, and root canal fillings in periapical images. Sensitivity, precision, and F1 score were, respectively: 0.82, 0.82, and 0.82 for carious lesions; 1, 1, and 1 for crowns; 0.97, 0.87, and 0.92 for dental pulp; 0.95, 0.95, and 0.95 for fillings; 0.92, 0.85, and 0.88 for periapical lesions; and 1, 0.96, and 0.98 for root canal fillings. The success of AI algorithms in evaluating periapical radiographs is encouraging and promising for their use in routine clinical processes as a clinical decision support system.
Affiliation(s)
- Tugba Ari
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, 26040 Eskişehir, Turkey
- Hande Sağlam
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, 26040 Eskişehir, Turkey
- Hasan Öksüzoğlu
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, 26040 Eskişehir, Turkey
- Orhan Kazan
  - Health Services Vocational School, Gazi University, 06560 Ankara, Turkey
- İbrahim Şevki Bayrakdar
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, 26040 Eskişehir, Turkey
  - Eskişehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, 26040 Eskişehir, Turkey
  - Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Suayip Burak Duman
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Inonu University, 44000 Malatya, Turkey
- Özer Çelik
  - Eskişehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, 26040 Eskişehir, Turkey
  - Department of Mathematics-Computer, Faculty of Science, Eskişehir Osmangazi University, 26040 Eskişehir, Turkey
- Rohan Jagtap
  - Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Karolina Futyma-Gąbka
  - Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-059 Lublin, Poland
- Ingrid Różyło-Kalinowska
  - Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-059 Lublin, Poland
  - Correspondence: Tel.: +48-81-502-1800
- Kaan Orhan
  - Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-059 Lublin, Poland
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, 0600 Ankara, Turkey
  - Ankara University Medical Design Application and Research Center (MEDITAM), 0600 Ankara, Turkey