1.
Süküt Y, Yurdakurban E, Duran GS. Accuracy of deep learning-based upper airway segmentation. Journal of Stomatology, Oral and Maxillofacial Surgery 2024:102048. [PMID: 39244033; DOI: 10.1016/j.jormas.2024.102048] [Received: 08/01/2024; Accepted: 09/04/2024; Indexed: 09/09/2024]
Abstract
INTRODUCTION In orthodontic treatment, accurately assessing upper airway volume and morphology is essential for proper diagnosis and planning. Cone beam computed tomography (CBCT) is used to assess upper airway volume through manual, semi-automatic, and automatic segmentation methods. This study evaluates upper airway segmentation accuracy by comparing an automatic model and a semi-automatic method against the gold-standard manual method.
MATERIALS AND METHODS An automatic segmentation model was trained with the MONAI Label framework to segment the upper airway from CBCT images. The open-source program ITK-SNAP was used for semi-automatic segmentation. The accuracy of both methods was evaluated against manual segmentations using the Dice Similarity Coefficient (DSC), precision, recall, 95% Hausdorff Distance (HD), and volumetric differences.
RESULTS The automatic segmentation group averaged a DSC of 0.915±0.041 and the semi-automatic group 0.940±0.021, indicating clinically acceptable accuracy for both methods. The 95% HD showed that semi-automatic segmentation (0.997±0.585) was closer to manual segmentation than automatic segmentation (1.447±0.674). Volumetric comparisons revealed no statistically significant differences between automatic and manual segmentation for total, oropharyngeal, and velopharyngeal airway volumes, nor between the semi-automatic and manual methods across these regions.
CONCLUSION Both the automatic and the semi-automatic method, each built on open-source software, align effectively with manual segmentation. Implementing them can aid decision-making by allowing faster and easier upper airway segmentation with comparable accuracy in orthodontic practice.
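The metrics reported in segmentation studies like this one (DSC, precision, recall, and the 95th-percentile Hausdorff distance) can all be computed from a pair of binary masks. A minimal NumPy/SciPy sketch, not taken from the study's code (function names are illustrative, and distances here are taken over all foreground voxels, whereas many toolkits restrict them to surface voxels):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_metrics(pred, gt):
    """Dice, precision, and recall between two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dice = 2 * tp / (pred.sum() + gt.sum())
    return dice, tp / pred.sum(), tp / gt.sum()

def hd95(pred, gt, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance (voxel units unless
    a physical voxel spacing is given).

    distance_transform_edt on the complement of a mask gives, at every
    voxel, the distance to that mask's nearest foreground voxel.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    d_pred_to_gt = distance_transform_edt(~gt, sampling=spacing)[pred]
    d_gt_to_pred = distance_transform_edt(~pred, sampling=spacing)[gt]
    return np.percentile(np.hstack([d_pred_to_gt, d_gt_to_pred]), 95)
```

Intuitively, a DSC near 0.92-0.94, as reported above, means the predicted and manual airway masks overlap on roughly that fraction of their combined volume, while the 95% HD discards the worst 5% of boundary disagreements before reporting the distance.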
Affiliation(s)
- Yağızalp Süküt
- Department of Orthodontics, Gülhane Faculty of Dentistry, University of Health Sciences, Ankara 06010, Turkey.
- Ebru Yurdakurban
- Department of Orthodontics, Faculty of Dentistry, Muğla Sıtkı Koçman University, Muğla 48000, Turkey
- Gökhan Serhat Duran
- Department of Orthodontics, Faculty of Dentistry, Çanakkale 18 March University, Çanakkale 17000, Turkey
2.
Shetty S, Mubarak AS, R David L, Al Jouhari MO, Talaat W, Al-Rawi N, AlKawas S, Shetty S, Uzun Ozsahin D. The Application of Mask Region-Based Convolutional Neural Networks in the Detection of Nasal Septal Deviation Using Cone Beam Computed Tomography Images: Proof-of-Concept Study. JMIR Form Res 2024; 8:e57335. [PMID: 39226096; PMCID: PMC11408888; DOI: 10.2196/57335] [Received: 02/13/2024; Revised: 05/07/2024; Accepted: 05/27/2024; Indexed: 09/04/2024]
Abstract
BACKGROUND Artificial intelligence (AI) models are increasingly being studied for the detection of variations and pathologies in different imaging modalities. Nasal septal deviation (NSD) is an important anatomical variation with clinical implications, yet AI-based radiographic detection of NSD has not previously been studied.
OBJECTIVE This research aimed to develop and evaluate a real-time model that can detect probable NSD using cone beam computed tomography (CBCT) images.
METHODS Coronal section images were obtained from 204 full-volume CBCT scans, which two maxillofacial radiologists classified as normal or deviated. The images were then used to train and test the AI model. Mask region-based convolutional neural networks (Mask R-CNNs) with three different backbones (ResNet50, ResNet101, and MobileNet) were used to detect a deviated nasal septum in the 204 CBCT images. To further improve detection, an image preprocessing step (contrast enhancement [CEH]) was added.
RESULTS The best-performing model, CEH-ResNet101, achieved a mean average precision of 0.911, with an area under the curve of 0.921.
CONCLUSIONS The model's performance shows that it is capable of detecting nasal septal deviation. Future research in this field should focus on additional image preprocessing and on detecting NSD across multiple planes using 3D images.
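Mean average precision, the headline metric here, is built on the intersection-over-union (IoU) between predicted and ground-truth regions: a prediction counts as a true positive when its IoU with a ground-truth box exceeds a threshold, and precision is then averaged over recall levels and classes. A minimal sketch of box IoU (an illustrative helper, not code from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

For an instance-segmentation model such as Mask R-CNN, the same idea applies at the pixel level, with IoU computed between predicted and ground-truth masks.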
Affiliation(s)
- Shishir Shetty
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Auwalu Saleh Mubarak
- Operational Research Center in Healthcare, Near East University, Nicosia, Turkey
- Leena R David
- Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Mhd Omar Al Jouhari
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Wael Talaat
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Natheer Al-Rawi
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Sausan AlKawas
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Sunaina Shetty
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Dilber Uzun Ozsahin
- Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
3.
Ismail IN, Subramaniam PK, Chi Adam KB, Ghazali AB. Application of Artificial Intelligence in Cone-Beam Computed Tomography for Airway Analysis: A Narrative Review. Diagnostics (Basel) 2024; 14:1917. [PMID: 39272702; PMCID: PMC11394605; DOI: 10.3390/diagnostics14171917] [Received: 07/30/2024; Revised: 08/25/2024; Accepted: 08/29/2024; Indexed: 09/15/2024]
Abstract
Cone-beam computed tomography (CBCT) has emerged as a promising tool for analysis of the upper airway, leveraging its ability to provide three-dimensional information, minimal radiation exposure, affordability, and widespread accessibility. The integration of artificial intelligence (AI) in CBCT for airway analysis has improved the accuracy and efficiency of diagnosing and managing airway-related conditions. This review explores the current applications of AI in CBCT for airway analysis, highlighting its components and processes, applications, benefits, challenges, and potential future directions. A comprehensive literature review was conducted, focusing on studies published in the last decade that discuss AI applications in CBCT airway analysis. Many studies reported significant improvements in the segmentation and measurement of airway volumes from CBCT using AI, facilitating accurate diagnosis of airway-related conditions. These AI models demonstrated high accuracy and consistency in automated segmentation, volume measurement, and 3D reconstruction, which enhanced diagnostic accuracy and enabled prediction of treatment outcomes. Despite these advancements, challenges remain in integrating AI into clinical workflows, and variability in AI performance across different populations and imaging settings necessitates further validation studies. Continued research and development are essential to overcome current challenges and fully realize the potential of AI in airway analysis.
Affiliation(s)
- Izzati Nabilah Ismail
- Oral and Maxillofacial Surgery Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
- Pram Kumar Subramaniam
- Oral and Maxillofacial Surgery Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
- Khairul Bariah Chi Adam
- Oral and Maxillofacial Surgery Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
- Ahmad Badruddin Ghazali
- Oral Radiology Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
4.
Leeraha C, Kusakunniran W, Yodrabum N, Chaisrisawadisuk S, Vathanophas V, Siriapisith T. Performance enhancement of deep learning based solutions for pharyngeal airway space segmentation on MRI scans. Sci Rep 2024; 14:19671. [PMID: 39181978; PMCID: PMC11344857; DOI: 10.1038/s41598-024-70826-4] [Received: 05/18/2024; Accepted: 08/21/2024; Indexed: 08/27/2024]
Abstract
The automatic segmentation of the pharyngeal airway space has many potential medical uses, one of which is to help facilitate the creation of the Tübingen palatal plate. It is therefore important to understand which methods are suitable for this task. Here, neural-network-based solutions available in the literature are compared to find the best methods. The models were chosen to cover a diverse landscape: some come from the general semantic segmentation literature, others from the medical or pharyngeal airway space segmentation literature; some are convolutional neural networks, while others are transformer-based or a mix of both. These models include 2D/3D U-Net, DeepLabv3, YOLOv8, SwinV2 UNETR, SegFormer, and 3D MRU-Net. Additional strategies to enhance performance were also considered, namely training two separate networks in multiple stages and leveraging unlabeled data to pretrain the networks before fine-tuning them on the labeled data. Of all the models considered, the 2D U-Net performed best, achieving an average Dice score of 0.9180 ± 0.0111. Of the performance-enhancement strategies, only two improved the results, and only by a small margin; they can therefore be considered when a small increase in performance over the 2D U-Net is desired at the expense of computational resources.
Affiliation(s)
- Chattapatr Leeraha
- Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom, 73170, Thailand
- Worapan Kusakunniran
- Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom, 73170, Thailand.
- Nutcha Yodrabum
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, 10700, Thailand
- Sarut Chaisrisawadisuk
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, 10700, Thailand
- Vannipa Vathanophas
- Department of Otorhinolaryngology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, 10700, Thailand
- Thanongchai Siriapisith
- Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, 10700, Thailand
5.
Wang X, Alqahtani KA, Van den Bogaert T, Shujaat S, Jacobs R, Shaheen E. Convolutional neural network for automated tooth segmentation on intraoral scans. BMC Oral Health 2024; 24:804. [PMID: 39014389; PMCID: PMC11250967; DOI: 10.1186/s12903-024-04582-2] [Received: 09/28/2023; Accepted: 07/05/2024; Indexed: 07/18/2024]
Abstract
BACKGROUND Tooth segmentation on intraoral scan (IOS) data is a prerequisite for clinical applications in digital workflows. Current state-of-the-art methods lack the robustness to handle variability in dental conditions. This study proposes and evaluates the performance of a convolutional neural network (CNN) model for automatic tooth segmentation on IOS images.
METHODS A dataset of 761 IOS images (380 upper jaws, 381 lower jaws) was acquired using an intraoral scanner. The inclusion criteria covered a full set of permanent teeth, teeth with orthodontic brackets, and partially edentulous dentition. A multi-step 3D U-Net pipeline was designed for automated tooth segmentation on IOS images, and its performance was assessed in terms of time and accuracy. Additionally, the model was deployed on an online cloud-based platform, where a separate subsample of 18 IOS images was used to test clinical applicability by comparing three modes of segmentation: automated artificial intelligence-driven (A-AI), refined (R-AI), and semi-automatic (SA).
RESULTS The average time for automated segmentation was 31.7 ± 8.1 s per jaw. The CNN model achieved an Intersection over Union (IoU) score of 91%, with the full set of teeth achieving the highest performance and the partially edentulous group the lowest. In terms of clinical applicability, SA took an average of 860.4 s per case, whereas R-AI showed a 2.6-fold decrease in time (328.5 s). Furthermore, R-AI offered higher performance and reliability than SA, regardless of the dentition group.
CONCLUSIONS The 3D U-Net pipeline was accurate, efficient, and consistent for automatic tooth segmentation on IOS images. The online cloud-based platform could serve as a viable alternative for IOS segmentation.
Affiliation(s)
- Xiaotong Wang
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Harbin Medical University, Youzheng Street 23, Nangang, Harbin, 150001, China
- Khalid Ayidh Alqahtani
- Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
- Tom Van den Bogaert
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, 14611, Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium.
- Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia.
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium.
- Eman Shaheen
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Dental Medicine, Karolinska Institutet, Solnavägen 1, 171 77 Stockholm, Sweden
6.
Ravelo V, Acero J, Fuentes-Zambrano J, García Guevara H, Olate S. Artificial Intelligence Used for Diagnosis in Facial Deformities: A Systematic Review. J Pers Med 2024; 14:647. [PMID: 38929868; PMCID: PMC11204491; DOI: 10.3390/jpm14060647] [Received: 05/07/2024; Revised: 05/26/2024; Accepted: 06/02/2024; Indexed: 06/28/2024]
Abstract
AI is embedded in many different systems; in facial surgery, several AI-based software programs are oriented toward diagnosis. This study aims to evaluate the capacity and training of models for diagnosing dentofacial deformities in class II and class III patients using artificial intelligence, and the potential use of such models for indicating orthognathic surgery. The search covered publications from 1943 to April 2024 in PubMed, Embase, Scopus, Lilacs, and Web of Science. Studies that used imaging to assess anatomical structures, airway volume, and craniofacial positions with AI algorithms in human populations were included. The methodological quality of the studies was assessed using the Effective Public Health Practice Project instrument. The systematic search identified 697 articles; after applying the inclusion and exclusion criteria, eight studies remained for descriptive analysis, all retrospective in design. A total of 5552 subjects aged between 14.7 and 56 years were included; 2474 (44.56%) were male and 3078 (55.43%) were female. Six studies used 2D imaging and obtained highly accurate results in diagnosing skeletal features and determining the need for orthognathic surgery, and two studies used 3D imaging for measurement and diagnosis. Limitations of the studies, such as age, diagnosis of facial deformity, and the included variables, were observed. Concerning overall bias, six studies were at moderate risk due to weak study designs, while two were at high risk. We conclude that, based on the few articles included, AI-based software allows some craniometric recognition and measurement to support the diagnosis of facial deformities, mainly through 2D analysis. However, studies based on three-dimensional images, larger samples, and models trained on different populations are needed to ensure the accuracy of AI applications in this field; the models can then be trained for dentofacial diagnosis.
Affiliation(s)
- Victor Ravelo
- Grupo de Investigación de Pregrado en Odontología (GIPO), Universidad Autónoma de Chile, Temuco 4780000, Chile
- PhD Program in Morphological Science, Universidad de La Frontera, Temuco 4780000, Chile
- Julio Acero
- Department of Oral and Maxillofacial Surgery, Ramon y Cajal University Hospital, Ramon y Cajal Research Institute (IRYCIS), University of Alcala, 28034 Madrid, Spain
- Henry García Guevara
- Department of Oral Surgery, La Floresta Medical Institute, Caracas 1060, Venezuela
- Division for Oral and Maxillofacial Surgery, Hospital Ortopedico Infantil, Caracas 1060, Venezuela
- Sergio Olate
- Center for Research in Morphology and Surgery (CEMyQ), Universidad de La Frontera, Temuco 4780000, Chile
- Division of Oral, Facial and Maxillofacial Surgery, Universidad de La Frontera, Temuco 4780000, Chile
7.
Shujaat S, Alfadley A, Morgan N, Jamleh A, Riaz M, Aboalela AA, Jacobs R. Emergence of artificial intelligence for automating cone-beam computed tomography-derived maxillary sinus imaging tasks. A systematic review. Clin Implant Dent Relat Res 2024. [PMID: 38863306; DOI: 10.1111/cid.13352] [Received: 02/09/2024; Revised: 04/16/2024; Accepted: 05/20/2024; Indexed: 06/13/2024]
Abstract
Cone-beam computed tomography (CBCT) imaging of the maxillary sinus is indispensable for implantologists, offering three-dimensional anatomical visualization, morphological variation detection, and abnormality identification, all critical for diagnostics and treatment planning in digital implant workflows. This systematic review presents the current evidence on the use of artificial intelligence (AI) for CBCT-derived maxillary sinus imaging tasks. An electronic search was conducted in PubMed, Web of Science, and Cochrane up until January 2024. Based on the eligibility criteria, 14 articles were included that reported on the use of AI for automating CBCT-derived maxillary sinus assessment tasks. The QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2) tool was used to evaluate risk of bias and applicability concerns. The AI models were designed to automate segmentation, classification, and prediction tasks. Most studies on automated maxillary sinus segmentation demonstrated high performance. For classification tasks, the highest accuracy was observed for diagnosing sinusitis (99.7%), whereas the lowest was for classifying abnormalities such as fungal balls and chronic rhinosinusitis (83.0%). Regarding implant treatment planning, classification of automated surgical plans for maxillary sinus floor augmentation based on residual bone height showed high accuracy (97%). Additionally, AI demonstrated high performance in predicting gender and sinus volume. In conclusion, although AI shows promising potential in automating maxillary sinus imaging tasks, which could be useful for diagnostics and planning in implantology, more diverse datasets are needed to improve the generalizability and clinical relevance of AI models. Future studies should focus on expanding datasets, making model source code available, and adhering to standardized AI reporting guidelines.
Affiliation(s)
- Sohaib Shujaat
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Abdulmohsen Alfadley
- King Abdullah International Medical Research Center, Department of Restorative and Prosthetic Dental Sciences, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Nermin Morgan
- Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Ahmed Jamleh
- Department of Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, UAE
- Marryam Riaz
- Department of Physiology, Azra Naheed Dental College, Superior University, Lahore, Pakistan
- Ali Anwar Aboalela
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Section of Oral Diagnostics and Surgery, Department of Dental Medicine, Division of Oral Diagnostics and Rehabilitation, Karolinska Institutet, Huddinge, Sweden
8.
Esmaeilyfard R, Bonyadifard H, Paknahad M. Dental Caries Detection and Classification in CBCT Images Using Deep Learning. Int Dent J 2024; 74:328-334. [PMID: 37940474; PMCID: PMC10988262; DOI: 10.1016/j.identj.2023.10.003] [Received: 06/30/2023; Revised: 09/24/2023; Accepted: 10/09/2023; Indexed: 11/10/2023]
Abstract
OBJECTIVES This study aimed to investigate the accuracy of deep learning algorithms in diagnosing tooth caries and classifying the extension and location of dental caries in cone beam computed tomography (CBCT) images. To the best of our knowledge, this is the first study to evaluate the application of deep learning to dental caries in CBCT images.
METHODS The CBCT image dataset comprised 382 molar teeth with caries and 403 noncarious molar cases, divided into a development set (for training and validation) and a test set. Three images were obtained for each case: axial, sagittal, and coronal. The test dataset was provided to a multiple-input convolutional neural network (CNN), which predicted the presence or absence of dental decay and classified the lesions according to their depths and types. Accuracy, sensitivity, specificity, and F1 score were measured for dental caries detection and classification.
RESULTS The diagnostic accuracy, sensitivity, specificity, and F1 score for caries detection were 95.3%, 92.1%, 96.3%, and 93.2%, respectively, in carious molar teeth, and 94.8%, 94.3%, 95.8%, and 94.6% in noncarious molar teeth. The CNN showed high sensitivity, specificity, and accuracy in classifying caries extensions and locations.
CONCLUSIONS Deep learning models can identify dental caries and classify their depths and types with high accuracy, sensitivity, and specificity. The successful application of deep learning in this field will assist dental practitioners and patients in improving diagnosis and treatment planning in dentistry.
CLINICAL SIGNIFICANCE This study showed that deep learning can accurately detect and classify dental caries. Considering the shortage of dentists in certain areas, using CNNs can lead to broader geographic coverage in detecting dental caries.
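The accuracy, sensitivity, specificity, and F1 figures reported in diagnostic studies like this one all derive from the same 2×2 confusion matrix. A minimal sketch with hypothetical counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # recall: carious teeth correctly flagged
    specificity = tn / (tn + fp)   # sound teeth correctly cleared
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical example: 90 true positives, 5 false positives,
# 10 false negatives, 95 true negatives.
acc, sens, spec, f1 = classification_metrics(90, 5, 10, 95)
```

Because F1 combines precision and sensitivity, it penalizes both missed lesions and false alarms, which is why it is reported alongside plain accuracy.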
Affiliation(s)
- Rasool Esmaeilyfard
- Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran
- Haniyeh Bonyadifard
- Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran
- Maryam Paknahad
- Oral and Dental Disease Research Center, Oral and Maxillofacial Radiology, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, Iran.
9.
Rodrigues J, Evangelopoulos E, Anagnostopoulos I, Sachdev N, Ismail A, Samsudin R, Khalaf K, Pattanaik S, Shetty SR. Impact of class II and class III skeletal malocclusion on pharyngeal airway dimensions: A systematic literature review and meta-analysis. Heliyon 2024; 10:e27284. [PMID: 38501020; PMCID: PMC10945137; DOI: 10.1016/j.heliyon.2024.e27284] [Received: 04/06/2023; Revised: 02/24/2024; Accepted: 02/27/2024; Indexed: 03/20/2024]
Abstract
Background This study is a pioneering systematic review and meta-analysis comparing the influence of class II and class III skeletal malocclusions on pharyngeal airway dimensions, and the first comprehensive assessment to collate and analyze the disparate findings of previously published articles on this topic. The objective was to identify published articles that compare the effects of class II and class III skeletal malocclusion on pharyngeal airway dimensions.
Methods An all-inclusive search for existing published studies was performed to identify peer-reviewed articles comparing the influence of class II and class III skeletal malocclusion on pharyngeal airway dimensions. The search covered five electronic databases: Cochrane Library, EMBASE, Scopus, Web of Science, and PubMed. Articles were screened, and eligible studies were critically assessed using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist.
Results The initial search yielded 476 potential articles, of which nine were included, covering a total of 866 patients. Three studies were cross-sectional and six were retrospective. Following critical analysis and review of the studies, class III skeletal malocclusion had significantly larger volume and area measurements than class II skeletal malocclusion.
Conclusion The literature establishes that variations in skeletal classification have a discernible effect on pharyngeal airway size. With advancement of a skeletal malocclusion to class III, there is an observed increase in both the volume and cross-sectional area of the airways.
Affiliation(s)
- Jensyll Rodrigues
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Ahmad Ismail
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Rani Samsudin
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Khaled Khalaf
- Institute of Dentistry, University of Aberdeen, United Kingdom
- Snigdha Pattanaik
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Shishir Ram Shetty
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
10.
Cohen O, Kundel V, Robson P, Al-Taie Z, Suárez-Fariñas M, Shah NA. Achieving Better Understanding of Obstructive Sleep Apnea Treatment Effects on Cardiovascular Disease Outcomes through Machine Learning Approaches: A Narrative Review. J Clin Med 2024; 13:1415. [PMID: 38592223; PMCID: PMC10932326; DOI: 10.3390/jcm13051415] [Received: 01/31/2024; Revised: 02/13/2024; Accepted: 02/17/2024; Indexed: 04/10/2024]
Abstract
Obstructive sleep apnea (OSA) affects almost a billion people worldwide and is associated with a myriad of adverse health outcomes, among the most prevalent and morbid of which are cardiovascular diseases (CVDs). Nonetheless, randomized controlled trials (RCTs) of OSA treatment have failed to show improvements in CVD outcomes. A major limitation in our field is the lack of precision in defining OSA and, specifically, subgroups with the potential to benefit from therapy. This has called into question the validity of using the time-honored apnea-hypopnea index as the ultimate defining criterion for OSA. Recent applications of advanced statistical methods and machine learning have brought to light a variety of OSA endotypes and phenotypes. These methods also provide an opportunity to understand the interaction between OSA and comorbid diseases for better CVD risk stratification. Lastly, machine learning, and specifically heterogeneous treatment effects modeling, can help uncover subgroups with differential outcomes after treatment initiation. In an era of data sharing and big data, these techniques will be at the forefront of OSA research, improving our ability to determine the unique influence of OSA on CVD outcomes and ultimately allowing us to develop precision medicine approaches for CVD risk reduction in OSA patients. In this narrative review, we highlight how team science, via machine learning and artificial intelligence applied to existing clinical data, polysomnography, proteomics, and imaging, can do just that.
Affiliation(s)
- Oren Cohen
- Department of Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Vaishnavi Kundel
- Department of Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Philip Robson
- Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Zainab Al-Taie
- Center for Biostatistics, Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Mayte Suárez-Fariñas
- Center for Biostatistics, Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Neomi A. Shah
- Department of Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA

11
Nogueira-Reis F, Morgan N, Suryani IR, Tabchoury CPM, Jacobs R. Full virtual patient generated by artificial intelligence-driven integrated segmentation of craniomaxillofacial structures from CBCT images. J Dent 2024; 141:104829. [PMID: 38163456 DOI: 10.1016/j.jdent.2023.104829] [Received: 07/26/2023] [Revised: 12/13/2023] [Accepted: 12/29/2023] [Indexed: 01/03/2024]
Abstract
OBJECTIVES To assess the performance, time-efficiency, and consistency of a convolutional neural network (CNN) based automated approach for integrated segmentation of craniomaxillofacial structures compared with a semi-automated method for creating a virtual patient using cone beam computed tomography (CBCT) scans. METHODS Thirty CBCT scans were selected. Six craniomaxillofacial structures, encompassing the maxillofacial complex bones, maxillary sinus, dentition, mandible, mandibular canal, and pharyngeal airway space, were segmented on these scans using a semi-automated method and a composite of previously validated CNN-based automated segmentation techniques for individual structures. A qualitative assessment of the automated segmentation revealed the need for minor refinements, which were manually corrected. These refined segmentations served as a reference for comparing semi-automated and automated integrated segmentations. RESULTS The majority of minor adjustments with the automated approach involved under-segmentation of sinus mucosal thickening and regions with reduced bone thickness within the maxillofacial complex. The automated and the semi-automated approaches required an average time of 1.1 min and 48.4 min, respectively. The automated method demonstrated a greater degree of similarity (99.6 %) to the reference than the semi-automated approach (88.3 %). The standard deviation values for all metrics with the automated approach were low, indicating high consistency. CONCLUSIONS The CNN-driven integrated segmentation approach proved to be accurate, time-efficient, and consistent for creating a CBCT-derived virtual patient through simultaneous segmentation of craniomaxillofacial structures. CLINICAL RELEVANCE The creation of a virtual orofacial patient using an automated approach could potentially transform personalized digital workflows. This advancement could be particularly beneficial for treatment planning in a variety of dental and maxillofacial specialties.
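Most entries in this list, including the paper under analysis, score automated against reference segmentations with the Dice similarity coefficient (DSC). A minimal sketch of that computation on hypothetical NumPy masks (illustrative only, not data or code from any of the studies):

```python
import numpy as np

def dice(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A intersect B| / (|A| + |B|)."""
    a = auto_mask.astype(bool)
    b = ref_mask.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical 3D masks: a reference cube and an automated result
# shifted by one voxel along the first axis
ref = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True
auto = np.zeros_like(ref)
auto[3:9, 2:8, 2:8] = True
print(round(dice(auto, ref), 3))  # prints 0.833
```

Here the two 216-voxel cubes overlap in 180 voxels, giving DSC = 2·180/432 ≈ 0.833; the studies in this list typically treat DSC of about 0.9 or above as clinically acceptable.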
Affiliation(s)
- Fernanda Nogueira-Reis
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414-903, Brazil
- Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Dakahlia 35516, Egypt
- Isti Rahayu Suryani
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Cinthia Pereira Machado Tabchoury
- Department of Biosciences, Division of Biochemistry, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414-903, Brazil
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Dental Medicine, Karolinska Institutet, Box 4064, Huddinge, Stockholm 141 04, Sweden

12
Swaity A, Elgarba BM, Morgan N, Ali S, Shujaat S, Borsci E, Chilvarquer I, Jacobs R. Deep learning driven segmentation of maxillary impacted canine on cone beam computed tomography images. Sci Rep 2024; 14:369. [PMID: 38172136 PMCID: PMC10764895 DOI: 10.1038/s41598-023-49613-0] [Received: 07/19/2023] [Accepted: 12/10/2023] [Indexed: 01/05/2024]
Abstract
The process of creating virtual models of dentomaxillofacial structures through three-dimensional segmentation is a crucial component of most digital dental workflows. This process is typically performed using manual or semi-automated approaches, which can be time-consuming and subject to observer bias. The aim of this study was to train and assess the performance of a convolutional neural network (CNN)-based online cloud platform for automated segmentation of maxillary impacted canines on CBCT images. A total of 100 CBCT images with maxillary canine impactions were randomly allocated into two groups: a training set (n = 50) and a testing set (n = 50). The training set was used to train the CNN model, and the testing set was employed to evaluate the model performance. Both tasks were performed on an online cloud-based platform, 'Virtual patient creator' (Relu, Leuven, Belgium). The performance was assessed using voxel- and surface-based comparison between automated and semi-automated ground truth segmentations. In addition, the time required for segmentation was also calculated. The automated tool showed high performance for segmenting impacted canines, with a Dice similarity coefficient of 0.99 ± 0.02. Moreover, it was 24 times faster than the semi-automated approach. The proposed CNN model achieved fast, consistent, and precise segmentation of maxillary impacted canines.
Affiliation(s)
- Abdullah Swaity
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Prosthodontic Department, King Hussein Medical Center, Jordanian Royal Medical Services, Amman, Jordan
- Bahaaeldeen M Elgarba
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Prosthodontics, Tanta University, Tanta, Egypt
- Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Saleem Ali
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Restorative Dentistry Department, King Hussein Medical Center, Jordanian Royal Medical Services, Amman, Jordan
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Elena Borsci
- Oral Diagnostic Clinic, Karolinska Institute, Stockholm, Sweden
- Israel Chilvarquer
- Department of Oral Radiology, School of Dentistry, University of São Paulo (USP), São Paulo, Brazil
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden

13
Iruvuri AG, Miryala G, Khan Y, Ramalingam NT, Sevugaperumal B, Soman M, Padmanabhan A. Revolutionizing Dental Imaging: A Comprehensive Study on the Integration of Artificial Intelligence in Dental and Maxillofacial Radiology. Cureus 2023; 15:e50292. [PMID: 38205468 PMCID: PMC10776831 DOI: 10.7759/cureus.50292] [Received: 10/18/2023] [Accepted: 12/08/2023] [Indexed: 01/12/2024]
Abstract
Recent advancements in deep learning and artificial intelligence (AI) have profoundly impacted various fields, including diagnostic imaging. Integrating AI technologies such as deep learning and convolutional neural networks has the potential to drastically improve diagnostic methods in the field of dentistry and maxillofacial radiography. A systematic study that adhered to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards was carried out to examine the efficacy and uses of AI in dentistry and maxillofacial radiography. Incorporating cohort studies, case-control studies, and randomized clinical trials, the study used an interdisciplinary methodology. A thorough search of peer-reviewed research papers from 2009 to 2023 was done in databases including MEDLINE/PubMed and EMBASE. The inclusion criteria were original clinical research in English that employed AI models to recognize anatomical components in oral and maxillofacial images, identify anomalies, and diagnose disorders. The review examined numerous studies that used cutting-edge technology to demonstrate the accuracy and dependability of dental imaging. Among the tasks covered by these investigations were age estimation, periapical lesion detection, segmentation of maxillary structures, assessment of dentofacial abnormalities, and segmentation of the mandibular canal. The study revealed important developments in the precise delineation of anatomical structures and the identification of diseases. The use of AI technology in dental imaging marks a revolutionary development that will usher in a time of unmatched accuracy and effectiveness. These technologies have not only improved diagnostic accuracy and enabled early disease detection but have also streamlined intricate procedures, significantly enhancing patient outcomes. The symbiotic collaboration between human expertise and machine intelligence promises a future of more sophisticated and empathetic oral healthcare.
Affiliation(s)
- Alekhya G Iruvuri
- General Dentistry, Malla Reddy Dental College for Women, Hyderabad, IND
- Gouthami Miryala
- General Dentistry, SVS Institute of Dental Sciences, Mahabubnagar, IND
- Yusuf Khan
- Orthodontics and Dentofacial Orthopaedics, Diamond Medical Specialists, Taif, SAU
- Bharath Sevugaperumal
- General Dentistry, Rajah Muthiah Dental College and Hospital, Annamalai University, Chidambaram, IND
- Mrunmayee Soman
- Dentistry, Dr. D. Y. Patil Dental College and Hospital, Pune, IND

14
Chen H, Lv T, Luo Q, Li L, Wang Q, Li Y, Zhou D, Emami E, Schmittbuhl M, van der Stelt P, Huynh N. Reliability and accuracy of a semi-automatic segmentation protocol of the nasal cavity using cone beam computed tomography in patients with sleep apnea. Clin Oral Investig 2023; 27:6813-6821. [PMID: 37796336 DOI: 10.1007/s00784-023-05295-6] [Received: 01/06/2023] [Accepted: 09/27/2023] [Indexed: 10/06/2023]
Abstract
OBJECTIVES The objectives of this study were to use cone beam computed tomography (CBCT) to assess: (1) intra- and inter-observer reliability of nasal cavity volume measurement; (2) the accuracy of the segmentation protocol for evaluation of the nasal cavity. MATERIALS AND METHODS This study used test-retest reliability and accuracy methods within two different population sample groups, from Eastern Asia and North America. Thirty obstructive sleep apnea (OSA) patients were randomly selected from administrative and research oral health data archived at two dental faculties in China and Canada. To assess the reliability of the protocol, two observers performed nasal cavity volume measurement twice with a 10-day interval, using Amira software (v4.1, Visage Imaging Inc., Carlsbad, CA). The accuracy study used a computerized tomography (CT) scan of an OSA patient, who was not included in the study sample, to fabricate an anthropomorphic phantom of the nasal cavity volume with known dimensions (18.9 ml, gold standard). This phantom was scanned using a NewTom 5G (QR Systems, Verona, Italy) CBCT scanner. The nasal cavity was segmented based on CBCT images and converted into standard tessellation language (STL) models. The volume of the nasal cavity was measured on the acquired STL models (18.99 ± 0.066 ml). RESULTS The intra-observer and inter-observer intraclass correlation coefficients for the volume measurement of the nasal cavity were 0.980-0.997 and 0.948-0.992, respectively. The nasal cavity volume measurement was overestimated by 1.1%-3.1% compared to the gold standard. CONCLUSIONS The semi-automatic CBCT-based segmentation protocol of the nasal cavity in patients with sleep apnea is reliable and accurate. CLINICAL RELEVANCE This study provides a reliable and accurate protocol for segmentation of the nasal cavity, which will help clinicians analyze images within the nasoethmoidal region.
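The reliability figures above are intraclass correlation coefficients (ICCs) between repeated volume measurements. As an illustration, a minimal two-way consistency ICC, ICC(3,1), computed from a subjects-by-measurements table (the volumes below are invented, not the study's data):

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """Two-way mixed, consistency, single-measure ICC(3,1):
    (MSR - MSE) / (MSR + (k - 1) * MSE), with subjects as rows and
    raters (or repeated measurements) as columns."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical nasal cavity volumes (ml), each subject measured twice
volumes = np.array([[18.9, 19.0],
                    [21.4, 21.2],
                    [16.8, 16.9],
                    [24.1, 24.3],
                    [19.7, 19.6]])
print(round(icc_3_1(volumes), 3))
```

For this hypothetical table the ICC comes out around 0.998, in the near-perfect test-retest range the study reports; values above roughly 0.9 are conventionally read as excellent reliability.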
Affiliation(s)
- Hui Chen
- Department of Orthodontics, School and Hospital of Stomatology, Shandong University, Shandong Key Laboratory of Oral Tissue Regeneration, Shandong Engineering Laboratory for Dental Materials and Oral Tissue Regeneration, Shandong Provincial Clinical Research Center for Oral Diseases, Cheeloo College of Medicine, Shandong University, Jinan, 250100, Shandong, China
- Tao Lv
- Department of Orthodontics, School and Hospital of Stomatology, Shandong University, Shandong Key Laboratory of Oral Tissue Regeneration, Shandong Engineering Laboratory for Dental Materials and Oral Tissue Regeneration, Shandong Provincial Clinical Research Center for Oral Diseases, Cheeloo College of Medicine, Shandong University, Jinan, 250100, Shandong, China
- Qing Luo
- Hospital of Stomatology, Ningbo, Zhejiang, China
- Lei Li
- Centre for Advanced Jet Engineering Technologies (CaJET), School of Mechanical Engineering, Key Laboratory of High-Efficiency and Clean Mechanical Manufacture at Shandong University, Ministry of Education, National Demonstration Center for Experimental Mechanical Engineering Education, Shandong University, Jinan, China
- Qing Wang
- Department of Orthodontics, Stomatological Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Yanzhong Li
- Department of Otorhinolaryngology, NHC Key Laboratory of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, China
- Debo Zhou
- Key Laboratory of Special Functional Aggregated Materials, Ministry of Education, School of Chemistry and Chemical Engineering, Shandong University, Jinan, China
- Elham Emami
- Faculty of Dentistry, McGill University, Montreal, Quebec, Canada
- Paul van der Stelt
- Department of Oral Radiology, Academic Centre for Dentistry Amsterdam, Amsterdam, the Netherlands
- Nelly Huynh
- Faculty of Dental Medicine, Université de Montréal, Montreal, Quebec, Canada

15
Liu J, Zhang C, Shan Z. Application of Artificial Intelligence in Orthodontics: Current State and Future Perspectives. Healthcare (Basel) 2023; 11:2760. [PMID: 37893833 PMCID: PMC10606213 DOI: 10.3390/healthcare11202760] [Received: 08/24/2023] [Revised: 10/11/2023] [Accepted: 10/16/2023] [Indexed: 10/29/2023]
Abstract
In recent years, artificial intelligence (AI) has emerged as a notable transformative force in multiple domains, including orthodontics. This review aims to provide a comprehensive overview of the present state of AI applications in orthodontics, which can be categorized into the following domains: (1) diagnosis, including cephalometric analysis, dental analysis, facial analysis, skeletal-maturation-stage determination and upper-airway obstruction assessment; (2) treatment planning, including decision making for extractions and orthognathic surgery, and treatment outcome prediction; and (3) clinical practice, including practice guidance, remote care, and clinical documentation. We have witnessed a broadening of the application of AI in orthodontics, accompanied by advancements in its performance. Additionally, this review outlines the existing limitations within the field and offers future perspectives.
Affiliation(s)
- Junqi Liu
- Division of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Chengfei Zhang
- Division of Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Zhiyi Shan
- Division of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China

16
Lin L, Tang B, Cao L, Yan J, Zhao T, Hua F, He H. The knowledge, experience, and attitude on artificial intelligence-assisted cephalometric analysis: Survey of orthodontists and orthodontic students. Am J Orthod Dentofacial Orthop 2023; 164:e97-e105. [PMID: 37565946 DOI: 10.1016/j.ajodo.2023.07.006] [Received: 11/01/2022] [Revised: 07/01/2023] [Accepted: 07/01/2023] [Indexed: 08/12/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) has developed rapidly in orthodontics, and AI-based cephalometric applications have been adopted. This study aimed to assess knowledge, experience, and attitudes related to AI-assisted cephalometric technologies among orthodontists and orthodontic students; describe their subjective view of these applications and related technologies in orthodontics; and identify associated factors. METHODS An online cross-sectional survey based on a professional tool (www.wjx.cn) was performed from October 11-17, 2022. Participants were recruited with a purposive and snowball sampling approach. Data were collected and analyzed with descriptive statistics, chi-square tests, and multivariable generalized estimating equations. RESULTS Four hundred eighty valid questionnaires were collected and analyzed; 68.8% of the respondents agreed that AI-based cephalometric applications would replace manual and semiautomatic approaches. Practitioners using AI-assisted applications (87.5%) spent less time in cephalometric analysis than the groups using other approaches, and 349 (72.7%) respondents considered that AI-based applications could assist in obtaining more accurate analysis results. Lectures and training programs (56.0%) were the main sources of respondents' knowledge about AI. Knowledge level was associated with experience in AI-related clinical or scientific projects (P <0.001). Most respondents (88.8%) were interested in future AI applications in orthodontics. CONCLUSIONS Respondents are optimistic about the future of AI in orthodontics. AI-assisted cephalometric applications were believed to make clinical diagnostic analysis more convenient and straightforward for practitioners and even to replace manual and semiautomatic approaches. The education and promotion of AI should be strengthened to improve orthodontists' understanding.
Affiliation(s)
- Lizhuo Lin
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Department of Orthodontics, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Bojun Tang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Department of Orthodontics, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Lingyun Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Department of Orthodontics, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Jiarong Yan
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Department of Orthodontics, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Tingting Zhao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Department of Orthodontics, School and Hospital of Stomatology, Wuhan University, Wuhan, China; Center for Dentofacial Development and Sleep Medicine, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Fang Hua
- Center for Dentofacial Development and Sleep Medicine, School and Hospital of Stomatology, Wuhan University, Wuhan, China; Center for Orthodontics and Pediatric Dentistry at Optics Valley Branch, School and Hospital of Stomatology, Wuhan University, Wuhan, China; Center for Evidence-Based Stomatology, School and Hospital of Stomatology, Wuhan University, Wuhan, China; Division of Dentistry, School of Medical Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom
- Hong He
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Department of Orthodontics, School and Hospital of Stomatology, Wuhan University, Wuhan, China; Center for Dentofacial Development and Sleep Medicine, School and Hospital of Stomatology, Wuhan University, Wuhan, China

17
Elgarba BM, Van Aelst S, Swaity A, Morgan N, Shujaat S, Jacobs R. Deep learning-based segmentation of dental implants on cone-beam computed tomography images: A validation study. J Dent 2023; 137:104639. [PMID: 37517787 DOI: 10.1016/j.jdent.2023.104639] [Received: 06/22/2023] [Revised: 07/21/2023] [Accepted: 07/26/2023] [Indexed: 08/01/2023]
Abstract
OBJECTIVES To train and validate a cloud-based convolutional neural network (CNN) model for automated segmentation (AS) of dental implant and attached prosthetic crown on cone-beam computed tomography (CBCT) images. METHODS A total dataset of 280 maxillomandibular jawbone CBCT scans was acquired from patients who underwent implant placement with or without coronal restoration. The dataset was randomly divided into three subsets: training set (n = 225), validation set (n = 25) and testing set (n = 30). A CNN model was developed and trained using expert-based semi-automated segmentation (SS) of the implant and attached prosthetic crown as the ground truth. The performance of AS was assessed by comparing with SS and manually corrected automated segmentation referred to as refined-automated segmentation (R-AS). Evaluation metrics included timing, voxel-wise comparison based on confusion matrix and 3D surface differences. RESULTS The average time required for AS was 60 times faster (<30 s) than the SS approach. The CNN model was highly effective in segmenting dental implants both with and without coronal restoration, achieving a high dice similarity coefficient score of 0.92±0.02 and 0.91±0.03, respectively. Moreover, the root mean square deviation values were also found to be low (implant only: 0.08±0.09 mm, implant+restoration: 0.11±0.07 mm) when compared with R-AS, implying high AI segmentation accuracy. CONCLUSIONS The proposed cloud-based deep learning tool demonstrated high performance and time-efficient segmentation of implants on CBCT images. CLINICAL SIGNIFICANCE AI-based segmentation of implants and prosthetic crowns can minimize the negative impact of artifacts and enhance the generalizability of creating dental virtual models. Furthermore, incorporating the suggested tool into existing CNN models specialized for segmenting anatomical structures can improve pre-surgical planning for implants and post-operative assessment of peri‑implant bone levels.
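The "3D surface differences" and root-mean-square deviation reported above can be approximated as a symmetric nearest-neighbour distance between two surface point clouds. A minimal sketch under that assumption, using hypothetical planar surfaces offset by 0.1 mm (a stand-in for illustration, not the study's actual mesh-comparison pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_deviation(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric RMS of nearest-neighbour distances between two surface
    point clouds (a simple stand-in for mesh-to-mesh deviation metrics
    such as the root mean square deviation)."""
    d_ab, _ = cKDTree(pts_b).query(pts_a)  # each point of A to nearest of B
    d_ba, _ = cKDTree(pts_a).query(pts_b)  # and vice versa
    return float(np.sqrt(np.mean(np.concatenate([d_ab, d_ba]) ** 2)))

# Hypothetical surfaces: a planar patch sampled on a 0.5 mm grid,
# and the same patch offset by 0.1 mm along z
xs = np.linspace(0.0, 10.0, 21)
gx, gy = np.meshgrid(xs, xs)
plane_a = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
plane_b = plane_a + np.array([0.0, 0.0, 0.1])
print(round(rms_surface_deviation(plane_a, plane_b), 3))  # prints 0.1
```

Because every point's nearest neighbour is its offset counterpart (the grid spacing is larger than the offset), the RMS deviation here equals the 0.1 mm offset, on the order of the 0.08-0.11 mm deviations the study reports.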
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt
- Stijn Van Aelst
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Abdullah Swaity
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Prosthodontic Department, King Hussein Medical Center, Royal Medical Services, Amman, Jordan
- Nermin Morgan
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Sohaib Shujaat
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden

18
Jin S, Han H, Huang Z, Xiang Y, Du M, Hua F, Guan X, Liu J, Chen F, He H. Automatic three-dimensional nasal and pharyngeal airway subregions identification via Vision Transformer. J Dent 2023; 136:104595. [PMID: 37343616 DOI: 10.1016/j.jdent.2023.104595] [Received: 04/06/2023] [Revised: 06/06/2023] [Accepted: 06/19/2023] [Indexed: 06/23/2023]
Abstract
OBJECTIVES Upper airway assessment requires a fully-automated segmentation system for complete or sub-regional identification. This study aimed to develop a novel Deep Learning (DL) model for accurate segmentation of the upper airway and achieve entire and subregional identification. METHODS Fifty cone-beam computed tomography (CBCT) scans, including 24,502 slices, were labelled as the ground truth by one orthodontist and two otorhinolaryngologists. A novel model, a lightweight multitask network based on the Swin Transformer and U-Net, was built for automatic segmentation of the entire upper airway and subregions. Segmentation performance was evaluated using Precision, Recall, Dice similarity coefficient (DSC) and Intersection over Union (IoU). The clinical implications of the precision errors were quantitatively analysed, and comparisons between the AI model and Dolphin software were conducted. RESULTS Our model achieved good performance, with a precision of 85.88-94.25%, recall of 93.74-98.44%, DSC of 90.95-96.29%, and IoU of 83.68-92.85% in the overall and subregions of the three-dimensional (3D) upper airway, and a precision of 91.22-97.51%, recall of 90.70-97.62%, DSC of 90.92-97.55%, and IoU of 83.41-95.29% in the subregions of two-dimensional (2D) cross-sections. Discrepancies in volume and area caused by precision errors did not affect clinical outcomes. Both our AI model and the Dolphin software provided clinically acceptable consistency for pharyngeal airway assessments. CONCLUSION The novel DL model not only achieved segmentation of the entire upper airway, including the nasal cavity and subregion identification, but also performed exceptionally well, making it well suited for 3D upper airway assessment from the nasal cavity to the hypopharynx, especially for intricate structures. CLINICAL SIGNIFICANCE This system provides insights into the aetiology, risk, severity, treatment effect, and prognosis of dentoskeletal deformities and obstructive sleep apnea.
It achieves rapid assessment of the entire upper airway and its subregions, making airway management, an integral part of orthodontic treatment, orthognathic surgery, and ENT surgery, easier.
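The four scores this entry reports (Precision, Recall, DSC, IoU) all derive from the same voxel-wise confusion matrix between predicted and ground-truth masks. A minimal sketch on hypothetical 2D masks (illustrative only, not the study's code):

```python
import numpy as np

def voxel_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Precision, Recall, Dice (DSC) and IoU from voxel-wise TP/FP/FN."""
    p = pred.astype(bool)
    t = truth.astype(bool)
    tp = np.logical_and(p, t).sum()
    fp = np.logical_and(p, ~t).sum()
    fn = np.logical_and(~p, t).sum()
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "dsc": 2 * tp / (2 * tp + fp + fn),
        "iou": tp / (tp + fp + fn),  # note IoU = DSC / (2 - DSC)
    }

# Hypothetical airway cross-section: the predicted disc is slightly
# larger than the ground-truth disc, so recall is perfect while
# precision drops
yy, xx = np.mgrid[:64, :64]
truth = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2
pred = (xx - 32) ** 2 + (yy - 32) ** 2 <= 22 ** 2
m = voxel_metrics(pred, truth)
print({k: round(float(v), 3) for k, v in m.items()})
```

Because the ground-truth disc lies entirely inside the prediction, recall is 1.0 while precision falls to roughly |truth|/|pred|; the identity IoU = DSC/(2 - DSC) also explains why reported IoU ranges always sit below the corresponding DSC ranges.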
Affiliation(s)
- Suhan Jin
- Department of Orthodontics, Hubei-MOST KLOS & KLOBM, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Department of Orthodontics, Affiliated Stomatological Hospital of Zunyi Medical University, Zunyi, China
- Haojie Han
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
- Zhiqun Huang
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, Wuhan, China
- Yuandi Xiang
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, Wuhan, China
- Mingyuan Du
- Department of Orthodontics, Hubei-MOST KLOS & KLOBM, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Fang Hua
- Department of Orthodontics, Hubei-MOST KLOS & KLOBM, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Xiaoyan Guan
- Department of Orthodontics, Affiliated Stomatological Hospital of Zunyi Medical University, Zunyi, China
- Jianguo Liu
- School of Stomatology, Zunyi Medical University, Zunyi, China; Special Key Laboratory of Oral Diseases Research, Higher Education Institution, Zunyi, China
- Fang Chen
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
- Hong He
- Department of Orthodontics, Hubei-MOST KLOS & KLOBM, School & Hospital of Stomatology, Wuhan University, Wuhan, China

19
Meng X, Mao F, Mao Z, Xue Q, Jia J, Hu M. Multi-stage Unet segmentation and automatic measurement of pharyngeal airway based on lateral cephalograms. J Dent 2023; 136:104637. [PMID: 37506811 DOI: 10.1016/j.jdent.2023.104637] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 07/17/2023] [Accepted: 07/25/2023] [Indexed: 07/30/2023] Open
Abstract
OBJECTIVES Orthodontic treatment profoundly impacts the pharyngeal airway (PA) of patients. Airway examination is an integral part of daily orthodontic diagnosis, and lateral cephalograms (LC) reliably reveal PA structures. This study attempted to develop a simple method to help clinicians make a preliminary judgement of patients' PA conditions and assess the impact of orthodontic treatment on their airways. METHODS LCs of 764 patients were used to train a multi-stage Unet segmentation model. Another 130 images were used to validate the model, and a further 130 images were used to test it. RESULTS Unet was used as the backbone, achieving a mean Dice value of 0.8180, precision of 0.8393, and recall of 0.8188. Furthermore, we identified seven key points and measured related indices. The length of the line separating the nasopharynx and oropharynx and that of the line separating the oropharynx and hypopharynx were each manually measured three times, and the average values were compared. The intraclass correlation coefficients (ICC) for the two lines were 0.599 and 0.855, respectively. A single linear regression analysis then indicated a strong correlation between the predictions and the measurements for the two lines. CONCLUSIONS This method is reliable for segmenting three regions (nasopharynx, oropharynx, and hypopharynx) of the PA and calculating related indices. However, the predictions obtained from this model still have errors, and clinical practitioners need to assess and adjust them. CLINICAL SIGNIFICANCE Our model can help orthodontists formulate personalised treatment plans and evaluate the risk of airway stenosis during orthodontic treatment. This method may mark the beginning of a new and simpler approach for PA obstruction detection, specifically tailored to orthodontic patients.
Affiliation(s)
- Xiangquan Meng
- School of Mathematics, Jilin University, Changchun 130012, China
- Feng Mao
- Hospital of Stomatology, Key Laboratory of Pathobiology, Ministry of Education, Jilin University, Changchun 130021, China
- Zhi Mao
- Hospital of Stomatology, Key Laboratory of Pathobiology, Ministry of Education, Jilin University, Changchun 130021, China
- Qing Xue
- Hospital of Stomatology, Key Laboratory of Pathobiology, Ministry of Education, Jilin University, Changchun 130021, China
- Jiwei Jia
- School of Mathematics, Jilin University, Changchun 130012, China; National Applied Mathematical Center (Jilin), Changchun 130012, China
- Min Hu
- Hospital of Stomatology, Key Laboratory of Pathobiology, Ministry of Education, Jilin University, Changchun 130021, China

20
Maken P, Gupta A, Gupta MK. A systematic review of the techniques for automatic segmentation of the human upper airway using volumetric images. Med Biol Eng Comput 2023; 61:1901-1927. [PMID: 37248380 DOI: 10.1007/s11517-023-02842-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Accepted: 04/20/2023] [Indexed: 05/31/2023]
Abstract
The human upper airway comprises many anatomical volumes. Obstructions in these volumes need to be diagnosed, which requires volumetric segmentation. Manual segmentation is time-consuming and requires expertise in the field, whereas automatic segmentation provides reliable results while saving the expert time and effort. The objective of this study is to systematically review the literature on techniques used for the automatic segmentation of human upper airway regions in volumetric images. PRISMA guidelines were followed to conduct the systematic review. Four online databases (Scopus, Google Scholar, PubMed, and JURN) were searched for relevant papers, which were then shortlisted using inclusion and exclusion eligibility criteria. Three review questions were formulated and explored to find their answers. The best technique among the reviewed studies, based on the Dice coefficient and precision, was identified and justified through the analysis. This systematic review provides insight for researchers so that they can overcome the prominent issues in the field identified from the literature. The outcome of the review covers several parameters, e.g., accuracy, techniques, challenges, datasets, and segmentation of different sub-regions. A flowchart of the search process per PRISMA guidelines, along with the inclusion and exclusion criteria, is provided.
Affiliation(s)
- Payal Maken
- School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, India
- Abhishek Gupta
- Biomedical Application Division, CSIR-Central Scientific Instruments Organisation, Chandigarh, 160030, India
- Manoj Kumar Gupta
- School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, India

21
Fan W, Zhang J, Wang N, Li J, Hu L. The Application of Deep Learning on CBCT in Dentistry. Diagnostics (Basel) 2023; 13:2056. [PMID: 37370951 DOI: 10.3390/diagnostics13122056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 06/06/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023] Open
Abstract
Cone beam computed tomography (CBCT) has become an essential tool in modern dentistry, allowing dentists to analyze the relationship between teeth and the surrounding tissues. However, traditional manual analysis can be time-consuming, and its accuracy depends on the user's proficiency. To address these limitations, deep learning (DL) systems have been integrated into CBCT analysis to improve accuracy and efficiency. Numerous DL models have been developed for tasks such as automatic diagnosis, segmentation and classification of teeth, the inferior alveolar nerve, bone, and the airway, as well as preoperative planning. All research articles summarized were from PubMed, IEEE, Google Scholar, and Web of Science up to December 2022. Many studies have demonstrated that the application of deep learning to CBCT examination in dentistry has achieved significant progress, and its accuracy in radiological image analysis has reached the level of clinicians. In some fields, however, its accuracy still needs to improve, and ethical issues and differences between CBCT devices may prohibit its extensive use. DL models have the potential to be used clinically as medical decision-making aids, and the combination of DL and CBCT can greatly reduce the workload of image reading. This review provides an up-to-date overview of current applications of DL to CBCT images in dentistry, highlighting its potential and suggesting directions for future research.
Affiliation(s)
- Wenjie Fan
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jiaqi Zhang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Nan Wang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jia Li
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Li Hu
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China

22
Kim DY, Woo S, Roh JY, Choi JY, Kim KA, Cha JY, Kim N, Kim SJ. Subregional pharyngeal changes after orthognathic surgery in skeletal Class III patients analyzed by convolutional neural networks-based segmentation. J Dent 2023:104565. [PMID: 37308053 DOI: 10.1016/j.jdent.2023.104565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 05/03/2023] [Accepted: 05/27/2023] [Indexed: 06/14/2023] Open
Abstract
OBJECTIVES To evaluate the accuracy of fully automatic segmentation of pharyngeal volumes of interest (VOIs) before and after orthognathic surgery in skeletal Class III patients using a convolutional neural network (CNN) model, and to investigate the clinical applicability of artificial intelligence for quantitative evaluation of treatment changes in pharyngeal VOIs. METHODS 310 cone-beam computed tomography (CBCT) images were divided into a training set (n=150), validation set (n=40), and test set (n=120). The test datasets comprised matched pairs of pre- and posttreatment images of 60 skeletal Class III patients (mean age 23.1±5.0 years; ANB < -2°) who underwent bimaxillary orthognathic surgery with orthodontic treatment. A 3D U-Net CNN model was applied for fully automatic segmentation and measurement of subregional pharyngeal volumes on pretreatment (T0) and posttreatment (T1) scans. The model's accuracy was compared to human semi-automatic segmentation outcomes using the Dice similarity coefficient (DSC) and volume similarity (VS). The correlation between surgical skeletal changes and model accuracy was also obtained. RESULTS The proposed model achieved high subregional pharyngeal segmentation performance on both T0 and T1 images, with a significant T1-T0 difference in DSC only in the nasopharynx. Region-specific differences among pharyngeal VOIs, which were observed at T0, disappeared on the T1 images. The decreased DSC of nasopharyngeal segmentation after treatment was weakly correlated with the amount of maxillary advancement; there was no correlation between the mandibular setback amount and model accuracy. CONCLUSIONS The proposed model offers fast and accurate subregional pharyngeal segmentation on both pretreatment and posttreatment CBCT images in skeletal Class III patients. CLINICAL SIGNIFICANCE We elucidated the clinical applicability of the CNN model to quantitatively evaluate subregional pharyngeal changes after surgical-orthodontic treatment, offering a basis for developing a fully integrated multiclass CNN model to predict pharyngeal responses after dentoskeletal treatments.
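The volume similarity (VS) metric used in this entry, in its common form, compares only the sizes of two segmentations, not where they overlap. A minimal illustrative sketch of that common definition (assumed here, not taken from the cited paper):

```python
def volume_similarity(v_a, v_b):
    """Volume similarity VS = 1 - |Va - Vb| / (Va + Vb).
    1.0 means identical volumes; note VS ignores *where* the volumes overlap,
    which is why it is usually reported alongside an overlap metric like DSC."""
    return 1.0 - abs(v_a - v_b) / (v_a + v_b)

# e.g. an automated airway volume of 9.5 cm^3 against a reference of 10.0 cm^3
vs = volume_similarity(9.5, 10.0)
```

Two masks of equal size placed in completely different locations would still score VS = 1.0, so VS complements rather than replaces DSC.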
Affiliation(s)
- Dong-Yul Kim
- Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Seoyeon Woo
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jae-Yon Roh
- Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jin-Young Choi
- Department of Orthodontics, Kyung Hee University Dental Hospital, 23, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Kyung-A Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jung-Yul Cha
- Department of Orthodontics, The Institute of Craniofacial Deformity, College of Dentistry, Yonsei University, 50-1 Yonseiro, Seodaemun-gu, Seoul, 03722, Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Su-Jung Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea

23
Abesi F, Maleki M, Zamani M. Diagnostic performance of artificial intelligence using cone-beam computed tomography imaging of the oral and maxillofacial region: A scoping review and meta-analysis. Imaging Sci Dent 2023; 53:101-108. [PMID: 37405196 PMCID: PMC10315225 DOI: 10.5624/isd.20220224] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 02/13/2023] [Accepted: 02/22/2023] [Indexed: 04/12/2024] Open
Abstract
Purpose The aim of this study was to conduct a scoping review and meta-analysis to provide overall estimates of the recall and precision of artificial intelligence for detection and segmentation using oral and maxillofacial cone-beam computed tomography (CBCT) scans. Materials and Methods A literature search was done in Embase, PubMed, and Scopus through October 31, 2022 to identify studies that reported the recall and precision values of artificial intelligence systems using oral and maxillofacial CBCT images for the automatic detection or segmentation of anatomical landmarks or pathological lesions. Recall (sensitivity) indicates the percentage of certain structures that are correctly detected. Precision (positive predictive value) indicates the percentage of accurately identified structures out of all detected structures. The performance values were extracted and pooled, and the estimates were presented with 95% confidence intervals (CIs). Results In total, 12 eligible studies were finally included. The overall pooled recall for artificial intelligence was 0.91 (95% CI: 0.87-0.94). In a subgroup analysis, the pooled recall was 0.88 (95% CI: 0.77-0.94) for detection and 0.92 (95% CI: 0.87-0.96) for segmentation. The overall pooled precision for artificial intelligence was 0.93 (95% CI: 0.88-0.95). A subgroup analysis showed that the pooled precision value was 0.90 (95% CI: 0.77-0.96) for detection and 0.94 (95% CI: 0.89-0.97) for segmentation. Conclusion Excellent performance was found for artificial intelligence using oral and maxillofacial CBCT images.
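The pooled recall and precision estimates in this meta-analysis combine per-study proportions into one value with a 95% confidence interval. One simple way to do this, sketched below as an illustration only (the cited review's actual statistical model is not specified here), is fixed-effect inverse-variance pooling on the logit scale; real meta-analyses often use random-effects models instead:

```python
import math

def pool_proportions(events, totals, z=1.96):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    Returns (pooled estimate, CI lower, CI upper), back-transformed to [0, 1]."""
    weights, logits = [], []
    for e, n in zip(events, totals):
        p = e / n
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / (1.0 / e + 1.0 / (n - e)))  # inverse of the logit variance
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    expit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return expit(pooled), expit(pooled - z * se), expit(pooled + z * se)

# Two hypothetical studies: 90/100 and 85/100 structures correctly detected.
est, lo, hi = pool_proportions([90, 85], [100, 100])
```

The pooled estimate lands between the two study proportions, weighted toward the study with the smaller variance.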
Affiliation(s)
- Farida Abesi
- Department of Oral and Maxillofacial Radiology, Dental Faculty, Babol University of Medical Sciences, Babol, Iran
- Mahla Maleki
- Student Research Committee, Babol University of Medical Sciences, Babol, Iran
- Mohammad Zamani
- Student Research Committee, Babol University of Medical Sciences, Babol, Iran

24
Dong W, Chen Y, Li A, Mei X, Yang Y. Automatic detection of adenoid hypertrophy on cone-beam computed tomography based on deep learning. Am J Orthod Dentofacial Orthop 2023; 163:553-560.e3. [PMID: 36990529 DOI: 10.1016/j.ajodo.2022.11.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Revised: 11/01/2022] [Accepted: 11/01/2022] [Indexed: 03/29/2023]
Abstract
INTRODUCTION This study proposed an automatic diagnosis method based on deep learning for adenoid hypertrophy detection on cone-beam computed tomography. METHODS The hierarchical masks self-attention U-Net (HMSAU-Net) for segmentation of the upper airway and the 3-dimensional (3D) ResNet for diagnosing adenoid hypertrophy were constructed on the basis of 87 cone-beam computed tomography samples. A self-attention encoder module was added to SAU-Net to optimize upper airway segmentation precision, and hierarchical masks were introduced to ensure that HMSAU-Net captured sufficient local semantic information. RESULTS Dice was used to evaluate the performance of HMSAU-Net, and diagnostic indicators were used to test the performance of 3D-ResNet. The average Dice value of our proposed model was 0.960, superior to the 3D U-Net and SAU-Net models. Among the diagnostic models, 3D-ResNet10 had an excellent ability to diagnose adenoid hypertrophy automatically, with a mean accuracy of 0.912, mean sensitivity of 0.976, mean specificity of 0.867, mean positive predictive value of 0.837, mean negative predictive value of 0.981, and an F1 score of 0.901. CONCLUSIONS The value of this diagnostic system lies in providing a new method for rapid and accurate early clinical diagnosis of adenoid hypertrophy in children, allowing upper airway obstruction to be examined in three-dimensional space and relieving the workload of imaging doctors.
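The diagnostic indicators reported here (accuracy, sensitivity, specificity, PPV, NPV, F1) all come from a 2x2 confusion matrix. A minimal illustrative sketch with hypothetical counts (not this study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Diagnostic-test indicators from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)                    # sensitivity (recall)
    spec = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                     # positive predictive value
    npv = tn / (tn + fn)                     # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)       # harmonic mean of PPV and sensitivity
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "f1": f1}

# Hypothetical screening run: 40 true positives, 10 false positives,
# 45 true negatives, 5 false negatives.
d = diagnostic_metrics(40, 10, 45, 5)   # accuracy = 0.85, ppv = 0.8, npv = 0.9
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the evaluated sample.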
Affiliation(s)
- Wenjie Dong
- Department of Stomatology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China
- Yaosen Chen
- Department of Stomatology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China
- Ankang Li
- Computer Science School, Wuhan University, Wuhan, Hubei, China
- Xiaoguang Mei
- Electronic Information School, Wuhan University, Wuhan, Hubei, China
- Yan Yang
- Department of Stomatology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China

25
Morgan N, Shujaat S, Jazil O, Jacobs R. Three-dimensional quantification of skeletal midfacial complex symmetry. Int J Comput Assist Radiol Surg 2023; 18:611-619. [PMID: 36272017 DOI: 10.1007/s11548-022-02775-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 10/05/2022] [Indexed: 11/28/2022]
Abstract
PURPOSE Quantification of skeletal symmetry in a healthy population could strongly impact reconstructive surgical procedures, where mirroring of the contralateral healthy side acts as a clinical reference for the restoration of unilateral defects. Hence, the aim of this study was to three-dimensionally assess the symmetry of the skeletal midfacial complex in skeletal Class I patients. METHODS A sample of 100 cone beam computed tomography (CBCT) scans (50 males, 50 females; age range: 19-40 years) was collected. Automated segmentation of the skeletal midfacial complex was performed to create a three-dimensional (3D) virtual model using a convolutional neural network (CNN)-based segmentation tool. Thereafter, the segmented model was mirrored and registered to quantify skeletal symmetry using colour-coded conformance mapping based on a surface part-comparison analysis. RESULTS Overall, the mean and root-mean-square (RMS) differences between complete true and mirrored models were 0.14 ± 0.12 and 0.87 ± 0.21 mm, respectively. Female patients had a significantly more symmetrical midfacial complex (mean difference: 0.11 ± 0.1 mm, RMS: 0.81 ± 0.17 mm) than male patients (mean difference: 0.16 ± 0.13 mm, RMS: 0.94 ± 0.23 mm). No significant difference existed between the left and right sides irrespective of the patient's gender. CONCLUSION The comparison between true and mirrored complete and left/right split midfacial complexes showed symmetry within a clinically acceptable range of 1 mm, which justifies the use of the mirroring technique. The presented data could act as a reference guide for surgeons during planning of reconstructive surgical procedures and outcome assessment at follow-up.
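The mean and RMS differences reported in this entry summarize per-point surface distances between the true model and its mirrored, registered copy; RMS penalizes large local deviations more than the mean does. A minimal illustrative sketch over hypothetical distances (not this study's data):

```python
import numpy as np

def symmetry_deviation(distances_mm):
    """Mean and root-mean-square (RMS) of per-point part-comparison distances
    between a surface model and its mirrored, registered counterpart."""
    d = np.asarray(distances_mm, dtype=float)
    return d.mean(), np.sqrt(np.mean(d ** 2))

# Four hypothetical per-point surface distances in mm.
mean_d, rms_d = symmetry_deviation([0.0, 0.1, 0.2, 0.3])  # mean = 0.15 mm
```

Because squaring weights outliers, RMS exceeds the mean whenever the distances vary, which is consistent with the study's RMS (0.87 mm) being much larger than its mean difference (0.14 mm).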
Affiliation(s)
- Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven and Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33 bus 7001, 3000, Leuven, Belgium
- Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven and Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33 bus 7001, 3000, Leuven, Belgium
- Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Omid Jazil
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven and Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33 bus 7001, 3000, Leuven, Belgium
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven and Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33 bus 7001, 3000, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden

26
Synergy between artificial intelligence and precision medicine for computer-assisted oral and maxillofacial surgical planning. Clin Oral Investig 2023; 27:897-906. [PMID: 36323803 DOI: 10.1007/s00784-022-04706-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 08/29/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVES The aim of this review was to investigate the application of artificial intelligence (AI) in maxillofacial computer-assisted surgical planning (CASP) workflows, with a discussion of limitations and possible future directions. MATERIALS AND METHODS An in-depth search of the literature was undertaken to review articles concerned with the application of AI to the segmentation, multimodal image registration, virtual surgical planning (VSP), and three-dimensional (3D) printing steps of maxillofacial CASP workflows. RESULTS The existing AI models were trained to address individual steps of CASP, and no single intelligent workflow was found encompassing all steps of the planning process. Segmentation of dentomaxillofacial tissue from computed tomography (CT)/cone-beam CT imaging was the most commonly explored area applicable in a clinical setting. Nevertheless, a lack of generalizability was the main issue, as the majority of models were trained with data derived from a single device and imaging protocol, which might not offer similar performance with other devices. In relation to registration, VSP, and 3D printing, the lack of adequate heterogeneous data limits the automation of these tasks. CONCLUSION The synergy between AI and CASP workflows has the potential to improve planning precision and efficacy. However, future studies with large datasets are needed before this emerging technology finds application in a real clinical setting. CLINICAL RELEVANCE The implementation of AI models in maxillofacial CASP workflows could minimize a surgeon's workload and increase the efficiency and consistency of the planning process, while enhancing patient-specific predictability.
27
Nogueira-Reis F, Morgan N, Nomidis S, Van Gerven A, Oliveira-Santos N, Jacobs R, Tabchoury CPM. Three-dimensional maxillary virtual patient creation by convolutional neural network-based segmentation on cone-beam computed tomography images. Clin Oral Investig 2023; 27:1133-1141. [PMID: 36114907 PMCID: PMC9985582 DOI: 10.1007/s00784-022-04708-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 09/01/2022] [Indexed: 11/03/2022]
Abstract
OBJECTIVE To qualitatively and quantitatively assess integrated segmentation by three convolutional neural network (CNN) models for the creation of a maxillary virtual patient (MVP) from cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS A dataset of 40 CBCT scans acquired with different scanning parameters was selected. Three previously validated individual CNN models were integrated to achieve a combined segmentation of the maxillary complex, maxillary sinuses, and upper dentition. Two experts performed a qualitative assessment, scoring integrated segmentations from 0 to 10 based on the number of required refinements. Furthermore, the experts executed the refinements, allowing performance comparison between the integrated automated segmentation (AS) and refined segmentation (RS) models. Inter-observer consistency of the refinements and the time needed to create a full-resolution automatic segmentation were calculated. RESULTS From the dataset, 85% scored 7-10, and 15% were within 3-6. The average time required for automated segmentation was 1.7 min. Performance metrics indicated an excellent overlap between automatic and refined segmentation, with a Dice similarity coefficient (DSC) of 99.3%. High inter-observer consistency of refinements was observed, with a 95% Hausdorff distance (HD) of 0.045 mm. CONCLUSION The integrated CNN models proved fast and accurate in creating the MVP, with strong inter-observer consistency. CLINICAL RELEVANCE The automated, simultaneous segmentation of these structures could act as a valuable tool in clinical orthodontics, implant rehabilitation, and any oral or maxillofacial surgical procedure where visualization of the MVP and its relationship with surrounding structures is necessary for an accurate diagnosis and patient-specific treatment planning.
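The 95% Hausdorff distance reported here and elsewhere in this list is an outlier-robust variant of the maximum surface distance: instead of the worst-case point-to-surface distance, it takes the 95th percentile in each direction. A brute-force illustrative sketch on small point sets (real implementations work on mesh or voxel surfaces and avoid the full pairwise matrix):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(np.percentile(d.min(axis=1), 95),   # each point in A -> nearest in B
               np.percentile(d.min(axis=0), 95))   # each point in B -> nearest in A

square = [[0, 0], [1, 0], [1, 1], [0, 1]]
shifted = [[0, 0.5], [1, 0.5], [1, 1.5], [0, 1.5]]   # same square, moved up 0.5
```

For the identical set the distance is 0; for the shifted copy every nearest-neighbour distance is 0.5, so the HD95 is 0.5.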
Affiliation(s)
- Fernanda Nogueira-Reis
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo, 13414‑903, Brazil; OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, 3000, Leuven, Belgium
- Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, 3000, Leuven, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, 35516, Dakahlia, Egypt
- Nicolly Oliveira-Santos
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo, 13414‑903, Brazil; OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, 3000, Leuven, Belgium
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, 3000, Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Box 4064, 141 04, Huddinge, Stockholm, Sweden
- Cinthia Pereira Machado Tabchoury
- Department of Biosciences, Division of Biochemistry, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo, 13414‑903, Brazil

28
Automated Evaluation of Upper Airway Obstruction Based on Deep Learning. Biomed Res Int 2023; 2023:8231425. [PMID: 36852295 PMCID: PMC9966825 DOI: 10.1155/2023/8231425] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 10/31/2022] [Accepted: 01/25/2023] [Indexed: 02/20/2023]
Abstract
Objectives This study aimed to develop a screening tool to evaluate upper airway obstruction on lateral cephalograms based on deep learning. Methods We developed a novel, practical convolutional neural network model, built on a ResNet backbone, to automatically evaluate upper airway obstruction from lateral cephalograms. A total of 1219 X-ray images were collected for model training and testing. Results Compared with VGG16, our model showed better performance, with a sensitivity of 0.86, specificity of 0.89, PPV of 0.90, NPV of 0.85, and F1-score of 0.88. Heat maps of the cephalograms offered a deeper view of the features learned by the deep learning model. Conclusion This study demonstrated that deep learning can learn effective features from cephalograms and automatically evaluate upper airway obstruction from X-ray images. Clinical Relevance. A novel and practical deep convolutional neural network model has been established to relieve dentists' screening workload and improve the accuracy of upper airway obstruction assessment.
29
Alqahtani KA, Jacobs R, Shujaat S, Politis C, Shaheen E. Automated three-dimensional quantification of external root resorption following combined orthodontic-orthognathic surgical treatment. A validation study. J Stomatol Oral Maxillofac Surg 2023; 124:101289. [PMID: 36122841 DOI: 10.1016/j.jormas.2022.09.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Accepted: 09/15/2022] [Indexed: 10/14/2022]
Abstract
OBJECTIVE Three-dimensional (3D) quantitative assessment of external root resorption (ERR) following combined orthodontic-orthognathic surgical treatment is vital for ensuring an optimal long-term tooth prognosis. In this era, lack of evidence exists applying automated 3D approaches for assessing ERR. Therefore, this study aimed to validate a protocol for 3D quantification of ERR on cone-beam computed tomography (CBCT) images following combined orthodontic-orthognathic surgical treatment. MATERIAL AND METHODS Twenty patients who underwent combined orthodontic-orthognathic surgical treatment were recruited. Each patient had CBCT scans acquired with NewTom VGi evo (NewTom) at three time-points i.e., 4-weeks prior to surgery (T0), 1-week (T1) and 1-year after surgery (T2). Patients were divided into two groups, group A (surgical Le Fort I osteotomy group: 10 patients) and group B (orthodontic group without maxillary surgical intervention: 10 patients). Root resorption was assessed by measuring length and volumetric changes of maxillary premolar to premolar teeth (central and lateral incisors, canines, 1st and 2nd premolars= 10 teeth) at T0-T1 and T0-T2 time intervals in both groups. The protocol consisted of convolutional neural network based segmentation followed by surface-based superimposition and automated 3D analysis. RESULTS The intra-observer intra-class correlation coefficient (ICC) was found to be excellent (1.0) with an average error of 0 mm and 0 mm3 for assessing root length and volume, respectively. The entire protocol took 56.8 ± 7 s for quantifying ERR. Both group of patients showed negligible changes in length and volumetric ratio at T0-T1 time-interval. Furthermore, group A had lower ERR ratio with decreased root volume and length compared to group B at T0-T2 time-interval. CONCLUSIONS The proposed protocol was found to be time efficient, accurate and reliable for 3D quantification of ERR on CBCT images. 
It could act as a viable automated option for assessing ERR. CLINICAL SIGNIFICANCE The automated protocol could provide a time-efficient method for reliable and accurate 3D follow-up of root resorption after orthognathic and orthodontic treatment procedures. These new insights could allow clinicians to implement strategies for minimizing the risk of root resorption and to further enhance treatment predictability.
Affiliation(s)
- Khalid Ayidh Alqahtani
- OMFS IMPATH research group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven and Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium; Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Prince Sattam bin Abdulaziz University, AlKharj, Saudi Arabia.
- Reinhilde Jacobs
- OMFS IMPATH research group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven and Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden
- Sohaib Shujaat
- OMFS IMPATH research group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven and Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Constantinus Politis
- OMFS IMPATH research group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven and Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Eman Shaheen
- OMFS IMPATH research group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven and Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
|
30
|
Artificial Intelligence as an Aid in CBCT Airway Analysis: A Systematic Review. LIFE (BASEL, SWITZERLAND) 2022; 12:life12111894. [PMID: 36431029 PMCID: PMC9696726 DOI: 10.3390/life12111894] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 11/10/2022] [Accepted: 11/11/2022] [Indexed: 11/17/2022]
Abstract
BACKGROUND The use of artificial intelligence (AI) in the health sciences is becoming increasingly popular among clinicians. This study evaluated the literature regarding the use of AI for CBCT airway analysis. To our knowledge, this is the first systematic review to examine the performance of artificial intelligence in CBCT airway analysis. METHODS Electronic databases and the reference lists of relevant research papers were searched for published and unpublished literature. Study selection, data extraction, and risk-of-bias evaluation were each carried out independently and in duplicate. Finally, five articles were chosen. RESULTS The results suggested a high correlation between the automatic and manual airway measurements, indicating that airway measurements may be calculated automatically and accurately from CBCT images. CONCLUSIONS According to the present literature, automatic airway segmentation can be used for clinical purposes. The key findings of this systematic review are that automatic airway segmentation is accurate in measuring the airway and, at the same time, appears to be fast and easy to use. However, the present literature is limited, and more studies providing high-quality evidence are needed.
|
31
|
Bonfanti-Gris M, Garcia-Cañas A, Alonso-Calvo R, Salido Rodriguez-Manzaneque MP, Pradies Ramiro G. Evaluation of an Artificial Intelligence web-based software to detect and classify dental structures and treatments in panoramic radiographs. J Dent 2022; 126:104301. [PMID: 36150430 DOI: 10.1016/j.jdent.2022.104301] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 09/13/2022] [Accepted: 09/15/2022] [Indexed: 11/18/2022] Open
Abstract
OBJECTIVES To evaluate the diagnostic reliability of a web-based Artificial Intelligence program for the detection and classification of dental structures and treatments present on panoramic radiographs. METHODS A total of 300 orthopantomographies (OPG) were randomly selected for this study. First, the images were visually evaluated by two calibrated operators with radiodiagnosis experience who, after consensus, established the "ground truth". The operators' findings on the radiographs were collected and classified as follows: metal restorations (MR), resin-based restorations (RR), endodontic treatment (ET), crowns (C) and implants (I). The orthopantomographies were then anonymously uploaded and automatically analyzed by the web-based software (Denti.Ai). Results were stored, and a statistical analysis was performed by comparing them with the ground truth in terms of Sensitivity (S), Specificity (E), Positive Predictive Value (PPV) and Negative Predictive Value (NPV), with subsequent representation as the area under the Receiver Operating Characteristic (ROC) curve (AUC). RESULTS Diagnostic metrics obtained for each study variable were as follows: (MR) S=85.48%, E=87.50%, PPV=82.8%, NPV=42.51%, AUC=0.869; (RR) S=41.11%, E=93.30%, PPV=90.24%, NPV=87.50%, AUC=0.672; (ET) S=91.9%, E=100%, PPV=100%, NPV=94.62%, AUC=0.960; (C) S=89.53%, E=95.79%, PPV=89.53%, NPV=95.79%, AUC=0.927; (I) S, E, PPV, NPV=100%, AUC=1.000. CONCLUSIONS The findings suggest that the web-based artificial intelligence software performs well in detecting implants, crowns, metal fillings and endodontic treatments, but is less accurate in classifying dental structures and resin-based restorations. CLINICAL SIGNIFICANCE General diagnostic and treatment decisions using orthopantomographies can be improved by using web-based artificial intelligence tools, reducing subjectivity and lapses on the clinician's part.
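The sensitivity, specificity, PPV and NPV figures reported above all derive from the same 2x2 confusion matrix. A minimal sketch of that arithmetic (the counts below are hypothetical and for illustration only, not taken from the study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from 2x2 confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for one finding type on 200 radiographic findings
sens, spec, ppv, npv = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
```

Note that NPV and PPV, unlike sensitivity and specificity, depend on how common the finding is in the sample, which is why a detector can pair high sensitivity with a low NPV, as in the metal-restoration results above.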
Affiliation(s)
- Monica Bonfanti-Gris
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid, Plaza Ramón y Cajal, s/n, 28040 Madrid, Spain
- Angel Garcia-Cañas
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid, Plaza Ramón y Cajal, s/n, 28040 Madrid, Spain
- Raul Alonso-Calvo
- Department of Informatics Systems and Languages, Faculty of Software Engineering, Polytechnic University of Madrid, Campus Montegancedo s/n, Boadilla del Monte, 28660 Madrid, Spain
- Maria Paz Salido Rodriguez-Manzaneque
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid, Plaza Ramón y Cajal, s/n, 28040 Madrid, Spain.
- Guillermo Pradies Ramiro
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid, Plaza Ramón y Cajal, s/n, 28040 Madrid, Spain
|
32
|
Saini M, Susan S. Diabetic retinopathy screening using deep learning for multi-class imbalanced datasets. Comput Biol Med 2022; 149:105989. [PMID: 36037631 DOI: 10.1016/j.compbiomed.2022.105989] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 08/08/2022] [Accepted: 08/14/2022] [Indexed: 11/30/2022]
Abstract
Screening and diagnosis of diabetic retinopathy is a well-known problem in the biomedical domain. Detecting damage to the blood vessels from medical images of a patient's eye is a part of computer-aided diagnosis that has progressed immensely over the past few years due to the advent and success of deep learning. Challenges related to imbalanced datasets, inconsistent annotations, small numbers of sample images and inappropriate performance evaluation metrics have adversely impacted the performance of deep learning models. To tackle the effect of class imbalance, we conducted an extensive comparative analysis of various state-of-the-art methods on three benchmark diabetic retinopathy datasets - Kaggle DR detection, IDRiD and DDR - for classification, object detection and segmentation tasks. This research could serve as a concrete baseline for future research in this field to find appropriate approaches and deep learning architectures for imbalanced datasets.
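One standard first remedy for the class imbalance discussed above is to reweight the training loss by inverse class frequency. A minimal sketch, using a hypothetical severity-grade distribution rather than any dataset from the study:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency.

    Rare classes receive larger weights, so their misclassification
    contributes more to the training loss.
    """
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Hypothetical diabetic-retinopathy severity grades (0 = no DR ... 4 = proliferative)
grades = [0] * 700 + [1] * 150 + [2] * 100 + [3] * 40 + [4] * 10
weights = inverse_frequency_weights(grades)
```

With this distribution the majority class gets a weight near 0.29 while the rarest class gets 20, and the resulting dictionary can be passed to a weighted loss function during training.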
Affiliation(s)
- Manisha Saini
- Delhi Technological University, New Delhi, 110042, Delhi, India.
- Seba Susan
- Delhi Technological University, New Delhi, 110042, Delhi, India.
|
33
|
Cho HN, Gwon E, Kim KA, Baek SH, Kim N, Kim SJ. Accuracy of convolutional neural networks-based automatic segmentation of pharyngeal airway sections according to craniofacial skeletal pattern. Am J Orthod Dentofacial Orthop 2022; 162:e53-e62. [PMID: 35654686 DOI: 10.1016/j.ajodo.2022.01.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Revised: 01/01/2022] [Accepted: 01/01/2022] [Indexed: 11/28/2022]
Abstract
INTRODUCTION This study aimed to evaluate a 3-dimensional (3D) U-Net-based convolutional neural network model for the fully automatic segmentation of regional pharyngeal volumes of interest (VOIs) in cone-beam computed tomography scans and to compare the accuracy of the model's performance across different skeletal patterns presenting with various pharyngeal dimensions. METHODS Two hundred sixteen cone-beam computed tomography scans of adult patients were randomly divided into training (n = 100), validation (n = 16), and test (n = 100) datasets. We trained the 3D U-Net model for fully automatic segmentation and measurement of the pharyngeal VOIs: the nasopharyngeal, velopharyngeal, glossopharyngeal, and hypopharyngeal sections as well as the total pharyngeal airway space (PAS). The test datasets were subdivided according to sagittal and vertical skeletal patterns. Segmentation performance was assessed by dice similarity coefficient, volumetric similarity, precision, and recall values, compared with the ground truth created by 1 expert's manual processing using semiautomatic software. RESULTS The proposed model achieved highly accurate performance, showing a mean dice similarity coefficient of 0.928 ± 0.023, volumetric similarity of 0.928 ± 0.023, precision of 0.925 ± 0.030, and recall of 0.921 ± 0.029 for total PAS segmentation. The performance showed region-specific differences, revealing lower accuracy in the glossopharyngeal and hypopharyngeal sections than in the upper sections (P < 0.001). However, the accuracy of model performance at each pharyngeal VOI showed no significant difference according to sagittal or vertical skeletal patterns. CONCLUSIONS The 3D convolutional neural network's performance for region-specific PAS analysis is promising as a substitute for laborious and time-consuming manual analysis in every skeletal and pharyngeal pattern.
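The dice similarity coefficient, precision and recall used in the study above compare a predicted binary mask against a ground-truth mask voxel by voxel. A minimal sketch on toy 0/1 label vectors standing in for flattened segmentation volumes (illustrative only, not the study's implementation):

```python
def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def precision_recall(pred, truth):
    """Voxel-wise precision and recall of a predicted binary mask."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    return tp / (tp + fp), tp / (tp + fn)

# Toy flattened masks: 1 = airway voxel, 0 = background
pred = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
d = dice(pred, truth)
prec, rec = precision_recall(pred, truth)
```

Dice is the harmonic mean of precision and recall, which is why the three metrics track each other closely in segmentation benchmarks such as this one.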
Affiliation(s)
- Ha-Nul Cho
- Department of Dentistry, Graduate School, Kyung Hee University, Seoul, South Korea
- Eunseo Gwon
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Kyung-A Kim
- Department of Dentistry, Graduate School, Kyung Hee University, Seoul, South Korea
- Seung-Hak Baek
- Department of Orthodontics, School of Dentistry, Seoul National University, Seoul, South Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea; Department of Radiology, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea.
- Su-Jung Kim
- Department of Dentistry, Graduate School, Kyung Hee University, Seoul, South Korea.
|
34
|
Preda F, Morgan N, Van Gerven A, Nogueira-Reis F, Smolders A, Wang X, Nomidis S, Shaheen E, Willems H, Jacobs R. Deep convolutional neural network-based automated segmentation of the maxillofacial complex from cone-beam computed tomography - A validation study. J Dent 2022; 124:104238. [PMID: 35872223 DOI: 10.1016/j.jdent.2022.104238] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 07/14/2022] [Accepted: 07/17/2022] [Indexed: 02/08/2023] Open
Abstract
OBJECTIVES The present study investigated the accuracy, consistency, and time-efficiency of a novel deep CNN-based model for automated maxillofacial bone segmentation from CBCT images. METHOD A dataset of 144 scans was acquired from two CBCT devices and randomly divided into three subsets: a training set (n = 110), a validation set (n = 10) and a testing set (n = 24). A three-dimensional (3D) U-Net (CNN) model was developed, and the resulting automated segmentation was compared with a manual approach. RESULTS The average time required for automated segmentation was 39.1 seconds, a 204-fold decrease in time consumption compared with manual segmentation (132.7 minutes). The model is highly accurate in identifying the bony structures of the anatomical region of interest, with a dice similarity coefficient (DSC) of 92.6%. Additionally, the fully deterministic nature of the CNN model provided 100% consistency without any variability. The inter-observer consistency for expert-based minor correction of the automated segmentation showed an excellent DSC of 99.7%. CONCLUSION The proposed CNN model provided time-efficient, accurate, and consistent CBCT-based automated segmentation of the maxillofacial complex. CLINICAL SIGNIFICANCE Automated segmentation of the maxillofacial complex could act as a potent alternative to conventional segmentation techniques for improving the efficiency of digital workflows. This approach could deliver accurate, ready-to-print three-dimensional (3D) models that are essential to patient-specific digital treatment planning for orthodontics, maxillofacial surgery, and implant placement.
Affiliation(s)
- Flavia Preda
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium.
- Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, 35516 Mansoura, Dakahlia, Egypt
- Fernanda Nogueira-Reis
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414‑903, Brazil
- Xiaotong Wang
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Eman Shaheen
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Box 4064, 141 04 Huddinge, Stockholm, Sweden
|
35
|
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. [PMID: 35831451 PMCID: PMC9279304 DOI: 10.1038/s41598-022-15920-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 07/01/2022] [Indexed: 11/21/2022] Open
Abstract
This study aimed to generate and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat), thereby providing a measurement method. The second aim was to validate the newly developed artificial intelligence system against commercially available software for 3D CBCT evaluation. A convolutional neural network-based machine learning algorithm was used to segment the pharyngeal airways of OSA and non-OSA patients. Radiologists used semi-automatic software to manually determine the airway, and their measurements were compared with the AI's. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest point of the airway (mm), the cross-sectional area of the airway (mm2), and the volume of the airway (cc) of both OSA and non-OSA patients were also compared. There was no statistically significant difference between the manual technique and the Diagnocat measurements in any group (p > 0.05). Intraclass correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume between the manual, automatic, and Diagnocat (DC) measurements in non-OSA and OSA patients, we evaluated the output images to understand why the mean value for the total airway was higher in the DC measurement. The DC algorithm also measures the epiglottis volume and the posterior nasal aperture volume owing to the low soft-tissue contrast in CBCT images, which leads to higher airway volume measurements.
Affiliation(s)
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland.
- Aida Kurbanova
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus; Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
- Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Melis Mısırlı
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Finn Rasmussen
- Internal Medicine Department, Lunge Section, SVS Esbjerg, Esbjerg, Denmark; Life Lung Health Center, Nicosia, Cyprus
|
36
|
Automated detection and labelling of teeth and small edentulous regions on Cone-Beam Computed Tomography using Convolutional Neural Networks. J Dent 2022; 122:104139. [DOI: 10.1016/j.jdent.2022.104139] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 04/04/2022] [Accepted: 04/20/2022] [Indexed: 12/30/2022] Open
|
37
|
Diaconu A, Holte MB, Cattaneo PM, Pinholt EM. A semi-automatic approach for longitudinal 3D upper airway analysis using voxel-based registration. Dentomaxillofac Radiol 2022; 51:20210253. [PMID: 34644181 PMCID: PMC8925868 DOI: 10.1259/dmfr.20210253] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
OBJECTIVES To propose and validate a reliable semi-automatic approach for three-dimensional (3D) analysis of the upper airway (UA) based on voxel-based registration (VBR). METHODS Post-operative cone beam computed tomography (CBCT) scans of 10 orthognathic surgery patients were superimposed onto the pre-operative CBCT scans by VBR using the anterior cranial base as reference. Anatomic landmarks were used to automatically cut the UA and calculate volumes and cross-sectional areas (CSA). The 3D analysis was performed twice by two observers, at an interval of two weeks. Intraclass correlations and Bland-Altman plots were used to quantify the measurement error and reliability of the method. The relative Dahlberg error was calculated and compared with that of a similar method based on landmark re-identification and manual measurements. RESULTS The intraclass correlation coefficient (ICC) showed excellent intra- and inter-observer reliability (ICC ≥ 0.995). Bland-Altman plots showed good observer agreement, low bias and no systematic errors. The relative Dahlberg error ranged between 0.51 and 4.30% for volume and between 0.24 and 2.90% for CSA; this was lower than that of a similar, manual method. Voxel-based registration itself introduced 0.05-1.44% method error. CONCLUSIONS The proposed method showed excellent reliability and high observer agreement. Being semi-automatic, it is feasible for longitudinal clinical trials on large cohorts.
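The relative Dahlberg error reported above follows Dahlberg's formula for duplicate measurements, d = sqrt(sum(d_i^2) / 2n), expressed as a percentage of the overall mean. A minimal sketch with hypothetical duplicate airway-volume measurements (not values from the study):

```python
import math

def dahlberg(first, second):
    """Dahlberg's method error: sqrt(sum of squared differences / 2n)
    over paired duplicate measurements."""
    n = len(first)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))

def relative_dahlberg(first, second):
    """Dahlberg error as a percentage of the overall mean measurement."""
    mean = (sum(first) + sum(second)) / (2 * len(first))
    return 100 * dahlberg(first, second) / mean

# Hypothetical duplicate airway-volume measurements (cm^3) by one observer
run1 = [21.4, 18.9, 25.1, 19.7]
run2 = [21.0, 19.2, 25.4, 19.5]
err = dahlberg(run1, run2)
rel_err = relative_dahlberg(run1, run2)
```

Expressing the error relative to the mean, as the authors do, makes values comparable across quantities of very different magnitude, such as volumes and cross-sectional areas.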
Affiliation(s)
- Alexandru Diaconu
- 3D Lab Denmark, Department of Oral and Maxillofacial Surgery, University Hospital of Southern Denmark, Esbjerg, Denmark
- Paolo Maria Cattaneo
- Melbourne Dental School, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Victoria, Australia
|
38
|
Fontenele RC, Gerhardt MDN, Pinto JC, Van Gerven A, Willems H, Jacobs R, Freitas DQ. Influence of dental fillings and tooth type on performance of a novel artificial intelligence-driven tool for automatic tooth segmentation on CBCT images – A validation study. J Dent 2022; 119:104069. [DOI: 10.1016/j.jdent.2022.104069] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 01/26/2022] [Accepted: 02/16/2022] [Indexed: 01/11/2023] Open
|
39
|
Badr FF, Jadu FM. Performance of artificial intelligence using oral and maxillofacial CBCT images: A systematic review and meta-analysis. Niger J Clin Pract 2022; 25:1918-1927. [DOI: 10.4103/njcp.njcp_394_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|