1. Lin J, Zheng Q, Wu Y, Zhou M, Chen J, Wang X, Kang T, Zhang W, Chen X. Quantitative analysis and clinical determinants of orthodontically induced root resorption using automated tooth segmentation from CBCT imaging. BMC Oral Health 2025;25:694. PMID: 40340630; PMCID: PMC12063342; DOI: 10.1186/s12903-025-06052-9. Received 01/13/2025; accepted 04/23/2025.
Abstract
BACKGROUND Orthodontically induced root resorption (OIRR) is difficult to assess accurately with traditional 2D imaging because of projection distortion and low sensitivity. CBCT offers more precise 3D evaluation, but manual segmentation remains labor-intensive and prone to variability. Recent advances in deep learning enable automatic, accurate tooth segmentation from CBCT images. This study applies deep learning to CBCT data to quantify OIRR and analyze its risk factors, aiming to improve assessment accuracy, efficiency, and clinical decision-making. METHOD CBCT scans of 108 orthodontic patients were retrospectively analyzed to assess OIRR using deep learning-based tooth segmentation and volumetric analysis. Linear regression was used to evaluate the influence of patient-related factors, with p < 0.05 considered statistically significant. RESULTS Root volume decreased significantly after orthodontic treatment (p < 0.001). Age, gender, open bite, deep bite, severe crowding, and other factors significantly influenced root resorption rates at different tooth positions. Multivariable regression analysis showed these factors can predict root resorption, explaining 3% to 15.4% of the variance. CONCLUSION A deep learning model accurately assessed root volume changes on CBCT, revealing significant root volume reduction after orthodontic treatment. Patients under 18 experienced less root resorption, and factors such as anterior open bite and deep overbite influenced resorption of specific teeth, whereas skeletal pattern, overjet, and underbite were not significant predictors.
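As a concrete illustration of the volumetric quantification this abstract describes, here is a minimal sketch of computing a root-resorption rate from pre- and post-treatment binary segmentation masks. The voxel size, mask shapes, and helper names are invented for illustration and are not the study's actual pipeline:

```python
import numpy as np

VOXEL_MM = 0.25  # assumed isotropic CBCT voxel size in mm (illustrative)

def root_volume_mm3(mask: np.ndarray, voxel_mm: float = VOXEL_MM) -> float:
    """Volume of a binary root segmentation mask in cubic millimetres."""
    return float(mask.sum()) * voxel_mm ** 3

def resorption_rate(pre_mask: np.ndarray, post_mask: np.ndarray) -> float:
    """Percentage of root volume lost between the two time points."""
    pre = root_volume_mm3(pre_mask)
    post = root_volume_mm3(post_mask)
    return 100.0 * (pre - post) / pre

# Toy example: removing 10 % of the voxels maps directly to a 10 % volume loss.
pre = np.ones((20, 20, 50), dtype=bool)
post = pre.copy()
post[:, :, :5] = False  # apical voxels "resorbed"
print(round(resorption_rate(pre, post), 1))  # 10.0
```

Because the rate is a ratio of volumes, the assumed voxel size cancels out; it only matters for reporting absolute volumes.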
Affiliation(s)
- Jiaqi Lin, Qianhan Zheng, Yongjia Wu, Mengqi Zhou, Jiahao Chen, Xiaozhe Wang, Ting Kang, Weifang Zhang, Xuepeng Chen: School of Stomatology, Clinical Research Center for Oral Diseases of Zhejiang Province, Stomatology Hospital, Zhejiang University School of Medicine, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou 310006, China
- Weifang Zhang (second affiliation): Social Medicine & Health Affairs Administration, Zhejiang University, Hangzhou 310058, Zhejiang, China
2. Kim MS, Amm E, Parsi G, ElShebiny T, Motro M. Automated dentition segmentation: 3D UNet-based approach with MIScnn framework. J World Fed Orthod 2025;14:84-90. PMID: 39489636; DOI: 10.1016/j.ejwf.2024.09.008. Received 04/22/2024; revised 09/18/2024; accepted 09/18/2024.
Abstract
INTRODUCTION Advancements in technology have led to the adoption of digital workflows in dentistry, which require the segmentation of regions of interest from cone-beam computed tomography (CBCT) scans. These segmentations assist in diagnosis, treatment planning, and research. However, manual segmentation is an expensive and labor-intensive process. Therefore, automated methods, such as convolutional neural networks (CNNs), provide a more efficient way to generate segmentations from CBCT scans. METHODS A three-dimensional UNet-based CNN model, utilizing the Medical Image Segmentation CNN framework, was used for training and generating predictions from CBCT scans. A dataset of 351 CBCT scans, with ground-truth labels created through manual segmentation using AI-assisted segmentation software, was prepared. Data preprocessing, augmentation, and model training were performed, and the performance of the proposed CNN model was analyzed. RESULTS The CNN model achieved high accuracy in segmenting maxillary and mandibular teeth from CBCT scans, with average Dice Similarity Coefficient values of 91.83% and 91.35% for maxillary and mandibular teeth, respectively. Performance metrics, including Intersection over Union, precision, and recall, further confirmed the model's effectiveness. CONCLUSIONS The study demonstrates the efficacy of the three-dimensional UNet-based CNN model within the Medical Image Segmentation CNN framework for automated segmentation of maxillary and mandibular dentition from CBCT scans. Automated segmentation using CNNs has the potential to deliver accurate and efficient results, offering a significant advantage over traditional segmentation methods.
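The Dice Similarity Coefficient reported above is the standard overlap metric for segmentation; a minimal NumPy sketch on toy binary masks (not the study's data):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / denom if denom else 1.0

# Toy 2D masks: 12 pixels each, 8 overlapping -> Dice = 16/24 ≈ 0.667.
a = np.zeros((4, 4), dtype=bool)
a[:, :3] = True
b = np.zeros((4, 4), dtype=bool)
b[:, 1:] = True
print(round(dice(a, b), 3))  # 0.667
```

A DSC of 91.8% as reported thus means the predicted and ground-truth tooth masks share roughly 92% of their combined voxel mass.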
Affiliation(s)
- Min Seok Kim, Elie Amm, Goli Parsi, Melih Motro: Department of Orthodontics and Dentofacial Orthopedics, Boston University Goldman School of Dentistry, Boston, Massachusetts
- Tarek ElShebiny: Department of Orthodontics, Case Western Reserve University School of Dental Medicine, Cleveland, Ohio
3. Alahmari M, Alahmari M, Almuaddi A, Abdelmagyd H, Rao K, Hamdoon Z, Alsaegh M, Chaitanya NCSK, Shetty S. Accuracy of artificial intelligence-based segmentation in maxillofacial structures: a systematic review. BMC Oral Health 2025;25:350. PMID: 40055718; PMCID: PMC11887095; DOI: 10.1186/s12903-025-05730-y. Received 08/01/2024; accepted 02/26/2025.
Abstract
OBJECTIVE The aim of this review was to evaluate the accuracy of artificial intelligence (AI) in the segmentation of teeth, jawbone (maxilla, mandible with temporomandibular joint), and mandibular (inferior alveolar) canal in CBCT and CT scans. MATERIALS AND METHODS Articles were retrieved from MEDLINE, Cochrane CENTRAL, IEEE Xplore, and Google Scholar. Eligible studies were analyzed thematically, and their quality was appraised using the JBI checklist for diagnostic test accuracy studies. Meta-analysis was conducted for key performance metrics, including Dice Similarity Coefficient (DSC) and Average Surface Distance (ASD). RESULTS A total of 767 non-duplicate articles were identified, and 30 studies were included in the review. Of these, 27 employed deep-learning models, while 3 utilized classical machine-learning approaches. The pooled DSC for mandible segmentation was 0.94 (95% CI: 0.91-0.98), mandibular canal segmentation was 0.694 (95% CI: 0.551-0.838), maxilla segmentation was 0.907 (95% CI: 0.867-0.948), and teeth segmentation was 0.925 (95% CI: 0.891-0.959). Pooled ASD values were 0.534 mm (95% CI: 0.366-0.703) for the mandibular canal, 0.468 mm (95% CI: 0.295-0.641) for the maxilla, and 0.189 mm (95% CI: 0.043-0.335) for teeth. Other metrics, such as sensitivity and precision, were variably reported, with sensitivity exceeding 90% across studies. CONCLUSION AI-based segmentation, particularly using deep-learning models, demonstrates high accuracy in the segmentation of dental and maxillofacial structures, comparable to expert manual segmentation. The integration of AI into clinical workflows offers not only accuracy but also substantial time savings, positioning it as a promising tool for automated dental imaging.
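The pooled DSC and ASD values above come from a meta-analysis. As an illustration of the basic mechanism only, here is a sketch of fixed-effect inverse-variance pooling from per-study 95% CIs; the review itself may well use a random-effects model, and all numbers below are invented:

```python
import math

# Hypothetical per-study rows: (estimate, ci_low, ci_high).
studies = [
    (0.94, 0.91, 0.97),
    (0.92, 0.88, 0.96),
    (0.95, 0.93, 0.97),
]

def pool_fixed_effect(rows):
    """Fixed-effect inverse-variance pooling of estimates given 95% CIs."""
    weights, weighted = [], []
    for est, lo, hi in rows:
        se = (hi - lo) / (2 * 1.96)   # SE recovered from the 95% CI half-width
        w = 1.0 / se ** 2             # inverse-variance weight
        weights.append(w)
        weighted.append(w * est)
    pooled = sum(weighted) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

pooled, ci = pool_fixed_effect(studies)
```

The pooled CI is narrower than any single study's CI, which is the point of pooling; heterogeneity between studies (which random-effects models account for) is ignored in this sketch.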
Affiliation(s)
- Manea Alahmari: College of Dentistry, King Khalid University, Abha, Saudi Arabia
- Maram Alahmari: Armed Forces Hospital Southern Region, Khamis Mushait, Saudi Arabia
- Hossam Abdelmagyd: College of Dentistry, Suez Canal University, Ajman, United Arab Emirates
- Kumuda Rao: AB Shetty Memorial Institute of Dental Sciences, Nitte (Deemed to be University), Mangalore, India
- Zaid Hamdoon, Mohammed Alsaegh, Shishir Shetty: College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Nallan C S K Chaitanya: College of Dental Sciences, RAK Medical and Health Sciences University, Ras-Al-Khaimah, United Arab Emirates
4. Liu Y, Zhang S, Wu X, Yang T, Pei Y, Guo H, Jiang Y, Feng Z, Xiao W, Wang YP, Wang L. Individual Graph Representation Learning for Pediatric Tooth Segmentation From Dental CBCT. IEEE Trans Med Imaging 2025;44:1432-1444. PMID: 40030235; DOI: 10.1109/tmi.2024.3501365.
Abstract
Pediatric teeth exhibit marked changes in type and spatial distribution across age groups, which makes pediatric tooth segmentation from cone-beam computed tomography (CBCT) more challenging than adult tooth segmentation. Existing methods focus mainly on adult teeth and cannot adapt to the spatial distribution of pediatric teeth with individual changes (SDPTIC) across different children, limiting their accuracy on pediatric scans. We therefore introduce a novel topology structure-guided graph convolutional network (TSG-GCN) that generates a dynamic graph representation of SDPTIC for improved pediatric tooth segmentation. Specifically, the network combines a 3D GCN-based decoder for tooth segmentation with a 2D decoder for dynamic adjacency matrix learning (DAML), which captures SDPTIC information for individual graph representation. 3D tooth labels are transformed into specially designed 2D projection labels: the 3D labels are first decoupled into class-wise volumes for different teeth via one-hot encoding and then projected to generate instance-wise 2D projections. With such 2D labels, DAML can be trained to adaptively describe SDPTIC from CBCT with a dynamic adjacency matrix, which is then incorporated into the GCN to improve segmentation. To ensure inter-task consistency at the adjacency-matrix level between the two decoders, a novel loss function is designed; it addresses the inconsistent predictions and unstable TSG-GCN convergence caused by the two heterogeneous decoders. TSG-GCN is validated on both public and multi-center datasets. Experimental results demonstrate its effectiveness for pediatric tooth segmentation, with significant improvement over seven state-of-the-art methods.
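The label transformation described above, one-hot decoupling of a 3D multi-class label volume followed by a per-tooth 2D projection, can be sketched as follows. The projection axis, array shapes, and occupancy-style projection are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def labels_to_projections(label_vol: np.ndarray, num_classes: int) -> np.ndarray:
    """(D, H, W) integer labels -> (num_classes, H, W) binary 2D projections.

    Each class c in 1..num_classes is decoupled into its own binary volume
    (one-hot encoding), then projected along the depth axis: a pixel is set
    if that tooth occupies any voxel in the column behind it.
    """
    classes = np.arange(1, num_classes + 1)[:, None, None, None]
    one_hot = label_vol[None] == classes          # (C, D, H, W) boolean
    return one_hot.any(axis=1).astype(np.uint8)   # project over depth

# Toy volume with two "teeth" occupying disjoint cuboids.
vol = np.zeros((8, 16, 16), dtype=np.int64)
vol[2:5, 2:6, 2:6] = 1    # tooth 1
vol[3:6, 9:13, 9:13] = 2  # tooth 2
proj = labels_to_projections(vol, num_classes=2)
print(proj.shape)  # (2, 16, 16)
```

Each instance-wise projection is a compact 2D target that a 2D decoder can be trained against, which is the role the 2D labels play in the paper's DAML branch.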
5. Zheng Q, Ma L, Wu Y, Gao Y, Li H, Lin J, Qing S, Long D, Chen X, Zhang W. Automatic 3-dimensional quantification of orthodontically induced root resorption in cone-beam computed tomography images based on deep learning. Am J Orthod Dentofacial Orthop 2025;167:188-201. PMID: 39503671; DOI: 10.1016/j.ajodo.2024.09.009. Received 06/05/2024; revised 08/24/2024; accepted 09/04/2024.
Abstract
INTRODUCTION Orthodontically induced root resorption (OIRR) is a common and undesirable consequence of orthodontic treatment. Traditionally, studies employ manual methods to conduct 3-dimensional quantitative analysis of OIRR via cone-beam computed tomography (CBCT), which is often subjective and time-consuming. With advancements in computer technology, deep learning-based approaches have gained traction in medical image processing. This study presents a deep learning-based model for the fully automatic extraction of root volume information and the localization of root resorption from CBCT images. METHODS In this cross-sectional, retrospective study, 4534 teeth from 105 patients were used to train and validate an automatic model for OIRR quantification. The protocol encompassed several steps: preprocessing of CBCT images involving automatic tooth segmentation and conversion into point clouds, followed by segmentation of tooth crowns and roots via the Dynamic Graph Convolutional Neural Network. The root volume was subsequently calculated, and OIRR localization was performed. The intraclass correlation coefficient was employed to validate the consistency between the automatic model and manual measurements. RESULTS The proposed method strongly correlated with manual measurements in terms of root volume and OIRR severity assessment. The intraclass correlation coefficient values for average volume measurements at each tooth position exceeded 0.95 (P <0.001), with the accuracy of different OIRR severity classifications surpassing 0.8. CONCLUSIONS The proposed methodology provides automatic and reliable tools for OIRR assessment, offering potential improvements in orthodontic treatment planning and monitoring.
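The intraclass correlation coefficient used above to check agreement between automatic and manual measurements can be sketched as ICC(2,1) (two-way random effects, absolute agreement, single measurement), computed from the ANOVA mean squares of an (n targets × k raters) matrix. The formula is standard; which ICC form the study used is not stated here, and the numbers below are invented:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1), absolute agreement; ratings has shape (n_targets, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target means
    col_means = ratings.mean(axis=0)   # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()          # between-target SS
    ssc = n * ((col_means - grand) ** 2).sum()          # between-rater SS
    sse = ((ratings - grand) ** 2).sum() - ssr - ssc    # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy check: manual vs. automatic root volumes (mm^3, invented) that agree
# closely should give an ICC near 1.
manual = np.array([250.0, 310.0, 280.0, 295.0, 330.0])
auto = manual + np.array([1.0, -2.0, 0.5, 1.5, -1.0])
print(round(icc2_1(np.column_stack([manual, auto])), 3))
```

Small measurement disagreements relative to the between-tooth spread keep the ICC close to 1, matching the >0.95 values the abstract reports.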
Affiliation(s)
- Qianhan Zheng, Yongjia Wu, Yu Gao, Huimin Li, Jiaqi Lin, Shuhong Qing, Xuepeng Chen, Weifang Zhang: Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Clinical Research Center for Oral Diseases of Zhejiang Province, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, Zhejiang, China
- Lei Ma: Department of Control Science and Engineering, School of Electronics and Information Engineering, Tongji University, Shanghai, China
- Dan Long: Zhejiang Cancer Hospital, Hangzhou Institute of Medicine, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Weifang Zhang (second affiliation): Social Medicine and Health Affairs Administration, Zhejiang University, Hangzhou, Zhejiang, China
6. Chen W, Dhawan M, Liu J, Ing D, Mehta K, Tran D, Lawrence D, Ganhewa M, Cirillo N. Mapping the Use of Artificial Intelligence-Based Image Analysis for Clinical Decision-Making in Dentistry: A Scoping Review. Clin Exp Dent Res 2024;10:e70035. PMID: 39600121; PMCID: PMC11599430; DOI: 10.1002/cre2.70035. Received 03/19/2024; revised 09/19/2024; accepted 10/20/2024.
Abstract
OBJECTIVES Artificial intelligence (AI) is an emerging field in dentistry and is gradually being integrated into clinical dental practice. The aims of this scoping review were to investigate the application of AI in image analysis for decision-making in clinical dentistry and to identify trends and research gaps in the current literature. MATERIAL AND METHODS This review followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). An electronic literature search was performed through PubMed and Scopus. After removing duplicates, a preliminary screening based on titles and abstracts was performed. A full-text review and analysis were conducted according to predefined inclusion criteria, and data were extracted from eligible articles. RESULTS Of the 1334 articles returned, 276 met the inclusion criteria (601,122 images in total) and were included in the qualitative synthesis. Most of the included studies utilized convolutional neural networks (CNNs) on dental radiographs such as orthopantomograms (OPGs) and intraoral radiographs (bitewings and periapicals). AI was applied across all fields of dentistry, particularly oral medicine, oral surgery, and orthodontics, for direct clinical inference and segmentation. AI-based image analysis was used in several components of the clinical decision-making process, including diagnosis, detection or classification, prediction, and management. CONCLUSIONS A variety of machine learning and deep learning techniques are being used for dental image analysis to assist clinicians in making accurate diagnoses and choosing appropriate interventions in a timely manner.
Affiliation(s)
- Wei Chen, Monisha Dhawan, Jonathan Liu, Damie Ing, Kruti Mehta, Daniel Tran, Nicola Cirillo: Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Max Ganhewa, Nicola Cirillo: CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
7. Alharbi SS, Alhasson HF. Exploring the Applications of Artificial Intelligence in Dental Image Detection: A Systematic Review. Diagnostics (Basel) 2024;14:2442. PMID: 39518408; PMCID: PMC11545562; DOI: 10.3390/diagnostics14212442. Received 08/22/2024; revised 10/10/2024; accepted 10/12/2024.
Abstract
BACKGROUND Dental care has been transformed by neural networks, introducing advanced methods for improving patient outcomes. By leveraging technological innovation, dental informatics aims to enhance treatment and diagnostic processes. Early diagnosis of dental problems is crucial, as it can substantially reduce dental disease incidence by ensuring timely and appropriate treatment. The use of artificial intelligence (AI) within dental informatics is a pivotal tool with applications across all dental specialties. This systematic literature review aims to comprehensively summarize existing research on AI implementation in dentistry. It explores various techniques used for detecting oral features such as teeth, fillings, caries, prostheses, crowns, implants, and endodontic treatments. AI plays a vital role in the diagnosis of dental diseases by enabling precise and quick identification of issues that may be difficult to detect through traditional methods. Its ability to analyze large volumes of data enhances diagnostic accuracy and efficiency, leading to better patient outcomes. METHODS An extensive search was conducted across a number of databases, including Science Direct, PubMed (MEDLINE), arXiv.org, MDPI, Nature, Web of Science, Google Scholar, Scopus, and Wiley Online Library. RESULTS The studies included in this review employed a wide range of neural networks, showcasing their versatility in detecting the dental categories mentioned above. Additionally, the use of diverse datasets underscores the adaptability of these AI models to different clinical scenarios. This study highlights the compatibility, robustness, and heterogeneity among the reviewed studies, indicating that AI technologies can be effectively integrated into current dental practices. The review also discusses potential challenges and future directions for AI in dentistry, emphasizing the need for further research to optimize these technologies for broader clinical applications. CONCLUSIONS By providing a detailed overview of AI's role in dentistry, this review aims to inform practitioners and researchers about the current capabilities and future potential of AI-driven dental care, ultimately contributing to improved patient outcomes and more efficient dental practices.
Affiliation(s)
- Shuaa S. Alharbi: Department of Information Technology, College of Computer, Qassim University, Buraydah 52571, Saudi Arabia
8. Xiang B, Lu J, Yu J. Evaluating tooth segmentation accuracy and time efficiency in CBCT images using artificial intelligence: A systematic review and meta-analysis. J Dent 2024;146:105064. PMID: 38768854; DOI: 10.1016/j.jdent.2024.105064. Received 11/10/2023; revised 04/22/2024; accepted 05/09/2024.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to assess the current performance of artificial intelligence (AI)-based methods for tooth segmentation in three-dimensional cone-beam computed tomography (CBCT) images, focusing on their accuracy and efficiency compared with manual segmentation techniques. DATA The data analyzed in this review consisted of a wide range of research studies utilizing AI algorithms for tooth segmentation in CBCT images. Meta-analysis was performed, focusing on evaluation of the segmentation results using the Dice similarity coefficient (DSC). SOURCES PubMed, Embase, Scopus, Web of Science, and IEEE Xplore were comprehensively searched to identify relevant studies. The initial search yielded 5642 entries, and subsequent screening and selection led to the inclusion of 35 studies in the systematic review. Among the various segmentation methods employed, convolutional neural networks, particularly the U-Net model, were the most commonly utilized. The pooled DSC for tooth segmentation was 0.95 (95% CI 0.94 to 0.96). Furthermore, seven papers provided insights into the time required for segmentation, which ranged from 1.5 s to 3.4 min when utilizing AI techniques. CONCLUSIONS AI models demonstrated favorable accuracy in automatically segmenting teeth from CBCT images while reducing the time required for the process. Nevertheless, correction methods for metal artifacts and tooth-structure segmentation using different imaging modalities should be addressed in future studies. CLINICAL SIGNIFICANCE AI algorithms have great potential for precise tooth measurements, orthodontic treatment planning, dental implant placement, and other dental procedures that require accurate tooth delineation. These advances have contributed to improved clinical outcomes and patient care in dental practice.
Affiliation(s)
- Bilu Xiang: School of Dentistry, Shenzhen University Medical School, Shenzhen University, Shenzhen 518000, China
- Jiayi Lu, Jiayi Yu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
9. Takeya A, Watanabe K, Haga A. Fine structural human phantom in dentistry and instance tooth segmentation. Sci Rep 2024;14:12630. PMID: 38824210; PMCID: PMC11144222; DOI: 10.1038/s41598-024-63319-x. Received 01/01/2024; accepted 05/28/2024.
Abstract
In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. This research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as varying the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. The deep-learning (DL) based tooth segmentation model was trained using the generated datasets. The results demonstrate an agreement with manual contouring when applied to clinical CBCT data. Specifically, the Dice similarity coefficient exceeded 0.87, indicating the robust performance of the developed segmentation model even when virtual imaging was used. The present results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision for enhancing precision and efficiency in dental imaging processes.
Affiliation(s)
- Atsushi Takeya, Keiichiro Watanabe, Akihiro Haga: Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima 770-8503, Japan
10. Liu Y, Xie R, Wang L, Liu H, Liu C, Zhao Y, Bai S, Liu W. Fully automatic AI segmentation of oral surgery-related tissues based on cone beam computed tomography images. Int J Oral Sci 2024;16:34. PMID: 38719817; PMCID: PMC11079075; DOI: 10.1038/s41368-024-00294-z. Received 09/22/2023; revised 02/21/2024; accepted 03/09/2024.
Abstract
Accurate segmentation of oral surgery-related tissues from cone beam computed tomography (CBCT) images can significantly accelerate treatment planning and improve surgical accuracy. In this paper, we propose a fully automated tissue segmentation system for dental implant surgery. Specifically, we propose an image preprocessing method based on data-distribution histograms, which can adaptively process CBCT images acquired with different parameters. Building on this, a bone segmentation network produces segmentations of the alveolar bone, teeth, and maxillary sinus. The tooth and mandible regions are then used as regions of interest for tooth segmentation and mandibular canal segmentation, respectively. The tooth segmentation results additionally yield the numbering order of the dentition. Experimental results show that our method achieves higher segmentation accuracy and efficiency than existing methods, with average Dice scores of 96.5%, 95.4%, 93.6%, and 94.8% on the tooth, alveolar bone, maxillary sinus, and mandibular canal segmentation tasks, respectively. These results demonstrate that the system can accelerate the development of digital dentistry.
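Multi-structure results like the per-tissue Dice scores above are typically computed class by class from a single integer-labelled volume. A toy sketch, with invented label IDs and a deliberately tiny volume:

```python
import numpy as np

def per_class_dice(pred: np.ndarray, gt: np.ndarray, labels: dict) -> dict:
    """Dice score for each labelled structure in a multi-class volume."""
    scores = {}
    for name, lab in labels.items():
        p, g = pred == lab, gt == lab
        denom = int(p.sum() + g.sum())
        scores[name] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores

# Illustrative label map (the actual IDs in any given tool will differ).
labels = {"tooth": 1, "alveolar_bone": 2, "maxillary_sinus": 3, "mandibular_canal": 4}
gt = np.zeros((4, 4, 4), dtype=np.int64)
gt[0], gt[1], gt[2], gt[3] = 1, 2, 3, 4   # one slab per structure
pred = gt.copy()
pred[0, 0] = 0                            # miss part of the tooth slab
scores = per_class_dice(pred, gt, labels)
print(scores["alveolar_bone"])  # 1.0
```

Averaging each per-class score over all scans gives the kind of per-structure summary reported in the abstract.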
Affiliation(s)
- Yu Liu, Lifeng Wang, Hongpeng Liu: Beijing Yakebot Technology Co., Ltd., Beijing, China; School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Rui Xie, Chen Liu, Yimin Zhao, Shizhu Bai: State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Wenyong Liu: Key Laboratory of Biomechanics and Mechanobiology of the Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
11
|
Zheng Q, Gao Y, Zhou M, Li H, Lin J, Zhang W, Chen X. Semi or fully automatic tooth segmentation in CBCT images: a review. PeerJ Comput Sci 2024; 10:e1994. PMID: 38660190; PMCID: PMC11041986; DOI: 10.7717/peerj-cs.1994.
Abstract
Cone beam computed tomography (CBCT) is widely employed in modern dentistry, and tooth segmentation constitutes an integral part of the digital workflow based on these imaging data. Previous methodologies rely heavily on manual segmentation, which is time-consuming and labor-intensive in clinical practice. Recently, with advancements in computer vision technology, scholars have conducted in-depth research and proposed various fast and accurate tooth segmentation methods. This review surveys 55 articles in the field and discusses the effectiveness, advantages, and disadvantages of each approach. Beyond simple classification and discussion, it aims to reveal how tooth segmentation can be improved through the application and refinement of existing image segmentation algorithms, addressing problems such as the irregular morphology and fuzzy boundaries of teeth. With continued optimization of these methods, manual operation should be reduced and greater accuracy and robustness achieved. Finally, we highlight the challenges that remain in this field and offer prospects for future directions.
Affiliation(s)
- Qianhan Zheng: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yu Gao: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Mengqi Zhou: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Huimin Li: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jiaqi Lin: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weifang Zhang: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China; Social Medicine & Health Affairs Administration, Zhejiang University, Hangzhou, China
- Xuepeng Chen: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China; Clinical Research Center for Oral Diseases of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China

12
Chen X, Ma N, Xu T, Xu C. Deep learning-based tooth segmentation methods in medical imaging: A review. Proc Inst Mech Eng H 2024; 238:115-131. PMID: 38314788; DOI: 10.1177/09544119231217603.
Abstract
Deep learning approaches for tooth segmentation employ convolutional neural networks (CNNs) or Transformers to derive tooth feature maps from extensive training datasets. Tooth segmentation serves as a critical prerequisite for clinical dental analysis and surgical procedures, enabling dentists to comprehensively assess oral conditions and subsequently diagnose pathologies. Over the past decade, deep learning has advanced significantly, with researchers introducing efficient models such as U-Net, Mask R-CNN, and the Segmentation Transformer (SETR). Building upon these frameworks, scholars have proposed numerous enhancement and optimization modules to attain superior tooth segmentation performance. This paper discusses deep learning methods for tooth segmentation on dental panoramic radiographs (DPRs), cone-beam computed tomography (CBCT) images, intraoral scan (IOS) models, and other modalities. Finally, we outline performance-enhancing techniques and suggest potential avenues for ongoing research. Numerous challenges remain, including limitations in data annotation and model generalization. This paper offers insights for future tooth segmentation studies, potentially facilitating broader clinical adoption.
Affiliation(s)
- Xiaokang Chen: Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China
- Nan Ma: Faculty of Information and Technology, Beijing University of Technology, Beijing, China; Engineering Research Center of Intelligence Perception and Autonomous Control, Ministry of Education, Beijing University of Technology, Beijing, China
- Tongkai Xu: Department of General Dentistry II, Peking University School and Hospital of Stomatology, Beijing, China
- Cheng Xu: Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China

13
Tan M, Cui Z, Zhong T, Fang Y, Zhang Y, Shen D. A progressive framework for tooth and substructure segmentation from cone-beam CT images. Comput Biol Med 2024; 169:107839. PMID: 38150887; DOI: 10.1016/j.compbiomed.2023.107839.
Abstract
BACKGROUND Accurate segmentation of individual teeth and their substructures, including enamel, pulp, and dentin, from cone-beam computed tomography (CBCT) images is essential for dental diagnosis and treatment planning in digital dentistry. Existing CBCT-based tooth segmentation methods have achieved substantial progress; however, techniques for further segmentation into substructures are yet to be developed. PURPOSE We aim to propose a novel three-stage progressive deep-learning-based framework for automatically segmenting 3D teeth from CBCT images, focusing on the finer substructures, i.e., enamel, pulp, and dentin. METHODS We first detect each tooth by its centroid using a clustering scheme, which efficiently localizes each tooth by aggregating learned displacement vectors from the foreground tooth region. Next, guided by the detected centroid, each tooth proposal, combined with the corresponding tooth map, is processed by our tooth segmentation network. We also present an attention-based hybrid feature fusion mechanism, which preserves intricate details of the tooth boundary while maintaining the global tooth shape, thereby enhancing segmentation. Additionally, we use the skeleton of the tooth to guide the subsequent substructure segmentation. RESULTS Our algorithm is extensively evaluated on a collected dataset of 314 patients, and extensive comparison and ablation studies demonstrate the superior segmentation performance of our approach. CONCLUSIONS Our proposed method can automatically segment teeth and their finer substructures from CBCT images, underlining its potential applicability for clinical diagnosis and surgical treatment.
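The centroid-based detection step described in the methods — foreground voxels vote for a tooth centroid via learned displacement vectors, and the votes are clustered — can be sketched in a few lines. This is an illustrative toy with synthetic points and ideal offsets, not the paper's network:

```python
import numpy as np

def detect_centroids(coords, offsets, merge_dist=2.0):
    """Greedy clustering of centroid votes: each foreground voxel votes for a
    tooth centroid by adding its learned offset; nearby votes are merged."""
    votes = coords + offsets
    clusters = []
    for v in votes:
        for c in clusters:
            if np.linalg.norm(v - c["sum"] / c["n"]) < merge_dist:
                c["sum"] += v
                c["n"] += 1
                break
        else:
            clusters.append({"sum": v.copy(), "n": 1})
    return [c["sum"] / c["n"] for c in clusters]

# Two synthetic "teeth" with voxels scattered around (10,10,10) and (30,30,30);
# offsets are taken as ideal (ground-truth displacement to the centroid).
rng = np.random.default_rng(0)
coords = np.vstack([rng.normal(10, 1, (50, 3)), rng.normal(30, 1, (50, 3))])
offsets = np.vstack([10.0 - coords[:50], 30.0 - coords[50:]])
centroids = detect_centroids(coords, offsets)
print(len(centroids))  # 2
```

In the real framework the offsets come from a regression network and the clustering must tolerate noisy votes; the greedy merge above stands in for that step.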
Affiliation(s)
- Minhui Tan: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Zhiming Cui: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Tao Zhong: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Yu Fang: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Yu Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Dinggang Shen: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 200230, China; Shanghai Clinical Research and Trial Center, Shanghai, 201210, China

14
Li J, Cheng B, Niu N, Gao G, Ying S, Shi J, Zeng T. A fine-grained orthodontics segmentation model for 3D intraoral scan data. Comput Biol Med 2024; 168:107821. PMID: 38064844; DOI: 10.1016/j.compbiomed.2023.107821.
Abstract
With the widespread application of digital orthodontics in the diagnosis and treatment of oral diseases, more and more researchers are focusing on accurate tooth segmentation from intraoral scan data, since the accuracy of the segmentation results directly affects the dentist's follow-up diagnosis. Although current research on tooth segmentation has achieved promising results, the 3D intraoral scan datasets used are almost all indirect scans of plaster models and contain only limited samples of abnormal teeth, making them difficult to apply to clinical scenarios under orthodontic treatment. A further issue is the lack of a unified, standardized dataset for analyzing and validating the effectiveness of tooth segmentation. In this work, we focus on deformed-tooth segmentation and provide a fine-grained tooth segmentation dataset (3D-IOSSeg). The dataset consists of 3D intraoral scan data from more than 200 patients, with each sample labeled at the level of individual mesh cells and every tooth in the upper and lower jaws meticulously classified. In addition, we propose Fast-TGCN, a fast graph convolutional network for 3D tooth segmentation, in which the relationship between adjacent mesh cells is established directly through the naive adjacency matrix to better extract local geometric features of the tooth. Extensive experiments show that Fast-TGCN can quickly and accurately segment teeth from mouths with complex structures and outperforms other methods on various evaluation metrics. Moreover, we present the results of multiple classical tooth segmentation methods on this dataset, providing a comprehensive analysis of the field. All code and data will be available at https://github.com/MIVRC/Fast-TGCN.
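A graph convolution driven directly by the adjacency matrix of mesh cells, as the Fast-TGCN description suggests, can be sketched with a generic GCN-style layer on a toy chain graph. The normalization and ReLU choices below are common conventions, assumed here rather than taken from the released code:

```python
import numpy as np

def naive_graph_conv(X, A, W):
    """One graph-convolution layer over mesh cells: neighbor features are
    aggregated through the self-loop-augmented, row-normalized adjacency
    matrix, projected by W, and passed through ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize by degree
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)

# 4 mesh cells in a chain (0-1-2-3), 3-dim input features, 2-dim output
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)
W = np.ones((3, 2))
H = naive_graph_conv(X, A, W)
print(H.shape)  # (4, 2)
```

On a real intraoral scan, A encodes which mesh cells share an edge, so each cell's features are smoothed with its immediate geometric neighbors.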
Affiliation(s)
- Juncheng Li: School of Communication Information Engineering, Shanghai University, Shanghai, China
- Bodong Cheng: School of Computer Science and Technology, East China Normal University, Shanghai, China
- Najun Niu: School of Stomatology, Nanjing Medical University, Nanjing, China
- Guangwei Gao: Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing, China
- Shihui Ying: Department of Mathematics, School of Science, Shanghai University, Shanghai, China
- Jun Shi: School of Communication Information Engineering, Shanghai University, Shanghai, China
- Tieyong Zeng: Department of Mathematics, The Chinese University of Hong Kong, New Territories, Hong Kong

15
Mei L, Fang Y, Zhao Y, Zhou XS, Zhu M, Cui Z, Shen D. DTR-Net: Dual-Space 3D Tooth Model Reconstruction From Panoramic X-Ray Images. IEEE Trans Med Imaging 2024; 43:517-528. PMID: 37751352; DOI: 10.1109/tmi.2023.3313795.
Abstract
In digital dentistry, cone-beam computed tomography (CBCT) can provide complete 3D tooth models, yet it suffers from long-standing concerns about excessive radiation dose and higher expense. Reconstructing 3D tooth models from 2D panoramic X-ray images is therefore more cost-effective and has attracted great interest for clinical applications. In this paper, we propose a novel dual-space framework, DTR-Net, to reconstruct 3D tooth models from 2D panoramic X-ray images in both the image and geometric spaces. Specifically, in the image space, we apply a 2D-to-3D generative model to recover the intensities of the CBCT image, guided by a task-oriented tooth segmentation network in a collaborative training manner. Meanwhile, in the geometric space, we employ an implicit function network that learns in continuous space, using points to capture complicated tooth shapes and their geometric properties. Experimental results demonstrate that DTR-Net achieves state-of-the-art performance both quantitatively and qualitatively in 3D tooth model reconstruction, indicating its potential for application in dental practice.
16
Kim H, Jeon YD, Park KB, Cha H, Kim MS, You J, Lee SW, Shin SH, Chung YG, Kang SB, Jang WS, Yoon DK. Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning. Sci Rep 2023; 13:20431. PMID: 37993627; PMCID: PMC10665312; DOI: 10.1038/s41598-023-47706-4.
Abstract
Orthopaedic surgeons need to correctly identify bone fragments from 2D/3D CT images before trauma surgery, and advances in deep learning offer good support for this task beyond manual diagnosis. This study demonstrates the application of a DeepLab v3+-based deep learning model for the automatic segmentation of fractured tibia and fibula fragments from CT images and evaluates its performance. The model, trained on over 11 million images, showed good performance, with a global accuracy of 98.92%, a weighted intersection over union of 0.9841, and a mean boundary F1 score of 0.8921. Moreover, it performed segmentation 5-8 times faster than manual recognition by experts while achieving comparable results. This study will play an important role in convenient and rapid preoperative surgical planning for trauma surgery.
Affiliation(s)
- Hyeonjoo Kim: Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea; Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Young Dae Jeon: Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Ki Bong Park: Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Hayeong Cha: Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Moo-Sub Kim: Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Juyeon You: Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Se-Won Lee: Department of Orthopedic Surgery, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seung-Han Shin: Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yang-Guk Chung: Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Bin Kang: Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Won Seuk Jang: Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Do-Kun Yoon: Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

17
Zhang L, Li W, Lv J, Xu J, Zhou H, Li G, Ai K. Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview. J Dent 2023; 138:104727. PMID: 37769934; DOI: 10.1016/j.jdent.2023.104727.
Abstract
OBJECTIVES This article reviews recent advances in computer-aided segmentation methods for oral and maxillofacial surgery and describes the advantages and limitations of these methods. The objective is to provide an invaluable resource for precise therapy and surgical planning in oral and maxillofacial surgery. STUDY SELECTION, DATA AND SOURCES This review includes full-text articles and conference proceedings reporting the application of segmentation methods in the field of oral and maxillofacial surgery. The research focuses on three aspects: tooth detection and segmentation, mandibular canal segmentation, and alveolar bone segmentation. The most commonly used imaging technique is CBCT, followed by conventional CT and orthopantomography. A systematic electronic database search was performed up to July 2023 (Medline via PubMed, IEEE Xplore, ArXiv, and Google Scholar were searched). RESULTS These segmentation methods fall into two main categories: traditional image processing and machine learning (including deep learning). Performance testing on datasets labeled by medical professionals shows that these methods perform similarly to dentists' annotations, confirming their effectiveness; however, no studies have evaluated their practical application value. CONCLUSION Segmentation methods (particularly deep learning methods) have demonstrated unprecedented performance, while inherent challenges remain, including the scarcity and inconsistency of datasets, visible artifacts in images, unbalanced data distribution, and the "black box" nature of the models. CLINICAL SIGNIFICANCE Accurate image segmentation is critical for precise treatment and surgical planning in oral and maxillofacial surgery. This review aims to facilitate more accurate and effective surgical treatment planning among dental researchers.
Affiliation(s)
- Lang Zhang: School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Wang Li: School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jinxun Lv: School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jiajie Xu: School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Hengyu Zhou: School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Gen Li: School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Keqi Ai: Department of Radiology, Xinqiao Hospital, Army Medical University, Chongqing 400037, China

18
Song Y, Yang H, Ge Z, Du H, Li G. Age estimation based on 3D pulp segmentation of first molars from CBCT images using U-Net. Dentomaxillofac Radiol 2023; 52:20230177. PMID: 37427595; PMCID: PMC10552131; DOI: 10.1259/dmfr.20230177.
Abstract
OBJECTIVE To train a U-Net model to segment the intact pulp cavity of first molars and establish a reliable mathematical model for age estimation. METHODS We trained a U-Net model on 20 sets of cone-beam CT images; this model was able to segment the intact pulp cavity of first molars. Using this model, 239 maxillary and 234 mandibular first molars from 142 males and 135 females aged 15-69 years were segmented and their intact pulp cavity volumes calculated, followed by logarithmic regression analysis to establish a mathematical model with age as the dependent variable and pulp cavity volume as the independent variable. Another 256 first molars were collected to estimate ages with the established model. The mean absolute error and root mean square error between the actual and estimated ages were used to assess the precision and accuracy of the model. RESULTS The Dice similarity coefficient of the U-Net model was 95.6%. The established age estimation model was [Formula: see text] (V is the intact pulp cavity volume of the first molars). The coefficient of determination (R2), mean absolute error, and root mean square error were 0.662, 6.72 years, and 8.26 years, respectively. CONCLUSION The trained U-Net model can accurately segment the pulp cavity of first molars from three-dimensional cone-beam CT images, and the segmented pulp cavity volumes can be used to estimate human age with reasonable precision and accuracy.
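The study's regression step fits age as a logarithmic function of pulp cavity volume and judges it by MAE and RMSE. The exact fitted formula is elided in the abstract, so the sketch below only fits the same functional form, age = b0 + b1·ln(V), to synthetic placeholder data; none of the numbers are the paper's:

```python
import numpy as np

# Synthetic stand-in data: volumes in mm^3 and ages generated from an assumed
# log relationship plus noise (NOT the study's data or coefficients).
rng = np.random.default_rng(1)
volume = rng.uniform(20.0, 120.0, 300)
age = 95.0 - 17.0 * np.log(volume) + rng.normal(0, 6, 300)

# Least-squares fit of age = b0 + b1 * ln(V)
X = np.column_stack([np.ones_like(volume), np.log(volume)])
b0, b1 = np.linalg.lstsq(X, age, rcond=None)[0]
pred = b0 + b1 * np.log(volume)

# Precision/accuracy metrics as used in the study
mae = np.mean(np.abs(pred - age))
rmse = np.sqrt(np.mean((pred - age) ** 2))
print(b1 < 0, rmse >= mae)  # slope is negative: pulp volume shrinks with age
```

The negative slope reflects the biological premise: secondary dentin deposition shrinks the pulp cavity as age increases.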
Affiliation(s)
- Yangjing Song: Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology; National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Beijing, China
- Huifang Yang: Center of Digital Dentistry, Peking University School and Hospital of Stomatology & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China
- Zhipu Ge: Department of Radiology, Qingdao Stomatological Hospital Affiliated to Qingdao University, Qingdao, Shandong Province, China
- Han Du: Shanghai Stomatological Hospital & School of Stomatology, Fudan University & Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai, China
- Gang Li: Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology; National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Beijing, China

19
Wang J, Suo R, Zhou Y. ML R-CNN: A Location-Aware Network for Tooth Instance Segmentation. Proceedings of the 15th International Conference on Digital Image Processing 2023:1-5. DOI: 10.1145/3604078.3604097.
Affiliation(s)
- Jiajun Wang: College of Software Engineering, Sichuan University, China
- Ruiyu Suo: College of Physics, Sichuan University, China
- Yao Zhou: College of Computer Science, Sichuan University, China

20
Wang Y, Xia W, Yan Z, Zhao L, Bian X, Liu C, Qi Z, Zhang S, Tang Z. Root canal treatment planning by automatic tooth and root canal segmentation in dental CBCT with deep multi-task feature learning. Med Image Anal 2023; 85:102750. PMID: 36682153; DOI: 10.1016/j.media.2023.102750.
Abstract
Accurate and automatic segmentation of individual teeth and root canals from cone-beam computed tomography (CBCT) images is an essential but challenging step in dental surgical planning. In this paper, we propose a novel framework consisting of two neural networks, DentalNet and PulpNet, for efficient, precise, and fully automatic tooth instance segmentation and root canal segmentation from CBCT images. We first use DentalNet to perform tooth instance segmentation and identification. Then, the region of interest (ROI) of the affected tooth is extracted and fed into PulpNet to obtain a precise segmentation of the pulp chamber and root canal space. The two networks are trained by multi-task feature learning, evaluated on two clinical datasets, and achieve performance superior to several competing methods. In addition, we incorporated our method into an efficient clinical workflow to improve the surgical planning process. In two clinical case studies, our workflow took only 2 min instead of 6 h to obtain the 3D model of the tooth and root canal for surgical planning, yielding satisfying outcomes in difficult root canal treatments.
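The hand-off between the two networks — the first network's instance mask defines the ROI the second network operates on — can be sketched as a simple bounding-box crop. This is an illustrative stand-in for the framework's actual ROI extraction, with a hypothetical margin parameter:

```python
import numpy as np

def crop_roi(volume, mask, margin=4):
    """Crop a CBCT volume to the bounding box of one tooth-instance mask
    (plus a safety margin), so a second-stage network sees only that tooth."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    roi_slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[roi_slices], roi_slices

# Toy 64^3 volume with one "tooth" occupying a 10x10x10 block
vol = np.zeros((64, 64, 64))
mask = np.zeros_like(vol, dtype=bool)
mask[20:30, 25:35, 30:40] = True
roi, sl = crop_roi(vol, mask)
print(roi.shape)  # (18, 18, 18)
```

Cropping to the ROI keeps the second network's input small and centered, which is why such cascades are both faster and more precise than segmenting fine structures in the full volume.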
Affiliation(s)
- Yiwei Wang: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Wenjun Xia: Shanghai Xuhui District Dental Center, Shanghai 200031, China
- Zhennan Yan: SenseBrain Technology, Princeton, NJ 08540, USA
- Liang Zhao: SenseTime Research, Shanghai 200233, China
- Xiaohe Bian: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Chang Liu: SenseTime Research, Shanghai 200233, China
- Zhengnan Qi: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Shaoting Zhang: Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China; Centre for Perceptual and Interactive Intelligence (CPII), Hong Kong Special Administrative Region of China
- Zisheng Tang: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China

21
Deep Learning-Based Automatic Detection and Grading of Motion-Related Artifacts on Gadoxetic Acid-Enhanced Liver MRI. Invest Radiol 2023; 58:166-172. PMID: 36070544; DOI: 10.1097/rli.0000000000000914.
Abstract
OBJECTIVES The aim of this study was to develop and validate a deep learning-based algorithm (DLA) for the automatic detection and grading of motion-related artifacts on arterial phase liver magnetic resonance imaging (MRI). MATERIALS AND METHODS A multistep DLA for detection and grading of motion-related artifacts, based on a modified ResNet-101 and U-net, was trained using 336 arterial phase images of gadoxetic acid-enhanced liver MRI examinations obtained in 2017 (training dataset; mean age, 68.6 years [range, 18-95]; 254 men). Motion-related artifacts were evaluated in 4 different MRI slices using a 3-tier grading system. The validation data comprised 313 images from the same institution obtained in 2018 (internal validation dataset; mean age, 67.2 years [range, 21-87]; 228 men) and 329 images from 3 different institutions (external validation dataset; mean age, 64.0 years [range, 23-90]; 214 men), and per-slice and per-examination performance for the detection of motion-related artifacts was evaluated. RESULTS The per-slice sensitivity and specificity of the DLA for detecting grade 3 motion-related artifacts were 91.5% (97/106) and 96.8% (1134/1172) in the internal validation dataset and 93.3% (265/284) and 91.6% (948/1035) in the external validation dataset. The per-examination sensitivity and specificity were 92.0% (23/25) and 99.7% (287/288) in the internal validation dataset and 90.0% (72/80) and 96.0% (239/249) in the external validation dataset, respectively. The DLA's processing time for automatic grading of motion-related artifacts was 4.11 to 4.22 seconds per MRI examination. CONCLUSIONS The DLA enabled automatic and instant detection and grading of motion-related artifacts on arterial phase gadoxetic acid-enhanced liver MRI.
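The per-slice figures in the results follow directly from the quoted counts; a small sketch of the two metrics:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Detection metrics as reported: sensitivity = TP/(TP+FN),
    specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Per-slice internal-validation counts quoted above: 97/106 and 1134/1172
sens, spec = sensitivity_specificity(tp=97, fn=106 - 97, tn=1134, fp=1172 - 1134)
print(f"{sens:.1%} {spec:.1%}")  # 91.5% 96.8%
```

The same arithmetic reproduces the external and per-examination percentages from their respective fractions.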
22
Arsiwala-Scheppach LT, Chaurasia A, Müller A, Krois J, Schwendicke F. Machine Learning in Dentistry: A Scoping Review. J Clin Med 2023; 12:937. PMID: 36769585; PMCID: PMC9918184; DOI: 10.3390/jcm12030937.
Abstract
Machine learning (ML) is being increasingly employed in dental research and application. We aimed to systematically compile studies using ML in dentistry and assess their methodological quality, including risk of bias and reporting standards. We evaluated studies employing ML in dentistry published from 1 January 2015 to 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. We assessed publication trends and the distribution of ML tasks (classification, object detection, semantic segmentation, instance segmentation, and generation) across clinical fields. We appraised risk of bias and adherence to reporting standards using the QUADAS-2 and TRIPOD checklists, respectively. Of 183 identified studies, 168 were included, covering various ML tasks and employing a broad range of ML models, input data, data sources, strategies to generate reference tests, and performance metrics. Classification tasks were most common. Forty-two different metrics were used to evaluate model performance, with accuracy, sensitivity, precision, and intersection-over-union the most common. We observed a considerable risk of bias and moderate adherence to reporting standards, which hampers replication of results. A minimum (core) set of outcomes and outcome metrics is necessary to facilitate comparisons across studies.
Affiliation(s)
- Lubaina T. Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Akhilanand Chaurasia
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Department of Oral Medicine and Radiology, King George’s Medical University, Lucknow 226003, India
- Anne Müller
- Pharmacovigilance Institute (Pharmakovigilanz- und Beratungszentrum, PVZ) for Embryotoxicology, Institute of Clinical Pharmacology and Toxicology, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany
- Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
23
Artificial intelligence models for clinical usage in dentistry with a focus on dentomaxillofacial CBCT: a systematic review. Oral Radiol 2023; 39:18-40. [PMID: 36269515 DOI: 10.1007/s11282-022-00660-9] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 09/29/2022] [Indexed: 01/05/2023]
Abstract
This study aimed to perform a systematic review of the literature on the application of artificial intelligence (AI) in dental and maxillofacial cone beam computed tomography (CBCT) and to provide comprehensive descriptions of current technical innovations to assist future researchers and dental professionals. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement was followed, and the study's protocol was prospectively registered. The following databases were searched using MeSH and Emtree terms: PubMed/MEDLINE, Embase, and Web of Science. The search strategy retrieved 1473 articles, of which 59 publications assessing the use of AI on CBCT images in dentistry were included. According to the PROBAST guidelines for study design, seven papers reported only external validation and 11 reported both model building and validation on an external dataset; 40 studies focused exclusively on model development. Most of the AI models employed deep learning (42 studies), while the other 17 papers used conventional approaches, such as statistical-shape and active shape models, and traditional machine learning methods, such as thresholding-based methods, support vector machines, k-nearest neighbors, decision trees, and random forests. Supervised or semi-supervised learning was utilized in the majority (96.62%) of studies, and unsupervised learning in two (3.38%). Fifty-two of the included studies had a high risk of bias (ROB), two had a low ROB, and four had an unclear rating. Applications based on AI have the potential to improve oral healthcare quality; promote personalized, predictive, preventative, and participatory dentistry; and expedite dental procedures.
24
Analysis of Deep Learning Techniques for Dental Informatics: A Systematic Literature Review. Healthcare (Basel) 2022; 10:healthcare10101892. [PMID: 36292339 PMCID: PMC9602147 DOI: 10.3390/healthcare10101892] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Revised: 08/30/2022] [Accepted: 08/31/2022] [Indexed: 12/04/2022] Open
Abstract
Within the ever-growing healthcare industry, dental informatics is a burgeoning field of study. One of the major obstacles to the healthcare system's transformation is obtaining knowledge and insightful data from complex, high-dimensional, and diverse sources. Modern biomedical research, for instance, has seen an increase in the use of complex, heterogeneous, poorly documented, and generally unstructured electronic health records, imaging, sensor data, and text. Certain restrictions remain even after many current techniques have been used to extract more robust and useful elements from the data for analysis. The most recent deep learning breakthroughs provide new, effective paradigms for building end-to-end learning models from complex data. The current study therefore examines the most recent research on the use of deep learning techniques for dental informatics problems and recommends creating comprehensive, meaningful, and interpretable structures that might benefit the healthcare industry. We also draw attention to some drawbacks, highlight the need for better technique development, and provide new perspectives on this exciting development in the field.
25
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac840f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 07/25/2022] [Indexed: 11/11/2022]
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operations, and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, and MICCAI. Among them, 65 references are on automatic segmentation, 15 on automatic landmark detection, and eight on automatic registration. The review first presents an overview of deep learning in MIC. The applications of deep learning methods are then systematically summarized according to clinical needs and generalized into segmentation, landmark detection, and registration of head and neck medical images. Segmentation work focuses mainly on the automatic segmentation of high-risk organs, head and neck tumors, skull structures, and teeth, including analysis of their advantages, differences, and shortcomings. Landmark detection focuses mainly on cephalometric and craniomaxillofacial images, with analysis of their advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers, and doctors engaged in medical image analysis for head and neck surgery.
26
Tooth CT Image Segmentation Method Based on the U-Net Network and Attention Module. Comput Math Methods Med 2022; 2022:3289663. [PMID: 36035284 PMCID: PMC9417771 DOI: 10.1155/2022/3289663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 07/05/2022] [Accepted: 07/19/2022] [Indexed: 11/18/2022]
Abstract
Traditional image segmentation methods often suffer from low segmentation accuracy and long processing times on complex tooth Computed Tomography (CT) images. This paper proposes an improved segmentation method for tooth CT images. First, the U-Net network is used to construct a tooth image segmentation model, in which the abundant feature maps from the downsampling path are supplemented to the upsampling path to reduce information loss and to address inaccurate segmentation and localization. An attention module is then introduced into the U-Net network to increase the weight of important information and improve segmentation accuracy; within it, subregion average pooling is used in place of global average pooling to obtain spatial features. Finally, the U-Net network combined with the improved attention module is used to segment tooth CT images. In experiments on an image collection provided by West China Hospital, the method showed better segmentation performance and efficiency than competing algorithms, producing clearer tooth contours that can help assist diagnosis.
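The subregion average pooling mentioned above, which replaces global average pooling to retain coarse spatial information, can be sketched as follows; the 2×2 grid, the list-of-lists representation, and all names are illustrative assumptions, not details from the paper:

```python
def subregion_average_pool(feature_map, regions=2):
    """Split a 2D feature map (list of lists) into regions x regions tiles
    and average each tile. Unlike global average pooling, which collapses
    the map to one value, this keeps a coarse spatial layout."""
    h, w = len(feature_map), len(feature_map[0])
    rh, rw = h // regions, w // regions
    pooled = []
    for i in range(regions):
        row = []
        for j in range(regions):
            tile = [feature_map[y][x]
                    for y in range(i * rh, (i + 1) * rh)
                    for x in range(j * rw, (j + 1) * rw)]
            row.append(sum(tile) / len(tile))
        pooled.append(row)
    return pooled
```

In an attention module, each pooled value would then be mapped to a weight for the corresponding subregion of the feature map.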
27
Park S, Chung M. Cardiac segmentation on CT Images through shape-aware contour attentions. Comput Biol Med 2022; 147:105782. [DOI: 10.1016/j.compbiomed.2022.105782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 06/02/2022] [Accepted: 06/19/2022] [Indexed: 11/30/2022]
28
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. [PMID: 35831451 PMCID: PMC9279304 DOI: 10.1038/s41598-022-15920-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 07/01/2022] [Indexed: 11/21/2022] Open
Abstract
This study aims to generate and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat), providing a measurement method. The second aim is to validate the newly developed artificial intelligence system against commercially available software for 3D CBCT evaluation. A Convolutional Neural Network-based machine learning algorithm was used for segmentation of the pharyngeal airways in OSA and non-OSA patients. Radiologists used semi-automatic software to manually determine the airway, and their measurements were compared with the AI. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest point of the airway (mm), the field of the airway (mm2), and the volume of the airway (cc) of both OSA and non-OSA patients were also compared. There was no statistically significant difference between the manual technique and Diagnocat measurements in any group (p > 0.05). Inter-class correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume between the manual, automatic, and Diagnocat measurements in non-OSA and OSA patients, the output images were evaluated to understand why the mean total airway value was higher in the Diagnocat measurement. The Diagnocat algorithm also measures the epiglottis volume and the posterior nasal aperture volume owing to the low soft-tissue contrast of CBCT images, which leads to higher airway volume measurements.
Affiliation(s)
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey
- Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland
- Aida Kurbanova
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
- Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Melis Mısırlı
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Finn Rasmussen
- Internal Medicine Department Lunge Section, SVS Esbjerg, Esbjerg, Denmark
- Life Lung Health Center, Nicosia, Cyprus
29
Du M, Wu X, Ye Y, Fang S, Zhang H, Chen M. A Combined Approach for Accurate and Accelerated Teeth Detection on Cone Beam CT Images. Diagnostics (Basel) 2022; 12:diagnostics12071679. [PMID: 35885584 PMCID: PMC9323385 DOI: 10.3390/diagnostics12071679] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 07/04/2022] [Accepted: 07/07/2022] [Indexed: 11/23/2022] Open
Abstract
Teeth detection and tooth segmentation are essential for processing Cone Beam Computed Tomography (CBCT) images; their accuracy determines the credibility of subsequent applications, such as diagnosis and treatment planning in clinical practice, or other research dependent on automatic dental identification. The main problems are complex noise and metal artefacts, which affect the accuracy of teeth detection and segmentation with traditional algorithms. In this study, we proposed a teeth-detection method that avoids these problems and accelerates operation. In our method, (1) a Convolutional Neural Network (CNN) was employed to classify layer classes; (2) images were chosen for Region of Interest (ROI) cropping; (3) within ROI regions, a YOLO v3 and multi-level combined teeth-detection method was used to locate each tooth bounding box; and (4) tooth bounding boxes were obtained on all layers. We compared our method with a Faster R-CNN method commonly used in previous studies. Training and prediction time were shortened by 80% and 62%, respectively. The Object Inclusion Ratio (OIR) of our method was 96.27%, versus 91.40% for the Faster R-CNN method. When testing images with severe noise or with different missing teeth, our method produced stable results. In conclusion, our teeth-detection method for dental CBCT is practical and reliable owing to its high prediction speed and robust detection.
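The abstract does not formally define the Object Inclusion Ratio; one plausible reading is the fraction of the ground-truth region covered by the predicted bounding box, which can be sketched as follows (both the definition and all names here are assumptions for illustration only):

```python
def inclusion_ratio(pred_box, gt_box):
    """Fraction of the ground-truth box area covered by the predicted box.
    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix = max(0.0, min(pred_box[2], gt_box[2]) - max(pred_box[0], gt_box[0]))
    iy = max(0.0, min(pred_box[3], gt_box[3]) - max(pred_box[1], gt_box[1]))
    gt_area = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    return (ix * iy) / gt_area
```

Unlike intersection-over-union, this ratio does not penalize a predicted box that is larger than the ground truth, only one that fails to cover it.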
Affiliation(s)
- Mingjun Du
- Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xueying Wu
- Department of Prosthodontics, Shanghai Stomatological Hospital & School of Stomatology, Fudan University and Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai 200040, China
- Ye Ye
- Department of Prosthodontics, Shanghai Stomatological Hospital & School of Stomatology, Fudan University and Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai 200040, China
- Shuobo Fang
- Department of Prosthodontics, Shanghai Stomatological Hospital & School of Stomatology, Fudan University and Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai 200040, China
- Hengwei Zhang
- Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Ming Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Correspondence:
30
Kim HJ, Kim KD, Kim DH. Deep convolutional neural network-based skeletal classification of cephalometric image compared with automated-tracing software. Sci Rep 2022; 12:11659. [PMID: 35804075 PMCID: PMC9270345 DOI: 10.1038/s41598-022-15856-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Accepted: 06/30/2022] [Indexed: 11/30/2022] Open
Abstract
This study aimed to investigate a deep convolutional neural network- (DCNN-) based artificial intelligence (AI) model using cephalometric images for the classification of sagittal skeletal relationships, and to compare the performance of the newly developed DCNN-based AI model with that of automated-tracing AI software. A total of 1574 cephalometric images were included and classified based on the A point-Nasion (N)-B point (ANB) angle (Class I being 0–4°, Class II > 4°, and Class III < 0°). The DCNN-based AI model was developed using training (1334 images) and validation (120 images) sets with a standard classification label for the individual images. A test set of 120 images was used to compare the AI models. The agreement of the DCNN-based AI model or the automated-tracing AI software with the standard classification label was measured using Cohen's kappa coefficient (0.913 for the DCNN-based AI model; 0.775 for the automated-tracing AI software). In terms of performance, the micro-average values of the DCNN-based AI model (sensitivity, 0.94; specificity, 0.97; precision, 0.94; accuracy, 0.96) were higher than those of the automated-tracing AI software (sensitivity, 0.85; specificity, 0.93; precision, 0.85; accuracy, 0.90). For sagittal skeletal classification using cephalometric images, the DCNN-based AI model outperformed the automated-tracing AI software.
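The ANB cut-offs stated above translate directly into a classification rule; a minimal sketch (the function name is illustrative, and boundary handling at exactly 0° and 4° follows the abstract's stated ranges):

```python
def sagittal_skeletal_class(anb_degrees):
    """Classify the sagittal skeletal relationship from the ANB angle,
    using the study's cut-offs: Class I 0-4 deg, Class II > 4 deg,
    Class III < 0 deg."""
    if anb_degrees > 4:
        return "Class II"
    if anb_degrees < 0:
        return "Class III"
    return "Class I"
```

The DCNN model in the study learns this label directly from the image; the rule above is only the ground-truth labeling scheme applied to a measured ANB angle.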
Affiliation(s)
- Ho-Jin Kim
- Department of Orthodontics, School of Dentistry, Kyungpook National University, 2175, Dalgubul-Daero, Jung-Gu, Daegu, 41940, Korea
- Kyoung Dong Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu, Korea
- Do-Hoon Kim
- Medical Big Data Research Center, Kyungpook National University, Daegu, Korea
31
Automated segmentation of the fractured vertebrae on CT and its applicability in a radiomics model to predict fracture malignancy. Sci Rep 2022; 12:6735. [PMID: 35468985 PMCID: PMC9038736 DOI: 10.1038/s41598-022-10807-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Accepted: 04/13/2022] [Indexed: 11/08/2022] Open
Abstract
Although CT radiomics has shown promising results in the evaluation of vertebral fractures, the need for manual segmentation of fractured vertebrae limited the routine clinical implementation of radiomics. Therefore, automated segmentation of fractured vertebrae is needed for successful clinical use of radiomics. In this study, we aimed to develop and validate an automated algorithm for segmentation of fractured vertebral bodies on CT, and to evaluate the applicability of the algorithm in a radiomics prediction model to differentiate benign and malignant fractures. A convolutional neural network was trained to perform automated segmentation of fractured vertebral bodies using 341 vertebrae with benign or malignant fractures from 158 patients, and was validated on independent test sets (internal test, 86 vertebrae [59 patients]; external test, 102 vertebrae [59 patients]). Then, a radiomics model predicting fracture malignancy on CT was constructed, and the prediction performance was compared between automated and human expert segmentations. The algorithm achieved good agreement with human expert segmentation at testing (Dice similarity coefficient, 0.93-0.94; cross-sectional area error, 2.66-2.97%; average surface distance, 0.40-0.54 mm). The radiomics model demonstrated good performance in the training set (AUC, 0.93). In the test sets, automated and human expert segmentations showed comparable prediction performances (AUC, internal test, 0.80 vs 0.87, p = 0.044; external test, 0.83 vs 0.80, p = 0.37). In summary, we developed and validated an automated segmentation algorithm that showed comparable performance to human expert segmentation in a CT radiomics model to predict fracture malignancy, which may enable more practical clinical utilization of radiomics.
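The Dice similarity coefficient used above to compare automated and expert masks can be sketched over voxel-index sets; the set representation is an illustrative simplification of the usual binary-mask computation:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two segmentation masks,
    each given as a set of voxel indices: 2*|A & B| / (|A| + |B|)."""
    if not pred and not target:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(pred & target) / (len(pred) + len(target))
```

Values near the 0.93-0.94 range reported above indicate that the automated mask overlaps almost completely with the expert mask.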
32
A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nat Commun 2022; 13:2096. [PMID: 35440592 PMCID: PMC9018763 DOI: 10.1038/s41467-022-29637-2] [Citation(s) in RCA: 110] [Impact Index Per Article: 36.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 02/03/2022] [Indexed: 12/20/2022] Open
Abstract
Accurate delineation of individual teeth and alveolar bones from dental cone-beam CT (CBCT) images is an essential step in digital dentistry for precision dental healthcare. In this paper, we present an AI system for efficient, precise, and fully automatic segmentation of real-patient CBCT images. Our AI system is evaluated on the largest dataset so far: 4,215 patients (with 4,938 CBCT scans) from 15 different centers. This fully automatic AI system achieves a segmentation accuracy comparable to experienced radiologists (e.g., 0.5% improvement in terms of average Dice similarity coefficient), with a significant improvement in efficiency (i.e., 500 times faster). In addition, it consistently obtains accurate results on challenging cases with variable dental abnormalities, with average Dice scores of 91.5% and 93.0% for tooth and alveolar bone segmentation, respectively. These results demonstrate its potential as a powerful system to boost clinical workflows of digital dentistry.
33
Dot G, Schouman T, Dubois G, Rouch P, Gajny L. Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework. Eur Radiol 2022; 32:3639-3648. [PMID: 35037088 DOI: 10.1007/s00330-021-08455-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 09/27/2021] [Accepted: 11/01/2021] [Indexed: 01/06/2023]
Abstract
OBJECTIVES To evaluate the performance of the nnU-Net open-source deep learning framework for automatic multi-task segmentation of craniomaxillofacial (CMF) structures in CT scans obtained for computer-assisted orthognathic surgery. METHODS Four hundred and fifty-three consecutive patients having undergone high-resolution CT scans before orthognathic surgery were randomly distributed among a training/validation cohort (n = 300) and a testing cohort (n = 153). The ground truth segmentations were generated by 2 operators following an industry-certified procedure for use in computer-assisted surgical planning and personalized implant manufacturing. Model performance was assessed by comparing model predictions with ground truth segmentations. Examination of 45 CT scans by an industry expert provided additional evaluation. The model's generalizability was tested on a publicly available dataset of 10 CT scans with ground truth segmentation of the mandible. RESULTS In the test cohort, mean volumetric Dice similarity coefficient (vDSC) and surface Dice similarity coefficient at 1 mm (sDSC) were 0.96 and 0.97 for the upper skull, 0.94 and 0.98 for the mandible, 0.95 and 0.99 for the upper teeth, 0.94 and 0.99 for the lower teeth, and 0.82 and 0.98 for the mandibular canal. Industry expert segmentation approval rates were 93% for the mandible, 89% for the mandibular canal, 82% for the upper skull, 69% for the upper teeth, and 58% for the lower teeth. CONCLUSION While additional efforts are required for the segmentation of dental apices, our results demonstrated the model's reliability in terms of fully automatic segmentation of preoperative orthognathic CT scans.
KEY POINTS
• The nnU-Net deep learning framework can be trained out-of-the-box to provide robust fully automatic multi-task segmentation of CT scans performed for computer-assisted orthognathic surgery planning.
• The clinical viability of the trained nnU-Net model is shown on a challenging test dataset of 153 CT scans randomly selected from clinical practice, showing metallic artifacts and diverse anatomical deformities.
• Commonly used biomedical segmentation evaluation metrics (volumetric and surface Dice similarity coefficient) do not always match industry expert evaluation in the case of more demanding clinical applications.
Affiliation(s)
- Gauthier Dot
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- Universite de Paris, AP-HP, Hopital Pitie-Salpetriere, Service d'Odontologie, Paris, France
- Thomas Schouman
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- Guillaume Dubois
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- Materialise, Malakoff, France
- Philippe Rouch
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- EPF-Graduate School of Engineering, Sceaux, France
- Laurent Gajny
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
34
Carrillo-Perez F, Pecho OE, Morales JC, Paravina RD, Della Bona A, Ghinea R, Pulgar R, Pérez MDM, Herrera LJ. Applications of artificial intelligence in dentistry: A comprehensive review. J Esthet Restor Dent 2021; 34:259-280. [PMID: 34842324 DOI: 10.1111/jerd.12844] [Citation(s) in RCA: 76] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 09/30/2021] [Accepted: 11/09/2021] [Indexed: 12/25/2022]
Abstract
OBJECTIVE To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with broad insight into the advances these technologies and tools have produced, with special attention to esthetic dentistry and color research. MATERIALS AND METHODS The comprehensive review was conducted in the MEDLINE/PubMed, Web of Science, and Scopus databases for papers published in English in the last 20 years. RESULTS Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS The reviewed works report outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry lies in integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from these advances by developing models allowing a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with wide application in treatment planning and esthetic dentistry procedures.
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C., University of Granada, Granada, Spain
- Oscar E Pecho
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Juan Carlos Morales
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C., University of Granada, Granada, Spain
- Rade D Paravina
- Department of Restorative Dentistry and Prosthodontics, School of Dentistry, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Alvaro Della Bona
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Razvan Ghinea
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Rosa Pulgar
- Department of Stomatology, Campus Cartuja, University of Granada, Granada, Spain
- María Del Mar Pérez
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C., University of Granada, Granada, Spain
35
Shaheen E, Leite A, Alqahtani KA, Smolders A, Van Gerven A, Willems H, Jacobs R. A novel deep learning system for multi-class tooth segmentation and classification on cone beam computed tomography. A validation study. J Dent 2021; 115:103865. [PMID: 34710545 DOI: 10.1016/j.jdent.2021.103865] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 09/20/2021] [Accepted: 10/24/2021] [Indexed: 11/17/2022] Open
Abstract
OBJECTIVES Automatic tooth segmentation and classification from cone beam computed tomography (CBCT) have become an integral component of digital dental workflows. The aim of this study was therefore to develop and validate a deep learning approach for automatic tooth segmentation and classification from CBCT images. METHODS A dataset of 186 CBCT scans was acquired from two CBCT machines with different acquisition settings. An artificial intelligence (AI) framework was built to segment and classify teeth. Teeth were segmented in a three-step approach, with each step consisting of a 3D U-Net; step 2 included classification. The dataset was divided into a training set (140 scans) to train the model on ground-truth segmented teeth, a validation set (35 scans) to test model performance, and a test set (11 scans) to evaluate the model against ground truth. Different evaluation metrics were used, such as precision, recall rate, and time. RESULTS The AI framework correctly segmented teeth with optimal precision (0.98±0.02) and recall (0.83±0.05). The difference between the AI model and ground truth was 0.56±0.38 mm based on the 95% Hausdorff distance, confirming the high performance of the AI compared to ground truth. Furthermore, segmentation of all teeth within a scan was more than 1800 times faster for the AI than for an expert. Tooth classification also performed optimally, with a recall rate of 98.5% and precision of 97.9%. CONCLUSIONS The proposed 3D U-Net-based AI framework is an accurate and time-efficient deep learning system for automatic tooth segmentation and classification without expert refinement. CLINICAL SIGNIFICANCE The proposed system might enable potential future applications for diagnostics and treatment planning in the field of digital dentistry, while reducing clinical workload.
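The 95% Hausdorff distance used above to compare AI and ground-truth surfaces can be sketched over small point sets; the exact percentile convention (95th percentile of directed nearest-neighbour distances, symmetrized by taking the maximum) is a common definition and an assumption here, as the paper does not spell it out:

```python
import math

def directed_distances(points_a, points_b):
    """For each point in A, the distance to its nearest neighbour in B."""
    return [min(math.dist(a, b) for b in points_b) for a in points_a]

def hausdorff_95(points_a, points_b):
    """Symmetric 95% Hausdorff distance between two point sets: the max
    over both directions of the 95th-percentile nearest distance."""
    def percentile_95(values):
        values = sorted(values)
        idx = min(len(values) - 1, math.ceil(0.95 * len(values)) - 1)
        return values[idx]
    return max(percentile_95(directed_distances(points_a, points_b)),
               percentile_95(directed_distances(points_b, points_a)))
```

Discarding the top 5% of distances makes the metric robust to a few outlier surface voxels, which is why it is preferred over the plain Hausdorff distance for segmentation evaluation.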
Affiliation(s)
- Eman Shaheen
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium.
- André Leite
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; Department of Dentistry, Faculty of Health Sciences, University of Brasília, Brasília, Brazil
- Khalid Ayidh Alqahtani
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Box 4064, 141 04 Huddinge, Sweden
36
Bichu YM, Hansa I, Bichu AY, Premjani P, Flores-Mir C, Vaid NR. Applications of artificial intelligence and machine learning in orthodontics: a scoping review. Prog Orthod 2021; 22:18. [PMID: 34219198 PMCID: PMC8255249 DOI: 10.1186/s40510-021-00361-9] [Citation(s) in RCA: 75] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Accepted: 05/12/2021] [Indexed: 12/15/2022] Open
Abstract
INTRODUCTION This scoping review aims to provide an overview of the existing evidence on the use of artificial intelligence (AI) and machine learning (ML) in orthodontics, their translation into clinical practice, and the limitations that have precluded their envisioned application. METHODS A scoping review of the literature was carried out following the PRISMA-ScR guidelines. PubMed was searched until July 2020. RESULTS Sixty-two articles fulfilled the inclusion criteria. A total of 43 of the 62 studies (69.35%) were published in the last decade. Most of these studies were from the USA (11), followed by South Korea (9) and China (7). More studies were published in non-orthodontic journals (36) than in orthodontic journals (26). Artificial neural networks (ANNs) were the most commonly utilized AI/ML algorithm (13 studies), followed by convolutional neural networks (CNNs) and support vector machines (SVMs) (9 studies each), and regression (8 studies). The most commonly studied domains were diagnosis and treatment planning, either broad-based or specific (33); automated anatomic landmark detection and/or analyses (19); assessment of growth and development (4); and evaluation of treatment outcomes (2). The characteristics and distribution of these studies are presented and discussed therein. CONCLUSION This scoping review suggests an exponential increase in the number of studies involving orthodontic applications of AI and ML. The most commonly studied domains were diagnosis and treatment planning, automated anatomic landmark detection and/or analyses, and assessment of growth and development.
Affiliation(s)
- Carlos Flores-Mir
- Department of Orthodontics, University of Alberta, Edmonton, Alberta, Canada
- Nikhilesh R Vaid
- Department of Orthodontics, European University College, Dubai, United Arab Emirates
37
Tang R, Yin H, Wang Z, Zhang Z, Zhao L, Zhang P, Li J, Zhao P, Lv H, Zhang L, Yang Z, Wang Z. Stapes visualization by ultra-high resolution CT in cadaveric heads: A preliminary study. Eur J Radiol 2021; 141:109786. [PMID: 34058698 DOI: 10.1016/j.ejrad.2021.109786] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 05/13/2021] [Accepted: 05/17/2021] [Indexed: 11/17/2022]
Abstract
PURPOSE This study aimed to assess stapes visualization using ultra-high-resolution computed tomography (U-HRCT). METHOD Sixty ears from 30 cadaveric human heads were scanned by both U-HRCT and 128-section multislice CT (MSCT) with clinical parameters. Image quality of the stapes head, anterior and posterior crura, footplate, incudostapedial joint, and stapedial muscle within the pyramidal eminence was scored on a 3-point Likert scale. Linear measurements of the stapes configuration were performed on U-HRCT. RESULTS Interobserver agreement for the qualitative image scores on U-HRCT was good to excellent (agreement coefficients 0.65-0.86). With the exception of the stapes head, U-HRCT achieved significantly higher qualitative scores than MSCT across all anatomical structures (all p < 0.05). The total height of the stapes was 3.48 ± 0.33 mm. The height and width of the obturator foramen were 1.77 ± 0.28 mm and 2.19 ± 0.33 mm, respectively. The widths of the anterior and posterior crura were 0.20 ± 0.06 mm and 0.22 ± 0.06 mm, respectively. The thickness of the footplate was 0.22 ± 0.06 mm, and the angle of the incudostapedial joint was 95.91 ± 10.69°. CONCLUSIONS U-HRCT is capable of delineating fine structures of the stapes and provides linear data on stapes dimensions, which could help detect stapes disease and inform individualized surgical planning in the clinical setting.
Affiliation(s)
- Ruowei Tang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Hongxia Yin
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Zheng Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Zhengyu Zhang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Lei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Peng Zhang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Jing Li
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Pengfei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Han Lv
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Li Zhang
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Zhenchang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
38
Chung M, Lee J, Song W, Song Y, Yang IH, Lee J, Shin YG. Automatic Registration Between Dental Cone-Beam CT and Scanned Surface via Deep Pose Regression Neural Networks and Clustered Similarities. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3900-3909. [PMID: 32746134 DOI: 10.1109/tmi.2020.3007520] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Computerized registration between maxillofacial cone-beam computed tomography (CT) images and a scanned dental model is an essential prerequisite for surgical planning of dental implants or orthognathic surgery. We propose a novel method that performs fully automatic registration between a cone-beam CT image and an optically scanned model. To build a robust and automatic initial registration, deep pose regression neural networks are applied in a reduced domain (i.e., a two-dimensional image). Fine registration is then performed using optimal clusters: each cluster optimizes local transformation parameters, while a majority voting system selects the globally optimal transformation. The coherency of clusters determines their candidacy for the optimal cluster set, and outlying regions in the iso-surface are effectively removed based on the consensus among the optimal clusters. Registration accuracy is evaluated using the Euclidean distance of 10 landmarks on the scanned model, annotated by experts in the field. The experiments show that the registration accuracy of the proposed method, measured by landmark distance, outperforms the best-performing existing method by 33.09%. In addition to achieving high accuracy, the proposed method requires neither human interaction nor priors (e.g., iso-surface extraction). The primary significance of our study is twofold: 1) the employment of lightweight neural networks, which demonstrates the applicability of neural networks in extracting easily obtainable pose cues, and 2) the introduction of an optimal cluster-based registration method that avoids metal artifacts during the matching procedure.
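The majority-voting idea behind the cluster-based fine registration can be sketched as follows. This is a toy illustration under the simplifying assumption that each cluster contributes only a translation estimate; the paper's actual method votes over full rigid-body transformation parameters, and all names and values here are hypothetical:

```python
import math

def consensus(cluster_estimates, tol=1.0):
    """Toy majority vote: each cluster proposes a translation (tx, ty, tz);
    the winner is the estimate agreed with (within `tol`) by the most
    clusters, so outlier clusters (e.g. regions corrupted by metal
    artifacts) cannot sway the global result."""
    def support(c):
        return sum(1 for d in cluster_estimates if math.dist(c, d) <= tol)
    return max(cluster_estimates, key=support)

estimates = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.1, 0.0), (5.0, 5.0, 5.0)]
print(consensus(estimates))  # → (0.0, 0.0, 0.0): the outlier cluster loses the vote
```

The design choice is the same as in RANSAC-style estimation: agreement among independent local estimates is treated as evidence of correctness, so a single corrupted region cannot dominate the transformation.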