1
Jiang Y, Jiang C, Shi B, Wu Y, Xing S, Liang H, Huang J, Huang X, Huang L, Lin L. Automatic identification of hard and soft tissue landmarks in cone-beam computed tomography via deep learning with diversity datasets: a methodological study. BMC Oral Health 2025; 25:505. PMID: 40200295; PMCID: PMC11980328; DOI: 10.1186/s12903-025-05831-8.
Abstract
BACKGROUND Manual landmark detection in cone beam computed tomography (CBCT) for evaluating craniofacial structures relies on medical expertise and is time-consuming. This study aimed to apply a new deep learning method to predict and locate soft and hard tissue craniofacial landmarks on CBCT in patients with various types of malocclusion. METHODS A total of 498 CBCT images were collected. Following a calibration procedure, two experienced clinicians identified 43 landmarks in the x-, y-, and z-coordinate planes on the CBCT images using Checkpoint Software, and the ground truth was created by averaging their landmark coordinates. To evaluate the accuracy of our algorithm, we determined the mean absolute error along the x-, y-, and z-axes and calculated the mean radial error (MRE) between the reference and predicted landmarks, as well as the successful detection rate (SDR). RESULTS Each landmark prediction took approximately 4.2 s on a conventional graphics processing unit. The mean absolute error across all coordinates was 0.74 mm. The overall MRE for the 43 landmarks was 1.76 ± 1.13 mm, and the SDR was 60.16%, 91.05%, and 97.58% within 2-, 3-, and 4-mm error ranges of manual marking, respectively. The average MRE of the hard tissue landmarks (32/43) was 1.73 mm, while that of the soft tissue landmarks (11/43) was 1.84 mm. CONCLUSIONS Our proposed algorithm demonstrates a clinically acceptable level of accuracy and robustness for automatic detection of CBCT soft- and hard-tissue landmarks across all types of malocclusion. The potential for artificial intelligence to assist in identifying three-dimensional CT landmarks in routine clinical practice and in analysing large datasets for future research is promising.
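The two evaluation metrics used here, MRE and SDR, are straightforward to reproduce from paired landmark sets. A minimal sketch in Python, not the authors' code; the 43-landmark shape and the 2/3/4 mm thresholds follow the abstract, while the coordinates below are synthetic:

```python
import numpy as np

def mre_and_sdr(pred, gt, thresholds=(2.0, 3.0, 4.0)):
    """pred, gt: (n_landmarks, 3) coordinate arrays in mm."""
    radial = np.linalg.norm(pred - gt, axis=1)          # Euclidean error per landmark
    sdr = {t: 100.0 * float((radial <= t).mean()) for t in thresholds}
    return radial.mean(), radial.std(), sdr

# Synthetic example with 43 landmarks, as in the study:
rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 100.0, (43, 3))
pred = gt + rng.normal(0.0, 1.0, (43, 3))
mre, sd, sdr = mre_and_sdr(pred, gt)
print(f"MRE = {mre:.2f} +/- {sd:.2f} mm, SDR = {sdr}")
```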
Affiliation(s)
- Yan Jiang
- Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Canyang Jiang
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- Bin Shi
- Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- You Wu
- School of Stomatology, Fujian Medical University, Fuzhou, 350122, China
- Shuli Xing
- College of Computer Science and Mathematics, Fujian University of Technology, Fujian, 350118, China
- Hao Liang
- College of Computer Science and Mathematics, Fujian University of Technology, Fujian, 350118, China
- Jianping Huang
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- Xiaohong Huang
- Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Li Huang
- Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- Lisong Lin
- Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
2
Ma S, Wang H, Zhao W, Yu Z, Wei B, Zhu S, Zhai Y. An interpretable deep learning model for hallux valgus prediction. Comput Biol Med 2025; 185:109468. PMID: 39662315; DOI: 10.1016/j.compbiomed.2024.109468.
Abstract
BACKGROUND This work developed an interpretable deep learning model to automatically annotate landmarks and calculate the hallux valgus angle (HVA) and the intermetatarsal angle (IMA), reducing the time and error of manual calculations by medical experts and improving the efficiency and accuracy of hallux valgus (HV) diagnosis. METHODS A total of 2,000 foot X-ray images were manually labeled with 12 landmarks by two surgical specialists as training data for the deep learning model. The regions of the foot X-ray images centered on the proximal phalanx of the hallux (PH1), the first metatarsal (MT1), and the second metatarsal (MT2) were segmented using the proposed AG-UNet. The SE-DNN network model was then used to automatically identify the landmarks and calculate the HVA between PH1 and MT1 and the IMA between MT1 and MT2. Finally, the accuracy of the model was assessed by comparing the interpretable deep learning measurements with manual measurements by a foot and ankle surgeon. RESULTS In the test set, the average error distance between the 12 landmarks predicted by the model and the manually annotated landmarks ranged from 1.9 mm to 5.6 mm, and the average error across all landmarks was less than 3.1 mm. In addition, for the measurement of the HVA and IMA, the inter-rater agreement between the proposed model and the experts was high, with ICC results all greater than or equal to 0.9. CONCLUSION This work proposed an interpretable deep learning model for hallux valgus prediction that automatically identifies 12 landmarks and calculates the HVA and IMA. Compared with the subjective judgment of medical experts, the model showed significant advantages in reliability and accuracy. The method has been applied in hospitals and achieved significant detection results.
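Both angles reduce to plain vector angles between bone axes defined by pairs of landmarks. A hedged sketch, assuming each bone axis is taken from two of the 12 landmarks (head and base); the coordinates below are hypothetical, chosen only to give clinically plausible magnitudes:

```python
import numpy as np

def axis_angle_deg(a_head, a_base, b_head, b_base):
    """Angle in degrees between two bone axes, each defined by two landmarks."""
    u = np.asarray(a_head, float) - np.asarray(a_base, float)
    v = np.asarray(b_head, float) - np.asarray(b_base, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical 2D landmark pairs (head, base) for PH1, MT1, and MT2:
hva = axis_angle_deg((114, 42), (100, 80), (100, 80), (95, 160))   # PH1 vs MT1
ima = axis_angle_deg((100, 80), (95, 160), (130, 85), (120, 160))  # MT1 vs MT2
print(f"HVA = {hva:.1f} deg, IMA = {ima:.1f} deg")
```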
Affiliation(s)
- Shuang Ma
- School of Information Science and Engineering, Linyi University, Linyi, 276000, Shandong Province, China; Health and Medical Big Data Center, Linyi People's Hospital, Linyi, 276034, Shandong Province, China
- Haifeng Wang
- School of Information Science and Engineering, Linyi University, Linyi, 276000, Shandong Province, China; Health and Medical Big Data Center, Linyi People's Hospital, Linyi, 276034, Shandong Province, China
- Wei Zhao
- Health and Medical Big Data Center, Linyi People's Hospital, Linyi, 276034, Shandong Province, China; Linyi People's Hospital of Shandong Province, Linyi, 276034, China
- Zhihao Yu
- School of Information Science and Engineering, Linyi University, Linyi, 276000, Shandong Province, China; Health and Medical Big Data Center, Linyi People's Hospital, Linyi, 276034, Shandong Province, China
- Baofu Wei
- Health and Medical Big Data Center, Linyi People's Hospital, Linyi, 276034, Shandong Province, China; Linyi People's Hospital of Shandong Province, Linyi, 276034, China
- Shufeng Zhu
- School of Information Science and Engineering, Linyi University, Linyi, 276000, Shandong Province, China; Health and Medical Big Data Center, Linyi People's Hospital, Linyi, 276034, Shandong Province, China
- Yongqing Zhai
- Health and Medical Big Data Center, Linyi People's Hospital, Linyi, 276034, Shandong Province, China; Linyi People's Hospital of Shandong Province, Linyi, 276034, China
3
Görg C, Elkhill C, Chaij J, Royalty K, Nguyen PD, French B, Cruz-Guerrero IA, Porras AR. SHAPE: A visual computing pipeline for interactive landmarking of 3D photograms and patient reporting for assessing craniosynostosis. Comput Graph 2024; 125:104056. PMID: 39726689; PMCID: PMC11671126; DOI: 10.1016/j.cag.2024.104056.
Abstract
3D photogrammetry is a cost-effective, non-invasive imaging modality that does not require the use of ionizing radiation or sedation. It is therefore especially valuable in pediatrics and is used to support the diagnosis and longitudinal study of craniofacial developmental pathologies such as craniosynostosis, the premature fusion of one or more cranial sutures, which results in local cranial growth restrictions and cranial malformations. Analysis of 3D photogrammetry requires the identification of craniofacial landmarks to segment the head surface and compute metrics that quantify anomalies. Unfortunately, commercial 3D photogrammetry software requires intensive manual landmark placement, which is time-consuming and prone to error. We designed and implemented SHAPE, a System for Head-shape Analysis and Pediatric Evaluation. It integrates our previously developed automated landmarking method in a visual computing pipeline to evaluate a patient's 3D photogram while allowing for manual confirmation and correction. It also automatically computes advanced metrics to quantify craniofacial anomalies and automatically creates a report that can be uploaded to the patient's electronic health record. We conducted a user study with a professional clinical photographer to compare SHAPE to the existing clinical workflow. We found that SHAPE allows a craniofacial 3D photogram to be evaluated more than three times faster than the current clinical workflow (3.85 ± 0.99 vs. 13.07 ± 5.29 minutes, p < 0.001). Our qualitative study findings indicate that the SHAPE workflow aligns well with the existing clinical workflow and that SHAPE has useful features and is easy to learn.
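The timing comparison reported above (3.85 ± 0.99 vs. 13.07 ± 5.29 minutes) is the kind of two-sample comparison commonly tested with Welch's unequal-variance t-test; a small sketch with illustrative per-session times, not the study's raw data or its exact statistical procedure:

```python
import numpy as np
from scipy import stats

# Hypothetical per-photogram evaluation times in minutes (illustrative only).
shape_times = np.array([3.2, 4.1, 2.9, 4.8, 3.5, 4.6, 3.7, 4.0])
clinical_times = np.array([9.5, 18.2, 12.1, 8.7, 15.3, 11.0, 16.4, 13.4])

# Welch's t-test does not assume equal variances between the two workflows.
t, p = stats.ttest_ind(shape_times, clinical_times, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```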
Affiliation(s)
- Carsten Görg
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, 13001 East 17th Place, Aurora, CO 80045, USA
- Connor Elkhill
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, 13001 East 17th Place, Aurora, CO 80045, USA
- Jasmine Chaij
- Department of Pediatric Plastic and Reconstructive Surgery, Children’s Hospital Colorado, 13123 E 16th Ave, Aurora, CO 80045, USA
- Kristin Royalty
- Department of Pediatric Plastic and Reconstructive Surgery, Children’s Hospital Colorado, 13123 E 16th Ave, Aurora, CO 80045, USA
- Phuong D. Nguyen
- Department of Pediatric Plastic and Reconstructive Surgery, Children’s Hospital Colorado, 13123 E 16th Ave, Aurora, CO 80045, USA
- Brooke French
- Department of Pediatric Plastic and Reconstructive Surgery, Children’s Hospital Colorado, 13123 E 16th Ave, Aurora, CO 80045, USA
- Ines A. Cruz-Guerrero
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, 13001 East 17th Place, Aurora, CO 80045, USA
- Antonio R. Porras
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, 13001 East 17th Place, Aurora, CO 80045, USA
- Department of Pediatric Plastic and Reconstructive Surgery, Children’s Hospital Colorado, 13123 E 16th Ave, Aurora, CO 80045, USA
- Departments of Pediatrics, Surgery and Biomedical Informatics, School of Medicine, University of Colorado Anschutz Medical Campus, 13001 East 17th Place, Aurora, CO 80045, USA
- Department of Pediatric Neurosurgery, Children’s Hospital Colorado, 13123 E 16th Ave, Aurora, CO 80045, USA
4
Park J, Yoon S, Kim H, Kim Y, Lee U, Yu H. Clinical validity and precision of deep learning-based cone-beam computed tomography automatic landmarking algorithm. Imaging Sci Dent 2024; 54:240-250. PMID: 39371307; PMCID: PMC11450405; DOI: 10.5624/isd.20240009.
Abstract
Purpose This study was performed to assess the clinical validity and accuracy of a deep learning-based automatic landmarking algorithm for cone-beam computed tomography (CBCT). Three-dimensional (3D) CBCT head measurements obtained through manual and automatic landmarking were compared. Materials and Methods A total of 80 CBCT scans were divided into 3 groups: non-surgical (39 cases); surgical without hardware, i.e., without surgical plates and mini-screws (9 cases); and surgical with hardware (32 cases). Each CBCT scan was analyzed to obtain 53 measurements, comprising 27 lengths, 21 angles, and 5 ratios, determined from 65 landmarks identified using either a manual or a 3D automatic landmark detection method. Results In comparing measurement values derived from manual and artificial intelligence landmarking, 6 items displayed significant differences: R U6CP-L U6CP, R L3CP-L L3CP, S-N, Or_R-R U3CP, L1L to Me-GoL, and GoR-Gn/S-N (P<0.05). Of the 3 groups, the surgical scans without hardware exhibited the lowest error, reflecting the smallest difference in measurements between human- and artificial intelligence-based landmarking. The time required to identify 65 landmarks was approximately 40-60 minutes per CBCT volume when done manually, compared to 10.9 seconds for the artificial intelligence method (PC specifications: GeForce 2080Ti, 64 GB RAM, and an Intel i7 CPU at 3.6 GHz). Conclusion Measurements obtained with a deep learning-based CBCT automatic landmarking algorithm were similar in accuracy to values derived from manually determined points. By decreasing the time required to calculate these measurements, the efficiency of diagnosis and treatment may be improved.
Affiliation(s)
- Jungeun Park
- Department of Orthodontics, College of Dentistry, Yonsei University, Seoul, Korea
- Seongwon Yoon
- College of Dentistry, Seoul National University, Seoul, Korea
- Imagoworks Incorporated, Seoul, Korea
- Hannah Kim
- Imagoworks Incorporated, Seoul, Korea
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Youngjun Kim
- Imagoworks Incorporated, Seoul, Korea
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Uilyong Lee
- Department of Oral and Maxillofacial Surgery, College of Dentistry, Chungang University Hospital, Seoul, Korea
- Hyungseog Yu
- Department of Orthodontics, The Institute of Craniofacial Deformity, College of Dentistry, Yonsei University, Seoul, Korea
5
Chen H, Qu Z, Tian Y, Jiang N, Qin Y, Gao J, Zhang R, Ma Y, Jin Z, Zhai G. A cross-temporal multimodal fusion system based on deep learning for orthodontic monitoring. Comput Biol Med 2024; 180:109025. PMID: 39159544; DOI: 10.1016/j.compbiomed.2024.109025.
Abstract
INTRODUCTION In the treatment of malocclusion, continuous monitoring of the three-dimensional relationship between dental roots and the surrounding alveolar bone is essential for preventing complications of orthodontic procedures. Cone-beam computed tomography (CBCT) provides detailed root and bone data, but its high radiation dose limits its frequent use, necessitating an alternative for ongoing monitoring. OBJECTIVES We aimed to develop a deep learning-based cross-temporal multimodal image fusion system for acquiring root and jawbone information without additional radiation, enhancing the ability of orthodontists to monitor risk. METHODS Utilizing CBCT and intraoral scans (IOSs) as cross-temporal modalities, we integrated deep learning with multimodal fusion technologies to develop a system that includes a CBCT segmentation model for teeth and jawbones. This model incorporates a dynamic kernel prior model, resolution restoration, and an IOS segmentation network optimized for dense point clouds. Additionally, a coarse-to-fine registration module was developed. This system facilitates the integration of IOS and CBCT images across varying spatial and temporal dimensions, enabling the comprehensive reconstruction of root and jawbone information throughout the orthodontic treatment process. RESULTS The experimental results demonstrate that our system not only maintains the original high resolution but also delivers outstanding segmentation performance on external testing datasets. The CBCT model achieved Dice coefficients of 94.1% and 94.4% for teeth and jawbones, respectively, and the IOS model achieved a Dice coefficient of 91.7%. Additionally, in real-world registration, the system achieved an average distance error (ADE) of 0.43 mm for teeth and 0.52 mm for jawbones, significantly reducing the processing time. CONCLUSION We developed the first deep learning-based cross-temporal multimodal fusion system, addressing the critical challenge of continuous risk monitoring in orthodontic treatment without additional radiation exposure. We hope that this study will catalyze transformative advancements in risk management strategies and treatment modalities, fundamentally reshaping the landscape of future orthodontic practice.
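The two headline metrics, the Dice coefficient for segmentation and the average distance error (ADE) for registration, can be computed as follows; a minimal sketch, not the authors' implementation:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks of any matching shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def average_distance_error(pred_pts, gt_pts):
    """Mean Euclidean distance (mm) between corresponding points after registration."""
    return float(np.linalg.norm(pred_pts - gt_pts, axis=1).mean())

# Toy check: two overlapping cubes in a 3D volume.
a = np.zeros((32, 32, 32), bool); a[8:24, 8:24, 8:24] = True
b = np.zeros((32, 32, 32), bool); b[10:26, 8:24, 8:24] = True
print(f"Dice = {dice(a, b):.3f}")  # 0.875 for this synthetic pair
```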
Affiliation(s)
- Haiwen Chen
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Zhiyuan Qu
- Institute of Image Communication and Network Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200011, China
- Yuan Tian
- Institute of Image Communication and Network Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200011, China
- Ning Jiang
- Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, 200030, China
- Yuan Qin
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Jie Gao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Ruoyan Zhang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Yanning Ma
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Zuolin Jin
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Guangtao Zhai
- Institute of Image Communication and Network Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200011, China
6
Yang Y, Zhang M, An Y, Huang Q, Shi Y, Jin L, Zeng A, Long X, Yu N, Wang X. Automated 3D Perioral Landmark Detection Using High-Resolution Network: Artificial Intelligence-based Anthropometric Analysis. Aesthet Surg J 2024; 44:NP606-NP612. PMID: 38662744; DOI: 10.1093/asj/sjae103.
Abstract
BACKGROUND Three-dimensional facial stereophotogrammetry, a convenient, noninvasive, and highly reliable evaluation tool, has in recent years shown great potential in plastic surgery for preoperative planning and for evaluating treatment efficacy. However, it requires manual identification of facial landmarks by trained evaluators to obtain anthropometric data, which takes much time and effort. Automatic 3D facial landmark localization has the potential to enable fast data acquisition and eliminate evaluator error. OBJECTIVES The aim of this work was to describe a novel deep-learning method based on dimension transformation and key-point detection for automated 3D perioral landmark annotation. METHODS After transforming a 3D facial model into 2D images, a High-Resolution Network is implemented for key-point detection. The 2D coordinates of the key points are then mapped back to the 3D model using mathematical methods to obtain the 3D landmark coordinates. The program was trained on 120 facial models and validated on 50 facial models. RESULTS Our approach achieved a satisfactory mean [standard deviation] landmark detection error of 1.30 [0.68] mm, with a mean processing time of 5.2 [0.21] seconds per model. Subsequent analysis based on these landmarks showed mean errors of 0.87 [1.02] mm for linear measurements and 5.62° [6.61°] for angular measurements. CONCLUSIONS This automated 3D perioral landmarking method could serve as an effective tool that enables fast and accurate anthropometric analysis of lip morphology for plastic surgery and aesthetic procedures.
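One simple way to realize the dimension-transformation step, mapping detected 2D key points back onto the 3D model, is to keep the vertex-to-pixel correspondence from the projection and select the nearest front-most projected vertex. A sketch under that assumption; the paper's exact mathematical mapping may differ, and the camera parameters and mesh below are illustrative:

```python
import numpy as np

def project_vertices(vertices, f=800.0, c=(256.0, 256.0)):
    """Pinhole projection of mesh vertices (n, 3); camera looks down +z."""
    uv = f * vertices[:, :2] / vertices[:, 2:3] + np.asarray(c)
    return uv, vertices[:, 2]

def keypoint_2d_to_3d(kp_uv, vertices, uv, depth):
    """Map a detected 2D key point back to 3D via the nearest projected
    vertex, with a small depth term to prefer the front-most surface."""
    d2 = np.sum((uv - np.asarray(kp_uv)) ** 2, axis=1) + 1e-3 * depth
    return vertices[np.argmin(d2)]

# Toy mesh: random points in front of the camera.
rng = np.random.default_rng(1)
verts = rng.uniform([-50, -50, 400], [50, 50, 500], (2000, 3))
uv, z = project_vertices(verts)
print(keypoint_2d_to_3d((256.0, 256.0), verts, uv, z))  # vertex near the optical axis
```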
7
Yilmaz S, Tasyurek M, Amuk M, Celik M, Canger EM. Developing deep learning methods for classification of teeth in dental panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:118-127. PMID: 37316425; DOI: 10.1016/j.oooo.2023.02.021.
Abstract
OBJECTIVES We aimed to develop an artificial intelligence-based clinical dental decision-support system using deep-learning methods to reduce diagnostic interpretation error and time and to increase the effectiveness of dental treatment and classification. STUDY DESIGN We compared the performance of 2 deep-learning methods, You Only Look Once V4 (YOLO-V4) and Faster Region-based Convolutional Neural Network (Faster R-CNN), for tooth classification in dental panoramic radiography to determine which is more successful in terms of accuracy, time, and detection ability. Using a method based on deep-learning models trained on a semantic segmentation task, we analyzed 1200 panoramic radiographs selected retrospectively. In the classification process, our model identified 36 classes, comprising 32 teeth and 4 impacted teeth. RESULTS The YOLO-V4 method achieved a mean 99.90% precision, 99.18% recall, and 99.54% F1 score. The Faster R-CNN method achieved a mean 93.67% precision, 90.79% recall, and 92.21% F1 score. Experimental evaluations showed that YOLO-V4 outperformed Faster R-CNN in the accuracy of tooth prediction, the speed of tooth classification, and the ability to detect impacted and erupted third molars. CONCLUSIONS The YOLO-V4 method outperforms the Faster R-CNN method in accuracy, speed, and third-molar detection. The proposed deep learning-based methods can assist dentists in clinical decision-making, save time, and reduce the negative effects of stress and fatigue in daily practice.
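The reported F1 scores are simply the harmonic mean of precision and recall, which the following snippet reproduces from the reported values:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the reported F1 scores from the reported precision/recall:
print(f"{f1(0.9990, 0.9918):.4f}")  # ~0.9954 (YOLO-V4)
print(f"{f1(0.9367, 0.9079):.4f}")  # ~0.9221 (Faster R-CNN)
```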
Affiliation(s)
- Serkan Yilmaz
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Murat Tasyurek
- Department of Computer Engineering, Kayseri University, Kayseri, Turkey
- Mehmet Amuk
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Mete Celik
- Department of Computer Engineering, Erciyes University, Kayseri, Turkey
- Emin Murat Canger
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
8
Bayrakdar IS, Elfayome NS, Hussien RA, Gulsen IT, Kuran A, Gunes I, Al-Badr A, Celik O, Orhan K. Artificial intelligence system for automatic maxillary sinus segmentation on cone beam computed tomography images. Dentomaxillofac Radiol 2024; 53:256-266. PMID: 38502963; PMCID: PMC11056744; DOI: 10.1093/dmfr/twae012.
Abstract
OBJECTIVES The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model. METHODS In 101 CBCT scans, the MS were annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into 3 parts: 80 CBCT scans for training the model, 11 CBCT scans for model validation, and 10 CBCT scans for testing the model. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The performance of the model in automatically segmenting the MS on CBCT scans was assessed by several parameters, including the F1-score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU). RESULTS The F1-score, accuracy, sensitivity, and precision were 0.96, 0.99, 0.96, and 0.96, respectively, for successful segmentation of the maxillary sinus in CBCT images. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. CONCLUSIONS Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.
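The 95% Hausdorff distance used above can be computed from the surface points of the predicted and reference masks; a brute-force sketch (for large masks a KD-tree nearest-neighbor query would replace the full pairwise distance matrix):

```python
import numpy as np

def hd95(a_pts, b_pts):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (n, 3) and (m, 3), e.g. surface voxels of predicted and reference masks."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),   # A -> nearest in B
               np.percentile(d.min(axis=0), 95))   # B -> nearest in A

rng = np.random.default_rng(0)
a = rng.uniform(0, 50, (500, 3))
b = a + rng.normal(0, 0.5, (500, 3))
print(f"HD95 = {hd95(a, b):.2f} mm")
```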
Affiliation(s)
- Ibrahim Sevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, 26040, Turkey
- Nermin Sameh Elfayome
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Cairo University, Cairo, 12613, Egypt
- Reham Ashraf Hussien
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Cairo University, Cairo, 12613, Egypt
- Ibrahim Tevfik Gulsen
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Alanya Alaaddin Keykubat University, Antalya, 07425, Turkey
- Alican Kuran
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli University, Kocaeli, 41190, Turkey
- Ihsan Gunes
- Open and Distance Education Application and Research Center, Eskisehir Technical University, Eskisehir, 26555, Turkey
- Alwaleed Al-Badr
- Restorative Dentistry, Riyadh Elm University, Riyadh, 13244, Saudi Arabia
- Ozer Celik
- Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, 26040, Turkey
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, 06560, Turkey
9
Esmaeilyfard R, Bonyadifard H, Paknahad M. Dental Caries Detection and Classification in CBCT Images Using Deep Learning. Int Dent J 2024; 74:328-334. PMID: 37940474; PMCID: PMC10988262; DOI: 10.1016/j.identj.2023.10.003.
Abstract
OBJECTIVES This study aimed to investigate the accuracy of deep learning algorithms in diagnosing tooth caries and classifying the extension and location of dental caries in cone beam computed tomography (CBCT) images. To the best of our knowledge, this is the first study to evaluate the application of deep learning to dental caries in CBCT images. METHODS The CBCT image dataset comprised 382 molar teeth with caries and 403 noncarious molar cases. The dataset was divided into a development set, used for training and validation, and a test set. Three images were obtained for each case: axial, sagittal, and coronal. The test dataset was provided to a multiple-input convolutional neural network (CNN). The network made predictions regarding the presence or absence of dental decay and classified the lesions according to their depths and types. Accuracy, sensitivity, specificity, and F1 score were measured for dental caries detection and classification. RESULTS The diagnostic accuracy, sensitivity, specificity, and F1 score for caries detection in carious molar teeth were 95.3%, 92.1%, 96.3%, and 93.2%, respectively, and for noncarious molar teeth were 94.8%, 94.3%, 95.8%, and 94.6%. The CNN showed high sensitivity, specificity, and accuracy in classifying caries extensions and locations. CONCLUSIONS This research demonstrates that deep learning models can identify dental caries and classify their depths and types with high accuracy, sensitivity, and specificity. The successful application of deep learning in this field will assist dental practitioners and patients in improving diagnosis and treatment planning in dentistry. CLINICAL SIGNIFICANCE This study showed that deep learning can accurately detect and classify dental caries. Considering the shortage of dentists in certain areas, the use of CNNs can lead to broader geographic coverage in detecting dental caries.
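A multiple-input CNN of the kind described, one encoder branch per view feeding shared heads for caries presence, depth, and type, might be sketched as follows in PyTorch; the layer sizes and head definitions are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TriViewCNN(nn.Module):
    """Toy three-branch CNN: one encoder per view (axial, sagittal, coronal),
    with fused heads for caries presence, lesion depth, and lesion type."""
    def __init__(self, n_depth=3, n_type=3):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.axial, self.sagittal, self.coronal = branch(), branch(), branch()
        self.presence = nn.Linear(96, 2)   # 3 branches x 32 features
        self.depth = nn.Linear(96, n_depth)
        self.kind = nn.Linear(96, n_type)

    def forward(self, ax, sa, co):
        z = torch.cat([self.axial(ax), self.sagittal(sa), self.coronal(co)], dim=1)
        return self.presence(z), self.depth(z), self.kind(z)

model = TriViewCNN()
x = torch.randn(4, 1, 128, 128)
presence, depth, kind = model(x, x, x)  # logits for each prediction head
```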
Affiliation(s)
- Rasool Esmaeilyfard
- Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran
- Haniyeh Bonyadifard
- Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran
- Maryam Paknahad
- Oral and Dental Disease Research Center, Oral and Maxillofacial Radiology, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, Iran
10
Holte MB, Pinholt EM. Validation of a fully automatic three-dimensional assessment of orthognathic surgery. J Craniomaxillofac Surg 2024; 52:438-446. PMID: 38369395; DOI: 10.1016/j.jcms.2024.01.009.
Abstract
The aim of the present study was to propose and validate FAST3D: a fully automatic three-dimensional (3D) assessment of the surgical accuracy and long-term skeletal stability of orthognathic surgery. To validate FAST3D, the agreement between FAST3D and a validated state-of-the-art semi-automatic method was calculated by intra-class correlation coefficients (ICC) at a 95% confidence interval. A one-sided hypothesis test was performed to evaluate whether the absolute discrepancy between the measurements produced by the two methods was statistically significantly below a clinically relevant error margin of 0.5 mm. Ten subjects (six male, four female; mean age 24.4 years), class II and III, who underwent a combined three-piece Le Fort I osteotomy, bilateral sagittal split osteotomy, and genioplasty, were included in the validation study. The agreement between the two methods was excellent for all measurements (ICC range 0.85-1.00), except for the rotational stability of the chin, for which it was fair (ICC = 0.54). The absolute discrepancy for all measurements was statistically significantly lower than the clinically relevant error margin (p < 0.008). Within the limitations of the present validation study, FAST3D proved reliable and may be adopted whenever appropriate to reduce the workload of the medical staff.
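For absolute agreement between two measurement methods, the ICC variant commonly used is ICC(2,1) (two-way random effects, single measurement). A sketch of the standard Shrout-Fleiss computation; this is the generic formula, not tied to this paper's statistics software, and the example data are synthetic:

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    X: (n_subjects, k_raters) matrix of measurements."""
    n, k = X.shape
    grand = X.mean()
    ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # methods/raters
    sse = ((X - X.mean(axis=1, keepdims=True)
              - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Synthetic example: two methods measuring the same 20 quantities.
rng = np.random.default_rng(0)
truth = rng.uniform(10, 40, 20)
X = np.stack([truth + rng.normal(0, 1, 20),
              truth + rng.normal(0, 1, 20)], axis=1)
print(f"ICC(2,1) = {icc_2_1(X):.3f}")
```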
Affiliation(s)
- Michael Boelstoft Holte
- 3D Lab Denmark, Department of Oral and Maxillofacial Surgery, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Finsensgade 35, 6700 Esbjerg, Denmark
- Else Marie Pinholt
- 3D Lab Denmark, Department of Oral and Maxillofacial Surgery, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Finsensgade 35, 6700 Esbjerg, Denmark
11
Pérez-Cano FD, Parra-Cabrera G, Vilchis-Torres I, Reyes-Lagos JJ, Jiménez-Delgado JJ. Exploring Fracture Patterns: Assessing Representation Methods for Bone Fracture Simulation. J Pers Med 2024; 14:376. PMID: 38673003; PMCID: PMC11051195; DOI: 10.3390/jpm14040376.
Abstract
Fracture pattern acquisition and representation in human bones play a crucial role in medical simulation, diagnostics, and treatment planning. This article presents a comprehensive review of methodologies employed in acquiring and representing bone fracture patterns. Several techniques, including segmentation algorithms, curvature analysis, and deep learning-based approaches, are reviewed to determine their effectiveness in accurately identifying fracture zones. Additionally, diverse methods for representing fracture patterns are evaluated. The challenges inherent in detecting accurate fracture zones from medical images, the complexities arising from multifragmentary fractures, and the need to automate fracture reduction processes are elucidated. A detailed analysis of the suitability of each representation method for specific medical applications, such as simulation systems, surgical interventions, and educational purposes, is provided. The study explores insights from a broad spectrum of research articles, encompassing diverse methodologies and perspectives. This review elucidates potential directions for future research and contributes to advancements in comprehending the acquisition and representation of fracture patterns in human bone.
Affiliation(s)
- Gema Parra-Cabrera
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain (G.P.-C.); (J.J.J.-D.)
- Ivett Vilchis-Torres
- Centro de Investigación Multidisciplinaria en Educación, Universidad Autónoma del Estado de México, Toluca 50110, Mexico
12
Liu J, Zhang C, Shan Z. Application of Artificial Intelligence in Orthodontics: Current State and Future Perspectives. Healthcare (Basel) 2023; 11:2760. PMID: 37893833; PMCID: PMC10606213; DOI: 10.3390/healthcare11202760.
Abstract
In recent years, artificial intelligence (AI) has notably emerged as a transformative force in multiple domains, including orthodontics. This review aims to provide a comprehensive overview of the present state of AI applications in orthodontics, which can be categorized into the following domains: (1) diagnosis, including cephalometric analysis, dental analysis, facial analysis, skeletal-maturation-stage determination, and upper-airway obstruction assessment; (2) treatment planning, including decision making for extractions and orthognathic surgery, and treatment outcome prediction; and (3) clinical practice, including practice guidance, remote care, and clinical documentation. We have witnessed a broadening of the application of AI in orthodontics, accompanied by advancements in its performance. Additionally, this review outlines the existing limitations within the field and offers future perspectives.
Affiliation(s)
- Junqi Liu
- Division of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Chengfei Zhang
- Division of Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Zhiyi Shan
- Division of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
13
Liu J, Xing F, Shaikh A, French B, Linguraru MG, Porras AR. Joint Cranial Bone Labeling and Landmark Detection in Pediatric CT Images Using Context Encoding. IEEE Trans Med Imaging 2023; 42:3117-3126. PMID: 37216247; PMCID: PMC10760565; DOI: 10.1109/tmi.2023.3278493.
Abstract
Image segmentation, labeling, and landmark detection are essential tasks for pediatric craniofacial evaluation. Although deep neural networks have been recently adopted to segment cranial bones and locate cranial landmarks from computed tomography (CT) or magnetic resonance (MR) images, they may be hard to train and provide suboptimal results in some applications. First, they seldom leverage global contextual information that can improve object detection performance. Second, most methods rely on multi-stage algorithm designs that are inefficient and prone to error accumulation. Third, existing methods often target simple segmentation tasks and have shown low reliability in more challenging scenarios such as multiple cranial bone labeling in highly variable pediatric datasets. In this paper, we present a novel end-to-end neural network architecture based on DenseNet that incorporates context regularization to jointly label cranial bone plates and detect cranial base landmarks from CT images. Specifically, we designed a context-encoding module that encodes global context information as landmark displacement vector maps and uses it to guide feature learning for both bone labeling and landmark identification. We evaluated our model on a highly diverse pediatric CT image dataset of 274 normative subjects and 239 patients with craniosynostosis (age 0.63 ± 0.54 years, range 0-2 years). Our experiments demonstrate improved performance compared to state-of-the-art approaches.
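The context-encoding idea of representing landmark locations as dense displacement vector maps can be illustrated directly; a minimal sketch for a single landmark in a small volume (the volume size and landmark position are illustrative):

```python
import numpy as np

def displacement_vector_map(shape, landmark):
    """Dense map where each voxel stores the offset vector to a landmark.
    shape: (D, H, W); landmark: (z, y, x) in voxels. Returns (3, D, H, W)."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
    return np.asarray(landmark, float)[:, None, None, None] - grid

dvm = displacement_vector_map((64, 64, 64), (30, 20, 40))
print(dvm.shape)           # (3, 64, 64, 64)
print(dvm[:, 30, 20, 40])  # zero vector at the landmark itself
```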
14
Elkhill C, Liu J, Linguraru MG, LeBeau S, Khechoyan D, French B, Porras AR. Geometric learning and statistical modeling for surgical outcomes evaluation in craniosynostosis using 3D photogrammetry. Comput Methods Programs Biomed 2023; 240:107689. PMID: 37393741; PMCID: PMC10527531; DOI: 10.1016/j.cmpb.2023.107689.
Abstract
BACKGROUND AND OBJECTIVE Accurate and repeatable detection of craniofacial landmarks is crucial for automated quantitative evaluation of head development anomalies. Since traditional imaging modalities are discouraged in pediatric patients, 3D photogrammetry has emerged as a popular and safe imaging alternative to evaluate craniofacial anomalies. However, traditional image analysis methods are not designed to operate on unstructured image data representations such as 3D photogrammetry. METHODS We present a fully automated pipeline to identify craniofacial landmarks in real time, and we use it to assess the head shape of patients with craniosynostosis using 3D photogrammetry. To detect craniofacial landmarks, we propose a novel geometric convolutional neural network based on Chebyshev polynomials to exploit the point connectivity information in 3D photogrammetry and quantify multi-resolution spatial features. We propose a landmark-specific trainable scheme that aggregates the multi-resolution geometric and texture features quantified at every vertex of a 3D photogram. Then, we embed a new probabilistic distance regressor module that leverages the integrated features at every point to predict landmark locations without assuming correspondences with specific vertices in the original 3D photogram. Finally, we use the detected landmarks to segment the calvaria from the 3D photograms of children with craniosynostosis, and we derive a new statistical index of head shape anomaly to quantify head shape improvements after surgical treatment. RESULTS We achieved an average error of 2.74 ± 2.70 mm identifying Bookstein Type I craniofacial landmarks, which is a significant improvement compared to other state-of-the-art methods. Our experiments also demonstrated a high robustness to spatial resolution variability in the 3D photograms. Finally, our head shape anomaly index quantified a significant reduction of head shape anomalies as a consequence of surgical treatment. CONCLUSION Our fully automated framework provides real-time craniofacial landmark detection from 3D photogrammetry with state-of-the-art accuracy. In addition, our new head shape anomaly index can quantify significant head phenotype changes and can be used to quantitatively evaluate surgical treatment in patients with craniosynostosis.
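Chebyshev-polynomial graph convolutions of the kind the paper builds on are available off the shelf; a sketch using torch_geometric's ChebConv (assuming torch_geometric is installed; the two-layer layout and feature sizes are illustrative, not the authors' network):

```python
import torch
from torch_geometric.nn import ChebConv  # requires torch_geometric

class ChebEncoder(torch.nn.Module):
    """Two Chebyshev graph-convolution layers over mesh vertices; K controls
    the polynomial order (the receptive field on the mesh graph)."""
    def __init__(self, in_ch=6, hidden=32, out_ch=64, K=3):
        super().__init__()
        self.conv1 = ChebConv(in_ch, hidden, K)
        self.conv2 = ChebConv(hidden, out_ch, K)

    def forward(self, x, edge_index):
        x = torch.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

# x: per-vertex features (e.g., xyz position + RGB texture);
# edge_index: mesh connectivity as a (2, n_edges) index tensor.
x = torch.randn(1000, 6)
edge_index = torch.randint(0, 1000, (2, 6000))
feats = ChebEncoder()(x, edge_index)   # (1000, 64) per-vertex features
```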
Affiliation(s)
- Connor Elkhill
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA; Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- Jiawei Liu
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, 7144 13th Pl NW, Washington, DC 20012, USA; Departments of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Ross Hall, 2300 Eye Street, NW, Washington, DC 20037, USA
- Scott LeBeau
- Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- David Khechoyan
- Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Surgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- Brooke French
- Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Surgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- Antonio R Porras
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA; Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Surgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Biomedical Informatics, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Pediatrics and Department of Neurosurgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
15
Bonny T, Al Nassan W, Obaideen K, Al Mallahi MN, Mohammad Y, El-damanhoury HM. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023; 12:1179. PMID: 37942018; PMCID: PMC10630586; DOI: 10.12688/f1000research.140204.1.
Abstract
Artificial intelligence (AI) technologies play a significant role in, and significantly impact, various sectors, including healthcare, engineering, the sciences, and smart cities. AI has the potential to improve the quality of patient care and treatment outcomes while minimizing the risk of human error. AI is transforming the dental industry just as it is revolutionizing other sectors, and it is used in dentistry to diagnose dental diseases and provide treatment recommendations. Dental professionals increasingly rely on AI technology to assist in diagnosis, clinical decision-making, treatment planning, and prognosis prediction across ten dental specialties. One of the most significant advantages of AI in dentistry is its ability to analyze vast amounts of data quickly and accurately, providing dental professionals with valuable insights to enhance their decision-making processes. The purpose of this paper is to identify the artificial intelligence algorithms frequently used in dentistry and to assess how well they perform in terms of diagnosis, clinical decision-making, treatment, and prognosis prediction in ten dental specialties: dental public health; endodontics; oral and maxillofacial surgery; oral medicine and pathology; oral and maxillofacial radiology; orthodontics and dentofacial orthopedics; pediatric dentistry; periodontics; prosthodontics; and digital dentistry in general. We also show the pros and cons of using AI in each dental specialty. Finally, we present the limitations of using AI in dentistry, which currently prevent it from replacing dental personnel; dentists should consider AI a complementary benefit and not a threat.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Khaled Obaideen
- Sustainable Energy and Power Systems Research Centre, RISE, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Maryam Nooman Al Mallahi
- Department of Mechanical and Aerospace Engineering, United Arab Emirates University, Al Ain City, Abu Dhabi, 27272, United Arab Emirates
- Yara Mohammad
- College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
- Hatem M. El-damanhoury
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates
16
Weingart JV, Schlager S, Metzger MC, Brandenburg LS, Hein A, Schmelzeisen R, Bamberg F, Kim S, Kellner E, Reisert M, Russe MF. Automated detection of cephalometric landmarks using deep neural patchworks. Dentomaxillofac Radiol 2023; 52:20230059. PMID: 37427585; PMCID: PMC10461263; DOI: 10.1259/dmfr.20230059.
Abstract
OBJECTIVES This study evaluated the accuracy of deep neural patchworks (DNPs), a deep learning-based segmentation framework, for automated identification of 60 cephalometric landmarks (bone, soft tissue, and tooth landmarks) on CT scans. The aim was to determine whether DNPs could be used for routine three-dimensional cephalometric analysis in diagnostics and treatment planning in orthognathic surgery and orthodontics. METHODS Full-skull CT scans of 30 adult patients (18 female, 12 male; mean age 35.6 years) were randomly divided into a training and a test data set (each n = 15). Clinician A annotated 60 landmarks in all 30 CT scans. Clinician B annotated 60 landmarks in the test data set only. The DNP was trained using spherical segmentations of the tissue adjacent to each landmark. Automated landmark predictions in the separate test data set were created by calculating the center of mass of the predictions. The accuracy of the method was evaluated by comparing these predictions to the manual annotations. RESULTS The DNP was successfully trained to identify all 60 landmarks. The mean error of the method was 1.94 mm (SD 1.45 mm), compared with a mean error of 1.32 mm (SD 1.08 mm) for manual annotations. The minimum errors were found for the landmarks ANS (1.11 mm), SN (1.2 mm), and CP_R (1.25 mm). CONCLUSION The DNP algorithm was able to accurately identify cephalometric landmarks with mean errors below 2 mm. This method could improve the workflow of cephalometric analysis in orthodontics and orthognathic surgery. Low training requirements combined with high precision make this method particularly promising for clinical use.
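The training-target and inference scheme described, spherical segmentation labels around each landmark with a center-of-mass readout, can be sketched in a few lines; the volume size and sphere radius below are illustrative, not the study's settings:

```python
import numpy as np
from scipy.ndimage import center_of_mass

def spherical_label(shape, center, radius):
    """Binary sphere around a landmark, used as a segmentation target."""
    zz, yy, xx = np.ogrid[:shape[0], :shape[1], :shape[2]]
    dist2 = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
             + (xx - center[2]) ** 2)
    return (dist2 <= radius ** 2).astype(np.uint8)

label = spherical_label((64, 64, 64), (30, 20, 40), radius=4)
# At inference, the landmark estimate is the center of mass of the
# (possibly soft) predicted segmentation:
print(center_of_mass(label))  # approximately (30, 20, 40)
```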
Affiliation(s)
- Julia Vera Weingart
- Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Stefan Schlager
- Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger
- Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Leonard Simon Brandenburg
- Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Anna Hein
- Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Rainer Schmelzeisen
- Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Suam Kim
- Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner
- Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Marco Reisert
- Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
17
Fan W, Zhang J, Wang N, Li J, Hu L. The Application of Deep Learning on CBCT in Dentistry. Diagnostics (Basel) 2023; 13:2056. PMID: 37370951; DOI: 10.3390/diagnostics13122056.
Abstract
Cone beam computed tomography (CBCT) has become an essential tool in modern dentistry, allowing dentists to analyze the relationship between teeth and the surrounding tissues. However, traditional manual analysis can be time-consuming, and its accuracy depends on the user's proficiency. To address these limitations, deep learning (DL) systems have been integrated into CBCT analysis to improve accuracy and efficiency. Numerous DL models have been developed for tasks such as automatic diagnosis; segmentation and classification of teeth, the inferior alveolar nerve, bone, and the airway; and preoperative planning. All research articles summarized were retrieved from PubMed, IEEE, Google Scholar, and Web of Science up to December 2022. Many studies have demonstrated that the application of deep learning to CBCT examination in dentistry has achieved significant progress, with accuracy in radiology image analysis reaching the level of clinicians, although in some fields its accuracy still needs to be improved. Furthermore, ethical issues and differences between CBCT devices may limit its extensive use. DL models have the potential to be used clinically as medical decision-making aids, and the combination of DL and CBCT can greatly reduce the workload of image reading. This review provides an up-to-date overview of current applications of DL to CBCT images in dentistry, highlighting its potential and suggesting directions for future research.
Affiliation(s)
- Wenjie Fan
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jiaqi Zhang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Nan Wang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jia Li
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Li Hu
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
18
Nishimoto S, Saito T, Ishise H, Fujiwara T, Kawai K, Kakibuchi M. Three-Dimensional Craniofacial Landmark Detection in Series of CT Slices Using Multi-Phased Regression Networks. Diagnostics (Basel) 2023; 13:diagnostics13111930. [PMID: 37296782 DOI: 10.3390/diagnostics13111930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 05/26/2023] [Accepted: 05/28/2023] [Indexed: 06/12/2023] Open
Abstract
Geometrical assessments of human skulls have long been conducted on the basis of anatomical landmarks, and automatic detection of these landmarks would yield both medical and anthropological benefits. In this study, an automated system with multi-phased deep learning networks was developed to predict the three-dimensional coordinate values of craniofacial landmarks. Computed tomography images of the craniofacial area were obtained from a publicly available database and digitally reconstructed into three-dimensional objects. Sixteen anatomical landmarks were plotted on each object, and their coordinate values were recorded. Three-phased regression deep learning networks were trained using 90 training datasets, and 30 testing datasets were employed for evaluation. On the 30 test cases, the average 3D error was 11.60 px for the first phase (1 px = 500/512 mm), improved significantly to 4.66 px for the second phase, and was further significantly reduced to 2.88 px for the third phase. This was comparable to the gaps between the landmarks as plotted by two experienced practitioners. Our proposed method of multi-phased prediction, which conducts coarse detection first and then narrows down the detection area, may be a practical solution to such prediction problems, given the physical limitations of memory and computation.
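The study reports errors in pixels, with 1 px = 500/512 mm. For illustration, a minimal sketch (NumPy; the function and variable names are ours, not the paper's) of how pixel-space radial errors convert to millimetres:

```python
import numpy as np

PX_TO_MM = 500 / 512  # conversion stated in the study: 1 px = 500/512 mm

def radial_error_mm(pred_px: np.ndarray, truth_px: np.ndarray) -> np.ndarray:
    """3D Euclidean landmark error in millimetres.

    pred_px, truth_px: (n_landmarks, 3) coordinate arrays in pixel units.
    """
    return np.linalg.norm(pred_px - truth_px, axis=1) * PX_TO_MM

# The reported phase-wise mean errors translate to roughly:
for px in (11.60, 4.66, 2.88):
    print(f"{px:5.2f} px -> {px * PX_TO_MM:5.2f} mm")
```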
Affiliation(s)
- Soh Nishimoto
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Takuya Saito
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Hisako Ishise
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Toshihiro Fujiwara
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Kenichiro Kawai
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Masao Kakibuchi
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
19
Abesi F, Maleki M, Zamani M. Diagnostic performance of artificial intelligence using cone-beam computed tomography imaging of the oral and maxillofacial region: A scoping review and meta-analysis. Imaging Sci Dent 2023; 53:101-108. [PMID: 37405196 PMCID: PMC10315225 DOI: 10.5624/isd.20220224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 02/13/2023] [Accepted: 02/22/2023] [Indexed: 04/12/2024] Open
Abstract
Purpose The aim of this study was to conduct a scoping review and meta-analysis providing overall estimates of the recall and precision of artificial intelligence for detection and segmentation using oral and maxillofacial cone-beam computed tomography (CBCT) scans. Materials and Methods A literature search was conducted in Embase, PubMed, and Scopus through October 31, 2022 to identify studies that reported the recall and precision of artificial intelligence systems using oral and maxillofacial CBCT images for the automatic detection or segmentation of anatomical landmarks or pathological lesions. Recall (sensitivity) indicates the percentage of existing structures that are correctly detected; precision (positive predictive value) indicates the percentage of accurately identified structures out of all detected structures. The performance values were extracted and pooled, and the estimates were presented with 95% confidence intervals (CIs). Results In total, 12 eligible studies were included. The overall pooled recall for artificial intelligence was 0.91 (95% CI: 0.87-0.94); in a subgroup analysis, the pooled recall was 0.88 (95% CI: 0.77-0.94) for detection and 0.92 (95% CI: 0.87-0.96) for segmentation. The overall pooled precision was 0.93 (95% CI: 0.88-0.95); a subgroup analysis showed a pooled precision of 0.90 (95% CI: 0.77-0.96) for detection and 0.94 (95% CI: 0.89-0.97) for segmentation. Conclusion Artificial intelligence showed excellent performance on oral and maxillofacial CBCT images.
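As defined in the abstract, recall and precision reduce to simple count ratios. A minimal sketch (helper names are ours):

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Share of existing structures that are correctly detected (sensitivity)."""
    return true_positives / (true_positives + false_negatives)

def precision(true_positives: int, false_positives: int) -> float:
    """Share of detected structures that are correct (positive predictive value)."""
    return true_positives / (true_positives + false_positives)

# e.g., 91 of 100 structures found, with 7 spurious detections
print(recall(91, 9), precision(91, 7))  # 0.91 and ~0.93
```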
Affiliation(s)
- Farida Abesi
- Department of Oral and Maxillofacial Radiology, Dental Faculty, Babol University of Medical Sciences, Babol, Iran
- Mahla Maleki
- Student Research Committee, Babol University of Medical Sciences, Babol, Iran
- Mohammad Zamani
- Student Research Committee, Babol University of Medical Sciences, Babol, Iran
20
Nan L, Tang M, Liang B, Mo S, Kang N, Song S, Zhang X, Zeng X. Automated Sagittal Skeletal Classification of Children Based on Deep Learning. Diagnostics (Basel) 2023; 13:diagnostics13101719. [PMID: 37238203 DOI: 10.3390/diagnostics13101719] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Revised: 05/02/2023] [Accepted: 05/05/2023] [Indexed: 05/28/2023] Open
Abstract
Malocclusions are a type of craniomaxillofacial growth and developmental deformity that occurs with high incidence in children, so a simple and rapid diagnosis of malocclusions would greatly benefit future generations. However, the application of deep learning algorithms to the automatic detection of malocclusions in children has not been reported. The aim of this study was therefore to develop a deep learning-based method for automatic classification of the sagittal skeletal pattern in children and to validate its performance, as a first step toward a decision support system for early orthodontic treatment. In this study, four different state-of-the-art (SOTA) models were trained and compared using 1613 lateral cephalograms, and the best-performing model, Densenet-121, was selected for subsequent validation. Lateral cephalograms and profile photographs were each used as input to the Densenet-121 model. The models were optimized using transfer learning and data augmentation techniques, and label distribution learning was introduced during model training to address the inevitable label ambiguity between adjacent classes. Five-fold cross-validation was conducted for a comprehensive evaluation of our method. The sensitivity, specificity, and accuracy of the CNN model based on lateral cephalometric radiographs were 83.99%, 92.44%, and 90.33%, respectively; the accuracy of the model with profile photographs was 83.39%. After the addition of label distribution learning, the accuracy of the two CNN models improved to 91.28% and 83.98%, respectively, while overfitting decreased. Previous studies have been based on adult lateral cephalograms; our study is therefore novel in applying a deep learning network architecture to lateral cephalograms and profile photographs obtained from children to achieve high-precision automatic classification of the sagittal skeletal pattern in children.
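Label distribution learning replaces the one-hot target with a soft distribution that gives adjacent skeletal classes some probability mass. A minimal sketch under our own parameter choices (the paper's exact distribution is not given in the abstract):

```python
import numpy as np

def soft_label(true_class: int, n_classes: int = 3, sigma: float = 0.5) -> np.ndarray:
    """Discrete Gaussian label distribution centred on the true class, modelling
    the ambiguity between adjacent sagittal skeletal classes (I/II/III)."""
    idx = np.arange(n_classes)
    weights = np.exp(-((idx - true_class) ** 2) / (2 * sigma ** 2))
    return weights / weights.sum()

print(soft_label(1))  # Class II keeps most of the mass; Classes I and III share the rest
```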
Affiliation(s)
- Lan Nan
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Min Tang
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Bohui Liang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Shuixue Mo
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Na Kang
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Shaohua Song
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Xuejun Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Xiaojuan Zeng
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Guangxi Health Commission Key Laboratory of Prevention and Treatment for Oral Infectious Diseases, Nanning 530021, China
- Guangxi Key Laboratory of Oral and Maxillofacial Rehabilitation and Reconstruction, Nanning 530021, China
21
Blum FMS, Möhlhenrich SC, Raith S, Pankert T, Peters F, Wolf M, Hölzle F, Modabber A. Evaluation of an artificial intelligence-based algorithm for automated localization of craniofacial landmarks. Clin Oral Investig 2023; 27:2255-2265. [PMID: 37014502 PMCID: PMC10159965 DOI: 10.1007/s00784-023-04978-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Accepted: 03/21/2023] [Indexed: 04/05/2023]
Abstract
OBJECTIVES Given advancing digitalisation, it is of interest to develop standardised, reproducible, and fully automated analysis methods for cranial structures in order to reduce the workload in diagnosis and treatment planning and to generate objectifiable data. The aim of this study was to train and evaluate an algorithm based on deep learning methods for fully automated detection of craniofacial landmarks in cone-beam computed tomography (CBCT) in terms of accuracy, speed, and reproducibility. MATERIALS AND METHODS A total of 931 CBCTs were used to train the algorithm. To test the algorithm, 35 landmarks were located manually by three experts and automatically by the algorithm in 114 CBCTs. The time required and the distance between the measured values and the ground truth, previously determined by an orthodontist, were analyzed. Intraindividual variation in manual landmark localization was determined using 50 CBCTs analyzed twice. RESULTS The results showed no statistically significant difference between the two measurement methods. Overall, with a mean error of 2.73 mm, the AI was 2.12% better and 95% faster than the experts, and for bilateral cranial structures it achieved better results than the experts on average. CONCLUSION The accuracy achieved for automatic landmark detection was within a clinically acceptable range, comparable in precision to manual landmark determination, and required less time. CLINICAL RELEVANCE Further enlargement of the database and continued development and optimization of the algorithm may enable ubiquitous fully automated localization and analysis of CBCT datasets in future routine clinical practice.
Affiliation(s)
- Stefan Raith
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
- Tobias Pankert
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
- Florian Peters
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
- Michael Wolf
- Department of Orthodontics, University Hospital of RWTH Aachen, Pauwelsstraße 30, D-52074, Aachen, Germany
- Frank Hölzle
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
- Ali Modabber
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
22
Serafin M, Baldini B, Cabitza F, Carrafiello G, Baselli G, Del Fabbro M, Sforza C, Caprioglio A, Tartaglia GM. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis. LA RADIOLOGIA MEDICA 2023; 128:544-555. [PMID: 37093337 PMCID: PMC10181977 DOI: 10.1007/s11547-023-01629-2] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 03/28/2023] [Indexed: 04/25/2023]
Abstract
OBJECTIVES The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. METHODS The PubMed/Medline, IEEE Xplore, Scopus, and ArXiv electronic databases were searched. The selection criteria were: ex vivo and in vivo volumetric data images suitable for 3D landmarking (Problem), a minimum of five automated landmarks localized by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported, as the outcome, the mean and standard deviation of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. RESULTS The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, 11 of which were used for the meta-analysis. The overall random-effects model revealed a mean value of 2.44 mm with high heterogeneity (I2 = 98.13%, τ2 = 1.018, p-value < 0.001); the risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relationship between mean error and year of publication (p-value = 0.012). CONCLUSION Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed, and improvements have been made in landmark annotation accuracy.
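The pooled 2.44 mm estimate comes from a random-effects model. A minimal DerSimonian-Laird sketch (standard formulas; the function name is ours) that also returns the τ² and I² statistics quoted above:

```python
import numpy as np

def random_effects_pool(means, variances):
    """DerSimonian-Laird random-effects pooling of per-study mean errors.

    means: per-study mean landmark errors (mm); variances: their sampling variances.
    Returns (pooled mean, tau^2 between-study variance, I^2 heterogeneity in %).
    """
    m, v = np.asarray(means, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * m) / np.sum(w)
    q = np.sum(w * (m - fixed) ** 2)                   # Cochran's Q
    df = len(m) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0    # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * m) / np.sum(w_star)
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, tau2, i2
```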
Affiliation(s)
- Marco Serafin
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Benedetta Baldini
- Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Federico Cabitza
- Department of Informatics, System and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Belgioioso 173, 20157, Milan, Italy
- Gianpaolo Carrafiello
- Department of Oncology and Hematology-Oncology, University of Milan, Via Sforza 35, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Giuseppe Baselli
- Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Massimo Del Fabbro
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Chiarella Sforza
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Alberto Caprioglio
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Gianluca M Tartaglia
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
23
Schobs LA, Swift AJ, Lu H. Uncertainty Estimation for Heatmap-Based Landmark Localization. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1021-1034. [PMID: 36383596 DOI: 10.1109/tmi.2022.3222730] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Automatic anatomical landmark localization has made great strides by leveraging deep learning methods in recent years. The ability to quantify the uncertainty of these predictions is a vital component needed for these methods to be adopted in clinical settings, where it is imperative that erroneous predictions are caught and corrected. We propose Quantile Binning, a data-driven method to categorize predictions by uncertainty with estimated error bounds. Our framework can be applied to any continuous uncertainty measure, allowing straightforward identification of the best subset of predictions with accompanying estimated error bounds. We facilitate easy comparison between uncertainty measures by constructing two evaluation metrics derived from Quantile Binning. We compare and contrast three epistemic uncertainty measures (two baselines, and a proposed method combining aspects of the two), derived from two heatmap-based landmark localization model paradigms (U-Net and patch-based). We show results across three datasets, including a publicly available Cephalometric dataset. We illustrate how filtering out gross mispredictions caught in our Quantile Bins significantly improves the proportion of predictions under an acceptable error threshold. Finally, we demonstrate that Quantile Binning remains effective on landmarks with high aleatoric uncertainty caused by inherent landmark ambiguity, and offer recommendations on which uncertainty measure to use and how to use it. The code and data are available at https://github.com/schobs/qbin.
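A minimal sketch of the Quantile Binning idea (the full implementation is in the linked repository; this simplification and its variable names are ours): sort predictions into uncertainty quantiles and inspect the error per bin.

```python
import numpy as np

def quantile_bin(uncertainty: np.ndarray, n_bins: int = 5) -> np.ndarray:
    """Bin index per prediction (0 = most confident quantile)."""
    edges = np.quantile(uncertainty, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(uncertainty, edges)

rng = np.random.default_rng(0)
u = rng.random(200)                              # stand-in continuous uncertainty values
err = 0.5 + 3.0 * u + rng.normal(0, 0.3, 200)    # errors loosely correlated with uncertainty
bins = quantile_bin(u)
for b in range(5):                               # mean error should grow with the bin index
    print(b, round(float(err[bins == b].mean()), 2))
```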
24
Torosdagli N, Anwar S, Verma P, Liberton DK, Lee JS, Han WW, Bagci U. Relational reasoning network for anatomical landmarking. J Med Imaging (Bellingham) 2023; 10:024002. [PMID: 36891503 PMCID: PMC9986769 DOI: 10.1117/1.jmi.10.2.024002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023] Open
Abstract
Purpose We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. To this end, we propose a simple yet efficient deep network architecture, called the relational reasoning network (RRN), to accurately learn the local and global relations among landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones. Approach The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats the landmarking process as a data imputation problem in which the predicted landmarks are considered missing. Results We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With fourfold cross-validation, we obtained an average root-mean-squared error of < 2 mm per landmark. The RRN revealed unique relationships among the landmarks that help us infer the informativeness of the landmark points, and it identifies missing landmark locations accurately even when severe pathology or deformations are present in the bones. Conclusions Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) can easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to learn anatomical relations of objects using deep learning.
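The reported accuracy is a per-landmark root-mean-squared error over held-out scans. A minimal sketch of that metric (the array layout is our assumption):

```python
import numpy as np

def rmse_per_landmark(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """RMSE of the 3D distance, computed separately for each landmark.

    pred, truth: (n_scans, n_landmarks, 3) coordinate arrays in mm.
    """
    sq_dist = np.sum((pred - truth) ** 2, axis=2)  # squared 3D error per scan and landmark
    return np.sqrt(sq_dist.mean(axis=0))           # shape: (n_landmarks,)
```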
Affiliation(s)
- Syed Anwar
- University of Central Florida, Orlando, Florida, United States
- Children’s National Hospital, Sheikh Zayed Institute, Washington, District of Columbia, United States
- George Washington University, Washington, District of Columbia, United States
- Payal Verma
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Denise K. Liberton
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Janice S. Lee
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Wade W. Han
- Boston Children’s Hospital, Harvard Medical School, Department of Otolaryngology - Head and Neck Surgery, Boston, Massachusetts, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
- Ulas Bagci
- University of Central Florida, Orlando, Florida, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
- Northwestern University, Departments of Radiology, BME, and ECE, Machine and Hybrid Intelligence Lab, Chicago, Illinois, United States
25
Bonaldi L, Pretto A, Pirri C, Uccheddu F, Fontanella CG, Stecco C. Deep Learning-Based Medical Images Segmentation of Musculoskeletal Anatomical Structures: A Survey of Bottlenecks and Strategies. Bioengineering (Basel) 2023; 10:137. [PMID: 36829631 PMCID: PMC9952222 DOI: 10.3390/bioengineering10020137] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2022] [Revised: 01/13/2023] [Accepted: 01/17/2023] [Indexed: 01/22/2023] Open
Abstract
By leveraging recent developments in artificial intelligence algorithms, several medical sectors have benefited from automatic tools that segment anatomical structures in bioimages. Segmentation of the musculoskeletal system is key for studying alterations in anatomical tissue and supporting medical interventions, and the clinical use of such tools requires an understanding of the proper methods for interpreting data and evaluating performance. The current systematic review aims to present the common bottlenecks in the analysis of musculoskeletal structures (e.g., small sample size, data inhomogeneity) and the strategies different authors have used to address them. A search was performed in the PubMed database with the following keywords: deep learning, musculoskeletal system, segmentation. A total of 140 articles published up until February 2022 were obtained and analyzed according to the PRISMA framework in terms of anatomical structures, bioimaging techniques, pre-/post-processing operations, training/validation/testing subset creation, network architecture, loss functions, performance indicators, and so on. Several common trends emerged from this survey; however, the different methods need to be compared and discussed on the basis of each specific case study (anatomical region, medical imaging acquisition setting, study population, etc.). These findings can guide clinicians (as end users) in better understanding the potential benefits and limitations of these tools.
Affiliation(s)
- Lorenza Bonaldi
- Department of Civil, Environmental and Architectural Engineering, University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Andrea Pretto
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Carmelo Pirri
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Francesca Uccheddu
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Chiara Giulia Fontanella
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Carla Stecco
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
26
Artificial intelligence models for clinical usage in dentistry with a focus on dentomaxillofacial CBCT: a systematic review. Oral Radiol 2023; 39:18-40. [PMID: 36269515 DOI: 10.1007/s11282-022-00660-9] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 09/29/2022] [Indexed: 01/05/2023]
Abstract
This study aimed to perform a systematic review of the literature on the application of artificial intelligence (AI) to dental and maxillofacial cone beam computed tomography (CBCT) and to provide comprehensive descriptions of current technical innovations to assist future researchers and dental professionals. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA) statement was followed, and the study's protocol was prospectively registered. The following databases were searched using MeSH and Emtree terms: PubMed/MEDLINE, Embase, and Web of Science. The search strategy yielded 1473 articles, of which 59 publications assessing the use of AI on CBCT images in dentistry were included. According to the PROBAST guidelines for study design, seven papers reported only external validation, 11 reported both model building and validation on an external dataset, and 40 studies focused exclusively on model development. The AI models employed mainly used deep learning (42 studies), while the other 17 papers used conventional approaches, such as statistical-shape and active-shape models, and traditional machine learning methods, such as thresholding-based methods, support vector machines, k-nearest neighbors, decision trees, and random forests. Supervised or semi-supervised learning was utilized in the majority (96.62%) of studies, and unsupervised learning in two (3.38%). Fifty-two of the included studies had a high risk of bias (ROB), two had a low ROB, and four an unclear rating. AI-based applications have the potential to improve oral healthcare quality, promote personalized, predictive, preventative, and participatory dentistry, and expedite dental procedures.
27
Machine Learning Predictive Model as Clinical Decision Support System in Orthodontic Treatment Planning. Dent J (Basel) 2022; 11:dj11010001. [PMID: 36661538 PMCID: PMC9858447 DOI: 10.3390/dj11010001] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/09/2022] [Accepted: 11/18/2022] [Indexed: 12/24/2022] Open
Abstract
Diagnosis and treatment planning form the crux of orthodontics, expertise that orthodontists gain only with years of practice. Machine learning (ML), with its ability to learn by pattern recognition, can acquire this expertise in a very short time, ensuring reduced error, low inter- and intra-clinician variability, and good accuracy. The aim of this study was therefore to construct an ML predictive model to predict a broad outline of the orthodontic diagnosis and treatment plan. The sample consisted of 700 case records of patients orthodontically treated in the past ten years. The data were split into a training and a test set, with 33 input variables and 11 output variables. Four ML predictive model layers with seven algorithms were created. The test set was used to check the efficacy of the ML-predicted treatment plan against the decisions made by expert orthodontists. The model showed an overall average accuracy of 84%, with the Decision Tree, Random Forest, and XGB classifier algorithms showing the highest accuracies, ranging from 87-93%. Though still in their infancy, machine learning models could become a valuable clinical decision support system for orthodontic diagnosis and treatment planning in the future.
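A minimal sketch of this kind of pipeline (scikit-learn; the synthetic feature matrix, split ratio, and model settings are illustrative, not the study's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 33))        # 700 case records with 33 input variables
y = rng.integers(0, 2, size=700)      # one binarized treatment-plan decision (stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))  # ~0.5 on random data; real records differ
```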
28
Tsolakis IA, Tsolakis AI, Elshebiny T, Matthaios S, Palomo JM. Comparing a Fully Automated Cephalometric Tracing Method to a Manual Tracing Method for Orthodontic Diagnosis. J Clin Med 2022; 11:jcm11226854. [PMID: 36431331 PMCID: PMC9693212 DOI: 10.3390/jcm11226854] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 11/11/2022] [Accepted: 11/16/2022] [Indexed: 11/22/2022] Open
Abstract
Background: This study aims to compare an automated cephalometric analysis based on the latest deep learning method for automatically identifying cephalometric landmarks with a manual tracing method using broadly accepted cephalometric software. Methods: A total of 100 cephalometric X-rays taken with a CS8100SC cephalostat were collected from a private practice. The X-rays were taken at maximum image size (18 × 24 cm lateral image). All cephalometric X-rays were first traced manually using the Dolphin 3D Imaging program, version 11.0, and then automatically using the artificial intelligence CS imaging V8 software. The American Board of Orthodontics analysis and the European Board of Orthodontics analysis were used for the cephalometric measurements, yielding 16 cephalometric landmarks used for 16 angular and 2 linear measurements. Results: All measurements showed excellent reproducibility, with high intra-class reliability (>0.97). The two methods showed strong agreement, with ICCs ranging from 0.70 to 0.92. Mean values of the SNA, SNB, ANB, SN-MP, U1-SN, L1-NB, SNPg, ANPg, SN/ANS-PNS, SN/GoGn, U1/ANS-PNS, L1-APg, U1-NA, and L1-GoGn measurements showed no significant differences between the two methods (p > 0.0027), while the mean values of FMA, L1-MP, ANS-PNS/GoGn, and U1-L1 were statistically significantly different (p < 0.0027). Conclusions: The automatic cephalometric tracing method using CS imaging V8 software is reliable and accurate for all cephalometric measurements.
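Agreement between the two tracing methods is summarized with intra-class correlation. A minimal ICC(2,1) sketch (two-way random effects, absolute agreement, a common formulation; whether the study used this exact form is not stated in the abstract):

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    x: (n_subjects, k_raters) matrix, e.g. one column per tracing method.
    """
    n, k = x.shape
    grand = x.mean()
    ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subject MS
    ms_c = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between-rater MS
    ss_e = np.sum((x - x.mean(axis=1, keepdims=True)
                     - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = ss_e / ((n - 1) * (k - 1))                            # residual MS
    return float((ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n))
```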
Affiliation(s)
- Ioannis A. Tsolakis
- Department of Orthodontics, School of Dentistry, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
- Apostolos I. Tsolakis
- Department of Orthodontics, School of Dentistry, National and Kapodistrian University of Athens, 157 72 Athens, Greece
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- Tarek Elshebiny
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- Stefanos Matthaios
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- J. Martin Palomo
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
29
Fan Y, Chen G, He W, Zhang N, Song G, Matthews H, Claes P, Xu T. Nasal characteristics in patients with asymmetric mandibular prognathism. Am J Orthod Dentofacial Orthop 2022; 162:680-688. [PMID: 35973875 DOI: 10.1016/j.ajodo.2021.06.023] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Revised: 06/01/2021] [Accepted: 06/01/2021] [Indexed: 11/28/2022]
Abstract
INTRODUCTION This study aimed to objectively quantify the nasal characteristics of patients with asymmetric mandibular prognathism and to evaluate the association between nasal asymmetry and dentofacial abnormalities. METHODS Ninety adult patients with asymmetric mandibular prognathism were included. Images were captured at pretreatment using 3-dimensional stereophotogrammetry. A total of 7160 uniformly sampled quasi-landmarks were automatically identified on each facial image to establish correspondence using a template mapping technique, and 15 commonly used anatomic landmarks were automatically located on each image through barycentric-to-Cartesian coordinate conversion. Nasal characteristics and asymmetry were quantified by anthropometric linear distances, angular measurements, and surface-based analysis. The degree of nasal, chin, and periorbital asymmetry in a patient was scored using a root-mean-squared error between the left and right sides, and the correlations among these regional asymmetries were evaluated. RESULTS The nasal tip was significantly shifted toward the deviated side of the chin, and the nostrils were asymmetrical. The location and degree of nasal asymmetry varied among patients with asymmetric mandibular prognathism. The level of nasal asymmetry was significantly and positively correlated with chin and periorbital asymmetry. CONCLUSIONS Nasal asymmetry is present in patients with asymmetric mandibular prognathism and is positively associated with periorbital and chin deviation. Individualized nasal asymmetry evaluation should be performed, and clinicians should inform patients about preexisting nasal asymmetry.
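A sketch of the left-right asymmetry scoring idea: mirror one side across the midsagittal plane and take the RMSE over paired landmarks. The mirroring convention and pairing structure here are our assumptions; the abstract states only that an RMSE between the left and right sides was used.

```python
import numpy as np

def asymmetry_rmse(landmarks: np.ndarray, pairs: list) -> float:
    """RMSE between paired left/right landmarks after mirroring the right side.

    landmarks: (n, 3) coordinates aligned so the midsagittal plane is x = 0
    (an assumption of this sketch); pairs: list of (left_idx, right_idx) tuples.
    """
    left = landmarks[[l for l, _ in pairs]].astype(float)
    right = landmarks[[r for _, r in pairs]].astype(float)
    right[:, 0] *= -1.0  # reflect x-coordinates across the assumed midsagittal plane
    return float(np.sqrt(np.mean(np.sum((left - right) ** 2, axis=1))))
```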
Affiliation(s)
- Yi Fan
- Third Clinical Division, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Gui Chen
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Wei He
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Nan Zhang
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Guangying Song
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Harold Matthews
- Facial Science, Murdoch Children's Research Institute, Melbourne, Australia; Department of Human Genetics, KU Leuven, Leuven, Belgium; Medical Imaging Research Centre, Universitair Ziekenhuis, Leuven, Belgium
- Peter Claes
- Facial Science, Murdoch Children's Research Institute, Melbourne, Australia; Department of Human Genetics and Department of Electrical Engineering, KU Leuven, Leuven, Belgium; Medical Imaging Research Centre, Universitair Ziekenhuis, Leuven, Belgium
- Tianmin Xu
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
30
Lang Y, Lian C, Xiao D, Deng H, Thung KH, Yuan P, Gateno J, Kuang T, Alfi DM, Wang L, Shen D, Xia JJ, Yap PT. Localization of Craniomaxillofacial Landmarks on CBCT Images Using 3D Mask R-CNN and Local Dependency Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2856-2866. [PMID: 35544487 PMCID: PMC9673501 DOI: 10.1109/tmi.2022.3174513] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Cephalometric analysis relies on accurate detection of craniomaxillofacial (CMF) landmarks from cone-beam computed tomography (CBCT) images. However, due to the complexity of CMF bony structures, it is difficult to localize landmarks efficiently and accurately. In this paper, we propose a deep learning framework to tackle this challenge by jointly digitalizing 105 CMF landmarks on CBCT images. By explicitly learning the local geometrical relationships between the landmarks, our approach extends Mask R-CNN for end-to-end prediction of landmark locations. Specifically, we first apply a detection network on a down-sampled 3D image to leverage global contextual information to predict the approximate locations of the landmarks. We subsequently leverage local information provided by higher-resolution image patches to refine the landmark locations. On patients with varying non-syndromic jaw deformities, our method achieves an average detection accuracy of 1.38 ± 0.95 mm, outperforming a related state-of-the-art method.
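The two-stage design first predicts approximate locations on a down-sampled volume and then refines them within high-resolution patches. A minimal sketch of the hand-off between the stages (the patch size and down-sampling factor are illustrative):

```python
import numpy as np

def crop_patch(volume: np.ndarray, center: np.ndarray, size: int = 64) -> np.ndarray:
    """Crop a high-resolution cube around a coarse landmark estimate."""
    half = size // 2
    lo = np.clip(center - half, 0, np.array(volume.shape) - size)  # keep cube in bounds
    x, y, z = lo.astype(int)
    return volume[x:x + size, y:y + size, z:z + size]

downsample = 4                                  # coarse stage runs at 1/4 resolution
coarse_vox = np.array([40, 52, 33])             # detection-network output (coarse grid)
volume = np.zeros((256, 256, 256), np.uint8)    # stand-in full-resolution CBCT
patch = crop_patch(volume, coarse_vox * downsample)
print(patch.shape)                              # (64, 64, 64)
```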
31
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac840f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 07/25/2022] [Indexed: 11/11/2022]
Abstract
Head and neck surgery is a delicate procedure involving complex anatomy, difficult operations, and high risk. Medical image computing (MIC), which enables accurate and reliable preoperative planning, is often needed to reduce the difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC to head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, and MICCAI. Among them, 65 references concern automatic segmentation, 15 automatic landmark detection, and eight automatic registration. The review first presents an overview of deep learning in MIC; the applications of deep learning methods are then systematically summarized according to clinical needs and grouped into segmentation, landmark detection, and registration of head and neck medical images. In segmentation, the focus is mainly on automatic segmentation of high-risk organs, head and neck tumors, skull structures, and teeth, including an analysis of the methods' advantages, differences, and shortcomings. In landmark detection, the focus is on landmark detection in cephalometric and craniomaxillofacial images, with an analysis of the methods' advantages and disadvantages. In registration, deep learning networks for multimodal registration of head and neck images are presented. Finally, shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers, and doctors engaged in medical image analysis for head and neck surgery.
32
Minnema J, Ernst A, van Eijnatten M, Pauwels R, Forouzanfar T, Batenburg KJ, Wolff J. A review on the application of deep learning for CT reconstruction, bone segmentation and surgical planning in oral and maxillofacial surgery. Dentomaxillofac Radiol 2022; 51:20210437. [PMID: 35532946 PMCID: PMC9522976 DOI: 10.1259/dmfr.20210437] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Revised: 04/21/2022] [Accepted: 04/25/2022] [Indexed: 12/11/2022] Open
Abstract
Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these three steps can introduce errors that can heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, an extremely large number of novel NN approaches have been proposed for a wide variety of applications, which makes it a difficult task to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied for CT image reconstruction, bone segmentation, and surgical planning. After full text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 focusing on bone segmentation and 11 focusing on surgical planning. Generally, convolutional NNs were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.
Affiliation(s)
- Jordi Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Anne Ernst
- Institute for Medical Systems Biology, University Hospital Hamburg-Eppendorf, Hamburg, Germany
- Maureen van Eijnatten
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Ruben Pauwels
- Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
- Tymour Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Kees Joost Batenburg
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Jan Wolff
- Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard, Aarhus, Denmark
33
Liu J, Xing F, Shaikh A, Linguraru MG, Porras AR. Learning with Context Encoding for Single-Stage Cranial Bone Labeling and Landmark Localization. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2022; 13438:286-296. [PMID: 39911317 PMCID: PMC11795956 DOI: 10.1007/978-3-031-16452-1_28] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2025]
Abstract
Automatic anatomical segmentation and landmark localization in medical images are important tasks during craniofacial analysis. While deep neural networks have been recently applied to segment cranial bones and identify cranial landmarks from computed tomography (CT) or magnetic resonance (MR) images, existing methods often provide suboptimal and sometimes unrealistic results because they do not incorporate contextual image information. Additionally, most state-of-the-art deep learning methods for cranial bone segmentation and landmark detection rely on multi-stage data processing pipelines, which are inefficient and prone to errors. In this paper, we propose a novel context encoding-constrained neural network for single-stage cranial bone labeling and landmark localization. Specifically, we design and incorporate a novel context encoding module into a U-Net-like architecture. We explicitly enforce the network to capture context-related features for representation learning so pixel-wise predictions are not isolated from the image context. In addition, we introduce a new auxiliary task to model the relative spatial configuration of different anatomical landmarks, which serves as an additional regularization that further refines network predictions. The proposed method is end-to-end trainable for single-stage cranial bone labeling and landmark localization. The method was evaluated on a highly diverse pediatric 3D CT image dataset with 274 subjects. Our experiments demonstrate superior performance of our method compared to state-of-the-art approaches.
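The auxiliary task models the relative spatial configuration of the landmarks. A minimal sketch of one natural target for such a task, the matrix of pairwise offset vectors (the paper's exact formulation is not given in the abstract):

```python
import numpy as np

def pairwise_offsets(landmarks: np.ndarray) -> np.ndarray:
    """All pairwise offset vectors between landmarks.

    landmarks: (n, 3) -> (n, n, 3), where entry [i, j] = p_i - p_j.
    Regressing such offsets as an auxiliary output encourages predictions
    consistent with the global anatomical configuration, not just with
    each landmark's position in isolation.
    """
    return landmarks[:, None, :] - landmarks[None, :, :]
```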
Affiliation(s)
- Jiawei Liu
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora CO 80045, USA
- Fuyong Xing
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora CO 80045, USA
- Abbas Shaikh
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora CO 80045, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital; Departments of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington DC 20010, USA
- Antonio R Porras
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora CO 80045, USA
- Department of Pediatrics, School of Medicine, University of Colorado Anschutz Medical Campus; Departments of Pediatric Plastic & Reconstructive Surgery and Neurosurgery, Children's Hospital Colorado, Aurora CO 80045, USA
34
Elkhill C, LeBeau S, French B, Porras AR. Graph convolutional network with probabilistic spatial regression: application to craniofacial landmark detection from 3D photogrammetry. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2022; 13433:574-583. [PMID: 39845778 PMCID: PMC11751771 DOI: 10.1007/978-3-031-16437-8_55] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/24/2025]
Abstract
Quantitative evaluation of pediatric craniofacial anomalies relies on the accurate identification of anatomical landmarks and structures. While segmentation and landmark detection methods for standard clinical images are available in the literature, image-based methods are not directly applicable to 3D photogrammetry because of its unstructured nature, with variable numbers of vertices and polygons. In this work, we propose a graph-based convolutional neural network based on Chebyshev polynomials that exploits vertex coordinates, polygonal connectivity, and surface normal vectors to extract multi-resolution spatial features from the 3D photographs. We then aggregate them using a novel weighting scheme that accounts for local spatial resolution variability in the data. We also propose a new trainable regression scheme based on the probabilistic distances between each original vertex and the anatomical landmarks to calculate coordinates from the aggregated spatial features. This approach allows calculating accurate landmark coordinates without assuming correspondences with specific vertices in the original mesh. Our method achieved state-of-the-art landmark detection errors.
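A minimal sketch of the Chebyshev-polynomial graph convolution such a network builds on (dense NumPy for clarity; real meshes would use sparse Laplacians, and the weights here are untrained placeholders):

```python
import numpy as np

def cheb_conv(x: np.ndarray, lap_scaled: np.ndarray, weights: list) -> np.ndarray:
    """Order-K Chebyshev graph convolution: sum_k T_k(L~) @ x @ W_k.

    x: (n_vertices, f_in) vertex features (e.g., coordinates and normals);
    lap_scaled: rescaled Laplacian L~ = 2L/lambda_max - I;
    weights: list of (f_in, f_out) matrices, one per polynomial order.
    """
    t_prev, t_curr = x, lap_scaled @ x       # T_0(L~)x = x, T_1(L~)x = L~x
    out = t_prev @ weights[0]
    if len(weights) > 1:
        out = out + t_curr @ weights[1]
    for w in weights[2:]:                    # recurrence: T_k = 2 L~ T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2 * (lap_scaled @ t_curr) - t_prev
        out = out + t_curr @ w
    return out
```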
Affiliation(s)
- Connor Elkhill
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus
- Computational Bioscience Program, School of Medicine, University of Colorado Anschutz Medical Campus
- Department of Pediatric Plastic & Reconstructive Surgery, Children's Hospital Colorado
- Scott LeBeau
- Department of Pediatric Plastic & Reconstructive Surgery, Children's Hospital Colorado
- Brooke French
- Department of Pediatric Plastic & Reconstructive Surgery, Children's Hospital Colorado
- Antonio R Porras
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus
- Computational Bioscience Program, School of Medicine, University of Colorado Anschutz Medical Campus
- Department of Pediatric Plastic & Reconstructive Surgery, Children's Hospital Colorado
- Department of Pediatric Neurosurgery, Children's Hospital Colorado
- Department of Pediatrics, School of Medicine, University of Colorado Anschutz Medical Campus
35
Dot G, Schouman T, Chang S, Rafflenbeul F, Kerbrat A, Rouch P, Gajny L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J Dent Res 2022; 101:1380-1387. [PMID: 35982646 DOI: 10.1177/00220345221112333] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The increasing use of 3-dimensional (3D) imaging by orthodontists and maxillofacial surgeons to assess complex dentofacial deformities and plan orthognathic surgeries implies a critical need for 3D cephalometric analysis. Although promising methods have been suggested for localizing 3D landmarks automatically, concerns about robustness and generalizability restrain their clinical use, and highly trained operators are still needed to perform manual landmarking. In this retrospective diagnostic study, we aimed to train and evaluate a deep learning (DL) pipeline based on SpatialConfiguration-Net for automatic localization of 3D cephalometric landmarks on computed tomography (CT) scans. A retrospective sample of consecutive presurgical CT scans was randomly distributed between a training/validation set (n = 160) and a test set (n = 38). The reference data consisted of 33 landmarks, manually localized once by 1 operator (n = 178) or twice by 3 operators (n = 20, test set only). After inference on the test set, 1 CT scan showed "very low" confidence level predictions; we excluded it from the overall analysis but still assessed and discussed the corresponding results. Model performance was evaluated by comparing the predictions with the reference data; the outcome set included localization accuracy, cephalometric measurements, and comparison to manual landmarking reproducibility. On the hold-out test set, the mean localization error was 1.0 ± 1.3 mm, and success detection rates for 2.0, 2.5, and 3.0 mm were 90.4%, 93.6%, and 95.4%, respectively. Mean errors were -0.3 ± 1.3° and -0.1 ± 0.7 mm for angular and linear measurements, respectively. When compared to manual reproducibility, the measurements were within the Bland-Altman 95% limits of agreement for 91.9% of skeletal and 71.8% of dentoalveolar variables. To conclude, while our DL method still requires improvement, it provided highly accurate 3D landmark localization on a challenging test set, with a reliability for skeletal evaluation on par with what clinicians obtain.
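The success detection rate used above is simply the share of predictions whose radial error falls under a threshold. A minimal sketch:

```python
import numpy as np

def success_detection_rate(errors_mm: np.ndarray, thresholds=(2.0, 2.5, 3.0)) -> dict:
    """Fraction of landmark predictions under each radial-error threshold."""
    return {t: float((errors_mm < t).mean()) for t in thresholds}

errors = np.array([0.4, 1.1, 1.9, 2.2, 2.7, 3.4])  # illustrative radial errors (mm)
print(success_detection_rate(errors))  # {2.0: 0.5, 2.5: 0.67, 3.0: 0.83} (rounded)
```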
Affiliation(s)
- G Dot
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Universite Paris Cite, AP-HP, Hopital Pitie-Salpetriere, Service de Medecine Bucco-Dentaire, Paris, France
- T Schouman
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- S Chang
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- F Rafflenbeul
- Department of Dentofacial Orthopedics, Faculty of Dental Surgery, Strasbourg University, Strasbourg, France
- A Kerbrat
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- P Rouch
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- L Gajny
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
36
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. [PMID: 35831451 PMCID: PMC9279304 DOI: 10.1038/s41598-022-15920-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 07/01/2022] [Indexed: 11/21/2022] Open
Abstract
This study aimed to generate and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat), thereby providing a measurement method. A second aim was to validate the newly developed artificial intelligence system against commercially available software for 3D CBCT evaluation. A convolutional neural network-based machine learning algorithm was used to segment the pharyngeal airways of OSA and non-OSA patients. Radiologists used semi-automatic software to manually delineate the airway, and their measurements were compared with those of the AI. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest point of the airway (mm), the area of the airway (mm2), and the airway volume (cc) of both OSA and non-OSA patients were also compared. There was no statistically significant difference between the manual technique and the Diagnocat measurements in any group (p > 0.05). Inter-class correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume between the manual, automatic, and DC measurements in non-OSA and OSA patients, we evaluated the output images to understand why the mean total airway value was higher in the DC measurement. The DC algorithm also measures the epiglottis volume and the posterior nasal aperture volume owing to the low soft-tissue contrast in CBCT images, which leads to higher airway volume measurements.
Affiliation(s)
- Kaan Orhan: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland
- Aida Kurbanova: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Gürkan Ünsal: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus; Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
- Seçil Aksoy: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Melis Mısırlı: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Finn Rasmussen: Internal Medicine Department Lunge Section, SVS Esbjerg, Esbjerg, Denmark; Life Lung Health Center, Nicosia, Cyprus
37
Chen R, Ma Y, Chen N, Liu L, Cui Z, Lin Y, Wang W. Structure-Aware Long Short-Term Memory Network for 3D Cephalometric Landmark Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1791-1801. [PMID: 35130151 DOI: 10.1109/tmi.2022.3149281] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial to assessing and quantifying anatomical abnormalities in 3D cephalometric analysis. However, current methods are time-consuming and suffer from large biases in landmark localization, leading to unreliable diagnostic results. In this work, we propose a novel Structure-Aware Long Short-Term Memory framework (SA-LSTM) for efficient and accurate 3D landmark detection. To reduce the computational burden, SA-LSTM is designed in two stages: it first locates coarse landmarks via heatmap regression on a down-sampled CBCT volume and then progressively refines them by attentive offset regression on multi-resolution cropped patches. To boost accuracy, SA-LSTM captures global-local dependence among the cropped patches via self-attention. Specifically, a novel graph attention module implicitly encodes the landmarks' global structure to rationalize the predicted positions, and a novel attention-gated module recursively filters irrelevant local features and retains high-confidence local predictions for aggregating the final result. Experiments conducted on an in-house dataset and a public dataset show that our method outperforms state-of-the-art methods, achieving average errors of 1.64 mm and 2.37 mm, respectively. Furthermore, our method is very efficient, taking only 0.5 seconds to infer a whole CBCT volume of resolution 768×768×576.
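The two-stage pattern described above, coarse heatmap localization on a down-sampled volume followed by full-resolution patch cropping, can be sketched generically as follows; the function is an illustration under assumed array shapes, not the SA-LSTM code.

```python
import numpy as np

def coarse_to_fine_crop(heatmap, volume, scale, patch=(32, 32, 32)):
    """Locate a landmark on a down-sampled heatmap, then crop a full-resolution
    patch around it for refinement.

    heatmap : (z, y, x) regression output on the down-sampled grid.
    volume  : (Z, Y, X) original CBCT volume, with Z = z * scale, etc.
    """
    # Coarse estimate: voxel with the highest heatmap response
    coarse = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    center = np.array(coarse) * scale                # back to full resolution
    half = np.array(patch) // 2
    # Clamp the patch so it stays inside the volume
    lo = np.clip(center - half, 0, np.array(volume.shape) - patch)
    z, y, x = lo
    dz, dy, dx = patch
    return volume[z:z + dz, y:y + dy, x:x + dx], center
```

A refinement network then regresses the offset of the true landmark within the cropped patch, which is added back to `center`.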
38
Li T, Wang Y, Qu Y, Dong R, Kang M, Zhao J. Feasibility study of hallux valgus measurement with a deep convolutional neural network based on landmark detection. Skeletal Radiol 2022; 51:1235-1247. [PMID: 34748073 DOI: 10.1007/s00256-021-03939-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 10/03/2021] [Accepted: 10/08/2021] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To develop a deep learning algorithm based on automatic landmark detection that can automatically calculate forefoot imaging parameters from radiographs, and to test its performance. MATERIALS AND METHODS A total of 1023 weight-bearing dorsoplantar (DP) radiographs were included: 776 were used for training and validation of the model, and 247 for testing its performance. Radiologists manually marked 18 landmarks on each image. By training our model to automatically label these landmarks, 4 imaging parameters commonly used for the diagnosis of hallux valgus could be measured: the first-second intermetatarsal angle (IMA), hallux valgus angle (HVA), hallux interphalangeal angle (HIA), and distal metatarsal articular angle (DMAA). The reference standard was determined by the radiologists' measurements. The percentage of correct keypoints (PCK), intraclass correlation coefficient (ICC), Pearson correlation coefficient (r), root mean square error (RMSE), and mean absolute error (MAE) between the model's predictions and the reference standard were calculated, and Bland-Altman plots were used to show the mean difference and 95% limits of agreement. RESULTS The PCK was 84-99% at the 3-mm threshold. The correlation between the observed and predicted values of the four angles was high (ICC: 0.89-0.96, r: 0.81-0.97, RMSE: 3.76-6.77, MAE: 3.22-5.52). However, there was a systematic error between the model's predictions and the reference standard (the mean difference ranged from -3.00° to -5.08°, and the standard deviation from 2.25° to 4.47°). CONCLUSION Our model can accurately identify landmarks, but the angle measurements carry a degree of error and need further improvement.
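Once the 18 landmarks are located, each of the four parameters reduces to the angle between two landmark-defined axes. The sketch below illustrates this with made-up coordinates; neither the numbers nor the code come from the study.

```python
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1->p2 and line q1->q2 (2D landmarks)."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hallux valgus angle from the axes of the first metatarsal and the
# proximal phalanx (hypothetical pixel coordinates)
hva = angle_between((120, 400), (130, 250), (130, 250), (155, 120))
print(f"HVA ≈ {hva:.1f}°")
```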
Affiliation(s)
- Tong Li: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Yuzhao Wang: College of Computer Science and Technology, Jilin University, Changchun, 130000, China
- Yang Qu: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Rongpeng Dong: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Mingyang Kang: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Jianwu Zhao: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
39
A Dual Discriminator Adversarial Learning Approach for Dental Occlusal Surface Reconstruction. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1933617. [PMID: 35449834 PMCID: PMC9018184 DOI: 10.1155/2022/1933617] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 03/12/2022] [Indexed: 11/18/2022]
Abstract
Objective. Restoring the correct masticatory function of partially edentulous patients is a challenging task, primarily because tooth morphology is complex and varies between individuals. Although some deep learning-based approaches have been proposed for dental restorations, most do not consider the influence of dental biological characteristics on occlusal surface reconstruction. Description. In this article, we propose a novel dual discriminator adversarial learning network to address these challenges. The network architecture integrates two models: a dilated convolution-based generative model and a dual global-local discriminative model. While the generative model adopts dilated convolution layers to generate a feature representation that preserves clear tissue structure, the dual discriminative model makes use of two discriminators to jointly distinguish whether the input is real or fake: the global discriminator looks at the missing teeth and adjacent teeth to assess whether the result is coherent as a whole, whereas the local discriminator considers only the defective tooth to ensure the local consistency of the generated dental crown. Results. Experiments on 1000 real-world patient dental samples demonstrate the effectiveness of our method. For quantitative comparison, image quality metrics were used to measure the similarity of the generated occlusal surface; the root mean square error between the generated result and the target crown was 0.114 mm. In qualitative analysis, the proposed approach generated more plausible dental biological morphology. Conclusion. The results demonstrate that our method significantly outperforms state-of-the-art methods in occlusal surface reconstruction. Importantly, the designed occlusal surface reproduces the anatomical morphology of natural teeth and has high clinical application value.
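A minimal sketch of the dual global-local adversarial objective follows, assuming standard binary-cross-entropy GAN losses. Here `crop_defect_region` is a hypothetical helper standing in for the patch extraction around the defective tooth; the real architectures in the paper are far larger.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(d_global, d_local, real, fake, crop_defect_region):
    # Each discriminator labels real inputs 1 and generated inputs 0
    real_patch, fake_patch = crop_defect_region(real), crop_defect_region(fake)
    loss = 0.0
    for d, r, f in ((d_global, real, fake), (d_local, real_patch, fake_patch)):
        loss = loss + bce(d(r), torch.ones_like(d(r))) \
                    + bce(d(f.detach()), torch.zeros_like(d(f.detach())))
    return loss

def generator_loss(d_global, d_local, fake, crop_defect_region):
    # The generator tries to make both discriminators output "real"
    fake_patch = crop_defect_region(fake)
    return bce(d_global(fake), torch.ones_like(d_global(fake))) \
         + bce(d_local(fake_patch), torch.ones_like(d_local(fake_patch)))
```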
40
Automated detection and labelling of teeth and small edentulous regions on Cone-Beam Computed Tomography using Convolutional Neural Networks. J Dent 2022; 122:104139. [DOI: 10.1016/j.jdent.2022.104139] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 04/04/2022] [Accepted: 04/20/2022] [Indexed: 12/30/2022] Open
41
Diaconu A, Holte MB, Cattaneo PM, Pinholt EM. A semi-automatic approach for longitudinal 3D upper airway analysis using voxel-based registration. Dentomaxillofac Radiol 2022; 51:20210253. [PMID: 34644181 PMCID: PMC8925868 DOI: 10.1259/dmfr.20210253] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
OBJECTIVES To propose and validate a reliable semi-automatic approach for three-dimensional (3D) analysis of the upper airway (UA) based on voxel-based registration (VBR). METHODS Post-operative cone beam computed tomography (CBCT) scans of 10 orthognathic surgery patients were superimposed on the pre-operative CBCT scans by VBR, using the anterior cranial base as the reference. Anatomic landmarks were used to automatically cut the UA and calculate volumes and cross-sectional areas (CSA). The 3D analysis was performed twice by two observers, at an interval of two weeks. Intraclass correlations and Bland-Altman plots were used to quantify the measurement error and reliability of the method. The relative Dahlberg error was calculated and compared with that of a similar method based on landmark re-identification and manual measurements. RESULTS The intraclass correlation coefficient (ICC) showed excellent intra- and inter-observer reliability (ICC ≥ 0.995). Bland-Altman plots showed good observer agreement, low bias, and no systematic errors. The relative Dahlberg error ranged between 0.51 and 4.30% for volume and between 0.24 and 2.90% for CSA; this was lower than that of a comparable manual method. Voxel-based registration introduced a method error of 0.05-1.44%. CONCLUSIONS The proposed method showed excellent reliability and high observer agreement. Being semi-automatic, it is feasible for longitudinal clinical trials on large cohorts.
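For reference, Dahlberg's method error for duplicate measurements is ME = sqrt(sum(d_i^2) / (2n)), where d_i is the difference between the two measurements of case i; the relative form divides by the mean of all measurements. A small sketch with toy values (not data from the study):

```python
import numpy as np

def relative_dahlberg(x1, x2):
    """Dahlberg's method error between duplicate measurement series x1 and x2,
    expressed as a percentage of the grand mean (the 'relative' form)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    d = x1 - x2
    me = np.sqrt(np.sum(d ** 2) / (2 * len(d)))      # Dahlberg error
    return 100.0 * me / np.mean(np.concatenate([x1, x2]))

# Duplicate airway-volume measurements (cm^3) by the same observer, toy values
print(f"{relative_dahlberg([21.4, 18.2, 25.1], [21.9, 18.0, 24.6]):.2f} %")
```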
Affiliation(s)
- Alexandru Diaconu: 3D Lab Denmark, Department of Oral and Maxillofacial Surgery, University Hospital of Southern Denmark, Esbjerg, Denmark
- Paolo Maria Cattaneo: Melbourne Dental School, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Victoria, Australia
42
Holte MB, Sæderup H, Pinholt EM. Comparison of surface- and voxel-based registration on the mandibular ramus for long-term three-dimensional assessment of condylar remodelling following orthognathic surgery. Dentomaxillofac Radiol 2022; 51:20210499. [PMID: 35143288 PMCID: PMC9499205 DOI: 10.1259/dmfr.20210499] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
OBJECTIVES The purpose of the present study was to validate and compare the accuracy and reliability of surface- and voxel-based registration on the mandibular rami for long-term three-dimensional (3D) evaluation of condylar remodelling following orthognathic surgery. METHODS The mandible was 3D-reconstructed from a pair of superimposed pre- and postoperative (two-year) cone-beam computed tomography scans and divided into the condyle and 21 ramal regions. The accuracy of surface- and voxel-based registration was measured by the mean absolute surface distance of each region after alignment of the pre- and postoperative rami. To evaluate reliability, mean absolute differences and intraclass correlation coefficients (ICC) were calculated at a 95% confidence interval on the volumetric and surface distance measurements of two observers. Paired t-tests were applied to evaluate whether the accuracy and reliability of surface- and voxel-based registration differed significantly (p < 0.05). RESULTS A total of twenty subjects (sixteen female; four male; mean age 27.6 years) with class II malocclusion and maxillomandibular retrognathia, who underwent bimaxillary surgery, were included. Surface-based registration was more accurate and reliable than voxel-based registration on the mandibular ramus two years post-surgery (p < 0.05). The interobserver reliability of surface-based registration was excellent (ICC range 0.82-1.00); for voxel-based registration, it ranged from poor to excellent (0.00-0.98). The measurement error introduced by surface-based registration for assessment of condylar remodelling was considered clinically irrelevant (1.83% and 0.18 mm), while that introduced by voxel-based registration was considered clinically relevant (5.44% and 0.52 mm). CONCLUSIONS Surface-based registration proved more accurate and reliable than voxel-based registration on the mandibular ramus for long-term 3D assessment of condylar remodelling following orthognathic surgery. Importantly, however, the performance difference may be caused by an inappropriate reference structure, proposed in the literature and applied in this study.
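The mean absolute surface distance used here to score registration accuracy can be approximated on sampled surface points with a nearest-neighbour search. A sketch under assumed point-cloud inputs, not the study's software:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_surface_distance(pts_a, pts_b):
    """Symmetric mean absolute distance (mm) between two surfaces sampled as
    point clouds, e.g. the pre- and postoperative ramus after registration."""
    d_ab = cKDTree(pts_b).query(pts_a)[0]   # each point in A to nearest in B
    d_ba = cKDTree(pts_a).query(pts_b)[0]   # and vice versa
    return (d_ab.mean() + d_ba.mean()) / 2.0

# Toy point clouds standing in for registered ramus surfaces
rng = np.random.default_rng(1)
a = rng.uniform(0, 50, size=(2000, 3))
b = a + rng.normal(0, 0.2, size=a.shape)
print(f"{mean_absolute_surface_distance(a, b):.3f} mm")
```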
Affiliation(s)
- Michael Boelstoft Holte: Department of Oral and Maxillofacial Surgery & University of Southern Denmark, Faculty of Health Sciences, Department of Regional Health Research, University Hospital of Southern Denmark, Odense, Denmark
- Henrik Sæderup: Department of Oral and Maxillofacial Surgery, University Hospital of Southern Denmark, Odense, Denmark
- Else Marie Pinholt: Department of Regional Health Research & University Hospital of Southern Denmark, Department of Oral and Maxillofacial Surgery, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
43
Dot G, Schouman T, Dubois G, Rouch P, Gajny L. Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework. Eur Radiol 2022; 32:3639-3648. [PMID: 35037088 DOI: 10.1007/s00330-021-08455-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 09/27/2021] [Accepted: 11/01/2021] [Indexed: 01/06/2023]
Abstract
OBJECTIVES To evaluate the performance of the nnU-Net open-source deep learning framework for automatic multi-task segmentation of craniomaxillofacial (CMF) structures in CT scans obtained for computer-assisted orthognathic surgery. METHODS Four hundred and fifty-three consecutive patients having undergone high-resolution CT scans before orthognathic surgery were randomly distributed among a training/validation cohort (n = 300) and a testing cohort (n = 153). The ground truth segmentations were generated by 2 operators following an industry-certified procedure for use in computer-assisted surgical planning and personalized implant manufacturing. Model performance was assessed by comparing model predictions with ground truth segmentations. Examination of 45 CT scans by an industry expert provided additional evaluation. The model's generalizability was tested on a publicly available dataset of 10 CT scans with ground truth segmentation of the mandible. RESULTS In the test cohort, mean volumetric Dice similarity coefficient (vDSC) and surface Dice similarity coefficient at 1 mm (sDSC) were 0.96 and 0.97 for the upper skull, 0.94 and 0.98 for the mandible, 0.95 and 0.99 for the upper teeth, 0.94 and 0.99 for the lower teeth, and 0.82 and 0.98 for the mandibular canal. Industry expert segmentation approval rates were 93% for the mandible, 89% for the mandibular canal, 82% for the upper skull, 69% for the upper teeth, and 58% for the lower teeth. CONCLUSION While additional efforts are required for the segmentation of dental apices, our results demonstrated the model's reliability in terms of fully automatic segmentation of preoperative orthognathic CT scans. KEY POINTS • The nnU-Net deep learning framework can be trained out-of-the-box to provide robust fully automatic multi-task segmentation of CT scans performed for computer-assisted orthognathic surgery planning. • The clinical viability of the trained nnU-Net model is shown on a challenging test dataset of 153 CT scans randomly selected from clinical practice, showing metallic artifacts and diverse anatomical deformities. • Commonly used biomedical segmentation evaluation metrics (volumetric and surface Dice similarity coefficient) do not always match industry expert evaluation in the case of more demanding clinical applications.
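For reference, the two headline metrics can be sketched as below. The surface Dice shown is a simplified voxel-boundary approximation under an assumed voxel spacing, not the exact implementation used in the paper.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def volumetric_dice(a, b):
    """Volumetric Dice between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface_dice(a, b, spacing, tol=1.0):
    """Surface Dice at tolerance `tol` mm: the fraction of boundary points of
    each mask lying within `tol` of the other mask's boundary."""
    def boundary_points(m):
        edge = m ^ ndimage.binary_erosion(m)     # one-voxel-thick boundary
        return np.argwhere(edge) * np.asarray(spacing)
    pa, pb = boundary_points(a), boundary_points(b)
    d_ab = cKDTree(pb).query(pa)[0]
    d_ba = cKDTree(pa).query(pb)[0]
    return ((d_ab <= tol).sum() + (d_ba <= tol).sum()) / (len(pa) + len(pb))
```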
Affiliation(s)
- Gauthier Dot: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Universite de Paris, AP-HP, Hopital Pitie-Salpetriere, Service d'Odontologie, Paris, France
- Thomas Schouman: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- Guillaume Dubois: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Materialise, Malakoff, France
- Philippe Rouch: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; EPF-Graduate School of Engineering, Sceaux, France
- Laurent Gajny: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
44
Luo D, Zeng W, Chen J, Tang W. Deep Learning for Automatic Image Segmentation in Stomatology and Its Clinical Application. FRONTIERS IN MEDICAL TECHNOLOGY 2021; 3:767836. [PMID: 35047964 PMCID: PMC8757832 DOI: 10.3389/fmedt.2021.767836] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/29/2021] [Indexed: 11/16/2022] Open
Abstract
Deep learning has become an active research topic in the field of medical image analysis. In particular, great advances have been made in the segmentation performance of automatic methods for stomatological images. In this paper, we systematically reviewed the recent literature on deep learning-based segmentation methods for stomatological images and their clinical applications, categorized them into different tasks, and analyzed their advantages and disadvantages. The main aspects we explored were the data sources, backbone networks, and task formulations. We categorized data sources into panoramic radiography, dental X-rays, cone-beam computed tomography, multi-slice spiral computed tomography, and intraoral scan images. For the backbone network, we distinguished methods based on convolutional neural networks from those based on transformers. We divided task formulations into semantic segmentation and instance segmentation tasks. Toward the end of the paper, we discussed the challenges and provided several directions for further research on the automatic segmentation of stomatological images.
Affiliation(s)
- Wei Tang: The State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China College of Stomatology, Sichuan University, Chengdu, China
45
Chen X, Lian C, Deng HH, Kuang T, Lin HY, Xiao D, Gateno J, Shen D, Xia JJ, Yap PT. Fast and Accurate Craniomaxillofacial Landmark Detection via 3D Faster R-CNN. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3867-3878. [PMID: 34310293 PMCID: PMC8686670 DOI: 10.1109/tmi.2021.3099509] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Automatic craniomaxillofacial (CMF) landmark localization from cone-beam computed tomography (CBCT) images is challenging, considering that 1) the number of landmarks in the images may change due to varying deformities and traumatic defects, and 2) the CBCT images used in clinical practice are typically large. In this paper, we propose a two-stage, coarse-to-fine deep learning method to tackle these challenges with both speed and accuracy in mind. Specifically, we first use a 3D faster R-CNN to roughly locate landmarks in down-sampled CBCT images that have varying numbers of landmarks. By converting the landmark point detection problem into a generic object detection problem, our 3D faster R-CNN is formulated to detect virtual, fixed-size objects in small boxes whose centers indicate the approximate locations of the landmarks. Based on the rough landmark locations, we then crop 3D patches from the high-resolution images and send them to a multi-scale UNet for the regression of heatmaps, from which the refined landmark locations are finally derived. We evaluated the proposed approach by detecting up to 18 landmarks on a real clinical dataset of CMF CBCT images with various conditions. Experiments show that our approach achieves state-of-the-art accuracy of 0.89 ± 0.64 mm in an average time of 26.2 seconds per volume.
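The reformulation at the heart of the first stage, turning landmark points into fixed-size "virtual objects" a detector can be trained on, can be sketched as below; the box size is an illustrative choice, not the paper's value. A detected box's center then maps back to the approximate landmark location.

```python
import numpy as np

def landmarks_to_boxes(landmarks, box_size=16):
    """Convert landmark coordinates into fixed-size 'virtual object' boxes so a
    3D detector can be trained on them.

    landmarks : (n, 3) voxel coordinates.
    Returns     (n, 6) boxes as [z0, y0, x0, z1, y1, x1].
    """
    half = box_size // 2
    lo = np.asarray(landmarks) - half
    hi = lo + box_size
    return np.concatenate([lo, hi], axis=1)

print(landmarks_to_boxes(np.array([[64, 128, 128], [80, 100, 140]])))
```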
46
Yao J, Zeng W, He T, Zhou S, Zhang Y, Guo J, Tang W. Automatic localization of cephalometric landmarks based on convolutional neural network. Am J Orthod Dentofacial Orthop 2021; 161:e250-e259. [PMID: 34802868 DOI: 10.1016/j.ajodo.2021.09.012] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2021] [Revised: 09/01/2021] [Accepted: 09/01/2021] [Indexed: 11/01/2022]
Abstract
INTRODUCTION Cephalometry plays an important role in the diagnosis and treatment planning of orthodontics and orthognathic surgery. This study aimed to develop an automatic landmark location system to make cephalometry more convenient. METHODS In this study, 512 lateral cephalograms were collected and 37 landmarks were included. The coordinates of all landmarks in the 512 films were obtained to establish a labeled dataset: 312 films were used as a training set, 100 as a validation set, and 100 as a testing set. An automatic landmark location system based on a convolutional neural network was developed. The system consists of a global detection module and a local modification module: a lateral cephalogram is first fed into the global module to obtain an initial estimate of each landmark's position, which is then adjusted by the local module to improve accuracy. Mean radial error (MRE) and success detection rate (SDR) within the range of 1-4 mm were used to evaluate the method. RESULTS The MRE on our validation set was 1.127 ± 1.028 mm, and the SDRs at 1.0, 1.5, 2.0, 2.5, 3.0, and 4.0 mm were 45.95%, 89.19%, 97.30%, 97.30%, and 97.30%, respectively. The MRE on our testing set was 1.038 ± 0.893 mm, and the SDRs at 1.0, 1.5, 2.0, 2.5, 3.0, and 4.0 mm were 54.05%, 91.89%, 97.30%, 100%, 100%, and 100%, respectively. CONCLUSIONS In this study, we proposed a new automatic landmark location system based on a convolutional neural network. The system could detect 37 landmarks with high accuracy. All landmarks are commonly used in clinical practice and meet the requirements of different cephalometric analysis methods.
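Landmark detectors of this kind are commonly trained against Gaussian heatmap targets centred on each annotated landmark; a generic sketch follows (sigma is a typical training choice, not a value reported by the authors).

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=5.0):
    """Ground-truth heatmap for one landmark on a lateral cephalogram: a 2D
    Gaussian peaked at the annotated (row, col) position."""
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    r0, c0 = center
    return np.exp(-((rows - r0) ** 2 + (cols - c0) ** 2) / (2 * sigma ** 2))

hm = gaussian_heatmap((512, 512), (200, 310))
print(hm.shape, np.unravel_index(hm.argmax(), hm.shape))  # peak at the landmark
```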
Affiliation(s)
- Jie Yao: Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, Shaanxi, China; State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and Department of Oral and Maxillofacial Surgery, West China College of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Wei Zeng: State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and Department of Oral and Maxillofacial Surgery, West China College of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Tao He: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Shanluo Zhou: State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and Department of Oral and Maxillofacial Surgery, West China College of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Yi Zhang: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Jixiang Guo: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Wei Tang: State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and Department of Oral and Maxillofacial Surgery, West China College of Stomatology, Sichuan University, Chengdu, Sichuan, China
47
He T, Yao J, Tian W, Yi Z, Tang W, Guo J. Cephalometric landmark detection by considering translational invariance in the two-stage framework. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.08.042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
48
Xu J, Liu J, Zhang D, Zhou Z, Zhang C, Chen X. A 3D segmentation network of mandible from CT scan with combination of multiple convolutional modules and edge supervision in mandibular reconstruction. Comput Biol Med 2021; 138:104925. [PMID: 34656866 DOI: 10.1016/j.compbiomed.2021.104925] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 10/01/2021] [Accepted: 10/01/2021] [Indexed: 11/27/2022]
Abstract
Mandibular reconstruction is a very complex surgery that requires removing the tumor and then reconstructing the defective mandible. Accurate segmentation of the mandible plays an important role in preoperative planning. However, there are many segmentation challenges, including the connected boundaries of the upper and lower teeth, blurred condyle edges, metal artifact interference, and the varied shapes of mandibles with tumor invasion (MTI). The manual and semi-automatic segmentation methods commonly used in clinical practice are time-consuming and perform poorly, while existing automatic methods are mainly developed for mandibles without tumor invasion (non-MTI) rather than MTI and suffer from problems such as under-segmentation. Given these problems, this paper proposes a 3D automatic segmentation network for the mandible that combines multiple convolutional modules with edge supervision. First, a squeeze-and-excitation residual module is used for feature optimization so that the network focuses more on the mandibular region. Second, a multi-atrous convolution cascade module implements a multi-scale feature search to extract more detailed features. Because most mandibular segmentation networks ignore boundary information, a loss function combining region loss and edge loss is applied to further improve segmentation performance. Experiments show that the proposed network can segment non-MTI and MTI cases quickly and automatically, with an average segmentation time of 7.41 s per CT scan, while maintaining good accuracy. For non-MTI segmentation, the Dice coefficient (Dice) reaches 97.98 ± 0.36%, the average surface distance (ASD) 0.061 ± 0.016 mm, and the 95% Hausdorff distance (95HD) 0.484 ± 0.027 mm. For MTI segmentation, the Dice reaches 96.90 ± 1.59%, the ASD 0.162 ± 0.107 mm, and the 95HD 1.161 ± 1.034 mm. Compared with other methods, the proposed method has better segmentation performance, effectively improving accuracy and reducing under-segmentation. It can greatly improve doctors' segmentation efficiency and holds a promising application prospect for mandibular reconstruction surgery.
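A loss combining region and edge supervision, as described above, might be sketched as follows; the Dice-on-boundaries formulation, the max-pooling edge extraction, and the weighting are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    # Soft Dice on probability maps
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def edge_map(mask):
    # Approximate boundary of a (B, 1, D, H, W) float mask via
    # one-voxel max-pool dilation minus the mask itself
    dilated = F.max_pool3d(mask, kernel_size=3, stride=1, padding=1)
    return dilated - mask

def region_edge_loss(prob, target, w_edge=0.5):
    """Region loss on the full mask plus edge loss on extracted boundaries."""
    return dice_loss(prob, target) \
         + w_edge * dice_loss(edge_map(prob), edge_map(target))
```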
Affiliation(s)
- Jiangchang Xu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jiannan Liu: Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Dingzhong Zhang: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zijie Zhou: Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chenping Zhang: Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
49
Tian S, Wang M, Dai N, Ma H, Li L, Fiorenza L, Sun Y, Li Y. DCPR-GAN: Dental Crown Prosthesis Restoration Using Two-stage Generative Adversarial Networks. IEEE J Biomed Health Inform 2021; 26:151-160. [PMID: 34637385 DOI: 10.1109/jbhi.2021.3119394] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Restoring the correct masticatory function of broken teeth is the basis of dental crown prosthesis rehabilitation. However, it is a challenging task, primarily due to the complex and personalized morphology of the occlusal surface. In this article, we address this problem by designing a new two-stage generative adversarial network (GAN) to reconstruct the dental crown surface from a data-driven perspective. In the first stage, a conditional GAN (CGAN) is designed to learn the inherent relationship between the defective tooth and the target crown, which solves the problem of occlusal relationship restoration. In the second stage, an improved CGAN incorporating an occlusal groove parsing network (GroNet) and an occlusal fingerprint constraint enforces the generator to enrich the functional characteristics of the occlusal surface. Experimental results on a real-world patient database demonstrate that the proposed framework significantly outperforms state-of-the-art deep learning methods in functional occlusal surface reconstruction. Moreover, both the standard deviation (SD) and the root mean square (RMS) distance between the generated occlusal surface and the target crown are below 0.161 mm. Importantly, the designed dental crown has adequate anatomical morphology and high clinical applicability.
50
Kumar A, Bhadauria HS, Singh A. Descriptive analysis of dental X-ray images using various practical methods: A review. PeerJ Comput Sci 2021; 7:e620. [PMID: 34616881 PMCID: PMC8459782 DOI: 10.7717/peerj-cs.620] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 06/09/2021] [Indexed: 06/13/2023]
Abstract
In dentistry, practitioners interpret various dental X-ray imaging modalities to identify tooth-related problems, abnormalities, or changes in tooth structure. Dental imaging can also be helpful in the field of biometrics. Human dental image analysis is a challenging and time-consuming process due to the unspecified and uneven structures of various teeth, which makes manual investigation of dental abnormalities demanding. Automation of dental image segmentation and examination is therefore essential to ensure error-free diagnosis and better treatment planning. In this article, we provide a comprehensive survey of dental image segmentation and analysis, investigating more than 130 research works conducted on various dental imaging modalities, such as various modes of X-ray, CT (computed tomography), and CBCT (cone-beam computed tomography). The state-of-the-art research works are classified into three major categories (image processing, machine learning, and deep learning approaches), and their respective advantages and limitations are identified and discussed. The survey presents extensive details of the state-of-the-art methods, including image modalities, pre-processing applied for image enhancement, performance measures, and datasets utilized.