1
Sankar H, Alagarsamy R, Lal B, Rana SS, Roychoudhury A, Agrawal A, Wankhar S. Role of artificial intelligence in treatment planning and outcome prediction of jaw corrective surgeries by using 3-D imaging: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2025;139:299-310. [PMID: 39701860] [DOI: 10.1016/j.oooo.2024.09.010]
Abstract
OBJECTIVE Artificial intelligence (AI) has been increasingly utilized in the diagnosis of skeletal deformities, while its role in treatment planning and outcome prediction of jaw corrective surgeries with 3-dimensional (3D) imaging remains underexplored. METHODS A comprehensive search was conducted in PubMed, Google Scholar, Semantic Scholar, and the Cochrane Library covering January 2000 to May 2024. Inclusion criteria encompassed studies on AI applications in treatment planning and outcome prediction for jaw corrective surgeries using 3D imaging. Data extracted included study details, AI algorithms, and performance metrics. A modified PROBAST tool was used to assess the risk of bias (ROB). RESULTS Fourteen studies were included: 11 used deep learning algorithms and 3 employed machine learning on CT data. In treatment planning, the prediction error ranged from 0.292 to 3.32 mm (N = 5) and the Dice score from 92.24% to 96% (N = 2). Accuracy of outcome predictions varied from 85.7% to 99.98% (N = 2). ROB was low in most of the included studies. A meta-analysis was not conducted due to significant heterogeneity and insufficient data reporting in the included studies. CONCLUSION 3D imaging-based AI models for treatment planning and outcome prediction in jaw corrective surgeries show promise but remain at the proof-of-concept stage. Further prospective multicenter studies are needed to validate these findings.
Affiliation(s)
- Hariram Sankar: Department of Dentistry, All India Institute of Medical Sciences, Bathinda, Punjab, India
- Ragavi Alagarsamy: Department of Burns, Plastic and Maxillofacial Surgery, VMMC and Safdarjung Hospital, New Delhi, India
- Babu Lal: Department of Trauma and Emergency Medicine, All India Institute of Medical Sciences, Bhopal, Madhya Pradesh, India
- Shailendra Singh Rana: Department of Dentistry, All India Institute of Medical Sciences, Bathinda, Punjab, India
- Ajoy Roychoudhury: Department of Oral & Maxillofacial Surgery, All India Institute of Medical Sciences, New Delhi, India
- Amit Agrawal: Department of Neurosurgery, All India Institute of Medical Sciences, Bhopal, Madhya Pradesh, India
- Syrpailyne Wankhar: Department of Translational Medicine, All India Institute of Medical Sciences, Bhopal, Madhya Pradesh, India
2
Chang JS, Ma CY, Ko EWC. Prediction of surgery-first approach orthognathic surgery using deep learning models. Int J Oral Maxillofac Surg 2024;53:942-949. [PMID: 38821731] [DOI: 10.1016/j.ijom.2024.05.003]
Abstract
The surgery-first approach (SFA) to orthognathic surgery can be beneficial due to reduced overall treatment time and earlier profile improvement. The objective of this study was to utilize deep learning to predict the treatment modality, SFA or the orthodontics-first approach (OFA), in orthognathic surgery patients and assess its clinical accuracy. A supervised deep learning model using three convolutional neural networks (CNNs) was trained on lateral cephalograms and occlusal views of 3D dental model scans from 228 skeletal Class III malocclusion patients (114 treated by SFA and 114 by OFA). An ablation study of five groups (lateral cephalogram only, mandible image only, maxilla image only, maxilla and mandible images, and all data combined) was conducted to assess the influence of each input type. The average validation accuracy, precision, recall, F1 score, and AUROC across the five folds were 0.978, 0.980, 0.980, 0.980, and 0.998; the corresponding average testing results were 0.906, 0.986, 0.828, 0.892, and 0.952. The lateral cephalogram only group had the lowest accuracy, while the maxilla image only group had the highest. Deep learning provides a novel method for an accelerated workflow, automated assisted decision-making, and personalized treatment planning.
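The accuracy, precision, recall, and F1 figures quoted above follow directly from confusion-matrix counts. As an illustrative sketch only (not the authors' code, and with hypothetical counts for a balanced binary SFA/OFA fold):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # of predicted-positive cases, fraction truly positive
    recall = tp / (tp + fn)     # of truly positive cases, fraction recovered
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# hypothetical counts for a 20-patient validation fold
print(classification_metrics(tp=9, fp=1, fn=2, tn=8))
```

The high testing precision (0.986) alongside lower recall (0.828) reported in the study indicates the model rarely mislabels OFA cases as SFA but misses some true SFA cases.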
Affiliation(s)
- J-S Chang: Graduate Institute of Dental and Craniofacial Science, Chang Gung University, Taoyuan, Taiwan; Department of Craniofacial Orthodontics, Chang Gung Memorial Hospital, Taipei, Taiwan
- C-Y Ma: Department of Artificial Intelligence, Chang Gung University, Taoyuan, Taiwan; Artificial Intelligence Research Center, Chang Gung University, Taoyuan, Taiwan; Division of Rheumatology, Allergy and Immunology, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- E W-C Ko: Graduate Institute of Dental and Craniofacial Science, Chang Gung University, Taoyuan, Taiwan; Department of Craniofacial Orthodontics, Chang Gung Memorial Hospital, Taipei, Taiwan; Craniofacial Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan
3
Dot G, Chaurasia A, Dubois G, Savoldelli C, Haghighat S, Azimian S, Taramsari AR, Sivaramakrishnan G, Issa J, Dubey A, Schouman T, Gajny L. DentalSegmentator: robust open source deep learning-based CT and CBCT image segmentation. J Dent 2024;147:105130. [PMID: 38878813] [DOI: 10.1016/j.jdent.2024.105130]
Abstract
OBJECTIVES Segmentation of anatomical structures on dento-maxillo-facial (DMF) computed tomography (CT) or cone beam computed tomography (CBCT) scans is increasingly needed in digital dentistry. The main aim of this research was to propose and evaluate a novel open source tool called DentalSegmentator for fully automatic segmentation of five anatomical structures on DMF CT and CBCT scans: maxilla/upper skull, mandible, upper teeth, lower teeth, and the mandibular canal. METHODS A retrospective sample of 470 CT and CBCT scans was used as a training/validation set. The performance and generalizability of the tool were evaluated by comparing expert-provided and automatic segmentations in two hold-out test datasets: an internal dataset of 133 CT and CBCT scans acquired before orthognathic surgery and an external dataset of 123 CBCT scans randomly sampled from routine examinations in 5 institutions. RESULTS The mean overall results on the internal test dataset (n = 133) were a Dice similarity coefficient (DSC) of 92.2 ± 6.3% and a normalised surface distance (NSD) of 98.2 ± 2.2%. The mean overall results on the external test dataset (n = 123) were a DSC of 94.2 ± 7.4% and an NSD of 98.4 ± 3.6%. CONCLUSIONS The results obtained on this highly diverse dataset demonstrate that this tool can provide fully automatic and robust multiclass segmentation for DMF CT and CBCT scans. To encourage the clinical deployment of DentalSegmentator, the pre-trained nnU-Net model has been made publicly available along with an extension for the 3D Slicer software. CLINICAL SIGNIFICANCE The DentalSegmentator open source 3D Slicer extension provides a free, robust, and easy-to-use approach to obtaining patient-specific three-dimensional models from CT and CBCT scans. These models serve various purposes in a digital dentistry workflow, such as visualization, treatment planning, intervention, and follow-up.
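The DSC reported here is a voxel-overlap measure that reduces to a few lines over binary label masks. A minimal NumPy sketch (illustrative only, not the DentalSegmentator evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # convention: two empty masks count as perfect agreement
    return 1.0 if total == 0 else 2.0 * overlap / total

# toy 2D masks: a 4-voxel square vs. an overlapping 6-voxel rectangle
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice(a, b))  # 0.8  (2 * 4 shared voxels / (4 + 6))
```

The same function applies unchanged to 3D CT/CBCT label volumes, since NumPy reductions are dimension-agnostic.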
Affiliation(s)
- Gauthier Dot: UFR Odontologie, Université Paris Cité, Paris, France; Service de Médecine Bucco-Dentaire, AP-HP, Hôpital Pitié-Salpêtrière, Paris, France; Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
- Akhilanand Chaurasia: Department of Oral Medicine and Radiology, Faculty of Dental Sciences, King George Medical University, Lucknow, Uttar Pradesh, India
- Guillaume Dubois: Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France; Materialise France, Malakoff, France
- Charles Savoldelli: Department of Oral and Maxillofacial Surgery, Head and Neck Institute, University Hospital of Nice, France
- Sara Haghighat: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Sarina Azimian: Research Committee, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Julien Issa: Department of Diagnostics, Chair of Practical Clinical Dentistry, Poznan University of Medical Sciences, Poznan, Poland; Doctoral School, Poznan University of Medical Sciences, Poznan, Poland
- Abhishek Dubey: Department of Oral Medicine and Radiology, Maharana Pratap Dental College, Kanpur, India
- Thomas Schouman: Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France; AP-HP, Hôpital Pitié-Salpêtrière, Service de Chirurgie Maxillo-Faciale, Médecine Sorbonne Université, Paris, France
- Laurent Gajny: Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
4
Choi Y, Bang J, Kim SY, Seo M, Jang J. Deep learning-based multimodal segmentation of oropharyngeal squamous cell carcinoma on CT and MRI using self-configuring nnU-Net. Eur Radiol 2024;34:5389-5400. [PMID: 38243135] [DOI: 10.1007/s00330-024-10585-y]
Abstract
PURPOSE To evaluate deep learning-based segmentation models for oropharyngeal squamous cell carcinoma (OPSCC) using CT and MRI with nnU-Net. METHODS This single-center retrospective study included 91 patients with OPSCC. The patients were grouped into the development (n = 56), test 1 (n = 13), and test 2 (n = 22) cohorts. In the development cohort, OPSCC was manually segmented on CT, MR, and co-registered CT-MR images, which served as the ground truth. The multimodal and multichannel input images were then trained using a self-configuring nnU-Net. As evaluation metrics, the Dice similarity coefficient (DSC) and mean Hausdorff distance (HD) were calculated for the test cohorts. Pearson's correlation and Bland-Altman analyses were performed between ground truth and prediction volumes. Intraclass correlation coefficients (ICCs) of radiomic features were calculated for reproducibility assessment. RESULTS All models achieved robust segmentation performance, with DSCs of 0.64 ± 0.33 (CT), 0.67 ± 0.27 (MR), and 0.65 ± 0.29 (CT-MR) in test cohort 1 and 0.57 ± 0.31 (CT), 0.77 ± 0.08 (MR), and 0.73 ± 0.18 (CT-MR) in test cohort 2. No significant differences were found in DSC among the models. The HDs of the CT-MR (1.57 ± 1.06 mm) and MR models (1.36 ± 0.61 mm) were significantly lower than that of the CT model (3.48 ± 5.0 mm) (p = 0.037 and p = 0.014, respectively). The correlation coefficients between the ground truth and prediction volumes for the CT, MR, and CT-MR models were 0.88, 0.93, and 0.90, respectively. MR models demonstrated excellent mean ICCs of radiomic features (0.91-0.93). CONCLUSION The self-configuring nnU-Net demonstrated reliable and accurate segmentation of OPSCC on CT and MRI. The multimodal CT-MR model showed promising results for simultaneous segmentation on CT and MRI.
CLINICAL RELEVANCE STATEMENT Deep learning-based automatic detection and segmentation of oropharyngeal squamous cell carcinoma on pre-treatment CT and MRI would facilitate radiologic response assessment and radiotherapy planning. KEY POINTS • The nnU-Net framework produced reliable and accurate segmentation of OPSCC on CT and MRI. • The MR and CT-MR models showed higher DSC and lower Hausdorff distance than the CT model. • Correlation coefficients between the ground truth and predicted segmentation volumes were high in all three models.
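The Hausdorff distance used above to compare models measures worst-case boundary disagreement between two point sets. A self-contained NumPy sketch of the symmetric form (illustrative only; the study computed it on full segmentation surfaces, and library implementations such as SciPy's `directed_hausdorff` scale better):

```python
import numpy as np

def hausdorff(u: np.ndarray, v: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets of shape (n, d) and (m, d)."""
    # pairwise Euclidean distances, shape (n, m)
    d = np.linalg.norm(u[:, None, :] - v[None, :, :], axis=-1)
    forward = d.min(axis=1).max()   # farthest point of u from its nearest in v
    backward = d.min(axis=0).max()  # farthest point of v from its nearest in u
    return float(max(forward, backward))

u = np.array([[0.0, 0.0], [1.0, 0.0]])
v = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(u, v))  # 2.0  (point (3, 0) lies 2.0 from its nearest neighbour in u)
```

Because it is a maximum over boundary mismatches, a single outlying voxel can dominate HD, which is why it complements the overlap-oriented DSC.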
Affiliation(s)
- Yangsean Choi: Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea; Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Centre, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jooin Bang: Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Sang-Yeon Kim: Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Minkook Seo: Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Jinhee Jang: Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
5
Xiang B, Lu J, Yu J. Evaluating tooth segmentation accuracy and time efficiency in CBCT images using artificial intelligence: a systematic review and meta-analysis. J Dent 2024;146:105064. [PMID: 38768854] [DOI: 10.1016/j.jdent.2024.105064]
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to assess the current performance of artificial intelligence (AI)-based methods for tooth segmentation in three-dimensional cone-beam computed tomography (CBCT) images, with a focus on their accuracy and efficiency compared with those of manual segmentation techniques. DATA The data analyzed in this review consisted of a wide range of research studies utilizing AI algorithms for tooth segmentation in CBCT images. A meta-analysis was performed, focusing on the evaluation of segmentation results using the Dice similarity coefficient (DSC). SOURCES PubMed, Embase, Scopus, Web of Science, and IEEE Xplore were comprehensively searched to identify relevant studies. The initial search yielded 5642 entries, and subsequent screening and selection led to the inclusion of 35 studies in the systematic review. Among the various segmentation methods employed, convolutional neural networks, particularly the U-Net model, were the most commonly utilized. The pooled DSC for tooth segmentation was 0.95 (95% CI 0.94 to 0.96). Furthermore, seven papers provided insights into the time required for segmentation, which ranged from 1.5 s to 3.4 min when utilizing AI techniques. CONCLUSIONS AI models demonstrated favorable accuracy in automatically segmenting teeth from CBCT images while reducing the time required for the process. Nevertheless, correction methods for metal artifacts and tooth structure segmentation using different imaging modalities should be addressed in future studies. CLINICAL SIGNIFICANCE AI algorithms have great potential for precise tooth measurements, orthodontic treatment planning, dental implant placement, and other dental procedures that require accurate tooth delineation. These advances have contributed to improved clinical outcomes and patient care in dental practice.
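A pooled estimate with a 95% CI, such as the DSC of 0.95 (0.94 to 0.96) above, is the kind of figure produced by inverse-variance weighting of per-study estimates. The fixed-effect sketch below shows the core weighting idea with made-up inputs; it is not the review's analysis, which, given the reported heterogeneity, would more plausibly use a random-effects model:

```python
import numpy as np

def pooled_fixed_effect(means, std_errs):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
    means = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(std_errs, dtype=float) ** 2  # precision weights
    pooled = float(np.sum(w * means) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))              # standard error of the pool
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical per-study DSC means and standard errors
est, (lo, hi) = pooled_fixed_effect([0.94, 0.95, 0.96], [0.01, 0.02, 0.01])
print(round(est, 3), round(lo, 3), round(hi, 3))
```

More precise studies (smaller standard errors) pull the pooled value toward themselves, which is why the middle study above, with twice the standard error, contributes only a quarter of the weight.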
Affiliation(s)
- Bilu Xiang: School of Dentistry, Shenzhen University Medical School, Shenzhen University, Shenzhen 518000, China
- Jiayi Lu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
- Jiayi Yu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
6
Bao J, Zhang X, Xiang S, Liu H, Cheng M, Yang Y, Huang X, Xiang W, Cui W, Lai HC, Huang S, Wang Y, Qian D, Yu H. Deep learning-based facial and skeletal transformations for surgical planning. J Dent Res 2024;103:809-819. [PMID: 38808566] [DOI: 10.1177/00220345241253186]
Abstract
The increasing application of virtual surgical planning (VSP) in orthognathic surgery implies a critical need for accurate prediction of facial and skeletal shapes. The craniofacial relationship in patients with dentofacial deformities is still not fully understood, and transformations between facial and skeletal shapes remain a challenging task due to intricate anatomical structures and nonlinear relationships between the facial soft tissue and bones. In this study, a novel bidirectional 3-dimensional (3D) deep learning framework, named P2P-ConvGC, was developed and validated on a large-scale dataset for accurate subject-specific transformations between facial and skeletal shapes. Specifically, a 2-stage point-sampling strategy was used to generate multiple nonoverlapping point subsets to represent high-resolution facial and skeletal shapes. Facial and skeletal point subsets were separately input into the prediction system to predict the corresponding skeletal and facial point subsets via the skeletal prediction subnetwork and facial prediction subnetwork. For quantitative evaluation, accuracy was calculated with shape errors and landmark errors between the predicted skeleton or face and the corresponding ground truths. The shape error was calculated by comparing the predicted point sets with the ground truths, with P2P-ConvGC outperforming existing state-of-the-art algorithms including P2P-Net, P2P-ASNL, and P2P-Conv. The total landmark errors (Euclidean distances of craniomaxillofacial landmarks) of P2P-ConvGC in the upper skull, mandible, and facial soft tissues were 1.964 ± 0.904 mm, 2.398 ± 1.174 mm, and 2.226 ± 0.774 mm, respectively. Furthermore, the clinical feasibility of the bidirectional model was validated using a clinical cohort, with average surface deviation errors of 0.895 ± 0.175 mm for facial prediction and 0.906 ± 0.082 mm for skeletal prediction.
To conclude, the proposed model achieved good performance on subject-specific prediction of facial and skeletal shapes and showed clinical application potential in postoperative facial prediction and VSP for orthognathic surgery.
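The landmark errors quoted above are mean Euclidean distances between predicted and ground-truth 3D landmark positions. A minimal sketch of that metric (illustrative only, with made-up coordinates in mm):

```python
import numpy as np

def mean_landmark_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean Euclidean distance between matched (n, 3) landmark arrays,
    in the input units (mm for the study above)."""
    return float(np.linalg.norm(pred - truth, axis=1).mean())

# two hypothetical landmarks: one predicted exactly, one off by a 3-4-0 offset
pred = np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]])
truth = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
print(mean_landmark_error(pred, truth))  # 2.5  ((0 + 5) / 2)
```

Unlike surface deviation, this metric requires point-to-point correspondence, so the landmark arrays must be listed in the same anatomical order.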
Affiliation(s)
- J Bao: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- X Zhang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- S Xiang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- H Liu: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- M Cheng: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- Y Yang: Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, China
- X Huang: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- W Xiang: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- W Cui: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- H C Lai: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- S Huang: Department of Oral and Maxillofacial Surgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Y Wang: Qingdao Stomatological Hospital Affiliated to Qingdao University, Qingdao, Shandong, China
- D Qian: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- H Yu: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
7
Di Brigida L, Cortese A, Cataldo E, Naddeo A. A New Method to Design and Manufacture a Low-Cost Custom-Made Template for Mandible Cut and Repositioning Using Standard Plates in BSSO Surgery. Bioengineering (Basel) 2024;11:668. [PMID: 39061750] [PMCID: PMC11273722] [DOI: 10.3390/bioengineering11070668]
Abstract
In this study, a new methodology for designing and creating a custom-made template for maxillofacial surgery was developed. The template can be used both for cutting and for repositioning the mandible arches when executing a BSSO (bilateral sagittal split osteotomy) treatment. The idea was developed to enable a custom-made template to be used with standard plates, thereby avoiding the long lead times, high costs, and low availability of custom-made plates; this constitutes the novelty of the proposed template, which builds on a well-established methodology. The methodology was developed entirely in a CAD virtual environment and, after the surgeons' assessment, an in vitro experiment was performed by a maxillofacial surgeon to check the usability and versatility of the system, taking advantage of additive manufacturing technologies. When computer-aided technologies are used for orthognathic surgery, significant time and cost savings can be realised, along with improved performance. The cost of the whole operation is lower than the standard one, thanks to the use of standard plates. The proposed methodology allows inexpensive physical mock-ups to be produced for carrying out the BSSO procedure.
Affiliation(s)
- Liliana Di Brigida: Department of Industrial Engineering, University of Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano, SA, Italy; Technogym Research and Development Department, Via Calcinaro 2861, 47521 Cesena, FC, Italy
- Antonio Cortese: Department of Medicine, Surgery and Dentistry, University of Salerno, Via Salvador Allende, 84081 Baronissi, SA, Italy
- Emilio Cataldo: Department of Industrial Engineering, University of Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano, SA, Italy; Techno Design S.r.l., via Rosa Jemma, 2, 84091 Battipaglia, SA, Italy
- Alessandro Naddeo: Department of Industrial Engineering, University of Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano, SA, Italy
8
Tel A, Raccampo L, Vinayahalingam S, Troise S, Abbate V, Orabona GD, Sembronio S, Robiony M. Complex Craniofacial Cases through Augmented Reality Guidance in Surgical Oncology: A Technical Report. Diagnostics (Basel) 2024;14:1108. [PMID: 38893634] [PMCID: PMC11171943] [DOI: 10.3390/diagnostics14111108]
Abstract
Augmented reality (AR) is a promising technology to enhance image-guided surgery and represents the perfect bridge to combine precise virtual planning with computer-aided execution of surgical maneuvers in the operating room. In craniofacial surgical oncology, AR brings a digital, three-dimensional representation of the anatomy into the surgeon's view and helps to identify tumor boundaries and optimal surgical paths. Intraoperatively, real-time AR guidance provides surgeons with accurate spatial information, ensuring accurate tumor resection and preservation of critical structures. In this paper, the authors review current evidence of AR applications in craniofacial surgery, focusing on real surgical applications, and compare the existing literature with their own experience during an AR- and navigation-guided craniofacial resection. They then analyze which technological trajectories will represent the future of AR and define new perspectives of application for this revolutionary technology.
Affiliation(s)
- Alessandro Tel: Clinic of Maxillofacial Surgery, Head-Neck and NeuroScience Department, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
- Luca Raccampo: Clinic of Maxillofacial Surgery, Head-Neck and NeuroScience Department, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
- Shankeeth Vinayahalingam: Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Stefania Troise: Neurosciences Reproductive and Odontostomatological Sciences Department, University of Naples “Federico II”, 80131 Naples, Italy
- Vincenzo Abbate: Neurosciences Reproductive and Odontostomatological Sciences Department, University of Naples “Federico II”, 80131 Naples, Italy
- Giovanni Dell’Aversana Orabona: Neurosciences Reproductive and Odontostomatological Sciences Department, University of Naples “Federico II”, 80131 Naples, Italy
- Salvatore Sembronio: Clinic of Maxillofacial Surgery, Head-Neck and NeuroScience Department, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
- Massimo Robiony: Clinic of Maxillofacial Surgery, Head-Neck and NeuroScience Department, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
9
Zheng Q, Gao Y, Zhou M, Li H, Lin J, Zhang W, Chen X. Semi or fully automatic tooth segmentation in CBCT images: a review. PeerJ Comput Sci 2024;10:e1994. [PMID: 38660190] [PMCID: PMC11041986] [DOI: 10.7717/peerj-cs.1994]
Abstract
Cone beam computed tomography (CBCT) is widely employed in modern dentistry, and tooth segmentation constitutes an integral part of the digital workflow based on these imaging data. Previous methodologies rely heavily on manual segmentation and are time-consuming and labor-intensive in clinical practice. Recently, with advancements in computer vision technology, scholars have conducted in-depth research, proposing various fast and accurate tooth segmentation methods. In this review, we examine 55 articles in this field and discuss the effectiveness, advantages, and disadvantages of each approach. Beyond simple classification and discussion, this review aims to reveal how tooth segmentation methods can be improved by the application and refinement of existing image segmentation algorithms to solve problems such as the irregular morphology and fuzzy boundaries of teeth. It is expected that, with the optimization of these methods, manual operation will be reduced and greater accuracy and robustness in tooth segmentation will be achieved. Finally, we highlight the challenges that still exist in this field and provide prospects for future directions.
Affiliation(s)
- Qianhan Zheng: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yu Gao: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Mengqi Zhou: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Huimin Li: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jiaqi Lin: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weifang Zhang: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China; Social Medicine & Health Affairs Administration, Zhejiang University, Hangzhou, China
- Xuepeng Chen: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China; Clinical Research Center for Oral Diseases of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
10
Yang X, He D, Li Y, Li C, Wang X, Zhu X, Sun H, Xu Y. Deep learning-based vessel extraction in 3D confocal microscope images of cleared human glioma tissues. Biomed Opt Express 2024;15:2498-2516. [PMID: 38633068] [PMCID: PMC11019690] [DOI: 10.1364/boe.516541]
Abstract
Comprehensive visualization and accurate extraction of tumor vasculature are essential to study the nature of glioma. Tissue clearing technology now enables 3D visualization of human glioma vasculature at micron resolution, but current vessel extraction schemes cope poorly with complex tumor vessels showing high disruption and irregularity under realistic conditions. Here, we developed FineVess, a deep learning-based framework that automatically extracts glioma vessels from confocal microscope images of cleared human tumor tissues. In the framework, a customized deep learning network, named 3D ResCBAM nnU-Net, was designed to segment the vessels, and a novel pipeline based on preprocessing and post-processing was developed to refine the segmentation results automatically. Applied to a practical dataset, FineVess extracted variable and incomplete vessels with high accuracy in challenging 3D images, outperforming both traditional and state-of-the-art schemes. For the extracted vessels, we calculated vascular morphological features, including fractal dimension and vascular wall integrity, across tumor grades, and verified vascular heterogeneity through quantitative analysis.
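The fractal dimension reported for the extracted vessels is commonly estimated by box counting on the binary vessel mask. The following is an illustrative NumPy sketch of that general technique, not the authors' FineVess implementation; the function name and the box sizes are assumptions:

```python
import numpy as np

def box_count_fractal_dimension(mask: np.ndarray) -> float:
    """Estimate the fractal dimension of a 3D binary mask by box counting:
    count occupied boxes at several scales, then fit log(count) vs log(1/size)."""
    sizes = [2, 4, 8, 16]  # assumed box side lengths, in voxels
    counts = []
    for s in sizes:
        # pad so each axis length is divisible by the box size
        pad = [(0, (-d) % s) for d in mask.shape]
        m = np.pad(mask, pad)
        # count boxes of side s containing at least one foreground voxel
        view = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s, m.shape[2] // s, s)
        counts.append(view.any(axis=(1, 3, 5)).sum())
    # slope of log(count) against log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

As a sanity check, a solid volume yields a dimension near 3 and a flat sheet near 2; vascular networks typically fall in between.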
Affiliation(s)
- Xiaodu Yang
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, China
  - Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Dian He
  - Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
  - Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yu Li
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, China
  - Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Chenyang Li
  - Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
  - Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xinyue Wang
  - Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
  - Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xingzheng Zhu
  - Institute of Applied Artificial Intelligence of the Guangdong-Hong Kong-Macao Greater Bay Area, Shenzhen Polytechnic University, Shenzhen, China
- Haitao Sun
  - Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
  - Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China
  - Key Laboratory of Mental Health of the Ministry of Education, Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-Inspired Intelligence, Southern Medical University, Guangzhou, China
- Yingying Xu
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, China
  - Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
11
Nogueira-Reis F, Morgan N, Suryani IR, Tabchoury CPM, Jacobs R. Full virtual patient generated by artificial intelligence-driven integrated segmentation of craniomaxillofacial structures from CBCT images. J Dent 2024; 141:104829. [PMID: 38163456 DOI: 10.1016/j.jdent.2023.104829] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 12/13/2023] [Accepted: 12/29/2023] [Indexed: 01/03/2024] Open
Abstract
OBJECTIVES To assess the performance, time-efficiency, and consistency of a convolutional neural network (CNN)-based automated approach, compared with a semi-automated method, for integrated segmentation of craniomaxillofacial structures when creating a virtual patient from cone beam computed tomography (CBCT) scans. METHODS Thirty CBCT scans were selected. Six craniomaxillofacial structures, encompassing the maxillofacial complex bones, maxillary sinus, dentition, mandible, mandibular canal, and pharyngeal airway space, were segmented on these scans using a semi-automated method and a composite of previously validated CNN-based automated segmentation techniques for the individual structures. A qualitative assessment of the automated segmentation revealed the need for minor refinements, which were corrected manually. These refined segmentations served as the reference for comparing the semi-automated and automated integrated segmentations. RESULTS The majority of minor adjustments with the automated approach involved under-segmentation of sinus mucosal thickening and of regions with reduced bone thickness within the maxillofacial complex. The automated and semi-automated approaches required an average of 1.1 min and 48.4 min, respectively. The automated method showed greater similarity to the reference (99.6 %) than the semi-automated approach (88.3 %). The standard deviations of all metrics for the automated approach were low, indicating high consistency. CONCLUSIONS The CNN-driven integrated segmentation approach proved accurate, time-efficient, and consistent for creating a CBCT-derived virtual patient through simultaneous segmentation of craniomaxillofacial structures. CLINICAL RELEVANCE The creation of a virtual orofacial patient using an automated approach could potentially transform personalized digital workflows. This advancement could be particularly beneficial for treatment planning in a variety of dental and maxillofacial specialties.
Affiliation(s)
- Fernanda Nogueira-Reis
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven, Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium
  - Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414‑903, Brazil
- Nermin Morgan
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven, Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium
  - Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Dakahlia 35516, Egypt
- Isti Rahayu Suryani
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven, Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Cinthia Pereira Machado Tabchoury
  - Department of Biosciences, Division of Biochemistry, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414‑903, Brazil
- Reinhilde Jacobs
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven, Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium
  - Department of Dental Medicine, Karolinska Institutet, Box 4064, Huddinge, Stockholm 141 04, Sweden
12
Dot G, Gajny L, Ducret M. [The challenges of artificial intelligence in odontology]. Med Sci (Paris) 2024; 40:79-84. [PMID: 38299907 DOI: 10.1051/medsci/2023199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2024] Open
Abstract
Artificial intelligence has numerous potential applications in dentistry, as these algorithms aim to improve the efficiency and safety of several clinical situations. While the first commercial solutions are being marketed, most of these algorithms have not been sufficiently validated for clinical use. This article describes the challenges surrounding the development of these new tools, to help clinicians keep a critical eye on this technology.
Affiliation(s)
- Gauthier Dot
  - UFR odontologie, université Paris Cité, Paris, France
  - AP-HP, hôpital Pitié-Salpêtrière, service de médecine bucco-dentaire, Paris, France
  - Institut de biomécanique humaine Georges Charpak, école nationale supérieure d'Arts et Métiers, Paris, France
- Laurent Gajny
  - Institut de biomécanique humaine Georges Charpak, école nationale supérieure d'Arts et Métiers, Paris, France
- Maxime Ducret
  - Faculté d'odontologie, université Claude Bernard Lyon 1, hospices civils de Lyon, Lyon, France
13
Weingart JV, Schlager S, Metzger MC, Brandenburg LS, Hein A, Schmelzeisen R, Bamberg F, Kim S, Kellner E, Reisert M, Russe MF. Automated detection of cephalometric landmarks using deep neural patchworks. Dentomaxillofac Radiol 2023; 52:20230059. [PMID: 37427585 PMCID: PMC10461263 DOI: 10.1259/dmfr.20230059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 04/25/2023] [Accepted: 05/13/2023] [Indexed: 07/11/2023] Open
Abstract
OBJECTIVES This study evaluated the accuracy of deep neural patchworks (DNPs), a deep learning-based segmentation framework, for automated identification of 60 cephalometric landmarks (bone, soft tissue, and tooth landmarks) on CT scans. The aim was to determine whether DNPs could be used for routine three-dimensional cephalometric analysis in diagnostics and treatment planning in orthognathic surgery and orthodontics. METHODS Full skull CT scans of 30 adult patients (18 female, 12 male, mean age 35.6 years) were randomly divided into a training and a test data set (each n = 15). Clinician A annotated 60 landmarks in all 30 CT scans. Clinician B annotated 60 landmarks in the test data set only. The DNP was trained using spherical segmentations of the tissue adjacent to each landmark. Automated landmark predictions on the separate test data set were created by calculating the center of mass of each landmark's predicted segmentation. Accuracy was evaluated by comparing these automated predictions to the manual annotations. RESULTS The DNP was successfully trained to identify all 60 landmarks. The mean error of our method was 1.94 mm (SD 1.45 mm), compared to a mean error of 1.32 mm (SD 1.08 mm) for manual annotations. The smallest errors were found for the landmarks ANS (1.11 mm), SN (1.2 mm), and CP_R (1.25 mm). CONCLUSION The DNP algorithm accurately identified cephalometric landmarks with mean errors <2 mm. This method could improve the workflow of cephalometric analysis in orthodontics and orthognathic surgery. Low training requirements combined with high precision make this method particularly promising for clinical use.
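The center-of-mass step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; `landmark_from_heatmap` and `mean_localization_error` are hypothetical helper names:

```python
import numpy as np

def landmark_from_heatmap(pred: np.ndarray) -> np.ndarray:
    """Collapse a voxelwise prediction (e.g. a spherical segmentation output)
    to a single landmark coordinate via its intensity-weighted center of mass."""
    coords = np.argwhere(pred > 0).astype(float)
    weights = pred[pred > 0].astype(float)
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()

def mean_localization_error(predicted, reference) -> float:
    """Mean Euclidean distance between predicted and reference landmark
    positions (in whatever unit the coordinates are expressed in)."""
    diff = np.asarray(predicted, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```

In practice the voxel coordinates would be scaled by the scan spacing so the error comes out in millimeters, as reported in the study.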
Affiliation(s)
- Julia Vera Weingart
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Stefan Schlager
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Leonard Simon Brandenburg
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Anna Hein
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Rainer Schmelzeisen
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fabian Bamberg
  - Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Suam Kim
  - Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner
  - Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Marco Reisert
  - Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe
  - Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
14
Tao B, Yu X, Wang W, Wang H, Chen X, Wang F, Wu Y. A deep learning-based automatic segmentation of zygomatic bones from cone-beam computed tomography images: A proof of concept. J Dent 2023:104582. [PMID: 37321334 DOI: 10.1016/j.jdent.2023.104582] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 05/28/2023] [Accepted: 06/06/2023] [Indexed: 06/17/2023] Open
Abstract
OBJECTIVES To investigate the efficiency and accuracy of a deep learning-based automatic segmentation method for zygomatic bones from cone-beam computed tomography (CBCT) images. METHODS One hundred thirty CBCT scans were included and randomly divided into three subsets (training, validation, and test) in a 6:2:2 ratio. A deep learning-based model was developed, and it included a classification network and a segmentation network, where an edge supervision module was added to increase the attention of the edges of zygomatic bones. Attention maps were generated by the Grad-CAM and Guided Grad-CAM algorithms to improve the interpretability of the model. The performance of the model was then compared with that of four dentists on 10 CBCT scans from the test dataset. A p value <.05 was considered statistically significant. RESULTS The accuracy of the classification network was 99.64%. The Dice coefficient (Dice) of the deep learning-based model for the test dataset was 92.34 ± 2.04%, the average surface distance (ASD) was 0.1 ± 0.15 mm, and the 95% Hausdorff distance (HD) was 0.98 ± 0.42 mm. The model required 17.03 seconds on average to segment zygomatic bones, whereas this task took 49.3 minutes for dentists to complete. The Dice score of the model for the 10 CBCT scans was 93.2 ± 1.3%, while that of the dentists was 90.37 ± 3.32%. CONCLUSIONS The proposed deep learning-based model could segment zygomatic bones with high accuracy and efficiency compared with those of dentists. CLINICAL SIGNIFICANCE The proposed automatic segmentation model for zygomatic bone could generate an accurate 3D model for the preoperative digital planning of zygoma reconstruction, orbital surgery, zygomatic implant surgery, and orthodontics.
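The Dice coefficient and 95% Hausdorff distance used above are standard overlap and distance metrics for segmentation. A minimal brute-force sketch follows (illustrative only, not the study's code; production toolkits usually compute distances over surface voxels with anisotropic voxel spacing, which this sketch omits):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two binary masks,
    computed brute-force over foreground voxel coordinates (small masks only)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    # pairwise Euclidean distances between every foreground voxel pair
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # directed nearest-neighbor distances in both directions, pooled
    return float(np.percentile(np.hstack([d.min(axis=1), d.min(axis=0)]), 95))
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier voxels, which is why it is preferred for clinical segmentation evaluation.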
Affiliation(s)
- Baoxin Tao
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xinbo Yu
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Wenying Wang
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Haowei Wang
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xiaojun Chen
  - Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Room 805, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- Feng Wang
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Yiqun Wu
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
15
Kim DY, Woo S, Roh JY, Choi JY, Kim KA, Cha JY, Kim N, Kim SJ. Subregional pharyngeal changes after orthognathic surgery in skeletal Class III patients analyzed by convolutional neural networks-based segmentation. J Dent 2023:104565. [PMID: 37308053 DOI: 10.1016/j.jdent.2023.104565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 05/03/2023] [Accepted: 05/27/2023] [Indexed: 06/14/2023] Open
Abstract
OBJECTIVES To evaluate the accuracy of fully automatic segmentation of pharyngeal volumes of interest (VOIs) before and after orthognathic surgery in skeletal Class III patients using a convolutional neural network (CNN) model, and to investigate the clinical applicability of artificial intelligence for quantitative evaluation of treatment changes in pharyngeal VOIs. METHODS A total of 310 cone-beam computed tomography (CBCT) images were divided into a training set (n=150), validation set (n=40), and test set (n=120). The test set comprised matched pairs of pre- and posttreatment images of 60 skeletal Class III patients (mean age 23.1±5.0 years; ANB<-2°) who underwent bimaxillary orthognathic surgery with orthodontic treatment. A 3D U-Net CNN model was applied for fully automatic segmentation and measurement of subregional pharyngeal volumes on pretreatment (T0) and posttreatment (T1) scans. The model's accuracy was compared with human semi-automatic segmentation using the Dice similarity coefficient (DSC) and volume similarity (VS), and the correlation between surgical skeletal changes and model accuracy was assessed. RESULTS The proposed model achieved high performance in subregional pharyngeal segmentation on both T0 and T1 images, with a significant T1-T0 difference in DSC only for the nasopharynx. Region-specific differences among pharyngeal VOIs observed at T0 disappeared on the T1 images. The decreased DSC of nasopharyngeal segmentation after treatment was weakly correlated with the amount of maxillary advancement; there was no correlation between the amount of mandibular setback and model accuracy. CONCLUSIONS The proposed model offers fast and accurate subregional pharyngeal segmentation on both pretreatment and posttreatment CBCT images of skeletal Class III patients.
CLINICAL SIGNIFICANCE We demonstrate the clinical applicability of the CNN model for quantitative evaluation of subregional pharyngeal changes after surgical-orthodontic treatment, providing a basis for a fully integrated multiclass CNN model to predict pharyngeal responses to dentoskeletal treatments.
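Volume similarity (VS), used here alongside the DSC, compares only the segmented volumes, not their spatial overlap. A hedged one-function sketch under the common definition VS = 1 − |V₁ − V₂| / (V₁ + V₂):

```python
import numpy as np

def volume_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Volume similarity between two binary masks:
    VS = 1 - |V1 - V2| / (V1 + V2). Insensitive to where the voxels sit."""
    v1, v2 = int(np.count_nonzero(a)), int(np.count_nonzero(b))
    return 1.0 - abs(v1 - v2) / (v1 + v2)
```

Because VS ignores location, a high VS together with a low DSC signals a correctly sized but misplaced segmentation, which is why studies like this one report both.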
Affiliation(s)
- Dong-Yul Kim
  - Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Seoyeon Woo
  - Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jae-Yon Roh
  - Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jin-Young Choi
  - Department of Orthodontics, Kyung Hee University Dental Hospital, 23, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Kyung-A Kim
  - Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jung-Yul Cha
  - Department of Orthodontics, The Institute of Craniofacial Deformity, College of Dentistry, Yonsei University, 50-1 Yonseiro, Seodaemun-gu, Seoul, 03722, Korea
- Namkug Kim
  - Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Su-Jung Kim
  - Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
16
Gardiyanoğlu E, Ünsal G, Akkaya N, Aksoy S, Orhan K. Automatic Segmentation of Teeth, Crown-Bridge Restorations, Dental Implants, Restorative Fillings, Dental Caries, Residual Roots, and Root Canal Fillings on Orthopantomographs: Convenience and Pitfalls. Diagnostics (Basel) 2023; 13:diagnostics13081487. [PMID: 37189586 DOI: 10.3390/diagnostics13081487] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 02/26/2023] [Accepted: 03/01/2023] [Indexed: 05/17/2023] Open
Abstract
BACKGROUND The aim of our study was to provide successful automatic segmentation of various objects on orthopantomographs (OPGs). METHODS A total of 8,138 OPGs obtained from the archives of the Department of Dentomaxillofacial Radiology were included. The OPGs were converted to PNG format and transferred to the segmentation tool's database. All teeth, crown-bridge restorations, dental implants, composite-amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by two experts using the manual-drawing semantic segmentation technique. RESULTS The intra-class correlation coefficient (ICC) for both inter- and intra-observer manual segmentation was excellent (ICC > 0.75): the intra-observer ICC was 0.994, the inter-observer ICC was 0.989, and no significant difference was detected between observers (p = 0.947). The DSC and accuracy values across all OPGs were 0.85 and 0.95 for tooth segmentation, 0.88 and 0.99 for dental caries, 0.87 and 0.99 for dental restorations, 0.93 and 0.99 for crown-bridge restorations, 0.94 and 0.99 for dental implants, 0.78 and 0.99 for root canal fillings, and 0.78 and 0.99 for residual roots, respectively. CONCLUSIONS Faster, automated diagnosis on 2D as well as 3D dental images can give dentists higher diagnosis rates in a shorter time, even without excluding cases.
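The inter- and intra-observer agreement above is an intraclass correlation coefficient. The abstract does not state which ICC form was used, so the sketch below assumes the common two-way random, absolute-agreement, single-measure form ICC(2,1); it is illustrative, not the study's code:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `ratings` is an (n_subjects, k_raters) array of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # mean squares from the two-way ANOVA decomposition
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A systematic offset between raters lowers ICC(2,1) even when their rankings agree perfectly, which is the "absolute agreement" property relevant to segmentation area measurements.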
Affiliation(s)
- Emel Gardiyanoğlu
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Gürkan Ünsal
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
  - DESAM Institute, Near East University, 99138 Nicosia, Cyprus
- Nurullah Akkaya
  - Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, 99138 Nicosia, Cyprus
- Seçil Aksoy
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Kaan Orhan
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06560 Ankara, Turkey
17
Synergy between artificial intelligence and precision medicine for computer-assisted oral and maxillofacial surgical planning. Clin Oral Investig 2023; 27:897-906. [PMID: 36323803 DOI: 10.1007/s00784-022-04706-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 08/29/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVES The aim of this review was to investigate the application of artificial intelligence (AI) in maxillofacial computer-assisted surgical planning (CASP) workflows and to discuss limitations and possible future directions. MATERIALS AND METHODS An in-depth literature search was undertaken to review articles concerned with the application of AI to the segmentation, multimodal image registration, virtual surgical planning (VSP), and three-dimensional (3D) printing steps of maxillofacial CASP workflows. RESULTS Existing AI models were trained to address individual steps of CASP, and no single intelligent workflow was found encompassing all steps of the planning process. Segmentation of dentomaxillofacial tissue from computed tomography (CT)/cone-beam CT imaging was the most commonly explored area with potential clinical applicability. Nevertheless, a lack of generalizability was the main issue, as most models were trained on data derived from a single device and imaging protocol and might not perform similarly on data from other devices. For registration, VSP, and 3D printing, inadequate and heterogeneous data limit the automation of these tasks. CONCLUSION The synergy between AI and CASP workflows has the potential to improve planning precision and efficacy. However, future studies with big data are needed before this emerging technology finds application in real clinical settings. CLINICAL RELEVANCE Implementing AI models in maxillofacial CASP workflows could minimize the surgeon's workload and increase the efficiency and consistency of the planning process, while enhancing patient-specific predictability.
18
Önder M, Evli C, Türk E, Kazan O, Bayrakdar İŞ, Çelik Ö, Costa ALF, Gomes JPP, Ogawa CM, Jagtap R, Orhan K. Deep-Learning-Based Automatic Segmentation of Parotid Gland on Computed Tomography Images. Diagnostics (Basel) 2023; 13:581. [PMID: 36832069 PMCID: PMC9955422 DOI: 10.3390/diagnostics13040581] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 01/23/2023] [Accepted: 02/02/2023] [Indexed: 02/08/2023] Open
Abstract
This study aimed to develop an algorithm for automatic segmentation of the parotid gland on head and neck CT images using the U-Net architecture and to evaluate the model's performance. In this retrospective study, 30 anonymized CT volumes of the head and neck were sliced into 931 axial images of the parotid glands. Ground truth labeling was performed with the CranioCatch Annotation Tool (CranioCatch, Eskisehir, Turkey) by two oral and maxillofacial radiologists. The images were resized to 512 × 512 pixels and split into training (80%), validation (10%), and testing (10%) subgroups. A deep convolutional neural network model was developed using the U-Net architecture. Automatic segmentation performance was evaluated in terms of the F1-score, precision, sensitivity, and area under the curve (AUC). A segmentation was considered successful when more than 50% of its pixels intersected the ground truth. The F1-score, precision, and sensitivity of the AI model in segmenting the parotid glands on the axial CT slices were all found to be 1, and the AUC was 0.96. This study shows that deep learning-based AI models can automatically segment the parotid gland on axial CT images.
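The pixelwise metrics reported above all derive from confusion counts between the predicted and ground-truth masks. A minimal sketch of that computation (illustrative, not the study's code; `seg_metrics` is a hypothetical helper name):

```python
import numpy as np

def seg_metrics(pred: np.ndarray, truth: np.ndarray):
    """Pixelwise precision, sensitivity (recall), and F1 for a binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # predicted and present
    fp = np.logical_and(pred, ~truth).sum()   # predicted but absent
    fn = np.logical_and(~pred, truth).sum()   # missed
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1
```

Note that the F1-score equals the Dice coefficient for binary masks, which is why segmentation papers often report them interchangeably.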
Affiliation(s)
- Merve Önder: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
- Cengiz Evli: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
- Ezgi Türk: Dentomaxillofacial Radiology, Oral and Dental Health Center, Hatay 31040, Turkey
- Orhan Kazan: Health Services Vocational School, Gazi University, Ankara 06560, Turkey
- İbrahim Şevki Bayrakdar: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir 26040, Turkey; Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, Eskişehir 26040, Turkey; Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Özer Çelik: Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, Eskişehir 26040, Turkey; Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskişehir 26040, Turkey
- Andre Luiz Ferreira Costa: Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 01506-000, SP, Brazil
- João Pedro Perez Gomes: Department of Stomatology, Division of General Pathology, School of Dentistry, University of São Paulo (USP), São Paulo 13560-970, SP, Brazil
- Celso Massahiro Ogawa: Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 01506-000, SP, Brazil
- Rohan Jagtap: Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Kaan Orhan: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-093 Lublin, Poland; Ankara University Medical Design Application and Research Center (MEDITAM), Ankara 06000, Turkey
19
Dot G, Schouman T, Chang S, Rafflenbeul F, Kerbrat A, Rouch P, Gajny L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J Dent Res 2022; 101:1380-1387. [PMID: 35982646 DOI: 10.1177/00220345221112333] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The increasing use of 3-dimensional (3D) imaging by orthodontists and maxillofacial surgeons to assess complex dentofacial deformities and plan orthognathic surgeries implies a critical need for 3D cephalometric analysis. Although promising methods have been suggested to localize 3D landmarks automatically, concerns about robustness and generalizability restrain their clinical use. Consequently, highly trained operators remain needed to perform manual landmarking. In this retrospective diagnostic study, we aimed to train and evaluate a deep learning (DL) pipeline based on SpatialConfiguration-Net for automatic localization of 3D cephalometric landmarks on computed tomography (CT) scans. A retrospective sample of consecutive presurgical CT scans was randomly distributed between a training/validation set (n = 160) and a test set (n = 38). The reference data consisted of 33 landmarks, manually localized once by 1 operator (n = 178) or twice by 3 operators (n = 20, test set only). After inference on the test set, 1 CT scan showed "very low" confidence level predictions; we excluded it from the overall analysis but still assessed and discussed the corresponding results. The model performance was evaluated by comparing the predictions with the reference data; the outcome set included localization accuracy, cephalometric measurements, and comparison to manual landmarking reproducibility. On the hold-out test set, the mean localization error was 1.0 ± 1.3 mm, while success detection rates for 2.0, 2.5, and 3.0 mm were 90.4%, 93.6%, and 95.4%, respectively. Mean errors were -0.3 ± 1.3° and -0.1 ± 0.7 mm for angular and linear measurements, respectively. When compared to manual reproducibility, the measurements were within the Bland-Altman 95% limits of agreement for 91.9% and 71.8% of skeletal and dentoalveolar variables, respectively.
To conclude, while our DL method still requires improvement, it provided highly accurate 3D landmark localization on a challenging test set, with a reliability for skeletal evaluation on par with what clinicians obtain.
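The localization error and success detection rate (SDR) reported above can be illustrated with a short sketch (function names are ours; the study's actual SpatialConfiguration-Net pipeline is not reproduced here):

```python
import numpy as np

def landmark_errors(pred_pts, ref_pts):
    """Euclidean localization error (in mm) per landmark; inputs are
    (n_landmarks, 3) arrays of predicted and reference coordinates."""
    diff = np.asarray(pred_pts, float) - np.asarray(ref_pts, float)
    return np.linalg.norm(diff, axis=1)

def success_detection_rate(errors, threshold_mm):
    """Fraction of landmarks localized within threshold_mm of the reference."""
    errors = np.asarray(errors, float)
    return float((errors <= threshold_mm).mean())
```

For example, two landmarks with errors of 0 mm and 5 mm give an SDR of 50% at the 2.0 mm threshold.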
Affiliation(s)
- G Dot: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Universite Paris Cite, AP-HP, Hopital Pitie Salpetriere, Service de Medecine Bucco-Dentaire, Paris, France
- T Schouman: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- S Chang: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- F Rafflenbeul: Department of Dentofacial Orthopedics, Faculty of Dental Surgery, Strasbourg University, Strasbourg, France
- A Kerbrat: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- P Rouch: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- L Gajny: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
20
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. [PMID: 35831451 PMCID: PMC9279304 DOI: 10.1038/s41598-022-15920-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 07/01/2022] [Indexed: 11/21/2022] Open
Abstract
This study aims to develop and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat), thereby providing a measurement method. The second aim is to validate the newly developed artificial intelligence system against commercially available software for 3D CBCT evaluation. A convolutional neural network-based machine learning algorithm was used for the segmentation of the pharyngeal airways in OSA and non-OSA patients. Radiologists used semi-automatic software to manually delineate the airway, and their measurements were compared with those of the AI. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest point of the airway (mm), the cross-sectional area of the airway (mm2), and the volume of the airway (cc) of both OSA and non-OSA patients were also compared. There was no statistically significant difference between the manual technique and Diagnocat measurements in any group (p > 0.05). Intraclass correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume between the manual, automatic, and Diagnocat (DC) measurements in non-OSA and OSA patients, we evaluated the output images to understand why the mean total airway value was higher in the DC measurement. The DC algorithm also measures the epiglottis volume and the posterior nasal aperture volume because of the low soft-tissue contrast in CBCT images, which leads to higher airway volume measurements.
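The rater-agreement coefficients reported above are intraclass correlation coefficients (ICCs). The abstract does not state which ICC form was used; as one common choice, a two-way random-effects, single-measure ICC(2,1) can be computed from the ANOVA mean squares of an n-subjects-by-k-raters matrix of, e.g., airway volumes (a sketch under that assumption, not the study's actual computation):

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects, single-measure ICC(2,1) from the ANOVA
    mean squares of an (n_subjects, k_raters) ratings matrix."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    # Sums of squares: between subjects (rows), between raters (columns),
    # and residual error.
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

Values near 1, like the 0.95 to 0.97 coefficients reported above, indicate near-perfect agreement between the measurement methods.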
Affiliation(s)
- Kaan Orhan: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland
- Aida Kurbanova: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Gürkan Ünsal: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus; Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
- Seçil Aksoy: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Melis Mısırlı: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Finn Rasmussen: Internal Medicine Department Lunge Section, SVS Esbjerg, Esbjerg, Denmark; Life Lung Health Center, Nicosia, Cyprus
21
Performance of Artificial Intelligence Models Designed for Diagnosis, Treatment Planning and Predicting Prognosis of Orthognathic Surgery (OGS)—A Scoping Review. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12115581] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Technological advancements in medical science have led to an escalation in the development of artificial intelligence (AI) applications, which are being extensively used in the health sciences. This scoping review aims to outline the application and performance of artificial intelligence models used for diagnosing, planning treatment, and predicting the prognosis of orthognathic surgery (OGS). Data for this paper were gathered by searching electronic databases including PubMed, Google Scholar, Scopus, Web of Science, Embase, and Cochrane for articles related to the research topic published between January 2000 and February 2022. Eighteen articles that met the eligibility criteria were critically analyzed based on the QUADAS-2 guidelines, and the certainty of evidence of the included studies was assessed using the GRADE approach. AI has been applied to predicting postoperative facial profiles and facial symmetry, deciding on the need for OGS, predicting perioperative blood loss, planning OGS, segmenting maxillofacial structures for OGS, and the differential diagnosis of OGS. AI models have proven to be efficient and have outperformed conventional methods. These models are reported to be reliable and reproducible, so they can be very useful to less experienced practitioners in clinical decision making and in achieving better clinical outcomes.