1. Humbert-Vidan L, Castelo AH, He R, van Dijk LV, Rhee DJ, Wang C, Wang HC, Wahid KA, Joshi S, Gerafian P, West N, Kaffey Z, Mirbahaeddin S, Curiel J, Acharya S, Shekha A, Oderinde P, Ali AMS, Hope A, Watson E, Wesson-Aponte R, Frank SJ, Barbon CEA, Brock KK, Chambers MS, Walji M, Hutcheson KA, Lai SY, Fuller CD, Naser MA, Moreno AC. Image-based Mandibular and Maxillary Parcellation and Annotation using Computer Tomography (IMPACT): A Deep Learning-based Clinical Tool for Orodental Dose Estimation and Osteoradionecrosis Assessment. medRxiv 2025:2025.03.18.25324199. PMID: 40166584; PMCID: PMC11957087; DOI: 10.1101/2025.03.18.25324199.
Abstract
Background: Accurate delineation of orodental structures on radiotherapy CT images is essential for dosimetric assessments and dental decisions. We propose a deep-learning (DL) auto-segmentation framework for individual teeth and mandible/maxilla sub-volumes aligned with the ClinRad ORN staging system.
Methods: Mandible and maxilla sub-volumes were manually defined, differentiating between alveolar and basal regions, and teeth were labelled individually. A DL segmentation model was trained independently for each task. A Swin UNETR-based model was used for the mandible sub-volumes. For the smaller structures (e.g., teeth and maxilla sub-volumes), a two-stage model first used a ResUNet to segment the entire teeth and maxilla regions as a single ROI, which was then used to crop the image input of the Swin UNETR. In addition to segmentation accuracy and geometric precision, a dosimetric comparison was made between manual and model-predicted segmentations.
Results: Segmentation performance varied across sub-volumes, with mean Dice values of 0.85 (mandible basal), 0.82 (mandible alveolar), 0.78 (maxilla alveolar), 0.80 (upper central teeth), 0.69 (upper premolars), 0.76 (upper molars), 0.76 (lower central teeth), 0.70 (lower premolars), and 0.71 (lower molars), and the models showed limited applicability for teeth and sub-volumes often absent from the data. Only the maxilla alveolar central sub-volume showed a statistically significant dosimetric difference (Bonferroni-adjusted p = 0.02).
Conclusion: We present a novel DL-based auto-segmentation framework for orodental structures, enabling spatial localization of dose-related differences in the jaw. This tool enhances image-based bone injury detection, including ORN, and improves clinical decision-making in radiation oncology and dental care for head and neck cancer patients.
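The two statistics this abstract leans on, the Dice similarity coefficient per sub-volume and Bonferroni-adjusted p-values for the dosimetric comparisons, are mechanical to compute. The sketch below is illustrative only (not the IMPACT code); binary masks are modelled as small sets of voxel indices.

```python
# Illustrative sketch, not the authors' implementation.
# Masks are sets of voxel coordinates; real pipelines use 3D arrays.

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    if not mask_a and not mask_b:
        return 1.0  # both empty: conventionally perfect agreement
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def bonferroni(p_values):
    """Bonferroni-adjusted p-values: p * m, capped at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

manual    = {(x, y, 0) for x in range(4) for y in range(4)}   # 16 voxels
predicted = {(x, y, 0) for x in range(4) for y in range(3)}   # 12 voxels
print(round(dice(manual, predicted), 3))   # 2*12/(16+12), i.e. 0.857
print(bonferroni([0.002, 0.04, 0.5]))      # m = 3 comparisons
```

The same adjustment explains why a raw p-value can survive as 0.02 after correction only when it was proportionally smaller before multiplying by the number of sub-volume comparisons.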
Affiliation(s)
- Laia Humbert-Vidan
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Austin H Castelo
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Lisanne V van Dijk
  - Department of Radiation Oncology, University Medical Center Groningen, Groningen, Netherlands
- Dong Joo Rhee
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Congjun Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- He C Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kareem A Wahid
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Sonali Joshi
  - California University of Science and Medicine, Cerritos, California, USA
- Natalie West
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Zaphanlene Kaffey
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Sarah Mirbahaeddin
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jaqueline Curiel
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Samrina Acharya
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Amal Shekha
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Praise Oderinde
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Alaa M S Ali
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Andrew Hope
  - Department of Radiation Oncology, Princess Margaret Cancer Center, Toronto, Canada
- Erin Watson
  - Department of Dental Oncology, Princess Margaret Cancer Center, Toronto, Canada
- Ruth Wesson-Aponte
  - Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Steven J Frank
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Carly E A Barbon
  - Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kristy K Brock
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Mark S Chambers
  - Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Muhammad Walji
  - Department of Clinical and Health Informatics, Texas Center of Oral Health Care Quality & Safety, Houston, Texas, USA
- Katherine A Hutcheson
  - Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Stephen Y Lai
  - Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Mohamed A Naser
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Amy C Moreno
  - Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
2. Alahmari M, Alahmari M, Almuaddi A, Abdelmagyd H, Rao K, Hamdoon Z, Alsaegh M, Chaitanya NCSK, Shetty S. Accuracy of artificial intelligence-based segmentation in maxillofacial structures: a systematic review. BMC Oral Health 2025;25:350. PMID: 40055718; PMCID: PMC11887095; DOI: 10.1186/s12903-025-05730-y.
Abstract
OBJECTIVE: The aim of this review was to evaluate the accuracy of artificial intelligence (AI) in the segmentation of teeth, jawbone (maxilla, mandible with temporomandibular joint), and mandibular (inferior alveolar) canal in CBCT and CT scans.
MATERIALS AND METHODS: Articles were retrieved from MEDLINE, Cochrane CENTRAL, IEEE Xplore, and Google Scholar. Eligible studies were analyzed thematically, and their quality was appraised using the JBI checklist for diagnostic test accuracy studies. Meta-analysis was conducted for key performance metrics, including the Dice similarity coefficient (DSC) and average surface distance (ASD).
RESULTS: A total of 767 non-duplicate articles were identified, and 30 studies were included in the review. Of these, 27 employed deep-learning models, while 3 utilized classical machine-learning approaches. The pooled DSC was 0.94 (95% CI: 0.91-0.98) for mandible segmentation, 0.694 (95% CI: 0.551-0.838) for the mandibular canal, 0.907 (95% CI: 0.867-0.948) for the maxilla, and 0.925 (95% CI: 0.891-0.959) for teeth. Pooled ASD values were 0.534 mm (95% CI: 0.366-0.703) for the mandibular canal, 0.468 mm (95% CI: 0.295-0.641) for the maxilla, and 0.189 mm (95% CI: 0.043-0.335) for teeth. Other metrics, such as sensitivity and precision, were variably reported, with sensitivity exceeding 90% across studies.
CONCLUSION: AI-based segmentation, particularly using deep-learning models, demonstrates high accuracy in the segmentation of dental and maxillofacial structures, comparable to expert manual segmentation. The integration of AI into clinical workflows offers not only accuracy but also substantial time savings, positioning it as a promising tool for automated dental imaging.
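The "pooled DSC (95% CI)" figures above come from a meta-analysis. A minimal fixed-effect, inverse-variance pooling sketch follows; this is my own illustration with invented study numbers, and the review itself may use a random-effects model, which would widen the intervals.

```python
# Hedged sketch of inverse-variance pooling; not the review's code.
# Each study is (mean DSC, 95% CI lower bound, 95% CI upper bound).
import math

def pool_fixed_effect(studies):
    """Inverse-variance pooled mean with a 95% CI.

    Standard errors are recovered from the reported 95% CIs
    (SE = CI width / (2 * 1.96))."""
    weights, weighted = [], []
    for mean, lo, hi in studies:
        se = (hi - lo) / (2 * 1.96)
        w = 1.0 / (se * se)          # precision weight
        weights.append(w)
        weighted.append(w * mean)
    pooled = sum(weighted) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Three hypothetical mandible-segmentation studies (invented numbers)
studies = [(0.94, 0.91, 0.97), (0.92, 0.88, 0.96), (0.95, 0.93, 0.97)]
mean, lo, hi = pool_fixed_effect(studies)
print(f"pooled DSC = {mean:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Note how narrower study CIs receive larger weights, which is why a single precise study can dominate a fixed-effect pool.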
Affiliation(s)
- Manea Alahmari
  - College of Dentistry, King Khalid University, Abha, Saudi Arabia
- Maram Alahmari
  - Armed Forces Hospital Southern Region, Khamis Mushait, Saudi Arabia
- Hossam Abdelmagyd
  - College of Dentistry, Suez Canal University, Ajman, United Arab Emirates
- Kumuda Rao
  - AB Shetty Memorial Institute of Dental Sciences, Nitte (Deemed to be University), Mangalore, India
- Zaid Hamdoon
  - College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Mohammed Alsaegh
  - College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Nallan C S K Chaitanya
  - College of Dental Sciences, RAK Medical and Health Sciences University, Ras-Al-Khaimah, United Arab Emirates
- Shishir Shetty
  - College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
3. Yurdakurban E, Süküt Y, Duran GS. Assessment of deep learning technique for fully automated mandibular segmentation. Am J Orthod Dentofacial Orthop 2025;167:242-249. PMID: 39863342; DOI: 10.1016/j.ajodo.2024.09.006.
Abstract
INTRODUCTION: This study aimed to assess the precision of an open-source, clinician-trained, and user-friendly convolutional neural network-based model for automatically segmenting the mandible.
METHODS: A total of 55 cone-beam computed tomography scans that met the inclusion criteria were collected and divided into test and training groups. The MONAI (Medical Open Network for Artificial Intelligence) Label active learning tool extension was used to train the automatic model. To assess the model's performance, 15 cone-beam computed tomography scans from the test group were inputted into the model. The ground truth was obtained from manual segmentation data. Metrics including the Dice similarity coefficient, Hausdorff 95%, precision, recall, and segmentation times were calculated. In addition, surface deviations and volumetric differences between the automated and manual segmentation results were analyzed.
RESULTS: The automated model showed a high level of similarity to the manual segmentation results, with a mean Dice similarity coefficient of 0.926 ± 0.014. The Hausdorff distance was 1.358 ± 0.466 mm, whereas the mean recall and precision values were 0.941 ± 0.028 and 0.941 ± 0.022, respectively. There were no statistically significant differences in the arithmetic mean of the surface deviation for the entire mandible and 11 different anatomic regions. In terms of volumetric comparisons, the difference between the 2 groups was 1.62 mm³, which was not statistically significant.
CONCLUSIONS: The automated model was found to be suitable for clinical use, demonstrating a high degree of agreement with the reference manual method. Clinicians can use open-source software to develop custom automated segmentation models tailored to their specific needs.
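The Dice, precision, and recall values this abstract reports are all functions of the same three voxel overlap counts. A minimal sketch (with hypothetical counts, not the study's data) makes the relationship explicit:

```python
# Assumed-for-illustration relationship between overlap counts and the
# three reported metrics: tp = voxels in both masks, fp = prediction
# only, fn = ground truth only.

def overlap_metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # a.k.a. sensitivity
    dsc = 2 * tp / (2 * tp + fp + fn)    # Dice similarity coefficient
    return precision, recall, dsc

# e.g. 900 voxels agree, 60 are over-segmented, 55 are missed
p, r, d = overlap_metrics(tp=900, fp=60, fn=55)
print(round(p, 3), round(r, 3), round(d, 3))   # → 0.938 0.942 0.94
```

The DSC is the harmonic mean of precision and recall (the F1-score), which is why the three numbers in such tables tend to move together.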
Affiliation(s)
- Ebru Yurdakurban
  - Department of Orthodontics, Faculty of Dentistry, Muğla Sıtkı Koçman University, Muğla, Turkey
- Yağızalp Süküt
  - Department of Orthodontics, Gulhane Faculty of Dentistry, University of Health Sciences, Ankara, Turkey
- Gökhan Serhat Duran
  - Department of Orthodontics, Faculty of Dentistry, Çanakkale Onsekiz Mart University, Çanakkale, Turkey
4. Flügge T, Vinayahalingam S, van Nistelrooij N, Kellner S, Xi T, van Ginneken B, Bergé S, Heiland M, Kernen F, Ludwig U, Odaka K. Automated tooth segmentation in magnetic resonance scans using deep learning - A pilot study. Dentomaxillofac Radiol 2025;54:12-18. PMID: 39589897; PMCID: PMC11664100; DOI: 10.1093/dmfr/twae059.
Abstract
OBJECTIVES: The main objective was to develop and evaluate an artificial intelligence model for tooth segmentation in magnetic resonance (MR) scans.
METHODS: MR scans of 20 patients, acquired with a commercial 64-channel head coil using a T1-weighted 3D-SPACE (Sampling Perfection with Application Optimized Contrasts using different flip angle Evolution) sequence, were included. Sixteen datasets were used for model training and 4 for accuracy evaluation. Two clinicians segmented and annotated the teeth in each dataset. A segmentation model was trained using the nnU-Net framework. The manual reference tooth segmentation and the inferred tooth segmentation were superimposed and compared by computing precision, sensitivity, and the Dice-Sørensen coefficient. Surface meshes were extracted from the segmentations, and the distances between points on each mesh and their closest counterparts on the other mesh were computed; the mean (average symmetric surface distance) and 95th percentile (Hausdorff distance 95%, HD95) of these distances were reported.
RESULTS: The model achieved an overall precision of 0.867, a sensitivity of 0.926, a Dice-Sørensen coefficient of 0.895, and a 95% Hausdorff distance of 0.91 mm. The model predictions were less accurate for datasets containing dental restorations due to image artefacts.
CONCLUSIONS: The current study developed an automated method for tooth segmentation in MR scans, with high effectiveness for scans without artefacts and moderate effectiveness for scans with artefacts.
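The two surface metrics defined in this abstract, the average symmetric surface distance (ASSD) and the 95th-percentile Hausdorff distance (HD95), can be sketched on plain point sets standing in for real surface meshes. This is an illustration of the definitions only, not the study's mesh-based pipeline.

```python
# Illustrative-only ASSD and HD95 on 2D point sets; real implementations
# work on dense mesh vertices and use spatial indexing for speed.
import math

def _nearest(p, points):
    """Distance from point p to its closest counterpart in `points`."""
    return min(math.dist(p, q) for q in points)

def surface_distances(a, b):
    """Symmetric pool of nearest-neighbour distances, both directions."""
    return [_nearest(p, b) for p in a] + [_nearest(q, a) for q in b]

def assd(a, b):
    d = surface_distances(a, b)
    return sum(d) / len(d)                      # mean of the pool

def hd95(a, b):
    d = sorted(surface_distances(a, b))
    return d[max(0, math.ceil(0.95 * len(d)) - 1)]  # 95th percentile

# Two parallel "surfaces" 1 mm apart
ref  = [(x, 0.0) for x in range(5)]
pred = [(x, 1.0) for x in range(5)]
print(assd(ref, pred), hd95(ref, pred))   # → 1.0 1.0
```

Taking the 95th percentile rather than the maximum is what makes HD95 robust to a few stray outlier points on either surface.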
Affiliation(s)
- Tabea Flügge
  - Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Shankeeth Vinayahalingam
  - Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
  - Department of Artificial Intelligence, Radboud University, Thomas van Aquinostraat 4, Nijmegen, 6525 GD, the Netherlands
  - Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Waldeyerstraße 30, 48149 Münster, Germany
- Niels van Nistelrooij
  - Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
  - Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
- Stefanie Kellner
  - Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Tong Xi
  - Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
- Bram van Ginneken
  - Department of Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, Nijmegen, 6525 GA, the Netherlands
- Stefaan Bergé
  - Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, 6525 EX, the Netherlands
- Max Heiland
  - Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
- Florian Kernen
  - Department of Oral and Maxillofacial Surgery, Translational Implantology, Medical Center, Faculty of Medicine, University of Freiburg, Hugstetter Straße 55, 79106 Freiburg, Germany
- Ute Ludwig
  - Division of Medical Physics, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, University Medical Center Freiburg, University of Freiburg, Kilianstraße 5a, 79106 Freiburg im Breisgau, Germany
- Kento Odaka
  - Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
  - Department of Oral and Maxillofacial Radiology, Tokyo Dental College, 2-9-18, Kandamisakicho, Chiyoda-ku, Tokyo, 101-0061, Japan
5. Kargilis DC, Xu W, Reddy S, Ramesh SSK, Wang S, Le AD, Rajapakse CS. Deep learning segmentation of mandible with lower dentition from cone beam CT. Oral Radiol 2025;41:1-9. PMID: 39141154; DOI: 10.1007/s11282-024-00770-6.
Abstract
OBJECTIVES: This study aimed to train a 3D U-Net convolutional neural network (CNN) for mandible and lower dentition segmentation from cone-beam computed tomography (CBCT) scans.
METHODS: In an ambispective cross-sectional design, CBCT scans from two hospitals (2009-2019 and 2021-2022) constituted an internal dataset and an external validation set, respectively. Manual segmentation informed CNN training, and evaluations employed the Dice similarity coefficient (DSC) for volumetric accuracy. A blinded oral and maxillofacial surgeon performed qualitative grading of CBCT scans and object meshes. Statistical analyses included independent t-tests and ANOVA to compare DSC across patient subgroups of gender, race, body mass index (BMI), test dataset used, age, and degree of metal artifact. Tests were powered for a minimum detectable difference in DSC of 0.025, with an alpha of 0.05 and a power of 0.8.
RESULTS: A total of 648 CBCT scans from 490 patients were included in the study. The CNN achieved high accuracy (average DSC: 0.945 internal, 0.940 external). No DSC differences were observed between test set used, gender, BMI, and race. Significant differences in DSC were identified based on age group and degree of metal artifact. The majority (80%) of object meshes produced by both manual and automatic segmentation were rated as acceptable or higher quality.
CONCLUSION: We developed a model for automatic mandible and lower dentition segmentation from CBCT scans in a demographically diverse cohort including a high degree of metal artifacts. The model demonstrated good accuracy on internal and external test sets, with majority-acceptable quality from a clinical grader.
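The power statement above (minimum detectable difference in DSC of 0.025, alpha 0.05, power 0.8) implies a per-group sample size. A back-of-envelope normal-approximation check follows; this is my own sketch, not the study's power analysis, and the DSC standard deviation used is an assumed placeholder.

```python
# Normal-approximation sample size for a two-sample comparison;
# sigma below is invented for illustration.
import math

Z_975 = 1.959964   # z for two-sided alpha = 0.05
Z_80  = 0.841621   # z for power = 0.80

def n_per_group(delta, sigma):
    """Approximate per-group n to detect a mean difference `delta`."""
    return math.ceil(2 * ((Z_975 + Z_80) * sigma / delta) ** 2)

print(n_per_group(delta=0.025, sigma=0.04))   # assumed sigma of 0.04
```

Because n scales with (sigma/delta) squared, halving the detectable difference quadruples the required sample per subgroup, which is why subgroup analyses need large cohorts like the 648 scans here.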
Affiliation(s)
- Daniel C Kargilis
  - University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
  - Johns Hopkins University, Baltimore, USA
- Winnie Xu
  - University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Samir Reddy
  - University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Steven Wang
  - University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Anh D Le
  - University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Chamith S Rajapakse
  - University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
6. Liu W, Li X, Liu C, Gao G, Xiong Y, Zhu T, Zeng W, Guo J, Tang W. Automatic classification and segmentation of multiclass jaw lesions in cone-beam CT using deep learning. Dentomaxillofac Radiol 2024;53:439-446. PMID: 38937280; DOI: 10.1093/dmfr/twae028.
Abstract
OBJECTIVES: To develop and validate a modified deep learning (DL) model based on nnU-Net for classifying and segmenting five-class jaw lesions using cone-beam CT (CBCT).
METHODS: A total of 368 CBCT scans (37 168 slices) were used to train a multi-class segmentation model. The data underwent manual annotation by two oral and maxillofacial surgeons (OMSs) to serve as ground truth. Sensitivity, specificity, precision, F1-score, and accuracy were used to evaluate the classification ability of the model and of doctors, with or without artificial intelligence assistance. The Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and segmentation time were used to evaluate the segmentation performance of the model.
RESULTS: The model achieved the dual task of classifying and segmenting jaw lesions in CBCT. For classification, the sensitivity, specificity, precision, and accuracy of the model were 0.871, 0.974, 0.874, and 0.891, respectively, surpassing oral and maxillofacial radiologists (OMFRs) and OMSs and approaching specialist-level performance. With the model's assistance, the classification performance of OMFRs and OMSs improved, particularly for odontogenic keratocyst (OKC) and ameloblastoma (AM), with F1-score improvements ranging from 6.2% to 12.7%. For segmentation, the DSC was 87.2% and the ASSD was 1.359 mm. The model's average segmentation time was 40 ± 9.9 s, compared with 25 ± 7.2 min for OMSs.
CONCLUSIONS: The proposed DL model accurately and efficiently classified and segmented five classes of jaw lesions using CBCT. In addition, it could assist doctors in improving classification accuracy and segmentation efficiency, particularly in distinguishing easily confused lesions (eg, AM and OKC).
Affiliation(s)
- Wei Liu
  - State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Xiang Li
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, China
- Chang Liu
  - State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Ge Gao
  - State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Yutao Xiong
  - State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Tao Zhu
  - State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Wei Zeng
  - State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Jixiang Guo
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, China
- Wei Tang
  - State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
7. Melerowitz L, Sreenivasa S, Nachbar M, Stsefanenka A, Beck M, Senger C, Predescu N, Ullah Akram S, Budach V, Zips D, Heiland M, Nahles S, Stromberger C. Design and evaluation of a deep learning-based automatic segmentation of maxillary and mandibular substructures using a 3D U-Net. Clin Transl Radiat Oncol 2024;47:100780. PMID: 38712013; PMCID: PMC11070663; DOI: 10.1016/j.ctro.2024.100780.
Abstract
Background: Current segmentation approaches for radiation treatment planning in head and neck cancer patients (HNCP) typically consider the entire mandible as an organ at risk, whereas segmentation of the maxilla remains uncommon. Accurate risk assessment for osteoradionecrosis (ORN) or implant-based dental rehabilitation after radiation therapy may require a nuanced analysis of dose distribution in specific mandibular and maxillary segments. Manual segmentation is time-consuming and inconsistent, and there is no definition of jaw subsections.
Materials and methods: The mandible and maxilla were divided into 12 substructures. The model was developed from 82 computed tomography (CT) scans of HNCP and adopts an encoder-decoder three-dimensional (3D) U-Net structure. The efficiency and accuracy of the automated method were compared against manual segmentation on an additional set of 20 independent CT scans. The evaluation metrics used were the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and surface DSC (sDSC).
Results: Automated segmentations were performed in a median of 86 s, compared to manual segmentations, which took a median of 53.5 min. The median DSC per substructure ranged from 0.81 to 0.91, and the median HD95 ranged from 1.61 to 4.22. The number of artifacts did not affect these scores. The maxillary substructures showed lower metrics than the mandibular substructures.
Conclusions: The jaw substructure segmentation demonstrated high accuracy, time efficiency, and promising results in CT scans with and without metal artifacts. This novel model could provide further investigation into dose relationships with ORN or dental implant failure in normal tissue complication prediction models.
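Alongside DSC and HD95, this abstract reports the surface DSC (sDSC): the fraction of surface points lying within a tolerance of the other surface. The sketch below illustrates that definition only; the tolerance value and point sets are invented, and point lists stand in for real contour surfaces.

```python
# Illustrative surface-DSC sketch; not the paper's implementation.
import math

def surface_dsc(a, b, tau):
    """Fraction of surface points within tolerance `tau` of the
    opposing surface, symmetrized over both surfaces."""
    near_a = sum(1 for p in a if min(math.dist(p, q) for q in b) <= tau)
    near_b = sum(1 for q in b if min(math.dist(p, q) for p in a) <= tau)
    return (near_a + near_b) / (len(a) + len(b))

ref  = [(x, 0.0) for x in range(10)]
pred = [(x, 0.5 if x < 8 else 3.0) for x in range(10)]  # last 2 points stray
print(surface_dsc(ref, pred, tau=1.0))   # → 0.8
```

Unlike the volumetric DSC, sDSC ignores deviations smaller than the tolerance, so it rewards contours that are "clinically close" even when voxel overlap is imperfect.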
Affiliation(s)
- L. Melerowitz
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- S. Sreenivasa
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- M. Nachbar
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- A. Stsefanenka
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- M. Beck
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- C. Senger
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- N. Predescu
  - MVision AI, Paciuksenkatu 29, 00270 Helsinki, Finland
- V. Budach
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- D. Zips
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- M. Heiland
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353, Berlin, Germany
- S. Nahles
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353, Berlin, Germany
- C. Stromberger
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
8. Xiang B, Lu J, Yu J. Evaluating tooth segmentation accuracy and time efficiency in CBCT images using artificial intelligence: A systematic review and meta-analysis. J Dent 2024;146:105064. PMID: 38768854; DOI: 10.1016/j.jdent.2024.105064.
Abstract
OBJECTIVES: This systematic review and meta-analysis aimed to assess the current performance of artificial intelligence (AI)-based methods for tooth segmentation in three-dimensional cone-beam computed tomography (CBCT) images, with a focus on their accuracy and efficiency compared with those of manual segmentation techniques.
DATA: The data analyzed in this review consisted of a wide range of research studies utilizing AI algorithms for tooth segmentation in CBCT images. Meta-analysis was performed, focusing on evaluation of the segmentation results using the Dice similarity coefficient (DSC).
SOURCES: PubMed, Embase, Scopus, Web of Science, and IEEE Xplore were comprehensively searched to identify relevant studies. The initial search yielded 5642 entries, and subsequent screening and selection led to the inclusion of 35 studies in the systematic review. Among the segmentation methods employed, convolutional neural networks, particularly the U-Net model, were the most common. The pooled DSC for tooth segmentation was 0.95 (95% CI 0.94 to 0.96). Furthermore, seven papers provided insights into the time required for segmentation, which ranged from 1.5 s to 3.4 min when utilizing AI techniques.
CONCLUSIONS: AI models demonstrated favorable accuracy in automatically segmenting teeth from CBCT images while reducing the time required for the process. Nevertheless, correction methods for metal artifacts and tooth-structure segmentation using different imaging modalities should be addressed in future studies.
CLINICAL SIGNIFICANCE: AI algorithms have great potential for precise tooth measurements, orthodontic treatment planning, dental implant placement, and other dental procedures that require accurate tooth delineation. These advances have contributed to improved clinical outcomes and patient care in dental practice.
Affiliation(s)
- Bilu Xiang: School of Dentistry, Shenzhen University Medical School, Shenzhen University, Shenzhen 518000, China
- Jiayi Lu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
- Jiayi Yu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
9
Li S, Wang H, Meng Y, Zhang C, Song Z. Multi-organ segmentation: a progressive exploration of learning paradigms under scarce annotation. Phys Med Biol 2024; 69:11TR01. [PMID: 38479023 DOI: 10.1088/1361-6560/ad33b5]
Abstract
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially in radiotherapy treatment planning. Thus, it is of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and witnessed remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized and fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such scarce annotation limits the development of high-performance multi-organ segmentation models but has promoted many annotation-efficient learning paradigms. Among these, transfer learning leveraging external datasets, semi-supervised learning incorporating unannotated datasets, and partially-supervised learning integrating partially-labeled datasets have become the dominant ways of addressing this dilemma in multi-organ segmentation. We first review the fully supervised method, then present a comprehensive and systematic elaboration of the three aforementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
Affiliation(s)
- Shiman Li, Haoran Wang, Yucong Meng, Chenxi Zhang, Zhijian Song: Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
10
Bayrakdar IS, Elfayome NS, Hussien RA, Gulsen IT, Kuran A, Gunes I, Al-Badr A, Celik O, Orhan K. Artificial intelligence system for automatic maxillary sinus segmentation on cone beam computed tomography images. Dentomaxillofac Radiol 2024; 53:256-266. [PMID: 38502963 PMCID: PMC11056744 DOI: 10.1093/dmfr/twae012]
Abstract
OBJECTIVES The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model. METHODS In 101 CBCT scans, the MS were annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into 3 parts: 80 CBCT scans for training the model, 11 CBCT scans for model validation, and 10 CBCT scans for testing the model. The model was trained using the nnU-Net v2 deep learning framework with a learning rate of 0.00001 for 1000 epochs. The performance of the model in automatically segmenting the MS on CBCT scans was assessed by several parameters, including F1-score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU). RESULTS The F1-score, accuracy, sensitivity, and precision were 0.96, 0.99, 0.96, and 0.96, respectively, for successful segmentation of the maxillary sinus in CBCT images. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. CONCLUSIONS Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.
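All of the voxel-wise metrics reported here (F1-score, sensitivity, precision, accuracy, IoU) derive from the binary confusion matrix between the predicted and annotated masks; for binary masks the F1-score is identical to the Dice coefficient. A small illustrative sketch, not tied to the nnU-Net implementation, assuming non-empty masks:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Voxel-wise overlap metrics from the binary confusion matrix."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    tn = np.logical_and(~pred, ~truth).sum()  # true negatives
    return dict(
        precision=tp / (tp + fp),
        sensitivity=tp / (tp + fn),            # recall
        f1=2 * tp / (2 * tp + fp + fn),        # identical to the Dice coefficient
        iou=tp / (tp + fp + fn),               # Jaccard index
        accuracy=(tp + tn) / (tp + fp + fn + tn),
    )

# Toy 3D example: two partially overlapping 64-voxel cubes
a = np.zeros((8, 8, 8), bool); a[1:5, 1:5, 1:5] = True   # prediction
b = np.zeros((8, 8, 8), bool); b[2:6, 2:6, 2:6] = True   # reference
m = overlap_metrics(a, b)
```

Note that accuracy counts the (usually vast) true-negative background, which is why it sits near 0.99 even when overlap metrics such as IoU are lower.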
Affiliation(s)
- Ibrahim Sevki Bayrakdar: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, 26040, Turkey
- Nermin Sameh Elfayome: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Cairo University, Cairo, 12613, Egypt
- Reham Ashraf Hussien: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Cairo University, Cairo, 12613, Egypt
- Ibrahim Tevfik Gulsen: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Alanya Alaaddin Keykubat University, Antalya, 07425, Turkey
- Alican Kuran: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli University, Kocaeli 41190, Turkey
- Ihsan Gunes: Open and Distance Education Application and Research Center, Eskisehir Technical University, Eskisehir, 26555, Turkey
- Alwaleed Al-Badr: Restorative Dentistry, Riyadh Elm University, Riyadh, 13244, Saudi Arabia
- Ozer Celik: Department of Mathematics-Computer, Eskisehir Osmangazi University Faculty of Science, Eskisehir, 26040, Turkey
- Kaan Orhan: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, 06560, Turkey
11
Zheng Q, Gao Y, Zhou M, Li H, Lin J, Zhang W, Chen X. Semi or fully automatic tooth segmentation in CBCT images: a review. PeerJ Comput Sci 2024; 10:e1994. [PMID: 38660190 PMCID: PMC11041986 DOI: 10.7717/peerj-cs.1994]
Abstract
Cone beam computed tomography (CBCT) is widely employed in modern dentistry, and tooth segmentation constitutes an integral part of the digital workflow based on these imaging data. Previous methodologies rely heavily on manual segmentation and are time-consuming and labor-intensive in clinical practice. Recently, with advancements in computer vision technology, scholars have conducted in-depth research, proposing various fast and accurate tooth segmentation methods. Here, we review 55 articles in this field and discuss the effectiveness, advantages, and disadvantages of each approach. Beyond simple classification and discussion, this review aims to reveal how tooth segmentation methods can be improved by the application and refinement of existing image segmentation algorithms to solve problems such as irregular morphology and fuzzy boundaries of teeth. It is expected that, as these methods are optimized, manual operation will be reduced and greater accuracy and robustness in tooth segmentation will be achieved. Finally, we highlight the challenges that still exist in this field and provide prospects for future directions.
Affiliation(s)
- Qianhan Zheng, Yu Gao, Mengqi Zhou, Huimin Li, Jiaqi Lin: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weifang Zhang: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China; Social Medicine & Health Affairs Administration, Zhejiang University, Hangzhou, China
- Xuepeng Chen: Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China; Clinical Research Center for Oral Diseases of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
12
Wei L, Wu S, Huang Z, Chen Y, Zheng H, Wang L. Autologous Transplantation Tooth Guide Design Based on Deep Learning. J Oral Maxillofac Surg 2024; 82:314-324. [PMID: 37832596 DOI: 10.1016/j.joms.2023.09.014]
Abstract
BACKGROUND Autologous tooth transplantation requires precise surgical guide design, involving manual tracing of donor tooth contours based on patient cone-beam computed tomography (CBCT) scans. While manual corrections are time-consuming and prone to human errors, deep learning-based approaches show promise in reducing labor and time costs while minimizing errors. However, the application of deep learning techniques in this particular field is yet to be investigated. PURPOSE We aimed to assess the feasibility of replacing the traditional design pipeline with a deep learning-enabled autologous tooth transplantation guide design pipeline. STUDY DESIGN, SETTING, SAMPLE This retrospective cross-sectional study used 79 CBCT images collected at the Guangzhou Medical University Hospital between October 2022 and March 2023. Following preprocessing, a total of 5,070 region of interest images were extracted from 79 CBCT images. PREDICTOR VARIABLE Autologous tooth transplantation guide design pipelines, either based on traditional manual design or deep learning-based design. MAIN OUTCOME VARIABLE The main outcome variable was the error between the reconstructed model and the gold standard benchmark. We used the third molar extracted clinically as the gold standard and leveraged it as the benchmark for evaluating our reconstructed models from different design pipelines. Both trueness and accuracy were used to evaluate this error. Trueness was assessed using the root mean square (RMS), and accuracy was measured using the standard deviation. The secondary outcome variable was the pipeline efficiency, assessed based on the time cost. Time cost refers to the amount of time required to acquire the third molar model using the pipeline. ANALYSES Data were analyzed using the Kruskal-Wallis test. Statistical significance was set at P < .05. 
RESULTS In the surface matching comparison for different reconstructed models, the deep learning group achieved the lowest RMS value (0.335 ± 0.066 mm). There were no significant differences in RMS values between manual design by a senior doctor and deep learning-based design (P = .688), and the standard deviation values did not differ among the 3 groups (P = .103). The deep learning-based design pipeline (0.017 ± 0.001 minutes) provided a faster assessment compared to the manual design pipeline by both senior (19.676 ± 2.386 minutes) and junior doctors (30.613 ± 6.571 minutes) (P < .001). CONCLUSIONS AND RELEVANCE The deep learning-based automatic pipeline exhibited similar performance in surgical guide design for autogenous tooth transplantation compared to manual design by senior doctors, and it minimized time costs.
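Trueness as root mean square (RMS) surface error, as reported above, is commonly computed from nearest-neighbour distances between sampled surface points of the reconstructed model and the gold-standard model. A brute-force sketch (the study's actual surface-matching software may differ):

```python
import numpy as np

def rms_surface_error(points_a, points_b):
    """RMS of nearest-neighbour distances from surface A to surface B.

    points_a, points_b: (N, 3) and (M, 3) arrays of surface sample points.
    Brute force for clarity; a KD-tree is preferable for large meshes.
    """
    points_a = np.asarray(points_a, float)
    points_b = np.asarray(points_b, float)
    # Pairwise distance matrix (N, M), then closest point of B for each point of A
    diff = points_a[:, None, :] - points_b[None, :, :]
    nearest = np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)
    return float(np.sqrt((nearest ** 2).mean()))

# Toy check: B is A rigidly shifted by 0.3 mm along x, points far apart,
# so each point's nearest neighbour is its own shifted copy
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
b = a + np.array([0.3, 0.0, 0.0])
print(rms_surface_error(a, b))  # ≈ 0.3
```

Because each point contributes its squared distance, RMS penalizes isolated large deviations more strongly than a mean absolute distance would, which suits guide-fit assessment.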
Affiliation(s)
- Lifen Wei, Zelun Huang, Yaxin Chen, Liping Wang: Department of Dental Implantation, Affiliated Stomatology Hospital of Guangzhou Medical University, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangzhou, Guangdong, China
- Shuyang Wu: Department of Pathology, Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, China
- Haoran Zheng: Department of Chemical & Materials Engineering, University of Auckland, Auckland, New Zealand
13
Zhang L, Li W, Lv J, Xu J, Zhou H, Li G, Ai K. Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview. J Dent 2023; 138:104727. [PMID: 37769934 DOI: 10.1016/j.jdent.2023.104727]
Abstract
OBJECTIVES This article reviews recent advances in computer-aided segmentation methods for oral and maxillofacial surgery and describes the advantages and limitations of these methods. The objective is to provide an invaluable resource for precise therapy and surgical planning in oral and maxillofacial surgery. STUDY SELECTION, DATA AND SOURCES This review includes full-text articles and conference proceedings reporting the application of segmentation methods in the field of oral and maxillofacial surgery. The research focuses on three aspects: tooth detection and segmentation, mandibular canal segmentation, and alveolar bone segmentation. The most commonly used imaging technique is CBCT, followed by conventional CT and orthopantomography. A systematic electronic database search was performed up to July 2023 (Medline via PubMed, IEEE Xplore, ArXiv, and Google Scholar were searched). RESULTS These segmentation methods can be divided into two main categories: traditional image processing and machine learning (including deep learning). Performance testing on datasets labeled by medical professionals shows that these methods produce results comparable to dentists' annotations, confirming their effectiveness. However, no studies have evaluated their practical application value. CONCLUSION Segmentation methods (particularly deep learning methods) have demonstrated unprecedented performance, while inherent challenges remain, including the scarcity and inconsistency of datasets, visible artifacts in images, unbalanced data distribution, and their "black box" nature. CLINICAL SIGNIFICANCE Accurate image segmentation is critical for precise treatment and surgical planning in oral and maxillofacial surgery. This review aims to facilitate more accurate and effective surgical treatment planning among dental researchers.
Affiliation(s)
- Lang Zhang, Wang Li, Jinxun Lv, Jiajie Xu, Hengyu Zhou, Gen Li: School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Keqi Ai: Department of Radiology, Xinqiao Hospital, Army Medical University, Chongqing 400037, China
14
Tabatabaian F, Vora SR, Mirabbasi S. Applications, functions, and accuracy of artificial intelligence in restorative dentistry: A literature review. J Esthet Restor Dent 2023; 35:842-859. [PMID: 37522291 DOI: 10.1111/jerd.13079]
Abstract
OBJECTIVE The applications of artificial intelligence (AI) are increasing in restorative dentistry; however, AI performance remains unclear to dental professionals. The purpose of this narrative review was to evaluate the applications, functions, and accuracy of AI in diverse aspects of restorative dentistry, including caries detection, tooth preparation margin detection, tooth restoration design, metal structure casting, dental restoration/implant detection, removable partial denture design, and tooth shade determination. OVERVIEW An electronic search was performed on the Medline/PubMed, Embase, Web of Science, Cochrane, Scopus, and Google Scholar databases. English-language articles, published from January 1, 2000, to March 1, 2022, relevant to the aforementioned aspects were selected using the key terms of artificial intelligence, machine learning, deep learning, artificial neural networks, convolutional neural networks, clustering, soft computing, automated planning, computational learning, computer vision, and automated reasoning as inclusion criteria. A manual search was also performed. In total, 157 articles were included, reviewed, and discussed. CONCLUSIONS Based on the current literature, AI models have shown promising performance in the mentioned aspects when compared with traditional approaches in terms of accuracy; however, as these models are still in development, more studies are required to validate their accuracy and apply them to routine clinical practice. CLINICAL SIGNIFICANCE AI with its specific functions has shown successful applications with acceptable accuracy in diverse aspects of restorative dentistry. Understanding these functions may lead to novel applications with optimal accuracy for AI in restorative dentistry.
Affiliation(s)
- Farhad Tabatabaian, Siddharth R Vora: Department of Oral Health Sciences, Faculty of Dentistry, The University of British Columbia, Vancouver, British Columbia, Canada
- Shahriar Mirabbasi: Department of Electrical and Computer Engineering, Faculty of Applied Science, The University of British Columbia, Vancouver, British Columbia, Canada
15
Morita D, Mazen S, Tsujiko S, Otake Y, Sato Y, Numajiri T. Deep-learning-based automatic facial bone segmentation using a two-dimensional U-Net. Int J Oral Maxillofac Surg 2023; 52:787-792. [PMID: 36328865 DOI: 10.1016/j.ijom.2022.10.015]
Abstract
The use of deep learning (DL) in medical imaging is becoming increasingly widespread. Although DL has been used previously for the segmentation of facial bones in computed tomography (CT) images, there are few reports of segmentation involving multiple areas. In this study, a U-Net was used to investigate the automatic segmentation of facial bones into eight areas, with the aim of facilitating virtual surgical planning (VSP) and computer-aided design and manufacturing (CAD/CAM) in maxillofacial surgery. CT data from 50 patients were prepared and used for training, and five-fold cross-validation was performed. The output results generated by the DL model were validated by Dice coefficient and average symmetric surface distance (ASSD). The automatic segmentation was successful in all cases, with a mean ± standard deviation Dice coefficient of 0.897 ± 0.077 and ASSD of 1.168 ± 1.962 mm. The accuracy was very high for the mandible (Dice coefficient 0.984, ASSD 0.324 mm) and zygomatic bones (Dice coefficient 0.931, ASSD 0.487 mm), and these could be introduced for VSP and CAD/CAM without any modification. The results for other areas, particularly the teeth, were slightly inferior, with possible reasons being the effects of defects, bonded maxillary and mandibular teeth, and metal artefacts. A limitation of this study is that the data were from a single institution. Hence further research is required to improve the accuracy for some facial areas and to validate the results in larger and more diverse populations.
Affiliation(s)
- D Morita, T Numajiri: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- S Mazen, Y Otake, Y Sato: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- S Tsujiko: Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
16
Polizzi A, Quinzi V, Ronsivalle V, Venezia P, Santonocito S, Lo Giudice A, Leonardi R, Isola G. Tooth automatic segmentation from CBCT images: a systematic review. Clin Oral Investig 2023; 27:3363-3378. [PMID: 37148371 DOI: 10.1007/s00784-023-05048-5]
Abstract
OBJECTIVES To describe the current state of the art regarding technological advances in fully automatic tooth segmentation approaches from 3D cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS In March 2023, a search strategy without a timeline setting was carried out through a combination of MeSH terms and free-text words pooled through Boolean operators ('AND', 'OR') on the following databases: PubMed, Scopus, Web of Science and IEEE Xplore. Randomized and non-randomized controlled trials, cohort, case-control, cross-sectional and retrospective studies in the English language only were included. RESULTS The search strategy identified 541 articles, of which 23 were selected. The most employed segmentation methods were based on deep learning approaches. One article presented an automatic approach for tooth segmentation based on a watershed algorithm, and another used an improved level-set method. Four studies presented classical machine learning and thresholding approaches. The most employed metric for evaluating segmentation performance was the Dice similarity index, which ranged from 90 ± 3% to 97.9 ± 1.5%. CONCLUSIONS Thresholding appeared unreliable for tooth segmentation from CBCT images, whereas convolutional neural networks (CNNs) have been demonstrated to be the most promising approach. CNNs could help overcome the main limitations of tooth segmentation from CBCT images related to root anatomy, heavy scattering, immature teeth, metal artifacts and time consumption. New studies with uniform protocols and evaluation metrics, with random sampling and blinding for data analysis, are encouraged to objectively compare the reliability of the different deep learning architectures. CLINICAL RELEVANCE The best automatic tooth segmentation performance has been obtained through CNNs for the different ambits of digital dentistry.
Affiliation(s)
- Alessandro Polizzi: Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy; Department of Life, Health & Environmental Sciences, Postgraduate School of Orthodontics, University of L'Aquila, 67100, L'Aquila, Italy
- Vincenzo Quinzi: Department of Life, Health & Environmental Sciences, Postgraduate School of Orthodontics, University of L'Aquila, 67100, L'Aquila, Italy
- Vincenzo Ronsivalle, Pietro Venezia, Simona Santonocito, Antonino Lo Giudice, Rosalia Leonardi, Gaetano Isola: Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy
17
Kushwaha A, Mourad RF, Heist K, Tariq H, Chan HP, Ross BD, Chenevert TL, Malyarenko D, Hadjiiski LM. Improved Repeatability of Mouse Tibia Volume Segmentation in Murine Myelofibrosis Model Using Deep Learning. Tomography 2023; 9:589-602. [PMID: 36961007 PMCID: PMC10037585 DOI: 10.3390/tomography9020048]
Abstract
A murine model of myelofibrosis in the tibia was used in a co-clinical trial to evaluate segmentation methods for application of image-based biomarkers to assess disease status. The dataset (32 mice with 157 3D MRI scans, including 49 test-retest pairs scanned on consecutive days) was split into approximately 70% training, 10% validation, and 20% test subsets. Two expert annotators (EA1 and EA2) performed manual segmentations of the mouse tibia (EA1: all data; EA2: test and validation). Attention U-Net (A-U-Net) model performance was assessed for accuracy with respect to the EA1 reference using the average Jaccard index (AJI), volume intersection ratio (AVI), volume error (AVE), and Hausdorff distance (AHD) for four training scenarios: full training, two half-splits, and a single-mouse subset. The repeatability of computer versus expert segmentations for tibia volume of test-retest pairs was assessed by the within-subject coefficient of variation (%wCV). A-U-Net models trained on the full and half-split training sets achieved similar average accuracy (with respect to EA1 annotations) on the test set: AJI = 83-84%, AVI = 89-90%, AVE = 2-3%, and AHD = 0.5-0.7 mm, exceeding EA2 accuracy: AJI = 81%, AVI = 83%, AVE = 14%, and AHD = 0.3 mm. The A-U-Net model repeatability wCV [95% CI] of 3 [2, 5]% was notably better than that of expert annotators EA1: 5 [4, 9]% and EA2: 8 [6, 13]%. The developed deep learning model effectively automates murine bone marrow segmentation with accuracy comparable to human annotators and substantially improved repeatability.
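The within-subject coefficient of variation (%wCV) used for the test-retest repeatability analysis can be estimated from paired measurements; one common estimator (an assumption for illustration; the paper may use a variant) is:

```python
import numpy as np

def within_subject_cv(test, retest):
    """Within-subject coefficient of variation (%) for test-retest pairs.

    One common estimator: wCV = sqrt(mean_i[(y1_i - y2_i)^2 / (2 * m_i^2)]),
    where m_i is the mean of pair i. Returned as a percentage.
    """
    test = np.asarray(test, float)
    retest = np.asarray(retest, float)
    m = (test + retest) / 2.0
    return float(100.0 * np.sqrt(np.mean((test - retest) ** 2 / (2.0 * m ** 2))))

# Hypothetical tibia volumes (arbitrary units) measured on consecutive days
day1 = [10.0, 12.0, 11.0]
day2 = [10.4, 11.8, 11.3]
wcv = within_subject_cv(day1, day2)  # roughly 2% for these toy pairs
```

A lower wCV means the same subject re-measured under identical conditions yields more similar volumes, which is why the model's 3% beats the annotators' 5% and 8%.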
18
Pankert T, Lee H, Peters F, Hölzle F, Modabber A, Raith S. Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network. Int J Comput Assist Radiol Surg 2023. [PMID: 36637748 PMCID: PMC10363055 DOI: 10.1007/s11548-022-02830-w]
Abstract
PURPOSE For computer-aided planning of facial bony surgery, the creation of high-resolution 3D models of the bones by segmenting volume imaging data is a labor-intensive step, especially as metal dental inlays or implants cause severe artifacts that reduce the quality of the computed tomography imaging data. This study provides a method to segment accurate, artifact-free 3D surface models of mandibles from CT data using convolutional neural networks. METHODS The presented approach cascades two independently trained 3D U-Nets to perform accurate segmentations of the mandible bone from full-resolution CT images. The networks are trained in different settings using three different loss functions and a data augmentation pipeline. The training and evaluation datasets consist of manually segmented CT images from 307 dentate and edentulous individuals, partly with heavy imaging artifacts. The accuracy of the models is measured using overlap-based, surface-based and anatomical-curvature-based metrics. RESULTS Our approach produces high-resolution segmentations of the mandibles, coping with severe imaging artifacts in the CT data. The two-stepped approach yields highly significant improvements in prediction accuracy. The best models achieve a Dice coefficient of 94.824% and an average surface distance of 0.31 mm on our test dataset. CONCLUSION The use of two cascaded U-Nets allows high-resolution predictions for small regions of interest in the imaging data. The proposed method is fast and allows user-independent image segmentation, producing objective and repeatable results that can be used in automated surgical planning procedures.
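The cascaded two-stepped strategy described here (a first network roughly locates the structure, and its output defines a crop in which a second network segments at full resolution) can be sketched generically; `coarse_net` and `fine_net` below are hypothetical stand-ins for the trained 3D U-Nets, not the paper's implementation:

```python
import numpy as np

def bounding_box(mask, margin=8):
    """Axis-aligned bounding box of a binary mask, padded by `margin` voxels."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def cascade_segment(volume, coarse_net, fine_net):
    """Generic two-stage segmentation: locate coarsely, then refine in a crop.

    coarse_net / fine_net: callables mapping a 3D volume to a binary mask
    (hypothetical stand-ins for trained networks).
    """
    coarse = coarse_net(volume)            # step 1: rough localization
    if not coarse.any():
        return np.zeros(volume.shape, dtype=bool)
    roi = bounding_box(coarse)             # step 2: crop to the region of interest
    fine = np.zeros(volume.shape, dtype=bool)
    fine[roi] = fine_net(volume[roi])      # step 3: refine at full resolution
    return fine

# Toy run with thresholding callables standing in for the networks
vol = np.zeros((16, 16, 16)); vol[4:9, 4:9, 4:9] = 1.0  # synthetic "bone"
out = cascade_segment(vol, lambda v: v > 0.5, lambda v: v > 0.5)
```

Cropping concentrates the second network's capacity on the structure itself, mitigating the extreme foreground/background class imbalance of a full CT volume, which is the same motivation behind the two-stage teeth/maxilla model in the indexed article.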
Affiliation(s)
- Tobias Pankert, Hyun Lee, Florian Peters, Frank Hölzle, Ali Modabber, Stefan Raith: Department of Oral and Maxillofacial Surgery, RWTH Aachen University Hospital, Aachen, Germany
19
Three-Dimensional Innovations in Personalized Surgery. J Pers Med 2023; 13:jpm13010113. [PMID: 36675774 PMCID: PMC9865326 DOI: 10.3390/jpm13010113]
Abstract
Due to the introduction of three-dimensional (3D) technology in surgery, it has become possible to preoperatively plan complex bone resections and reconstructions, from head to toe [...].
20
Artificial intelligence models for clinical usage in dentistry with a focus on dentomaxillofacial CBCT: a systematic review. Oral Radiol 2023; 39:18-40. [PMID: 36269515 DOI: 10.1007/s11282-022-00660-9] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 09/29/2022] [Indexed: 01/05/2023]
Abstract
This study aimed to perform a systematic review of the literature on the application of artificial intelligence (AI) in dental and maxillofacial cone beam computed tomography (CBCT) and to provide comprehensive descriptions of current technical innovations to assist future researchers and dental professionals. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA) Statement was followed, and the study's protocol was prospectively registered. The following databases were searched, based on MeSH and Emtree terms: PubMed/MEDLINE, Embase, and Web of Science. The search strategy retrieved 1473 articles, of which 59 publications assessing the use of AI on CBCT images in dentistry were included. According to the PROBAST guidelines for study design, seven papers reported only external validation and 11 reported both model building and validation on an external dataset; 40 studies focused exclusively on model development. The AI models employed mainly used deep learning (42 studies), while the other 17 papers used conventional approaches, such as statistical-shape and active shape models, and traditional machine learning methods, such as thresholding-based methods, support vector machines, k-nearest neighbors, decision trees, and random forests. Supervised or semi-supervised learning was utilized in the majority (96.62%) of studies, and unsupervised learning in two (3.38%). Of the included studies, 52 had a high risk of bias (ROB), two had a low ROB, and four had an unclear rating. AI-based applications have the potential to improve oral healthcare quality; promote personalized, predictive, preventative, and participatory dentistry; and expedite dental procedures.
21
Establishing a Point-of-Care Virtual Planning and 3D Printing Program. Semin Plast Surg 2022; 36:133-148. [PMID: 36506280 PMCID: PMC9729064 DOI: 10.1055/s-0042-1754351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Virtual surgical planning (VSP) and three-dimensional (3D) printing have become a standard of care at our institution, transforming the surgical care of complex patients. Patient-specific anatomic models and surgical guides are used clinically to improve multidisciplinary communication, presurgical planning, intraoperative guidance, and patient informed consent. Recent innovations have made both VSP and 3D printing more accessible to hospital systems of various sizes. Insourcing such work has several advantages, including quicker turnaround times and increased innovation through collaborative multidisciplinary teams. Centralizing 3D printing programs at the point of care (POC) provides a more cost-efficient investment for institutions. The following article details the capital equipment needs, institutional structure, operational personnel, and other considerations necessary to establish a POC manufacturing program.
22
Lasker A, Obaidullah SM, Chakraborty C, Roy K. Application of Machine Learning and Deep Learning Techniques for COVID-19 Screening Using Radiological Imaging: A Comprehensive Review. SN COMPUTER SCIENCE 2022; 4:65. [PMID: 36467853 PMCID: PMC9702883 DOI: 10.1007/s42979-022-01464-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 10/18/2022] [Indexed: 11/26/2022]
Abstract
The lung, one of the most important organs in the human body, is often affected by SARS-type diseases, among which COVID-19 has been the most fatal in recent times. SARS-CoV-2 led to a pandemic that spread rapidly through communities, causing respiratory problems. In this situation, radiological imaging-based screening [mostly chest X-ray and computed tomography (CT) modalities] has been used for rapid, non-invasive screening of the disease. Due to the scarcity of physicians, chest specialists, and expert doctors, several researchers have developed technology-enabled disease screening techniques with the help of artificial intelligence and machine learning (AI/ML). Notably, researchers have introduced several AI/ML/DL (deep learning) algorithms for computer-assisted detection of COVID-19 using chest X-ray and CT images. This paper presents a comprehensive review summarizing the works related to applications of AI/ML/DL for diagnostic prediction of COVID-19, mainly using X-ray and CT images. Following the PRISMA guidelines, a total of 265 articles were selected out of 1715 published through the third quarter of 2021. Furthermore, this review summarizes and compares a variety of ML/DL techniques, datasets, and their results using X-ray and CT imaging. A detailed discussion covers the novelty of the published works, along with their advantages and limitations.
Affiliation(s)
- Asifuzzaman Lasker
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Sk Md Obaidullah
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Chandan Chakraborty
- Department of Computer Science & Engineering, National Institute of Technical Teachers’ Training & Research Kolkata, Kolkata, India
- Kaushik Roy
- Department of Computer Science, West Bengal State University, Barasat, India
23
Azam MA, Sampieri C, Ioppi A, Benzi P, Giordano GG, De Vecchi M, Campagnari V, Li S, Guastini L, Paderno A, Moccia S, Piazza C, Mattos LS, Peretti G. Videomics of the Upper Aero-Digestive Tract Cancer: Deep Learning Applied to White Light and Narrow Band Imaging for Automatic Segmentation of Endoscopic Images. Front Oncol 2022; 12:900451. [PMID: 35719939 PMCID: PMC9198427 DOI: 10.3389/fonc.2022.900451] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Accepted: 04/26/2022] [Indexed: 12/13/2022] Open
Abstract
Introduction Narrow Band Imaging (NBI) is an endoscopic visualization technique useful for upper aero-digestive tract (UADT) cancer detection and margin evaluation. However, NBI analysis is strongly operator-dependent and requires high expertise, thus limiting its wider implementation. Recently, artificial intelligence (AI) has demonstrated potential for applications in UADT videoendoscopy. Among AI methods, deep learning algorithms, and especially convolutional neural networks (CNNs), are particularly suitable for delineating cancers on videoendoscopy. This study aimed to develop a CNN for automatic semantic segmentation of UADT cancer on endoscopic images. Materials and Methods A dataset of white light and NBI videoframes of laryngeal squamous cell carcinoma (LSCC) was collected and manually annotated. A novel DL segmentation model (SegMENT) was designed. SegMENT relies on the DeepLabV3+ CNN architecture, modified to use Xception as a backbone and to incorporate ensemble features from other CNNs. The performance of SegMENT was compared to state-of-the-art CNNs (UNet, ResUNet, and DeepLabv3). SegMENT was then validated on two external datasets of NBI images of oropharyngeal SCC (OPSCC) and oral cavity SCC (OCSCC) obtained from a previously published study. The impact of in-domain transfer learning through an ensemble technique was evaluated on the external datasets. Results 219 LSCC patients were retrospectively included in the study. A total of 683 videoframes composed the LSCC dataset, while the external validation cohorts of OPSCC and OCSCC contained 116 and 102 images, respectively. On the LSCC dataset, SegMENT outperformed the other DL models, obtaining the following median values: 0.68 intersection over union (IoU), 0.81 dice similarity coefficient (DSC), 0.95 recall, 0.78 precision, and 0.97 accuracy. For the OCSCC and OPSCC datasets, results were superior compared to previously published data: the median performance metrics improved, respectively, as follows: DSC = 10.3% and 11.9%, recall = 15.0% and 5.1%, precision = 17.0% and 14.7%, accuracy = 4.1% and 10.3%. Conclusion SegMENT achieved promising performance, showing that automatic tumor segmentation in endoscopic images is feasible even within the highly heterogeneous and complex UADT environment. SegMENT outperformed the previously published results on the external validation cohorts. The model demonstrated potential for improved detection of early tumors, more precise biopsies, and better selection of resection margins.
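The IoU and DSC reported above are algebraically linked for any single pair of masks: DSC = 2·IoU / (1 + IoU). A quick sketch of the conversion (note that medians of per-image metrics need not satisfy the identity exactly, so this is only an approximate cross-check of the reported values):

```python
def iou_to_dice(iou):
    """Dice = 2*IoU / (1 + IoU) for the same pair of masks."""
    return 2 * iou / (1 + iou)

def dice_to_iou(dice):
    """Inverse relation: IoU = Dice / (2 - Dice)."""
    return dice / (2 - dice)

# The reported median IoU of 0.68 maps to a Dice of ~0.81,
# consistent with the reported median DSC.
print(round(iou_to_dice(0.68), 2))  # → 0.81
```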
Affiliation(s)
- Muhammad Adeel Azam
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Sampieri
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alessandro Ioppi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Pietro Benzi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Giorgio Gregory Giordano
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Marta De Vecchi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Valentina Campagnari
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Shunlei Li
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Luca Guastini
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alberto Paderno
- Unit of Otorhinolaryngology - Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Cesare Piazza
- Unit of Otorhinolaryngology - Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgio Peretti
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
24
Lee J, Jeong J, Jung S, Moon J, Rho S. Verification of De-Identification Techniques for Personal Information Using Tree-Based Methods with Shapley Values. J Pers Med 2022; 12:jpm12020190. [PMID: 35207676 PMCID: PMC8877642 DOI: 10.3390/jpm12020190] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 01/24/2022] [Accepted: 01/25/2022] [Indexed: 02/04/2023] Open
Abstract
With the development of big data and cloud computing technologies, the importance of pseudonymized information has grown. However, tools for verifying whether a de-identification methodology has been correctly applied to ensure data confidentiality and usability are insufficient. This paper proposes a verification of de-identification techniques for personal healthcare information that considers both data confidentiality and usability. Data are generated and preprocessed by considering actual statistical data, personal information datasets, and de-identification datasets based on medical data to represent the de-identification technique as a numeric dataset. Five tree-based regression models (decision tree, random forest, gradient boosting machine, extreme gradient boosting, and light gradient boosting machine) are constructed on the de-identification dataset to effectively discover nonlinear relationships between dependent and independent variables in numerical datasets. The most effective model is then selected for personal information data in which pseudonym processing is essential for data utilization. The Shapley additive explanation (SHAP), an explainable artificial intelligence technique, is applied to the most effective model to establish pseudonym-processing policies and to present a machine-learning process that selects an appropriate de-identification methodology.
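Shapley values, which underpin the SHAP technique mentioned above, attribute a model's output to its features by averaging each feature's marginal contribution over all coalitions. A stdlib-only sketch of the exact computation for a hypothetical two-feature value function (illustrative only; in practice the SHAP library computes these efficiently for tree models):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: weighted average of each feature's
    marginal contribution value(S ∪ {f}) - value(S) over all
    coalitions S not containing f."""
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coal in combinations(rest, k):
                s = set(coal)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Hypothetical value function: model output for a feature subset,
# with an interaction term between the two features.
v = lambda s: 10.0 * ('age' in s) + 4.0 * ('zip' in s) \
    + 2.0 * ('age' in s and 'zip' in s)
phi = shapley_values(['age', 'zip'], v)
print(phi)  # → {'age': 11.0, 'zip': 5.0}; sums to v({'age','zip'}) = 16
```

The efficiency property (attributions sum to the full-coalition value) is a useful sanity check on any SHAP-style output.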
25
Deep Learning-Based Automatic Segmentation of Mandible and Maxilla in Multi-Center CT Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031358] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Sophisticated segmentation of the craniomaxillofacial bones (the mandible and maxilla) in computed tomography (CT) is essential for diagnosis and treatment planning in craniomaxillofacial surgery. Conventional manual segmentation is time-consuming and challenging due to intrinsic properties of the craniomaxillofacial bones and head CT, such as variance in anatomical structures, low soft-tissue contrast, and artifacts caused by metal implants. Moreover, data-driven segmentation methods, including deep learning, require large, consistent datasets, which creates a bottleneck for clinical application. In this study, we propose a deep learning approach for automatic segmentation of the mandible and maxilla in CT images with enhanced compatibility for multi-center datasets. Four multi-center datasets acquired under various conditions were used to create a scenario in which the model was trained with one dataset and evaluated with the others. For the neural network, we added a hierarchical, parallel, and multi-scale residual block to the U-Net (HPMR-U-Net). To evaluate performance, segmentation was conducted on the in-house dataset and on the external multi-center datasets, in comparison with three other neural networks: U-Net, Res-U-Net, and mU-Net. The results suggest that the segmentation performance of HPMR-U-Net is comparable to that of the other models, with superior data compatibility.
26
Krauel L, Valls-Esteve A, Tejo-Otero A, Fenollosa-Artés F. 3D-Printing in surgery: Beyond bone structures. A review. ANNALS OF 3D PRINTED MEDICINE 2021. [DOI: 10.1016/j.stlm.2021.100039] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
27
Awan MJ, Rahim MSM, Salim N, Rehman A, Nobanee H, Shabir H. Improved Deep Convolutional Neural Network to Classify Osteoarthritis from Anterior Cruciate Ligament Tear Using Magnetic Resonance Imaging. J Pers Med 2021; 11:jpm11111163. [PMID: 34834515 PMCID: PMC8617867 DOI: 10.3390/jpm11111163] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 11/01/2021] [Accepted: 11/03/2021] [Indexed: 12/14/2022] Open
Abstract
An anterior cruciate ligament (ACL) tear is a partial or complete rupture of the ACL in the knee, especially common in athletes. There is a need to classify ACL tears before full rupture to avoid osteoarthritis. This research aims to identify ACL tears automatically and efficiently with a deep learning approach. A dataset of 917 knee magnetic resonance images (MRI) was gathered from Clinical Hospital Centre Rijeka, Croatia, comprising three classes: non-injured, partial tear, and fully ruptured knee MRI. The study compares and evaluates two variants of convolutional neural networks (CNN): a standard CNN model of five layers and a customized CNN model of eleven layers. Eight different hyperparameters were adjusted and tested on both variants. Our customized CNN model showed good results after a 25% random split using RMSprop and a learning rate of 0.001. For the standard CNN using the Adam optimizer with a learning rate of 0.001, the average accuracy, precision, sensitivity, specificity, and F1-score were 96.3%, 95%, 96%, 96.9%, and 95.6%, respectively. For the customized CNN model, using the same evaluation measures with an RMSprop optimizer and a learning rate of 0.001, the model performed at 98.6%, 98%, 98%, 98.5%, and 98%, respectively. Moreover, we also present results for the receiver operating characteristic curve and area under the curve (ROC AUC): the customized CNN model with the Adam optimizer and a learning rate of 0.001 achieved 0.99 over the three classes, the highest among all models. The model showed good results overall, and in the future it could be extended to other CNN architectures to detect and segment other knee structures such as the meniscus and cartilage.
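The five evaluation measures listed above all derive from the binary confusion matrix. A minimal sketch of the definitions (the counts below are hypothetical, not the paper's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, sensitivity (recall), specificity, and
    F1-score from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# Hypothetical counts for one class treated one-vs-rest.
acc, prec, sens, spec, f1 = classification_metrics(tp=90, fp=5, fn=4, tn=101)
print(f"{acc:.3f} {prec:.3f} {sens:.3f} {spec:.3f} {f1:.3f}")
```

For the three-class problem in the paper, such metrics are typically computed per class one-vs-rest and then averaged.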
Affiliation(s)
- Mazhar Javed Awan
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
- Mohd Shafry Mohd Rahim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Naomie Salim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
- Amjad Rehman
- Artificial Intelligence and Data Analytics Research Laboratory, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Haitham Nobanee
- College of Business, Abu Dhabi University, P.O. Box 59911, Abu Dhabi 59911, United Arab Emirates
- Oxford Centre for Islamic Studies, University of Oxford, Oxford OX1 2J, UK
- School of Histories, Languages and Cultures, The University of Liverpool, Liverpool L69 3BX, UK
- Hassan Shabir
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan