1. Kim S, Koh HK, Lee H, Shin HJ. Deep-Learning Method for the Diagnosis and Classification of Orbital Blowout Fracture Based on Computed Tomography. J Oral Maxillofac Surg 2025:S0278-2391(25)00243-5. PMID: 40349723. DOI: 10.1016/j.joms.2025.04.010.
Abstract
BACKGROUND Blowout fractures (BOFs) are common injuries. Accurate and rapid diagnosis based on computed tomography (CT) is important for proper management. Deep-learning techniques can help accelerate the diagnostic process and support timely and accurate management, particularly in environments with limited medical resources. PURPOSE The purpose of this retrospective in-silico cohort study was to develop deep-learning models for detecting and classifying BOF using facial CT. STUDY DESIGN, SETTING, AND SAMPLE We conducted a retrospective analysis of facial CT scans from patients diagnosed with BOF involving the medial wall, orbital floor, or both at Konkuk University Hospital between December 2005 and April 2024. Patients with other facial fractures or fractures involving the superior or lateral orbital walls were excluded. PREDICTOR VARIABLE The predictor variables were the outputs of the deep-learning models in each model's designated categories: the predicted 1) fracture status (normal or BOF), 2) fracture location (medial, inferior, or inferomedial), and 3) fracture timing (acute or old). MAIN OUTCOME VARIABLES The main outcomes were the human assessments serving as the gold standard, including the presence or absence of BOF, fracture location, and fracture timing. COVARIATES The covariates were age and sex. ANALYSES Model performance was evaluated using the following metrics: 1) accuracy, 2) positive predictive value (PPV), 3) sensitivity, 4) F1 score (the harmonic mean of PPV and sensitivity), and 5) area under the receiver operating characteristic curve (AUC) for the classification models. RESULTS This study analyzed 1,264 facial CT images from 233 patients, with multiple coronal slices obtained from each patient (mean age: 37.5 ± 17.9 years; 79.8% male, 186 subjects). Based on these data, 3 deep-learning models were developed for 1) BOF detection (accuracy 99.5%, PPV 99.2%, sensitivity 99.6%, F1 score 99.4%, AUC 0.9999), 2) BOF location (medial, inferior, or inferomedial; accuracy 97.4%, PPV 92.7%, sensitivity 89.0%, F1 score 90.8%), and 3) BOF timing (accuracy 96.8%, PPV 90.1%, sensitivity 89.7%, F1 score 89.9%). CONCLUSIONS AND RELEVANCE Deep-learning models developed with Neuro-T (Neurocle Inc, Seoul, Republic of Korea) can reliably diagnose and classify BOF on CT, distinguishing acute from old fractures and aiding clinical decision-making.
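As a reading aid, the sketch below shows how the reported binary-classification metrics (accuracy, PPV, sensitivity, F1 score as the harmonic mean of PPV and sensitivity, and AUC) can be computed from a detector's outputs with scikit-learn. It is illustrative only; the labels and probability scores are invented and are not the study's data.

```python
# Illustrative only: computing the metrics reported for a binary BOF detector.
# Labels and scores below are made up, not the study's data.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])        # 1 = blowout fracture, 0 = normal
y_score = np.array([0.98, 0.03, 0.91, 0.88, 0.12, 0.76, 0.40, 0.05])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)               # thresholded predictions

print("Accuracy   :", accuracy_score(y_true, y_pred))
print("PPV        :", precision_score(y_true, y_pred))  # positive predictive value
print("Sensitivity:", recall_score(y_true, y_pred))
print("F1 score   :", f1_score(y_true, y_pred))         # harmonic mean of PPV and sensitivity
print("AUC        :", roc_auc_score(y_true, y_score))
```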
Affiliation(s)
- Suyoung Kim
- Candidate, Konkuk University School of Medicine, Chungju, Republic of Korea
- Hyeon Kang Koh
- Candidate, Konkuk University School of Medicine, Chungju, Republic of Korea; Department of Radiation Oncology, Konkuk University School of Medicine and Medical Center, Seoul, Republic of Korea
- Hyungwoo Lee
- Candidate, Konkuk University School of Medicine, Chungju, Republic of Korea; Department of Ophthalmology, Konkuk University Medical Center, Seoul, Republic of Korea; Research Institute of Medical Science, Konkuk University School of Medicine, Seoul, Republic of Korea
- Hyun Jin Shin
- Candidate, Konkuk University School of Medicine, Chungju, Republic of Korea; Department of Ophthalmology, Konkuk University Medical Center, Seoul, Republic of Korea; Research Institute of Medical Science, Konkuk University School of Medicine, Seoul, Republic of Korea; Institute of Biomedical Science & Technology, Konkuk University, Seoul, Republic of Korea.
2. Shhadeh A, Daoud S, Redenski I, Oren D, Zoabi A, Kablan F, Srouji S. The Contribution of Real-Time Artificial Intelligence Segmentation in Maxillofacial Trauma Emergencies. Diagnostics (Basel) 2025; 15:984. PMID: 40310392. PMCID: PMC12025859. DOI: 10.3390/diagnostics15080984.
Abstract
Background/Objectives: Maxillofacial trauma poses significant challenges in emergency medicine, requiring rapid interventions to minimize morbidity and mortality. Traditional segmentation methods are time-consuming and error-prone, particularly in high-pressure settings. Real-time artificial intelligence (AI) segmentation offers a transformative solution to streamline workflows and enhance clinical decision-making. This study evaluated the potential of real-time AI segmentation to improve diagnostic efficiency and support decision-making in maxillofacial trauma emergencies. Methods: This study evaluated 53 trauma patients with moderate to severe maxillofacial injuries treated over 16 months at Galilee Medical Center. AI-assisted segmentation using Materialise Mimics Viewer and Romexis Smart Tool was compared to semi-automated methods in terms of time and accuracy. The clinical impact of AI on diagnosis and treatment planning was also assessed. Results: AI segmentation was significantly faster than semi-automated methods (9.87 vs. 63.38 min) with comparable accuracy (DSC: 0.92-0.93 for AI; 0.95 for semi-automated). AI tools provided rapid 3D visualization of key structures, enabling faster decisions for airway management, fracture assessment, and foreign body localization. Specific trauma cases illustrate the potential of real-time AI segmentation to enhance the efficiency of diagnosis, treatment planning, and overall management of maxillofacial emergencies. The highest clinical benefit was observed in complex cases, such as orbital injuries or combined mandible and midface fractures. Conclusions: Real-time AI segmentation has the potential to enhance efficiency and clinical utility in managing maxillofacial trauma by providing precise, actionable data in time-sensitive scenarios. However, the expertise of oral and maxillofacial surgeons remains critical, with AI serving as a complementary tool to aid, rather than replace, clinical decision-making.
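The accuracy comparison above relies on the Dice similarity coefficient (DSC). The following minimal sketch, assuming boolean masks of identical shape, shows how DSC between an AI-generated and a reference segmentation is typically computed; the arrays are synthetic placeholders, not the study's data.

```python
# Minimal sketch: Dice similarity coefficient (DSC) between two binary
# segmentation masks, the overlap metric quoted above (0.92-0.95).
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2*|A & B| / (|A| + |B|) for boolean volumes of identical shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example with random volumes (real use: AI mask vs semi-automated mask).
rng = np.random.default_rng(0)
ai_mask = rng.random((64, 64, 64)) > 0.5
ref_mask = rng.random((64, 64, 64)) > 0.5
print(f"DSC = {dice_coefficient(ai_mask, ref_mask):.3f}")
```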
Affiliation(s)
- Amjad Shhadeh
- Department of Oral and Maxillofacial Surgery, Galilee College of Dental Sciences, Galilee Medical Center, Nahariya 2210001, Israel
- The Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
- Shadi Daoud
- Department of Oral and Maxillofacial Surgery, Galilee College of Dental Sciences, Galilee Medical Center, Nahariya 2210001, Israel
- The Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
- Idan Redenski
- Department of Oral and Maxillofacial Surgery, Galilee College of Dental Sciences, Galilee Medical Center, Nahariya 2210001, Israel
- The Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
- Daniel Oren
- Department of Oral and Maxillofacial Surgery, Galilee College of Dental Sciences, Galilee Medical Center, Nahariya 2210001, Israel
- The Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
- Adeeb Zoabi
- Department of Oral and Maxillofacial Surgery, Galilee College of Dental Sciences, Galilee Medical Center, Nahariya 2210001, Israel
- The Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
- Fares Kablan
- Department of Oral and Maxillofacial Surgery, Galilee College of Dental Sciences, Galilee Medical Center, Nahariya 2210001, Israel
- The Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
- Samer Srouji
- Department of Oral and Maxillofacial Surgery, Galilee College of Dental Sciences, Galilee Medical Center, Nahariya 2210001, Israel
- The Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
3. Sillmann YM, Monteiro JLGC, Eber P, Baggio AMP, Peacock ZS, Guastaldi FPS. Empowering surgeons: will artificial intelligence change oral and maxillofacial surgery? Int J Oral Maxillofac Surg 2025; 54:179-190. PMID: 39341693. DOI: 10.1016/j.ijom.2024.09.004.
Abstract
Artificial intelligence (AI) can enhance the precision and efficiency of diagnostics and treatments in oral and maxillofacial surgery (OMS), leveraging advanced computational technologies to mimic intelligent human behaviors. The study aimed to examine the current state of AI in the OMS literature and highlight the urgent need for further research to optimize AI integration in clinical practice and enhance patient outcomes. A scoping review of OMS-related journals was conducted, focusing on AI applications in OMS. PubMed was searched using the terms "artificial intelligence", "convolutional networks", "neural networks", "machine learning", "deep learning", and "automation". Ninety articles were analyzed and classified into the following subcategories: pathology, orthognathic surgery, facial trauma, temporomandibular joint disorders, dentoalveolar surgery, dental implants, craniofacial deformities, reconstructive surgery, aesthetic surgery, and complications. There was a marked increase in AI-related studies published after 2019, which accounted for 95.6% of the articles reviewed. This surge reflects growing interest in AI and its potential in OMS. Among the studies, the primary uses of AI in OMS were in pathology (e.g., lesion detection, lymph node metastasis detection) and orthognathic surgery (e.g., surgical planning through facial bone segmentation). The studies predominantly employed convolutional neural networks (CNNs) and artificial neural networks (ANNs) for classification tasks, potentially improving clinical outcomes.
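For illustration, the search strategy described above could be scripted against the NCBI E-utilities esearch endpoint as sketched below; the combined query string and result limit are assumptions, not the authors' exact search.

```python
# Hypothetical sketch of scripting the PubMed search described above via the
# NCBI E-utilities esearch endpoint; the query string is an assumption, not
# the authors' exact search strategy.
import requests

terms = ('("artificial intelligence" OR "convolutional networks" OR '
         '"neural networks" OR "machine learning" OR "deep learning" OR '
         '"automation") AND "oral and maxillofacial surgery"')

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": terms, "retmode": "json", "retmax": 200},
    timeout=30,
)
result = resp.json()["esearchresult"]
print("Hits:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```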
Affiliation(s)
- Y M Sillmann
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- J L G C Monteiro
- Wellman Center for Photomedicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- P Eber
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- A M P Baggio
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- Z S Peacock
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- F P S Guastaldi
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA.
4. Xu J, Wei Y, Jiang S, Zhou H, Li Y, Chen X. Intelligent surgical planning for automatic reconstruction of orbital blowout fracture using a prior adversarial generative network. Med Image Anal 2025; 99:103332. PMID: 39321669. DOI: 10.1016/j.media.2024.103332.
Abstract
Orbital blowout fracture (OBF) is an injury that can result in herniation of orbital soft tissue, enophthalmos, and even severe visual dysfunction. Given the complex and diverse types of orbital wall fractures, reconstructing the orbital wall presents a significant challenge in OBF repair surgery. Accurate surgical planning is crucial in addressing this issue. However, there is currently a lack of efficient and precise surgical planning methods. Therefore, we propose an intelligent surgical planning method for automatic OBF reconstruction based on a prior adversarial generative network (GAN). Firstly, an automatic generation method for symmetric prior anatomical knowledge (SPAK) based on spatial transformation is proposed to guide the reconstruction of the fractured orbital wall. Secondly, a reconstruction network based on a SPAK-guided GAN is proposed to achieve accurate and automatic reconstruction of the fractured orbital wall. Building upon this, a new surgical planning workflow based on the proposed reconstruction network and 3D Slicer software is developed to simplify the operational steps. Finally, the proposed surgical planning method was successfully applied in OBF repair surgery, verifying its reliability. Experimental results demonstrate that the proposed reconstruction network achieves relatively accurate automatic reconstruction of the orbital wall, with an average DSC of 92.35 ± 2.13% and a 95% Hausdorff distance of 0.59 ± 0.23 mm, markedly outperforming the compared state-of-the-art networks. Additionally, the proposed surgical planning workflow reduces the traditional planning time from an average of 25 min and 17.8 s to just 1 min and 35.1 s, greatly enhancing planning efficiency. The proposed surgical planning method therefore shows strong potential for application in OBF repair surgery.
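The reconstruction accuracy above is quoted as DSC and a 95% Hausdorff distance (HD95). The sketch below computes HD95 between two surface point sets under one common convention (the 95th percentile of the pooled symmetric nearest-neighbour distances); the point clouds are synthetic and the convention may differ from the authors' implementation.

```python
# Sketch of a 95% Hausdorff distance (HD95) between two surface point sets,
# using one common convention: the 95th percentile of the pooled symmetric
# nearest-neighbour distances. Point clouds below are synthetic.
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """points_* are (N, 3) arrays of surface coordinates in millimetres."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # A -> B nearest-neighbour distances
    d_ba, _ = cKDTree(points_a).query(points_b)  # B -> A nearest-neighbour distances
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

rng = np.random.default_rng(1)
reconstructed = rng.random((500, 3)) * 40.0               # predicted orbital wall surface
reference = reconstructed + rng.normal(0, 0.3, (500, 3))  # slightly perturbed ground truth
print(f"HD95 = {hd95(reconstructed, reference):.2f} mm")
```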
Affiliation(s)
- Jiangchang Xu
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200241, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yining Wei
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai 200011, China
- Shuanglin Jiang
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200241, China
- Huifang Zhou
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai 200011, China
- Yinwei Li
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai 200011, China.
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200241, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200241, China.
5. Gernandt S, Aymon R, Scolozzi P. Assessing the accuracy of artificial intelligence in the diagnosis and management of orbital fractures: Is this the future of surgical decision-making? JPRAS Open 2024; 42:275-283. PMID: 39498287. PMCID: PMC11532732. DOI: 10.1016/j.jpra.2024.09.014.
Abstract
Orbital fractures are common, but their management remains controversial. The aim of the present study was to assess the accuracy of an advanced artificial intelligence (AI) model, ChatGPT-4, in surgical decision-making, with a focus on orbital fracture diagnosis and management. A retrospective observational analysis was conducted involving a sample of 30 orbital fracture cases diagnosed and managed at the Geneva University Hospital, Switzerland. The process involved creating patient vignettes from anonymised medical records and presenting them to ChatGPT-4 in three stages: initial diagnosis, refinement with radiological reports and surgical intervention decisions. The performance of ChatGPT-4 in providing the appropriate surgical strategy was evaluated through measures of sensitivity, specificity, positive predictive value and negative predictive value, with the actual management used as the benchmark for accuracy. The AI model correctly diagnosed the fracture in 100% of the cases. It demonstrated a specificity of 100% and a sensitivity of 57% for treatment recommendation, indicating its effectiveness in recognising patients who truly required an intervention; however, it demonstrated moderate performance in correctly identifying cases that were better suited for conservative treatment. Cohen's Kappa statistic for interrater reliability of the choice of treatment was 0.44, indicating a weak level of agreement between ChatGPT and the physician's choice of treatment. The study demonstrates that AI tools such as ChatGPT-4 can offer a high degree of accuracy in diagnosing orbital fractures and recognising patients requiring surgical intervention; however, it performed less satisfactorily in correctly identifying patients who were better suited for non-surgical treatment.
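As a reference for the reported measures, the sketch below derives sensitivity, specificity, PPV, NPV, and Cohen's kappa from a 2 x 2 table of AI recommendations versus actual management; the counts are invented and do not reproduce the study's results.

```python
# Illustrative sketch: operating characteristics and Cohen's kappa for a
# binary surgery-vs-conservative recommendation. Counts are invented and do
# not reproduce the study's data.
tp, fp, fn, tn = 9, 2, 4, 15   # hypothetical 2x2 table (AI recommendation vs actual management)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

n = tp + fp + fn + tn
p_observed = (tp + tn) / n
# Chance agreement estimated from the marginal totals of both raters.
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Sensitivity={sensitivity:.2f} Specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} Cohen's kappa={kappa:.2f}")
```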
Affiliation(s)
- Steven Gernandt
- Division of Oral and Maxillofacial Surgery, Department of Surgery, University of Geneva & University Hospitals of Geneva, Geneva, Switzerland
- Romain Aymon
- Division of Oral and Maxillofacial Surgery, Department of Surgery, Faculty of Medicine, University of Geneva & University Hospitals of Geneva, Geneva, Switzerland
- Paolo Scolozzi
- Division of Oral and Maxillofacial Surgery, Department of Surgery, Faculty of Medicine, University of Geneva & University Hospitals of Geneva, Geneva, Switzerland
6. Morita D, Kawarazaki A, Soufi M, Otake Y, Sato Y, Numajiri T. Automatic detection of midfacial fractures in facial bone CT images using deep learning-based object detection models. J Stomatol Oral Maxillofac Surg 2024; 125:101914. PMID: 38750725. DOI: 10.1016/j.jormas.2024.101914.
Abstract
BACKGROUND Midfacial fractures are among the most frequent facial fractures. Surgery is recommended within 2 weeks of injury, but this time frame is often extended because the fracture is missed on diagnostic imaging in the busy emergency medicine setting. Using deep learning technology, which has progressed markedly in various fields, we attempted to develop a system for the automatic detection of midfacial fractures. The purpose of this study was to use this system to diagnose fractures accurately and rapidly, with the intention of benefiting both patients and emergency room physicians. METHODS One hundred computed tomography images that included midfacial fractures (e.g., maxillary, zygomatic, nasal, and orbital fractures) were prepared. In each axial image, the fracture area was enclosed by a rectangular region to create the annotation data. Eighty images were randomly assigned to the training dataset (3,736 slices) and 20 to the validation dataset (883 slices). Training and validation were performed using the Single Shot MultiBox Detector (SSD) and version 8 of You Only Look Once (YOLOv8), which are object detection algorithms. RESULTS The performance indicators for SSD and YOLOv8 were, respectively: precision, 0.872 and 0.871; recall, 0.823 and 0.775; F1 score, 0.846 and 0.82; average precision, 0.899 and 0.769. CONCLUSIONS The use of deep learning techniques allowed the automatic detection of midfacial fractures with good accuracy and high speed. The system developed in this study is promising for automated detection of midfacial fractures and may provide a quick and accurate solution for emergency medical care and other settings.
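For orientation, the sketch below shows how a YOLOv8 detector can be fine-tuned and evaluated with the ultralytics package; the dataset configuration file and hyperparameters are placeholders and do not reflect the authors' setup.

```python
# Hedged sketch of fine-tuning and validating a YOLOv8 detector with the
# ultralytics package; "midface_fractures.yaml" and the hyperparameters are
# placeholders, not the configuration used in the study.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")              # COCO-pretrained weights
model.train(
    data="midface_fractures.yaml",      # hypothetical dataset config (train/val paths, class names)
    epochs=100,
    imgsz=640,
)
metrics = model.val()                   # precision/recall/mAP on the validation split
print(metrics.box.map50)                # mean average precision at IoU 0.50
```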
Affiliation(s)
- Daiki Morita
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan; Department of Plastic and Reconstructive Surgery, Tokai University School of Medicine, Kanagawa, Japan.
- Ayako Kawarazaki
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Mazen Soufi
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshito Otake
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Toshiaki Numajiri
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
7. Li W, Song H, Ai D, Shi J, Wang Y, Wu W, Yang J. Semi-supervised segmentation of orbit in CT images with paired copy-paste strategy. Comput Biol Med 2024; 171:108176. PMID: 38401453. DOI: 10.1016/j.compbiomed.2024.108176.
Abstract
The segmentation of the orbit in computed tomography (CT) images plays a crucial role in facilitating the quantitative analysis of orbital decompression surgery for patients with thyroid-associated ophthalmopathy (TAO). However, the task of orbit segmentation, particularly in postoperative images, remains challenging due to the significant shape variation and limited amount of labeled data. In this paper, we present a two-stage semi-supervised framework for the automatic segmentation of the orbit in both preoperative and postoperative images, which consists of a pseudo-label generation stage and a semi-supervised segmentation stage. A paired copy-paste strategy is also introduced to effectively combine features extracted from both preoperative and postoperative images, thereby strengthening the network's ability to discern changes in orbital boundaries. More specifically, we employ a random cropping technique to transfer regions from labeled preoperative images (foreground) onto unlabeled postoperative images (background), as well as from unlabeled preoperative images (foreground) onto labeled postoperative images (background). Note that each pair of preoperative and postoperative images belongs to the same patient. The semi-supervised segmentation network (stage 2) processes the two mixed images using a combination of mixed supervisory signals from pseudo labels (stage 1) and ground truth. The training and testing of the proposed method were conducted on a CT dataset obtained from the Eye Hospital of Wenzhou Medical University. The experimental results demonstrate that the proposed method achieves a mean Dice similarity coefficient (DSC) of 91.92% with only 5% labeled data, surpassing the performance of the current state-of-the-art method by 2.4%.
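The copy-paste step described above can be sketched in a few lines of NumPy: a random rectangular region from one image of a patient is pasted onto the paired image of the same patient, with labels (or pseudo labels) mixed over the same region. The shapes, crop size, and 2D setting below are assumptions for illustration, not the authors' implementation.

```python
# Simplified sketch of the paired copy-paste idea: a random rectangular region
# from one image of a patient is pasted onto the paired image of the same
# patient, and the labels/pseudo labels are mixed over the same region.
# Shapes, crop size, and the 2D setting are assumptions for illustration.
import numpy as np

def paired_copy_paste(fg_img, fg_lbl, bg_img, bg_lbl, crop=(64, 64), rng=None):
    """Paste a random crop of (fg_img, fg_lbl) into (bg_img, bg_lbl)."""
    rng = rng or np.random.default_rng()
    h, w = fg_img.shape
    ch, cw = crop
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    mixed_img = bg_img.copy()
    mixed_lbl = bg_lbl.copy()
    mixed_img[y:y + ch, x:x + cw] = fg_img[y:y + ch, x:x + cw]
    mixed_lbl[y:y + ch, x:x + cw] = fg_lbl[y:y + ch, x:x + cw]
    return mixed_img, mixed_lbl

rng = np.random.default_rng(0)
pre_img, pre_lbl = rng.random((256, 256)), rng.integers(0, 2, (256, 256))    # labeled preoperative slice
post_img, post_lbl = rng.random((256, 256)), rng.integers(0, 2, (256, 256))  # pseudo-labeled postoperative slice
mixed_img, mixed_lbl = paired_copy_paste(pre_img, pre_lbl, post_img, post_lbl, rng=rng)
```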
Affiliation(s)
- Wentao Li
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China.
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China.
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China.
- Jieliang Shi
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325027, China.
- Yuanyuan Wang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China.
- Wencan Wu
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325027, China.
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China.