1
Ma Q, Kobayashi E, Jin S, Masamune K, Suenaga H. 3D evaluation model of facial aesthetics based on multi-input 3D convolution neural networks for orthognathic surgery. Int J Med Robot 2024; 20:e2651. PMID: 38872448. DOI: 10.1002/rcs.2651.
Abstract
BACKGROUND Quantitative evaluation of facial aesthetics is an important but time-consuming procedure in orthognathic surgery, and existing 2D beauty-scoring models are designed mainly for entertainment, with little clinical impact. METHODS A deep-learning-based 3D evaluation model, DeepBeauty3D, was designed and trained on CT images from 133 patients. A customised image preprocessing module extracted the skeleton, soft tissue, and personal physical information from raw DICOM data, and the prediction network module employed a 3-input-2-output convolutional neural network (CNN) to receive these data and output aesthetic scores automatically. RESULTS The model predicted the skeleton and soft tissue scores with accuracies of 0.231 ± 0.218 (4.62%) and 0.100 ± 0.344 (2.00%), respectively, in 11.203 ± 2.824 s from raw CT images. CONCLUSION This study provides an end-to-end solution, based on a 3D CNN trained on real clinical data, for quantitatively evaluating facial aesthetics while considering three anatomical factors simultaneously, showing promising potential for reducing workload and bridging the surgeon-patient gap in aesthetic perspective.
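As an illustration of the 3-input-2-output fusion this abstract describes, here is a minimal NumPy forward pass. The branch widths, activations, and random weights are assumptions for the sketch, not DeepBeauty3D's actual layers (which are 3D CNNs over volumetric data):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    # Fully connected layer with ReLU activation.
    return np.maximum(w @ x + b, 0.0)

# Hypothetical feature vectors for one patient: skeleton and soft-tissue
# embeddings (e.g. flattened CNN features) plus personal physical info.
skeleton = rng.normal(size=64)
soft_tissue = rng.normal(size=64)
physical = rng.normal(size=4)        # e.g. age, sex, height, weight

# One branch per input, then fuse and regress the two aesthetic scores.
w_sk, b_sk = rng.normal(size=(16, 64)), np.zeros(16)
w_st, b_st = rng.normal(size=(16, 64)), np.zeros(16)
w_ph, b_ph = rng.normal(size=(16, 4)), np.zeros(16)
w_out, b_out = rng.normal(size=(2, 48)), np.zeros(2)

fused = np.concatenate([dense(skeleton, w_sk, b_sk),
                        dense(soft_tissue, w_st, b_st),
                        dense(physical, w_ph, b_ph)])
skeleton_score, soft_tissue_score = w_out @ fused + b_out
```

The key structural point is the late fusion: each modality keeps its own branch until the concatenation, after which a shared head emits both scores.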
Affiliation(s)
- Qingchuan Ma
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- School of Engineering Medicine, Beihang University, Beijing, China
- Etsuko Kobayashi
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Siao Jin
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ken Masamune
- Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Hideyuki Suenaga
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
2
Grillo R, Quinta Reis BA, Lima BC, Peral Ferreira Pinto LA, Melhem-Elias F. Frontal facial analysis of female celebrity attractiveness standards through artificial intelligence. J Craniomaxillofac Surg 2024; 52:722-726. PMID: 38580557. DOI: 10.1016/j.jcms.2024.03.023.
Abstract
The contemporary significance of celebrities' facial aesthetics underscores their heightened importance in shaping attractiveness standards. This retrospective study aimed to investigate, using artificial intelligence, the impact of frontal facial patterns on aesthetic canons in female celebrities, comparing different races and proposing standards for attractive faces. A Python-based algorithm was used to analyze frontal patterns and evaluate their influence on aesthetic norms in publicly accessible images of female global celebrities. Ten ideal angular or proportional measures were gathered from the literature and used as a benchmark for the analysis of facial attractiveness. Demographic characteristics were described statistically. A one-way ANOVA test was employed to assess data distribution, and differences in means between groups were evaluated using nonparametric independent-sample tests, with statistical significance set at p < 0.05. Facial analyses were performed for 115 female celebrities and revealed variations in facial features among races. The mean golden ratio differed, with African and Asian individuals showing lower ratios. Symmetry varied, with Latin and Caucasian faces considered the most symmetrical. The zygomatic-to-mandibular width ratio was similar across races, with a ratio close to 80% associated with more attractive faces. Differences in nose-to-mouth ratio, lips, alar base width, and chin angle were noted among race groups. The study concluded that, regardless of race, an attractive female face is characterized by specific ratios and angles; facial symmetry, though desirable, is not strictly necessary. Irrespective of racial background, an appealing female face features a zygomatic-to-mandibular width ratio nearing 80%, a mid-facial third slightly larger than the lower third, and a distinctive chin angle of approximately 138°, contributing to a trapezoidal facial shape. The findings offer valuable insights into attractiveness standards and the impact of frontal patterns on aesthetic canons in female celebrities.
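The zygomatic-to-mandibular width ratio highlighted above is simple to compute from frontal landmarks. The sketch below uses made-up landmark coordinates and assumes the ratio is taken as mandibular (bigonial) width over zygomatic (bizygomatic) width:

```python
import numpy as np

# Hypothetical frontal landmarks in image coordinates (x, y); the landmark
# choices mirror the ratios discussed above, but the values are invented.
zygion_r, zygion_l = np.array([20.0, 60.0]), np.array([120.0, 60.0])
gonion_r, gonion_l = np.array([30.0, 110.0]), np.array([110.0, 110.0])

zygomatic_width = np.linalg.norm(zygion_l - zygion_r)    # bizygomatic width
mandibular_width = np.linalg.norm(gonion_l - gonion_r)   # bigonial width

# A ratio near 80% was associated with more attractive faces in the study.
zm_ratio = mandibular_width / zygomatic_width
```

With these example coordinates the ratio comes out at exactly 0.8, the value the study associates with attractiveness.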
Affiliation(s)
- Ricardo Grillo
- Department of Oral and Maxillofacial Surgery, University of São Paulo School of Dentistry, São Paulo, SP, Brazil; Department of Oral and Maxillofacial Surgery, Brasília-DF, Brazil
- Bernardo Correia Lima
- Department of Oral and Maxillofacial Surgery, University of São Paulo School of Dentistry, São Paulo, SP, Brazil; Department of Oral and Maxillofacial Surgery and Diagnosis, Hospital da Boca, Santa Casa da Misericórdia Do Rio de Janeiro, RJ, Brazil
- Fernando Melhem-Elias
- Department of Oral and Maxillofacial Surgery, University of São Paulo School of Dentistry, São Paulo, SP, Brazil; Private Practice in Oral and Maxillofacial Surgery, São Paulo, SP, Brazil
3
Lee J, Kim D, Xu X, Kuang T, Gateno J, Yan P. Predicting optimal patient-specific postoperative facial landmarks for patients with craniomaxillofacial deformities. Int J Oral Maxillofac Surg 2024:S0901-5027(24)00149-8. PMID: 38782663. DOI: 10.1016/j.ijom.2024.05.004.
Abstract
Orthognathic surgery primarily corrects skeletal anomalies and malocclusion to enhance facial aesthetics. However, this traditional skeletal-driven approach may leave undesirable residual asymmetry. To address this issue, a soft tissue-driven planning methodology has been proposed, which estimates bone movements from the envisioned optimal facial appearance, thereby improving surgical accuracy and effectiveness. This study investigates the initial implementation phase of the soft tissue-driven approach: simulating the patient's ideal appearance by realigning distorted facial landmarks to an ideal state. The algorithm employs symmetrization and weighted optimization strategies, aligning projected optimal landmarks with standard cephalometric values for both facial symmetry and form, which are essential to facial aesthetics in orthognathic surgery. It also incorporates regularization to preserve the patient's facial characteristics. Validation through retrospective analysis of preoperative patients and normal subjects demonstrates the method's efficacy in achieving facial symmetry, particularly in the lower face, and in promoting a natural, harmonious contour. Adhering to soft tissue-driven principles, this novel approach shows promise in surpassing traditional methods, potentially leading to better facial outcomes and patient satisfaction in orthognathic surgery.
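A toy version of the symmetrization-with-regularization idea described above. The midplane at x = 0, the mirror-pair averaging, and the blending weight `alpha` are all illustrative assumptions, not the paper's actual weighted optimization:

```python
import numpy as np

def symmetrize(landmarks, mirror_pairs, alpha=0.7):
    """Pull each landmark toward the midsagittal reflection of its
    contralateral partner; (1 - alpha) acts as a regularizer that
    preserves the patient's original facial characteristics.
    Assumes the midsagittal plane is x = 0."""
    flip = np.array([-1.0, 1.0, 1.0])        # reflect across x = 0
    out = landmarks.copy()
    for i, j in mirror_pairs:
        out[i] = alpha * 0.5 * (landmarks[i] + landmarks[j] * flip) \
                 + (1 - alpha) * landmarks[i]
        out[j] = alpha * 0.5 * (landmarks[j] + landmarks[i] * flip) \
                 + (1 - alpha) * landmarks[j]
    return out

lm = np.array([[ 1.0, 0.0, 0.0],     # right-side landmark
               [-1.2, 0.0, 0.0]])    # its left-side partner, asymmetric
sym = symmetrize(lm, [(0, 1)], alpha=1.0)    # alpha=1: full symmetrization
```

With `alpha=1.0` the pair becomes exactly mirror-symmetric; smaller `alpha` trades symmetry against fidelity to the original face, which is the role regularization plays in the abstract.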
Affiliation(s)
- J Lee
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, USA
- D Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- X Xu
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, USA
- T Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- J Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, USA
- P Yan
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, USA
4
Knoedler S, Alfertshofer M, Simon S, Panayi AC, Saadoun R, Palackic A, Falkner F, Hundeshagen G, Kauke-Navarro M, Vollbach FH, Bigdeli AK, Knoedler L. Turn Your Vision into Reality-AI-Powered Pre-operative Outcome Simulation in Rhinoplasty Surgery. Aesthetic Plast Surg 2024:10.1007/s00266-024-04043-9. PMID: 38777929. DOI: 10.1007/s00266-024-04043-9.
Abstract
BACKGROUND The increasing demand and changing trends in rhinoplasty surgery emphasize the need for effective doctor-patient communication, for which Artificial Intelligence (AI) could be a valuable tool in managing patient expectations during pre-operative consultations. OBJECTIVE To develop an AI-based model to simulate realistic postoperative rhinoplasty outcomes. METHODS We trained a Generative Adversarial Network (GAN) using 3,030 rhinoplasty patients' pre- and postoperative images. One hundred and one study participants were presented with 30 pre-rhinoplasty patient photographs, each followed by an image pair consisting of the real postoperative image and the GAN-generated image, and were asked to identify the GAN-generated image. RESULTS The study sample (48 males, 53 females, mean age of 31.6 ± 9.0 years) correctly identified the GAN-generated images with an accuracy of 52.5 ± 14.3%. Male study participants were more likely to identify the AI-generated images than female study participants (55.4% versus 49.6%; p = 0.042). CONCLUSION We presented a GAN-based simulator for rhinoplasty outcomes which used pre-operative patient images to predict accurate representations that were not perceived as different from real postoperative outcomes. LEVEL OF EVIDENCE III This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
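For context on the 52.5% identification accuracy, a quick normal-approximation check (illustrative, not from the paper) shows that a rater at that level over 30 real-vs-GAN trials is statistically indistinguishable from 50% chance:

```python
import math

# Is a rater scoring 52.5% over 30 real-vs-GAN trials distinguishable
# from guessing at random?
n, p_chance, observed = 30, 0.5, 0.525

se = math.sqrt(p_chance * (1 - p_chance) / n)   # std. error under chance
z = (observed - p_chance) / se                  # z-statistic
# Two-sided p-value via the normal approximation to the binomial.
p_value = math.erfc(abs(z) / math.sqrt(2))
```

The z-statistic is well under 1 and the p-value far above 0.05, consistent with the paper's conclusion that the GAN outputs were not perceived as different from real postoperative photographs.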
Affiliation(s)
- Samuel Knoedler
- Division of Plastic Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Plastic and Hand Surgery, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Michael Alfertshofer
- Department of Plastic and Hand Surgery, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Department of Oromaxillofacial Surgery, Ludwig-Maximilians University Munich, Munich, Germany
- Siddharth Simon
- Department of Oromaxillofacial Surgery, Ludwig-Maximilians University Munich, Munich, Germany
- Adriana C Panayi
- Department of Hand-, Plastic and Reconstructive Surgery, Microsurgery, Burn Center, BG Center Ludwigshafen, University of Heidelberg, Ludwigshafen, Germany
- Department of Hand and Plastic Surgery, University of Heidelberg, Heidelberg, Germany
- Rakan Saadoun
- Department of Plastic Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Alen Palackic
- Department of Hand-, Plastic and Reconstructive Surgery, Microsurgery, Burn Center, BG Center Ludwigshafen, University of Heidelberg, Ludwigshafen, Germany
- Department of Hand and Plastic Surgery, University of Heidelberg, Heidelberg, Germany
- Florian Falkner
- Department of Hand-, Plastic and Reconstructive Surgery, Microsurgery, Burn Center, BG Center Ludwigshafen, University of Heidelberg, Ludwigshafen, Germany
- Department of Hand and Plastic Surgery, University of Heidelberg, Heidelberg, Germany
- Gabriel Hundeshagen
- Department of Hand-, Plastic and Reconstructive Surgery, Microsurgery, Burn Center, BG Center Ludwigshafen, University of Heidelberg, Ludwigshafen, Germany
- Department of Hand and Plastic Surgery, University of Heidelberg, Heidelberg, Germany
- Martin Kauke-Navarro
- Department of Surgery, Division of Plastic Surgery, Yale School of Medicine, New Haven, CT, USA
- Felix H Vollbach
- Department of Hand-, Plastic and Reconstructive Surgery, Microsurgery, Burn Center, BG Center Ludwigshafen, University of Heidelberg, Ludwigshafen, Germany
- Department of Hand and Plastic Surgery, University of Heidelberg, Heidelberg, Germany
- Amir K Bigdeli
- Department of Hand-, Plastic and Reconstructive Surgery, Microsurgery, Burn Center, BG Center Ludwigshafen, University of Heidelberg, Ludwigshafen, Germany
- Department of Hand and Plastic Surgery, University of Heidelberg, Heidelberg, Germany
- Leonard Knoedler
- Department of Surgery, Division of Plastic Surgery, Yale School of Medicine, New Haven, CT, USA
- Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany
5
Olejnik A, Verstraete L, Croonenborghs TM, Politis C, Swennen GRJ. The Accuracy of Three-Dimensional Soft Tissue Simulation in Orthognathic Surgery-A Systematic Review. J Imaging 2024; 10:119. PMID: 38786573. PMCID: PMC11122049. DOI: 10.3390/jimaging10050119.
Abstract
Three-dimensional soft tissue simulation has become a popular tool in virtual orthognathic surgery planning and patient-surgeon communication. To apply 3D soft tissue simulation software in routine clinical practice, both qualitative and quantitative validation of its accuracy are required. The objective of this study was to systematically review the literature on the accuracy of 3D soft tissue simulation in orthognathic surgery. The Web of Science, PubMed, Cochrane, and Embase databases were consulted for the literature search. The systematic review (SR) was conducted according to the PRISMA statement, and 40 articles fulfilled the inclusion and exclusion criteria. The QUADAS-2 tool was used for the risk-of-bias assessment of the selected studies. A mean error varying from 0.27 mm to 2.9 mm for 3D soft tissue simulations of the whole face was reported. In studies evaluating 3D soft tissue simulation accuracy after a Le Fort I osteotomy only, the upper lip and paranasal regions were reported to have the largest error, while after an isolated bilateral sagittal split osteotomy, the largest error was reported for the lower lip and chin regions. In studies evaluating simulation after bimaxillary osteotomy with or without genioplasty, the highest inaccuracy was reported at the level of the lips, predominantly the lower lip, the chin, and sometimes the paranasal regions. Due to the variability in study designs and analysis methods, a direct comparison was not possible. Therefore, based on the results of this SR, guidelines are proposed to systematize the workflow for evaluating the accuracy of 3D soft tissue simulations in orthognathic surgery in future studies.
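The per-region errors summarized above are typically computed as point-to-surface distances between the simulated and the actual postoperative scans. A simplified point-cloud version (assuming the two scans are already registered) looks like:

```python
import numpy as np

def mean_surface_error(simulated, actual):
    """Mean nearest-neighbour distance from each simulated point to the
    actual postoperative surface. A simplified sketch: real pipelines use
    dense meshes and register the scans before measuring."""
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(simulated[:, None, :] - actual[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# Tiny example point clouds (coordinates in mm, invented for illustration).
sim = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
act = np.array([[0.0, 0.5, 0.0], [1.0, 0.0, 0.0]])
err = mean_surface_error(sim, act)
```

Restricting `simulated` to the points of one anatomical region (lips, chin, paranasal) gives the region-wise errors the reviewed studies report.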
Affiliation(s)
- Anna Olejnik
- Division of Maxillofacial Surgery, Department of Surgery, AZ Sint-Jan, Ruddershove 10, 8000 Bruges, Belgium
- Maxillofacial Surgery Unit, Department of Head and Neck Surgery, Craniomaxillofacial Center for Children and Young Adults, Regional Specialized Children’s Hospital, ul. Zolnierska 18A, 10-561 Olsztyn, Poland
- Laurence Verstraete
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Tomas-Marijn Croonenborghs
- Division of Maxillofacial Surgery, Department of Surgery, AZ Sint-Jan, Ruddershove 10, 8000 Bruges, Belgium
- Constantinus Politis
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Gwen R. J. Swennen
- Division of Maxillofacial Surgery, Department of Surgery, AZ Sint-Jan, Ruddershove 10, 8000 Bruges, Belgium
6
Ghamsarian N, El-Shabrawi Y, Nasirihaghighi S, Putzgruber-Adamitsch D, Zinkernagel M, Wolf S, Schoeffmann K, Sznitman R. Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos. Sci Data 2024; 11:373. PMID: 38609405. PMCID: PMC11014927. DOI: 10.1038/s41597-024-03193-4.
Abstract
In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset, which addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of the annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available in Synapse.
Affiliation(s)
- Negin Ghamsarian
- Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
- Yosuf El-Shabrawi
- Department of Ophthalmology, Klinikum Klagenfurt, Klagenfurt, Austria
- Sahar Nasirihaghighi
- Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Sebastian Wolf
- Department of Ophthalmology, Inselspital, Bern, Switzerland
- Klaus Schoeffmann
- Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Raphael Sznitman
- Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
7
Fang X, Kim D, Xu X, Kuang T, Lampen N, Lee J, Deng HH, Liebschner MAK, Xia JJ, Gateno J, Yan P. Correspondence attention for facial appearance simulation. Med Image Anal 2024; 93:103094. PMID: 38306802. DOI: 10.1016/j.media.2024.103094.
Abstract
In orthognathic surgical planning for patients with jaw deformities, it is crucial to accurately simulate the changes in facial appearance that follow the bony movement. Compared with the traditional biomechanics-based methods like the finite-element method (FEM), which are both labor-intensive and computationally inefficient, deep learning-based methods offer an efficient and robust modeling alternative. However, current methods do not account for the physical relationship between facial soft tissue and bony structure, causing them to fall short in accuracy compared to FEM. In this work, we propose an Attentive Correspondence assisted Movement Transformation network (ACMT-Net) to predict facial changes by correlating facial soft tissue changes with bony movement through a point-to-point attentive correspondence matrix. To ensure efficient training, we also introduce a contrastive loss for self-supervised pre-training of the ACMT-Net with a k-Nearest Neighbors (k-NN) based clustering. Experimental results on patients with jaw deformities show that our proposed solution can achieve significantly improved computational efficiency over the state-of-the-art FEM-based method with comparable facial change prediction accuracy.
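The point-to-point attentive correspondence idea can be sketched as attention weights that map bony displacements onto facial points. Here a simple distance-based affinity stands in for ACMT-Net's learned feature embeddings (an assumption for the sketch):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_transfer(face_pts, bone_pts, bone_disp, tau=1.0):
    """Each facial point attends over the bony points and inherits a
    weighted average of their planned displacements. ACMT-Net learns the
    affinity from features; here it is just negative distance."""
    d = np.linalg.norm(face_pts[:, None, :] - bone_pts[None, :, :], axis=-1)
    attn = softmax(-d / tau, axis=1)     # (n_face, n_bone), rows sum to 1
    return attn @ bone_disp              # predicted facial displacement

# One facial point near the first of two bony points (invented values).
face = np.array([[0.0, 0.0, 1.0]])
bone = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
disp = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
pred = attentive_transfer(face, bone, disp, tau=0.5)
```

Because the facial point sits next to the bony point that moves, its predicted displacement is almost the full bony movement, which is the physical soft tissue-bone coupling the network is designed to capture.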
Affiliation(s)
- Xi Fang
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Xuanang Xu
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Nathan Lampen
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Jungwook Lee
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA; Weill Medical College, Cornell University, New York, NY, 10021, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA; Weill Medical College, Cornell University, New York, NY, 10021, USA
- Pingkun Yan
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
8
Gu Z, Wu Z, Dai N. Image generation technology for functional occlusal pits and fissures based on a conditional generative adversarial network. PLoS One 2023; 18:e0291728. PMID: 37725620. PMCID: PMC10508633. DOI: 10.1371/journal.pone.0291728.
Abstract
The occlusal surfaces of natural teeth have complex functional pit and fissure features that directly affect the occlusal relationship between the upper and lower teeth. An image generation technology for functional occlusal pits and fissures is proposed to address the lack of local detailed crown surface features in existing dental restoration methods. First, tooth depth image datasets were constructed using an orthogonal projection method. Second, optimization of the model parameters was guided by introducing a jaw-position spatial constraint together with L1 and perceptual loss functions. Finally, two image quality metrics were applied to evaluate the generated images, and the dental crown was deformed using the generated occlusal pits and fissures as constraints for comparison with expert data. The results showed that the images generated by the proposed network were of high quality and effectively restored the detailed pit and fissure features of the crown, with a standard deviation of 0.1802 mm relative to expert-designed tooth crown models.
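The combined objective described above (L1 plus perceptual loss) can be sketched as follows. The random-projection "feature extractor" and the weighting `lam` are stand-ins for the pretrained feature network and tuning used in practice:

```python
import numpy as np

def l1_loss(generated, target):
    # Pixel-wise mean absolute error: encourages overall fidelity.
    return np.abs(generated - target).mean()

def perceptual_loss(generated, target, feature_fn):
    # Distance in a feature space rather than pixel space: encourages
    # perceptually similar (e.g. texture-level) detail.
    return ((feature_fn(generated) - feature_fn(target)) ** 2).mean()

# A stand-in "feature extractor": real work uses a pretrained network;
# a fixed random projection keeps the sketch self-contained.
rng = np.random.default_rng(1)
proj = rng.normal(size=(8, 16))
feature_fn = lambda img: proj @ img.ravel()

gen = rng.normal(size=(4, 4))        # toy generated depth image
tgt = gen + 0.1                      # toy target, slightly offset

lam = 10.0                           # illustrative weighting, not the paper's
total = lam * l1_loss(gen, tgt) + perceptual_loss(gen, tgt, feature_fn)
```

In a cGAN this `total` term is added to the adversarial loss, so the generator is pushed toward both realistic and geometrically faithful pit-and-fissure images.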
Affiliation(s)
- Zhaodan Gu
- Jiangsu Automation Research Institute, Lianyungang, P.R. China
- Zhilei Wu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
- Ning Dai
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
9
Cheng M, Zhang X, Wang J, Yang Y, Li M, Zhao H, Huang J, Zhang C, Qian D, Yu H. Prediction of orthognathic surgery plan from 3D cephalometric analysis via deep learning. BMC Oral Health 2023; 23:161. PMID: 36934241. PMCID: PMC10024836. DOI: 10.1186/s12903-023-02844-z.
Abstract
BACKGROUND Preoperative planning of orthognathic surgery is indispensable for achieving an ideal surgical outcome with respect to occlusion and jaw position. However, orthognathic surgery planning is sophisticated and highly experience-dependent, requiring comprehensive consideration of facial morphology and occlusal function. This study aimed to investigate a robust, automatic deep-learning method for predicting the reposition vectors of the jawbones in an orthognathic surgery plan. METHODS A regression neural network named VSP transformer was developed based on the Transformer architecture. First, 3D cephalometric analysis was employed to quantify skeletal-facial morphology as input features. Next, the input features were weighted using pretrained results to minimize bias resulting from multicollinearity. Through encoder-decoder blocks, ten landmark-based reposition vectors of the jawbones were predicted. The permutation importance (PI) method was used to calculate each feature's contribution to the final prediction, providing interpretability for the proposed model. RESULTS The VSP transformer model was developed with 383 samples and clinically tested with 49 prospectively collected samples. The proposed model outperformed four classic regression models in prediction accuracy, with mean absolute errors (MAE) of 1.41 mm in the validation set and 1.34 mm in the clinical test set. The interpretability results were highly consistent with clinical knowledge and experience. CONCLUSIONS The developed model predicts the reposition vectors of an orthognathic surgery plan with high accuracy and good practical effectiveness in the clinic, and its good interpretability supports its reliability.
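Permutation importance as used above has a simple core: shuffle one input feature and measure how much the prediction error grows. A sketch with a stand-in linear model (not the VSP transformer):

```python
import numpy as np

def permutation_importance(model, X, y, feature, rng):
    """Increase in mean absolute error after shuffling one feature
    column; a feature the model relies on shows a large increase,
    an ignored feature shows none."""
    base = np.abs(model(X) - y).mean()
    Xp = X.copy()
    rng.shuffle(Xp[:, feature])          # break feature-target association
    return np.abs(model(Xp) - y).mean() - base

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
# Target depends only on feature 0 (plus small noise).
y = 3.0 * X[:, 0] + rng.normal(scale=0.01, size=200)
model = lambda X: 3.0 * X[:, 0]          # stand-in model, ignores 1 and 2

pi_used = permutation_importance(model, X, y, 0, rng)
pi_unused = permutation_importance(model, X, y, 1, rng)
```

`pi_used` is large while `pi_unused` is zero, which is the pattern the authors checked against clinical knowledge to argue the model's interpretability.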
Affiliation(s)
- Mengjia Cheng
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Xu Zhang
- Mechanical College, Shanghai Dianji University, Shanghai, 201306, China
- Jun Wang
- School of Computer & Computing Science, Hangzhou City University, Hangzhou, 310000, China
- Yang Yang
- Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, 200333, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Meng Li
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Hanjiang Zhao
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Jingyang Huang
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Chenglong Zhang
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Hongbo Yu
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
10
Ma L, Lian C, Kim D, Xiao D, Wei D, Liu Q, Kuang T, Ghanbari M, Li G, Gateno J, Shen SGF, Wang L, Shen D, Xia JJ, Yap PT. Bidirectional prediction of facial and bony shapes for orthognathic surgical planning. Med Image Anal 2023; 83:102644. PMID: 36272236. PMCID: PMC10445637. DOI: 10.1016/j.media.2022.102644.
Abstract
This paper proposes a deep learning framework to encode subject-specific transformations between facial and bony shapes for orthognathic surgical planning. Our framework involves a bidirectional point-to-point convolutional network (P2P-Conv) to predict the transformations between facial and bony shapes. P2P-Conv is an extension of the state-of-the-art P2P-Net and leverages dynamic point-wise convolution (i.e., PointConv) to capture local-to-global spatial information. Data augmentation is carried out in the training of P2P-Conv with multiple point subsets from the facial and bony shapes. During inference, network outputs generated for multiple point subsets are combined into a dense transformation. Finally, non-rigid registration using the coherent point drift (CPD) algorithm is applied to generate surface meshes based on the predicted point sets. Experimental results on real-subject data demonstrate that our method substantially improves the prediction of facial and bony shapes over state-of-the-art methods.
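The subset-aggregation step, in which predictions for multiple sampled point subsets are merged into one dense transformation, can be sketched as per-point averaging. Random subsets here stand in for P2P-Conv's actual sampling scheme:

```python
import numpy as np

def dense_from_subsets(n_points, subsets, subset_preds):
    """Merge per-subset displacement predictions into one dense field by
    averaging every point's predictions across the subsets that sampled
    it. Points never sampled keep a zero displacement."""
    accum = np.zeros((n_points, 3))
    counts = np.zeros(n_points)
    for idx, pred in zip(subsets, subset_preds):
        accum[idx] += pred
        counts[idx] += 1
    counts[counts == 0] = 1              # avoid division by zero
    return accum / counts[:, None]

# Toy example: 3 points, two overlapping subsets with their predictions.
subsets = [np.array([0, 1]), np.array([1, 2])]
preds = [np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]),
         np.array([[4.0, 0.0, 0.0], [3.0, 0.0, 0.0]])]
dense = dense_from_subsets(3, subsets, preds)
```

Point 1, sampled by both subsets, receives the average of its two predictions; in the paper, the dense field obtained this way then drives the CPD-based surface mesh generation.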
Collapse
Affiliation(s)
- Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dongming Wei
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
- Maryam Ghanbari
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Guoshi Li
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
- Steve G F Shen
- Shanghai Ninth Hospital, Shanghai Jiaotong University College of Medicine, Shanghai 200025, China
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
11
Jeong SH, Woo MW, Shin DS, Yeom HG, Lim HJ, Kim BC, Yun JP. Three-Dimensional Postoperative Results Prediction for Orthognathic Surgery through Deep Learning-Based Alignment Network. J Pers Med 2022; 12:998. [PMID: 35743782 PMCID: PMC9225553 DOI: 10.3390/jpm12060998] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 06/16/2022] [Accepted: 06/17/2022] [Indexed: 12/13/2022] Open
Abstract
To date, the diagnosis of dentofacial dysmorphosis has relied almost entirely on reference points, planes, and angles. This is time-consuming and strongly influenced by the practitioner's skill level. To address this problem, we investigated whether deep neural networks could predict the postoperative results of orthognathic surgery without relying on reference points, planes, or angles. We used three-dimensional point cloud data of the skulls of 269 patients. The proposed method predicts in two main stages. In stage 1, the skull is divided into six parts by a segmentation network. In stage 2, three-dimensional transformation parameters are predicted by an alignment network. The ground-truth transformation parameters are computed with the iterative closest point (ICP) algorithm, which aligns each preoperative part of the skull to the corresponding postoperative part. We compared PointNet, PointNet++, and PointConv as feature extractors for the alignment network. Moreover, we designed a new loss function that considers the distance error of the transformed points for better accuracy. The accuracy, mean intersection over union (mIoU), and Dice coefficient (DC) of the first segmentation network, which divides the skull into upper and lower parts, were 0.9998, 0.9994, and 0.9998, respectively. For the second segmentation network, which divides the lower part of the skull into five parts, they were 0.9949, 0.9900, and 0.9949, respectively. The mean absolute errors of the transverse, anterior-posterior, and vertical distances were 0.765 mm, 1.455 mm, and 1.392 mm for part 2 (maxilla); 1.069 mm, 1.831 mm, and 1.375 mm for part 3 (mandible); and 1.913 mm, 2.340 mm, and 1.257 mm for part 4 (chin). With this method, postoperative results can be predicted simply by entering computed tomography point cloud data.
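The accuracy, mIoU, and Dice values reported above follow the standard definitions for multi-class segmentation. As a hedged illustration (not code from the paper), these metrics can be computed from integer label arrays like this, where the convention of scoring an entirely absent class as 1.0 is an assumption:

```python
import numpy as np

def seg_metrics(pred, true, n_classes):
    """Overall accuracy, mean IoU, and mean Dice for integer label arrays."""
    ious, dices = [], []
    for c in range(n_classes):
        p, t = (pred == c), (true == c)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(inter / union if union else 1.0)      # absent class: score 1.0
        denom = p.sum() + t.sum()
        dices.append(2.0 * inter / denom if denom else 1.0)
    acc = float((pred == true).mean())
    return acc, float(np.mean(ious)), float(np.mean(dices))

# Toy example with two classes (e.g., upper vs. lower skull labels).
pred = np.array([0, 0, 1, 1])
true = np.array([0, 1, 1, 1])
acc, miou, dice = seg_metrics(pred, true, 2)
# acc = 0.75, mIoU = 7/12 ≈ 0.583, mean Dice = 11/15 ≈ 0.733
```

In practice these would be evaluated per part over full CT label volumes rather than on toy vectors.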
Affiliation(s)
- Seung Hyun Jeong
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- Min Woo Woo
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea
- Dong Sun Shin
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Han Gyeol Yeom
- Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Hun Jun Lim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Bong Chul Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Jong Pil Yun
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- KITECH School, University of Science and Technology, Daejeon 34113, Korea