1. Tran VD, Nguyen TN, Ballit A, Dao TT. Novel Baseline Facial Muscle Database Using Statistical Shape Modeling and In Silico Trials toward Decision Support for Facial Rehabilitation. Bioengineering (Basel) 2023; 10:737. [PMID: 37370668] [DOI: 10.3390/bioengineering10060737]
Abstract
Background and Objective: Facial palsy is a complex pathophysiological condition affecting the personal and professional lives of the patients involved. Sudden muscle weakness or paralysis needs to be rehabilitated to recover a symmetric and expressive face. Computer-aided decision support systems for facial rehabilitation have been developed; however, baseline facial muscle data for evaluating patient states and for guiding and optimizing the rehabilitation strategy are lacking. In the present study, we aimed to develop a novel baseline facial muscle database (static and dynamic behaviors) by coupling statistical shape modeling with an in silico trial approach. Methods: 10,000 virtual subjects (5000 males and 5000 females) were generated from a statistical shape modeling (SSM) head model. Skull and muscle networks were defined so that they statistically fit the head shapes. Two standard mimics, smiling and kissing, were generated. The muscle lengths in the neutral and mimic positions, and the corresponding muscle strains, were computed and recorded using the muscle insertion and attachment points on the animated head and skull meshes. For validation, five head and skull meshes were reconstructed from five computed tomography (CT) image sets. Skull and muscle networks were then predicted from the reconstructed head meshes. The predicted skull meshes were compared with the reconstructed skull meshes using mesh-to-mesh distance metrics, and the predicted muscle lengths were compared with those defined manually on the reconstructed head and skull meshes. Moreover, the computed muscle lengths and strains were compared with those in our previous studies and in the literature. Results: The skull prediction's median deviations from the CT-based models were 2.2236 mm, 2.1371 mm, and 2.1277 mm for the skull shape, skull mesh, and muscle attachment point regions, respectively. The median deviation of the muscle lengths was 4.8940 mm. The computed muscle strains were compatible with the values reported by our previous Kinect-based method and in the literature. Conclusions: Our novel facial muscle database opens new avenues for accurately evaluating the facial muscle states of facial palsy patients. Based on the evaluated results, specific facial mimic rehabilitation exercises can also be selected optimally to train the target muscles. In perspective, the database of computed muscle lengths and strains will be integrated into our existing clinical decision support system for automatically detecting malfunctioning muscles and proposing patient-specific rehabilitation serious games.
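The abstract computes muscle strains from muscle lengths in the neutral and mimic positions but does not spell out the formula. A minimal sketch, assuming the usual engineering strain over a muscle path represented as an ordered polyline of 3D points (function names are illustrative, not from the paper):

```python
import numpy as np

def polyline_length(points):
    """Total length of a muscle path given as an ordered array of 3D points."""
    points = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def muscle_strain(neutral_points, mimic_points):
    """Engineering strain between neutral and mimic positions:
    (L_mimic - L_neutral) / L_neutral. Positive = stretched, negative = shortened."""
    l0 = polyline_length(neutral_points)
    l1 = polyline_length(mimic_points)
    return (l1 - l0) / l0
```

For example, a muscle path that lengthens from 50 mm to 55 mm during a mimic yields a strain of 0.1.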
Affiliation(s)
- Vi-Do Tran
- Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education, Thu Duc City 71300, Ho Chi Minh City, Vietnam
- Tan-Nhu Nguyen
- School of Engineering, Eastern International University, Thu Dau Mot City 75100, Binh Duong Province, Vietnam
- Abbass Ballit
- Univ. Lille, CNRS, Centrale Lille, UMR 9013 - LaMcube - Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000 Lille, France
- Tien-Tuan Dao
- Univ. Lille, CNRS, Centrale Lille, UMR 9013 - LaMcube - Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000 Lille, France
2. Sarhan FR, Olivetto M, Ben Mansour K, Neiva C, Colin E, Choteau B, Marie JP, Testelin S, Marin F, Dakpé S. Quantified analysis of facial movement: A reference for clinical applications. Clin Anat 2023; 36:492-502. [PMID: 36625484] [DOI: 10.1002/ca.23999]
Abstract
Most techniques for evaluating unilateral impairments in facial movement yield subjective measurements. The objective of the present study was to define a reference dataset and develop a visualization tool for clinical assessments. In this prospective study, a motion capture system was used to quantify facial movements in 30 healthy adults and 2 patients. We analyzed the displacements of 105 reflective markers placed on the participants' faces during five movements (M1-M5). For each marker, the primary endpoint, analyzed with analysis of variance, was the maximum amplitude of displacement from the static position (M0). The measurement precision was 0.1 mm. Significant marker displacements were identified for M1-M5, and displacement patterns were defined. The patients and age-matched healthy participants were compared with regard to the amplitude of displacement. We created a new type of radar plot to represent the diagnosis visually and facilitate effective communication between medical professionals. In proof-of-concept experiments, we collected quantitative data on patients with facial palsy and created patient-specific radar plots. Our new protocol for clinical facial motion capture ("quantified analysis of facial movement," QAFM) was accurate and should thus facilitate the long-term clinical follow-up of patients with facial palsy. To address the limitations of comparisons with the healthy side, we created a dataset of healthy facial movements; our method might therefore be applicable to other conditions in which movements on one or both sides of the face are impaired. The patient-specific radar plot enables clinicians to read and understand the results rapidly.
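The primary endpoint above, the maximum amplitude of a marker's displacement from its static position, can be sketched as follows (a minimal illustration assuming each marker's trajectory is recorded as per-frame 3D coordinates; names are not from the paper):

```python
import numpy as np

def max_displacement(static_pos, trajectory):
    """Maximum amplitude of a marker's displacement from its static (M0)
    position over a recorded movement; trajectory has shape (n_frames, 3)."""
    d = np.linalg.norm(np.asarray(trajectory, float) - np.asarray(static_pos, float),
                       axis=1)
    return float(d.max())
```

Computing this per marker and per movement (M1-M5) yields the amplitude values that the study's radar plots visualize against the healthy reference dataset.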
Affiliation(s)
- François-Régis Sarhan
- UR 7516 CHIMERE, Université de Picardie Jules Verne, Amiens, France; Maxillofacial Surgery Department, CHU Amiens-Picardie, Amiens, France; Institut Faire Faces, CHU Amiens-Picardie, Amiens, France; Physiotherapy School, CHU Amiens-Picardie, Amiens, France
- Matthieu Olivetto
- UR 7516 CHIMERE, Université de Picardie Jules Verne, Amiens, France; Maxillofacial Surgery Department, CHU Amiens-Picardie, Amiens, France; Institut Faire Faces, CHU Amiens-Picardie, Amiens, France
- Khalil Ben Mansour
- UMR CNRS 7338, Biomécanique et Bioingénierie, Université de Technologie de Compiègne, Sorbonne Université, Compiègne, France
- Cécilia Neiva
- Maxillofacial Surgery Department, Hôpital Necker APHP, Paris, France
- Emilien Colin
- UR 7516 CHIMERE, Université de Picardie Jules Verne, Amiens, France; Maxillofacial Surgery Department, CHU Amiens-Picardie, Amiens, France; Institut Faire Faces, CHU Amiens-Picardie, Amiens, France
- Baptiste Choteau
- Maxillofacial Surgery Department, CHU Amiens-Picardie, Amiens, France; UMR CNRS 7338, Biomécanique et Bioingénierie, Université de Technologie de Compiègne, Sorbonne Université, Compiègne, France
- Jean-Paul Marie
- Otorhinolaryngology and Head and Neck Surgery, CHU Rouen Normandie, Hôpital Charles-Nicolle, Rouen, France; EA3830 GRHV, Université de Rouen Normandie, Rouen, France
- Sylvie Testelin
- UR 7516 CHIMERE, Université de Picardie Jules Verne, Amiens, France; Maxillofacial Surgery Department, CHU Amiens-Picardie, Amiens, France; Institut Faire Faces, CHU Amiens-Picardie, Amiens, France
- Frédéric Marin
- UMR CNRS 7338, Biomécanique et Bioingénierie, Université de Technologie de Compiègne, Sorbonne Université, Compiègne, France
- Stéphanie Dakpé
- UR 7516 CHIMERE, Université de Picardie Jules Verne, Amiens, France; Maxillofacial Surgery Department, CHU Amiens-Picardie, Amiens, France; Institut Faire Faces, CHU Amiens-Picardie, Amiens, France
3. Fast 3D Face Reconstruction from a Single Image Using Different Deep Learning Approaches for Facial Palsy Patients. Bioengineering (Basel) 2022; 9:619. [PMID: 36354529] [PMCID: PMC9687570] [DOI: 10.3390/bioengineering9110619]
Abstract
The 3D reconstruction of an accurate face model is essential for delivering reliable feedback for clinical decision support. Medical imaging and specific depth sensors are accurate but not suitable for an easy-to-use and portable tool. The recent development of deep learning (DL) models opens new possibilities for 3D shape reconstruction from a single image. However, 3D face shape reconstruction for facial palsy patients remains a challenge and has not yet been investigated. The contribution of the present study is to apply these state-of-the-art methods to reconstruct 3D face shape models of facial palsy patients in natural and mimic postures from a single image. Three different methods (the 3D Basel Morphable Model and two 3D deep pre-trained models) were applied to a dataset of two healthy subjects and two facial palsy patients. The reconstructed outcomes were compared to 3D shapes reconstructed from Kinect-driven and MRI-based information. The best mean error of the reconstructed face relative to the Kinect-driven reconstructed shape was 1.5 ± 1.1 mm, and the best error relative to the MRI-based shapes was 1.9 ± 1.4 mm. Based on these results, several ways of increasing the reconstruction accuracy can be discussed before the procedure is used on patients with facial palsy or other facial disorders. The present study opens new avenues for the fast reconstruction of 3D face shapes of facial palsy patients from a single image. In perspective, the best DL method will be implemented into our computer-aided decision support system for facial disorders.
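The mean ± std errors reported above come from comparing reconstructed surfaces against reference shapes. A minimal sketch of such a comparison, assuming both surfaces are available as 3D vertex clouds and using nearest-neighbour point distances (a simple stand-in for a full mesh-to-mesh metric):

```python
import numpy as np

def mesh_error(pred_vertices, ref_vertices):
    """Mean and std of nearest-neighbour distances from a predicted vertex
    cloud to a reference one. Brute-force pairwise distances are fine for
    small clouds; a KD-tree would be preferable for dense meshes."""
    pred = np.asarray(pred_vertices, float)
    ref = np.asarray(ref_vertices, float)
    d = np.linalg.norm(pred[:, None, :] - ref[None, :, :], axis=2).min(axis=1)
    return float(d.mean()), float(d.std())
```

Reporting `mean ± std` of these distances gives error figures in the same form as the 1.5 ± 1.1 mm and 1.9 ± 1.4 mm values above.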
4
|
"Introduction of a low-cost and automated four-dimensional assessment system of the face.". Plast Reconstr Surg 2022; 150:639e-643e. [PMID: 35791287 DOI: 10.1097/prs.0000000000009453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
SUMMARY: Existing automated objective grading systems either fail to consider the face's complex 3D morphology or suffer from poor feasibility and usability. Consumer Red Green Blue Depth (RGB-D) sensors and smartphone-integrated 3D hardware can inexpensively collect detailed four-dimensional facial data in real time but have yet to be incorporated into a practical system. This study aims to evaluate the feasibility of a proof-of-concept automated 4D facial assessment system using an RGB-D sensor (termed OpenFAS) for use in a standard clinical environment. The study was performed on normal adult volunteers and patients with facial nerve palsy (FNP). The setup consists of an Intel RealSense SR300 connected to a laptop running the OpenFAS application. The subject sequentially mimics the facial expressions shown on screen; each frame is landmarked, and automatic anthropometric calculations are performed. Any errors during each session were noted. Landmarking accuracy was estimated by comparing the ground-truth positions of manually annotated landmarks to those placed automatically. Eighteen participants were included in the study: nine healthy participants and nine patients with FNP. Each session was standardized at approximately 106 seconds, and 61.8% of landmarks were automatically annotated within approximately 1.575 mm of their ground-truth locations. Our findings support that OpenFAS is usable and feasible in routine settings, laying the groundwork for a facial assessment system that addresses the shortcomings of existing tools. However, the iteration of OpenFAS presented in this study is still nascent; future work, including improvements to the landmarking accuracy, the analysis components, and the RGB-D technology, is required before clinical application.
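The landmarking-accuracy figure above (61.8% of landmarks within ~1.575 mm of ground truth) is a fraction-within-tolerance metric. A minimal sketch, assuming automatic and manual landmarks are paired arrays of 3D points (names are illustrative):

```python
import numpy as np

def fraction_within(auto_pts, manual_pts, tol_mm):
    """Fraction of automatically placed landmarks lying within tol_mm of
    their manually annotated ground-truth positions (paired by index)."""
    d = np.linalg.norm(np.asarray(auto_pts, float) - np.asarray(manual_pts, float),
                       axis=1)
    return float((d <= tol_mm).mean())
```

Evaluated over all sessions with `tol_mm = 1.575`, this metric yields a value of the same kind as the study's 61.8%.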
5
|
Nguyen DP, Ho Ba Tho MC, Dao TT. Reinforcement learning coupled with finite element modeling for facial motion learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106904. [PMID: 35636356 DOI: 10.1016/j.cmpb.2022.106904] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 05/14/2022] [Accepted: 05/22/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE: Facial palsy patients and patients with facial transplantation show abnormal facial motion due to altered facial muscle function and nerve damage. Computer-aided systems and physics-based models have been developed to provide objective and quantitative information. However, the predictive capacity of these solutions for exploring facial motion patterns with emerging properties is still limited. The present study aims to couple reinforcement learning with finite element modeling for facial motion learning and prediction. METHODS: A novel modeling workflow for learning facial motion was developed. A physics-based model of the face within the ArtiSynth modeling platform was used. An information exchange protocol was proposed to link reinforcement learning with the rigid multi-body dynamics outcomes. Two reinforcement learning algorithms (deep deterministic policy gradient (DDPG) and twin-delayed DDPG (TD3)) were implemented to drive the simulations of symmetry-oriented and smile movements. Numerical outcomes were compared to experimental observations (Bosphorus database) for evaluation and validation purposes. RESULTS: After more than 100 episodes of exploring the environment, the agent starts to learn from previous trials, and it can find the optimal policy after more than 300 episodes of training. Regarding the symmetry-oriented motion, the muscle excitations predicted by the trained agent increase the reward from R = -2.06 to R = -0.23, which corresponds to a ~89% improvement in the symmetry value of the face. For the smile-oriented motion, two points at the edge of the mouth move up by 0.35 cm, which is within the range of movements estimated from the Bosphorus database (0.4 ± 0.32 cm). CONCLUSIONS: The present study explored muscle excitation patterns by coupling reinforcement learning with a detailed finite element model of the face. We developed, for the first time, a novel coupling scheme that integrates finite element simulation into the reinforcement learning process for facial motion learning. In perspective, this workflow will be applied to facial palsy and facial transplantation patients to guide and optimize the functional rehabilitation program.
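The negative rewards quoted above (R = -2.06 to R = -0.23) suggest a reward that penalizes facial asymmetry and approaches 0 for a symmetric face. The abstract does not define the reward function, so the following is only an assumed sketch: mirror the right-side landmarks across the sagittal plane (x = 0) and score the negative mean distance to their left-side counterparts.

```python
import numpy as np

def symmetry_reward(left_pts, right_pts):
    """Hypothetical symmetry-oriented reward: mirror the right-side landmarks
    across the x = 0 plane and return the negative mean distance to their
    left-side counterparts, so a perfectly symmetric face scores 0."""
    left = np.asarray(left_pts, float)
    mirrored = np.asarray(right_pts, float) * np.array([-1.0, 1.0, 1.0])
    return -float(np.linalg.norm(left - mirrored, axis=1).mean())
```

In an RL loop, the agent's muscle excitations would move the landmark positions, and maximizing this reward would drive the face toward symmetry.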
Affiliation(s)
- Duc-Phong Nguyen
- Université de technologie de Compiègne, CNRS, Biomechanics and Bioengineering, Centre de recherche Royallieu, CS 60 319-60 203, Compiègne Cedex, France
- Marie-Christine Ho Ba Tho
- Université de technologie de Compiègne, CNRS, Biomechanics and Bioengineering, Centre de recherche Royallieu, CS 60 319-60 203, Compiègne Cedex, France
- Tien-Tuan Dao
- Univ. Lille, CNRS, Centrale Lille, UMR 9013 - LaMcube - Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000, Lille, France
6. Newly Prepared 129Xe Nanoprobe-Based Functional Magnetic Resonance Imaging to Evaluate the Efficacy of Acupuncture on Intractable Peripheral Facial Paralysis. Contrast Media Mol Imaging 2022; 2022:3318223. [PMID: 35350701] [PMCID: PMC8930243] [DOI: 10.1155/2022/3318223]
Abstract
This study focused on the application value of newly prepared 129Xe nanoprobe-based functional magnetic resonance imaging (fMRI) in exploring the mechanism of acupuncture treatment for intractable facial paralysis, and was expected to provide a theoretical reference for the mechanism of acupuncture in the treatment of facial paralysis. In this study, 30 patients with intractable peripheral facial paralysis (experimental group) and 30 healthy volunteers (control group) were selected. All patients were scanned with the newly prepared 129Xe nanoprobe-based fMRI technology, and brain functional status data and rating data were collected. The fMRI scans showed that multiple brain regions were activated in the experimental group before treatment, among which the postcentral region, insula, and thalamus were positively activated, while the precuneus, superior frontal gyrus, and other areas showed signal reduction. After treatment, several brain regions also showed signal enhancement. Comparisons within the healthy control group likewise showed activation in multiple brain regions, including the lenticular nucleus, inferior frontal gyrus, and superior temporal gyrus, while in the experimental group no signal changes were detected in these regions. At the same time, comparison of fMRI images of patients with intractable peripheral facial paralysis before and after treatment showed that the cerebellar tonsil, superior frontal gyrus, cerebellar culmen, and other brain areas were activated, all showing positive activation. After treatment, the average House-Brackmann (H-B) and Sunnybrook scores of the experimental group were 3.82 and 51, respectively, a significant change compared with those before treatment (P < 0.05). In conclusion, the newly prepared 129Xe nanoprobe-based fMRI scan can reflect the functional changes of the cerebral cortex after acupuncture. Acupuncture may achieve its therapeutic effect in the treatment of intractable facial paralysis by promoting functional reorganization of the cerebral cortex.
7
|
Ballit A, Dao TT. HyperMSM: A new MSM variant for efficient simulation of dynamic soft-tissue deformations. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 216:106659. [PMID: 35108626 DOI: 10.1016/j.cmpb.2022.106659] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 01/11/2022] [Accepted: 01/22/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE: Fast, accurate, and stable simulation of soft tissue deformation is a challenging task. The Mass-Spring Model (MSM) is a popular method for this purpose owing to its simple implementation and its potential to provide fast dynamic simulations. However, accurately simulating a non-linear material within the mass-spring framework is still challenging. The objective of the present study is to develop and evaluate a new, efficient hyperelastic Mass-Spring Model formulation, called HyperMSM, to simulate Neo-Hookean deformable materials. METHODS: Our novel HyperMSM formulation is applicable to both tetrahedral and hexahedral mesh configurations and is compatible with the original projective dynamics solver. In particular, the proposed MSM variant includes springs with variable rest lengths and a volume conservation constraint. Two applications (a transtibial residual limb and a skeletal muscle) were conducted. RESULTS: Compared to finite element simulations, the obtained results show RMSE ranges of [2.8%-5.2%] and [0.46%-5.4%] for the stress-strain and volumetric responses, respectively, for strains ranging from -50% to +100%. The displacement error range in our transtibial residual limb simulation is around [0.01 mm-0.7 mm]. The RMSE range of relative nodal displacements for the skeletal psoas muscle model is [0.4%-1.7%]. CONCLUSIONS: Our novel HyperMSM formulation allows the hyperelastic behavior of soft tissues to be described accurately and efficiently within the mass-spring framework. In perspective, our formulation will be enhanced with electric behavior toward a multi-physical soft tissue mass-spring modeling framework, and coupling with an augmented reality environment will then be performed.
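The core ingredient of any MSM is the per-spring force; HyperMSM's novelty is that rest lengths vary to reproduce hyperelastic behavior. The following is only a generic Hooke-type sketch, with the variable rest length represented by passing the current value as an argument (it does not reproduce the paper's actual rest-length update rule or volume constraint):

```python
import numpy as np

def spring_force(p_i, p_j, rest_length, stiffness):
    """Force on node i from a spring to node j. A variable rest length is
    modeled simply by passing the current rest length in; HyperMSM updates
    this quantity to match a Neo-Hookean response."""
    d = np.asarray(p_j, float) - np.asarray(p_i, float)
    length = float(np.linalg.norm(d))
    return stiffness * (length - rest_length) * d / length
```

Summing such forces over each node's incident springs, plus a volume conservation term, gives the nodal forces integrated by the dynamic solver.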
Affiliation(s)
- Abbass Ballit
- Univ. Lille, CNRS, Centrale Lille, UMR 9013 - LaMcube - Laboratoire de Mécanique, Multiphysique, Multiéchelle, 59655 Villeneuve d'Ascq Cedex, F-59000, Lille, France
- Tien-Tuan Dao
- Univ. Lille, CNRS, Centrale Lille, UMR 9013 - LaMcube - Laboratoire de Mécanique, Multiphysique, Multiéchelle, 59655 Villeneuve d'Ascq Cedex, F-59000, Lille, France
8. Nguyen TN, Tran VD, Nguyen HQ, Nguyen DP, Dao TT. Enhanced head-skull shape learning using statistical modeling and topological features. Med Biol Eng Comput 2022; 60:559-581. [PMID: 35023072] [DOI: 10.1007/s11517-021-02483-y]
Abstract
Skull prediction from the head is a challenging issue on the way toward a cost-effective therapeutic solution for facial disorders. This issue was initially studied in our previous work using full head-to-skull relationship learning. However, the head-skull thickness topology is locally shaped, especially in the face region. Thus, the objective of the present study was to enhance our head-to-skull prediction by using local topological features for training and prediction. Head and skull feature points were sampled on 329 head and skull models from computed tomography (CT) images and classified into back and facial topologies. Head-to-skull relations were trained using partial least squares regression (PLSR) models separately in the two topologies, and a hyperparameter tuning process was conducted to select optimal parameters for each training model. A new skull could thus be generated so that its shape statistically fits the target head. Mean errors of the skulls predicted with the topology-based learning method were better than those obtained with the non-topology-based method. After tenfold cross-validation, the mean error was improved by 36.96% for the skull shapes and by 14.17% for the skull models. The mean error in the facial skull region was especially improved, by 4.98%, and the mean errors in the muscle attachment regions and the back skull regions were improved by 11.71% and 25.74%, respectively. Moreover, using the enhanced learning strategy, the errors (mean ± SD) for the best and worst prediction cases range from 1.1994 ± 1.1225 mm (median: 0.9036, coefficient of multiple determination (R2): 0.997274) to 3.6972 ± 2.4118 mm (median: 3.9089, R2: 0.999614) for the predicted skull shapes, and from 2.0172 ± 2.0454 mm (median: 1.2999, R2: 0.995959) to 4.0227 ± 2.6098 mm (median: 3.9998, R2: 0.998577) for the predicted skull models. The present study showed that more detailed information on the head-skull shape leads to better accuracy for skull prediction from the head. In particular, local topological features in the back and face regions of interest should be considered toward a better learning strategy for the head-to-skull prediction problem. In perspective, this enhanced learning strategy was used to update our clinical decision support system for facial disorders, and a new class of learning methods, called geometric deep learning, will be studied.
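The head-to-skull learning above regresses flattened skull feature-point coordinates onto head feature-point coordinates. The paper uses PLSR; the sketch below substitutes plain least squares as a dependency-free stand-in (PLSR adds latent-variable reduction on top of this linear map, and the per-topology split and hyperparameter tuning are omitted):

```python
import numpy as np

def fit_head_to_skull(heads, skulls):
    """Fit a linear head-to-skull mapping by least squares.
    heads: (n_subjects, n_head_coords); skulls: (n_subjects, n_skull_coords).
    Plain least squares stands in for the paper's PLSR models."""
    X = np.hstack([heads, np.ones((heads.shape[0], 1))])  # append intercept
    W, *_ = np.linalg.lstsq(X, skulls, rcond=None)
    return W

def predict_skull(W, head):
    """Predict flattened skull feature points from flattened head points."""
    return np.append(head, 1.0) @ W
```

In the paper's setting, two such models would be trained, one for the back topology and one for the facial topology, and their predictions merged into the final skull shape.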
Affiliation(s)
- Tan-Nhu Nguyen
- Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
- Vi-Do Tran
- Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
- Duc-Phong Nguyen
- Université de technologie de Compiègne, CNRS, Biomechanics and Bioengineering, Centre de Recherche Royallieu, CS 60 319- 60 203, Compiègne Cedex, France
- Tien-Tuan Dao
- Univ. Lille, CNRS, Centrale Lille, UMR 9013 - LaMcube - Laboratoire de Mécanique, Multiphysique, Multiéchelle, 59655 Villeneuve d'Ascq Cedex, F-59000, Lille, France