1
Jablonski RY, Malhotra T, Shaw D, Coward TJ, Shuweihdi F, Bojke C, Pavitt SH, Nattress BR, Keeling AJ. Comparison of trueness and repeatability of facial prosthesis design using a 3D morphable model approach, traditional computer-aided design methods, and conventional manual sculpting techniques. J Prosthet Dent 2025;133:598-607. PMID: 38616155; DOI: 10.1016/j.prosdent.2024.03.006.
Abstract
STATEMENT OF PROBLEM: Manually sculpting a wax pattern of a facial prosthesis is a time-, skill-, and resource-intensive process. Computer-aided design (CAD) methods have been proposed as a substitute for manual sculpting, but these techniques can still require high technical or artistic abilities. Three-dimensional morphable models (3DMMs) could semi-automate facial prosthesis CAD. Systematic comparisons of different design approaches are needed.
PURPOSE: The purpose of this study was to compare the trueness and repeatability of replacing facial features with 3 methods of facial prosthesis design involving 3DMM, traditional CAD, and conventional manual sculpting techniques.
MATERIAL AND METHODS: Fifteen participants without facial defects were scanned with a structured light scanner. The facial meshes were manipulated to generate artificial orbital, nasal, or combined defects. Three methods of facial prosthesis design were compared for the 15 participants and repeated to produce 5 of each design for 2 participants. For the 3DMM approach, the Leeds face model informed the designs in a statistically meaningful way. For the traditional CAD methods, designs were created by using mirroring techniques or from a nose model database. For the conventional manual sculpting techniques, wax patterns were manually created on 3D-printed full-face baseplates. For analysis, the unedited facial feature was the standard. The unsigned distance was calculated from each of the several thousand vertices on the unedited facial feature to the closest point on the external surface of the prosthesis prototype. The mean absolute error was calculated, and a Friedman test was performed (α=.05).
RESULTS: The median mean absolute error was 1.13 mm for the 3DMM group, 1.54 mm for the traditional CAD group, and 1.49 mm for the manual sculpting group, with no statistically significant differences among groups (P=.549). Boxplots showed substantial differences in the distribution of mean absolute error among groups, with the 3DMM group showing the greatest consistency. The 3DMM approach produced repeat designs with the lowest coefficient of variation.
CONCLUSIONS: The 3DMM approach shows potential as a semi-automated method of CAD. Further clinical research is planned to explore the 3DMM approach in a feasibility trial.
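The trueness metric described above (the unsigned distance from every vertex of the unedited facial feature to the closest point on the prosthesis surface, summarized as a mean absolute error) can be sketched in a few lines. This is a minimal illustration, not the authors' code: it approximates the closest surface point by the nearest prosthesis vertex, and the function and array names are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def mean_absolute_error(reference_vertices, prosthesis_vertices):
        """Mean unsigned nearest-point distance (mm) from reference vertices (N, 3)
        to a sampled prosthesis surface (M, 3)."""
        tree = cKDTree(prosthesis_vertices)            # spatial index over the prosthesis samples
        distances, _ = tree.query(reference_vertices)  # unsigned distance to the nearest sample
        return float(np.mean(distances))               # distances are already non-negative

    # Synthetic example: a reference region and a noisy "prosthesis" copy of it
    rng = np.random.default_rng(0)
    reference = rng.uniform(0.0, 50.0, size=(5000, 3))
    prosthesis = reference + rng.normal(scale=1.0, size=reference.shape)
    print(f"MAE: {mean_absolute_error(reference, prosthesis):.2f} mm")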
Affiliation(s)
- Rachael Y Jablonski
- Specialty Registrar in Restorative Dentistry and NIHR Doctoral Fellow, Department of Restorative Dentistry, School of Dentistry, University of Leeds, Leeds, England, UK.
- Taran Malhotra
- Lead Specialist Maxillofacial Prosthetist, Maxillofacial Prosthetics Laboratory, Liverpool University Hospitals NHS Foundation Trust, Aintree University Hospital, Liverpool, England, UK
- Daniel Shaw
- Maxillofacial Laboratory Manager, Maxillofacial Department, Chesterfield Royal Hospital, Calow, Chesterfield, England, UK
- Trevor J Coward
- Professor and Honorary Consultant in Maxillofacial and Craniofacial Rehabilitation, Academic Centre of Reconstructive Science, Faculty of Dentistry, Oral and Craniofacial Sciences, King's College London, London, England, UK
- Farag Shuweihdi
- Lecturer in Medical Statistics and Health Data Science, Dental Translational and Clinical Research Unit, School of Dentistry, University of Leeds, Leeds, England, UK
- Chris Bojke
- Professor of Health Economics, Academic Unit of Health Economics, Leeds Institute of Health Sciences, University of Leeds, Leeds, England, UK
- Sue H Pavitt
- Professor of Translational and Applied Health Research, Dental Translational and Clinical Research Unit, School of Dentistry, University of Leeds, Leeds, England, UK
- Brian R Nattress
- Emeritus Professor of Restorative Dentistry, Department of Restorative Dentistry, School of Dentistry, University of Leeds, Leeds, England, UK
- Andrew J Keeling
- Professor of Prosthodontics and Digital Dentistry, Department of Restorative Dentistry, School of Dentistry, University of Leeds, Leeds, England, UK
2
Schlesinger O, Kundu R, Isaev D, Choi JY, Goetz SM, Turner DA, Sapiro G, Peterchev AV, Di Martino JM. Scalp surface estimation and head registration using sparse sampling and 3D statistical models. Comput Biol Med 2024;178:108689. PMID: 38875907; PMCID: PMC11265975; DOI: 10.1016/j.compbiomed.2024.108689.
Abstract
Registering the head and estimating the scalp surface are important for various biomedical procedures, including those using neuronavigation to localize brain stimulation or recording. However, neuronavigation systems rely on manually-identified fiducial head targets and often require a patient-specific MRI for accurate registration, limiting adoption. We propose a practical technique capable of inferring the scalp shape and use it to accurately register the subject's head. Our method does not require anatomical landmark annotation or an individual MRI scan, yet achieves accurate registration of the subject's head and estimation of its surface. The scalp shape is estimated from surface samples easily acquired using existing pointer tools, and registration exploits statistical head model priors. Our method allows for the acquisition of non-trivial shapes from a limited number of data points while leveraging their object class priors, surpassing the accuracy of common reconstruction and registration methods using the same tools. The proposed approach is evaluated in a virtual study with head MRI data from 1152 subjects, achieving an average reconstruction root-mean-square error of 2.95 mm, which outperforms a common neuronavigation technique by 2.70 mm. We also characterize the error under different conditions and provide guidelines for efficient sampling. Furthermore, we demonstrate and validate the proposed method on data from 50 subjects collected with conventional neuronavigation tools and setup, obtaining an average root-mean-square error of 2.89 mm; adding landmark-based registration improves this error to 2.63 mm. The simulation and experimental results support the proposed method's effectiveness with or without landmark annotation, highlighting its broad applicability.
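As a rough illustration of how a statistical shape prior can turn a few surface samples into a full scalp estimate, the sketch below solves for the coefficients of a PCA head model by regularized least squares. It is not the authors' pipeline: it assumes the samples are already rigidly aligned to the model and matched to model vertex indices, and all names are hypothetical.

    import numpy as np

    def fit_head_model(mean_shape, components, sample_idx, sample_points, reg=1e-2):
        """Estimate a full surface from sparse samples.
        mean_shape: (V, 3) mean mesh; components: (K, V, 3) PCA modes;
        sample_idx: (S,) model vertex indices matched to the S measured points;
        sample_points: (S, 3) measured 3D points, rigidly aligned to the model."""
        A = components[:, sample_idx, :].reshape(components.shape[0], -1).T  # (3S, K) design matrix
        b = (sample_points - mean_shape[sample_idx]).reshape(-1)             # (3S,) residual to explain
        n_modes = A.shape[1]
        w = np.linalg.solve(A.T @ A + reg * np.eye(n_modes), A.T @ b)        # Tikhonov-regularized least squares
        estimate = mean_shape + np.tensordot(w, components, axes=(0, 0))     # (V, 3) full reconstruction
        return w, estimate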
Affiliation(s)
- Oded Schlesinger
- Department of Electrical and Computer Engineering, Duke University, Durham, 27708, NC, USA.
- Raj Kundu
- Department of Psychiatry & Behavioral Sciences, Duke University, Durham, 27710, NC, USA; Boston University School of Medicine, Boston, 02118, MA, USA
- Dmitry Isaev
- Department of Biomedical Engineering, Duke University, Durham, 27708, NC, USA
- Jessica Y Choi
- Department of Psychiatry & Behavioral Sciences, Duke University, Durham, 27710, NC, USA
- Stefan M Goetz
- Department of Electrical and Computer Engineering, Duke University, Durham, 27708, NC, USA; Department of Psychiatry & Behavioral Sciences, Duke University, Durham, 27710, NC, USA; Department of Neurosurgery, Duke University, Durham, 27710, NC, USA
- Dennis A Turner
- Department of Neurosurgery, Duke University, Durham, 27710, NC, USA
- Guillermo Sapiro
- Department of Electrical and Computer Engineering, Duke University, Durham, 27708, NC, USA; Department of Biomedical Engineering, Duke University, Durham, 27708, NC, USA
- Angel V Peterchev
- Department of Electrical and Computer Engineering, Duke University, Durham, 27708, NC, USA; Department of Psychiatry & Behavioral Sciences, Duke University, Durham, 27710, NC, USA; Department of Neurosurgery, Duke University, Durham, 27710, NC, USA; Department of Biomedical Engineering, Duke University, Durham, 27708, NC, USA
- J Matias Di Martino
- Department of Electrical and Computer Engineering, Duke University, Durham, 27708, NC, USA; Universidad Católica del Uruguay, Montevideo, 11600, Uruguay
3
Zhang J, Zhou K, Luximon Y, Lee TY, Li P. MeshWGAN: Mesh-to-Mesh Wasserstein GAN With Multi-Task Gradient Penalty for 3D Facial Geometric Age Transformation. IEEE Trans Vis Comput Graph 2024;30:4927-4940. PMID: 37307186; DOI: 10.1109/tvcg.2023.3284500.
Abstract
As the metaverse develops rapidly, 3D facial age transformation is attracting increasing attention, which may bring many potential benefits to a wide variety of users, e.g., 3D aging figures creation, 3D facial data augmentation and editing. Compared with 2D methods, 3D face aging is an underexplored problem. To fill this gap, we propose a new mesh-to-mesh Wasserstein generative adversarial network (MeshWGAN) with a multi-task gradient penalty to model a continuous bi-directional 3D facial geometric aging process. To the best of our knowledge, this is the first architecture to achieve 3D facial geometric age transformation via real 3D scans. As previous image-to-image translation methods cannot be directly applied to the 3D facial mesh, which is totally different from 2D images, we built a mesh encoder, decoder, and multi-task discriminator to facilitate mesh-to-mesh transformations. To mitigate the lack of 3D datasets containing children's faces, we collected scans from 765 subjects aged 5-17 in combination with existing 3D face databases, which provided a large training dataset. Experiments have shown that our architecture can predict 3D facial aging geometries with better identity preservation and age closeness compared to 3D trivial baselines. We also demonstrated the advantages of our approach via various 3D face-related graphics applications.
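For context, the gradient penalty that a Wasserstein GAN with gradient penalty adds to its critic loss can be written in a few lines of PyTorch. This is only the standard (single-task) penalty applied to batches of mesh vertices; the paper's multi-task variant and its mesh encoder, decoder, and discriminator are not reproduced here, and the discriminator argument is a placeholder.

    import torch

    def gradient_penalty(discriminator, real, fake):
        """real, fake: (B, N, 3) batches of mesh vertex tensors on the same device."""
        eps = torch.rand(real.size(0), 1, 1, device=real.device)    # per-sample interpolation weight
        mixed = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
        scores = discriminator(mixed)                               # critic scores for interpolated meshes
        grads, = torch.autograd.grad(scores.sum(), mixed, create_graph=True)
        grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
        return ((grad_norm - 1.0) ** 2).mean()                      # penalize deviation from unit gradient norm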
4
Schlesinger O, Kundu R, Goetz S, Sapiro G, Peterchev AV, Di Martino JM. Automatic Neurocranial Landmarks Detection from Visible Facial Landmarks Leveraging 3D Head Priors. Clinical Image-Based Procedures, Fairness of AI in Medical Imaging, and Ethical and Philosophical Issues in Medical Imaging: 12th International Workshop, CLIP 2023, 1st International Workshop, FAIMI 2023, and 2nd International Workshop, ... 2023;14242:12-20. PMID: 38155840; PMCID: PMC10752036; DOI: 10.1007/978-3-031-45249-9_2.
Abstract
The localization and tracking of neurocranial landmarks is essential in modern medical procedures, e.g., transcranial magnetic stimulation (TMS). However, state-of-the-art treatments still rely on the manual identification of head targets and require setting retroreflective markers for tracking. This limits the applicability and scalability of TMS approaches, making them time-consuming, dependent on expensive hardware, and prone to errors when retroreflective markers drift from their initial position. To overcome these limitations, we propose a scalable method capable of inferring the position of points of interest on the scalp, e.g., the International 10-20 System's neurocranial landmarks. In contrast with existing approaches, our method does not require human intervention or markers; head landmarks are estimated leveraging visible facial landmarks, optional head size measurements, and statistical head model priors. We validate the proposed approach on ground truth data from 1,150 subjects, for which facial 3D and head information is available; our technique achieves a localization RMSE of 2.56 mm on average, which is of the same order as reported by high-end techniques in TMS. Our implementation is available at https://github.com/odedsc/ANLD.
5
Foti S, Koo B, Stoyanov D, Clarkson MJ. 3D Generative Model Latent Disentanglement via Local Eigenprojection. Comput Graph Forum 2023;42:e14793. PMID: 37915466; PMCID: PMC10617979; DOI: 10.1111/cgf.14793.
Abstract
Designing realistic digital humans is extremely complex. Most data-driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural-network-based generative models of 3D head and body meshes. Encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple the attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state-of-the-art, but also maintain good generation capabilities with training times comparable to the vanilla implementations of the models. Our code and pre-trained models are available at github.com/simofoti/LocalEigenprojDisentangled.
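To give a concrete flavour of the spectral quantity involved, the sketch below builds a mesh graph Laplacian, takes its leading eigenvectors, and projects per-vertex displacements onto them. It is a generic eigenprojection, not the paper's localized loss; the function and its arguments are assumptions.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    def laplacian_eigenprojection(vertices, faces, displacements, k=10):
        """vertices: (V, 3); faces: (F, 3) int; displacements: (V, 3).
        Returns the (k, 3) spectral coefficients of the displacements."""
        V = vertices.shape[0]
        i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
        j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
        W = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(V, V)).tocsr()
        W = W + W.T
        W.data[:] = 1.0                                              # binary symmetric adjacency
        L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W          # combinatorial graph Laplacian
        _, eigvecs = eigsh(L, k=k, sigma=-0.01)                      # k smallest eigenpairs via shift-invert
        return eigvecs.T @ displacements                             # project displacements onto the eigenbasis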
Affiliation(s)
- Bongjin Koo
- University College London, London, UK
- University of California, Santa Barbara, Santa Barbara, USA
6
Lattas A, Moschoglou S, Ploumpis S, Gecer B, Ghosh A, Zafeiriou S. AvatarMe++: Facial Shape and BRDF Inference With Photorealistic Rendering-Aware GANs. IEEE Trans Pattern Anal Mach Intell 2022;44:9269-9284. PMID: 34748477; DOI: 10.1109/tpami.2021.3125598.
Abstract
Over the last years, with the advent of Generative Adversarial Networks (GANs), many face analysis tasks have accomplished astounding performance, with applications including, but not limited to, face generation and 3D face reconstruction from a single "in-the-wild" image. Nevertheless, to the best of our knowledge, there is no method which can produce render-ready high-resolution 3D faces from "in-the-wild" images and this can be attributed to the: (a) scarcity of available data for training, and (b) lack of robust methodologies that can successfully be applied on very high-resolution data. In this paper, we introduce the first method that is able to reconstruct photorealistic render-ready 3D facial geometry and BRDF from a single "in-the-wild" image. To achieve this, we capture a large dataset of facial shape and reflectance, which we have made public. Moreover, we define a fast and photorealistic differentiable rendering methodology with accurate facial skin diffuse and specular reflection, self-occlusion and subsurface scattering approximation. With this, we train a network that disentangles the facial diffuse and specular reflectance components from a mesh and texture with baked illumination, scanned or reconstructed with a 3DMM fitting method. As we demonstrate in a series of qualitative and quantitative experiments, our method outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image, that are ready to be rendered in various applications and bridge the uncanny valley.
7
Ferrari C, Berretti S, Pala P, Bimbo AD. A Sparse and Locally Coherent Morphable Face Model for Dense Semantic Correspondence Across Heterogeneous 3D Faces. IEEE Trans Pattern Anal Mach Intell 2022;44:6667-6682. PMID: 34156937; DOI: 10.1109/tpami.2021.3090942.
Abstract
The 3D Morphable Model (3DMM) is a powerful statistical tool for representing 3D face shapes. To build a 3DMM, a training set of face scans in full point-to-point correspondence is required, and its modeling capabilities directly depend on the variability contained in the training data. Thus, to increase the descriptive power of the 3DMM, establishing a dense correspondence across heterogeneous scans with sufficient diversity in terms of identities, ethnicities, or expressions becomes essential. In this manuscript, we present a fully automatic approach that leverages a 3DMM to transfer its dense semantic annotation across raw 3D faces, establishing a dense correspondence between them. We propose a novel formulation to learn a set of sparse deformation components with local support on the face that, together with an original non-rigid deformation algorithm, allow the 3DMM to precisely fit unseen faces and transfer its semantic annotation. We extensively experimented our approach, showing it can effectively generalize to highly diverse samples and accurately establish a dense correspondence even in presence of complex facial expressions. The accuracy of the dense registration is demonstrated by building a heterogeneous, large-scale 3DMM from more than 9,000 fully registered scans obtained by joining three large datasets together.
8
Yang Y, Lyu J, Wang R, Wen Q, Zhao L, Chen W, Bi S, Meng J, Mao K, Xiao Y, Liang Y, Zeng D, Du Z, Wu Y, Cui T, Liu L, Iao WC, Li X, Cheung CY, Zhou J, Hu Y, Wei L, Lai IF, Yu X, Chen J, Wang Z, Mao Z, Ye H, Xiao W, Yang H, Huang D, Lin X, Zheng WS, Wang R, Yu-Wai-Man P, Xu F, Dai Q, Lin H. A digital mask to safeguard patient privacy. Nat Med 2022;28:1883-1892. PMID: 36109638; PMCID: PMC9499857; DOI: 10.1038/s41591-022-01966-1.
Abstract
The storage of facial images in medical records poses privacy risks due to the sensitive nature of the personal biometric information that can be extracted from such images. To minimize these risks, we developed a new technology, called the digital mask (DM), which is based on three-dimensional reconstruction and deep-learning algorithms to irreversibly erase identifiable features, while retaining disease-relevant features needed for diagnosis. In a prospective clinical study to evaluate the technology for diagnosis of ocular conditions, we found very high diagnostic consistency between the use of original and reconstructed facial videos (κ ≥ 0.845 for strabismus, ptosis and nystagmus, and κ = 0.801 for thyroid-associated orbitopathy) and comparable diagnostic accuracy (P ≥ 0.131 for all ocular conditions tested). Identity removal validation using multiple-choice questions showed that, compared to image cropping, the DM could much more effectively remove identity attributes from facial images. We further confirmed the ability of the DM to evade recognition systems using artificial intelligence-powered re-identification algorithms. Moreover, use of the DM increased the willingness of patients with ocular conditions to provide their facial images as health information during medical treatment. These results indicate the potential of the DM algorithm to protect the privacy of patients' facial images in an era of rapid adoption of digital health technologies.
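The diagnostic-consistency figures quoted above are Cohen's kappa values. As a small, hypothetical illustration (not the study's analysis), agreement between diagnoses made from original and from masked videos could be computed as follows.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical paired diagnoses from the same clinician viewing original vs. masked videos
    from_original = ["strabismus", "ptosis", "normal", "ptosis", "nystagmus", "strabismus"]
    from_masked   = ["strabismus", "ptosis", "normal", "ptosis", "nystagmus", "ptosis"]

    kappa = cohen_kappa_score(from_original, from_masked)
    print(f"Cohen's kappa: {kappa:.3f}")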
Grants
- G0701386 Medical Research Council
- G1002570 Medical Research Council
- BRC-1215-20014 Department of Health
- NIHR301696 Department of Health
- National Natural Science Foundation of China (National Science Foundation of China)
- Science and Technology Planning Projects of Guangdong Province (2018B010109008); Guangzhou Key Laboratory Project (202002010006); Hainan Province Clinical Medical Center
- Fight for Sight (UK), the Isaac Newton Trust (UK), Moorfields Eye Charity (GR001376), the Addenbrooke’s Charitable Trust, the National Eye Research Centre (UK), the International Foundation for Optic Nerve Disease (IFOND), the NIHR as part of the Rare Diseases Translational Research Collaboration, the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014), and the NIHR Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology
- the National Key R&D Program of China (2018YFA0704000); Beijing Natural Science Foundation (JQ19015)
- the Institute for Brain and Cognitive Science, Tsinghua University (THUIBCS); Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission (BLBCI)
Affiliation(s)
- Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Junfeng Lyu
- School of Software and BNRist, Tsinghua University, Beijing, China
- Ruixin Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Quan Wen
- School of Software and BNRist, Tsinghua University, Beijing, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wenben Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shaowei Bi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Jie Meng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Keli Mao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Yu Xiao
- Department of Ophthalmology, Guangdong Provincial People's Hospital; Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou, China
- Yingying Liang
- Department of Ophthalmology, Guangdong Provincial People's Hospital; Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou, China
- Danqi Zeng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zijing Du
- Department of Ophthalmology, Guangdong Provincial People's Hospital; Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou, China
- Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaoyan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Carol Y Cheung
- Department of Ophthalmology & Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Jianhua Zhou
- School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen, China
- Youjin Hu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lai Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Iat Fan Lai
- Ophthalmic Center, Kiang Wu Hospital, Macao SAR, Macao, China
- Xinping Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Jingchang Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhonghao Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhen Mao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Huijing Ye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wei Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Huasheng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Danping Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaoming Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wei-Shi Zheng
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Ruixuan Wang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Patrick Yu-Wai-Man
- Cambridge Center for Brain Repair and MRC Mitochondrial Biology Unit, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Cambridge Eye Unit, Addenbrooke's Hospital, Cambridge University Hospitals, Cambridge, UK
- Moorfields Eye Hospital, London, UK
- UCL Institute of Ophthalmology, University College London, London, UK
- Feng Xu
- School of Software and BNRist, Tsinghua University, Beijing, China.
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China.
- Qionghai Dai
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, Beijing, China.
- Department of Automation and BNRist, Tsinghua University, Beijing, China.
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China.
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China.
9
Gecer B, Ploumpis S, Kotsia I, Zafeiriou S. Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction. IEEE Trans Pattern Anal Mach Intell 2022;44:4879-4893. PMID: 34043505; DOI: 10.1109/tpami.2021.3084524.
Abstract
A lot of work has been done towards reconstructing the 3D facial structure from single images by capitalizing on the power of deep convolutional neural networks (DCNNs). In the recent works, the texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images. In all cases, the quality of the facial texture reconstruction is still not capable of modeling facial texture with high-frequency details. In this paper, we take a radically different approach and harness the power of generative adversarial networks (GANs) and DCNNs in order to reconstruct the facial texture and shape from single images. That is, we utilize GANs to train a very powerful facial texture prior from a large-scale 3D texture dataset. Then, we revisit the original 3D Morphable Models (3DMMs) fitting making use of non-linear optimization to find the optimal latent parameters that best reconstruct the test image but under a new perspective. In order to be robust towards initialisation and expedite the fitting process, we propose a novel self-supervised regression based approach. We demonstrate excellent results in photorealistic and identity preserving 3D face reconstructions and achieve for the first time, to the best of our knowledge, facial texture reconstruction with high-frequency details.
10
Schaufelberger M, Kühle R, Wachter A, Weichel F, Hagen N, Ringwald F, Eisenmann U, Hoffmann J, Engel M, Freudlsperger C, Nahm W. A Radiation-Free Classification Pipeline for Craniosynostosis Using Statistical Shape Modeling. Diagnostics (Basel) 2022;12:1516. PMID: 35885422; PMCID: PMC9323148; DOI: 10.3390/diagnostics12071516.
Abstract
BACKGROUND: Craniosynostosis is a condition caused by the premature fusion of skull sutures, leading to irregular growth patterns of the head. Three-dimensional photogrammetry is a radiation-free alternative to diagnosis using computed tomography. While statistical shape models have been proposed to quantify head shape, no shape-model-based classification approach has been presented yet.
METHODS: We present a classification pipeline that enables an automated diagnosis of three types of craniosynostosis. The pipeline is based on a statistical shape model built from photogrammetric surface scans. We made the model and pathology-specific submodels publicly available, making it the first publicly available craniosynostosis-related head model, as well as the first focusing on infants younger than 1.5 years. To the best of our knowledge, we performed the largest classification study for craniosynostosis to date.
RESULTS: Our classification approach yields an accuracy of 97.8%, comparable to other state-of-the-art methods using both computed tomography scans and stereophotogrammetry. Regarding the statistical shape model, we demonstrate that our model performs similarly to other statistical shape models of the human head.
CONCLUSION: We present a state-of-the-art shape-model-based classification approach for a radiation-free diagnosis of craniosynostosis. Our publicly available shape model enables the assessment of craniosynostosis on realistic and synthetic data.
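In the spirit of the pipeline described above (a statistical shape model followed by classification), a minimal sketch could use PCA on corresponded head meshes as the shape model and a standard classifier on the resulting shape parameters. This is an illustrative stand-in, not the published pipeline; the data layout and classifier choice are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_shape_classifier(meshes, labels, n_components=30):
        """meshes: (N, V, 3) head surfaces in dense correspondence; labels: (N,) diagnosis classes.
        n_components must not exceed the number of training meshes."""
        X = meshes.reshape(len(meshes), -1)             # flatten each mesh into one feature vector
        clf = make_pipeline(PCA(n_components=n_components),
                            LogisticRegression(max_iter=1000))
        clf.fit(X, labels)
        return clf                                      # use clf.predict on new flattened meshes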
Affiliation(s)
- Matthias Schaufelberger
- Institute of Biomedical Engineering (IBT), Karlsruhe Institute of Technology (KIT), Kaiserstr. 12, 76131 Karlsruhe, Germany
- Reinald Kühle
- Department of Oral, Dental and Maxillofacial Diseases, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- Andreas Wachter
- Institute of Biomedical Engineering (IBT), Karlsruhe Institute of Technology (KIT), Kaiserstr. 12, 76131 Karlsruhe, Germany
- Frederic Weichel
- Department of Oral, Dental and Maxillofacial Diseases, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- Niclas Hagen
- Institute of Medical Informatics, Heidelberg University Hospital, Im Neuenheimer Feld 130.3, 69120 Heidelberg, Germany
- Friedemann Ringwald
- Institute of Medical Informatics, Heidelberg University Hospital, Im Neuenheimer Feld 130.3, 69120 Heidelberg, Germany
- Urs Eisenmann
- Institute of Medical Informatics, Heidelberg University Hospital, Im Neuenheimer Feld 130.3, 69120 Heidelberg, Germany
- Jürgen Hoffmann
- Department of Oral, Dental and Maxillofacial Diseases, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- Michael Engel
- Department of Oral, Dental and Maxillofacial Diseases, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- Christian Freudlsperger
- Department of Oral, Dental and Maxillofacial Diseases, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- Werner Nahm
- Institute of Biomedical Engineering (IBT), Karlsruhe Institute of Technology (KIT), Kaiserstr. 12, 76131 Karlsruhe, Germany
11
Growth patterns and shape development of the paediatric mandible – A 3D statistical model. Bone Rep 2022;16:101528. PMID: 35399871; PMCID: PMC8987800; DOI: 10.1016/j.bonr.2022.101528.
Abstract
Background/aim: To develop a 3D morphable model of the normal paediatric mandible to analyse shape development and growth patterns for males and females.
Methods: Computed tomography (CT) data were collected for 242 healthy children referred for CT scan between 2011 and 2018, aged between 0 and 47 months (mean, 20.6 ± 13.4 months; 59.9% male). Thresholding techniques were used to segment the mandible from the CT scans. All mandible meshes were annotated using a defined set of 52 landmarks and processed such that all meshes followed a consistent triangulation. Following this, the mandible meshes were rigidly aligned to remove translation and rotation effects, while size effects were retained. Principal component analysis (PCA) was applied to the processed meshes to construct a generative 3D morphable model. Partial least squares (PLS) regression was also applied to the processed data to extract the shape modes with which to evaluate shape differences for age and sex. Growth curves were constructed for anthropometric measurements.
Results: A 3D morphable model of the paediatric mandible was constructed and validated with good generalisation, compactness, and specificity. Growth curves of the assessed anthropometric measurements were plotted without significant differences between male and female subjects. The first principal component was dominated by size effects and is highly correlated with age at time of scan (Spearman's r = 0.94, p < 0.01). As with PCA, the first extracted PLS mode captures much of the size variation within the dataset and is highly correlated with age (Spearman's r = −0.94, p < 0.01). Little correlation was observed between extracted shape modes and sex with either PCA or PLS for this study population.
Conclusion: The presented 3D morphable model of the paediatric mandible enables an understanding of mandibular shape development and variation by age and sex. It allowed for the construction of growth curves, which contain valuable information that can be used to enhance our understanding of various disorders that affect mandibular development. Knowledge of shape changes in the growing mandible has the potential to improve diagnostic accuracy for craniofacial conditions that impact mandibular morphology, objective evaluation, surgical planning, and patient follow-up.
Highlights:
- Shape and development patterns of the paediatric mandible (0-4 years) were evaluated using a dataset of 242 CT scans.
- A 3D morphable model of the paediatric mandible was constructed using principal component analysis (PCA).
- Validation experiments demonstrated that the 3D morphable model can produce realistic novel mandible samples.
- Partial least squares (PLS) regression was applied to the dataset to evaluate shape differences for age and sex.
- The first shape mode correlated strongly with age for PCA and PLS, though little correlation was seen between shape and sex.
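A bare-bones version of the kind of shape-mode analysis reported above (not the authors' code) is sketched below: PCA on rigidly aligned, corresponded mandible meshes, followed by a Spearman correlation of the first mode's scores with age. Array shapes and names are assumptions.

    import numpy as np
    from scipy.stats import spearmanr

    def first_mode_vs_age(meshes, ages):
        """meshes: (N, V, 3) aligned, corresponded meshes; ages: (N,) ages in months."""
        X = meshes.reshape(len(meshes), -1)
        Xc = X - X.mean(axis=0)                        # centre on the mean shape
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        pc1_scores = Xc @ Vt[0]                        # projection onto the first principal component
        rho, p = spearmanr(pc1_scores, ages)
        return pc1_scores, rho, p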
12
Duncan C, Pears N, Dai H, Smith WP, O'Higgins P. Applications of 3D photography in craniofacial surgery. J Pediatr Neurosci 2022;17:S21-S28. PMID: 36388007; PMCID: PMC9648652; DOI: 10.4103/jpn.jpn_48_22.
Abstract
Three-dimensional (3D) photography is becoming more common in craniosynostosis practice and may be used for research, archiving, and as a planning tool. In this article, an overview of the uses of 3D photography will be given, including systems available and illustrations of how they can be used. Important innovations in 3D computer vision will also be discussed, including the potential role of statistical shape modeling and analysis as an outcomes tool with presentation of some results and a review of the literature on the topic. Potential future applications in diagnostics using machine learning will also be presented.
13
Abstract
Standard registration algorithms need to be independently applied to each surface to register, following careful pre-processing and hand-tuning. Recently, learning-based approaches have emerged that reduce the registration of new scans to running inference with a previously-trained model. The potential benefits are multifold: inference is typically orders of magnitude faster than solving a new instance of a difficult optimization problem, deep learning models can be made robust to noise and corruption, and the trained model may be re-used for other tasks, e.g. through transfer learning. In this paper, we cast the registration task as a surface-to-surface translation problem, and design a model to reliably capture the latent geometric information directly from raw 3D face scans. We introduce Shape-My-Face (SMF), a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model that we smoothly integrate with the mesh convolutions. Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template. Additionally, our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets. We extensively evaluate the quality of our registrations on diverse data. We demonstrate the robustness and generalizability of our model with in-the-wild face scans across different modalities, sensor types, and resolutions. Finally, we show that, by learning to register scans, SMF produces a hybrid linear and non-linear morphable model. Manipulation of the latent space of SMF allows for shape generation, and morphing applications such as expression transfer in-the-wild. We train SMF on a dataset of human faces comprising 9 large-scale databases on commodity hardware.
14
Smith OAM, Nashed YSG, Duncan C, Pears N, Profico A, O'Higgins P. 3D Modeling of craniofacial ontogeny and sexual dimorphism in children. Anat Rec (Hoboken) 2020;304:1918-1926. PMID: 33336527; DOI: 10.1002/ar.24582.
Abstract
BACKGROUND: The range of normal variation of growth and development of the craniofacial region is of direct clinical interest but incompletely understood. Here we develop a statistical model of craniofacial growth and development to compare craniofacial ontogeny between age groups and sexes and pilot an approach to modeling that is relatively straightforward to apply in the context of clinical research and assessment.
METHODS: The sample comprises head surface meshes captured using a 3dMD five-camera system from 65 males and 47 females (range 3-20 years) from the Headspace project, Liverpool, UK. The surface meshes were parameterized using 16 anatomical landmarks and 59 semilandmarks on curves and surfaces. Modes and degrees of growth and development were assessed and compared among ages and sexes using Procrustes-based geometric morphometric methods.
RESULTS: Regression analyses indicate that 3-10 year olds undergo greater changes than 11-20 year olds and that craniofacial growth and development differs between these age groups. The analyses indicate that males extend growth allometrically into larger size ranges, contributing substantially to adult dimorphism. Comparisons of ontogenetic trajectories between sexes find no significant differences, yet when hypermorphosis is accounted for in the older age group there is a significant residual sexual dimorphism.
CONCLUSIONS: The study adds to knowledge of how adult craniofacial form and sexual dimorphism develop. It was carried out using readily available software which facilitates replication of this work in diverse populations to underpin clinical assessment of deformity and the outcomes of corrective interventions.
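The basic operation underlying Procrustes-based geometric morphometrics is Procrustes superimposition of landmark configurations; generalized Procrustes analysis iterates this against an evolving mean shape. A minimal two-configuration example with SciPy is shown below; it is not the study's workflow, and the 75-point landmark/semilandmark data are made up.

    import numpy as np
    from scipy.spatial import procrustes

    # Hypothetical (75, 3) landmark/semilandmark configurations for two subjects
    rng = np.random.default_rng(1)
    subject_a = rng.uniform(size=(75, 3))
    subject_b = subject_a + rng.normal(scale=0.05, size=(75, 3))

    aligned_a, aligned_b, disparity = procrustes(subject_a, subject_b)  # translate, scale, rotate
    print(f"Procrustes distance (disparity): {disparity:.4f}")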
Affiliation(s)
- Christian Duncan
- Department of Plastic Surgery, Alder-Hey Hospital, Liverpool, UK
- Nick Pears
- Department of Computer Science, University of York, York, UK
- Antonio Profico
- PalaeoHub, Department of Archaeology, University of York, York, UK
- Paul O'Higgins
- Hull York Medical School, University of York, York, UK
- PalaeoHub, Department of Archaeology, University of York, York, UK