1. Helmy M, El-Rabaie ESM, El-Dokany I, Abd El-Samie FE. A novel cancellable biometric recognition system based on Rubik's cube technique for cyber-security applications. Optik 2023; 285:170475. DOI: 10.1016/j.ijleo.2022.170475.
2. Faragallah OS, Naeem EA, El-Shafai W, Ramadan N, Ahmed HEDH, Elnaby MMA, Elashry I, El-khamy SE, El-Samie FEA. Efficient chaotic-Baker-map-based cancelable face recognition. Journal of Ambient Intelligence and Humanized Computing 2023; 14:1837-1875. DOI: 10.1007/s12652-021-03398-0.
3. Advancement in Human Face Prediction Using DNA. Genes (Basel) 2023; 14:136. PMID: 36672878; PMCID: PMC9858985; DOI: 10.3390/genes14010136.
Abstract
The rapid improvements in identifying the genetic factors contributing to facial morphology have enabled the early identification of craniofacial syndromes. Similarly, this technology can be vital in forensic cases involving human identification from biological traces or human remains, especially when reference samples are not available in the deoxyribonucleic acid (DNA) database. This review summarizes the methods currently used for predicting human phenotypes such as age, ancestry, pigmentation, and facial features based on genetic variations. To identify the facial features affected by DNA, various two-dimensional (2D)- and three-dimensional (3D)-scanning techniques and analysis tools are reviewed, and a comparison between the scanning technologies is presented. Face-landmarking techniques and face-phenotyping algorithms are discussed in chronological order. Then, the latest approaches in genetic-to-3D face shape analysis are emphasized. A systematic review of the current markers that passed the genome-wide association study (GWAS) threshold for single nucleotide polymorphism (SNP)-face trait associations in the GWAS Catalog is also provided, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach. Finally, the current challenges in forensic DNA phenotyping are analyzed and discussed.
4. Matthews H, de Jong G, Maal T, Claes P. Static and Motion Facial Analysis for Craniofacial Assessment and Diagnosing Diseases. Annu Rev Biomed Data Sci 2022; 5:19-42. PMID: 35440145; DOI: 10.1146/annurev-biodatasci-122120-111413.
Abstract
Deviation from a normal facial shape and symmetry can arise from numerous sources, including physical injury and congenital birth defects. Such abnormalities can have important aesthetic and functional consequences. Furthermore, in clinical genetics distinctive facial appearances are often associated with clinical or genetic diagnoses; the recognition of a characteristic facial appearance can substantially narrow the search space of potential diagnoses for the clinician. Unusual patterns of facial movement and expression can indicate disturbances to normal mechanical functioning or emotional affect. Computational analyses of static and moving 2D and 3D images can serve clinicians and researchers by detecting and describing facial structural, mechanical, and affective abnormalities objectively. In this review we survey traditional and emerging methods of facial analysis, including statistical shape modeling, syndrome classification, modeling clinical face phenotype spaces, and analysis of facial motion and affect.
Affiliation(s)
- Harold Matthews
- Department of Human Genetics, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Leuven, Belgium; Facial Sciences Research Group, Murdoch Children's Research Institute, Parkville, Australia
- Guido de Jong
- 3D Lab, Radboud University Medical Center, Nijmegen, The Netherlands
- Thomas Maal
- 3D Lab, Radboud University Medical Center, Nijmegen, The Netherlands
- Peter Claes
- Department of Human Genetics, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Leuven, Belgium; Facial Sciences Research Group, Murdoch Children's Research Institute, Parkville, Australia; Processing Speech and Images (PSI), Department of Electrical Engineering (ESAT), KU Leuven, Leuven, Belgium
5. Tran AQ, Yang C, Tooley AA, Kazim M, Glass LRD. Mathematical Modeling of Eyebrow Curvature. Facial Plast Surg 2022; 38:307-310. PMID: 35114713; DOI: 10.1055/s-0041-1742200.
Abstract
The aim of the study is to describe a mathematical model for analyzing eyebrow curvature that can be applied broadly to curvilinear facial features. A total of 100 digital images (50 men, 50 women) were obtained from standardized headshots of medical professionals. Images were analyzed in ImageJ by plotting either 8 or 15 points along the inferior-most row of contiguous brow cilia. A best-fit curve was automatically fit to these points in Microsoft Excel. The second derivative of the second-degree polynomial and a fourth-degree polynomial were used to evaluate brow curvature. Both techniques were subsequently compared with each other. A second-degree polynomial and fourth-degree polynomial were fit to all eyebrows. Plotting 15 points yielded greater goodness-of-fit than plotting 8 points along the inferior brow and allowed for more sensitive measurement of curvature across all images. A fourth-degree polynomial function provided a closer fit to the eyebrow than a second-degree polynomial function. This method provides a simple and reliable tool for quantitative analysis of eyebrow curvature from images. Fifteen-point plots and a fourth-degree polynomial curve provide a greater goodness-of-fit. The authors believe the described technique can be applied to other curvilinear facial features and will facilitate the analysis of standardized images.
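The quadratic fit and second-derivative curvature measure described above can be sketched in a few lines. This is an illustrative reimplementation in plain Python on synthetic points, not the authors' ImageJ/Excel workflow:

```python
# Fit y = a*x^2 + b*x + c to brow points by least squares and read
# curvature from the second derivative y'' = 2a (a sketch, not the
# study's Excel trendline; points here are synthetic).
def fit_quadratic(points):
    """points: list of (x, y); returns (a, b, c) of the least-squares fit."""
    # Power sums for the 3x3 normal equations.
    sx = [sum(x**k for x, _ in points) for k in range(5)]
    sy = [sum(y * x**k for x, y in points) for k in range(3)]
    A = [[sx[4], sx[3], sx[2]],
         [sx[3], sx[2], sx[1]],
         [sx[2], sx[1], sx[0]]]
    rhs = [sy[2], sy[1], sy[0]]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for col in range(i, 3):
                A[r][col] -= f * A[i][col]
            rhs[r] -= f * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in reversed(range(3)):
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return coef

# Points sampled from y = 0.5*x^2 - x + 2 should be recovered exactly.
pts = [(x, 0.5 * x * x - x + 2) for x in range(8)]
a, b, c = fit_quadratic(pts)
curvature = 2 * a  # second derivative of the fitted quadratic
```

For a quadratic the second derivative is the constant 2a, which is why a second-degree fit summarizes brow curvature with a single number; a fourth-degree fit's second derivative varies along the brow, which is what lets it follow the eyebrow more closely.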
Affiliation(s)
- Ann Q Tran
- Department of Ophthalmology, University of Illinois Eye and Ear Infirmary, Chicago, Illinois; Department of Ophthalmology, Columbia University Irving Medical Center, Edward S. Harkness Eye Institute, New York, New York
- Cameron Yang
- Department of Ophthalmology, Ohio State University, Columbus, Ohio
- Andrea A Tooley
- Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota
- Michael Kazim
- Department of Ophthalmology, Columbia University Irving Medical Center, Edward S. Harkness Eye Institute, New York, New York
- Lora R Dagi Glass
- Department of Ophthalmology, Columbia University Irving Medical Center, Edward S. Harkness Eye Institute, New York, New York
6. Gerbino G, Autorino U, Borbon C, Marcolin F, Olivetti E, Vezzetti E, Zavattero E. Malar augmentation with zygomatic osteotomy in orthognathic surgery: bone and soft tissue changes, three-dimensional evaluation. J Craniomaxillofac Surg 2021; 49:223-230. PMID: 33509673; DOI: 10.1016/j.jcms.2021.01.008.
Abstract
BACKGROUND The aim of this prospective study was to objectively assess 3D soft tissue and bone changes of the malar region produced by malar valgization osteotomy performed in association with orthognathic surgery. MATERIALS AND METHODS From January 2015 to January 2018, 10 patients who underwent single-stage bilateral malar valgization osteotomy in conjunction with maxillo-mandibular orthognathic procedures for aesthetic and functional correction were evaluated. Clinical and surgical reports were collected, and patient satisfaction was evaluated with a VAS score. For each patient, maxillofacial CT scans were acquired 1 month preoperatively (T0) and 6 months postoperatively (T1). DICOM data were imported and elaborated in MATLAB, which created a 3D soft tissue model of the face. 3D bone changes were assessed by importing the DICOM data into iPlan (BrainLAB 3.0), with superimposition achieved using autofusion. Descriptive statistics were obtained for soft tissue and bone changes. RESULTS For the bone assessment, the T0-T1 superimposition showed an increase in the distance between the bilateral malar prominences (Pr - Pl) and a slight forward movement (87.65 ± 1.55 to 97.60 ± 5.91; p = 0.007). All patients showed improvement of the α angle, from 36.30 ± 1.70 to 38.45 ± 0.55 (p = 0.04, αr) and from 36.75 ± 1.58 to 38.45 ± 0.35 (p = 0.04, αl). The distance S increased from 78.05 ± 2.48 to 84.2 ± 1.20 (p = 0.04, Sr) and from 78.65 ± 2.16 to 82.60 ± 0.90 (p = 0.03, Sl). For the soft tissue, the T0-T1 superimposition showed an antero-lateral movement of the malar projection (p = 0.008 NVL; p = 0.001 NVR) together with an increase in width measurements (p = 0.05 VL; p = 0.01 VR). Angular measurements confirmed the pattern of the bony changes (p = 0.034 αL; p = 0.05 αR).
CONCLUSION Malar valgization osteotomy in conjunction with orthognathic surgery is effective in improving zygomatic projection, contributing to balanced facial correction in midface hypoplasia. 3D geometry-based volume and surface analysis demonstrated an increase in the transversal and forward directions. The osteotomy can be safely performed in conjunction with orthognathic procedures.
Affiliation(s)
- Giovanni Gerbino
- Division of Maxillofacial Surgery, Città Della Salute e Della Scienza Hospital, University of Torino, Italy
- Umberto Autorino
- Division of Maxillofacial Surgery, Città Della Salute e Della Scienza Hospital, University of Torino, Italy
- Claudia Borbon
- Division of Maxillofacial Surgery, Città Della Salute e Della Scienza Hospital, University of Torino, Italy
- Federica Marcolin
- Department of Management and Production Engineering, Politecnico di Torino, Italy
- Elena Olivetti
- Department of Management and Production Engineering, Politecnico di Torino, Italy
- Enrico Vezzetti
- Department of Management and Production Engineering, Politecnico di Torino, Italy
- Emanuele Zavattero
- Division of Maxillofacial Surgery, Città Della Salute e Della Scienza Hospital, University of Torino, Italy
7. Lee H, Park SH, Yoo JH, Jung SH, Huh JH. Face Recognition at a Distance for a Stand-Alone Access Control System. Sensors 2020; 20:785. PMID: 32023973; PMCID: PMC7038408; DOI: 10.3390/s20030785.
Abstract
Although access control based on human face recognition has become popular in consumer applications, several implementation issues remain before it can be realized as a stand-alone access control system. Owing to a lack of computational resources, lightweight and computationally efficient face recognition algorithms are required. Conventional access control systems also demand significant active cooperation from users despite the non-intrusive nature of face recognition, and lighting/illumination change is one of the most difficult and challenging problems for face-recognition-based access control applications. This paper presents the design and implementation of a user-friendly, stand-alone access control system based on human face recognition at a distance. The local binary pattern (LBP)-AdaBoost framework was employed for face and eye detection; it is fast, invariant to illumination changes, and able to detect faces and eyes of varied sizes at a distance. For fast face recognition with high accuracy, the Gabor-LBP histogram framework was modified by substituting Gaussian derivative filters for the Gabor wavelet, which reduced the facial feature size by 40% relative to the Gabor-LBP-based facial features and was robust to significant illumination changes and complicated backgrounds. Experiments on benchmark datasets produced face recognition accuracies of 97.27% on the E-face dataset and 99.06% on the XM2VTS dataset. The system achieved a 91.5% true acceptance rate with a 0.28% false acceptance rate and averaged 5.26 frames/s on a newly collected face image and video dataset in an indoor office environment.
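A minimal sketch of the local binary pattern (LBP) operator that underlies both the detector and the recognizer described above (a generic 3x3 LBP on a toy patch, not the paper's exact variant) shows where the illumination invariance comes from: each neighbour is thresholded against the centre pixel, so adding a constant to every pixel leaves the code unchanged.

```python
# Generic 3x3 LBP code: threshold the 8 neighbours against the centre
# pixel and pack the resulting bits into one byte.
def lbp_code(patch):
    """patch: 3x3 list of lists of grey values; returns the 8-bit LBP code."""
    centre = patch[1][1]
    # Clockwise neighbour order starting at the top-left pixel.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
code = lbp_code(patch)
# A global brightness shift does not change the code.
brighter = [[v + 100 for v in row] for row in patch]
assert lbp_code(brighter) == code
```

A face region is then described by the histogram of these codes over its pixels, which is what the AdaBoost detector and the (Gaussian-derivative) histogram recognizer consume.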
Affiliation(s)
- Hansung Lee
- School of Computer Engineering, Youngsan University, 288 Junam-Ro, Yangsan, Gyeongnam 50510, Korea
- So-Hee Park
- Intelligent Convergence Research Laboratory, Electronics and Telecommunications Research Institute (ETRI), 218 Gajeong-ro, Yuseong-gu, Daejeon 34129, Korea
- Jang-Hee Yoo
- Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute (ETRI), 218 Gajeong-ro, Yuseong-gu, Daejeon 34129, Korea
- Se-Hoon Jung
- School of Major Connection (Bigdata Convergence), Youngsan University, 288 Junam-Ro, Yangsan, Gyeongnam 50510, Korea
- Correspondence: (S.H.J.); (J.H.H.)
- Jun-Ho Huh
- Department of Data Informatics, Korea Maritime and Ocean University, Busan 49112, Korea
- Correspondence: (S.H.J.); (J.H.H.)
8. Face Image Age Estimation Based on Data Augmentation and Lightweight Convolutional Neural Network. Symmetry (Basel) 2020. DOI: 10.3390/sym12010146.
Abstract
Face images contain many important biological characteristics. Research on face images mainly includes face age estimation, gender judgment, and facial expression recognition. Taking face age estimation as an example, estimating age from face images with algorithms can be widely applied in biometrics, intelligent monitoring, human-computer interaction, and personalized services. With the rapid development of computer technology, the processing speed and storage capacity of electronic devices have greatly increased, allowing deep learning to dominate the field of artificial intelligence. Traditional age estimation methods first design and extract features manually and then estimate age from them. Convolutional neural networks (CNNs) in deep learning have incomparable advantages in processing image features, and practice has proven that their accuracy in estimating age from face images is far superior to traditional methods. However, as neural networks are designed deeper, larger, and more complex, it becomes difficult to deploy models on mobile terminals. Based on a lightweight convolutional neural network, this paper proposes an improved ShuffleNetV2 network with a mixed attention mechanism (MA-SFV2: Mixed Attention-ShuffleNetV2) that transforms the output layer, merges classification and regression age estimation methods, and highlights important features through image preprocessing and data augmentation. The influence of noise such as face-unrelated environmental information in the image is reduced, so that the final age estimation accuracy is comparable to the state of the art.
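One common way to "merge classification and regression" in age estimation, as DEX-style heads do, is to treat the network's per-age softmax output as a probability distribution over age bins and predict its expectation. A sketch with plain Python standing in for the network (the logits below are synthetic, and MA-SFV2's exact head may differ):

```python
import math

# Expectation-over-softmax age prediction: one logit per age bin,
# predicted age = sum_i p_i * age_i over the softmax probabilities.
def expected_age(logits, ages):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return sum(p * a for p, a in zip(probs, ages))

ages = list(range(0, 101))          # age bins 0..100
logits = [0.0] * 101
logits[30] = 10.0                   # the network is most confident in age 30
age = expected_age(logits, ages)
```

The classification view (which bin is most likely) and the regression view (a continuous age) coexist in one output layer, and the expectation is differentiable, so a regression loss can be applied directly to it during training.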
9. Liu Y, Li Y, Ma X, Song R. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas. Sensors 2017; 17:712. PMID: 28353671; PMCID: PMC5421672; DOI: 10.3390/s17040712.
Abstract
In the pattern recognition domain, deep architectures are currently widely used and have achieved good results. However, these deep architectures make particular demands, especially in terms of big datasets and GPUs. Aiming at good results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces, and it has achieved better results than some deep architectures. To extract more effective features, this paper first defines the salient areas of the face and normalizes salient areas at the same location across faces to the same size, so that more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by principal component analysis (PCA), and several classifiers are applied to classify the six basic expressions. This paper proposes a salient-area definition method that compares peak expression frames with neutral faces, and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions, so that the salient areas found in different subjects are the same size. In addition, gamma correction is applied to the LBP features in our algorithm framework, which improves recognition rates significantly. With this algorithm framework, our research achieves state-of-the-art performance on the CK+ and JAFFE databases.
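The fusion step described above, gamma-correcting the LBP histogram, concatenating it with the HOG histogram, and normalizing before PCA and classification, can be sketched as follows (hypothetical helper names and toy histogram values, not real descriptors from images):

```python
# Sketch of LBP+HOG feature fusion: gamma-correct the LBP histogram,
# concatenate with HOG, and L2-normalise the fused vector.
def gamma_correct(hist, gamma=0.5):
    """Element-wise power transform; gamma < 1 boosts small bin values."""
    return [v ** gamma for v in hist]

def fuse(lbp_hist, hog_hist, gamma=0.5):
    fused = gamma_correct(lbp_hist, gamma) + list(hog_hist)
    norm = sum(v * v for v in fused) ** 0.5
    return [v / norm for v in fused] if norm else fused

# Toy 3-bin LBP histogram and 2-bin HOG histogram.
feat = fuse([0.0, 0.25, 0.75], [0.6, 0.8])
```

The fused vectors from all salient areas would then be stacked and passed through PCA before reaching the classifiers; gamma correction with an exponent below 1 compresses the dynamic range of the LBP bins, which is one plausible reading of the gamma-correction step the abstract credits with the accuracy gain.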
Affiliation(s)
- Yanpeng Liu
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Yibin Li
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Xin Ma
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Rui Song
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
10. Xie W, Shen L, Yang M, Lai Z. Active AU Based Patch Weighting for Facial Expression Recognition. Sensors 2017; 17:275. PMID: 28146094; PMCID: PMC5335947; DOI: 10.3390/s17020275.
Abstract
Facial expression recognition has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. A sparse-representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% on the JAFFE and Cohn-Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.
Affiliation(s)
- Weicheng Xie
- Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
- Linlin Shen
- Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
- Meng Yang
- Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
- Zhihui Lai
- Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
11. A Multi-Layer Fusion-Based Facial Expression Recognition Approach with Optimal Weighted AUs. Applied Sciences (Basel) 2017. DOI: 10.3390/app7020112.
12. Bonacina L, Froio A, Conti D, Marcolin F, Vezzetti E. Automatic 3D foetal face model extraction from ultrasonography through histogram processing. J Med Ultrasound 2016. DOI: 10.1016/j.jmu.2016.08.003.
13. Pithon MM, Alves LP, da Costa Prado M, Oliveira RL, Costa MSC, da Silva Coqueiro R, Gusmão JMR, Santos RL. Perception of Esthetic Impact of Smile Line in Complete Denture Wearers by Different Age Groups. J Prosthodont 2016; 25:531-535. PMID: 26372165; DOI: 10.1111/jopr.12355.
Abstract
PURPOSE To evaluate, across evaluator age groups, esthetic perceptions of tooth exposure when smiling in patients wearing complete dentures. MATERIALS AND METHODS Alterations were made to a front-view photograph of a smiling patient wearing complete maxillary and mandibular dentures. Alterations in the smile line were simulated with image manipulation software to increase or decrease tooth exposure in increments of 0.5 mm. After manipulation, images were printed on photo paper, attached to a questionnaire, and distributed to individuals in three age groups (n = 150). Esthetic perception of each image was rated on a visual analog scale, with 0 representing least attractive, 5 attractive, and 10 very attractive. Differences between examiners were analyzed using the Mann-Whitney test, with all statistical analyses performed at a 95% confidence level. RESULTS Two evaluators did not observe any differences between images. The images given the best and worst scores were E and O (alterations of 2 and 7 mm) in the 15- to 19-year-old group, B and O (alterations of 0.5 and 7 mm) in the 35- to 44-year-old group, and A and M (no alteration and a 6 mm alteration) in the 65- to 74-year-old group. When the images were presented together (images 1 and 2), individuals of all age groups selected the unaltered image as the best and the image with a 7 mm change as the worst. CONCLUSION In this study, complete dental prostheses with smile lines that coincided with the cervical margins of the anterior teeth were the most acceptable; less exposure of the maxillary teeth when smiling corresponded with decreased attractiveness.
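The Mann-Whitney comparison used here reduces to counting, over all pairs of scores from two evaluator groups, how often one group's score beats the other's. A minimal sketch of the U statistic itself (no tie-correction or normal-approximation machinery, and synthetic scores rather than the study's data):

```python
# Mann-Whitney U by direct pairwise comparison: each pair where x > y
# counts 1 toward U_x, each tie counts 0.5, and U_x + U_y = len(xs)*len(ys).
def mann_whitney_u(xs, ys):
    """Return (U_x, U_y) for the two samples."""
    u_x = sum(1.0 if x > y else 0.5 if x == y else 0.0
              for x in xs for y in ys)
    return u_x, len(xs) * len(ys) - u_x

# Toy VAS scores: one group rates uniformly higher than the other.
u1, u2 = mann_whitney_u([7, 8, 9], [1, 2, 3])
```

In practice the p-value would come from the exact U distribution or a library routine; the point of the sketch is that the test compares ranks, not score magnitudes, which suits ordinal VAS ratings.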
Affiliation(s)
- Matheus Melo Pithon
- Department of Health I, Southwest Bahia State University UESB, Jequié, Bahia, Brazil
- Leandro Pereira Alves
- Department of Health I, Southwest Bahia State University UESB, Jequié, Bahia, Brazil
- Rener Leal Oliveira
- Department of Health I, Southwest Bahia State University UESB, Jequié, Bahia, Brazil
- Rogério Lacerda Santos
- Department of Health and Technology Rural, Federal University of Campina Grande, Patos, Paraíba, Brazil