1
Lin Q, Dong Z, Zheng Q, Wang SJ. The effect of facial attractiveness on micro-expression recognition. Front Psychol 2022; 13:959124. PMID: 36186390; PMCID: PMC9524498; DOI: 10.3389/fpsyg.2022.959124. Received 07/01/2022; accepted 08/11/2022.
Abstract
Micro-expressions (MEs) are extremely quick and uncontrollable facial movements that last 40–200 ms and reveal thoughts and feelings an individual attempts to conceal. Although MEs are much more difficult to detect and recognize, ME recognition resembles macro-expression recognition in that it is influenced by facial features. Previous studies suggested that facial attractiveness can influence the processing of facial expression recognition. However, it remained unclear whether facial attractiveness also influences ME recognition. To address this issue, this study tested 38 participants with two ME recognition tasks, one static and one dynamic, using three MEs (positive, neutral, and negative) at two attractiveness levels (attractive, unattractive). The results showed that participants recognized MEs on attractive faces much more quickly than on unattractive ones, and there was a significant interaction between ME and facial attractiveness. Furthermore, attractive happy faces were recognized faster in both the static and the dynamic conditions, highlighting the happiness superiority effect. Our results therefore provide the first evidence that facial attractiveness can influence ME recognition in both static and dynamic conditions.
Affiliation(s)
- Qiongsi Lin
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Zizhao Dong
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Qiuqiang Zheng
- Teacher Education Curriculum Center, School of Educational Science, Huizhou University, Huizhou, China
- Su-Jing Wang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- *Correspondence: Su-Jing Wang
3
Kumar M, Roy S, Bhushan B, Sameer A. Creative problem solving and facial expressions: A stage based comparison. PLoS One 2022; 17:e0269504. PMID: 35731723; PMCID: PMC9216609; DOI: 10.1371/journal.pone.0269504. Received 01/06/2022; accepted 05/22/2022.
Abstract
A wealth of research indicates that emotions play an instrumental role in creative problem-solving. However, most of these studies have relied primarily on diary studies and self-report scales to measure emotions during the creative process. There is a need to capture the in-the-moment emotional experiences of individuals during the creative process using an automated emotion recognition tool. The experiment in this study examined the process-related differences between the creative problem solving (CPS) and simple problem solving (SPS) processes using protocol analysis and Markov chains. Further, this experiment introduced a novel method for measuring the in-the-moment emotional experiences of individuals during the CPS and SPS processes using facial expressions and machine learning algorithms. The experiment recruited 64 participants, who solved different tasks while wearing camera-mounted headgear. In a retrospective analysis, the participants verbally reported their thoughts using video-stimulated recall. Our results indicate differences in the cognitive effort spent at different stages of the CPS and SPS processes. We also found that most of the creative stages were associated with ambivalent emotions, whereas the stage of block was associated with negative emotions.
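As a rough illustration of the Markov-chain treatment of coded protocols described in this abstract (this is not the authors' actual pipeline; the stage labels and example sequence below are invented), transitions between coded problem-solving stages can be counted and normalized into a first-order transition matrix:

```python
from collections import defaultdict

def transition_matrix(stages):
    """Estimate first-order Markov transition probabilities
    from a coded sequence of problem-solving stages."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(stages, stages[1:]):
        counts[a][b] += 1
    return {
        a: {b: n / sum(row.values()) for b, n in row.items()}
        for a, row in counts.items()
    }

# Hypothetical coded protocol (stage labels invented for illustration):
# P = preparation, I = incubation, B = block, E = evaluation
seq = ["P", "I", "B", "I", "E", "P", "I", "E"]
tm = transition_matrix(seq)
print(tm["I"])  # relative frequencies of the stages that follow incubation
```

Comparing such matrices between the CPS and SPS conditions is one way to expose process-level differences of the kind the study reports.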
Affiliation(s)
- Mritunjay Kumar
- Department of Design, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Satyaki Roy
- Department of Design, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Department of Humanities and Social Sciences, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Braj Bhushan
- Department of Design, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Department of Humanities and Social Sciences, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Ahmed Sameer
- Department of Humanities and Social Sciences, Indian Institute of Technology (ISM) Dhanbad, Dhanbad, Jharkhand, India
4
Hughson E, Javadi R, Thompson J, Lim A. Investigating the Role of Culture on Negative Emotion Expressions in the Wild. Front Integr Neurosci 2021; 15:699667. PMID: 34955773; PMCID: PMC8696886; DOI: 10.3389/fnint.2021.699667. Received 04/23/2021; accepted 10/22/2021.
Abstract
Even though culture has been found to play a role in negative emotion expression, affective computing research primarily takes a basic-emotion approach when analyzing social signals for automatic emotion recognition technologies. Furthermore, automatic negative emotion recognition systems are still trained on data that originates primarily from North America and contains a majority of Caucasian samples. The current study addresses this problem by analyzing the differences in the underlying social signals, leveraging machine learning models to classify three negative emotions, contempt, anger and disgust (CAD), across three cultures: North American, Persian, and Filipino. Using a curated data set compiled from YouTube videos, a support vector machine (SVM) was used to predict negative emotions across cultures. In addition, a one-way ANOVA was used to analyse the differences between culture groups in terms of the level of activation of the underlying social signals. Our results not only highlight the significant differences in the social signals activated for each culture, but also indicate the specific underlying social signals that differ across our cross-cultural data sets. Furthermore, the automatic classification methods recognized North American expressions of CAD well, while Filipino and Persian expressions were recognized at near-chance levels.
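The SVM classification step described above can be sketched as follows. This is only an illustrative stand-in, not the study's pipeline: the feature vectors below are synthetic "AU activation" values invented for the example, whereas the study extracted real social signals from video.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in features: activation levels of three hypothetical
# facial action units per sample (values and cluster centers invented).
def make_samples(center, label, n=30):
    X = rng.normal(loc=center, scale=0.05, size=(n, 3))
    return X, [label] * n

X_anger,    y_anger    = make_samples([0.9, 0.1, 0.1], "anger")
X_disgust,  y_disgust  = make_samples([0.2, 0.9, 0.1], "disgust")
X_contempt, y_contempt = make_samples([0.1, 0.1, 0.9], "contempt")

X = np.vstack([X_anger, X_disgust, X_contempt])
y = y_anger + y_disgust + y_contempt

# Train an RBF-kernel SVM, as in the study, and classify a new sample
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[0.85, 0.15, 0.1]])
print(pred[0])  # anger-like feature vector
```

Training separate models per culture group and comparing their cross-culture accuracy is one way to obtain the near-chance versus well-recognized pattern the abstract reports.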
Affiliation(s)
- Emma Hughson
- Robots With Social Intelligence and Empathy Lab, Simon Fraser University, School of Computing Science, Burnaby, BC, Canada
- Roya Javadi
- Robots With Social Intelligence and Empathy Lab, Simon Fraser University, School of Computing Science, Burnaby, BC, Canada
- James Thompson
- Robots With Social Intelligence and Empathy Lab, Simon Fraser University, School of Computing Science, Burnaby, BC, Canada
- Angelica Lim
- Robots With Social Intelligence and Empathy Lab, Simon Fraser University, School of Computing Science, Burnaby, BC, Canada
5
Keating CT, Sowden S, Cook JL. Comparing internal representations of facial expression kinematics between autistic and non-autistic adults. Autism Res 2021; 15:493-506. PMID: 34846102; DOI: 10.1002/aur.2642. Received 06/07/2021; revised 09/21/2021; accepted 11/16/2021.
Abstract
Recent developments suggest that dynamic angry expressions need to move at a higher speed in order for autistic individuals to identify them successfully. It is therefore plausible that autistic individuals do not have a 'deficit' in angry expression recognition, but rather that their internal representation of these expressions is characterised by very high-speed movement. In this study, matched groups of autistic and non-autistic adults completed a novel emotion-based task employing dynamic displays of happy, angry and sad point-light facial (PLF) expressions. On each trial, participants moved a slider to manipulate the speed of a PLF stimulus until it moved at a speed that, in their 'mind's eye', was typical of happy, angry or sad expressions. Participants were shown three types of PLFs, showing the full face, only the eye region, or only the mouth region; the latter two were included to test whether differences in facial information sampling underpinned any dissimilarities in speed attributions. Across both groups, participants attributed the highest speeds to angry, then happy, then sad facial motion, increasing the speed of angry and happy expressions by 41% and 27% respectively and decreasing the speed of sad expressions by 18%. This suggests that participants hold 'caricatured' internal representations of emotion, in which emotion-related kinematic cues are over-emphasised. There were no differences between autistic and non-autistic individuals in the speeds attributed to full-face or partial-face angry, happy and sad expressions. Consequently, we find no evidence that autistic adults possess atypically fast internal representations of anger.
Affiliation(s)
- Sophie Sowden
- School of Psychology, University of Birmingham, Birmingham, UK
- Jennifer L Cook
- School of Psychology, University of Birmingham, Birmingham, UK
6
NetFACS: Using network science to understand facial communication systems. Behav Res Methods 2021; 54:1912-1927. PMID: 34755285; PMCID: PMC9374617; DOI: 10.3758/s13428-021-01692-5. Accepted 08/22/2021.
Abstract
Understanding facial signals in humans and other species is crucial for understanding the evolution, complexity, and function of the face as a communication tool. The Facial Action Coding System (FACS) enables researchers to measure facial movements accurately, but we currently lack tools to reliably analyse data and efficiently communicate results. Network analysis can provide a way to use the information encoded in FACS datasets: by treating individual AUs (the smallest units of facial movements) as nodes in a network and their co-occurrence as connections, we can analyse and visualise differences in the use of combinations of AUs in different conditions. Here, we present ‘NetFACS’, a statistical package that uses occurrence probabilities and resampling methods to answer questions about the use of AUs, AU combinations, and the facial communication system as a whole in humans and non-human animals. Using highly stereotyped facial signals as an example, we illustrate some of the current functionalities of NetFACS. We show that very few AUs are specific to certain stereotypical contexts; that AUs are not used independently from each other; that graph-level properties of stereotypical signals differ; and that clusters of AUs allow us to reconstruct facial signals, even when blind to the underlying conditions. The flexibility and widespread use of network analysis allows us to move away from studying facial signals as stereotyped expressions, and towards a dynamic and differentiated approach to facial communication.
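The core idea, treating individual AUs as network nodes and their co-occurrences as weighted edges, can be sketched in a few lines. This toy Python fragment (with invented AU data) only illustrates the counting step that such a network is built from, not NetFACS's occurrence-probability and resampling statistics:

```python
from itertools import combinations
from collections import Counter

# Hypothetical FACS data: each row is one observed facial event,
# coded as the set of active action units (values invented).
events = [
    {"AU6", "AU12"},          # a smile-like event
    {"AU6", "AU12", "AU25"},
    {"AU12", "AU25"},
    {"AU4", "AU7"},           # a frown-like event
    {"AU4", "AU7", "AU9"},
]

# Node weight = how often each AU occurs; edge weight = how often a pair co-occurs
node_counts = Counter(au for ev in events for au in ev)
edge_counts = Counter(
    frozenset(pair)
    for ev in events
    for pair in combinations(sorted(ev), 2)
)

def cond_prob(a, b):
    """P(b active | a active), estimated from co-occurrence counts."""
    return edge_counts[frozenset((a, b))] / node_counts[a]

print(cond_prob("AU6", "AU12"))  # -> 1.0: AU12 always accompanies AU6 here
```

Thresholding or resampling such conditional probabilities per condition yields the kind of context-specific AU clusters the abstract describes.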
8
Höfling TTA, Alpers GW, Gerdes ABM, Föhl U. Automatic facial coding versus electromyography of mimicked, passive, and inhibited facial response to emotional faces. Cogn Emot 2021; 35:874-889. PMID: 33761825; DOI: 10.1080/02699931.2021.1902786.
Abstract
Decoding someone's facial expressions provides insight into his or her emotional experience. Recently, Automatic Facial Coding (AFC) software has been developed to provide measurements of emotional facial expressions. Previous studies provided initial evidence of the sensitivity of such systems in detecting facial responses in study participants. In the present experiment, we set out to generalise these results to affective responses as they can occur in variable social interactions. We presented facial expressions (happy, neutral, angry) and instructed participants (N = 64) either to actively mimic them, to look at them passively (n = 21), or to inhibit their own facial reaction (n = 22). A video stream for AFC and an electromyogram (EMG) of the zygomaticus and corrugator muscles were recorded continuously. In the mimicking condition, both AFC and EMG differentiated well between facial responses to the different emotional pictures. In the passive viewing and inhibition conditions, AFC did not detect changes in facial expressions, whereas EMG remained highly sensitive. Although only EMG is sensitive when participants intend to conceal their facial reactions, these data extend previous findings showing that Automatic Facial Coding is a promising tool for detecting intense facial reactions.
Affiliation(s)
- T Tim A Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
- Georg W Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B M Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Ulrich Föhl
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
9
Zhang M, Ihme K, Drewitz U, Jipp M. Understanding the Multidimensional and Dynamic Nature of Facial Expressions Based on Indicators for Appraisal Components as Basis for Measuring Drivers' Fear. Front Psychol 2021; 12:622433. PMID: 33679538; PMCID: PMC7930214; DOI: 10.3389/fpsyg.2021.622433. Received 10/28/2020; accepted 01/27/2021.
Abstract
Facial expressions are one of the most commonly used implicit measures for in-vehicle affective computing. However, the time course and underlying mechanisms of facial expressions have so far received little attention. According to the Component Process Model of emotions, facial expressions result from an individual's appraisals, which are assumed to occur in sequence. Therefore, a multidimensional and dynamic analysis of drivers' fear based on facial expression data could profit from a consideration of these appraisals. A driving simulator experiment with 37 participants was conducted, in which fear and relaxation were induced. Facial expression indicators of the high-novelty and low-power appraisals were significantly activated after a fear event (high novelty: Z = 2.80, p < 0.01, r_contrast = 0.46; low power: Z = 2.43, p < 0.05, r_contrast = 0.50). Furthermore, after the fear event, the activation of high novelty occurred earlier than that of low power. These results suggest that multidimensional analysis of facial expressions is a suitable approach for the in-vehicle measurement of drivers' emotions. Furthermore, a dynamic analysis of drivers' facial expressions that considers the effects of appraisal components can add valuable information for the in-vehicle assessment of emotions.
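The reported contrast effect sizes appear consistent with the common conversion r = Z / sqrt(N); as a quick sanity check, assuming the sample size N = 37 stated above (the abstract does not say explicitly which conversion the authors used):

```python
from math import sqrt

def r_from_z(z, n):
    """Effect size r for a standardized test statistic Z with sample size N."""
    return z / sqrt(n)

# High-novelty contrast reported above: Z = 2.80, N = 37 participants
print(round(r_from_z(2.80, 37), 2))  # 0.46, matching the reported r_contrast
```

The low-power value (0.50) does not reproduce under the same assumption, which suggests a different N entered that contrast.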
Affiliation(s)
- Meng Zhang
- Institute of Transportation Systems, German Aerospace Center/Deutsches Zentrum für Luft- und Raumfahrt (DLR), Braunschweig, Germany
- Klas Ihme
- Institute of Transportation Systems, German Aerospace Center/Deutsches Zentrum für Luft- und Raumfahrt (DLR), Braunschweig, Germany
- Uwe Drewitz
- Institute of Transportation Systems, German Aerospace Center/Deutsches Zentrum für Luft- und Raumfahrt (DLR), Braunschweig, Germany
- Meike Jipp
- Institute of Transportation Systems, German Aerospace Center/Deutsches Zentrum für Luft- und Raumfahrt (DLR), Braunschweig, Germany
10
Nanayama Tanaka C, Higa H, Ogawa N, Ishido M, Nakamura T, Nishiwaki M. Negative Mood States Are Related to the Characteristics of Facial Expression Drawing: A Cross-Sectional Study. Front Psychol 2021; 11:576683. PMID: 33391093; PMCID: PMC7773925; DOI: 10.3389/fpsyg.2020.576683. Received 06/26/2020; accepted 11/12/2020.
Abstract
Assessment of mood or emotion is important in developing mental health measures, and facial expressions are strongly related to mood and emotion. This study therefore examined the relationship between levels of negative mood and the characteristics of drawn mouths when moods are drawn as facial expressions on a common platform. A cross-sectional study of Japanese college freshmen was conducted, and 1,068 valid responses were analyzed. The questionnaire survey consisted of participants' characteristics, the Profile of Mood States (POMS), and a facial expression drawing (FACED) sheet, which was digitized and analyzed using image-analysis software. Based on the total POMS score as an index of negative mood, the participants were divided into four groups: low (L), normal (N), high (H), and very high (VH). The lengths of the drawn lines and of the span between the mouth corners were significantly longer, and circularity and roundness were significantly higher, in the L group. With increasing levels of negative mood, these lengths showed significant decreasing trends. Convex-downward and enclosed figures were significantly predominant in the L group, while convex-upward figures were significantly predominant, and no drawn mouths or simple line figures tended to predominate, in the H and VH groups. Our results suggest that mood states can be significantly related to the size and figure characteristics of mouths drawn in FACED on a non-verbal common platform. That is, subjects in a low negative mood may draw a larger and rounder mouth, with enclosed and convex-downward figures, whereas subjects in a high negative mood may not draw a mouth line at all or, if they do, may draw it shorter and convex upward.
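Shape measures like the circularity reported above are conventionally computed from a closed contour's area A and perimeter P as 4*pi*A/P**2, which equals 1 for a perfect circle and less for elongated shapes. A minimal sketch (the contour here is synthetic, not from the study's data, and the study's software may define the measure differently):

```python
from math import pi, sin, cos, hypot

def circularity(points):
    """4*pi*A / P**2 for a closed polygon; 1.0 for a perfect circle."""
    area = 0.0
    perim = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1              # shoelace formula
        perim += hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * pi * area / perim ** 2

# A regular 100-gon approximates a circle, so circularity approaches 1
circle_like = [(cos(2 * pi * k / 100), sin(2 * pi * k / 100)) for k in range(100)]
print(round(circularity(circle_like), 3))
```

Applied to a digitized mouth contour, a higher value corresponds to the "rounder mouth" pattern associated with low negative mood above.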
Affiliation(s)
- Hayato Higa
- Faculty of Medicine, University of Toyama, Toyama, Japan
- Noriko Ogawa
- Graduate School of Engineering, Osaka Institute of Technology, Osaka, Japan
- Faculty of Nursing, Setsunan University, Osaka, Japan
- Minenori Ishido
- Faculty of Engineering, Osaka Institute of Technology, Osaka, Japan
- Masato Nishiwaki
- Faculty of Engineering, Osaka Institute of Technology, Osaka, Japan
11
Roitblat Y, Cohensedgh S, Frig-Levinson E, Cohen M, Dadbin K, Shohed C, Shvartsman D, Shterenshis M. Emotional expressions with minimal facial muscle actions. Report 2: Recognition of emotions. Curr Psychol 2020. DOI: 10.1007/s12144-020-00691-7.