1. Mulder MJ, Prummer F, Terburg D, Kenemans JL. Drift-diffusion modeling reveals that masked faces are preconceived as unfriendly. Sci Rep 2023;13:16982. [PMID: 37813970; PMCID: PMC10562405; DOI: 10.1038/s41598-023-44162-y]
Abstract
During the COVID-19 pandemic, the use of face masks has become a daily routine. Studies have shown that face masks increase the ambiguity of facial expressions, which not only affects (the development of) emotion recognition, but also interferes with social interaction and judgement. To disambiguate facial expressions, we rely on perceptual (stimulus-driven) as well as preconceptual (top-down) processes. However, it is unknown which of these two mechanisms accounts for the misinterpretation of masked expressions. To investigate this, we asked participants (N = 136) to decide whether ambiguous (morphed) facial expressions, with or without a mask, were perceived as friendly or unfriendly. To test for the independent effects of perceptual and preconceptual biases, we fitted a drift-diffusion model (DDM) to the behavioral data of each participant. Results show that face masks induce a clear loss of information, leading to a slight perceptual bias towards friendly choices, but also a clear preconceptual bias towards unfriendly choices for masked faces. These results suggest that, although face masks can increase the perceptual friendliness of faces, people have the prior preconception to interpret masked faces as unfriendly.
Affiliation(s)
- Martijn J Mulder
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- Franziska Prummer
- School of Computing and Communications, Lancaster University, Lancaster, UK
- David Terburg
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- J Leon Kenemans
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
2. Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol 2023;14:1221081. [PMID: 37794914; PMCID: PMC10546417; DOI: 10.3389/fpsyg.2023.1221081]
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which this dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or even better than) videos and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Affiliation(s)
- Hyunwoo Kim
- Department of Experimental Psychology, University College London, London, United Kingdom
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jeffrey M. Girard
- Department of Psychology, University of Kansas, Lawrence, KS, United States
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
3. Babenko VV, Yavna DV, Ermakov PN, Anokhina PV. Nonlocal contrast calculated by the second order visual mechanisms and its significance in identifying facial emotions. F1000Res 2023;10:274. [PMID: 37767361; PMCID: PMC10521119; DOI: 10.12688/f1000research.28396.2]
Abstract
Background: Previously obtained results indicate that faces are preattentively detected in the visual scene very quickly, and that information on facial expression is rapidly extracted at the lower levels of the visual system. At the same time, different facial attributes make different contributions to facial expression recognition. However, it is known that none of the preattentive mechanisms is selective for particular facial features, such as the eyes or mouth. The aim of our study was to identify a candidate for the role of such a mechanism. Our assumption was that the most informative areas of an image are those characterized by spatial heterogeneity, particularly by nonlocal contrast changes. These areas may be identified in the human visual system by second-order visual filters selective to contrast modulations of brightness gradients. Methods: We developed a software program imitating the operation of these filters and finding areas of contrast heterogeneity in an image. Using this program, we extracted areas with maximum, minimum and medium contrast modulation amplitudes from the initial face images, and then used these to make three variants of one and the same face. The faces were shown to observers along with other objects synthesized in the same way. The participants had to identify the faces and define their emotional expressions. Results: We found that the greater the contrast modulation amplitude of the areas shaping the face, the more precisely the emotion was identified. Conclusions: The results suggest that areas with a greater increase in nonlocal contrast are more informative in facial images, and that second-order visual filters can claim the role of elements that detect areas of interest, attract visual attention, and serve as windows through which subsequent levels of visual processing receive valuable information.
Affiliation(s)
- Vitaly V. Babenko
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Denis V. Yavna
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Pavel N. Ermakov
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Polina V. Anokhina
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
4. Nussbaum C, Pöhlmann M, Kreysa H, Schweinberger SR. Perceived naturalness of emotional voice morphs. Cogn Emot 2023;37:731-747. [PMID: 37104118; DOI: 10.1080/02699931.2023.2200920]
Abstract
Research into voice perception benefits from manipulation software to gain experimental control over acoustic expression of social signals such as vocal emotions. Today, parameter-specific voice morphing allows precise control of the emotional quality expressed by single vocal parameters, such as fundamental frequency (F0) and timbre. However, potential side effects, in particular reduced naturalness, could limit the ecological validity of speech stimuli. To address this for the domain of emotion perception, we collected ratings of perceived naturalness and emotionality on voice morphs expressing different emotions through either F0 or timbre only. In two experiments, we compared two different morphing approaches, using either neutral voices or emotional averages as emotionally non-informative reference stimuli. As expected, parameter-specific voice morphing reduced perceived naturalness. However, the perceived naturalness of F0 and timbre morphs was comparable when averaged emotions served as the reference, potentially making this approach more suitable for future research. Crucially, there was no relationship between ratings of emotionality and naturalness, suggesting that the perception of emotion was not substantially affected by a reduction of voice naturalness. We hold that while these findings advocate parameter-specific voice morphing as a suitable tool for research on vocal emotion perception, great care should be taken in producing ecologically valid stimuli.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Manuel Pöhlmann
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Helene Kreysa
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, Switzerland
5. Watson DM, Johnston A. A PCA-Based Active Appearance Model for Characterising Modes of Spatiotemporal Variation in Dynamic Facial Behaviours. Front Psychol 2022;13:880548. [PMID: 35719501; PMCID: PMC9204357; DOI: 10.3389/fpsyg.2022.880548]
Abstract
Faces carry key personal information about individuals, including cues to their identity, social traits, and emotional state. Much research to date has employed static images of faces taken under tightly controlled conditions, yet faces in the real world are dynamic and experienced under ambient conditions. A common approach to studying key dimensions of facial variation is the use of facial caricatures. However, such techniques have typically relied on static images, and the few examples of dynamic caricatures have relied on animating graphical head models. Here, we present a principal component analysis (PCA)-based active appearance model for capturing patterns of spatiotemporal variation in videos of natural dynamic facial behaviours. We demonstrate how this technique can be applied to generate dynamic anti-caricatures of biological motion patterns in facial behaviours. This technique could be extended to caricaturing other facial dimensions, or to more general analyses of spatiotemporal variations in dynamic faces.
Affiliation(s)
- David M Watson
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Department of Psychology, University of York, York, United Kingdom
- Alan Johnston
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
6. Furl N, Begum F, Ferrarese FP, Jans S, Woolley C, Sulik J. Caricatured facial movements enhance perception of emotional facial expressions. Perception 2022;51:313-343. [PMID: 35341407; PMCID: PMC9017061; DOI: 10.1177/03010066221086452]
Abstract
Although faces “in the wild” constantly undergo complicated movements, humans adeptly perceive facial identity and expression. Previous studies, focusing mainly on identity, used photographic caricature to show that distinctive form increases perceived dissimilarity. We tested whether distinctive facial movements showed similar effects, and we focussed on both perception of expression and identity. We caricatured the movements of an animated computer head, using physical motion metrics extracted from videos. We verified that these “ground truth” metrics showed the expected effects: caricature increased physical dissimilarity between faces differing in expression and those differing in identity. Like the ground-truth dissimilarity, participants’ dissimilarity perception was increased by caricature when faces differed in expression. We found these perceived dissimilarities to reflect the “representational geometry” of the ground truth. However, neither of these findings held for faces differing in identity. These findings replicated across two paradigms: pairwise ratings and multiarrangement. In a final study, motion caricature did not improve recognition memory for identity, whether manipulated at study or test. We report several forms of converging evidence for spatiotemporal caricature effects on dissimilarity perception of different expressions. However, more work needs to be done to discover what identity-specific movements can enhance face identification.
Affiliation(s)
- Sarah Jans
- Royal Holloway, University of London, UK
- Justin Sulik
- Royal Holloway, University of London, UK; Cognition, Values & Behavior, Ludwig Maximilian University of Munich, Germany
7. Duran N, Atkinson AP. Foveal processing of emotion-informative facial features. PLoS One 2021;16:e0260814. [PMID: 34855898; PMCID: PMC8638924; DOI: 10.1371/journal.pone.0260814]
Abstract
Certain facial features provide useful information for the recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the feature closest to the initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. The duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features contributes to emotion recognition, but that such features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
Affiliation(s)
- Nazire Duran
- Department of Psychology, Durham University, Durham, United Kingdom
- Anthony P. Atkinson
- Department of Psychology, Durham University, Durham, United Kingdom
8. Cavieres A, Maldonado R, Bland A, Elliott R. Relationship Between Gender and Performance on Emotion Perception Tasks in a Latino Population. Int J Psychol Res (Medellin) 2021;14:106-114. [PMID: 34306583; PMCID: PMC8297575; DOI: 10.21500/20112084.5032]
Abstract
Basic emotions are universally recognized, although differences across cultures and between genders have been described. We report results from two emotion recognition tasks in a sample of healthy adults from Chile. Methods: 192 volunteers (mean age 31.58 years, s.d. 8.36; 106 women) completed the Emotional Recognition Task, in which they were asked to identify a briefly displayed emotion, and the Emotional Intensity Morphing Task, in which they viewed faces with increasing or decreasing emotional intensity and indicated when they either detected or no longer detected the emotion. Results: All emotions were recognized at above-chance levels. The only sex differences were that men performed better at identifying anger (p = .0485) and responded more slowly to fear (p = .0057) than women. Discussion: These findings are consistent with some, though not all, prior literature on emotion perception. Crucially, we report data on emotion perception in a healthy adult Latino population for the first time, which contributes to the emerging literature on cultural differences in affective processing.
Affiliation(s)
- Alvaro Cavieres
- Departamento de Psiquiatría, Universidad de Valparaíso, Chile
- Rocío Maldonado
- Departamento de Psiquiatría, Universidad de Valparaíso, Chile
- Amy Bland
- Department of Psychology, Manchester Metropolitan University, UK
- Rebecca Elliott
- Neuroscience and Psychiatry Unit, Division of Neuroscience and Experimental Psychology, University of Manchester, UK
9. ERP evidence for emotional sensitivity in social anxiety. J Affect Disord 2021;279:361-367. [PMID: 33099050; DOI: 10.1016/j.jad.2020.09.111]
Abstract
BACKGROUND Emotional sensitivity involves the ability to recognize and interpret facial expressions. This is very important for interpersonal communication. Previous studies found differences in emotional sensitivity between high social anxiety (HSA) individuals and low social anxiety (LSA) individuals. However, the underlying neural mechanisms are still unclear. The present study explored the effects of expression intensity and social anxiety on emotional sensitivity and their neural mechanisms. METHODS The HSA group (n = 20) and the LSA group (n = 20) were asked to recognize anger expressions with different intensities in an emotion recognition task. The hit rate, reaction time, early time window (P1, N170), and late time window (LPP) were recorded. RESULTS The results showed that individuals with HSA had a significantly higher hit rate and shorter reaction time than individuals with LSA (p < 0.01). Event-related potential (ERP) results showed that, compared to the LSA group, the HSA group exhibited significantly enhanced N170 and LPP amplitude (p < 0.01). However, the difference in P1 amplitude was not significant (p > 0.05). LIMITATIONS The participants in this study were a subclinical social anxiety sample, and the effects of other mood disorders were not excluded, partially limiting the generalizability of the results. CONCLUSIONS Our findings suggest that, compared to LSA individuals, HSA individuals are more sensitive to all presented faces. The ERP results indicated that HSA individuals' high sensitivity to threatening expressions is related to stronger structural encoding and fine processing.
10. Körner R, Schütz A. Dominance or prestige: A review of the effects of power poses and other body postures. Social and Personality Psychology Compass 2020. [DOI: 10.1111/spc3.12559]
Affiliation(s)
- Robert Körner
- Department of Psychology, University of Bamberg, Bamberg, Germany
- Astrid Schütz
- Department of Psychology, University of Bamberg, Bamberg, Germany
11. Whiting CM, Kotz SA, Gross J, Giordano BL, Belin P. The perception of caricatured emotion in voice. Cognition 2020;200:104249. [PMID: 32413547; PMCID: PMC7315128; DOI: 10.1016/j.cognition.2020.104249]
Abstract
Affective vocalisations such as screams and laughs can convey strong emotional content without verbal information. Previous research using morphed vocalisations (e.g. 25% fear/75% anger) has revealed categorical perception of emotion in voices, showing sudden shifts at emotion category boundaries. However, it is currently unknown how further modulation of vocalisations beyond the veridical emotion (e.g. 125% fear) affects perception. Caricatured facial expressions produce emotions that are perceived as more intense and distinctive, with faster recognition relative to the original and anti-caricatured (e.g. 75% fear) emotions, but a similar effect using vocal caricatures has not been previously examined. Furthermore, caricatures can play a key role in assessing how distinctiveness is identified, in particular by evaluating accounts of emotion perception with reference to prototypes (distance from the central stimulus) and exemplars (density of the stimulus space). Stimuli consisted of four emotions (anger, disgust, fear, and pleasure) morphed at 25% intervals between a neutral expression and each emotion from 25% to 125%, and between each pair of emotions. Emotion perception was assessed using emotion intensity ratings, valence and arousal ratings, speeded categorisation and paired similarity ratings. We report two key findings: 1) across tasks, there was a strongly linear effect of caricaturing, with caricatured emotions (125%) perceived as higher in emotion intensity and arousal, and recognised faster compared to the original emotion (100%) and anti-caricatures (25%-75%); 2) our results reveal evidence for a unique contribution of a prototype-based account in emotion recognition. We show for the first time that vocal caricature effects are comparable to those found previously with facial caricatures. The set of caricatured vocalisations provided opens a promising line of research for investigating vocal affect perception and emotion processing deficits in clinical populations.
Affiliation(s)
- Caroline M Whiting
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany
- Bruno L Giordano
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Institut de Neurosciences de la Timone, CNRS UMR 7289, Aix-Marseille Université, Marseille, France
- Pascal Belin
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Institut de Neurosciences de la Timone, CNRS UMR 7289, Aix-Marseille Université, Marseille, France
12.
Abstract
This paper describes a method to measure the sensitivity of an individual to different facial expressions. It shows that individual participants are more sensitive to happy than to fearful expressions and that the differences are statistically significant using the model-comparison approach. Sensitivity is measured by asking participants to discriminate between an emotional facial expression and a neutral expression of the same face. The expression was diluted to different degrees by combining it in different proportions with the neutral expression using morphing software. Sensitivity is defined as the proportion of neutral expression in a stimulus at which participants discriminate the emotional expression on 75% of presentations. Individuals could reliably discriminate happy expressions diluted with a greater proportion of the neutral expression compared with that required for discrimination of fearful expressions. This tells us that individual participants are more sensitive to happy compared with fearful expressions. Sensitivity is equivalent when measured on two different testing sessions, and greater sensitivity to happy expressions is maintained with short stimulus durations and stimuli generated using different morphing software. Increased sensitivity to happy compared with fearful expressions was affected at smaller image sizes for some participants. Application of the approach for use with clinical populations, as well as understanding the relative contribution of perceptual processing and affective processing in facial expression recognition, is discussed.
13. Furl N, Begum F, Sulik J, Ferrarese FP, Jans S, Woolley C. Face space representations of movement. Neuroimage 2020;212:116676. [DOI: 10.1016/j.neuroimage.2020.116676]
14. Roch M, Pesciarelli F, Leo I. How Individuals With Down Syndrome Process Faces and Words Conveying Emotions? Evidence From a Priming Paradigm. Front Psychol 2020;11:692. [PMID: 32362859; PMCID: PMC7180333; DOI: 10.3389/fpsyg.2020.00692]
Abstract
Emotion recognition from facial expressions and words conveying emotions is considered crucial for the development of interpersonal relations (Pochon and Declercq, 2013). Although Down syndrome (DS) has received growing attention in the last two decades, emotional development has remained underexplored, perhaps because of the stereotype of high sociability in persons with DS. Recently, however, some literature has suggested the existence of specific deficits in emotion recognition in DS. The current study aimed to expand our knowledge of how individuals with DS process emotion expressions from faces and words by adopting a powerful methodological paradigm, namely priming. The purpose was to analyse to what extent emotion recognition in DS can occur through different processes than in typical development (TD). Individuals with DS (N = 20) were matched to a control group (N = 20) on vocabulary knowledge (PPVT) and non-verbal ability (Raven’s matrices). Subsequently, a priming paradigm was adopted: stimuli were photos of faces with different facial expressions (happy, sad, neutral) and three words (happy, sad, neutral). On a computer screen, the first item (face or word) was presented for a very short time (prime) and afterward a second stimulus (face or word) appeared (target). Participants had to recognize whether the target was an emotion (sad/happy) or not (neutral). Four prime-target pairs were presented (face-word; word-face; word-word; face-face) in two conditions: congruent (same emotion prime/target) and incongruent (different emotion prime/target). The results failed to show evidence for differential processing during emotion recognition between the two groups matched for verbal and non-verbal abilities. Both groups showed a typical priming effect: in the incongruent condition, slower reaction times were recorded, in particular when the target to be recognized was a face, providing evidence that the stimuli were indeed processed. Overall, the data of the current work seem to support the idea of similar developmental trajectories in individuals with DS and TD of the same verbal and non-verbal level, at least as far as the processing of simple visual and linguistic stimuli conveying basic emotions is concerned. Results are interpreted in relation to recent findings on emotion recognition from faces and words in DS.
Affiliation(s)
- Maja Roch
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Francesca Pesciarelli
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
- Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Irene Leo
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
15. Nonlinear transduction of emotional facial expression. Vision Res 2020;170:1-11. [PMID: 32217366; DOI: 10.1016/j.visres.2020.03.004]
Abstract
To create neural representations of external stimuli, the brain performs a number of processing steps that transform its inputs. For fundamental attributes, such as stimulus contrast, this involves one or more nonlinearities that are believed to optimise the neural code to represent features of the natural environment. Here we ask if the same is also true of more complex stimulus dimensions, such as emotional facial expression. We report the results of three experiments combining morphed facial stimuli with electrophysiological and psychophysical methods to measure the function mapping emotional expression intensity to internal response. The results converge on a nonlinearity that accelerates over weak expressions, and then becomes shallower for stronger expressions, similar to the situation for lower level stimulus properties. We further demonstrate that the nonlinearity is not attributable to the morphing procedure used in stimulus generation.
16. Lane J, Robbins RA, Rohan EMF, Crookes K, Essex RW, Maddess T, Sabeti F, Mazlin JL, Irons J, Gradden T, Dawel A, Barnes N, He X, Smithson M, McKone E. Caricaturing can improve facial expression recognition in low-resolution images and age-related macular degeneration. J Vis 2019;19:18. [DOI: 10.1167/19.6.18]
Affiliation(s)
- Jo Lane: Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- Rachel A. Robbins: Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Emilie M. F. Rohan: John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Kate Crookes: Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia; School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Rohan W. Essex: Academic Unit of Ophthalmology, Medical School, The Australian National University, Canberra, ACT, Australia
- Ted Maddess: John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Faran Sabeti: John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia; Discipline of Optometry and Vision Science, The University of Canberra, Bruce, ACT, Australia; Collaborative Research in Bioactives and Biomarkers (CRIBB) Group, Canberra, ACT, Australia
- Jamie-Lee Mazlin: Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Jessica Irons: Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Tamara Gradden: Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Amy Dawel: Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- Nick Barnes: Research School of Engineering, The Australian National University and Data61, Commonwealth Scientific and Industrial Research Organisation, Canberra, ACT, Australia
- Xuming He: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Michael Smithson: Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Elinor McKone: Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
17. McKone E, Robbins RA, He X, Barnes N. Caricaturing faces to improve identity recognition in low vision simulations: How effective is current-generation automatic assignment of landmark points? PLoS One 2018;13:e0204361. [PMID: 30286112] [PMCID: PMC6171855] [DOI: 10.1371/journal.pone.0204361]
Abstract
PURPOSE Previous behavioural studies demonstrate that face caricaturing can provide an effective image enhancement method for improving poor face identity perception in low vision simulations (e.g., age-related macular degeneration, bionic eye). To translate caricaturing usefully to patients, assignment of the multiple face landmark points needed to produce the caricatures needs to be fully automatised. Recent developments in computer science allow automatic face landmark detection of 68 points in real time and in multiple viewpoints. However, previous demonstrations of the behavioural effectiveness of caricaturing have used higher-precision caricatures with 147 landmark points per face, assigned by hand. Here, we test the effectiveness of the auto-assigned 68-point caricatures and compare them to the hand-assigned 147-point caricatures. METHOD We assessed human perception of how different in identity pairs of faces appear, when veridical (uncaricatured), caricatured with 68 points, and caricatured with 147 points. Across two experiments, we tested two types of low-vision images: a simulation of blur, as experienced in macular degeneration (testing two blur levels); and a simulation of the phosphenised images seen in prosthetic vision (at three resolutions). RESULTS The 68-point caricatures produced significant improvements in identity discrimination relative to veridical, and were approximately 50% as effective as the 147-point caricatures. CONCLUSION Realistic translation to patients (e.g., via real-time caricaturing with the enhanced signal sent to smart glasses or a visual prosthetic) is approaching feasibility. For maximum effectiveness, software needs to be able to assign landmark points tracing out all details of feature and face shape, to produce high-precision caricatures.
Affiliation(s)
- Elinor McKone: Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, Australian Capital Territory, Australia
- Rachel A. Robbins: Research School of Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
- Xuming He: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Nick Barnes: Research School of Engineering, Australian National University, Canberra, Australian Capital Territory, Australia; Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra, Australian Capital Territory, Australia; Bionic Vision Australia, Carlton, Victoria, Australia
18. Leleu A, Dzhelyova M, Rossion B, Brochard R, Durand K, Schaal B, Baudouin JY. Tuning functions for automatic detection of brief changes of facial expression in the human brain. Neuroimage 2018;179:235-251. [DOI: 10.1016/j.neuroimage.2018.06.048]
19. Zloteanu M, Krumhuber EG, Richardson DC. Detecting Genuine and Deliberate Displays of Surprise in Static and Dynamic Faces. Front Psychol 2018;9:1184. [PMID: 30042717] [PMCID: PMC6048358] [DOI: 10.3389/fpsyg.2018.01184]
Abstract
People are good at recognizing emotions from facial expressions, but less accurate at determining the authenticity of such expressions. We investigated whether this depends upon the technique that senders use to produce deliberate expressions, and on decoders seeing these in a dynamic or static format. Senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine). Other senders faked surprise with no preparation (Improvised) or after having first experienced genuine surprise themselves (Rehearsed). Decoders rated the genuineness and intensity of these expressions, and the confidence of their judgment. It was found that both expression type and presentation format impacted decoder perception and accurate discrimination. Genuine surprise achieved the highest ratings of genuineness, intensity, and judgmental confidence (dynamic only), and was fairly accurately discriminated from deliberate surprise expressions. In line with our predictions, Rehearsed expressions were perceived as more genuine (in dynamic presentation), whereas Improvised were seen as more intense (in static presentation). However, both were poorly discriminated as not being genuine. In general, dynamic stimuli improved authenticity discrimination accuracy and perceptual differences between expressions. While decoders could perceive subtle differences between different expressions (especially from dynamic displays), they were not adept at detecting if these were genuine or deliberate. We argue that senders are capable of producing genuine-looking expressions of surprise, enough to fool others as to their veracity.
Affiliation(s)
- Mircea Zloteanu: Department of Computer Science, University College London, London, United Kingdom; Department of Experimental Psychology, University College London, London, United Kingdom
- Eva G Krumhuber: Department of Experimental Psychology, University College London, London, United Kingdom
- Daniel C Richardson: Department of Experimental Psychology, University College London, London, United Kingdom
20. Sutherland CAM, Rhodes G, Young AW. Facial Image Manipulation. Soc Psychol Personal Sci 2017. [DOI: 10.1177/1948550617697176]
Affiliation(s)
- Clare A. M. Sutherland: ARC Centre of Excellence in Cognition and its Disorders, School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Gillian Rhodes: ARC Centre of Excellence in Cognition and its Disorders, School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Andrew W. Young: ARC Centre of Excellence in Cognition and its Disorders, School of Psychology, University of Western Australia, Crawley, Western Australia, Australia; Department of Psychology, University of York, Heslington, York, United Kingdom
21. Cebula KR, Wishart JG, Willis DS, Pitcairn TK. Emotion Recognition in Children With Down Syndrome: Influence of Emotion Label and Expression Intensity. Am J Intellect Dev Disabil 2017;122:138-155. [PMID: 28257244] [DOI: 10.1352/1944-7558-122.2.138]
Abstract
Some children with Down syndrome may experience difficulties in recognizing facial emotions, particularly fear, but it is not clear why, nor how such skills can best be facilitated. Using a photo-matching task, emotion recognition was tested in children with Down syndrome, children with nonspecific intellectual disability and cognitively matched, typically developing children (all groups N = 21) under four conditions: veridical vs. exaggerated emotions and emotion-labelling vs. generic task instructions. In all groups, exaggerating emotions facilitated recognition accuracy and speed, with emotion labelling facilitating recognition accuracy. Overall accuracy and speed did not differ in the children with Down syndrome, although recognition of fear was poorer than in the typically developing children and unrelated to emotion label use. Implications for interventions are considered.
Affiliation(s)
- Katie R. Cebula: School of Education, University of Edinburgh
22. Atkinson AP, Dittrich WH, Gemmell AJ, Young AW. Emotion Perception from Dynamic and Static Body Expressions in Point-Light and Full-Light Displays. Perception 2004;33:717-46. [PMID: 15330366] [DOI: 10.1068/p5096]
Abstract
Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the ‘peak’ of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that full-light and even point-light displays can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results about the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.
Affiliation(s)
- Anthony P Atkinson: Department of Psychology, University of Durham, Science Laboratories, South Road, Durham DH1 3LE, UK
23. Wingenbach TSH, Ashwin C, Brosnan M. Validation of the Amsterdam Dynamic Facial Expression Set–Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions. PLoS One 2016;11:e0147112. [PMID: 26784347] [PMCID: PMC4718603] [DOI: 10.1371/journal.pone.0147112]
Abstract
Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
Affiliation(s)
- Chris Ashwin: Department of Psychology, University of Bath, Bath, United Kingdom
- Mark Brosnan: Department of Psychology, University of Bath, Bath, United Kingdom
24. Almeida PR, Ferreira-Santos F, Chaves PL, Paiva TO, Barbosa F, Marques-Teixeira J. Perceived arousal of facial expressions of emotion modulates the N170, regardless of emotional category: Time domain and time–frequency dynamics. Int J Psychophysiol 2016;99:48-56. [DOI: 10.1016/j.ijpsycho.2015.11.017]
25. Takehara T, Ochiai F, Suzuki N. A small-world network model of facial emotion recognition. Q J Exp Psychol (Hove) 2015;69:1508-29. [PMID: 26315136] [DOI: 10.1080/17470218.2015.1086393]
Abstract
Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
Affiliation(s)
- Takuma Takehara: Department of Psychology, Doshisha University, Kyoto, Japan; Department of Psychology, University of Cincinnati, Cincinnati, OH, USA
- Fumio Ochiai: Department of Business Administration, Tezukayama University, Nara, Japan
- Naoto Suzuki: Department of Psychology, Doshisha University, Kyoto, Japan
26. Calvo MG, Nummenmaa L. Perceptual and affective mechanisms in facial expression recognition: An integrative review. Cogn Emot 2015. [PMID: 26212348] [DOI: 10.1080/02699931.2015.1049124]
Abstract
Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, although the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.
Affiliation(s)
- Manuel G Calvo: Department of Cognitive Psychology, University of La Laguna, Tenerife, Spain
- Lauri Nummenmaa: School of Science, Aalto University, Espoo, Finland; Department of Psychology and Turku PET Centre, University of Turku, Turku, Finland
27. Hsu SM. The neural mechanism underlying the effects of preceding contexts on current categorization decisions. Neuropsychologia 2014;66:39-47. [PMID: 25445780] [DOI: 10.1016/j.neuropsychologia.2014.11.003]
Abstract
Preceding contexts strongly influence current decision-making. To elucidate the neural mechanism that underlies this phenomenon, magnetoencephalographic signals were recorded while participants performed a binary categorization task on a sequence of facial expressions. The behavioral data indicated that the categorization of current facial expressions differed between the contexts shaped by the immediately preceding expression. We found that the effects of the preceding context were linked to prestimulus power activities in the low-frequency band. However, these context-dependent neural markers did not reflect behavioral decisions. Rather, the beta power observed primarily after stimulus onset and located at distinct sensors was predictive of the trial-by-trial decisions. Despite these results, the coupling strength between context-dependent and decision-related power differed between preceding contexts, suggesting that the context-dependent power interacted with decision-related power in a systemic manner and in turn biased behavioral decisions. Taken together, these findings suggest that categorization decisions are mediated by a series of power activities that coordinate the influence of preceding contexts on current categorization.
Affiliation(s)
- Shen-Mou Hsu: Research Center for Mind, Brain and Learning, National Chengchi University, Taipei, Taiwan, ROC
28. Walsh JA, Vida MD, Rutherford MD. Strategies for perceiving facial expressions in adults with autism spectrum disorder. J Autism Dev Disord 2014;44:1018-26. [PMID: 24077783] [DOI: 10.1007/s10803-013-1953-1]
Abstract
Rutherford and McIntosh (J Autism Dev Disord 37:187–196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding using photographs of real people. In addition, two control tasks were added to eliminate alternative explanations. We replicated the findings of Rutherford and McIntosh (J Autism Dev Disord 37:187–196, 2007) and also demonstrated that adults with ASD do not show this tolerance when evaluating how realistic the expressions are. These results suggest adults with ASD employ a rule-based strategy to a greater extent than typical adults when processing facial expressions but not when processing other aspects of faces.
29. Exaggerating Facial Expressions: A Way to Intensify Emotion or a Way to the Uncanny Valley? Cognit Comput 2014. [DOI: 10.1007/s12559-014-9273-0]
30. Bimler DL, Paramei GV. Facial-Expression Affective Attributes and their Configural Correlates: Components and Categories. Span J Psychol 2006;9:19-31. [PMID: 16673619] [DOI: 10.1017/s113874160000593x]
Abstract
The present study investigates the perception of facial expressions of emotion, and explores the relation between the configural properties of expressions and their subjective attribution. Stimuli were a male and a female series of morphed facial expressions, interpolated between prototypes of seven emotions (happiness, sadness, fear, anger, surprise and disgust, and neutral) from Ekman and Friesen (1976). Topographical properties of the stimuli were quantified using the Facial Expression Measurement (FACEM) scheme. Perceived dissimilarities between the emotional expressions were elicited using a sorting procedure and processed with multidimensional scaling. Four dimensions were retained in the reconstructed facial-expression space, with positive and negative expressions opposed along D1, while the other three dimensions were interpreted as affective attributes distinguishing clusters of expressions categorized as “Surprise-Fear,” “Anger,” and “Disgust.” Significant relationships were found between these affective attributes and objective facial measures of the stimuli. The findings support a componential explanatory scheme for expression processing, wherein each component of a facial stimulus conveys an affective value separable from its context, rather than a categorical-gestalt scheme. The findings further suggest that configural information is closely involved in the decoding of affective attributes of facial expressions. Configural measures are also suggested as a common ground for dimensional as well as categorical perception of emotional faces.
Affiliation(s)
- David L Bimler: Department of Health and Human Development, Massey University, Private Bag 11-222, Palmerston North, New Zealand
31. Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials. Biol Psychol 2014;96:111-25. [DOI: 10.1016/j.biopsycho.2013.12.003]
32. Recio G, Schacht A, Sommer W. Classification of dynamic facial expressions of emotion presented briefly. Cogn Emot 2013;27:1486-94. [DOI: 10.1080/02699931.2013.794128]
33. Braadbaart L, de Grauw H, Perrett DI, Waiter GD, Williams JHG. The shared neural basis of empathy and facial imitation accuracy. Neuroimage 2013;84:367-75. [PMID: 24012546] [DOI: 10.1016/j.neuroimage.2013.08.061]
Abstract
Empathy involves experiencing emotion vicariously, and understanding the reasons for those emotions. It may be served partly by a motor simulation function, and therefore share a neural basis with imitation (as opposed to mimicry), as both involve sensorimotor representations of intentions based on perceptions of others' actions. We recently showed a correlation between imitation accuracy and Empathy Quotient (EQ) using a facial imitation task and hypothesised that this relationship would be mediated by the human mirror neuron system. During functional Magnetic Resonance Imaging (fMRI), 20 adults observed novel 'blends' of facial emotional expressions. According to instruction, they either imitated (i.e. matched) the expressions or executed alternative, pre-prescribed mismatched actions as control. Outside the scanner we replicated the association between imitation accuracy and EQ. During fMRI, activity was greater during mismatch compared to imitation, particularly in the bilateral insula. Activity during imitation correlated with EQ in somatosensory cortex, intraparietal sulcus and premotor cortex. Imitation accuracy correlated with activity in insula and areas serving motor control. Overlapping voxels for the accuracy and EQ correlations occurred in premotor cortex. We suggest that both empathy and facial imitation rely on formation of action plans (or a simulation of others' intentions) in the premotor cortex, in connection with representations of emotional expressions based in the somatosensory cortex. In addition, the insula may play a key role in the social regulation of facial expression.
Affiliation(s)
- L Braadbaart: Aberdeen Biomedical Imaging Centre, University of Aberdeen, Lilian Sutton Building, Aberdeen AB25 2ZD, UK; SINAPSE Collaboration (www.sinapse.ac.uk), UK
34. Kumfor F, Irish M, Hodges JR, Piguet O. Discrete Neural Correlates for the Recognition of Negative Emotions: Insights from Frontotemporal Dementia. PLoS One 2013;8:e67457. [PMID: 23805313] [PMCID: PMC3689735] [DOI: 10.1371/journal.pone.0067457]
Abstract
Patients with frontotemporal dementia have pervasive changes in emotion recognition and social cognition, yet the neural changes underlying these emotion processing deficits remain unclear. The multimodal system model of emotion proposes that basic emotions are dependent on distinct brain regions, which undergo significant pathological changes in frontotemporal dementia. As such, this syndrome may provide important insight into the impact of neural network degeneration upon the innate ability to recognise emotions. This study used voxel-based morphometry to identify discrete neural correlates involved in the recognition of basic emotions (anger, disgust, fear, sadness, surprise and happiness) in frontotemporal dementia. Forty frontotemporal dementia patients (18 behavioural-variant, 11 semantic dementia, 11 progressive nonfluent aphasia) and 27 healthy controls were tested on two facial emotion recognition tasks: The Ekman 60 and Ekman Caricatures. Although each frontotemporal dementia group showed impaired recognition of negative emotions, distinct associations between emotion-specific task performance and changes in grey matter intensity emerged. Fear recognition was associated with the right amygdala; disgust recognition with the left insula; anger recognition with the left middle and superior temporal gyrus; and sadness recognition with the left subcallosal cingulate, indicating that discrete neural substrates are necessary for emotion recognition in frontotemporal dementia. The erosion of emotion-specific neural networks in neurodegenerative disorders may produce distinct profiles of performance that are relevant to understanding the neurobiological basis of emotion processing.
Affiliation(s)
- Fiona Kumfor: Neuroscience Research Australia, Sydney, Australia; School of Medical Sciences, the University of New South Wales, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, the University of New South Wales, Sydney, Australia
- Muireann Irish: Neuroscience Research Australia, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, the University of New South Wales, Sydney, Australia; School of Psychology, the University of New South Wales, Sydney, Australia
- John R. Hodges: Neuroscience Research Australia, Sydney, Australia; School of Medical Sciences, the University of New South Wales, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, the University of New South Wales, Sydney, Australia
- Olivier Piguet: Neuroscience Research Australia, Sydney, Australia; School of Medical Sciences, the University of New South Wales, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, the University of New South Wales, Sydney, Australia
35
Kumfor F, Piguet O. Emotion recognition in the dementias: brain correlates and patient implications. Neurodegener Dis Manag 2013. [DOI: 10.2217/nmt.13.16]
Abstract
Changes in behavior, personality and the ability to interact in social situations have been reported to varying extents across dementia syndromes. Deficits in the ability to recognize emotion in others probably contribute to these socioemotional changes. This article reviews the patterns of emotion recognition impairments and their underlying brain correlates in four dementia syndromes: Alzheimer's disease; frontotemporal dementia; Huntington's disease; and progressive supranuclear palsy. Although emotion recognition deficits have been observed in all of these patient groups, how these deficits translate into everyday behavior remains poorly understood. The adoption of ecologically valid tasks is likely to improve our understanding of these deficits in everyday settings, and will help to provide guidance on management strategies for patients and their carers.
Affiliation(s)
- Fiona Kumfor: Neuroscience Research Australia, PO Box 1165, Randwick, New South Wales, Australia; School of Medical Sciences, University of New South Wales, Sydney, Australia; Australian Research Council Centre of Excellence in Cognition & its Disorders, University of New South Wales, Sydney, Australia
- Olivier Piguet: Neuroscience Research Australia, PO Box 1165, Randwick, New South Wales, Australia; School of Medical Sciences, University of New South Wales, Sydney, Australia; Australian Research Council Centre of Excellence in Cognition & its Disorders, University of New South Wales, Sydney, Australia
36
Gutiérrez-García A, Calvo MG. Social anxiety and interpretation of ambiguous smiles. Anxiety Stress Coping 2013; 27:74-89. [DOI: 10.1080/10615806.2013.794941]
37
Dean AM, Goodby E, Ooi C, Nathan PJ, Lennox BR, Scoriels L, Shabbir S, Suckling J, Jones PB, Bullmore ET, Barnes A. Speed of facial affect intensity recognition as an endophenotype of first-episode psychosis and associated limbic-cortical grey matter systems. Psychol Med 2013; 43:591-602. [PMID: 22703698] [DOI: 10.1017/s0033291712001341]
Abstract
BACKGROUND Psychotic disorders are highly heritable such that the unaffected relatives of patients may manifest characteristics, or endophenotypes, that are more closely related to risk genes than the overt clinical condition. Facial affect processing is dependent on a distributed cortico-limbic network that is disrupted in psychosis. This study assessed facial affect processing and related brain structure as a candidate endophenotype of first-episode psychosis (FEP). METHOD Three samples comprising 30 FEP patients, 30 of their first-degree relatives and 31 unrelated healthy controls underwent assessment of facial affect processing and structural magnetic resonance imaging (sMRI). Multivariate analysis (partial least squares, PLS) was used to identify a grey matter (GM) system in which anatomical variation was associated with variation in facial affect processing speed. RESULTS The groups did not differ in their accuracy of facial affect intensity rating but differed significantly in speed of response, with controls responding faster than relatives, who responded faster than patients. Within the control group, variation in speed of affect processing was significantly associated with variation of GM density in amygdala, lateral temporal cortex, frontal cortex and cerebellum. However, this association between cortico-limbic GM density and speed of facial affect processing was absent in patients and their relatives. CONCLUSIONS Speed of facial affect processing presents as a candidate endophenotype of FEP. The normal association between speed of facial affect processing and cortico-limbic GM variation was disrupted in FEP patients and their relatives.
Affiliation(s)
- A M Dean: Brain Mapping Unit, Department of Psychiatry, University of Cambridge, UK
38
Pell PJ, Richards A. Overlapping facial expression representations are identity-dependent. Vision Res 2013; 79:1-7. [DOI: 10.1016/j.visres.2012.12.009]
39
Aviezer H, Trope Y, Todorov A. Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science 2012. [PMID: 23197536] [DOI: 10.1126/science.1224313]
Abstract
The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion, positive and negative situations were successfully discriminated from isolated bodies but not faces. Nevertheless, viewers perceived illusory positivity or negativity in the nondiagnostic faces when seen with bodies. To reveal the underlying mechanisms, we created compounds of intense negative faces combined with positive bodies, and vice versa. Perceived affect and mimicry of the faces shifted systematically as a function of their contextual body emotion. These findings challenge standard models of emotion expression and highlight the role of the body in expressing and perceiving emotions.
Affiliation(s)
- Hillel Aviezer: Department of Psychology, Princeton University, Princeton, NJ 08540, USA
40
Calvo MG, Fernández-Martín A, Nummenmaa L. Perceptual, categorical, and affective processing of ambiguous smiling facial expressions. Cognition 2012; 125:373-93. [DOI: 10.1016/j.cognition.2012.07.021]
41
Skinner AL, Benton CP. Anti-expression aftereffects reveal prototype-referenced coding of facial expressions. Psychol Sci 2010; 21:1248-53. [PMID: 20713632] [DOI: 10.1177/0956797610380702]
Abstract
Adaptation is a powerful experimental technique that has recently provided insights into how people encode representations of facial identity. Here, we used this approach to explore the visual representation of facial expressions of emotion. Participants were adapted to anti-expressions of six facial expressions. The participants were then shown an average face and asked to classify the face's expression using one of six basic emotion descriptors. Participants chose the emotion matching the anti-expression they were adapted to significantly more often than they chose any other emotion (e.g., if they were adapted to antifear, they classified the emotion on the average face as fear). The strength of this aftereffect of adaptation decreased as the strength of the anti-expression adapter decreased. These findings provide evidence that visual representations of facial expressions of emotion are coded with reference to a prototype within a multidimensional framework.
Affiliation(s)
- Andrew L Skinner: Department of Experimental Psychology, University of Bristol, Bristol, United Kingdom
42
Allen H, Brady N, Tredoux C. Perception of 'best likeness' to highly familiar faces of self and friend. Perception 2010; 38:1821-30. [PMID: 20192131] [DOI: 10.1068/p6424]
Abstract
We investigated the idea that our memory for familiar faces involves an accurate representation of their unique spatial configuration and, further, whether this configuration may be caricatured in memory. In separate experimental blocks, thirty-five Irish participants were presented with a series of photographic images of their own face and of the face of a close friend, and were asked to choose the image which looked most like themselves or their friend. Both sets of images included an original full-face colour photograph, and photographic distortions ranging from a highly caricatured (+100%) to a highly anti-caricatured (-100%) version of the original, generated with reference to newly created average male and female Irish faces. Contrary to suggestions that we hold a slightly caricatured version of a familiar face in memory, the mean 'best-likeness' image, calculated across both self and friend trials, was an anti-caricature of -13.88% which was significantly different from 0 (t69 = -5.34, p < 0.0001). The difference in the mean 'best-likeness' image chosen for self (-12.06%) and friend (-15.7%) was not significant (t34 = 0.715, p = 0.48). These results are discussed with reference to our ability to discriminate facial shape, together with the possibility that we idealise the attractiveness of faces of those close to us.
Affiliation(s)
- Hannah Allen: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
43
Calder AJ, Keane J, Young AW, Lawrence AD, Mason S, Barker RA. The relation between anger and different forms of disgust: implications for emotion recognition impairments in Huntington's disease. Neuropsychologia 2010; 48:2719-29. [PMID: 20580641] [DOI: 10.1016/j.neuropsychologia.2010.05.019]
Abstract
Initial reports of emotion recognition in Huntington's disease (HD) found disproportionate impairments in recognising disgust. Not all subsequent studies have found this pattern, and a review of the literature to date shows that marked impairments in recognising anger are also often seen in HD. However, the majority of studies have based their conclusions on a single test of facial expression recognition. In the current study we revisit this issue of emotion recognition in HD to address whether the pattern found on one test of facial expression recognition generalised to another, and to different modalities using tests of emotion recognition from facial expressions, vocal expressions, and short verbal vignettes. The results showed evidence of impairments in recognising anger, fear and disgust across the three domains, with recognition of anger the most severely impaired. Given work identifying different subtypes of disgust that are associated with different facial features, a second study examined the recognition of three disgust expressions that healthy participants reliably associate with unpleasant tastes, unpleasant smells, and a more general elaborated or expanded form of disgust that includes reactions to violations of moral standards. The results showed a disproportionate impairment in recognising faces associated with the expanded form, the subtype most closely aligned with anger. We conclude that the related emotions of disgust and anger associated with social disapproval are frequently impaired in HD and discuss factors that might cause one emotion to show more severe impairments than the other.
Affiliation(s)
- Andrew J Calder: MRC Cognition and Brain Sciences Unit, and Cambridge Centre for Brain Repair, Forvie Site, Addenbrooke's Hospital, 15 Chaucer Road, Cambridge CB2 7EF, UK
44
Montirosso R, Peverelli M, Frigerio E, Crespi M, Borgatti R. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds. Soc Dev 2010. [DOI: 10.1111/j.1467-9507.2008.00527.x]
45
The Processing of Emotion in Patients With Huntington's Disease: Variability and Differential Deficits in Disgust. Cogn Behav Neurol 2009; 22:249-57. [DOI: 10.1097/wnn.0b013e3181c124af]
46
Derntl B, Seidel EM, Kainz E, Carbon CC. Recognition of Emotional Expressions is Affected by Inversion and Presentation Time. Perception 2009; 38:1849-62. [DOI: 10.1068/p6448]
Abstract
It has been repeatedly shown that face inversion affects the recognition of emotional faces. However, previous results are heterogeneous concerning the affected emotions and the influence of presentation time is unclear. We examined the impact of limited presentation time (200 ms) on the face-inversion effect during recognition of basic emotions in 128 healthy young adults. Data analysis revealed differential inversion effects for emotional expressions, further modified by limitation of presentation time: when presentation was limited, we observed inversion effects for angry and neutral faces which were absent in the unlimited trials. In the unlimited condition, inversion particularly affected recognition of disgust and sadness. No general inversion effect occurred for neutral expressions. Error analysis highlighted specific confusions for the inverted condition, except for happy and neutral expressions. Hence, emotion recognition is affected by inversion—an indicator for configural processing, and presentation time—an indicator for cognitive effort of processing.
Affiliation(s)
- Eva-Maria Seidel: Department of Psychiatry and Psychotherapy, RWTH Aachen University, Aachen, Germany
- Claus-Christian Carbon: General Psychology and Methodology, Department of Psychology, University of Bamberg, Bamberg, Germany
47
An amygdala response to fearful faces with covered eyes. Neuropsychologia 2008; 46:2364-70. [DOI: 10.1016/j.neuropsychologia.2008.03.015]
48
Boakes J, Chapman E, Houghton S, West J. Facial affect interpretation in boys with attention deficit/hyperactivity disorder. Child Neuropsychol 2008; 14:82-96. [PMID: 18097801] [DOI: 10.1080/09297040701503327]
Abstract
Recent studies have produced mixed evidence of impairments in facial affect interpretation for children with attention deficit/hyperactivity disorder (ADHD). This study investigated the presence and nature of such impairments across different stimulus formats. Twenty-four boys with ADHD and 24 age-matched comparison boys completed a 72-trial task that included facial expressions of happiness, sadness, fear, anger, surprise, and disgust. Three versions of each expression were used: a static version, a dynamic version, and a dynamic version presented within a relevant situational context. Expressions were also presented in one of two portrayal modes (cartoon versus real-life). Results indicated significant impairments for boys with ADHD on two of the six emotions (fear and disgust), which were consistent across stimulus formats. Directions for further research to identify mediating factors in the expression of such impairments in children with ADHD are discussed.
Affiliation(s)
- Jolee Boakes: The University of Western Australia, 35 Stirling Highway, Crawley, Perth, WA 6009, Australia
49
Categorical perception of facial expressions: evidence for a "category adjustment" model. Mem Cognit 2008; 35:1814-29. [PMID: 18062556] [DOI: 10.3758/bf03193512]
Abstract
Four experiments probed the nature of categorical perception (CP) for facial expressions. A model based on naming alone failed to accurately predict performance on these tasks. The data are instead consistent with an extension of the category adjustment model (Huttenlocher et al., 2000), in which the generation of a verbal code (e.g., "happy") activated knowledge of the expression category's range and central tendency (prototype) in memory, which was retained as veridical perceptual memory faded. Further support for a memory bias toward the category center came from a consistently asymmetric pattern of within-category errors. Verbal interference in the retention interval selectively removed CP for facial expressions under blocked, but not under randomized, presentation conditions. However, verbal interference at encoding removed CP even under randomized conditions, and these effects were shown to extend even to caricatured expressions, which lie outside the normal range of expression categories.
50
Leppänen JM, Kauppinen P, Peltola MJ, Hietanen JK. Differential electrocortical responses to increasing intensities of fearful and happy emotional expressions. Brain Res 2007; 1166:103-9. [PMID: 17662698] [DOI: 10.1016/j.brainres.2007.06.060]
Abstract
Previous studies have shown differential event-related potentials (ERPs) to fearful and happy/neutral facial expressions. To investigate whether the brain systems underlying these ERP differences are sensitive to the intensity of fear and happiness, behavioral recognition accuracy and reaction times as well as ERPs were measured while observers categorized low-intensity (50%), prototypical (100%), and caricatured (150%) fearful and happy facial expressions. The speed and accuracy of emotion categorization improved with increasing levels of expression intensity, and 100% and 150% expressions were consistently classified as expressions of the intended emotions. Comparison of ERPs to 100% and 150% expressions revealed a differential pattern of ERPs to 100% and 150% fear expressions over occipital-temporal electrodes 190-290 ms post-stimulus (a negative shift in ERP activity for high-intensity fearful expressions). Similar ERP differences were not observed for 100% and 150% happy expressions, ruling out the possibility that the ERPs to high-intensity fear reflected a response to increased expression intensity per se. Together, these results suggest that differential electrocortical responses to fearful facial expressions over posterior electrodes are generated by a neural system that responds to the intensity of negative but not positive emotional expressions.
Affiliation(s)
- Jukka M Leppänen: Human Information Processing Laboratory, Department of Psychology, FIN-33014 University of Tampere, Tampere, Finland