1. Becker C, Conduit R, Chouinard PA, Laycock R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav Res Methods 2024; 56:7674-7690. PMID: 38834812; PMCID: PMC11362322; DOI: 10.3758/s13428-024-02443-y.
Abstract
Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study aimed to investigate how humans perceive emotions from faces using real videos and two different approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamic morphed expressions as less intense when compared with videos (all emotions) and deepfakes (fearful, happy, sad). Videos and deepfakes were perceived similarly. Additionally, participants perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than the other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings. They also suggest that deepfakes may offer a more suitable standardised stimulus type than morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model. ChatGPT successfully identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, our participants perceived dynamic morphs as less natural than videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.
2. Yamasaki D, Nagai M. Emotion-gaze interaction affects time-to-collision estimates, but not preferred interpersonal distance towards looming faces. Front Psychol 2024; 15:1414702. PMID: 39323584; PMCID: PMC11423545; DOI: 10.3389/fpsyg.2024.1414702.
Abstract
Estimating the time until impending collision (time-to-collision, TTC) of approaching or looming individuals and maintaining a comfortable distance from others (interpersonal distance, IPD) are commonly required in daily life and contribute to survival and social goals. Despite accumulating evidence that facial expressions and gaze direction interactively influence face processing, it remains unclear how these facial features affect the spatiotemporal processing of looming faces. We examined whether facial expressions (fearful vs. neutral) and gaze direction (direct vs. averted) interact on the judgments of TTC and IPD for looming faces, based on the shared signal hypothesis that fear signals the existence of threats in the environment when coupled with averted gaze. Experiment 1 demonstrated that TTC estimates were reduced for fearful faces compared to neutral ones only when the concomitant gaze was averted. In Experiment 2, the emotion-gaze interaction was not observed in the IPD regulation, which is arguably sensitive to affective responses to faces. The results suggest that fearful-averted faces modulate the cognitive extrapolation process of looming motion by communicating environmental threats rather than by altering subjective fear or perceived emotional intensity of faces. The TTC-specific effect may reflect an enhanced defensive response to unseen threats implied by looming fearful-averted faces. Our findings provide insight into how the visual system processes facial features to ensure bodily safety and comfortable interpersonal communication in dynamic environments.
Affiliation(s)
- Daiki Yamasaki
- Research Organization of Open, Innovation and Collaboration, Ritsumeikan University, Osaka, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Masayoshi Nagai
- College of Comprehensive Psychology, Ritsumeikan University, Osaka, Japan
3. Varkevisser T, Geuze E, van Honk J. Amygdala fMRI: A Critical Appraisal of the Extant Literature. Neurosci Insights 2024; 19:26331055241270591. PMID: 39148643; PMCID: PMC11325331; DOI: 10.1177/26331055241270591.
Abstract
Even before the advent of fMRI, the amygdala occupied a central space in the affective neurosciences. Yet this amygdala-centred view on emotion processing gained even wider acceptance after the inception of fMRI in the early 1990s, a landmark that triggered a gold rush of fMRI studies targeting the amygdala in vivo. Initially, this amygdala fMRI research was mostly confined to task-activation studies measuring the magnitude of the amygdala's response to emotional stimuli. Later, interest began to shift more towards the study of the amygdala's resting-state functional connectivity and task-based psychophysiological interactions. Later still, the test-retest reliability of amygdala fMRI came under closer scrutiny, while at the same time, amygdala-based real-time fMRI neurofeedback gained widespread popularity. Each of these major subdomains of amygdala fMRI research has left its marks on the field of affective neuroscience at large. The purpose of this review is to provide a critical assessment of this literature. By integrating the insights garnered by these research branches, we aim to answer the question: What part (if any) can amygdala fMRI still play within the current landscape of affective neuroscience? Our findings show that serious questions can be raised with regard to both the reliability and validity of amygdala fMRI. These conclusions force us to cast doubt on the continued viability of amygdala fMRI as a core pillar of the affective neurosciences.
Affiliation(s)
- Tim Varkevisser
- University Medical Center, Utrecht, The Netherlands
- Brain Research and Innovation Center, Ministry of Defence, Utrecht, The Netherlands
- Utrecht University, Utrecht, The Netherlands
- Elbert Geuze
- University Medical Center, Utrecht, The Netherlands
- Brain Research and Innovation Center, Ministry of Defence, Utrecht, The Netherlands
- Jack van Honk
- Utrecht University, Utrecht, The Netherlands
- University of Cape Town, Cape Town, South Africa
4. Harris LT. The Neuroscience of Human and Artificial Intelligence Presence. Annu Rev Psychol 2024; 75:433-466. PMID: 37906951; DOI: 10.1146/annurev-psych-013123-123421.
Abstract
Two decades of social neuroscience and neuroeconomics research illustrate the brain mechanisms that are engaged when people consider human beings, often in comparison to considering artificial intelligence (AI) as a nonhuman control. AI as an experimental control preserves agency and facilitates social interactions but lacks a human presence, providing insight into brain mechanisms that are engaged by human presence and the presence of AI. Here, I review this literature to determine how the brain instantiates human and AI presence across social perception and decision-making paradigms commonly used to realize a social context. People behave toward humans differently than they do toward AI. Moreover, brain regions more engaged by humans compared to AI extend beyond the social cognition brain network to all parts of the brain, and the brain sometimes is engaged more by AI than by humans. Finally, I discuss gaps in the literature, limitations in current neuroscience approaches, and how an understanding of the brain correlates of human and AI presence can inform social science in the wild.
Affiliation(s)
- Lasana T Harris
- Department of Experimental Psychology, University College London, London, United Kingdom
- Alan Turing Institute, London, United Kingdom
5. Treal T, Jackson PL, Meugnot A. Biological postural oscillations during facial expression of pain in virtual characters modulate early and late ERP components associated with empathy: A pilot study. Heliyon 2023; 9:e18161. PMID: 37560681; PMCID: PMC10407205; DOI: 10.1016/j.heliyon.2023.e18161.
Abstract
There is a surge in the use of virtual characters in the cognitive sciences. However, their behavioural realism remains to be perfected in order to trigger more spontaneous and socially expected reactions in users. It was recently shown that biological postural oscillations (idle motion) are a key ingredient in enhancing the empathic response to a virtual character's facial expression of pain. The objective of this study was to examine, using electroencephalography, whether idle motion would modulate the neural response associated with empathy when viewing a pain-expressing virtual character. Twenty healthy young adults were shown video clips of a virtual character displaying a facial expression of pain while its body was either static (Still condition) or animated with pre-recorded human postural oscillations (Idle condition). Participants rated the virtual human's facial expression of pain as significantly more intense in the Idle condition than in the Still condition. Both the early (N2-N3) and the late (rLPP) event-related potentials (ERPs) associated with distinct dimensions of empathy, affective resonance and perspective-taking, respectively, were greater in the Idle condition than in the Still condition. These findings confirm the potential of idle motion to increase empathy for pain expressed by virtual characters. They are discussed in line with contemporary empathy models in relation to human-machine interactions.
Affiliation(s)
- Thomas Treal
- Université Paris-Saclay CIAMS, 91405, Orsay, France
- CIAMS, Université d'Orléans, 45067, Orléans, France
- Philip L. Jackson
- École de Psychologie, Université Laval, Québec, Canada
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, Canada
- CERVO Research Center, Québec, Canada
- Aurore Meugnot
- Université Paris-Saclay CIAMS, 91405, Orsay, France
- CIAMS, Université d'Orléans, 45067, Orléans, France
6. Van der Biest M, Cracco E, Riva P, Valentini E. Should I trust you? Investigating trustworthiness judgements of painful facial expressions. Acta Psychol (Amst) 2023; 235:103893. PMID: 36966639; DOI: 10.1016/j.actpsy.2023.103893.
Abstract
Past research indicates that patients' reports of pain are often met with skepticism and that observers tend to underestimate patients' pain. The mechanisms behind these biases are not yet fully understood. One relevant domain of inquiry is the interaction between the emotional valence of a stranger's expression and the onlooker's trustworthiness judgment. The emotion overgeneralization hypothesis posits that when facial cues of valence are clear, individuals displaying negative expressions (e.g., disgust) are perceived as less trustworthy than those showing positive facial expressions (e.g., happiness). Accordingly, we hypothesized that facial expressions of pain (like disgust) would be judged more untrustworthy than facial expressions of happiness. In two separate studies, we measured trustworthiness judgments of four different facial expressions (i.e., neutral, happiness, pain, and disgust), displayed by both computer-generated and real faces, via both explicit self-reported ratings (Study 1) and implicit motor trajectories in a trustworthiness categorization task (Study 2). Ratings and categorization findings partly support our hypotheses. Our results reveal for the first time that when judging strangers' facial expressions, both negative expressions were perceived as more untrustworthy than happy expressions. They also indicate that facial expressions of pain are perceived as untrustworthy as disgust expressions, at least for computer-generated faces. These findings are relevant to the clinical setting because they highlight how overgeneralization of emotional facial expressions may subtend an early perceptual bias exerted by the patient's emotional facial cues onto the clinician's cognitive appraisal process.
7. Vaitonytė J, Alimardani M, Louwerse MM. Corneal reflections and skin contrast yield better memory of human and virtual faces. Cogn Res Princ Implic 2022; 7:94. PMID: 36258062; PMCID: PMC9579222; DOI: 10.1186/s41235-022-00445-y.
Abstract
Virtual faces have been found to be rated less human-like and remembered worse than photographic images of humans. What it is in virtual faces that yields reduced memory has so far remained unclear. The current study investigated face memory in the context of virtual agent faces and human faces, real and manipulated, considering two factors of predicted influence, i.e., corneal reflections and skin contrast. Corneal reflections referred to the bright points in each eye that occur when the ambient light reflects from the surface of the cornea. Skin contrast referred to the degree to which skin surface is rough versus smooth. We conducted two memory experiments, one with high-quality virtual agent faces (Experiment 1) and the other with the photographs of human faces that were manipulated (Experiment 2). Experiment 1 showed better memory for virtual faces with increased corneal reflections and skin contrast (rougher rather than smoother skin). Experiment 2 replicated these findings, showing that removing the corneal reflections and smoothening the skin reduced memory recognition of manipulated faces, with a stronger effect exerted by the eyes than the skin. This study highlights specific features of the eyes and skin that can help explain memory discrepancies between real and virtual faces and in turn elucidates the factors that play a role in the cognitive processing of faces.
Affiliation(s)
- Julija Vaitonytė
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
- Maryam Alimardani
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
- Max M. Louwerse
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
8. Dawel A, Miller EJ, Horsburgh A, Ford P. A systematic survey of face stimuli used in psychological research 2000-2020. Behav Res Methods 2022; 54:1889-1901. PMID: 34731426; DOI: 10.3758/s13428-021-01705-3.
Abstract
For decades, psychology has relied on highly standardized images to understand how people respond to faces. Many of these stimuli are rigorously generated and supported by excellent normative data; as such, they have played an important role in the development of face science. However, there is now clear evidence that testing with ambient images (i.e., naturalistic images "in the wild") and including expressions that are spontaneous can lead to new and important insights. To precisely quantify the extent to which our current knowledge base has relied on standardized and posed stimuli, we systematically surveyed the face stimuli used in 12 key journals in this field across 2000-2020 (N = 3374 articles). Although a small number of posed expression databases continue to dominate the literature, the use of spontaneous expressions seems to be increasing. However, there has been no increase in the use of ambient or dynamic stimuli over time. The vast majority of articles have used highly standardized and nonmoving pictures of faces. An emerging trend is that virtual faces are being used as stand-ins for human faces in research. Overall, the results of the present survey highlight that there has been a significant imbalance in favor of standardized face stimuli. We argue that psychology would benefit from a more balanced approach because ambient and spontaneous stimuli have much to offer. We advocate a cognitive ethological approach that involves studying face processing in natural settings as well as the lab, incorporating more stimuli from "the wild".
Affiliation(s)
- Amy Dawel
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Elizabeth J Miller
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Annabel Horsburgh
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Patrice Ford
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
9. Sarauskyte L, Monciunskaite R, Griksiene R. The role of sex and emotion on emotion perception in artificial faces: An ERP study. Brain Cogn 2022; 159:105860. PMID: 35339916; DOI: 10.1016/j.bandc.2022.105860.
Abstract
Sex has a significant impact on the perception of emotional expressions. However, it remains unclear whether sex influences the perception of emotions in artificial faces, which are becoming popular in emotion research. We used an emotion recognition task with FaceGen faces portraying six basic emotions to investigate the effect of sex and emotion on behavioural and electrophysiological parameters. Seventy-one participants performed the task while EEG was recorded. Recognition of sadness was the poorest; however, females recognized sadness better than males. ERP results indicated that fear, disgust, and anger evoked higher amplitudes of the late positive potential over the left parietal region compared to neutral expressions. Females demonstrated higher values of global field power than males. The interaction between sex and emotion on ERPs was not significant. The results of our study may be valuable for future therapies and research, as they point to possibly distinct processing of emotions and potential sex differences in the recognition of emotional expressions in FaceGen faces.
Affiliation(s)
- Livija Sarauskyte
- Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
- Rasa Monciunskaite
- Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
- Ramune Griksiene
- Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
10. Battaglia S, Fabius JH, Moravkova K, Fracasso A, Borgomaneri S. The Neurobiological Correlates of Gaze Perception in Healthy Individuals and Neurologic Patients. Biomedicines 2022; 10:627. PMID: 35327431; PMCID: PMC8945205; DOI: 10.3390/biomedicines10030627.
Abstract
The ability to adaptively follow conspecific eye movements is crucial for establishing shared attention and survival. Indeed, in humans, the gaze direction of others causes reflexive orienting of attention and faster detection of objects at the signaled spatial location. The behavioral evidence of this phenomenon is called gaze-cueing. Although this effect can be conceived of as automatic and reflexive, gaze-cueing is often susceptible to context. In fact, gaze-cueing has been shown to interact with other factors that characterize the facial stimulus, such as the kind of cue that induces attention orienting (i.e., gaze or non-symbolic cues) or the emotional expression conveyed by the gaze cues. Here, we review neuroimaging evidence investigating the neural bases of gaze-cueing and the perception of gaze direction, and how contextual factors interact with the gaze shift of attention. Evidence from neuroimaging, as well as from non-invasive brain stimulation and neurologic patients, highlights the involvement of the amygdala and the superior temporal lobe (especially the superior temporal sulcus (STS)) in gaze perception. However, in this review we also emphasize discrepancies among attempts to characterize the distinct functional roles of these regions in the processing of gaze. Finally, we conclude by presenting the notion of invariant representation and underline its value as a conceptual framework for the future characterization of the perceptual processing of gaze within the STS.
Affiliation(s)
- Simone Battaglia
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Jasper H. Fabius
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Katarina Moravkova
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Alessio Fracasso
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Sara Borgomaneri
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
11. Domínguez-Borràs J, Vuilleumier P. Amygdala function in emotion, cognition, and behavior. Handb Clin Neurol 2022; 187:359-380. PMID: 35964983; DOI: 10.1016/b978-0-12-823493-8.00015-8.
Abstract
The amygdala is a core structure in the anterior medial temporal lobe, with an important role in several brain functions involving memory, emotion, perception, social cognition, and even awareness. As a key brain structure for saliency detection, it triggers and controls widespread modulatory signals onto multiple areas of the brain, with a great impact on numerous aspects of adaptive behavior. Here we discuss the neural mechanisms underlying these functions, as established by animal and human research, including insights provided in both healthy and pathological conditions.
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Psychology and Psychobiology & Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Patrik Vuilleumier
- Department of Neuroscience and Center for Affective Sciences, University of Geneva, Geneva, Switzerland
12. Kroczek LOH, Lingnau A, Schwind V, Wolff C, Mühlberger A. Angry facial expressions bias towards aversive actions. PLoS One 2021; 16:e0256912. PMID: 34469494; PMCID: PMC8409676; DOI: 10.1371/journal.pone.0256912.
Abstract
Social interaction requires fast and efficient processing of another person's intentions. In face-to-face interactions, aversive or appetitive actions typically co-occur with emotional expressions, allowing an observer to anticipate action intentions. In the present study, we investigated the influence of facial emotions on the processing of action intentions. Thirty-two participants were presented with video clips showing virtual agents displaying a facial emotion (angry vs. happy) while performing an action (punch vs. fist-bump) directed towards the observer. During each trial, video clips stopped at varying durations of the unfolding action, and participants had to recognize the presented action. Naturally, participants' recognition accuracy improved with increasing duration of the unfolding actions. Interestingly, while facial emotions did not influence accuracy, there was a significant influence on participants' action judgements. Participants were more likely to judge a presented action as a punch when agents showed an angry compared to a happy facial emotion. This effect was more pronounced in short video clips, showing only the beginning of an unfolding action, than in long video clips, showing near-complete actions. These results suggest that facial emotions influence anticipatory processing of action intentions allowing for fast and adaptive responses in social interactions.
Affiliation(s)
- Leon O. H. Kroczek
- Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
- Angelika Lingnau
- Department of Psychology, Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Valentin Schwind
- Human Computer Interaction, University of Applied Sciences Frankfurt am Main, Frankfurt am Main, Germany
- Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Christian Wolff
- Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Andreas Mühlberger
- Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
13. Kegel LC, Frühholz S, Grunwald T, Mersch D, Rey A, Jokeit H. Temporal lobe epilepsy alters neural responses to human and avatar facial expressions in the face perception network. Brain Behav 2021; 11:e02140. PMID: 33951323; PMCID: PMC8213650; DOI: 10.1002/brb3.2140.
Abstract
Background and objective: Although avatars are widely used in advertising, entertainment, and business today, no study has investigated whether brain lesions in neurological patients interfere with brain activation in response to dynamic avatar facial expressions. The aim of our event-related fMRI study was to compare brain activation differences between people with epilepsy and controls during the processing of fearful and neutral dynamic expressions displayed by human or avatar faces.
Methods: Using functional magnetic resonance imaging (fMRI), we examined brain responses to dynamic facial expressions of trained actors and their avatar look-alikes in 16 people with temporal lobe epilepsy (TLE) and 26 controls. The actors' fearful and neutral expressions were recorded on video and conveyed onto their avatar look-alikes by face tracking.
Results: Our fMRI results show that people with TLE exhibited reduced response differences between fearful and neutral expressions displayed by humans in the right amygdala and the left superior temporal sulcus (STS). Further, TLE was associated with reduced response differences between human and avatar fearful expressions in the dorsal pathway of the face perception network (STS and inferior frontal gyrus) as well as in the medial prefrontal cortex.
Conclusions: Taken together, these findings suggest that brain responses to dynamic facial expressions are altered in people with TLE compared to neurologically healthy individuals, regardless of whether the face is human or computer-generated. In TLE, areas sensitive to dynamic facial features and associated with processes relating to the self and others are particularly affected when processing dynamic human and avatar expressions. Our findings highlight that the impact of TLE on facial emotion processing extends to artificial faces and should be considered when applying dynamic avatars in the context of neurological conditions.
Affiliation(s)
- Lorena Chantal Kegel
- Swiss Epilepsy Center, Zurich, Switzerland
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Dieter Mersch
- Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Anton Rey
- Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Hennric Jokeit
- Swiss Epilepsy Center, Zurich, Switzerland
- Department of Psychology, University of Zurich, Zurich, Switzerland
14. Kegel LC, Brugger P, Frühholz S, Grunwald T, Hilfiker P, Kohnen O, Loertscher ML, Mersch D, Rey A, Sollfrank T, Steiger BK, Sternagel J, Weber M, Jokeit H. Dynamic human and avatar facial expressions elicit differential brain responses. Soc Cogn Affect Neurosci 2021; 15:303-317. PMID: 32232359; PMCID: PMC7235958; DOI: 10.1093/scan/nsaa039.
Abstract
Computer-generated characters, so-called avatars, are widely used in advertising, entertainment, human–computer interaction or as research tools to investigate human emotion perception. However, brain responses to avatar and human faces have scarcely been studied to date. As such, it remains unclear whether dynamic facial expressions of avatars evoke different brain responses than dynamic facial expressions of humans. In this study, we designed anthropomorphic avatars animated with motion tracking and tested whether the human brain processes fearful and neutral expressions in human and avatar faces differently. Our fMRI results showed that fearful human expressions evoked stronger responses than fearful avatar expressions in the ventral anterior and posterior cingulate gyrus, the anterior insula, the anterior and posterior superior temporal sulcus, and the inferior frontal gyrus. Fearful expressions in human and avatar faces evoked similar responses in the amygdala. We did not find different responses to neutral human and avatar expressions. Our results highlight differences, but also similarities in the processing of fearful human expressions and fearful avatar expressions even if they are designed to be highly anthropomorphic and animated with motion tracking. This has important consequences for research using dynamic avatars, especially when processes are investigated that involve cortical and subcortical regions.
Affiliation(s)
- Lorena C Kegel
- Swiss Epilepsy Center, CH-8008 Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Peter Brugger
- Neuropsychology Unit, Valens Rehabilitation Centre, Valens, Switzerland; Department of Psychiatry, Psychotherapy, and Psychosomatics, University Hospital of Psychiatry Zurich, Zurich, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Oona Kohnen
- Swiss Epilepsy Center, CH-8008 Zurich, Switzerland
- Miriam L Loertscher
- Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland; Department of Psychology, University of Bern, Bern, Switzerland
- Dieter Mersch
- Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Anton Rey
- Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Joerg Sternagel
- Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Michel Weber
- Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Hennric Jokeit
- Swiss Epilepsy Center, CH-8008 Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
15
Burra N, Kerzel D. Meeting another's gaze shortens subjective time by capturing attention. Cognition 2021; 212:104734. [PMID: 33887652 DOI: 10.1016/j.cognition.2021.104734]
Abstract
Gaze directed at the observer (direct gaze) is an important and highly salient social signal with multiple effects on cognitive processes and behavior. It is disputed whether the effect of direct gaze is caused by attentional capture or increased arousal. Time estimation may provide an answer because attentional capture predicts an underestimation of time whereas arousal predicts an overestimation. In a temporal bisection task, observers were required to classify the duration of a stimulus as short or long. Stimulus duration was selected randomly between 988 and 1479 ms. When gaze was directed at the observer, participants underestimated stimulus duration, suggesting that effects of direct gaze are caused by attentional capture, not increased arousal. Critically, this effect was limited to dynamic stimuli where gaze appeared to move toward the participant. The underestimation was present with stimuli showing a full face, but also with stimuli showing only the eye region, inverted faces and high-contrast eye-like stimuli. However, it was absent with static pictures of full faces and dynamic nonfigurative stimuli. Because the effect of direct gaze depended on motion, which is common in naturalistic scenes, more consideration needs to be given to the ecological validity of stimuli in the study of social attention.
Affiliation(s)
- Nicolas Burra
- Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
16
Canales-Johnson A, Lanfranco RC, Morales JP, Martínez-Pernía D, Valdés J, Ezquerro-Nassar A, Rivera-Rei Á, Ibanez A, Chennu S, Bekinschtein TA, Huepe D, Noreika V. In your phase: neural phase synchronisation underlies visual imagery of faces. Sci Rep 2021; 11:2401. [PMID: 33504828 PMCID: PMC7840739 DOI: 10.1038/s41598-021-81336-y]
Abstract
Mental imagery is the process through which we retrieve and recombine information from our memory to elicit the subjective impression of “seeing with the mind’s eye”. In the social domain, we imagine other individuals while recalling our encounters with them or modelling alternative social interactions in future. Many studies using imaging and neurophysiological techniques have shown several similarities in brain activity between visual imagery and visual perception, and have identified frontoparietal, occipital and temporal neural components of visual imagery. However, the neural connectivity between these regions during visual imagery of socially relevant stimuli has not been studied. Here we used electroencephalography to investigate neural connectivity and its dynamics between frontal, parietal, occipital and temporal electrodes during visual imagery of faces. We found that voluntary visual imagery of faces is associated with long-range phase synchronisation in the gamma frequency range between frontoparietal electrode pairs and between occipitoparietal electrode pairs. In contrast, no effect of imagery was observed in the connectivity between occipitotemporal electrode pairs. Gamma range synchronisation between occipitoparietal electrode pairs predicted subjective ratings of the contour definition of imagined faces. Furthermore, we found that visual imagery of faces is associated with an increase of short-range frontal synchronisation in the theta frequency range, which temporally preceded the long-range increase in the gamma synchronisation. We speculate that the local frontal synchrony in the theta frequency range might be associated with an effortful top-down mnemonic reactivation of faces. In contrast, the long-range connectivity in the gamma frequency range along the fronto-parieto-occipital axis might be related to the endogenous binding and subjective clarity of facial visual features.
Affiliation(s)
- Andrés Canales-Johnson
- Department of Psychology, University of Cambridge, Downing Site, Cambridge, CB2 3EB, UK; Vicerrectoría de Investigación y Posgrado, Universidad Católica del Maule, Talca, Chile
- Renzo C Lanfranco
- Department of Psychology, University of Edinburgh, Edinburgh, UK; Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Juan Pablo Morales
- Facultad de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
- Joaquín Valdés
- Escuela de Psicología, Universidad Adolfo Ibáñez, Santiago, Chile
- Agustín Ibanez
- Escuela de Psicología, Universidad Adolfo Ibáñez, Santiago, Chile; Center for Social and Cognitive Neuroscience (CSCN), Latin American Institute of Brain Health (BrainLat), Universidad Adolfo Ibanez, Santiago, Chile; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Cognitive Neuroscience Center (CNC), Universidad de San Andrés, Buenos Aires, Argentina; Global Brain Health Institute (GBHI), University of California San Francisco (UCSF), San Francisco, USA
- Srivas Chennu
- School of Computing, University of Kent, Chatham Maritime, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- David Huepe
- Escuela de Psicología, Universidad Adolfo Ibáñez, Santiago, Chile
- Valdas Noreika
- Department of Psychology, University of Cambridge, Downing Site, Cambridge, CB2 3EB, UK; Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
17
Vaitonytė J, Blomsma PA, Alimardani M, Louwerse MM. Realism of the face lies in skin and eyes: Evidence from virtual and human agents. Computers in Human Behavior Reports 2021. [DOI: 10.1016/j.chbr.2021.100065]
18
Emotional appraisal processing of computer-generated facial expressions: a functional near-infrared spectroscopy study. Neuroreport 2020; 31:437-441. [PMID: 32168120 DOI: 10.1097/wnr.0000000000001420]
Abstract
OBJECTIVE The current study aimed to investigate whether computer-generated (CG) expressions of emotion evoke emotional appraisal processing in the lateral orbitofrontal cortex (lOFC) similar to that evoked by real human expressions, and how speech cues influence this processing. METHODS Functional near-infrared spectroscopy was used to measure neural activation in the prefrontal cortex during an emotion recognition task. Thirty normal participants viewed videos of dynamic facial expressions and selected the emotions that best matched the expressions. RESULTS CG expressions evoked less activation in the lOFC compared to real human expressions. Furthermore, speech cues increased activation in the lOFC for CG expressions but not for real expressions. CONCLUSION Compared to real expressions, CG expressions evoked less appraisal processing related to motivational values, although this disadvantage can be compensated for to some extent by presenting the expressions with speech cues.
19
Matyjek M, Meliss S, Dziobek I, Murayama K. A Multidimensional View on Social and Non-Social Rewards. Front Psychiatry 2020; 11:818. [PMID: 32973574 PMCID: PMC7466643 DOI: 10.3389/fpsyt.2020.00818]
Abstract
Social rewards are a broad and heterogeneous set of stimuli including for instance smiling faces, gestures, or praise. They have been widely investigated in cognitive and social neuroscience as well as psychology. Research often contrasts the neural processing of social rewards with non-social ones, with the aim to demonstrate the privileged and unique nature of social rewards or to examine shared neural processing underlying them. However, such comparisons mostly neglect other important dimensions of rewards that are conflated in those types of rewards: primacy, temporal proximity, duration, familiarity, source, tangibility, naturalness, and magnitude. We identify how commonly used rewards in both social and non-social domains may differ in respect to these dimensions and how their interaction calls for careful consideration of alternative interpretations of observed effects. Additionally, we propose potential solutions on how to adapt the multidimensional view to experimental research. Altogether, these methodological considerations aim to inform and improve future experimental designs in research utilizing rewarding stimuli, especially in the social domain.
Affiliation(s)
- Magdalena Matyjek
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Stefanie Meliss
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Isabel Dziobek
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Kou Murayama
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom; Research Institute, Kochi University of Technology, Kochi, Japan