1
Alp N, Lale G, Saglam C, Sayim B. The effect of processing partial information in dynamic face perception. Sci Rep 2024; 14:9794. PMID: 38684721; PMCID: PMC11059172; DOI: 10.1038/s41598-024-58605-7.
Abstract
Face perception is a major topic in vision research. Most previous research has concentrated on (holistic) spatial representations of faces, often with static faces as stimuli. However, faces are highly dynamic stimuli containing important temporal information. How sensitive humans are to temporal information in dynamic faces is not well understood. Studies investigating temporal information in dynamic faces usually focus on the processing of emotional expressions. However, faces also contain relevant temporal information without any strong emotional expression. To investigate cues that modulate human sensitivity to temporal order, we utilized muted dynamic neutral face videos in two experiments. We varied the orientation of the faces (upright and inverted) and the presence/absence of eye blinks as partial dynamic cues. Participants viewed short, muted, monochrome videos of models vocalizing a widely known text (the national anthem). Videos were played either forward (in the correct temporal order) or backward. Participants were asked to determine the direction of the temporal order for each video, and (at the end of the experiment) whether they had understood the speech. We found that face orientation and the presence/absence of an eye blink affected sensitivity, criterion (bias), and reaction time: Overall, sensitivity was higher for upright compared to inverted faces, and in the condition where an eye blink was present compared to the condition without an eye blink. Reaction times were mostly faster in the conditions with higher sensitivity. A bias to report inverted faces as 'backward', observed in Experiment I, where upright and inverted faces were presented randomly interleaved within each block, was absent when presenting upright and inverted faces in different blocks in Experiment II. In both experiments, sensitivity was higher for participants who understood the speech than for those who did not.
Taken together, our results showed higher sensitivity with upright compared to inverted faces, suggesting that the perception of dynamic, task-relevant information was superior with the canonical orientation of the faces. Furthermore, partial information from eye blinks, in addition to mouth movements, seemed to play a significant role in dynamic face perception, both when faces were presented upright and inverted. We suggest that studying the perception of facial dynamics beyond emotional expressions will help us better understand the mechanisms underlying the temporal integration of facial information from different (partial and holistic) sources, and that our results show how human observers employ different strategies, depending on the available information, when judging the temporal order of faces.
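The sensitivity and criterion (bias) measures this abstract reports come from signal detection theory. As a minimal illustrative sketch (not the authors' analysis code; counts and response mapping are hypothetical), d' and c can be computed from hit and false-alarm counts, treating "forward" responses to forward videos as hits and "forward" responses to backward videos as false alarms:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and c (criterion) from raw response counts.

    A log-linear correction (add 0.5 to each cell) keeps the z-transform
    finite when a hit or false-alarm rate would otherwise be 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical observer: accurate, but conservative about saying "forward"
d, c = sdt_measures(hits=70, misses=30, false_alarms=10, correct_rejections=90)
```

Under this (assumed) mapping, a positive c corresponds to a bias toward the "backward" response, which is the kind of bias the abstract describes for inverted faces in Experiment I.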
Affiliation(s)
- Nihan Alp
- Psychology, Sabanci University, Istanbul, Türkiye.
- Gülce Lale
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Ceren Saglam
- Department of General Psychology, University of Padua, Padova, Italy
- Bilge Sayim
- Univ. Lille, CNRS, UMR 9193, SCALab - Sciences Cognitives et Sciences Affectives, F-59000, Lille, France
2
Şentürk YD, Tavacioglu EE, Duymaz İ, Sayim B, Alp N. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res Methods 2023; 55:3078-3099. PMID: 36018484; DOI: 10.3758/s13428-022-01951-z.
Abstract
Faces convey a wide range of information, including one's identity, and emotional and mental states. Face perception is a major research topic in many research fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one free speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
Affiliation(s)
- İlker Duymaz
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey
- Bilge Sayim
- SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, CNRS, Lille, France
- Institute of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland
- Nihan Alp
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey
3
Alp N, Ozkan H. Neural correlates of integration processes during dynamic face perception. Sci Rep 2022; 12:118. PMID: 34996892; PMCID: PMC8742062; DOI: 10.1038/s41598-021-02808-9.
Abstract
Integrating the spatiotemporal information acquired from the highly dynamic world around us is essential to navigate, reason, and decide properly. Although this is particularly important in a face-to-face conversation, very little research to date has specifically examined the neural correlates of temporal integration in dynamic face perception. Here we present statistically robust observations regarding the brain activations measured via electroencephalography (EEG) that are specific to temporal integration. To that end, we generate videos of neutral faces of individuals and non-face objects, modulate the contrast of the even and odd frames at two specific frequencies ($f_1$ and $f_2$) in an interlaced manner, and measure the steady-state visual evoked potential as participants view the videos. Then, we analyze the intermodulation components (IMs: $nf_1 \pm mf_2$, linear combinations of the fundamentals with integer multipliers), which reflect nonlinear processing and, by design, indicate temporal integration. We show that electrodes around the medial temporal, inferior, and medial frontal areas respond strongly and selectively when viewing dynamic faces, which manifests the essential processes underlying our ability to perceive and understand our social world. The generation of IMs is only possible if even and odd frames are processed in succession and integrated temporally; therefore, the strong IMs in our frequency spectrum analysis show that the time between frames (1/60 s) is sufficient for temporal integration.
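The intermodulation components $nf_1 \pm mf_2$ described in this abstract can be enumerated directly. A small sketch (the tagging frequencies below are illustrative placeholders, not the paper's actual values) of which spectral peaks a frequency-tagging analysis would look for:

```python
def intermodulation_freqs(f1, f2, max_order=3, limit=30.0):
    """Enumerate intermodulation frequencies n*f1 +/- m*f2 (n, m >= 1).

    IM peaks appear in the EEG spectrum only if the two frequency-tagged
    frame streams are combined nonlinearly, i.e. integrated over time,
    which is what makes them a marker of temporal integration.
    """
    ims = set()
    for n in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            for f in (n * f1 + m * f2, abs(n * f1 - m * f2)):
                if 0 < f <= limit:
                    ims.add(round(f, 6))
    return sorted(ims)

# Hypothetical tagging frequencies for the even/odd frame streams
peaks = intermodulation_freqs(7.5, 12.0, max_order=2)
```

With these assumed values, the lowest-order IMs are the difference (4.5 Hz) and sum (19.5 Hz) of the two fundamentals.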
Affiliation(s)
- Nihan Alp
- Psychology, Sabanci University, Istanbul, Turkey.
- Huseyin Ozkan
- Electronics Engineering, Sabanci University, Istanbul, Turkey
4
The neural coding of face and body orientation in occipitotemporal cortex. Neuroimage 2021; 246:118783. PMID: 34879251; DOI: 10.1016/j.neuroimage.2021.118783.
Abstract
Face and body orientation convey important information for us to understand other people's actions, intentions and social interactions. It has been shown that several occipitotemporal areas respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants' brain activity using fMRI while they viewed faces and bodies shown from three different orientations, while attending to either orientation or identity information. Using multivoxel pattern analysis we investigated which brain regions process face and body orientation respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to the orientation of faces and bodies or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
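The cross-stimulus generalization analysis described here (train a classifier on orientation with one stimulus class, test it on the other) can be sketched on synthetic data. Everything below is illustrative: the data, feature counts, and classifier are stand-ins for real voxel patterns and the authors' MVPA pipeline, with scikit-learn assumed as the toolkit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_decoding(X_train, y_train, X_test, y_test):
    """Train an orientation classifier on one stimulus class (e.g. faces)
    and score it on the other (e.g. bodies). Above-chance transfer implies
    a stimulus-independent orientation code."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Synthetic 'voxel patterns': an orientation signal shared across categories,
# embedded in independent noise for faces and bodies.
rng = np.random.default_rng(1)
orientations = rng.integers(0, 3, size=120)        # three viewing orientations
signal = np.eye(3)[orientations]                   # shared orientation code
X_faces = np.hstack([signal, rng.normal(size=(120, 47))])
X_bodies = np.hstack([signal, rng.normal(size=(120, 47))])
acc = cross_decoding(X_faces, orientations, X_bodies, orientations)
```

Because the synthetic orientation signal is identical across the two categories, transfer accuracy lands well above the one-in-three chance level, which is the signature the study reports for occipitotemporal regions.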
5
Aktürk T, de Graaf TA, Abra Y, Şahoğlu-Göktaş S, Özkan D, Kula A, Güntekin B. Event-related EEG oscillatory responses elicited by dynamic facial expression. Biomed Eng Online 2021; 20:41. PMID: 33906649; PMCID: PMC8077950; DOI: 10.1186/s12938-021-00882-8.
Abstract
Background: Recognition of facial expressions (FEs) plays a crucial role in social interactions. Most studies on FE recognition use static (image) stimuli, even though real-life FEs are dynamic. FE processing is complex and multifaceted, and its neural correlates remain unclear. Transitioning from static to dynamic FE stimuli might help disentangle the neural oscillatory mechanisms underlying face processing and recognition of emotion expression. To our knowledge, we here present the first time–frequency exploration of oscillatory brain mechanisms underlying the processing of dynamic FEs. Results: Videos of joyful, fearful, and neutral dynamic facial expressions were presented to 18 healthy young adults. We analyzed event-related activity in electroencephalography (EEG) data, focusing on delta, theta, and alpha-band oscillations. Since the videos involved a transition from neutral to emotional expressions (onset around 500 ms), we identified time windows that might correspond to face perception initially (first time window, TW1) and emotion expression recognition subsequently (around 1000 ms; second time window, TW2). The first TW showed increased power and phase-locking values for all frequency bands. In the first TW, power and phase-locking values were higher in the delta and theta bands for emotional FEs as compared to neutral FEs, thus potentially serving as a marker for emotion recognition in dynamic face processing. Conclusions: Our time–frequency exploration revealed consistent oscillatory responses to complex, dynamic, ecologically meaningful FE stimuli. We conclude that while dynamic FE processing involves complex network dynamics, dynamic FEs were successfully used to reveal temporally separate oscillatory responses related to face processing and, subsequently, emotion expression recognition.
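The phase-locking value (PLV) used in this study measures how consistently oscillatory phase aligns across trials at each time point. A minimal numpy sketch of the across-trials PLV (array shapes and the toy signal are assumptions for illustration, not the authors' pipeline):

```python
import numpy as np

def phase_locking_value(analytic):
    """Inter-trial phase-locking value.

    `analytic` is a complex array of shape (n_trials, n_times), e.g. the
    output of a wavelet or Hilbert transform at one channel and frequency.
    PLV at each time point is the length of the mean unit phase vector
    across trials: 1 = perfect phase alignment, near 0 = random phases.
    """
    phases = analytic / np.abs(analytic)   # normalize to unit phase vectors
    return np.abs(phases.mean(axis=0))     # shape (n_times,)

# Toy check: identical phase across trials gives PLV == 1 everywhere,
# while random phases give PLV near zero.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
locked = np.exp(1j * 2 * np.pi * 5 * t)[None, :].repeat(20, axis=0)
plv = phase_locking_value(locked)
rand = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(500, 200)))
low = phase_locking_value(rand)
```

Because PLV discards amplitude, it separates phase consistency from the power increases reported in the same time windows.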
Affiliation(s)
- Tuba Aktürk
- Program of Electroneurophysiology, Vocational School, Istanbul Medipol University, Istanbul, Turkey
- Program of Neuroscience Ph.D., Graduate School of Health Sciences, Istanbul Medipol University, Istanbul, Turkey
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Tom A de Graaf
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Yasemin Abra
- Department of Biological Sciences, Faculty of Arts and Sciences, Middle East Technical University, Ankara, Turkey
- Institute for Psychology, Faculty of Human Sciences, Universität der Bundeswehr München, Munich, Germany
- Department of Psychology, Faculty of Psychology and Educational Sciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Sevilay Şahoğlu-Göktaş
- Program of Neuroscience Ph.D., Graduate School of Health Sciences, Istanbul Medipol University, Istanbul, Turkey
- Regenerative and Restorative Medicine Research Center (REMER), Istanbul Medipol University, Istanbul, Turkey
- Dilek Özkan
- Meram Faculty of Medicine, Konya Necmettin Erbakan University, Konya, Turkey
- Aysun Kula
- Department of Molecular Biology and Genetics, Faculty of Science, Sivas Cumhuriyet University, Sivas, Turkey
- Bahar Güntekin
- Department of Biophysics, School of Medicine, Istanbul Medipol University, Istanbul, Turkey
- Regenerative and Restorative Medicine Research Center (REMER), Istanbul Medipol University, Istanbul, Turkey
6
Movies and narratives as naturalistic stimuli in neuroimaging. Neuroimage 2020; 224:117445. PMID: 33059053; PMCID: PMC7805386; DOI: 10.1016/j.neuroimage.2020.117445.
Abstract
Using movies and narratives as naturalistic stimuli in human neuroimaging studies has yielded significant advances in the understanding of cognitive and emotional functions. The relevant literature was reviewed, with emphasis on how the use of naturalistic stimuli has helped advance scientific understanding of human memory, attention, language, emotions, and social cognition in ways that would have been difficult otherwise. These advances include discovering a cortical hierarchy of temporal receptive windows, which supports processing of dynamic information that accumulates over several time scales, such as immediate reactions vs. slowly emerging patterns in social interactions. Naturalistic stimuli have also helped elucidate how the hippocampus supports segmentation and memorization of events in day-to-day life and have afforded insights into attentional brain mechanisms underlying our ability to adopt specific perspectives during natural viewing. Further, neuroimaging studies with naturalistic stimuli have revealed the role of the default-mode network in narrative processing and in social cognition. Finally, by robustly eliciting genuine emotions, these stimuli have helped elucidate the brain basis of both basic and social emotions, apparently manifested as highly overlapping yet distinguishable patterns of brain activity.
7
Maffei V, Indovina I, Mazzarella E, Giusti MA, Macaluso E, Lacquaniti F, Viviani P. Sensitivity of occipito-temporal cortex, premotor and Broca's areas to visible speech gestures in a familiar language. PLoS One 2020; 15:e0234695. PMID: 32559213; PMCID: PMC7304574; DOI: 10.1371/journal.pone.0234695.
Abstract
When looking at a speaking person, the analysis of facial kinematics contributes to language discrimination and to the decoding of the time flow of visual speech. To disentangle these two factors, we investigated behavioural and fMRI responses to familiar and unfamiliar languages when observing speech gestures with natural or reversed kinematics. Twenty Italian volunteers viewed silent video-clips of speech shown as recorded (Forward, biological motion) or reversed in time (Backward, non-biological motion), in Italian (familiar language) or Arabic (unfamiliar language). fMRI revealed that language (Italian/Arabic) and time-rendering (Forward/Backward) modulated distinct areas in the ventral occipito-temporal cortex, suggesting that visual speech analysis begins in this region, earlier than previously thought. Left premotor ventral (superior subdivision) and dorsal areas were preferentially activated with the familiar language independently of time-rendering, challenging the view that the role of these regions in speech processing is purely articulatory. The left premotor ventral region in the frontal operculum, thought to include part of Broca's area, responded to the natural familiar language, consistent with the hypothesis of motor simulation of speech gestures.
Affiliation(s)
- Vincenzo Maffei
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Data Lake & BI, DOT - Technology, Poste Italiane, Rome, Italy
- Iole Indovina
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Departmental Faculty of Medicine and Surgery, Saint Camillus International University of Health and Medical Sciences, Rome, Italy
- Maria Assunta Giusti
- Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Emiliano Macaluso
- ImpAct Team, Lyon Neuroscience Research Center, Lyon, France
- Laboratory of Neuroimaging, IRCCS Santa Lucia Foundation, Rome, Italy
- Francesco Lacquaniti
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Paolo Viviani
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
8
Sato W, Kochiyama T, Uono S, Sawada R, Kubota Y, Yoshimura S, Toichi M. Widespread and lateralized social brain activity for processing dynamic facial expressions. Hum Brain Mapp 2019; 40:3753-3768. PMID: 31090126; DOI: 10.1002/hbm.24629.
Abstract
Dynamic facial expressions of emotions constitute natural and powerful means of social communication in daily life. A number of previous neuroimaging studies have explored the neural mechanisms underlying the processing of dynamic facial expressions, and indicated the activation of certain social brain regions (e.g., the amygdala) during such tasks. However, the activated brain regions were inconsistent across studies, and their laterality was rarely evaluated. To investigate these issues, we measured brain activity using functional magnetic resonance imaging in a relatively large sample (n = 51) during the observation of dynamic facial expressions of anger and happiness and their corresponding dynamic mosaic images. The observation of dynamic facial expressions, compared with dynamic mosaics, elicited stronger activity in the bilateral posterior cortices, including the inferior occipital gyri, fusiform gyri, and superior temporal sulci. The dynamic facial expressions also activated bilateral limbic regions, including the amygdalae and ventromedial prefrontal cortices, more strongly versus mosaics. In the same manner, activation was found in the right inferior frontal gyrus (IFG) and left cerebellum. Laterality analyses comparing original and flipped images revealed right hemispheric dominance in the superior temporal sulcus and IFG and left hemispheric dominance in the cerebellum. These results indicated that the neural mechanisms underlying processing of dynamic facial expressions include widespread social brain regions associated with perceptual, emotional, and motor functions, and include a clearly lateralized (right cortical and left cerebellar) network like that involved in language processing.
Affiliation(s)
- Wataru Sato
- Kokoro Research Center, Kyoto University, Kyoto, Japan
- Shota Uono
- Department of Neurodevelopmental Psychiatry, Habilitation and Rehabilitation, Kyoto University, Kyoto, Japan
- Reiko Sawada
- Department of Neurodevelopmental Psychiatry, Habilitation and Rehabilitation, Kyoto University, Kyoto, Japan
- Yasutaka Kubota
- Health and Medical Services Center, Shiga University, Hikone, Shiga, Japan
- Sayaka Yoshimura
- Department of Neurodevelopmental Psychiatry, Habilitation and Rehabilitation, Kyoto University, Kyoto, Japan
- Motomi Toichi
- Faculty of Human Health Science, Kyoto University, Kyoto, Japan
- The Organization for Promoting Neurodevelopmental Disorder Research, Kyoto, Japan
9
Müller-Bardorff M, Bruchmann M, Mothes-Lasch M, Zwitserlood P, Schlossmacher I, Hofmann D, Miltner W, Straube T. Early brain responses to affective faces: A simultaneous EEG-fMRI study. Neuroimage 2018; 178:660-667. DOI: 10.1016/j.neuroimage.2018.05.081.
10
Korolkova OA. The role of temporal inversion in the perception of realistic and morphed dynamic transitions between facial expressions. Vision Res 2017; 143:42-51. PMID: 29274357; DOI: 10.1016/j.visres.2017.10.007.
Abstract
Recent studies suggest that video recordings of human facial expressions are perceived differently than linear morphing between the first and last frames of these records. Also, observers can differentiate dynamic expressions presented in normal versus time-reversed frame orders. To date, the simultaneous influence of dynamics (natural or linear) and timeline (normal or reversed) has not yet been tested on a wide range of dynamic emotional expressions and the transitions between them. We compared the perception of dynamic transitions between basic emotions in realistic (human-posed) and artificial (linearly morphed) stimuli which were presented in reversed or non-reversed order. The nonlinearity of realistic stimuli was demonstrated by automated facial structure analysis. The results of the behavioral study revealed that the recognition of emotions in time-reversed stimuli significantly differed from recognition of the normally presented ones, and this difference was substantially higher for videos of a dynamic human face than for linear morphs. Emotions displayed at the end of the transitions were recognized better than the first-frame emotions in all types of stimuli except in the time-reversed videos, which showed a similar recognition rate for both the starting and ending emotions. Our findings suggest that nonlinearity, which is present in a realistic facial display but absent in linear morphing, is an important cue for emotion perception, and that unnatural perceptual conditions (inversion in time) make the recognition of emotions more difficult. These results confirm the ability of the human visual system to use subtle dynamic cues on an interlocutor's face, and reveal its sensitivity to the timeline organization of the displayed emotions.
Affiliation(s)
- Olga A Korolkova
- Center for Experimental Psychology, Moscow State University of Psychology and Education, 2a Shelepikhinskaya Quay, 123290 Moscow, Russia.
11
Wright MJ, Kuhn LK. Event-related potentials to changes in facial expression in two-phase transitions. PLoS One 2017; 12:e0175631. PMID: 28406957; PMCID: PMC5391024; DOI: 10.1371/journal.pone.0175631.
Abstract
The purpose of the study was to compare event-related potentials (ERPs) to different transitions between emotional and neutral facial expressions. The stimuli contained a single transition between two different images of the same face, giving a strong impression of changing expression through apparent motion whilst eliminating change in irrelevant stimulus variables such as image contrast or identity. Stimuli were calibrated for intensity, valence and perceived emotion category, and only trials where the target emotion was correctly identified were included. In the first experiment, a magnification change (zoom) was a control condition. Transitions from neutral to angry expressions produced a more negative N1 with longer peak latency, and more positive P2, than did an increase in magnification. Critically, the response to neutral following angry, relative to neutral following magnified, showed a generally more negative ERP with a delayed N1 peak and reduced P2 amplitude. In the second experiment, comparison of neutral-happy and neutral-frightened transitions showed significantly different ERPs to emotional expression change. Responses to the reversed direction of a transition (happy-neutral and frightened-neutral) were much reduced. Unlike the comparison of angry-neutral with magnified-neutral, there were minimal differences in the responses to neutral following happy and neutral following frightened. The results demonstrate in a young adult sample the directionality of responses to facial expression dynamics, and suggest a separation of neural mechanisms for detecting expression changes and magnification changes.
Affiliation(s)
- Michael J. Wright
- Centre for Cognitive Neuroscience, Department of Life Sciences, College of Health and Life Sciences, Brunel University London, Uxbridge, United Kingdom
- Lisa K. Kuhn
- Experimental Neuropsychology Unit, Department of Psychology, Saarland University, Saarbrücken, Germany
12
Hanke M, Adelhöfer N, Kottke D, Iacovella V, Sengupta A, Kaule FR, Nigbur R, Waite AQ, Baumgartner F, Stadler J. A studyforrest extension, simultaneous fMRI and eye gaze recordings during prolonged natural stimulation. Sci Data 2016; 3:160092. PMID: 27779621; PMCID: PMC5079121; DOI: 10.1038/sdata.2016.92.
Abstract
Here we present an update of the studyforrest (http://studyforrest.org) dataset that complements the previously released functional magnetic resonance imaging (fMRI) data for natural language processing with a new two-hour 3 Tesla fMRI acquisition while 15 of the original participants were shown an audio-visual version of the stimulus motion picture. We demonstrate with two validation analyses that these new data support modeling specific properties of the complex natural stimulus, as well as a substantial within-subject BOLD response congruency in brain areas related to the processing of auditory inputs, speech, and narrative when compared to the existing fMRI data for audio-only stimulation. In addition, we provide participants' eye gaze location as recorded simultaneously with fMRI, and an additional sample of 15 control participants whose eye gaze trajectories for the entire movie were recorded in a lab setting—to enable studies on attentional processes and comparative investigations on the potential impact of the stimulation setting on these processes.
Affiliation(s)
- Michael Hanke
- Psychoinformatics Lab, Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Center for Behavioral Brain Sciences, Magdeburg D-39016, Germany
- Nico Adelhöfer
- Psychoinformatics Lab, Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Daniel Kottke
- Knowledge Management and Discovery Lab, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Ayan Sengupta
- Experimental Psychology Lab, Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Falko R Kaule
- Psychoinformatics Lab, Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Visual Processing Laboratory, Department of Ophthalmology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Roland Nigbur
- Department of Neuropsychology, Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Alexander Q Waite
- Psychoinformatics Lab, Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Florian Baumgartner
- Experimental Psychology Lab, Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39016, Germany
- Jörg Stadler
- Leibniz Institute for Neurobiology, Magdeburg D-39118, Germany
Collapse
|
13
|
Korkmaz Hacialihafiz D, Bartels A. Motion responses in scene-selective regions. Neuroimage 2015; 118:438-44. [DOI: 10.1016/j.neuroimage.2015.06.031]
|
14
|
Reinl M, Bartels A. Perception of temporal asymmetries in dynamic facial expressions. Front Psychol 2015; 6:1107. [PMID: 26300807] [PMCID: PMC4523710] [DOI: 10.3389/fpsyg.2015.01107] Open
Abstract
In the current study, we examined whether timeline reversals and the emotional direction of dynamic facial expressions affect the subjective experience of human observers. We recorded natural movies of faces that increased or decreased their expressions of fear, and played them either in the natural frame order or reversed from last to first frame (reversed timeline). This led to four conditions of increasing or decreasing fear, each following either the natural or the reversed temporal trajectory of facial dynamics. This 2-by-2 factorial design controlled for visual low-level properties, static visual content, and motion energy across the different factors. It allowed us to examine perceptual consequences that would occur if the timeline trajectory of facial muscle movements during the increase of an emotion is not the exact mirror of the timeline during the decrease. It additionally allowed us to study perceptual differences between increasing and decreasing emotional expressions. Perception of these time-dependent asymmetries has not yet been quantified. We found that three emotional measures, emotional intensity, artificialness of facial movement, and convincingness or plausibility of emotion portrayal, were affected by timeline reversals as well as by the emotional direction of the facial expressions. Our results imply that natural dynamic facial expressions contain temporal asymmetries, and show that deviations from the natural timeline lead to a reduction of perceived emotional intensity and convincingness, and to an increase of perceived artificialness of the dynamic facial expression. In addition, they show that decreasing facial expressions are judged as less plausible than increasing facial expressions. Our findings are of relevance for both behavioral and neuroimaging studies, as processing and perception are influenced by temporal asymmetries.
Affiliation(s)
- Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
|
15
|
Contrasting specializations for facial motion within the macaque face-processing system. Curr Biol 2015; 25:261-266. [PMID: 25578903] [DOI: 10.1016/j.cub.2014.11.038]
Abstract
Facial motion transmits rich and ethologically vital information, but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain, and facial motion activates these patches and surrounding areas. Yet, it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery's organization might be. To address these questions, we used fMRI to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system.
|