1. Liang J, Zhang M, Yang L, Li Y, Li Y, Wang L, Li H, Chen J, Luo W. How Linguistic and Nonlinguistic Vocalizations Shape the Perception of Emotional Faces-An Electroencephalography Study. J Cogn Neurosci 2025; 37:970-987. PMID: 39620941. DOI: 10.1162/jocn_a_02284.
Abstract
Vocal emotions are crucial in guiding visual attention toward emotionally significant environmental events, such as recognizing emotional faces. This study employed continuous EEG recordings to examine the impact of linguistic and nonlinguistic vocalizations on facial emotion processing. Participants completed a facial emotion discrimination task while viewing fearful, happy, and neutral faces. The behavioral and ERP results indicated that fearful nonlinguistic vocalizations accelerated the recognition of fearful faces and elicited a larger P1 amplitude, whereas happy linguistic vocalizations accelerated the recognition of happy faces and similarly induced a greater P1 amplitude. In recognition of fearful faces, a greater N170 component was observed in the right hemisphere when the emotional category of the priming vocalization was consistent with the face stimulus. In contrast, this effect occurred in the left hemisphere while recognizing happy faces. Representational similarity analysis revealed that the temporoparietal regions automatically differentiate between linguistic and nonlinguistic vocalizations early in face processing. In conclusion, these findings enhance our understanding of the interplay between vocalization types and facial emotion recognition, highlighting the importance of cross-modal processing in emotional perception.
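For readers who want to see what the representational similarity analysis (RSA) step amounts to computationally, the following is a minimal time-resolved sketch, not the authors' pipeline. It assumes a hypothetical NumPy array `epochs` of shape (n_trials, n_channels, n_times) and a condensed model RDM `model_rdm_vec` encoding the hypothesized linguistic/nonlinguistic structure.

```python
# Minimal time-resolved RSA sketch (illustrative, not the study's code).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def timewise_rsa(epochs, model_rdm_vec):
    """Spearman-correlate the neural RDM with a model RDM at each time point."""
    n_trials, _, n_times = epochs.shape
    rho = np.zeros(n_times)
    for t in range(n_times):
        # Neural RDM: pairwise correlation distance between trial patterns
        neural_rdm_vec = pdist(epochs[:, :, t], metric="correlation")
        rho[t], _ = spearmanr(neural_rdm_vec, model_rdm_vec)
    return rho
```

Plotting the returned `rho` against time shows when, if at all, the model structure emerges in the EEG signal.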
Affiliation(s)
- Junyu Liang: South China Normal University; Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience
- Mingming Zhang: Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience
- Lan Yang: South China Normal University; Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience
- Yiwen Li: Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience; Beijing Normal University
- Yuchen Li: Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience
- Li Wang: South China Normal University
- Wenbo Luo: Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience
2. Wang X, Becker B, Tong SX. The power of pain: The temporal-spatial dynamics of empathy induced by body gestures and facial expressions. Neuroimage 2025; 310:121148. PMID: 40096953. DOI: 10.1016/j.neuroimage.2025.121148.
Abstract
Two non-verbal pain representations, body gestures and facial expressions, can communicate pain to others and elicit our own empathic responses. However, the specific impact of these representations on neural responses of empathy, particularly in terms of temporal and spatial neural mechanisms, remains unclear. To address this issue, the present study developed a kinetic pain empathy paradigm comprising short animated videos depicting a protagonist's "real life" pain and no-pain experiences through body gestures and facial expressions. Electroencephalographic (EEG) recordings were conducted on 52 neurotypical adults while they viewed the animations. Results from multivariate pattern, event-related potential, event-related spectrum perturbation, and source localization analyses revealed that pain expressed through facial expressions, but not body gestures, elicited increased N200 and P200 responses and activated various brain regions, including the anterior cingulate cortex, insula, thalamus, ventromedial prefrontal cortex, temporal gyrus, cerebellum, and right supramarginal gyrus. Enhanced theta power with distinct spatial distributions was observed during the early affective arousal and late cognitive reappraisal stages of the pain event. Multiple regression analyses showed a negative correlation between the N200 amplitude and pain catastrophizing, and a positive correlation between the P200 amplitude and autism traits. These findings demonstrate the temporal evolution of empathy evoked by dynamic pain displays, highlighting the significant impact of facial expressions and their association with individuals' unique psychological traits.
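The event-related spectrum perturbation result (enhanced theta power) is conventionally computed with Morlet wavelets. Below is a hedged sketch using MNE-Python, with an assumed `epochs` object and illustrative band and baseline settings rather than the study's actual parameters.

```python
# Theta-band time-frequency power via Morlet wavelets (illustrative settings).
import numpy as np
import mne

freqs = np.arange(4.0, 8.0, 1.0)                  # theta band, 4-7 Hz
power = mne.time_frequency.tfr_morlet(
    epochs, freqs=freqs, n_cycles=freqs / 2.0,
    return_itc=False, average=True)
power.apply_baseline(baseline=(-0.3, 0.0), mode="percent")  # relative change
theta = power.data.mean(axis=1)                   # average over theta frequencies
```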
Affiliation(s)
- Xin Wang: Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong, China
- Benjamin Becker: Department of Psychology, Faculty of Social Sciences, The University of Hong Kong, Hong Kong, China
- Shelley Xiuli Tong: Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong, China
3. Mueller C, Durston AJ, Itier RJ. Happy and angry facial expressions are processed independently of task demands and semantic context congruency in the first stages of vision - A mass univariate ERP analysis. Brain Res 2025; 1851:149481. PMID: 39889942. DOI: 10.1016/j.brainres.2025.149481.
Abstract
Neural decoding of others' facial expressions is critical in social interactions and has been investigated using scalp event-related potentials (ERPs). However, the impact of task and emotional context congruency on this neural decoding is unclear. Previous ERP studies employed classic statistical analyses that focused only on specific electrodes and time points, which inflates type I and type II errors. The present study re-analyzed the study by Aguado et al. (2019) using robust, data-driven mass univariate statistics across every time point and electrode and rejected trials with early reaction times to rule out motor-related activity in the neural recordings. Participants viewed neutral faces paired with negative or positive situational sentences (e.g., "She catches her partner cheating on her with her best friend"), followed by the same individuals' faces expressing happiness or anger, such that the facial expressions were congruent or incongruent with the situation. Participants engaged in two tasks: an emotion discrimination task and a situation-expression congruency discrimination task. We found significant effects of expression that were largest during the N170-P2 interval, and effects of congruency and task around an LPP-like component. However, the effect of congruency was significant only in the congruency task, suggesting a limited and task-dependent influence of semantic context. Importantly, emotion did not interact with any factor neurally, suggesting facial expressions were decoded automatically during the first 400 ms of vision, regardless of context congruency or task demands. The results and their discrepancies with the original findings are discussed in the context of ERP statistics and the replication crisis.
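A data-driven test across every electrode and time point can be approximated with a cluster-based permutation test; the sketch below is in the spirit of the mass univariate approach described above, not a reproduction of the authors' statistics, and assumes hypothetical MNE epochs objects `epochs_a` and `epochs_b` for the two conditions.

```python
# Cluster-based permutation test over all channels and time points (sketch).
import numpy as np
from mne.channels import find_ch_adjacency
from mne.stats import spatio_temporal_cluster_test

adjacency, _ = find_ch_adjacency(epochs_a.info, ch_type="eeg")
# The test expects (n_observations, n_times, n_channels) per condition
X = [np.transpose(e.get_data(), (0, 2, 1)) for e in (epochs_a, epochs_b)]
t_obs, clusters, cluster_pv, _ = spatio_temporal_cluster_test(
    X, adjacency=adjacency, n_permutations=1000, tail=0)
print("Significant clusters:", np.where(cluster_pv < 0.05)[0])
```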
Affiliation(s)
- Calla Mueller: University of Waterloo, Department of Psychology, 200 University Ave West, Waterloo, Ontario N2L 3G1, Canada
- Amie J Durston: University of Waterloo, Department of Psychology, 200 University Ave West, Waterloo, Ontario N2L 3G1, Canada
- Roxane J Itier: University of Waterloo, Department of Psychology, 200 University Ave West, Waterloo, Ontario N2L 3G1, Canada
4. Lehnen JM, Schweinberger SR, Nussbaum C. Vocal Emotion Perception and Musicality-Insights from EEG Decoding. Sensors (Basel) 2025; 25:1669. PMID: 40292745. PMCID: PMC11944463. DOI: 10.3390/s25061669.
Abstract
Musicians have an advantage in recognizing vocal emotions compared to non-musicians, a performance advantage often attributed to enhanced early auditory sensitivity to pitch. Yet a previous ERP study only detected group differences from 500 ms onward, suggesting that conventional ERP analyses might not be sensitive enough to detect early neural effects. To address this, we re-analyzed EEG data from 38 musicians and 39 non-musicians engaged in a vocal emotion perception task. Stimuli were generated using parameter-specific voice morphing to preserve emotional cues in either the pitch contour (F0) or timbre. By employing a neural decoding framework with a linear discriminant analysis classifier, we tracked the evolution of emotion representations over time in the EEG signal. Converging with the previous ERP study, our findings reveal that musicians, but not non-musicians, exhibited significant emotion decoding between 500 and 900 ms after stimulus onset, a pattern observed for F0 morphs only. These results suggest that musicians' superior vocal emotion recognition arises from more effective integration of pitch information during later processing stages rather than from enhanced early sensory encoding. Our study also demonstrates the potential of neural decoding approaches using EEG brain activity as a biological sensor for unraveling the temporal dynamics of voice perception.
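A time-resolved decoding framework with a linear discriminant analysis classifier can be set up compactly with MNE-Python and scikit-learn; the sketch below assumes an `epochs` object and a per-trial label vector `y`, and only loosely mirrors the authors' pipeline.

```python
# Sliding-window LDA decoding of emotion categories (sketch).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import SlidingEstimator, cross_val_multiscore

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
time_decoder = SlidingEstimator(clf, scoring="accuracy")
scores = cross_val_multiscore(time_decoder, epochs.get_data(), y, cv=5)
mean_scores = scores.mean(axis=0)  # decoding accuracy at each time point
```

Cluster-corrected comparisons of `mean_scores` against chance would then identify time windows such as the 500-900 ms effect reported above.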
Affiliation(s)
- Johannes M. Lehnen: Department of Clinical Psychology in Childhood and Adolescence, Friedrich Schiller University Jena, 07743 Jena, Germany; Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, 07743 Jena, Germany
- Stefan R. Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University Jena, 07743 Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, 1205 Geneva, Switzerland
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University Jena, 07743 Jena, Germany
5. Durston AJ, Itier RJ. Event-Related Potentials to Facial Expressions Are Related to Stimulus-Level Perceived Arousal and Valence. Psychophysiology 2025; 62:e70045. PMID: 40115983. PMCID: PMC11926668. DOI: 10.1111/psyp.70045.
Abstract
Facial expressions provide critical details about social partners' inner states. We investigated whether event-related potentials (ERP) related to the visual processing of facial expressions are modulated by participants' perceived arousal and valence at the stimulus level. ERPs were recorded while participants (N = 80) categorized the gender of faces expressing fear, anger, happiness, and no emotion. Participants then viewed each face again and rated them on arousal and valence using 1-9 Likert scales. For each participant, ratings of each unique face were linked back to corresponding ERP trials. ERPs were analyzed at all time points and electrodes using hierarchical mass univariate statistics. Three different ANOVA models were employed: the original emotion model, and models with valence or arousal ratings as trial-level regressors. Results from models with ratings highly overlapped with the original model, although they were more temporally restricted. The N170 component was the most impacted by arousal and valence ratings, with four out of six emotion contrasts revealing significant valence or arousal interactions. Emotion effects on the P2 component were mostly unrelated to ratings. On the EPN component, only two contrasts related to both arousal and valence ratings. Thus, ERP emotion effects are related to participants' perceived arousal and valence of the stimuli, although this association depends on the contrast analyzed. These findings, their limitations, and generalizability are discussed in reference to existing theories and literature.
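Relating trial-level ratings to ERP amplitudes can be sketched as a mass univariate regression. The example below uses MNE-Python's `linear_regression` with hypothetical `epochs` and `ratings` variables; it is a simplified stand-in for the hierarchical ANOVA models the authors report.

```python
# Trial-level regression of EEG on arousal ratings (simplified sketch).
import numpy as np
import pandas as pd
from mne.stats import linear_regression

design = pd.DataFrame({
    "intercept": np.ones(len(epochs)),
    "arousal": ratings,                 # one rating per retained trial
})
res = linear_regression(epochs, design_matrix=design,
                        names=list(design.columns))
beta_arousal = res["arousal"].beta      # Evoked-like map of regression weights
```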
Affiliation(s)
- Amie J. Durston: Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Roxane J. Itier: Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
6. Zhang G, Luck SJ. Assessing the impact of artifact correction and artifact rejection on the performance of SVM-based decoding of EEG signals. bioRxiv 2025:2025.02.22.639684. PMID: 40060477. PMCID: PMC11888300. DOI: 10.1101/2025.02.22.639684.
Abstract
Numerous studies have demonstrated that eyeblinks and other large artifacts can decrease the signal-to-noise ratio of EEG data, resulting in decreased statistical power for conventional univariate analyses. However, it is not clear whether eliminating these artifacts during preprocessing enhances the performance of multivariate pattern analysis (MVPA; decoding), especially given that artifact rejection reduces the number of trials available for training the decoder. This study aimed to evaluate the impact of artifact-minimization approaches on the decoding performance of support vector machines. Independent component analysis (ICA) was used to correct ocular artifacts, and artifact rejection was used to discard trials with large voltage deflections from other sources (e.g., muscle artifacts). We assessed decoding performance in relatively simple binary classification tasks using data from seven commonly-used event-related potential paradigms (N170, mismatch negativity, N2pc, P3b, N400, lateralized readiness potential, and error-related negativity), as well as more challenging multi-way decoding tasks, including stimulus location and stimulus orientation. The results indicated that the combination of artifact correction and rejection did not improve decoding performance in the vast majority of cases. However, artifact correction may still be essential to minimize artifact-related confounds that might artificially inflate decoding accuracy. Researchers who are decoding EEG data from paradigms, populations, and recording setups that are similar to those examined here may benefit from our recommendations to optimize decoding performance and avoid incorrect conclusions.
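The two artifact-minimization steps compared in this study look roughly like the following in MNE-Python; component counts and thresholds are illustrative, not the study's values, and an EOG channel is assumed for blink detection.

```python
# ICA-based ocular correction followed by peak-to-peak rejection (sketch).
import mne
from mne.preprocessing import ICA

ica = ICA(n_components=20, random_state=97)
ica.fit(raw_filtered)                           # fit on high-pass filtered data
eog_inds, _ = ica.find_bads_eog(raw_filtered)   # requires an EOG channel
ica.exclude = eog_inds
raw_corrected = ica.apply(raw_filtered.copy())  # artifact correction

epochs = mne.Epochs(raw_corrected, events, tmin=-0.2, tmax=0.8,
                    reject=dict(eeg=100e-6),    # artifact rejection threshold
                    preload=True)
```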
Affiliation(s)
- Guanghui Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China; Center for Mind & Brain, University of California-Davis, Davis, CA, USA
- Steven J Luck: Center for Mind & Brain, University of California-Davis, Davis, CA, USA
7. Leong C, Gao F, Yuan Z. Neural decoding reveals dynamic patterns of visual chunk memory processes. Brain Res Bull 2025; 221:111208. PMID: 39814325. DOI: 10.1016/j.brainresbull.2025.111208.
Abstract
Chunk memory constitutes the basic unit that manages long-term memory and converts it into immediate decision-making processes, yet it remains unclear how incoming information is interpreted and organized to form effective chunk memory. This paper investigates electroencephalography (EEG) patterns of chunk memory in visual statistical learning, combining time-domain feature extraction with time-resolved multivariate pattern analysis (MVPA). The global field power (GFP) and MVPA results revealed that chunk memory processes occurred during specific time windows in the learning phase. These processes included attention modulation (P1), recognition and feature extraction (P2), and segmentation for long-term memory conversion (P6). In the decision-making stage, chunk memory processes were encoded by four ERP components: scene processing correlated with P1, followed by feature extraction facilitated by P2, an encoding process (P4), and a segmentation process (P6). This paper identifies the early process of chunk memory through implicit learning and applies univariate and multivariate approaches to establish the neural activity patterns of the early chunk memory process, providing a basis for subsequent related studies.
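Global field power (GFP), used above to identify time windows of interest, reduces to the spatial standard deviation of the scalp voltage at each time point. A minimal sketch, assuming a hypothetical `evoked_data` array of shape (n_channels, n_times):

```python
# GFP(t) = standard deviation across channels of average-referenced voltage.
import numpy as np

def global_field_power(data):
    data = data - data.mean(axis=0, keepdims=True)  # average reference
    return data.std(axis=0)

gfp = global_field_power(evoked_data)  # one value per time point
```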
Affiliation(s)
- Chantat Leong: Centre for Cognitive and Brain Sciences, University of Macau, Macao; Faculty of Health Sciences, University of Macau, Macao
- Fei Gao: Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China
- Zhen Yuan: Centre for Cognitive and Brain Sciences, University of Macau, Macao; Faculty of Health Sciences, University of Macau, Macao
8. Xu W, Lyu B, Ru X, Li D, Gu W, Ma X, Zheng F, Li T, Liao P, Cheng H, Yang R, Song J, Jin Z, Li C, He K, Gao JH. Decoding the Temporal Structures and Interactions of Multiple Face Dimensions Using Optically Pumped Magnetometer Magnetoencephalography (OPM-MEG). J Neurosci 2024; 44:e2237232024. PMID: 39358044. PMCID: PMC11580774. DOI: 10.1523/jneurosci.2237-23.2024.
Abstract
Humans possess a remarkable ability to rapidly access diverse information from others' faces with just a brief glance, which is crucial for intricate social interactions. While previous studies using event-related potentials/fields have explored various face dimensions during this process, the interplay between these dimensions remains unclear. Here, by applying multivariate decoding analysis to neural signals recorded with optically pumped magnetometer magnetoencephalography (OPM-MEG), we systematically investigated the temporal interactions between invariant and variable aspects of face stimuli, including race, gender, age, and expression. First, our analysis revealed unique temporal structures for each face dimension with high test-retest reliability. Notably, expression and race exhibited a dominant and stably maintained temporal structure according to temporal generalization analysis. Further exploration into the mutual interactions among face dimensions uncovered age effects on gender and race, as well as expression effects on race, during the early stage (∼200-300 ms after face presentation). Additionally, we observed a relatively late effect of race on gender representation, peaking ∼350 ms after stimulus onset. Taken together, our findings provide novel insights into the neural dynamics underlying the multidimensional aspects of face perception and illuminate the promising future of utilizing OPM-MEG for exploring higher-level human cognition.
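Temporal generalization analysis, training a decoder at each time point and testing it at every other one, is available off the shelf in MNE-Python. The sketch below assumes an `epochs` object and binary labels `y` (e.g., one face dimension) and stands in for, rather than reproduces, the authors' analysis.

```python
# Temporal generalization decoding (train time x test time) sketch.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=1)
scores = cross_val_multiscore(gen, epochs.get_data(), y, cv=5)
# scores.mean(axis=0) is a (train_time, test_time) matrix; a broad square of
# above-chance values indicates a stably maintained neural code.
```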
Affiliation(s)
- Wei Xu: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China
- Xingyu Ru: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Dongxu Li: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Wenyu Gu: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China
- Xiao Ma: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Fufu Zheng: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Tingyue Li: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Pan Liao: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China
- Hao Cheng: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China
- Rui Yang: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Jingqi Song: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Zeyu Jin: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Kaiyan He: Changping Laboratory, Beijing 102206, China
- Jia-Hong Gao: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Changping Laboratory, Beijing 102206, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; McGovern Institute for Brain Research, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
9. Gao P, Jiang Z, Yang Y, Zheng Y, Feng G, Li X. Temporal neural dynamics of understanding communicative intentions from speech prosody. Neuroimage 2024; 299:120830. PMID: 39245398. DOI: 10.1016/j.neuroimage.2024.120830.
Abstract
Understanding the correct intention of a speaker is critical for social interaction. Speech prosody is an important source for understanding speakers' intentions during verbal communication. However, the neural dynamics by which the human brain translates prosodic cues into a mental representation of communicative intentions in real time remain unclear. Here, we recorded electroencephalography (EEG) while participants listened to dialogues. The prosodic features of the critical words at the end of sentences were manipulated to signal either suggestion, warning, or neutral intentions. The results showed that suggestion and warning intentions evoked enhanced late positive event-related potentials (ERPs) compared to the neutral condition. Linear mixed-effects model (LMEM) regression and representational similarity analysis (RSA) revealed that these ERP effects were distinctively correlated with prosodic acoustic analysis, emotional valence evaluation, and intention interpretation in different time windows; the onset latency significantly increased as the processing level of abstractness and communicative intentionality increased. Neural representations of intention and emotional information emerged and persisted in parallel over a long time window, guiding the correct identification of communicative intention. These results provide new insights into the structural components of intention processing and their temporal neural dynamics underlying communicative intention comprehension from speech prosody in online social interactions.
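The linear mixed-effects modeling step can be fitted with statsmodels; the sketch assumes a hypothetical long-format data frame `trials` with `amplitude`, `intention`, and `subject` columns, which is not necessarily how the authors structured their data.

```python
# LMEM of single-trial ERP amplitude on intention condition (sketch).
import statsmodels.formula.api as smf

model = smf.mixedlm("amplitude ~ C(intention)", data=trials,
                    groups=trials["subject"])   # random intercept per subject
fit = model.fit()
print(fit.summary())  # fixed effects of suggestion/warning vs. neutral
```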
Affiliation(s)
- Panke Gao: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Zhufang Jiang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yufang Yang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
- Yuanyi Zheng: School of Psychology, Shenzhen University, Shenzhen, Guangdong, China
- Gangyi Feng: Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
- Xiaoqing Li: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
10. Liu S, He W, Zhang M, Li Y, Ren J, Guan Y, Fan C, Li S, Gu R, Luo W. Emotional concepts shape the perceptual representation of body expressions. Hum Brain Mapp 2024; 45:e26789. PMID: 39185719. PMCID: PMC11345699. DOI: 10.1002/hbm.26789.
Abstract
Emotion perception interacts with how we think and speak, including our concept of emotions. Body expression is an important channel of emotion communication, but it is unknown whether and how its perception is modulated by conceptual knowledge. In this study, we employed representational similarity analysis and conducted three experiments combining semantic similarity ratings, a mouse-tracking task, and a one-back behavioral task with electroencephalography and functional magnetic resonance imaging techniques. The results show that conceptual knowledge predicted the perceptual representation of body expressions. Further, this prediction effect occurred at approximately 170 ms post-stimulus. The neural encoding of body expressions in the fusiform gyrus and lingual gyrus was impacted by emotion concept knowledge. Taken together, our results indicate that conceptual knowledge of emotion categories shapes the configural representation of body expressions in the ventral visual cortex, which offers compelling evidence for the constructed emotion theory.
Affiliation(s)
- Shuaicheng Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Weiqi He: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yiwen Li: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jie Ren: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yuanhao Guan: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Cong Fan: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Shuaixia Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Ruolei Gu: Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
11. Carrasco CD, Bahle B, Simmons AM, Luck SJ. Using multivariate pattern analysis to increase effect sizes for event-related potential analyses. Psychophysiology 2024; 61:e14570. PMID: 38516957. DOI: 10.1111/psyp.14570.
Abstract
Multivariate pattern analysis (MVPA) approaches can be applied to the topographic distribution of event-related potential (ERP) signals to "decode" subtly different stimulus classes, such as different faces or different orientations. These approaches are extremely sensitive, and it seems possible that they could also be used to increase effect sizes and statistical power in traditional paradigms that ask whether an ERP component differs in amplitude across conditions. To assess this possibility, we leveraged the open-source ERP CORE data set and compared the effect sizes resulting from conventional univariate analyses of mean amplitude with two MVPA approaches (support vector machine decoding and the cross-validated Mahalanobis distance, both of which are easy to compute using open-source software). We assessed these approaches across seven widely studied ERP components (N170, N400, N2pc, P3b, lateral readiness potential, error related negativity, and mismatch negativity). Across all components, we found that multivariate approaches yielded effect sizes that were as large or larger than the effect sizes produced by univariate approaches. These results indicate that researchers could obtain larger effect sizes, and therefore greater statistical power, by using multivariate analysis of topographic voltage patterns instead of traditional univariate analyses in many ERP studies.
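The cross-validated Mahalanobis (crossnobis) distance mentioned above can be computed in a few lines. This sketch uses Ledoit-Wolf shrinkage on the pooled data as a stand-in for a proper noise-covariance estimate and assumes equal trial counts in the two condition arrays `xa` and `xb` (n_trials, n_channels) at one time point.

```python
# Crossnobis distance between two conditions (simplified sketch).
import numpy as np
from sklearn.covariance import LedoitWolf
from sklearn.model_selection import KFold

def crossnobis(xa, xb, n_folds=5):
    prec = LedoitWolf().fit(np.vstack([xa, xb])).precision_  # whitening matrix
    dists = []
    for train, test in KFold(n_folds).split(xa):
        d_train = xa[train].mean(0) - xb[train].mean(0)
        d_test = xa[test].mean(0) - xb[test].mean(0)
        # Cross products between independent folds make the estimate unbiased
        dists.append(d_train @ prec @ d_test)
    return np.mean(dists)
```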
Affiliation(s)
- Brett Bahle: Center for Mind and Brain, University of California, Davis, California, USA
- Steven J Luck: Center for Mind and Brain, University of California, Davis, California, USA
12. El Zein M, Mennella R, Sequestro M, Meaux E, Wyart V, Grèzes J. Prioritized neural processing of social threats during perceptual decision-making. iScience 2024; 27:109951. PMID: 38832023. PMCID: PMC11145357. DOI: 10.1016/j.isci.2024.109951.
Abstract
Emotional signals, notably those signaling threat, benefit from prioritized processing in the human brain. Yet, it remains unclear whether perceptual decisions about the emotional, threat-related aspects of stimuli involve specific or similar neural computations compared to decisions about their non-threatening/non-emotional components. We developed a novel behavioral paradigm in which participants performed two different detection tasks (emotion vs. color) on the same, two-dimensional visual stimuli. First, electroencephalographic (EEG) activity in a cluster of central electrodes reflected the amount of perceptual evidence around 100 ms following stimulus onset, when the decision concerned emotion, not color. Second, participants' choice could be predicted earlier for emotion (240 ms) than for color (380 ms) by the mu (10 Hz) rhythm, which reflects motor preparation. Taken together, these findings indicate that perceptual decisions about threat-signaling dimensions of facial displays are associated with prioritized neural coding in action-related brain regions, supporting the motivational value of socially relevant signals.
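Predicting choices from the mu rhythm over time can be sketched as band-limited power followed by a classifier at each time point. Here `epochs_mu` (trials x central channels x times, already band-pass filtered at 8-12 Hz) and `choices` are assumptions for illustration, not the authors' variables.

```python
# Time-resolved choice prediction from mu-band power (sketch).
import numpy as np
from scipy.signal import hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

power = np.abs(hilbert(epochs_mu, axis=-1)) ** 2  # instantaneous mu power

auc_t = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    power[:, :, t], choices, cv=5,
                    scoring="roc_auc").mean()
    for t in range(power.shape[-1])
])
# The earliest time at which auc_t reliably exceeds 0.5 estimates when the
# upcoming choice becomes readable from motor-preparation signals.
```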
Affiliation(s)
- M. El Zein: Cognitive and Computational Neuroscience Laboratory (LNC), INSERM U960, DEC, Ecole Normale Supérieure, PSL University, 75005 Paris, France; Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany; Centre for Political Research (CEVIPOF), Sciences Po, Paris, France; Humans Matter, Paris, France
- R. Mennella: Cognitive and Computational Neuroscience Laboratory (LNC), INSERM U960, DEC, Ecole Normale Supérieure, PSL University, 75005 Paris, France; Laboratory of the Interactions Between Cognition Action and Emotion (LICAÉ, EA2931), UFR STAPS, Université Paris Nanterre, Nanterre, France
- M. Sequestro: Cognitive and Computational Neuroscience Laboratory (LNC), INSERM U960, DEC, Ecole Normale Supérieure, PSL University, 75005 Paris, France
- E. Meaux: Cognitive and Computational Neuroscience Laboratory (LNC), INSERM U960, DEC, Ecole Normale Supérieure, PSL University, 75005 Paris, France
- V. Wyart: Cognitive and Computational Neuroscience Laboratory (LNC), INSERM U960, DEC, Ecole Normale Supérieure, PSL University, 75005 Paris, France; Institut du Psychotraumatisme de l'Enfant et de l'Adolescent, Conseil Départemental Yvelines et Hauts-de-Seine, Versailles, France
- J. Grèzes: Cognitive and Computational Neuroscience Laboratory (LNC), INSERM U960, DEC, Ecole Normale Supérieure, PSL University, 75005 Paris, France
13. Faghel-Soubeyrand S, Richoz AR, Waeber D, Woodhams J, Caldara R, Gosselin F, Charest I. Neural computations in prosopagnosia. Cereb Cortex 2024; 34:bhae211. PMID: 38795358. PMCID: PMC11127037. DOI: 10.1093/cercor/bhae211.
Abstract
We report an investigation of the neural processes involved in the processing of faces and objects in brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, different individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed the temporal generalization of her brain representations. We uncovered that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS's brain representations with those of deep neural networks (DNNs). We found that the computations underlying PS's brain activity bore a closer resemblance to the early layers of a visual DNN than did those of controls. Conversely, the brain representations of neurotypicals were more akin to those of the model's later layers than PS's were. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a DNN of semantics.
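Building layer-wise RDMs from a pretrained network for this kind of brain-model comparison can be sketched with torchvision. VGG16 and the two tapped layers below are placeholders, not necessarily the network the authors used, and `imgs` is an assumed preprocessed stimulus batch.

```python
# Layer-wise RDMs from a pretrained visual DNN (illustrative sketch).
import torch
from scipy.spatial.distance import pdist
from torchvision.models import vgg16, VGG16_Weights
from torchvision.models.feature_extraction import create_feature_extractor

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()
extractor = create_feature_extractor(
    model, return_nodes={"features.4": "early", "classifier.4": "late"})
with torch.no_grad():
    feats = extractor(imgs)               # imgs: (n_stimuli, 3, 224, 224)
rdms = {name: pdist(f.flatten(1).numpy(), metric="correlation")
        for name, f in feats.items()}     # one condensed RDM per layer
```

Each layer's RDM can then be Spearman-correlated with time-resolved brain RDMs to ask which processing depth best explains a given individual at a given latency.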
Affiliation(s)
- Simon Faghel-Soubeyrand: Département de psychologie, Université de Montréal, 90 av. Vincent D'indy, Montreal, H2V 2S9, Canada; Department of Experimental Psychology, University of Oxford, Anna Watts Building, Woodstock Rd, Oxford OX2 6GG, UK
- Anne-Raphaelle Richoz: Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Delphine Waeber: Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Jessica Woodhams: School of Psychology, University of Birmingham, Hills Building, Edgbaston Park Rd, Birmingham B15 2TT, UK
- Roberto Caldara: Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Frédéric Gosselin: Département de psychologie, Université de Montréal, 90 av. Vincent D'indy, Montreal, H2V 2S9, Canada
- Ian Charest: Département de psychologie, Université de Montréal, 90 av. Vincent D'indy, Montreal, H2V 2S9, Canada
14. Li Y, Li S, Hu W, Yang L, Luo W. Spatial representation of multidimensional information in emotional faces revealed by fMRI. Neuroimage 2024; 290:120578. PMID: 38499051. DOI: 10.1016/j.neuroimage.2024.120578.
Abstract
Face perception is a complex process that involves highly specialized procedures and mechanisms. Investigating face perception can help us better understand how the brain processes fine-grained, multidimensional information. This research examined how different dimensions of facial information are represented in specific brain regions, or through inter-regional connections, using an implicit face recognition task. To capture the representation of various facial information in the brain, we employed support vector machine decoding, functional connectivity, and model-based representational similarity analysis on fMRI data, which yielded three crucial findings. First, despite the implicit nature of the task, emotions were still represented in the brain, in contrast with all other facial information. Second, the connection between the medial amygdala and the parahippocampal gyrus was essential for the representation of facial emotion in implicit tasks. Third, in implicit tasks, arousal representation occurred in the parahippocampal gyrus, while valence depended on the connection between the primary visual cortex and the parahippocampal gyrus. In conclusion, these findings dissociate the neural mechanisms of emotional valence and arousal, revealing the precise spatial patterns of multidimensional information processing in faces.
Affiliation(s)
- Yiwen Li: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Shuaixia Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Weiyu Hu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Lan Yang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
15. Carrasco CD, Bahle B, Simmons AM, Luck SJ. Using Multivariate Pattern Analysis to Increase Effect Sizes for Event-Related Potential Analyses. bioRxiv 2024:2023.11.07.566051. PMID: 37986854. PMCID: PMC10659264. DOI: 10.1101/2023.11.07.566051.
Abstract
Multivariate pattern analysis approaches can be applied to the topographic distribution of event-related potential (ERP) signals to 'decode' subtly different stimulus classes, such as different faces or different orientations. These approaches are extremely sensitive, and it seems possible that they could also be used to increase effect sizes and statistical power in traditional paradigms that ask whether an ERP component differs in amplitude across conditions. To assess this possibility, we leveraged the open-source ERP CORE dataset and compared the effect sizes resulting from conventional univariate analyses of mean amplitude with two multivariate pattern analysis approaches (support vector machine decoding and the cross-validated Mahalanobis distance, both of which are easy to compute using open-source software). We assessed these approaches across seven widely studied ERP components (N170, N400, N2pc, P3b, lateral readiness potential, error related negativity, and mismatch negativity). Across all components, we found that multivariate approaches yielded effect sizes that were as large or larger than the effect sizes produced by univariate approaches. These results indicate that researchers could obtain larger effect sizes, and therefore greater statistical power, by using multivariate analysis of topographic voltage patterns instead of traditional univariate analyses in many ERP studies.
Affiliation(s)
- Brett Bahle: Center for Mind & Brain, University of California, Davis
- Steven J Luck: Center for Mind & Brain, University of California, Davis
16. Liu J, Fan T, Chen Y, Zhao J. Seeking the neural representation of statistical properties in print during implicit processing of visual words. NPJ Sci Learn 2023; 8:60. PMID: 38102191. PMCID: PMC10724295. DOI: 10.1038/s41539-023-00209-3.
Abstract
Statistical learning (SL) plays a key role in literacy acquisition. Studies have increasingly revealed the influence of the distributional statistical properties of words on visual word processing, including the effects of word frequency (lexical level) and of mappings between orthography, phonology, and semantics (sub-lexical level). However, there has been scant evidence directly confirming that the statistical properties contained in print can be characterized by neural activities. Using time-resolved representational similarity analysis (RSA), the present study examined neural representations of different types of statistical properties in visual word processing. From the perspective of predictive coding, an equal probability sequence with low built-in prediction precision and three oddball sequences with high built-in prediction precision were designed with consistent and three types of inconsistent (orthographically inconsistent, orthography-to-phonology inconsistent, and orthography-to-semantics inconsistent) Chinese characters as visual stimuli. In the three oddball sequences, consistent characters were set as the standard stimuli (probability of occurrence p = 0.75) and the three types of inconsistent characters were set as deviant stimuli (p = 0.25), respectively. In the equal probability sequence, the same consistent and inconsistent characters were presented randomly with identical occurrence probability (p = 0.25). Significant neural representation of word frequency was observed in the equal probability sequence. By contrast, neural representations of sub-lexical statistics emerged only in the oddball sequences, where short-term predictions were shaped. These findings reveal that statistical properties learned from the long-term print environment continue to play a role in current word-processing mechanisms, and that these mechanisms can be modulated by short-term predictions.
Affiliation(s)
- Jianyi Liu: School of Psychology, Shaanxi Normal University, and Key Laboratory for Behavior and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Tengwen Fan: School of Psychology, Shaanxi Normal University, and Key Laboratory for Behavior and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Yan Chen: Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan, China
- Jingjing Zhao: School of Psychology, Shaanxi Normal University, and Key Laboratory for Behavior and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
17. Zhang Z, Chen T, Liu Y, Wang C, Zhao K, Liu CH, Fu X. Decoding the temporal representation of facial expression in face-selective regions. Neuroimage 2023; 283:120442. PMID: 37926217. DOI: 10.1016/j.neuroimage.2023.120442.
Abstract
The ability of humans to discern facial expressions in a timely manner typically relies on distributed face-selective regions for rapid neural computations. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotion (happiness, sadness, anger, disgust, fear, surprise, and neutral). Analysis of the time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions were successfully classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater accuracy than FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we entered the facial expression stimuli into a convolutional neural network (CNN) to perform similarity analyses against human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after the stimuli, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between behavioral performance and the neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of information involved in facial expression discrimination across multiple face-selective regions, which advances our understanding of how the human brain processes facial expressions.
Affiliation(s)
- Zhihao Zhang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Chen: Chongqing Key Laboratory of Non-Linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing 400715, China
- Ye Liu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chongyang Wang: Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Ke Zhao: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chang Hong Liu: Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Xiaolan Fu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
18. Lee Y, Seo Y, Lee Y, Lee D. Dimensional emotions are represented by distinct topographical brain networks. Int J Clin Health Psychol 2023; 23:100408. PMID: 37663040. PMCID: PMC10472247. DOI: 10.1016/j.ijchp.2023.100408.
Abstract
The ability to recognize others' facial emotions has become increasingly important since the COVID-19 pandemic, which created stressful situations for emotion regulation. Considering the importance of emotion in maintaining a social life, the emotion knowledge needed to perceive and label the emotions of oneself and others requires an understanding of affective dimensions, such as emotional valence and emotional arousal. However, limited information is available about whether the behavioral representation of affective dimensions is similar to their neural representation. To explore the relationship between brain and behavior in the representational geometries of affective dimensions, we constructed a behavioral paradigm in which emotional faces were categorized into geometric spaces along the valence, arousal, and combined valence-arousal dimensions. We then compared these representations to neural representations of the faces acquired by functional magnetic resonance imaging. We found that affective dimensions were similarly represented in behavior and the brain. Specifically, behavioral and neural representations of valence were less similar to those of arousal. We also found that valence was represented in the dorsolateral prefrontal cortex, frontal eye fields, precuneus, and early visual cortex, whereas arousal was represented in the cingulate gyrus, middle frontal gyrus, orbitofrontal cortex, fusiform gyrus, and early visual cortex. In conclusion, the current study suggests that dimensional emotions are similarly represented in behavior and the brain and have distinct topographical organizations in the brain.
Affiliation(s)
- Youngju Lee: Cognitive Science Research Group, Korea Brain Research Institute, 61 Cheomdan-ro, Dong-gu, Daegu 41062, Republic of Korea
- Dongha Lee: Cognitive Science Research Group, Korea Brain Research Institute, 61 Cheomdan-ro, Dong-gu, Daegu 41062, Republic of Korea
19. Nie L, Ku Y. Decoding Emotion From High-frequency Steady State Visual Evoked Potential (SSVEP). J Neurosci Methods 2023:109919. PMID: 37422072. DOI: 10.1016/j.jneumeth.2023.109919.
Abstract
BACKGROUND: Steady-state visual evoked potentials (SSVEPs) elicited by flickering sensory stimuli have been widely applied in brain-machine interfaces (BMIs). Yet it remains largely unexplored whether affective information can be decoded from SSVEP signals, especially at frequencies above the critical flicker frequency (the upper limit at which flicker remains visible).
NEW METHOD: Participants fixated on visual stimuli presented at 60 Hz, above the critical flicker frequency. The stimuli were pictures with different affective valence (positive, neutral, negative) in distinct semantic categories (human, animal, scene). SSVEP entrainment evoked in the brain by the flickering stimuli at 60 Hz was used to decode affective and semantic information.
RESULTS: During stimulus presentation (1 s), affective valence could be decoded from the SSVEP signals at 60 Hz, whereas semantic category could not. In contrast, neither affective nor semantic information could be decoded from brain signals in the 1 s before stimulus onset.
COMPARISON WITH EXISTING METHODS: Previous studies focused mainly on EEG activity tagged at frequencies below the critical flicker frequency and investigated whether the affective valence of stimuli drew participants' attention. The current study is the first to use SSVEP signals at a high frequency (60 Hz), above the critical flicker frequency, to decode affective information from stimuli. The high-frequency flicker is invisible and thus substantially reduced participant fatigue.
CONCLUSIONS: Affective information could be decoded from high-frequency SSVEPs, a finding that could inform the design of affective BMIs.
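Decoding from a frequency-tagged response like this one typically reduces to using spectral power at the tagging frequency as features. The sketch below assumes a hypothetical `epochs_data` array (n_trials, n_channels, n_times), a 1000 Hz sampling rate, 1 s epochs, and `valence_labels`; none of these are the study's actual parameters.

```python
# Valence decoding from 60 Hz SSVEP power (illustrative sketch).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

sfreq, n_times = 1000, 1000                  # 1 s of data at 1000 Hz (assumed)
freqs = np.fft.rfftfreq(n_times, d=1.0 / sfreq)
spec = np.abs(np.fft.rfft(epochs_data, axis=-1)) ** 2   # trials x chans x freqs

tag = np.argmin(np.abs(freqs - 60.0))        # bin at the 60 Hz tagging frequency
X = spec[:, :, tag]                          # 60 Hz power per channel as features
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, valence_labels, cv=5).mean()
```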
Affiliation(s)
- Lu Nie: Guangdong Provincial Key Laboratory of Brain Function and Disease, Center for Brain and Mental Well-being, Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Yixuan Ku: Guangdong Provincial Key Laboratory of Brain Function and Disease, Center for Brain and Mental Well-being, Department of Psychology, Sun Yat-sen University, Guangzhou, China; Peng Cheng Laboratory, Shenzhen, China