1
Jang J, Kim J. Consistency of affective responses to naturalistic stimuli across individuals using intersubject correlation analysis based on neuroimaging data. Brain Cogn 2025; 186:106295. PMID: 40188618. DOI: 10.1016/j.bandc.2025.106295.
Abstract
In this study, we used functional magnetic resonance imaging (fMRI) data obtained for naturalistic emotional stimuli to examine the consistency of neural responses among participants in specific regions related to valence. We reanalyzed fMRI data from 17 participants as they watched episodes of "Sherlock" and used emotional ratings from 125 participants. Intersubject correlation analysis was conducted to determine regions where neural response patterns were synchronized across participants according to the pattern of valence changes. As a validation analysis, multidimensional scaling was conducted to investigate emotional representation in the significant regions of interest. The results revealed increased neural synchrony in the ventromedial prefrontal cortex, bilateral superior frontal cortex, left posterior cingulate cortex, thalamus, right anterior cingulate cortex, and bilateral inferior frontal cortices during the presentation of positive scenes. The bilateral superior temporal gyrus and bilateral medial temporal gyrus exhibited increased neural synchrony as negative scenes were presented. Moreover, the left inferior frontal cortex and right superior frontal gyrus were found to be engaged in emotion representation and to display increased neural synchrony. These findings provide insights into the differential neural responses to emotionally evocative naturalistic stimuli as compared with conventional experimental stimuli. The study also highlights the potential of intersubject correlation analysis for examining the consistency of neural responses to naturalistic stimuli.
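The core computation behind intersubject correlation (ISC) is simple to illustrate. The sketch below is not the authors' pipeline; it shows a leave-one-out ISC for a single region, where each subject's time course is correlated with the mean time course of the remaining subjects. The array shapes and simulated data are hypothetical, and significance for such values is typically assessed with permutation or bootstrap procedures rather than parametric tests.

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation.

    data: array of shape (n_subjects, n_timepoints) holding one region's
          time course per subject (hypothetical example data).
    Returns one ISC value per subject: the Pearson correlation between that
    subject's time course and the mean time course of all other subjects.
    """
    n_subjects = data.shape[0]
    isc = np.zeros(n_subjects)
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(data[s], others)[0, 1]
    return isc

# Toy example: 17 subjects, 600 time points of simulated signal
rng = np.random.default_rng(0)
shared = rng.standard_normal(600)                      # stimulus-driven component
data = shared + 0.8 * rng.standard_normal((17, 600))   # add subject-specific noise
print(leave_one_out_isc(data).mean())                  # group-average ISC
```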
Affiliation(s)
- Junhyeok Jang
- Department of Psychology, Jeonbuk National University, South Korea
- Jongwan Kim
- Department of Psychology, Jeonbuk National University, South Korea
2
Guo R, Wang G, Wu D, Wu Z. Keep bright in the dark: Multimodal emotional effects on donation-based crowdfunding performance and their empathic mechanisms. Br J Psychol 2025. PMID: 39871780. DOI: 10.1111/bjop.12774.
Abstract
How to raise donations effectively, especially in the E-era, has puzzled fundraisers and scientists across various disciplines. Our research focuses on donation-based crowdfunding projects and investigates how the emotional valence expressed verbally (in textual descriptions) and visually (in facial images) in project descriptions affects project performance. Study 1 uses field data (N = 3817), collecting project information and descriptions from a top donation-based crowdfunding platform, computing visual and verbal emotional valence with a deep-learning-based affective computing method, and analysing how multimodal emotional valence influences donation outcomes. Study 2 conducts experiments with GPT-4 (Study 2a, N = 400) and with human participants (Study 2b, N = 240), manipulating the project's visual and verbal emotional valence through AI-generated stimuli and then assessing donation decisions (both GPT-4 and humans) and corresponding state empathy (humans only). The results indicate a multimodal positivity superiority effect: both visual and verbal emotional valence promote the initial whether-to-donate decision, whereas only verbal emotional valence further promotes the how-much-to-donate decision. Notably, these multimodal emotional effects can be explained through distinct mediating paths of empathic concern and empathic hopefulness. The current study theoretically advances our understanding of the emotional motivations underlying human prosociality and provides insights into crafting impactful advertisements for online donations.
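The reported pattern, with valence in both modalities predicting whether people donate but only verbal valence predicting how much, can be illustrated with a simple two-part regression sketch. This is not the authors' statistical model; the variable names and simulated data below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 1000
visual_valence = rng.uniform(-1, 1, n)   # valence score from facial images (hypothetical)
verbal_valence = rng.uniform(-1, 1, n)   # valence score from text descriptions (hypothetical)
X = np.column_stack([visual_valence, verbal_valence])

# Simulate the two-part outcome: both modalities affect whether people donate,
# but only verbal valence affects how much donors give.
p_donate = 1 / (1 + np.exp(-(0.5 * visual_valence + 0.5 * verbal_valence)))
donated = rng.random(n) < p_donate
amount = np.where(donated, 10 + 5 * verbal_valence + rng.normal(0, 2, n), 0.0)

# Part 1: whether-to-donate (logistic regression on all projects)
whether_model = LogisticRegression().fit(X, donated)
# Part 2: how-much-to-donate (linear regression on donors only)
how_much_model = LinearRegression().fit(X[donated], amount[donated])

print("whether-to-donate coefficients :", whether_model.coef_[0])
print("how-much-to-donate coefficients:", how_much_model.coef_)
```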
Affiliation(s)
- Rui Guo
- School of Marxism, Beijing Normal University, Beijing, China
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China
- Guolong Wang
- School of Information Technology & Management, University of International Business and Economics, Beijing, China
- Ding Wu
- School of International Trade and Economics, University of International Business and Economics, Beijing, China
- Zhen Wu
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China
3
Shuxiang H, Ying L, Qizong Y, Huan Z, Maoping Z. Harmonizing the past: EEG-based brain network unveil modality-specific mechanisms of nostalgia. Front Psychol 2025; 16:1517449. PMID: 39911992. PMCID: PMC11794493. DOI: 10.3389/fpsyg.2025.1517449.
Abstract
Introduction: Nostalgia is a complex emotional experience involving fond memories of the past and mild sadness, characterized by positive emotions associated with reflecting on previous events. It can awaken emotional memories of loved ones or significant events, contributing to an increase in positive emotions. An unresolved question regarding nostalgia is whether different channels of nostalgia input exhibit distinct mechanisms.
Methods: This study examined the emotional and neural effects of nostalgia using various sensory channels through behavioral experiments and electroencephalography (EEG) measurements conducted with college students in China. Participants' emotions were elicited using nostalgic and non-nostalgic stimuli presented through three different sensory channels: auditory (sound only), visual (e.g., still images or synchronized lyrics related to music), and audiovisual (a combination of sound and visual elements, such as music videos).
Results: The results demonstrated that nostalgic stimuli elicited significantly higher levels of emotional arousal, pleasure, nostalgia, and dominance compared to non-nostalgic stimuli. At the neural level, nostalgic stimuli enhanced the connection strength, global and local efficiency, and diminished eigenpath length of brain networks in the alpha and gamma bands. Additionally, nostalgia through the auditory channel induced higher activity intensity in the theta and gamma bands and increased brainwave amplitudes in the alpha bands. The audiovisual channel was capable of triggering stronger alpha-wave responses than the visual channel alone.
Discussion: These findings suggest that nostalgia effectively triggers positive emotional states and enhances cognitive processing. The audiovisual channel, in particular, showed advantages in eliciting alpha-wave responses. Further research is needed to explore the potential of nostalgia as an adjunctive therapeutic tool.
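The network measures named here (connection strength, global and local efficiency, path length) are standard graph-theoretic quantities. The sketch below shows one common way to compute them from a channel-by-channel connectivity matrix; the random matrix and the 20% proportional threshold are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

# Hypothetical EEG connectivity matrix (e.g., alpha-band coherence between
# 32 channels); symmetric with unit diagonal.
n_channels = 32
conn = rng.uniform(0, 1, (n_channels, n_channels))
conn = (conn + conn.T) / 2
np.fill_diagonal(conn, 1.0)

# Binarize by keeping the strongest 20% of connections (proportional threshold).
upper = conn[np.triu_indices(n_channels, k=1)]
threshold = np.quantile(upper, 0.8)
adjacency = (conn >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)

strength = upper.mean()                      # mean connection strength
global_eff = nx.global_efficiency(G)
local_eff = nx.local_efficiency(G)
# Characteristic path length is defined on the largest connected component.
largest = G.subgraph(max(nx.connected_components(G), key=len))
path_length = nx.average_shortest_path_length(largest)

print(strength, global_eff, local_eff, path_length)
```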
Affiliation(s)
- Hu Shuxiang
- School of Music, Southwest University, Chongqing, China
- School of Music, China Music Mental Health Institute, Southwest University, Chongqing, China
- Liu Ying
- School of Music, Southwest University, Chongqing, China
- School of Music, China Music Mental Health Institute, Southwest University, Chongqing, China
- Yue Qizong
- School of Music, Southwest University, Chongqing, China
- School of Music, China Music Mental Health Institute, Southwest University, Chongqing, China
- Zhao Huan
- School of Music, Southwest University, Chongqing, China
- School of Music, China Music Mental Health Institute, Southwest University, Chongqing, China
- Zheng Maoping
- School of Music, Southwest University, Chongqing, China
- School of Music, China Music Mental Health Institute, Southwest University, Chongqing, China
4
Gao C, Hayes WM, LaPierre M, Shinkareva SV. The effect of auditory valence on subsequent visual semantic processing. Psychon Bull Rev 2023; 30:1928-1938. PMID: 36997717. DOI: 10.3758/s13423-023-02269-3.
Abstract
Emotion influences many cognitive processes and plays an important role in our daily life. Previous studies have focused on the effects of arousal on subsequent cognitive processing, but the effect of valence on subsequent semantic processing remains unclear. The present study examined the effect of auditory valence on subsequent visual semantic processing while controlling for arousal. We used instrumental music clips that varied in valence but were matched in arousal to induce valence states and asked participants to make natural versus man-made judgements about subsequently presented neutral objects. We found that positive and negative valence similarly impaired subsequent semantic processing compared with neutral valence. Linear ballistic accumulator model analyses showed that the valence effects can be attributed to differences in drift rate, suggesting that the effects are likely related to attentional selection. Our findings are consistent with a motivated attention model, indicating comparable attentional capture by positive and negative valence in modulating subsequent cognitive processes.
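To make the drift-rate interpretation concrete, the following is a minimal simulator for the linear ballistic accumulator (LBA), in which each response accumulates evidence linearly from a random start point until a threshold is reached. The parameter values are hypothetical and chosen only to show how a lower drift rate produces slower, less accurate responses; this is not the authors' fitted model.

```python
import numpy as np

def simulate_lba_trial(drift_means, rng, A=0.5, b=1.0, t0=0.2, s=0.25):
    """Simulate one trial of the linear ballistic accumulator (LBA).

    drift_means: mean drift rate for each response accumulator
                 (values here are hypothetical, not fitted parameters).
    A: upper bound of the uniform start-point distribution
    b: response threshold; t0: non-decision time; s: drift-rate SD.
    """
    drift_means = np.asarray(drift_means, dtype=float)
    while True:
        drifts = rng.normal(drift_means, s)
        if np.any(drifts > 0):            # at least one accumulator must move upward
            break
    starts = rng.uniform(0, A, size=drift_means.size)
    with np.errstate(divide="ignore"):
        times = np.where(drifts > 0, (b - starts) / drifts, np.inf)
    winner = int(np.argmin(times))        # first accumulator to reach threshold
    return winner, t0 + times[winner]

rng = np.random.default_rng(3)
# A lower drift rate for the correct response mimics slower, less accurate
# semantic judgements after emotional (vs. neutral) music.
neutral = [simulate_lba_trial([2.0, 1.0], rng) for _ in range(5000)]
emotional = [simulate_lba_trial([1.6, 1.0], rng) for _ in range(5000)]
for label, trials in [("neutral", neutral), ("emotional", emotional)]:
    rts = np.array([rt for _, rt in trials])
    acc = np.mean([resp == 0 for resp, _ in trials])
    print(label, "accuracy:", round(acc, 3), "mean RT:", round(rts.mean(), 3))
```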
Affiliation(s)
- Chuanji Gao
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA
- William M Hayes
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA
- Melissa LaPierre
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA
- Svetlana V Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA
5
Souter NE, Reddy A, Walker J, Marino Dávolos J, Jefferies E. How do valence and meaning interact? The contribution of semantic control. J Neuropsychol 2023; 17:521-539. PMID: 37010272. DOI: 10.1111/jnp.12312.
Abstract
The hub-and-spoke model of semantic cognition proposes that conceptual representations in a heteromodal 'hub' interact with and emerge from modality-specific features or 'spokes', including valence (whether a concept is positive or negative), along with visual and auditory features. As a result, valence congruency might facilitate our ability to link words conceptually. Semantic relatedness may similarly affect explicit judgements about valence. Moreover, conflict between meaning and valence may recruit semantic control processes. Here we tested these predictions using two-alternative forced-choice tasks, in which participants matched a probe word to one of two possible target words, based on either global meaning or valence. Experiment 1 examined timed responses in healthy young adults, while Experiment 2 examined decision accuracy in semantic aphasia patients with impaired controlled semantic retrieval following left hemisphere stroke. Across both experiments, semantically related targets facilitated valence matching, while related distractors impaired performance. Valence congruency was also found to facilitate semantic decision-making. People with semantic aphasia showed impaired valence matching and had particular difficulty when semantically related distractors were presented, suggesting that the selective retrieval of valence information relies on semantic control processes. Taken together, the results are consistent with the hypothesis that automatic access to the global meaning of written words affects the processing of valence, and that the valence of words is also retrieved even when this feature is task-irrelevant, affecting the efficiency of global semantic judgements.
Affiliation(s)
- Ariyana Reddy
- Department of Psychology, University of York, York, UK
- Faculty of Health Sciences, University of Hull, Hull, UK
- Jake Walker
- Department of Psychology, University of York, York, UK
- School of Psychology and Computer Science, University of Central Lancashire, Preston, UK
6
Xu S, Zhang Z, Li L, Zhou Y, Lin D, Zhang M, Zhang L, Huang G, Liu X, Becker B, Liang Z. Functional connectivity profiles of the default mode and visual networks reflect temporal accumulative effects of sustained naturalistic emotional experience. Neuroimage 2023; 269:119941. PMID: 36791897. DOI: 10.1016/j.neuroimage.2023.119941.
Abstract
Determining and decoding emotional brain processes under ecologically valid conditions remains a key challenge in affective neuroscience. Current functional magnetic resonance imaging (fMRI) based emotion decoding studies rely mainly on brief and isolated episodes of emotion induction, while studies of sustained emotional experience in naturalistic environments that mirror daily life are scarce. Here we used 12 different 10-minute movie clips as ecologically valid emotion-evoking procedures in n = 52 individuals to explore emotion-specific fMRI functional connectivity (FC) profiles at the whole-brain level and high spatial resolution (432 parcellations including cortical and subcortical structures). Machine-learning-based decoding and cross-validation procedures allowed us to investigate FC profiles that accurately distinguish sustained happiness from sadness and that generalize across subjects, movie clips, and parcellations. Both functional brain network-based and subnetwork-based emotion classification results suggested that emotion manifests as a distributed representation across multiple networks rather than in a single functional network or subnetwork. Further, the results showed that functional networks associated with the Visual Network (VN) and Default Mode Network (DMN), especially VN-DMN, contributed strongly to emotion classification. To estimate the temporal accumulative effect of naturalistic, long-term movie-evoked emotions, we divided the 10-min episode into three stages: early stimulation (1-200 s), middle stimulation (201-400 s), and late stimulation (401-600 s), and examined emotion classification performance at each stage. We found that the late stimulation stage contributed most to the classification (accuracy = 85.32%, F1-score = 85.62%) compared to the early and middle stages, implying that continuous exposure to emotional stimulation can lead to more intense emotions and further enhance emotion-specific distinguishable representations. The present work demonstrated that sustained happiness and sadness under naturalistic conditions are expressed in emotion-specific network profiles and that these expressions may play different roles in the generation and modulation of emotions. These findings elucidate the importance of network-level adaptations for sustained emotional experiences in naturalistic contexts and open new avenues for imaging network-level contributions under naturalistic conditions.
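The general recipe of FC-based emotion decoding with cross-validation can be sketched as follows: vectorize each subject's parcel-by-parcel correlation matrix and feed it to a cross-validated linear classifier. This is a toy illustration under simplified assumptions (far fewer parcels, simulated time series, a generic linear SVM), not the authors' analysis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(4)

def fc_features(timeseries):
    """Vectorize the upper triangle of a parcel-by-parcel correlation matrix.

    timeseries: (n_timepoints, n_parcels) array for one subject and one clip.
    """
    fc = np.corrcoef(timeseries.T)
    iu = np.triu_indices_from(fc, k=1)
    return fc[iu]

# Hypothetical data: 52 subjects x 2 emotions, 200 time points, 40 parcels
# (the study used 432 parcels; 40 keeps this toy example fast).
n_subjects, n_timepoints, n_parcels = 52, 200, 40
X, y = [], []
for emotion in (0, 1):                      # 0 = sadness, 1 = happiness
    for _ in range(n_subjects):
        ts = rng.standard_normal((n_timepoints, n_parcels))
        # inject a shared signal into a subset of parcels for one emotion,
        # creating emotion-specific connectivity
        ts[:, :10] += 0.3 * emotion * rng.standard_normal((n_timepoints, 1))
        X.append(fc_features(ts))
        y.append(emotion)
X, y = np.array(X), np.array(y)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=cv)
print("cross-validated accuracy:", scores.mean())
```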
Affiliation(s)
- Shuyue Xu
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Zhiguo Zhang
- Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China; Peng Cheng Laboratory, Shenzhen 518055, China; Marshall Laboratory of Biomedical Engineering, Shenzhen 518060, China
- Linling Li
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Yongjie Zhou
- Department of Psychiatric Rehabilitation, Shenzhen Kangning Hospital, Shenzhen, China
- Danyi Lin
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Min Zhang
- Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
- Li Zhang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Gan Huang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Xiqin Liu
- Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, MOE Key Laboratory for Neuroinformation, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- Benjamin Becker
- Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, MOE Key Laboratory for Neuroinformation, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- Zhen Liang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China; Marshall Laboratory of Biomedical Engineering, Shenzhen 518060, China
7
Cortes PM, García-Hernández JP, Iribe-Burgos FA, Guevara MA, Hernández-González M. Effects of emotional congruency and task complexity on decision-making. Cogn Process 2023; 24:161-171. PMID: 36862269. DOI: 10.1007/s10339-023-01129-1.
Abstract
The heuristic approach to decision-making holds that the selection process becomes more efficient when part of the available information is ignored. One element involved in selecting information is emotional valence. If emotional congruency is related to simplified decision-making strategies, then this factor should interact with task complexity. The present study explored how factors of this nature influence decision-making efficiency. We hypothesized that emotional congruency would have a positive effect on task execution and that the magnitude of this effect would increase with task complexity, because in that condition the amount of information to be processed is greater, meaning that a heuristic approach to the problem would be more efficient. We designed an in-browser decision-making task in which participants had to select emotional images to gain points. Depending on the correlation between emotional valence and in-task image value, we defined three emotional congruency conditions: direct, null, and inverse. Our results show that distinct types of emotional congruency have differential effects on behavior. While direct congruency enhanced overall decision-making performance, inverse congruency interacted with task complexity to modify the pace at which task feedback affected behavior.
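The three congruency conditions can be thought of as different mappings between an image's normative valence and its in-task point value. The sketch below simulates such an assignment; the noise level, point range, and stimulus counts are hypothetical and only illustrate the intended sign of the valence-value correlation in each condition.

```python
import numpy as np

rng = np.random.default_rng(5)
n_images = 60
valence = rng.uniform(-1, 1, n_images)       # normative emotional valence per image

def assign_values(valence, congruency, rng):
    """Assign in-task point values so that their correlation with valence is
    positive (direct), near zero (null), or negative (inverse).
    Noise level and point range are hypothetical."""
    noise = rng.normal(0, 0.3, valence.size)
    if congruency == "direct":
        raw = valence + noise
    elif congruency == "inverse":
        raw = -valence + noise
    else:                                     # null: value unrelated to valence
        raw = rng.normal(0, 1, valence.size)
    # Rescale to a 1-100 point range
    return np.interp(raw, (raw.min(), raw.max()), (1, 100))

for condition in ("direct", "null", "inverse"):
    values = assign_values(valence, condition, rng)
    r = np.corrcoef(valence, values)[0, 1]
    print(condition, "valence-value correlation:", round(r, 2))
```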
Affiliation(s)
- Pedro Manuel Cortes
- Instituto de Neurociencias, Universidad de Guadalajara, Francisco de Quevedo 180, Col. Arcos-Vallarta, 44130, Guadalajara, Jalisco, Mexico
- Juan Pablo García-Hernández
- Instituto de Neurociencias, Universidad de Guadalajara, Francisco de Quevedo 180, Col. Arcos-Vallarta, 44130, Guadalajara, Jalisco, Mexico
- Fabiola Alejandra Iribe-Burgos
- Instituto de Neurociencias, Universidad de Guadalajara, Francisco de Quevedo 180, Col. Arcos-Vallarta, 44130, Guadalajara, Jalisco, Mexico
- Miguel Angel Guevara
- Instituto de Neurociencias, Universidad de Guadalajara, Francisco de Quevedo 180, Col. Arcos-Vallarta, 44130, Guadalajara, Jalisco, Mexico
- Marisela Hernández-González
- Instituto de Neurociencias, Universidad de Guadalajara, Francisco de Quevedo 180, Col. Arcos-Vallarta, 44130, Guadalajara, Jalisco, Mexico
8
Chang DHF, Thinnes D, Au PY, Maziero D, Stenger VA, Sinnett S, Vibell J. Sound-modulations of visual motion perception implicate the cortico-vestibular brain. Neuroimage 2022; 257:119285. PMID: 35537600. DOI: 10.1016/j.neuroimage.2022.119285.
Abstract
A widely used example of the intricate (yet poorly understood) intertwining of multisensory signals in the brain is the audiovisual bounce-inducing effect (ABE). In this effect, two identical objects move along the azimuth with uniform motion in opposite directions. The perceptual interpretation of the motion is ambiguous and is modulated if a transient sound is presented in coincidence with the point of overlap of the two objects' motion trajectories. This phenomenon has long been written off as reflecting simple attentional or decision-making mechanisms, although its neurological underpinnings are not well understood. Using behavioural metrics concurrently with event-related fMRI, we show that sound-induced modulations of motion perception can be further modulated by changing the motion dynamics of the visual targets. The phenomenon engages the posterior parietal cortex and the parieto-insular vestibular cortical complex, with a close correspondence between activity in these regions and behaviour. These findings suggest that the insular cortex is engaged in deriving a probabilistic perceptual solution through the integration of multisensory data.
Affiliation(s)
- Dorita H F Chang
- Department of Psychology and The State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong
- David Thinnes
- Department of Psychology, University of Hawai'i at Mānoa, Hawaii, USA; Faculty of Medicine, Systems Neuroscience & Neurotechnology Unit, Saarland University & HTW Saar, Germany
- Pak Yam Au
- Department of Psychology and The State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong
- Danilo Maziero
- Department of Medicine, MR Research Program, John A. Burns School of Medicine, University of Hawai'i, HI, USA
- Victor Andrew Stenger
- Department of Medicine, MR Research Program, John A. Burns School of Medicine, University of Hawai'i, HI, USA
- Scott Sinnett
- Department of Psychology, University of Hawai'i at Mānoa, Hawaii, USA
- Jonas Vibell
- Department of Psychology, University of Hawai'i at Mānoa, Hawaii, USA
9
Su C, Zhou H, Wang C, Geng F, Hu Y. Individualized video recommendation modulates functional connectivity between large scale networks. Hum Brain Mapp 2021; 42:5288-5299. PMID: 34363282. PMCID: PMC8519862. DOI: 10.1002/hbm.25616.
Abstract
With the emergence of AI-powered recommender systems and their extensive use in video streaming services, questions and concerns also arise. Why can recommended video content continuously capture users' attention? What is the impact of long-term exposure to personalized video content on one's behaviors and brain functions? To address these questions, we designed an fMRI experiment presenting participants with personally recommended videos and generally recommended ones. To examine how large-scale networks were modulated by personalized video content, graph theory analysis was applied to investigate the interactions among seven networks: the ventral and dorsal attention networks (VAN, DAN), frontal-parietal network (FPN), salience network (SN), and three subnetworks of the default mode network (DMN): dorsal medial prefrontal (dMPFC), Core, and medial temporal lobe (MTL). Our results showed that viewing nonpersonalized video content mainly enhanced connectivity in the DAN-FPN-Core pathway, whereas viewing personalized content increased connectivity not only in this pathway but also in the DAN-VAN-dMPFC pathway. In addition, both personalized and nonpersonalized short videos decreased the coupling between SN and VAN as well as between two DMN subsystems, Core and MTL. Collectively, these findings uncovered distinct patterns of network interactions in response to short videos and provided insights into potential neural mechanisms by which human behaviors are biased by personally recommended content.
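Network-level coupling of the kind reported here is commonly summarized by averaging parcel-pair correlations between (or within) networks. The sketch below shows that step for simulated data; the parcel counts, random network assignment, and labels are hypothetical and do not reproduce the study's atlas or graph-theoretic pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data: 100 parcels x 300 time points, each parcel assigned to one
# of seven networks (labels and sizes are illustrative, not the study's atlas).
networks = ["VAN", "DAN", "FPN", "SN", "dMPFC", "Core", "MTL"]
labels = rng.choice(networks, size=100)
timeseries = rng.standard_normal((100, 300))
parcel_fc = np.corrcoef(timeseries)          # parcel-by-parcel connectivity

def network_coupling(parcel_fc, labels, net_a, net_b):
    """Average connectivity between all parcel pairs of two networks
    (or within one network when net_a == net_b, excluding the diagonal)."""
    idx_a = np.where(labels == net_a)[0]
    idx_b = np.where(labels == net_b)[0]
    block = parcel_fc[np.ix_(idx_a, idx_b)]
    if net_a == net_b:
        mask = ~np.eye(block.shape[0], dtype=bool)
        return block[mask].mean()
    return block.mean()

print("DAN-FPN coupling:", network_coupling(parcel_fc, labels, "DAN", "FPN"))
print("SN-VAN coupling :", network_coupling(parcel_fc, labels, "SN", "VAN"))
```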
Affiliation(s)
- Conghui Su
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Hui Zhou
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Chunjie Wang
- Institute of Brain Science and Department of Psychology, School of Education, Hangzhou Normal University, Hangzhou, China
- Fengji Geng
- Department of Curriculum and Learning Sciences, Zhejiang University, Hangzhou, China
- Yuzheng Hu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
10
Modality-general and modality-specific audiovisual valence processing. Cortex 2021; 138:127-137. PMID: 33684626. DOI: 10.1016/j.cortex.2021.01.022.
Abstract
A fundamental question in affective neuroscience is whether there is a common hedonic system for valence processing independent of modality, or whether there are distinct neural systems for different modalities. To address this question, we used both region-of-interest and whole-brain representational similarity analyses on functional magnetic resonance imaging data to identify modality-general and modality-specific brain areas involved in valence processing across visual and auditory modalities. First, region-of-interest analyses showed that the superior temporal cortex was associated with both the modality-general and the auditory-specific models, while the primary visual cortex was associated with the visual-specific model. Second, whole-brain searchlight analyses also identified both modality-general and modality-specific representations. The modality-general regions included the superior temporal, medial superior frontal, inferior frontal, precuneus, precentral, postcentral, supramarginal, paracentral lobule and middle cingulate cortices. The modality-specific regions included both perceptual cortices and higher-order brain areas. Valence representations derived from individualized behavioral valence ratings were consistent with these results. Together, these findings suggest both modality-general and modality-specific representations of valence.
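The logic of comparing modality-general and modality-specific models in representational similarity analysis can be sketched in a few lines: build a model dissimilarity matrix for each hypothesis and correlate it with the neural dissimilarity matrix from an ROI. The stimulus set, the two simplified model definitions, and the simulated patterns below are hypothetical, not the study's design or data.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)

# Hypothetical stimulus set: 3 valence levels x 2 modalities x 4 exemplars = 24 stimuli.
valence = np.repeat([-1, 0, 1], 8)                    # negative, neutral, positive
modality = np.tile(np.repeat([0, 1], 4), 3)           # 0 = visual, 1 = auditory

# Model representational dissimilarity matrices (condensed form from pdist):
# modality-general: dissimilarity tracks valence differences regardless of modality;
# auditory-specific: valence distinctions are coded for auditory stimuli only
# (visual stimuli are collapsed to a common value).
general_rdm = pdist(valence[:, None].astype(float), metric="euclidean")
auditory_only = np.where(modality == 1, valence, 0).astype(float)
specific_rdm = pdist(auditory_only[:, None], metric="euclidean")

# Hypothetical ROI activity patterns (24 stimuli x 50 voxels) carrying a valence
# signal, and the corresponding neural RDM (correlation distance).
patterns = rng.standard_normal((24, 50)) + 0.5 * valence[:, None]
neural_rdm = pdist(patterns, metric="correlation")

for name, model in [("modality-general", general_rdm), ("auditory-specific", specific_rdm)]:
    rho, _ = spearmanr(model, neural_rdm)
    print(name, "model fit (Spearman rho):", round(rho, 3))
```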
11
Shinkareva SV, Gao C, Wedell D. Audiovisual Representations of Valence: a Cross-study Perspective. Affect Sci 2020; 1:237-246. DOI: 10.1007/s42761-020-00023-9.
12
Levine SM, Kumpf M, Rupprecht R, Schwarzbach JV. Supracategorical fear information revealed by aversively conditioning multiple categories. Cogn Neurosci 2020; 12:28-39. PMID: 33135598. DOI: 10.1080/17588928.2020.1839039.
Abstract
Fear generalization is a critical function for survival, in which an organism extracts information from a specific instantiation of a threat (e.g., the western diamondback rattlesnake in my front yard on Sunday) and learns to fear, and accordingly respond to, pertinent higher-order information (e.g., snakes live in my yard). Previous work investigating fear conditioning in humans has used functional magnetic resonance imaging (fMRI) to demonstrate that activity patterns representing stimuli from an aversively conditioned category (CS+) are more similar to each other than those of a neutral category (CS-). Here we used fMRI and multiple aversively conditioned categories to ask whether we would find similarity increases only within the CS+ categories or also between the CS+ categories. Using representational similarity analysis, we correlated several models with activity patterns in different brain regions and found that, following fear conditioning, between-category and within-category similarity increased for the CS+ categories in the insula, superior frontal gyrus (SFG), and the right temporal pole. When specifically investigating fear generalization, these between- and within-category effects were detected in the SFG. These results advance prior pattern-based neuroimaging work by exploring the effect of aversively conditioning multiple categories and indicate an extended role for these regions in potentially representing supracategorical information during fear learning.
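The within- versus between-category similarity contrast at the center of this design can be illustrated with a short sketch: average the pairwise correlations among trial patterns within one CS+ category, between the two CS+ categories, and between CS+ and CS- as a control. The voxel and trial counts, the shared "aversive" signal, and the simulated patterns are hypothetical, not the study's data or full analysis.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical single-trial activity patterns (trials x voxels) for two
# aversively conditioned categories (CS+) and one neutral category (CS-).
n_voxels, n_trials = 80, 20
aversive_signal = 0.3 * rng.standard_normal(n_voxels)   # shared across both CS+ categories
cat_a_signal = 0.3 * rng.standard_normal(n_voxels)       # category-specific components
cat_b_signal = 0.3 * rng.standard_normal(n_voxels)
cs_plus_a = rng.standard_normal((n_trials, n_voxels)) + aversive_signal + cat_a_signal
cs_plus_b = rng.standard_normal((n_trials, n_voxels)) + aversive_signal + cat_b_signal
cs_minus = rng.standard_normal((n_trials, n_voxels))

def mean_pattern_similarity(patterns_a, patterns_b=None):
    """Mean pairwise Pearson correlation between trial patterns.
    With one argument: within-category similarity (off-diagonal pairs only).
    With two arguments: between-category similarity."""
    if patterns_b is None:
        corr = np.corrcoef(patterns_a)
        mask = ~np.eye(corr.shape[0], dtype=bool)
        return corr[mask].mean()
    corr = np.corrcoef(patterns_a, patterns_b)
    n = patterns_a.shape[0]
    return corr[:n, n:].mean()

print("within CS+ (category A)   :", mean_pattern_similarity(cs_plus_a))
print("between CS+ categories    :", mean_pattern_similarity(cs_plus_a, cs_plus_b))
print("CS+ vs CS- (control pair) :", mean_pattern_similarity(cs_plus_a, cs_minus))
```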
Affiliation(s)
- Seth M Levine
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
- Miriam Kumpf
- Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
- Rainer Rupprecht
- Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
- Jens V Schwarzbach
- Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
13
A study in affect: Predicting valence from fMRI data. Neuropsychologia 2020; 143:107473. DOI: 10.1016/j.neuropsychologia.2020.107473.