1. Mobile version of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA): Implementation and adult norms. Behav Res Methods 2024. [PMID: 38459221] [DOI: 10.3758/s13428-024-02363-x]
Abstract
Timing and rhythm abilities are complex and multidimensional skills that are highly widespread in the general population. This complexity can be partly captured by the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery, consisting of four perceptual and five sensorimotor tests (finger-tapping), has been used in healthy adults and in clinical populations (e.g., Parkinson's disease, ADHD, developmental dyslexia, stuttering), and shows sensitivity to individual differences and impairment. However, major limitations for the generalized use of this tool are the lack of reliable and standardized norms and the lack of a version of the battery that can be used outside the lab. To address these limitations, we put forward a new version of BAASTA on a tablet device capable of ensuring lab-equivalent measurements of timing and rhythm abilities. We present normative data obtained with this version of BAASTA from over 100 healthy adults between the ages of 18 and 87 years in a test-retest protocol. Moreover, we propose a new composite score to summarize beat-based rhythm capacities, the Beat Tracking Index (BTI), with close to excellent test-retest reliability. The BTI derives from two BAASTA tests (beat alignment, paced tapping) and offers a swift and practical way of measuring rhythmic abilities when research imposes strong time constraints. This mobile BAASTA implementation is more inclusive and far-reaching, while opening new possibilities for reliable remote testing of rhythmic abilities by leveraging accessible and cost-efficient technologies.
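The abstract does not spell out how the BTI is computed from the two tests. As a rough illustration of a two-test composite, the sketch below (Python) averages z-scored performance on hypothetical beat-alignment and paced-tapping scores; the function name, inputs, and z-score averaging are assumptions, not the published scoring procedure.

```python
import numpy as np

def beat_tracking_index(beat_alignment_scores, paced_tapping_scores):
    """Hypothetical composite: mean of z-scored performance on the
    beat-alignment and paced-tapping tests. The published BTI formula
    may differ; this only illustrates the idea of a two-test composite."""
    ba = np.asarray(beat_alignment_scores, dtype=float)
    pt = np.asarray(paced_tapping_scores, dtype=float)
    # Standardize each test across participants so both are on a common scale.
    ba_z = (ba - ba.mean()) / ba.std(ddof=1)
    pt_z = (pt - pt.mean()) / pt.std(ddof=1)
    # The composite is the per-participant mean of the two standardized scores.
    return (ba_z + pt_z) / 2

# Example with made-up scores for five participants.
bti = beat_tracking_index([0.82, 0.75, 0.91, 0.60, 0.88],
                          [0.78, 0.70, 0.95, 0.55, 0.85])
print(bti)
```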
2. Emotional voices modulate perception and predictions about an upcoming face. Cortex 2022; 149:148-164. [DOI: 10.1016/j.cortex.2021.12.017]
3. Unattended Emotional Prosody Affects Visual Processing of Facial Expressions in Mandarin-Speaking Chinese: A Comparison With English-Speaking Canadians. J Cross Cult Psychol 2021; 52:275-294. [PMID: 33958813] [PMCID: PMC8053741] [DOI: 10.1177/0022022121990897]
Abstract
Emotional cues from different modalities have to be integrated during communication, a process that can be shaped by an individual's cultural background. We explored this issue in 25 Chinese participants by examining how listening to emotional prosody in Mandarin influenced participants' gazes at emotional faces in a modified visual search task. We also conducted a cross-cultural comparison between the data from this study and those from our previous work in English-speaking Canadians, which used analogous methodology. In both studies, eye movements were recorded as participants scanned an array of four faces portraying fearful, angry, happy, and neutral expressions, while passively listening to a pseudo-utterance expressing one of the four emotions (a Mandarin utterance in this study; an English utterance in our previous study). The frequency and duration of fixations to each face were analyzed during the 5 seconds after face onset, both while the speech was present (early time window) and after the utterance ended (late time window). During the late window, Chinese participants looked more frequently and longer at faces conveying emotions congruent with the speech, consistent with findings from English-speaking Canadians. Cross-cultural comparison further showed that Chinese, but not Canadian, participants looked more frequently and longer at angry faces, which may signal potential conflicts and social threats. We hypothesize that socio-cultural norms related to harmony maintenance in Eastern cultures promoted Chinese participants' heightened sensitivity to, and deeper processing of, angry cues, highlighting culture-specific patterns in how individuals scan their social environment during emotion processing.
4. Temporal decoding of vocal and musical emotions: Same code, different timecourse? Brain Res 2020; 1741:146887. [PMID: 32422128] [DOI: 10.1016/j.brainres.2020.146887]
Abstract
From a baby's cry to a piece of music, we perceive emotions from our auditory environment every day. Many theories bring forward the concept of common neural substrates for the perception of vocal and musical emotions. It has been proposed that, for us to perceive emotions, music recruits emotional circuits that evolved for the processing of biologically relevant vocalizations (e.g., screams, laughs). Although some studies have found similarities between voice and instrumental music in terms of acoustic cues and neural correlates, little is known about their processing timecourse. To further understand how vocal and instrumental emotional sounds are perceived, we used EEG to compare the neural processing timecourse of both stimulus types expressed with varying degrees of complexity (vocal/musical affect bursts and emotion-embedded speech/music). Vocal stimuli in general, as well as musical/vocal bursts, were associated with a more concise sensory trace at initial stages of analysis (smaller N1), although vocal bursts had shorter latencies than musical ones. As for the P2, vocal affect bursts and emotion-embedded musical stimuli were associated with earlier P2s. These results support the idea that emotional vocal stimuli are differentiated early from other sources and provide insight into the common neurobiological underpinnings of auditory emotions.
5. Neurophysiological correlates of sexually evocative speech. Biol Psychol 2020; 154:107909. [DOI: 10.1016/j.biopsycho.2020.107909]
6. Auditory repetition suppression alterations in relation to cognitive functioning in fragile X syndrome: a combined EEG and machine learning approach. J Neurodev Disord 2018; 10:4. [PMID: 29378522] [PMCID: PMC5789548] [DOI: 10.1186/s11689-018-9223-3]
Abstract
Background: Fragile X syndrome (FXS) is a neurodevelopmental genetic disorder causing cognitive and behavioural deficits. Repetition suppression (RS), a learning phenomenon in which stimulus repetitions result in diminished brain activity, has been found to be impaired in FXS. Alterations in RS have been associated with behavioural problems in FXS; however, relations between RS and intellectual functioning have not yet been elucidated.
Methods: EEG was recorded in 14 FXS participants and 25 neurotypical controls during an auditory habituation paradigm using repeatedly presented pseudowords. Non-phase-locked signal energy was compared across presentations and between groups using linear mixed models (LMMs) in order to investigate RS effects across repetitions and brain areas and a possible relation to non-verbal IQ (NVIQ) in FXS. In addition, we explored group differences according to NVIQ and probed the feasibility of training a support vector machine to predict cognitive functioning levels across FXS participants based on single-trial RS features.
Results: LMM analyses showed that repetition effects differed between groups (FXS vs. controls) as well as with respect to NVIQ in FXS. When exploring group differences in RS patterns, we found that neurotypical controls showed the expected RS between the first and second presentations of a pseudoword. More importantly, while FXS participants in the ≤ 42 NVIQ group showed no RS, the > 42 NVIQ group showed a delayed RS response after several presentations. Concordantly, single-trial estimates of repetition effects over the first four repetitions provided the highest decoding accuracies in the classification between the FXS participant groups.
Conclusion: Electrophysiological measures of repetition effects provide a non-invasive and unbiased measure of brain responses sensitive to cognitive functioning levels, which may be useful for clinical trials in FXS.
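As a rough sketch of the decoding step mentioned in the Results, the code below trains a linear support vector machine to separate the two NVIQ groups from single-trial repetition-effect features, scored with leave-one-out cross-validation. The feature matrix, labels, kernel, and cross-validation scheme are placeholders rather than the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per FXS participant, columns holding
# single-trial estimates of the repetition effect over the first four
# presentations (random numbers stand in for the real EEG features).
X = rng.normal(size=(14, 4))
# Placeholder labels: 0 = NVIQ <= 42 group, 1 = NVIQ > 42 group.
y = np.array([0] * 7 + [1] * 7)

# Standardize features, then fit a linear-kernel SVM, scored with
# leave-one-out cross-validation (appropriate for a small sample).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Decoding accuracy: {scores.mean():.2f}")
```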
7. Effects of musical expertise on oscillatory brain activity in response to emotional sounds. Neuropsychologia 2017; 103:96-105. [DOI: 10.1016/j.neuropsychologia.2017.07.014]
8. Altered visual repetition suppression in Fragile X Syndrome: New evidence from ERPs and oscillatory activity. Int J Dev Neurosci 2017; 59:52-59. [DOI: 10.1016/j.ijdevneu.2017.03.008]
9. Early selectivity for vocal and musical sounds: electrophysiological evidence from an adaptation paradigm. Eur J Neurosci 2016; 44:2786-2794. [PMID: 27600697] [DOI: 10.1111/ejn.13391]
Abstract
There is growing interest in characterizing the neural basis of music perception and, in particular, in assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a reduction in amplitude when speech sounds were preceded by speech, compared to music. No such decrease was found when we compared the responses to musical sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.
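One way to picture the adaptation measure described here is to average the signal in an N100 window and compare trials preceded by a same-category sound with trials preceded by a different-category sound. The sketch below does this on invented data; the time window, prior channel averaging, and data layout are assumptions for illustration only.

```python
import numpy as np

def n100_adaptation(epochs, times_ms, preceded_by_same, window=(80, 120)):
    """epochs: array (n_trials, n_times) of voltage already averaged over a
    set of channels; preceded_by_same: boolean array (n_trials,).
    Returns mean N100-window amplitude for same- vs different-category
    preceding sounds. Illustrative only; window and channels are assumed."""
    times_ms = np.asarray(times_ms)
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    mean_amp = epochs[:, mask].mean(axis=1)
    same = mean_amp[preceded_by_same].mean()
    different = mean_amp[~preceded_by_same].mean()
    # For a negative-going N100, a less negative value after same-category
    # sounds indicates an amplitude reduction (adaptation).
    return same, different

# Tiny invented example: 4 trials, 5 time samples spanning 60-140 ms.
epochs = np.array([[-1.0, -2.0, -3.0, -2.0, -1.0],
                   [-0.5, -1.0, -1.5, -1.0, -0.5],
                   [-1.2, -2.4, -3.6, -2.4, -1.2],
                   [-0.4, -0.9, -1.3, -0.9, -0.4]])
times = [60, 80, 100, 120, 140]
print(n100_adaptation(epochs, times, np.array([False, True, False, True])))
```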
10.
Abstract
To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, and compared their data with existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions, and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable with what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in Chinese. Correlation analyses indicated that the immigrants' length of residence in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling those of North Americans. Our data suggest that, in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation and later to alterations in brain activity, providing new evidence of humans' neurocognitive plasticity in communication.
11. Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody. Biol Psychol 2015; 111:14-25. [PMID: 26307467] [DOI: 10.1016/j.biopsycho.2015.08.008]
Abstract
This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100) and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450-700ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
12. Cultural differences in on-line sensitivity to emotional voices: comparing East and West. Front Hum Neurosci 2015; 9:311. [PMID: 26074808] [PMCID: PMC4448034] [DOI: 10.3389/fnhum.2015.00311]
Abstract
Evidence that culture modulates on-line neural responses to the emotional meanings encoded by vocal and facial expressions was demonstrated recently in a study comparing English North Americans and Chinese (Liu et al., 2015). Here, we compared how individuals from these two cultures passively respond to emotional cues from faces and voices using an Oddball task. Participants viewed in-group emotional faces, with or without simultaneous vocal expressions, while performing a face-irrelevant visual task as the EEG was recorded. A significantly larger visual Mismatch Negativity (vMMN) was observed for Chinese vs. English participants when faces were accompanied by voices, suggesting that Chinese were influenced to a larger extent by task-irrelevant vocal cues. These data highlight further differences in how adults from East Asian vs. Western cultures process socio-emotional cues, arguing that distinct cultural practices in communication (e.g., display rules) shape neurocognitive activity associated with the early perception and integration of multi-sensory emotional cues.
13. Time course of the influence of musical expertise on the processing of vocal and musical sounds. Neuroscience 2015; 290:175-184. [PMID: 25637804] [DOI: 10.1016/j.neuroscience.2015.01.033]
Abstract
Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of the neural dynamics of auditory processing and reveal how it is shaped by stimulus category and participants' expertise.
14. Culture modulates the brain response to human expressions of emotion: electrophysiological evidence. Neuropsychologia 2014; 67:1-13. [PMID: 25477081] [DOI: 10.1016/j.neuropsychologia.2014.11.034]
Abstract
To understand how culture modulates on-line neural responses to social information, this study compared how individuals from two distinct cultural groups, English-speaking North Americans and Chinese, process emotional meanings of multi-sensory stimuli as indexed by both behavioural (accuracy) and event-related potential (N400) measures. In an emotional Stroop-like task, participants were presented with face-voice pairs expressing congruent or incongruent emotions in conditions where they judged the emotion of one modality while ignoring the other (face or voice focus task). Results indicated that while both groups were sensitive to emotional differences between channels (with lower accuracy and higher N400 amplitudes for incongruent face-voice pairs), there were marked group differences in how intruding facial or vocal cues affected accuracy and N400 amplitudes, with English participants showing greater interference from irrelevant faces than Chinese. Our data illuminate distinct biases in how adults from East Asian versus Western cultures process socio-emotional cues, supplying new evidence that cultural learning modulates not only behaviour but also the neurocognitive response to different features of multi-channel emotion expressions.
15. Neural correlates of inferring speaker sincerity from white lies: An event-related potential source localization study. Brain Res 2014; 1565:48-62. [DOI: 10.1016/j.brainres.2014.04.022]
16. Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition. Front Psychol 2013; 4:367. [PMID: 23805115] [PMCID: PMC3690349] [DOI: 10.3389/fpsyg.2013.00367]
Abstract
Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech.
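The identification-point measure lends itself to a small worked example. The sketch below computes, for one listener and one utterance, the earliest gate duration at which the response is the target emotion and stays so through the final gate; the gate durations, responses, and scoring rule are invented and may differ from the authors' procedure.

```python
def identification_point(gate_durations_ms, responses, target):
    """Return the duration (ms) of the earliest gate from which the listener's
    response is the target emotion at that gate and at every longer gate,
    or None if the emotion is never stably identified.
    Hypothetical scoring rule, for illustration only."""
    for i, duration in enumerate(gate_durations_ms):
        if all(r == target for r in responses[i:]):
            return duration
    return None

# Invented example: a 5-gate trial in which "fear" is identified from the
# third gate onward.
gates = [400, 600, 800, 1000, 1200]                 # cumulative gate durations
resp = ["neutral", "sad", "fear", "fear", "fear"]   # response at each gate
print(identification_point(gates, resp, "fear"))    # -> 800
```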
17. Electrophysiological correlates of enhanced perceptual processes and attentional capture by emotional faces in social anxiety. Brain Res 2012; 1460:50-62. [DOI: 10.1016/j.brainres.2012.04.034]
18. Seeing emotion with your ears: emotional prosody implicitly guides visual attention to faces. PLoS One 2012; 7:e30740. [PMID: 22303454] [PMCID: PMC3268762] [DOI: 10.1371/journal.pone.0030740]
Abstract
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
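A minimal sketch of the windowing analysis described above, assuming fixation records with onset and offset times and a congruency flag: it tallies how often and how long prosody-congruent versus incongruent faces were fixated in the three analysis windows. The column names, the assignment of fixations to windows by onset time, and the data themselves are assumptions.

```python
import pandas as pd

# Invented fixation records for one trial: which face was fixated, when the
# fixation started and ended (ms from array onset), and whether that face's
# emotion matched the emotional prosody of the pseudo-utterance.
fixations = pd.DataFrame({
    "face":      ["fear", "anger", "fear", "neutral", "fear"],
    "onset_ms":  [150, 900, 1500, 2700, 3600],
    "offset_ms": [600, 1400, 2300, 3300, 4800],
    "congruent": [True, False, True, False, True],
})
fixations["duration_ms"] = fixations["offset_ms"] - fixations["onset_ms"]

# The three analysis windows reported in the abstract (ms from array onset).
windows = {"early": (0, 1250), "middle": (1250, 2500), "late": (2500, 5000)}

for name, (start, end) in windows.items():
    # Simplification: a fixation is assigned to the window containing its onset.
    in_win = fixations[(fixations["onset_ms"] >= start) & (fixations["onset_ms"] < end)]
    summary = in_win.groupby("congruent")["duration_ms"].agg(frequency="size",
                                                             total_duration_ms="sum")
    print(f"{name} window [{start}-{end} ms]:\n{summary}\n")
```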
19.
Abstract
Current research in affective neuroscience suggests that the emotional content of visual stimuli activates brain–body responses that could be critical to general health and physical disease. The aim of this study was to develop an integrated neurophysiological approach linking central and peripheral markers of nervous activity during the presentation of natural scenes, in order to determine the temporal stages of brain processing related to the bodily impact of emotions. More specifically, whole-head magnetoencephalography (MEG) data and the skin conductance response (SCR), a reliable autonomic marker of central activation, were recorded in healthy volunteers during the presentation of emotional (unpleasant and pleasant) and neutral pictures selected from the International Affective Picture System (IAPS). Analyses of event-related magnetic fields (ERFs) revealed greater activity at 180 ms in an occipitotemporal component for emotional pictures than for their neutral counterparts. More importantly, these early effects of emotional arousal on cerebral activity were significantly correlated with later increases in SCR magnitude. For the first time, a neuromagnetic cortical component linked to the SCR, a well-documented marker of the bodily expression of emotional arousal, was identified and localized. This finding sheds light on the time course of the brain–body interaction with emotional arousal and provides new insights into the neural bases of complex and reciprocal mind–body links.
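The central brain–body link here is a correlation between the early (around 180 ms) occipitotemporal ERF response and the later SCR magnitude. A minimal sketch of such a correlation on invented per-participant values follows; the real analysis involves MEG preprocessing and SCR scoring that are not represented.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented per-participant values: mean ERF amplitude of the occipitotemporal
# component around 180 ms, and the magnitude of the skin conductance response
# to the same emotional pictures (units and values are placeholders).
erf_180ms = np.array([32.1, 41.5, 28.7, 45.0, 38.2, 30.9, 43.8, 36.4])
scr_magnitude = np.array([0.21, 0.35, 0.18, 0.40, 0.28, 0.22, 0.37, 0.30])

# Pearson correlation between the early cortical response and the later
# autonomic response.
r, p = pearsonr(erf_180ms, scr_magnitude)
print(f"r = {r:.2f}, p = {p:.3f}")
```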
20. Emotion and spatial properties of objects 2 — Effect of valence depends on trait-anxiety. Int J Psychophysiol 2008. [DOI: 10.1016/j.ijpsycho.2008.05.057]
21. Neural processing of peripherally presented emotional faces: An ERP study. Int J Psychophysiol 2008. [DOI: 10.1016/j.ijpsycho.2008.05.454]
22. Peripherally Presented Emotional Scenes: A Spatiotemporal Analysis of Early ERP Responses. Brain Topogr 2008; 20:216-223. [DOI: 10.1007/s10548-008-0050-9]
23. Arousal and valence effects on event-related P3a and P3b during emotional categorization. Int J Psychophysiol 2005; 60:315-322. [PMID: 16226819] [DOI: 10.1016/j.ijpsycho.2005.06.006]
Abstract
Due to the adaptive value of emotional situations, categorizing along the valence dimension may be supported by critical brain functions. The present study examined emotion-cognition relationships by focusing on the influence of an emotional categorization task on the cognitive processing induced by an oddball-like paradigm. Event-related potentials (ERPs) were recorded from subjects explicitly asked to categorize along the valence dimension (unpleasant, neutral or pleasant) deviant target pictures embedded in a train of standard stimuli. Late positivities evoked in response to the target pictures were decomposed into a P3a and a P3b and topographical differences were observed according to the valence content of the stimuli. P3a showed enhanced amplitudes at posterior sites in response to unpleasant pictures as compared to both neutral and pleasant pictures. This effect is interpreted as a negativity bias related to attentional processing. The P3b component was sensitive to the arousal value of the stimulation, with higher amplitudes at several posterior sites for both types of emotional pictures. Moreover, unpleasant pictures evoked smaller amplitudes than pleasant ones at fronto-central sites. Thus, the context updating process may be differentially modulated by the affective arousal and valence of the stimulus. The present study supports the assumption that, during an emotional categorization, the emotional content of the stimulus may modulate the reorientation of attention and the subsequent updating process in a specific way.