51. Ethofer T, Bretscher J, Wiethoff S, Bisch J, Schlipf S, Wildgruber D, Kreifelts B. Functional responses and structural connections of cortical areas for processing faces and voices in the superior temporal sulcus. Neuroimage 2013; 76:45-56. [DOI: 10.1016/j.neuroimage.2013.02.064]

52. Mitchell RL. Further characterisation of the functional neuroanatomy associated with prosodic emotion decoding. Cortex 2013; 49:1722-32. [DOI: 10.1016/j.cortex.2012.07.010]

53. Heightened emotional contagion in mild cognitive impairment and Alzheimer's disease is associated with temporal lobe degeneration. Proc Natl Acad Sci U S A 2013; 110:9944-9. [PMID: 23716653] [DOI: 10.1073/pnas.1301119110]
Abstract
Emotional changes are common in mild cognitive impairment (MCI) and Alzheimer's disease (AD). Intrinsic connectivity imaging studies suggest that default mode network degradation in AD is accompanied by the release of an emotion-relevant salience network. We investigated whether emotional contagion, an evolutionarily conserved affect-sharing mechanism, is higher in MCI and AD secondary to biological alterations in neural networks that support emotion. We measured emotional contagion in 237 participants (111 healthy controls, 62 patients with MCI, and 64 patients with AD) with the Interpersonal Reactivity Index Personal Distress subscale. Depressive symptoms were evaluated with the Geriatric Depression Scale. Participants underwent structural MRI, and voxel-based morphometry was used to relate whole-brain maps to emotional contagion. Analyses of covariance found significantly higher emotional contagion at each stage of disease progression [controls < MCI (P < 0.01) and MCI < AD (P < 0.001)]. Depressive symptoms were also higher in patients compared with controls [controls < MCI (P < 0.01) and controls < AD (P < 0.0001)]. Higher emotional contagion (but not depressive symptoms) was associated with smaller volume in right inferior, middle, and superior temporal gyri (P(FWE) < 0.05); right temporal pole, anterior hippocampus, and parahippocampal gyrus; and left middle temporal gyrus (all P < 0.001, uncorrected). These findings suggest that in MCI and AD, neurodegeneration of temporal lobe structures important for affective signal detection and emotion inhibition is associated with up-regulation of emotion-generating mechanisms. Emotional contagion, a quantifiable index of empathic reactivity that is present in other species, may be a useful tool with which to study emotional alterations in animal models of AD.

54. Schulz C, Mothes-Lasch M, Straube T. Automatic neural processing of disorder-related stimuli in social anxiety disorder: faces and more. Front Psychol 2013; 4:282. [PMID: 23745116] [PMCID: PMC3662886] [DOI: 10.3389/fpsyg.2013.00282]
Abstract
It has been proposed that social anxiety disorder (SAD) is associated with automatic information processing biases resulting in hypersensitivity to signals of social threat such as negative facial expressions. However, the nature and extent of automatic processes in SAD at the behavioral and neural levels are not yet entirely clear. The present review summarizes neuroscientific findings on the automatic processing of facial threat as well as of other disorder-related stimuli, such as emotional prosody or negative words, in SAD. We review initial evidence for automatic activation of the amygdala, insula, and sensory cortices as well as for automatic early electrophysiological components. However, findings vary depending on tasks, stimuli, and neuroscientific methods. Only a few studies have set out to examine automatic neural processes directly, and systematic attempts are still lacking. We suggest that future studies should: (1) use different stimulus modalities, (2) examine different emotional expressions, (3) compare findings in SAD with other anxiety disorders, (4) use more sophisticated experimental designs to investigate features of automaticity systematically, and (5) combine different neuroscientific methods (such as functional neuroimaging and electrophysiology). Finally, an understanding of automatic neural processes could also provide hints for therapeutic approaches.
Affiliation(s)
- Claudia Schulz
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany

55. Gädeke JC, Föcker J, Röder B. Is the processing of affective prosody influenced by spatial attention? An ERP study. BMC Neurosci 2013; 14:14. [PMID: 23360491] [PMCID: PMC3616832] [DOI: 10.1186/1471-2202-14-14]
Abstract
BACKGROUND: The present study asked whether the processing of affective prosody is modulated by spatial attention. Pseudo-words with a neutral, happy, threatening, and fearful prosody were presented at two spatial positions. Participants attended to one position in order to detect infrequent targets. Emotional prosody was task irrelevant. The electro-encephalogram (EEG) was recorded to assess processing differences as a function of spatial attention and emotional valence.
RESULTS: Event-related potentials (ERPs) differed as a function of emotional prosody both when attended and when unattended. While emotional prosody effects interacted with effects of spatial attention at early processing levels (< 200 ms), these effects were additive at later processing stages (> 200 ms).
CONCLUSIONS: Emotional prosody, therefore, seems to be partially processed outside the focus of spatial attention. Whereas at early sensory processing stages spatial attention modulates the degree of emotional voice processing as a function of emotional valence, emotional prosody is processed outside of the focus of spatial attention at later processing stages.
Affiliation(s)
- Julia C Gädeke
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, Hamburg 20146, Germany

56. Frühholz S, Grandjean D. Multiple subregions in superior temporal cortex are differentially sensitive to vocal expressions: a quantitative meta-analysis. Neurosci Biobehav Rev 2013; 37:24-35. [DOI: 10.1016/j.neubiorev.2012.11.002]

57. Processing of angry voices is modulated by visual load. Neuroimage 2012; 63:485-90. [PMID: 22796986] [DOI: 10.1016/j.neuroimage.2012.07.005]
Abstract
Visual perceptual load has been shown to modulate brain activation to emotional facial expressions. However, it is unknown whether cross-modal effects of visual perceptual load on brain activation to threat-related auditory stimuli also exist. The current fMRI study investigated brain responses to angry and neutral voices while subjects had to solve an easy or a demanding visual task. Although the easy visual condition was associated with increased activation in the right superior temporal region to angry vs. neutral prosody, this effect was absent during the demanding task. Thus, our results show that cross-modal perceptual load modulates the activation to emotional voices in the auditory cortex and that high visual load prevents the increased processing of emotional prosody.

58. Cheng Y, Lee SY, Chen HY, Wang PY, Decety J. Voice and emotion processing in the human neonatal brain. J Cogn Neurosci 2012; 24:1411-9. [DOI: 10.1162/jocn_a_00214]
Abstract
Although the voice-sensitive neural system emerges very early in development, it has yet to be demonstrated whether the neonatal brain is sensitive to voice perception. We measured the EEG mismatch response (MMR) elicited by emotionally spoken syllables “dada” along with correspondingly synthesized nonvocal sounds, whose fundamental frequency contours were matched, in 98 full-term newborns aged 1–5 days. In Experiment 1, happy syllables relative to nonvocal sounds elicited an MMR lateralized to the right hemisphere. In Experiment 2, fearful syllables elicited stronger amplitudes than happy or neutral syllables, and this response showed no sex differences. In Experiment 3, angry versus happy syllables elicited an MMR, although their corresponding nonvocal sounds did not. Here, we show that affective discrimination is selectively driven by voice processing per se rather than by low-level acoustical features and that the cerebral specialization for human voice and emotion processing emerges over the right hemisphere during the first days of life.
Affiliation(s)
- Yawei Cheng
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- National Yang-Ming University Hospital, Yilan, Taiwan
- Shin-Yi Lee
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Hsin-Yu Chen
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Ping-Yao Wang
- National Yang-Ming University Hospital, Yilan, Taiwan

59. Escoffier N, Zhong J, Schirmer A, Qiu A. Emotional expressions in voice and music: same code, same effect? Hum Brain Mapp 2012; 34:1796-810. [PMID: 22505222] [DOI: 10.1002/hbm.22029]
Abstract
Scholars have documented similarities in the way voice and music convey emotions. Using functional magnetic resonance imaging (fMRI), we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or the pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion-specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap into a general system involved in social cognition.
Affiliation(s)
- Nicolas Escoffier
- Department of Psychology, National University of Singapore, Singapore, Singapore

60. Müller VI, Cieslik EC, Turetsky BI, Eickhoff SB. Crossmodal interactions in audiovisual emotion processing. Neuroimage 2011; 60:553-61. [PMID: 22182770] [DOI: 10.1016/j.neuroimage.2011.12.007]
Abstract
Emotion in daily life is often expressed in a multimodal fashion. Consequently, emotional information from one modality can influence processing in another. In a previous fMRI study we assessed the neural correlates of audio-visual integration and found that activity in the left amygdala is significantly attenuated when a neutral stimulus is paired with an emotional one, compared to conditions where emotional stimuli were present in both channels. Here we used dynamic causal modelling to investigate the effective connectivity in the neuronal network underlying this emotion presence congruence effect. Our results provided strong evidence in favor of a model family differing only in the interhemispheric interactions. All winning models share a connection from the bilateral fusiform gyrus (FFG) into the left amygdala and a non-linear modulatory influence of the bilateral posterior superior temporal sulcus (pSTS) on these connections. This result indicates that the pSTS not only integrates multimodal information from visual and auditory regions (as reflected in our model by significant feed-forward connections) but also gates the influence of the sensory information on the left amygdala, leading to attenuation of amygdala activity when a neutral stimulus is integrated. Moreover, we found a significant lateralization of the FFG due to stronger driving input by the stimuli (faces) into the right hemisphere, whereas such lateralization was not present for sound-driven input into the superior temporal gyrus. In summary, our data provide further evidence for a rightward lateralization of the FFG and, in particular, for a key role of the pSTS in the integration and gating of audio-visual emotional information.
Affiliation(s)
- Veronika I Müller
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Germany.

61. Brück C, Kreifelts B, Wildgruber D. Emotional voices in context: a neurobiological model of multimodal affective information processing. Phys Life Rev 2011; 8:383-403. [DOI: 10.1016/j.plrev.2011.10.002]

62. McNealy K, Mazziotta JC, Dapretto M. Age and experience shape developmental changes in the neural basis of language-related learning. Dev Sci 2011; 14:1261-82. [PMID: 22010887] [PMCID: PMC3717169] [DOI: 10.1111/j.1467-7687.2011.01075.x]
Abstract
Very little is known about the neural underpinnings of language learning across the lifespan and how these might be modified by maturational and experiential factors. Building on behavioral research highlighting the importance of early word segmentation (i.e. the detection of word boundaries in continuous speech) for subsequent language learning, here we characterize developmental changes in brain activity as this process occurs online, using data collected in a mixed cross-sectional and longitudinal design. One hundred and fifty-six participants, ranging from age 5 to adulthood, underwent functional magnetic resonance imaging (fMRI) while listening to three novel streams of continuous speech, which contained either strong statistical regularities, strong statistical regularities and speech cues, or weak statistical regularities providing minimal cues to word boundaries. All age groups displayed significant signal increases over time in temporal cortices for the streams with high statistical regularities; however, we observed a significant right-to-left shift in the laterality of these learning-related increases with age. Interestingly, only the 5- to 10-year-old children displayed significant signal increases for the stream with low statistical regularities, suggesting an age-related decrease in sensitivity to more subtle statistical cues. Further, in a sample of 78 10-year-olds, we examined the impact of proficiency in a second language and level of pubertal development on learning-related signal increases, showing that the brain regions involved in language learning are influenced by both experiential and maturational factors.
Affiliation(s)
- Kristin McNealy
- Ahmanson-Lovelace Brain Mapping Center, Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, USA
- Neuroscience Interdepartmental Program, University of California, Los Angeles, USA
- FPR Center for Culture, Brain, and Development, University of California, Los Angeles, USA
- John C. Mazziotta
- Ahmanson-Lovelace Brain Mapping Center, Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, USA
- The Brain Research Institute, University of California, Los Angeles, USA
- Departments of Neurology, Pharmacology, and Radiological Sciences in the David Geffen School of Medicine, University of California, Los Angeles, USA
- Neuroscience Interdepartmental Program, University of California, Los Angeles, USA
- Mirella Dapretto
- Ahmanson-Lovelace Brain Mapping Center, Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, USA
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, USA
- The Brain Research Institute, University of California, Los Angeles, USA
- Departments of Neurology, Pharmacology, and Radiological Sciences in the David Geffen School of Medicine, University of California, Los Angeles, USA
- Neuroscience Interdepartmental Program, University of California, Los Angeles, USA

63. Witteman J, van Ijzendoorn MH, van de Velde D, van Heuven VJJP, Schiller NO. The nature of hemispheric specialization for linguistic and emotional prosodic perception: a meta-analysis of the lesion literature. Neuropsychologia 2011; 49:3722-38. [PMID: 21964199] [DOI: 10.1016/j.neuropsychologia.2011.09.028]
Abstract
It is unclear whether there is hemispheric specialization for prosodic perception and, if so, what the nature of this hemispheric asymmetry is. Using the lesion approach, many studies have attempted to test whether there is hemispheric specialization for emotional and linguistic prosodic perception by examining the impact of left vs. right hemispheric damage on prosodic perception task performance. However, so far no consensus has been reached. In an attempt to find a consistent pattern of lateralization for prosodic perception, a meta-analysis was performed on 38 lesion studies (including 450 left hemisphere damaged patients, 534 right hemisphere damaged patients, and 491 controls) of prosodic perception. It was found that both left and right hemispheric damage compromise emotional and linguistic prosodic perception task performance. Furthermore, right hemispheric damage degraded emotional prosodic perception more than left hemispheric damage did (trimmed g = -0.37, 95% CI [-0.66; -0.09], N = 620 patients). It is concluded that prosodic perception is under bihemispheric control, with relative specialization of the right hemisphere for emotional prosodic perception.
Affiliation(s)
- Jurriaan Witteman
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands.

64. Fujisawa TX, Shinohara K. Sex differences in the recognition of emotional prosody in late childhood and adolescence. J Physiol Sci 2011; 61:429-35. [PMID: 21647818] [PMCID: PMC10717528] [DOI: 10.1007/s12576-011-0156-9]
Abstract
We examined sex-related differences in the ability to recognize emotional prosody in late childhood (9-12 year olds) and adolescence (13-15 year olds) in relation to salivary testosterone levels. In order to examine both accuracy and sensitivity in labeling emotional prosody expressions, five intensities (20, 40, 60, 80, and 100%) for each of three emotion categories were used as stimuli. A total of 25 male and 22 female children and 28 male and 28 female adolescents were tested on their recognition of happy, angry, and sad prosody at the different intensities. The results showed that adolescent females were more sensitive than males to happy and sad prosody, but not to angry prosody, whereas there were no sex-related differences in emotional prosody recognition in late childhood for any of the emotional categories. Furthermore, salivary testosterone levels were higher in males than in females in adolescence, but not in late childhood, suggesting that sex differences in emotional prosody recognition emerge in adolescence, when testosterone levels become higher in males than in females.
Affiliation(s)
- Takashi X. Fujisawa
- Department of Neurobiology and Behavior, Nagasaki University School of Medicine, 1-12-4 Sakamoto, Nagasaki, 852-8523 Japan
- Kazuyuki Shinohara
- Department of Neurobiology and Behavior, Nagasaki University School of Medicine, 1-12-4 Sakamoto, Nagasaki, 852-8523 Japan

65. Distinct pathways of neural coupling for different basic emotions. Neuroimage 2011; 59:1804-17. [PMID: 21888979] [DOI: 10.1016/j.neuroimage.2011.08.018]
Abstract
Emotions are complex events recruiting distributed cortical and subcortical cerebral structures, yet the functional integration dynamics within the involved neural circuits, and how these dynamics relate to the nature of the different emotions, are still unknown. Using fMRI, we measured the neural responses elicited by films representing basic emotions (fear, disgust, sadness, happiness). The amygdala and the associative cortex were conjointly activated by all basic emotions. Furthermore, distinct arrays of cortical and subcortical brain regions were additionally activated by each emotion, with the exception of sadness. These findings informed the definition of three effective connectivity models testing for the functional integration of the visual cortex and amygdala, as regions processing all emotions, with domain-specific regions, namely: (i) for fear, the frontoparietal system involved in preparing adaptive motor responses; (ii) for disgust, the somatosensory system, reflecting protective responses against contaminating stimuli; and (iii) for happiness, the medial prefrontal and temporoparietal cortices involved in understanding joyful interactions. Consistent with these domain-specific models, the results of the effective connectivity analysis indicate that the amygdala is involved in distinct functional integration effects with cortical networks processing sensorimotor, somatosensory, or cognitive aspects of basic emotions. The resulting effective connectivity networks may serve to regulate motor and cognitive behavior based on the quality of the induced emotional experience.

66. Viinikainen M, Kätsyri J, Sams M. Representation of perceived sound valence in the human brain. Hum Brain Mapp 2011; 33:2295-305. [PMID: 21826759] [DOI: 10.1002/hbm.21362]
Abstract
Perceived emotional valence of sensory stimuli influences their processing in various cortical and subcortical structures. Recent evidence suggests that negative and positive valences are processed separately, not along a single linear continuum. Here, we examined how the brain is activated when subjects listen to auditory stimuli varying parametrically in perceived valence (very unpleasant-neutral-very pleasant). Seventeen healthy volunteers were scanned at 3 Tesla while listening to International Affective Digital Sounds (IADS-2) in a block design paradigm. We found a strong quadratic, U-shaped relationship between valence and blood oxygen level dependent (BOLD) signal strength in the medial prefrontal cortex, auditory cortex, and amygdala. Signals were weakest for neutral stimuli and increased progressively for more unpleasant or pleasant stimuli. The results strengthen the view that valence is a crucial factor in the neural processing of emotions. An alternative explanation is salience, which increases with both negative and positive valences.
Affiliation(s)
- Mikko Viinikainen
- Mind and Brain Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, Finland.

67. Straube T, Mothes-Lasch M, Miltner WHR. Neural mechanisms of the automatic processing of emotional information from faces and voices. Br J Psychol 2011; 102:830-48. [DOI: 10.1111/j.2044-8295.2011.02056.x]

68. Frühholz S, Ceravolo L, Grandjean D. Specific brain networks during explicit and implicit decoding of emotional prosody. Cereb Cortex 2011; 22:1107-17. [DOI: 10.1093/cercor/bhr184]

69. Brück C, Kreifelts B, Kaza E, Lotze M, Wildgruber D. Impact of personality on the cerebral processing of emotional prosody. Neuroimage 2011; 58:259-68. [PMID: 21689767] [DOI: 10.1016/j.neuroimage.2011.06.005]
Abstract
While several studies have focused on identifying common brain mechanisms governing the decoding of emotional speech melody, interindividual variations in the cerebral processing of prosodic information have, in comparison, received little attention to date. Although, for instance, differences in personality among individuals have been shown to modulate emotional brain responses, personality influences on the neural basis of prosody decoding have not yet been investigated systematically. Thus, the present study aimed at delineating relationships between interindividual differences in personality and hemodynamic responses evoked by emotional speech melody. To determine personality-dependent modulations of brain reactivity, fMRI activation patterns during the processing of emotional speech cues were acquired from 24 healthy volunteers and subsequently correlated with individual trait measures of extraversion and neuroticism obtained for each participant. Whereas correlation analysis did not indicate any link between brain activation and extraversion, strong positive correlations emerged between measures of neuroticism and hemodynamic responses of the right amygdala, the left postcentral gyrus, and medial frontal structures including the right anterior cingulate cortex, suggesting that brain mechanisms mediating the decoding of emotional speech melody may vary depending on differences in neuroticism among individuals. The observed trait-specific modulations are discussed in the light of processing biases as well as differences in emotion control or task strategies that may be associated with the personality trait of neuroticism.
Affiliation(s)
- Carolin Brück
- Department of Psychiatry and Psychotherapy, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany.

70.

71. Péron J, El Tamer S, Grandjean D, Leray E, Travers D, Drapier D, Vérin M, Millet B. Major depressive disorder skews the recognition of emotional prosody. Prog Neuropsychopharmacol Biol Psychiatry 2011; 35:987-96. [PMID: 21296120] [DOI: 10.1016/j.pnpbp.2011.01.019]
Abstract
OBJECTIVE: Major depressive disorder (MDD) is associated with abnormalities in the recognition of emotional stimuli. MDD patients ascribe more negative emotion but also less positive emotion to facial expressions, suggesting blunted responsiveness to positive emotional stimuli. To ascertain whether these emotional biases are modality-specific, we examined the effects of MDD on the recognition of emotions from voices using a paradigm designed to capture subtle effects of biases.
METHODS: Twenty-one MDD patients and 21 healthy controls (HC) underwent clinical and neuropsychological assessments, followed by a paradigm featuring pseudowords spoken by actors in five types of emotional prosody, rated on continuous scales.
RESULTS: Overall, MDD patients performed more poorly than HC, displaying significantly impaired recognition of fear, happiness, and sadness. Compared with HC, they rated fear significantly more highly when listening to anger stimuli. They also displayed a bias toward surprise, rating it far higher when they heard sad or fearful utterances. Furthermore, for happiness stimuli, MDD patients gave higher ratings for negative emotions (fear and sadness). A multiple regression model of the recognition of emotional prosody in MDD patients showed that the best fit was achieved using measures of executive functioning (categorical fluency, number of errors in the MCST, and TMT B-A) and the total score of the Montgomery-Asberg Depression Rating Scale.
CONCLUSIONS: Impaired recognition of emotions would appear not to be specific to the visual modality but to be present also when emotions are expressed vocally, this impairment being related to depression severity and dysexecutive syndrome. MDD seems to skew the recognition of emotional prosody toward negative emotional stimuli, and the blunting of positive emotion appears not to be restricted to the visual modality.
Affiliation(s)
- Julie Péron
- URU-EM 425 Behavior and Basal Ganglia, University of Rennes 1, Hôpital Pontchaillou, CHU de Rennes, rue Henri Le Guilloux, 35033 Rennes, France.

72. Ethofer T, Bretscher J, Gschwind M, Kreifelts B, Wildgruber D, Vuilleumier P. Emotional voice areas: anatomic location, functional properties, and structural connections revealed by combined fMRI/DTI. Cereb Cortex 2011; 22:191-200. [PMID: 21625012] [DOI: 10.1093/cercor/bhr113]
Affiliation(s)
- Thomas Ethofer
- Department of General Psychiatry, University of Tübingen, 72076 Tübingen, Germany.

73. Plichta M, Gerdes A, Alpers G, Harnisch W, Brill S, Wieser M, Fallgatter A. Auditory cortex activation is modulated by emotion: a functional near-infrared spectroscopy (fNIRS) study. Neuroimage 2011; 55:1200-7. [DOI: 10.1016/j.neuroimage.2011.01.011]

74. Szameitat DP, Kreifelts B, Alter K, Szameitat AJ, Sterr A, Grodd W, Wildgruber D. It is not always tickling: distinct cerebral responses during perception of different laughter types. Neuroimage 2010; 53:1264-71. [DOI: 10.1016/j.neuroimage.2010.06.028]

75. Kreifelts B, Ethofer T, Huberle E, Grodd W, Wildgruber D. Association of trait emotional intelligence and individual fMRI-activation patterns during the perception of social signals from voice and face. Hum Brain Mapp 2010; 31:979-91. [PMID: 19937724] [DOI: 10.1002/hbm.20913]
Abstract
Multimodal integration of nonverbal social signals is essential for successful social interaction. Previous studies have implicated the posterior superior temporal sulcus (pSTS) in the perception of social signals such as nonverbal emotional signals as well as in social cognitive functions like mentalizing/theory of mind. In the present study, we evaluated the relationships between trait emotional intelligence (EI) and fMRI activation patterns in individual subjects during the multimodal perception of nonverbal emotional signals from voice and face. Trait EI was linked to hemodynamic responses in the right pSTS, an area which also exhibits a distinct sensitivity to human voices and faces. Within all other regions known to subserve the perceptual audiovisual integration of human social signals (i.e., amygdala, fusiform gyrus, thalamus), no such linked responses were observed. This functional difference in the network for the audiovisual perception of human social signals indicates a specific contribution of the pSTS as a possible interface between the perception of social information and social cognition.
Affiliation(s)
- Benjamin Kreifelts
- Department of Psychiatry and Psychotherapy, University of Tuebingen, Tuebingen, Germany.

76. Ethofer T, Wiethoff S, Anders S, Kreifelts B, Grodd W, Wildgruber D. The voices of seduction: cross-gender effects in processing of erotic prosody. Soc Cogn Affect Neurosci 2010; 2:334-7. [PMID: 18985138] [DOI: 10.1093/scan/nsm028]
Abstract
Gender-specific differences in cognitive functions have been widely discussed. With regard to social cognition, such as the perception of emotion conveyed by non-verbal cues, a female advantage is generally assumed. In the present study, however, we revealed a cross-gender interaction, with increased responses to the voice of the opposite sex in both male and female subjects. This effect was confined to an erotic tone of speech, both in the behavioural data and in the haemodynamic responses within voice-sensitive brain areas (right middle superior temporal gyrus). The observed response pattern thus indicates a particular sensitivity to emotional voices that have a high behavioural relevance for the listener.
Affiliation(s)
- Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany

77. Grossmann T, Oberecker R, Koch SP, Friederici AD. The developmental origins of voice processing in the human brain. Neuron 2010; 65:852-8. [PMID: 20346760] [PMCID: PMC2852650] [DOI: 10.1016/j.neuron.2010.03.001]
Abstract
In human adults, voices are processed in specialized brain regions in superior temporal cortices. We examined the development of this cortical organization during infancy by using near-infrared spectroscopy. In experiment 1, 7-month-olds but not 4-month-olds showed increased responses in left and right superior temporal cortex to the human voice when compared to nonvocal sounds, suggesting that voice-sensitive brain systems emerge between 4 and 7 months of age. In experiment 2, 7-month-old infants listened to words spoken with neutral, happy, or angry prosody. Hearing emotional prosody resulted in increased responses in a voice-sensitive region in the right hemisphere. Moreover, a region in right inferior frontal cortex taken to serve evaluative functions in the adult brain showed particular sensitivity to happy prosody. The pattern of findings suggests that temporal regions specialize in processing voices very early in development and that, already in infancy, emotions differentially modulate voice processing in the right hemisphere.
Affiliation(s)
- Tobias Grossmann
- Centre for Brain and Cognitive Development, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK.

78. Péron J, Grandjean D, Le Jeune F, Sauleau P, Haegelen C, Drapier D, Rouaud T, Drapier S, Vérin M. Recognition of emotional prosody is altered after subthalamic nucleus deep brain stimulation in Parkinson's disease. Neuropsychologia 2010; 48:1053-62. [DOI: 10.1016/j.neuropsychologia.2009.12.003]

79. Thönnessen H, Boers F, Dammers J, Chen YH, Norra C, Mathiak K. Early sensory encoding of affective prosody: neuromagnetic tomography of emotional category changes. Neuroimage 2010; 50:250-9. [DOI: 10.1016/j.neuroimage.2009.11.082]
Affiliation(s)
- Heike Thönnessen
- Department of Psychiatry and Psychotherapy, JARA-Translational Brain Medicine, RWTH Aachen University, Germany.

80. Schirmer A. Mark my words: tone of voice changes affective word representations in memory. PLoS One 2010; 5:e9080. [PMID: 20169154] [PMCID: PMC2821399] [DOI: 10.1371/journal.pone.0009080]
Abstract
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents.
Affiliation(s)
- Annett Schirmer
- Department of Psychology, National University of Singapore, Singapore, Singapore.

81. Park JY, Gu BM, Kang DH, Shin YW, Choi CH, Lee JM, Kwon JS. Integration of cross-modal emotional information in the human brain: an fMRI study. Cortex 2010; 46:161-9. [DOI: 10.1016/j.cortex.2008.06.008]

82. Brosch T, Grandjean D, Sander D, Scherer KR. Cross-modal emotional attention: emotional voices modulate early stages of visual processing. J Cogn Neurosci 2009; 21:1670-9. [DOI: 10.1162/jocn.2009.21110]
Abstract
Emotional attention, the boosting of the processing of emotionally relevant stimuli, has, up to now, mainly been investigated within a sensory modality, for instance, by using emotional pictures to modulate visual attention. In real-life environments, however, humans typically encounter simultaneous input to several different senses, such as vision and audition. As multiple signals entering different channels might originate from a common, emotionally relevant source, the prioritization of emotional stimuli should be able to operate across modalities. In this study, we explored cross-modal emotional attention. Spatially localized utterances with emotional and neutral prosody served as cues for a visually presented target in a cross-modal dot-probe task. Participants were faster to respond to targets that appeared at the spatial location of emotional compared to neutral prosody. Event-related brain potentials revealed emotional modulation of early visual target processing at the level of the P1 component, with neural sources in the striate visual cortex being more active for targets that appeared at the spatial location of emotional compared to neutral prosody. These effects were not found using synthesized control sounds matched for mean fundamental frequency and amplitude envelope. These results show that emotional attention can operate across sensory modalities by boosting early sensory stages of processing, thus facilitating the multimodal assessment of emotionally relevant stimuli in the environment.

83. Ethofer T, Kreifelts B, Wiethoff S, Wolf J, Grodd W, Vuilleumier P, Wildgruber D. Differential influences of emotion, task, and novelty on brain regions underlying the processing of speech melody. J Cogn Neurosci 2009; 21:1255-68. [DOI: 10.1162/jocn.2009.21099]
Abstract
We investigated the functional characteristics of brain regions implicated in the processing of speech melody by presenting words spoken in either neutral or angry prosody during a functional magnetic resonance imaging experiment using a factorial habituation design. Subjects judged either affective prosody or word class for these vocal stimuli, which could be heard for either the first, second, or third time. Voice-sensitive temporal cortices, as well as the amygdala, insula, and mediodorsal thalami, reacted more strongly to angry than to neutral prosody. These stimulus-driven effects were not influenced by the task, suggesting that these brain structures are automatically engaged during processing of emotional information in the voice and operate relatively independently of cognitive demands. By contrast, the right middle temporal gyrus and the bilateral orbito-frontal cortices (OFC) responded more strongly during emotion than during word classification, but were also sensitive to anger expressed by the voices, suggesting that some perceptual aspects of prosody are also encoded within these regions subserving explicit processing of vocal emotion. The bilateral OFC showed a selective modulation by emotion and repetition, with particularly pronounced responses to angry prosody during the first presentation only, indicating a critical role of the OFC in the detection of vocal information that is both novel and behaviorally relevant. These results converge with previous findings obtained for angry faces and suggest a general involvement of the OFC in the recognition of anger irrespective of the sensory modality. Taken together, our study reveals that different aspects of voice stimuli and perceptual demands modulate distinct areas involved in the processing of emotional prosody.
Affiliation(s)
- Thomas Ethofer
- University of Tübingen, Tübingen, Germany
- University Medical Center of Geneva, Geneva, Switzerland

84. Decoding of emotional information in voice-sensitive cortices. Curr Biol 2009; 19:1028-33. [DOI: 10.1016/j.cub.2009.04.054]

85. Recognition of affective prosody in brain-damaged patients and healthy controls: a neurophysiological study using EEG and whole-head MEG. Cogn Affect Behav Neurosci 2009; 9:153-67. [DOI: 10.3758/cabn.9.2.153]

86. Bach DR, Grandjean D, Sander D, Herdener M, Strik WK, Seifritz E. The effect of appraisal level on processing of emotional prosody in meaningless speech. Neuroimage 2008; 42:919-27. [DOI: 10.1016/j.neuroimage.2008.05.034]

87. Neural processing of vocal emotion and identity. Brain Cogn 2008; 69:121-6. [PMID: 18644670] [DOI: 10.1016/j.bandc.2008.06.003]
Abstract
The voice is a marker of a person's identity which allows individual recognition even if the person is not in sight. Listening to a voice also affords inferences about the speaker's emotional state. Both these types of personal information are encoded in characteristic acoustic feature patterns analyzed within the auditory cortex. In the present study, 16 volunteers listened to pairs of non-verbal voice stimuli with happy or sad valence in two different task conditions while event-related brain potentials (ERPs) were recorded. In an emotion matching task, participants indicated whether the expressed emotion of a target voice was congruent or incongruent with that of a (preceding) prime voice. In an identity matching task, participants indicated whether or not the prime and target voice belonged to the same person. Effects based on the expressed emotion occurred earlier than those based on voice identity. Specifically, P2 amplitudes (approximately 200 ms) were reduced for happy voices when primed by happy voices. Identity match effects, by contrast, did not start until around 300 ms. These results show an early, task-specific, emotion-based influence on auditory sensory processing.

88. Hoekert M, Bais L, Kahn RS, Aleman A. Time course of the involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum in emotional prosody perception. PLoS One 2008; 3:e2244. [PMID: 18493307] [PMCID: PMC2373925] [DOI: 10.1371/journal.pone.0002244]
Abstract
In verbal communication, not only the meaning of the words but also the tone of voice (prosody) conveys crucial information about the emotional state and intentions of others. Several studies have found that right frontal and right temporal regions play a role in emotional prosody perception. Here, we used triple-pulse repetitive transcranial magnetic stimulation (rTMS) to shed light on the precise time course of involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum. We hypothesized that information would be processed in the right anterior superior temporal gyrus before being processed in the right fronto-parietal operculum. Right-handed healthy subjects performed an emotional prosody task. While participants listened to each sentence, a triplet of TMS pulses was applied to one of the regions at one of six time points (400-1900 ms). Results showed a significant main effect of time for the right anterior superior temporal gyrus and the right fronto-parietal operculum. The largest interference was observed half-way through the sentence. This effect was stronger for withdrawal emotions than for the approach emotion. A further experiment that included an active control condition, TMS over the EEG site POz (midline parietal-occipital junction), revealed stronger effects at the fronto-parietal operculum and the anterior superior temporal gyrus relative to this control condition. No evidence was found for sequential processing of emotional prosodic information from the right anterior superior temporal gyrus to the right fronto-parietal operculum; rather, the results revealed more parallel processing. Our results suggest that both the right fronto-parietal operculum and the right anterior superior temporal gyrus are critical for emotional prosody perception at a relatively late time period after sentence onset. This may reflect that emotional cues can still be ambiguous at the beginning of sentences but become more apparent half-way through the sentence.
Affiliation(s)
- Marjolijn Hoekert
- BCN Neuroimaging Center, University of Groningen, Groningen, The Netherlands.

89. Affective auditory stimuli: characterization of the International Affective Digitized Sounds (IADS) by discrete emotional categories. Behav Res Methods 2008; 40:315-21. [PMID: 18411555] [DOI: 10.3758/brm.40.1.315]
Abstract
Although there are many well-characterized affective visual stimuli sets available to researchers, there are few auditory sets available. Those auditory sets that are available have been characterized primarily according to one of two major theories of affect: dimensional or categorical. Current trends have attempted to utilize both theories to more fully understand emotional processing. As such, stimuli that have been thoroughly characterized according to both of these approaches are exceptionally useful. In an effort to provide researchers with such a stimuli set, we collected descriptive data on the International Affective Digitized Sounds (IADS), identifying which discrete categorical emotions are elicited by each sound. The IADS is a database of 111 sounds characterized along the affective dimensions of valence, arousal, and dominance. Our data complement these characterizations of the IADS, allowing researchers to control for or manipulate stimulus properties in accordance with both theories of affect, providing an avenue for further integration of these perspectives. Related materials may be downloaded from the Psychonomic Society Web archive at www.psychonomic.org/archive.

90. Aeschlimann M, Knebel JF, Murray MM, Clarke S. Emotional pre-eminence of human vocalizations. Brain Topogr 2008; 20:239-48. [PMID: 18347967] [DOI: 10.1007/s10548-008-0051-8]
Abstract
Human vocalizations (HV), as well as environmental sounds, convey a wide range of information, including emotional expressions. The latter have been relatively rarely investigated, and, in particular, it is unclear if duration-controlled non-linguistic HV sequences can reliably convey both positive and negative emotional information. The aims of the present psychophysical study were: (i) to generate a battery of duration-controlled and acoustically controlled extreme valence stimuli, and (ii) to compare the emotional impact of HV with that of other environmental sounds. A set of 144 HV and other environmental sounds was selected to cover emotionally positive, negative, and neutral values. Sequences of 2 s duration were rated on Likert scales by 16 listeners along three emotional dimensions (arousal, intensity, and valence) and two non-emotional dimensions (confidence in identifying the sound source and perceived loudness). The 2 s stimuli were reliably perceived as emotionally positive, negative or neutral. We observed a linear relationship between intensity and arousal ratings and a "boomerang-shaped" intensity-valence distribution, as previously reported for longer, duration-variable stimuli. In addition, the emotional intensity ratings for HV were higher than for other environmental sounds, suggesting that HV constitute a characteristic class of emotional auditory stimuli. In addition, emotionally positive HV were more readily identified than other sounds, and emotionally negative stimuli, irrespective of their source, were perceived as louder than their positive and neutral counterparts. In conclusion, HV are a distinct emotional category of environmental sounds and they retain this emotional pre-eminence even when presented for brief periods.
Affiliation(s)
- Mélanie Aeschlimann
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and Université de Lausanne (UNIL), Av. Pierre Decker 5, 1011 Lausanne, Switzerland.

91. Schirmer A, Escoffier N, Zysset S, Koester D, Striano T, Friederici AD. When vocal processing gets emotional: on the role of social orientation in relevance detection by the human amygdala. Neuroimage 2008; 40:1402-10. [PMID: 18299209] [DOI: 10.1016/j.neuroimage.2008.01.018]
Abstract
Previous work on vocal emotional processing provided little evidence for the involvement of emotional processing areas such as the amygdala or the orbitofrontal cortex (OFC). Here, we sought to specify whether the involvement of these areas depends on how relevant vocal expressions are for the individual. To this end, we assessed participants' social orientation, a measure of interest in and concern for other individuals and hence of the relevance of social signals. We then presented task-irrelevant syllable sequences that contained rare changes in tone of voice that could be emotional or neutral. Processing differences between emotional and neutral vocal change in the right amygdala and the bilateral OFC were significantly correlated with the social orientation measure. Specifically, higher social orientation scores were associated with enhanced amygdala and OFC activity to emotional as compared to neutral change. Given the presumed role of the amygdala in the detection of emotionally relevant information, our results suggest that social orientation enhances this detection process and the activation of emotional representations mediated by the OFC. Moreover, social orientation may predict listener responses to vocal emotional cues and explain interindividual variability in vocal emotional processing.
Affiliation(s)
- Annett Schirmer
- Department of Psychology, Faculty of Arts and Social Sciences, National University of Singapore, Singapore.
|
92
|
Effects of emotional prosody on auditory extinction for voices in patients with spatial neglect. Neuropsychologia 2008; 46:487-96. [DOI: 10.1016/j.neuropsychologia.2007.08.025] [Citation(s) in RCA: 59] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2007] [Revised: 08/15/2007] [Accepted: 08/28/2007] [Indexed: 11/24/2022]
|
93
|
Meyer M, Baumann S, Wildgruber D, Alter K. How the brain laughs. Behav Brain Res 2007; 182:245-60. [PMID: 17568693 DOI: 10.1016/j.bbr.2007.04.023] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2007] [Revised: 04/26/2007] [Accepted: 04/30/2007] [Indexed: 10/23/2022]
Abstract
Laughter is an affective nonspeech vocalization that is not reserved to humans, but can also be observed in other mammals, in particular monkeys and great apes. This observation makes laughter an interesting subject for brain research, as it allows us to learn more about parallels and differences between human and animal communication by studying the neural underpinnings of expressive and perceptive laughter. In the first part of this review we will briefly sketch the acoustic structure of a bout of laughter and relate this to the differential anatomy of the larynx and the vocal tract in humans and monkeys. The subsequent part of the article introduces the present knowledge on behavioral and brain mechanisms of "laughter-like responses" and other affective vocalizations in monkeys and apes, before we describe the scant evidence on the cerebral organization of laughter provided by neuroimaging studies. Our review indicates that a densely intertwined network of auditory and (pre-)motor functions subserves perceptive and expressive aspects of human laughter. Even though there is a tendency in the present literature to suggest a rightward asymmetry of the cortical representation of laughter, there is no doubt that left cortical areas are also involved. In addition, subcortical areas, namely the amygdala, have also been identified as part of this network. Furthermore, we can conclude from our overview that research on the brain mechanisms of affective vocalizations in monkeys and great apes reports the recruitment of cortical and subcortical areas similar to those attributed to laughter in humans. Therefore, we propose the existence of equivalent brain representations of emotional tone in humans and great apes. This reasoning receives support from neuroethological models that describe laughter as a primal behavioral tool used by individuals - be they human or ape - to prompt other individuals of a peer group and to create a mirthful context for social interaction and communication.
Affiliation(s)
- Martin Meyer
- Institute of Neuroradiology, Department of Medical Radiology, University Hospital of Zurich, Frauenklinikstrasse 10, CH-8091 Zurich, Switzerland.
|
94
|
Schirmer A, Simpson E, Escoffier N. Listen up! Processing of intensity change differs for vocal and nonvocal sounds. Brain Res 2007; 1176:103-12. [PMID: 17900543 DOI: 10.1016/j.brainres.2007.08.008] [Citation(s) in RCA: 42] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2007] [Revised: 08/03/2007] [Accepted: 08/04/2007] [Indexed: 12/22/2022]
Abstract
Changes in the intensity of both vocal and nonvocal sounds can be emotionally relevant. However, as only vocal sounds directly reflect communicative intent, intensity change of vocal but not nonvocal sounds is socially relevant. Here we investigated whether a change in sound intensity is processed differently depending on its social relevance. To this end, participants listened passively to a sequence of vocal or nonvocal sounds that contained rare deviants which differed from standards in sound intensity. Concurrently recorded event-related potentials (ERPs) revealed a mismatch negativity (MMN) and P300 effect for intensity change. Direction of intensity change was of little importance for vocal stimulus sequences, which recruited enhanced sensory and attentional resources for both loud and soft deviants. In contrast, intensity change in nonvocal sequences recruited more sensory and attentional resources for loud as compared to soft deviants. This was reflected in markedly larger MMN/P300 amplitudes and shorter P300 latencies for the loud as compared to soft nonvocal deviants. Furthermore, while the processing pattern observed for nonvocal sounds was largely comparable between men and women, sex differences for vocal sounds suggest that women were more sensitive to their social relevance. These findings extend previous evidence of sex differences in vocal processing and add to reports of voice specific processing mechanisms by demonstrating that simple acoustic change recruits more processing resources if it is socially relevant.
Affiliation(s)
- Annett Schirmer
- Department of Psychology, Faculty of Arts and Social Sciences, National University of Singapore, Singapore.
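As an illustration of the difference-wave measures referred to in the abstract above (MMN amplitude, P300 amplitude and latency), here is a toy numpy sketch on simulated epochs; the sampling rate, electrode, time windows, and effect sizes are all assumptions, not values from the study.

```python
# Illustrative only: ERP epochs are simulated, not real recordings.
import numpy as np

fs = 250                                    # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.6, 1 / fs)            # epoch time axis in seconds
rng = np.random.default_rng(1)

# Simulated single-trial epochs at one fronto-central electrode (trials x samples).
standards = rng.normal(0, 1.0, (400, t.size))
deviants = rng.normal(0, 1.0, (60, t.size))
deviants[:, (t > 0.10) & (t < 0.25)] -= 2.0   # fake MMN (negative deflection)
deviants[:, (t > 0.25) & (t < 0.50)] += 3.0   # fake P300 (positive deflection)

# Difference wave: average deviant ERP minus average standard ERP.
diff_wave = deviants.mean(axis=0) - standards.mean(axis=0)

# MMN amplitude: mean of the difference wave in the 100-250 ms window.
mmn_win = (t >= 0.10) & (t <= 0.25)
print("MMN mean amplitude:", diff_wave[mmn_win].mean())

# P300 peak amplitude and latency in the 250-500 ms window.
p3_win = (t >= 0.25) & (t <= 0.50)
p3_idx = np.argmax(diff_wave[p3_win])
print("P300 peak amplitude:", diff_wave[p3_win][p3_idx])
print("P300 peak latency (ms):", t[p3_win][p3_idx] * 1000)
```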
|
95
|
Kreifelts B, Ethofer T, Grodd W, Erb M, Wildgruber D. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study. Neuroimage 2007; 37:1445-56. [PMID: 17659885 DOI: 10.1016/j.neuroimage.2007.06.020] [Citation(s) in RCA: 195] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2007] [Revised: 06/08/2007] [Accepted: 06/25/2007] [Indexed: 11/30/2022] Open
Abstract
In a natural environment, non-verbal emotional communication is multimodal (e.g. speech melody, facial expression) and multifaceted with respect to the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about the multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual or auditory stimulation alone. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus when contrasting the audiovisual condition with the auditory and visual conditions. Further, substantiating their role in the emotional integration process, these brain regions showed a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.
Affiliation(s)
- Benjamin Kreifelts
- Department of Psychiatry and Psychotherapy, University of Tuebingen, Osianderstrasse 24, 72076 Tuebingen, Germany.
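A small sketch of the two relationships described in the abstract above: the behavioural gain of the audiovisual condition over the better unimodal condition, and its linear relation to the BOLD response of an integration region. All numbers are simulated, and the region-of-interest estimates are assumed to be pre-extracted; nothing here reproduces the study's actual analysis.

```python
# Illustrative only: accuracies and ROI betas are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 24                                        # assumed number of participants

acc_audio = rng.uniform(0.60, 0.80, n)        # classification accuracy, auditory
acc_visual = rng.uniform(0.60, 0.80, n)       # classification accuracy, visual
acc_av = np.maximum(acc_audio, acc_visual) + rng.uniform(0.05, 0.20, n)

# Behavioural gain: audiovisual accuracy relative to the better unimodal condition.
av_gain = acc_av - np.maximum(acc_audio, acc_visual)

# Assumed pSTG response (beta estimate) during the bimodal condition,
# simulated so that it scales with the behavioural gain.
pstg_beta = 5.0 * av_gain + rng.normal(0, 0.2, n)

r, p = stats.pearsonr(av_gain, pstg_beta)
print(f"gain vs pSTG response: r = {r:.2f}, p = {p:.3g}")
```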
|
96
|
Abstract
Affective pictures trigger attentional responses in humans but very little is known about the processing of affective environmental sounds. Here, we used an oddball event-related potential paradigm to determine the saliency of unpleasant sounds presented among affectively neutral sounds. Participants performed a one-back task while listening to pseudo-randomized sound sequences comprising 70% neutral sounds, 15% unpleasant sounds of matched peak intensity, and 15% louder neutral sounds. Louder neutral sounds elicited a larger N1 component and a significant P3a variation with a central distribution. Unpleasant sounds did not affect early components but elicited a significant frontocentral P3a modulation. We conclude that affective environmental sounds spontaneously capture human attention but fail to modulate early perceptual processing when sound peak intensity is controlled.
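The oddball design described above (70% neutral standards, 15% intensity-matched unpleasant deviants, 15% louder neutral deviants) can be illustrated with a short sequence generator. The sketch below is an assumption-laden illustration: the sequence length and the no-two-deviants-in-a-row constraint are illustrative choices, not parameters reported in the abstract.

```python
# Illustrative stimulus-sequence generator; the length and adjacency
# constraint are assumptions, not reported parameters.
import random

def make_oddball_sequence(n_trials=400, seed=0):
    """Return a pseudo-randomized list of trial labels with 70% neutral
    standards, 15% unpleasant deviants, 15% loud-neutral deviants, and
    no two deviants in immediate succession."""
    rng = random.Random(seed)
    n_unpleasant = round(0.15 * n_trials)
    n_loud = round(0.15 * n_trials)
    n_standard = n_trials - n_unpleasant - n_loud
    deviants = ["unpleasant"] * n_unpleasant + ["loud_neutral"] * n_loud
    rng.shuffle(deviants)
    # Place each deviant into a distinct gap between (or around) the standards,
    # so that two deviants can never occur back to back.
    gaps = rng.sample(range(n_standard + 1), len(deviants))
    deviant_by_gap = dict(zip(gaps, deviants))
    sequence = []
    for gap in range(n_standard + 1):
        if gap in deviant_by_gap:
            sequence.append(deviant_by_gap[gap])
        if gap < n_standard:
            sequence.append("neutral")
    return sequence

sequence = make_oddball_sequence()
print(sequence[:12])
print({label: sequence.count(label) for label in set(sequence)})
```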
|
97
|
Wildgruber D, Ackermann H, Kreifelts B, Ethofer T. Cerebral processing of linguistic and emotional prosody: fMRI studies. Prog Brain Res 2006; 156:249-68. [PMID: 17015084 DOI: 10.1016/s0079-6123(06)56013-3] [Citation(s) in RCA: 196] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
During acoustic communication in humans, information about a speaker's emotional state is predominantly conveyed by modulation of the tone of voice (emotional or affective prosody). Based on lesion data, a right hemisphere superiority for cerebral processing of emotional prosody has been assumed. However, the available clinical studies do not yet provide a coherent picture with respect to interhemispheric lateralization effects of prosody recognition and intrahemispheric localization of the respective brain regions. To further delineate the cerebral network engaged in the perception of emotional tone, a series of experiments was carried out based upon functional magnetic resonance imaging (fMRI). The findings obtained from these investigations allow for the separation of three successive processing stages during recognition of emotional prosody: (1) extraction of suprasegmental acoustic information, predominantly subserved by right-sided primary and higher-order acoustic regions; (2) representation of meaningful suprasegmental acoustic sequences within posterior aspects of the right superior temporal sulcus; (3) explicit evaluation of emotional prosody at the level of the bilateral inferior frontal cortex. Moreover, implicit processing of affective intonation seems to be bound to subcortical regions mediating the automatic induction of specific emotional reactions, such as activation of the amygdala in response to fearful stimuli. As concerns lower-level processing of the underlying suprasegmental acoustic cues, linguistic and emotional prosody seem to share the same right hemisphere neural resources. Explicit judgment of linguistic aspects of speech prosody, however, appears to be linked to left-sided language areas, whereas bilateral orbitofrontal cortex has been found to be involved in the explicit evaluation of emotional prosody. These differences in hemispheric lateralization might explain why specific impairments in nonverbal emotional communication subsequent to focal brain lesions are relatively rare clinical observations compared with the more frequent aphasic disorders.
Affiliation(s)
- D Wildgruber
- Department of Psychiatry, University of Tübingen, Osianderstr. 24, 72076 Tübingen, Germany.
|