1. Cai CQ, Lavan N, Chen SHY, Wang CZX, Ozturk OC, Chiu RMY, Gilbert SJ, White SJ, Scott SK. Mapping the differential impact of spontaneous and conversational laughter on brain and mind: an fMRI study in autism. Cereb Cortex 2024;34:bhae199. PMID: 38752979; PMCID: PMC11097909; DOI: 10.1093/cercor/bhae199.
Abstract
Spontaneous and conversational laughter are important socio-emotional communicative signals. Neuroimaging findings suggest that non-autistic people engage in mentalizing to understand the meaning behind conversational laughter. Autistic people may thus face specific challenges in processing conversational laughter, due to their mentalizing difficulties. Using fMRI, we explored neural differences during implicit processing of these two types of laughter. Autistic and non-autistic adults passively listened to funny words, followed by spontaneous laughter, conversational laughter, or noise-vocoded vocalizations. Behaviourally, words plus spontaneous laughter were rated as funnier than words plus conversational laughter, with no difference between groups. However, neuroimaging results showed that non-autistic adults exhibited greater medial prefrontal cortex activation when listening to words plus conversational laughter than to words plus spontaneous laughter, whereas autistic adults showed no difference in medial prefrontal cortex activity between the two laughter types. Our findings suggest a crucial role for the medial prefrontal cortex in understanding socio-emotionally ambiguous laughter via mentalizing. Our study also highlights the possibility that autistic people may face challenges in understanding the laughter we frequently encounter in everyday life, especially conversational laughter, which carries complex meaning and social ambiguity, potentially leading to social vulnerability. We therefore advocate for clearer communication with autistic people.
Affiliation(s)
- Ceci Qing Cai: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- Nadine Lavan: Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Queen Mary University of London, London E1 4NS, United Kingdom
- Sinead H Y Chen: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- Claire Z X Wang: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- Ozan Cem Ozturk: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- Roni Man Ying Chiu: Department of Social and Behavioural Sciences, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR
- Sam J Gilbert: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- Sarah J White: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
2. Nagarajan G, Matrov D, Pearson AC, Yen C, Bradley SP, Chudasama Y. Cingulate cortex shapes early postnatal development of social vocalizations. bioRxiv [Preprint] 2024:2024.02.17.580738. PMID: 38529485; PMCID: PMC10962701; DOI: 10.1101/2024.02.17.580738.
Abstract
The social dynamics of vocal behavior have major implications for social development in humans. We asked whether early-life damage to the anterior cingulate cortex (ACC), which is closely associated with socioemotional regulation more broadly, impacts the normal development of vocal expression. The common marmoset provides a unique opportunity to study the developmental trajectory of vocal behavior and to track the consequences of early brain damage on aspects of social vocalization. We created ACC lesions in neonatal marmosets and compared their pattern of vocalization to that of age-matched controls throughout the first 6 weeks of life. We found that while early-life ACC lesions had little influence on the production of vocal calls, the developmental changes in the quality of social contact calls and their associated syntactic and acoustic characteristics were compromised. These animals made fewer social contact calls, and those they did produce were short, loud, and monotonic. We further determined that damage to the ACC in infancy results in a permanent alteration of downstream brain areas known to be involved in social vocalization, such as the amygdala and periaqueductal gray. Namely, in the adult, these structures exhibited diminished GABA-immunoreactivity relative to control animals, likely reflecting disruption of the normal inhibitory balance following ACC deafferentation. Together, these data indicate that the normal development of social vocal behavior depends on the ACC and its interaction with other areas of the vocal network during early life.
3. Whitehead JC, Spiousas I, Armony JL. Individual differences in the evaluation of ambiguous visual and auditory threat-related expressions. Eur J Neurosci 2024;59:370-393. PMID: 38185821; DOI: 10.1111/ejn.16220.
Abstract
This study investigated the neural correlates of the judgement of auditory and visual ambiguous threat-related information, and the influence of state anxiety on this process. Healthy subjects were scanned using a fast, high-resolution functional magnetic resonance imaging (fMRI) multiband sequence while they performed a two-alternative forced-choice emotion judgement task on faces and vocal utterances conveying explicit anger or fear, as well as ambiguous ones. Critically, the latter were specific to each subject, obtained through a morphing procedure and selected prior to scanning following a perceptual decision-making task. Behavioural results confirmed greater task difficulty for the subject-specific ambiguous stimuli and also revealed a judgement bias for visual fear and, to a lesser extent, for auditory anger. Imaging results showed increased activity in regions of the salience and frontoparietal control networks, and deactivation in areas of the default mode network, for ambiguous relative to explicit expressions. In contrast, the right amygdala responded more strongly to explicit stimuli. Interestingly, its response to the same ambiguous stimulus depended on the subjective judgement of the expression. Finally, we found that behavioural and neural differences between ambiguous and explicit expressions decreased as a function of state anxiety scores. Taken together, our results show that behavioural and brain responses to emotional expressions are determined not only by emotional clarity but also by modality and by the subjects' perception of the emotion expressed, and that some of these responses are modulated by state anxiety levels.
Affiliation(s)
- Jocelyne C Whitehead: Human Neuroscience, Douglas Mental Health University Institute, Verdun, Quebec, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
- Ignacio Spiousas: BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Laboratorio Interdisciplinario del Tiempo y la Experiencia (LITERA), CONICET, Universidad de San Andrés, Victoria, Argentina
- Jorge L Armony: Human Neuroscience, Douglas Mental Health University Institute, Verdun, Quebec, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Laboratorio Interdisciplinario del Tiempo y la Experiencia (LITERA), CONICET, Universidad de San Andrés, Victoria, Argentina; Department of Psychiatry, McGill University, Montreal, Quebec, Canada
4. Talwar S, Barbero FM, Calce RP, Collignon O. Automatic Brain Categorization of Discrete Auditory Emotion Expressions. Brain Topogr 2023;36:854-869. PMID: 37639111; PMCID: PMC10522533; DOI: 10.1007/s10548-023-00983-8.
Abstract
Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic (EEG) recordings in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness, and sadness) at 2.5 Hz (stimulus duration of 350 ms, with a 50 ms silent gap between stimuli). Importantly, unknown to the participants, a specific emotion category appeared at a target presentation rate of 0.83 Hz, which would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other emotion categories and generalizes across heterogeneous exemplars of the target category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity, and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and early auditory peripheral processing, computed via simulation of the cochlear response. We observed that, in addition to the responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the intact sequence in comparison to the scrambled sequence. The greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the fearful and happy vocalization presentation rates elicited different topographies and different temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm reveals the brain's ability to automatically categorize non-verbal vocal emotion expressions objectively (at a predefined frequency of interest), without requiring a behavioral response, rapidly (within a few minutes of recording time), and robustly (with a high signal-to-noise ratio), making it a useful tool for studying vocal emotion processing and auditory categorization in general, and in populations where behavioral assessments are more challenging.
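As a rough illustration of the frequency-tagging logic described above (a simulation only; the sampling rate, durations, and amplitudes are assumptions, not the study's parameters), a response evoked by every stimulus appears at 2.5 Hz in the EEG spectrum, while a response confined to the target category, occurring on every third stimulus, adds a peak at 0.83 Hz:

```python
import numpy as np

fs = 250.0                     # sampling rate (Hz), assumed
base_rate = 2.5                # general stimulation rate (Hz)
target_rate = base_rate / 3    # target-emotion rate (~0.83 Hz)
t = np.arange(0, 60, 1 / fs)   # 60 s of simulated recording

# Simulated EEG: a response to every stimulus, a weaker response that
# occurs only at the target-category rate, and additive noise.
eeg = (np.sin(2 * np.pi * base_rate * t)
       + 0.4 * np.sin(2 * np.pi * target_rate * t)
       + np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f, half_width=0.25):
    """Amplitude at frequency f relative to the mean of neighbouring bins."""
    peak = spectrum[np.argmin(np.abs(freqs - f))]
    neighbours = spectrum[(np.abs(freqs - f) > 0.05)
                          & (np.abs(freqs - f) < half_width)]
    return peak / neighbours.mean()

print(f"SNR at {base_rate} Hz: {snr_at(base_rate):.1f}")          # stimulation rate
print(f"SNR at {target_rate:.2f} Hz: {snr_at(target_rate):.1f}")  # target rate
```

In real data the same logic applies, except that the target-rate peak emerges only if the brain treats the target category as distinct; in the scrambled control sequence it should be absent.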
Affiliation(s)
- Siddharth Talwar: Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Francesca M Barbero: Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Roberta P Calce: Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Olivier Collignon: Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
5. Jia G, Bai S, Lin Y, Wang X, Zhu L, Lyu C, Sun G, An K, Roe AW, Li X, Gao L. Representation of conspecific vocalizations in amygdala of awake marmosets. Natl Sci Rev 2023;10:nwad194. PMID: 37818111; PMCID: PMC10561708; DOI: 10.1093/nsr/nwad194.
Abstract
Human speech and animal vocalizations are important for social communication and animal survival. Neurons in the auditory pathway respond to a wide range of sounds, from elementary acoustic features to complex vocalizations. In some species, responses relevant to social communication are highly specific to individual conspecific calls, encoding both the sound pattern and the biological information embedded in it. We conducted single-unit recordings in the amygdala of awake marmosets and presented calls used in marmoset communication, calls of other species, and calls from specific marmoset individuals. We found that some neurons (47/262) in the amygdala distinguished 'Phee' calls from vocalizations of other animals and from other types of marmoset vocalizations. Interestingly, a subset of Phee-responsive neurons (22/47) also exhibited selectivity to one of the three Phees from two different 'caller' marmosets. Our findings suggest that, while it has traditionally been considered a key structure of the limbic system, the amygdala also represents a critical stage of socially relevant auditory perceptual processing.
Affiliation(s)
- Guoqiang Jia: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Siyi Bai: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China; Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Yingxu Lin: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China; Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xiaohui Wang: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China; Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Lin Zhu: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Chenfei Lyu: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Guanglong Sun: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Kang An: College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201418, China
- Anna Wang Roe: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China; Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xinjian Li: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China; Key Laboratory of Medical Neurobiology of Zhejiang Province, Zhejiang University School of Medicine, Hangzhou 310020, China
- Lixia Gao: Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China; Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
6. Putkinen V, Nazari-Farsani S, Karjalainen T, Santavirta S, Hudson M, Seppälä K, Sun L, Karlsson HK, Hirvonen J, Nummenmaa L. Pattern recognition reveals sex-dependent neural substrates of sexual perception. Hum Brain Mapp 2023;44:2543-2556. PMID: 36773282; PMCID: PMC10028630; DOI: 10.1002/hbm.26229.
Abstract
Sex differences in brain activity evoked by sexual stimuli remain elusive despite robust evidence for stronger enjoyment of and interest toward sexual stimuli in men than in women. To test whether visual sexual stimuli evoke different brain activity patterns in men and women, we measured hemodynamic brain activity induced by visual sexual stimuli in two experiments with 91 subjects (46 males). In one experiment, the subjects viewed sexual and nonsexual film clips, and dynamic annotations for nudity in the clips were used to predict hemodynamic activity. In the second experiment, the subjects viewed sexual and nonsexual pictures in an event-related design. Men showed stronger activation than women in the visual and prefrontal cortices and dorsal attention network in both experiments. Furthermore, using multivariate pattern classification we could accurately predict the sex of the subject on the basis of the brain activity elicited by the sexual stimuli. The classification generalized across the experiments indicating that the sex differences were task-independent. Eye tracking data obtained from an independent sample of subjects (N = 110) showed that men looked longer than women at the chest area of the nude female actors in the film clips. These results indicate that visual sexual stimuli evoke discernible brain activity patterns in men and women which may reflect stronger attentional engagement with sexual stimuli in men.
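A minimal sketch of this style of between-subject decoding, assuming a scikit-learn pipeline with placeholder data (the feature extraction, classifier choice, and cross-validation scheme here are illustrative, not the authors' pipeline):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Hypothetical data: one vector of voxelwise responses to the sexual
# stimuli per subject, labeled with the subject's sex.
rng = np.random.default_rng(0)
X = rng.standard_normal((91, 5000))   # 91 subjects x 5000 voxels
y = rng.integers(0, 2, size=91)       # 0 = female, 1 = male

# Standardize features, then fit a linear support vector classifier.
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10_000))
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"Mean decoding accuracy: {scores.mean():.2f}")  # ~0.5 for random data
```

With random placeholder features the accuracy hovers at chance; above-chance cross-validated accuracy on real data is what licenses the claim that stimulus-evoked patterns carry sex information.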
Affiliation(s)
- Vesa Putkinen: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland
- Sanaz Nazari-Farsani: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland
- Tomi Karjalainen: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland
- Severi Santavirta: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland
- Matthew Hudson: Turku PET Centre, University of Turku, Turku, Finland; School of Psychology, University of Plymouth, Plymouth, UK
- Kerttu Seppälä: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland; Department of Medical Physics, Turku University Hospital, Turku, Finland
- Lihua Sun: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland
- Henry K Karlsson: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland
- Jussi Hirvonen: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland; Department of Radiology, Turku University Hospital, Turku, Finland
- Lauri Nummenmaa: Turku PET Centre, University of Turku, Turku, Finland; Turku University Hospital, Turku, Finland; Department of Psychology, University of Turku, Turku, Finland
7. Nummenmaa L, Malèn T, Nazari-Farsani S, Seppälä K, Sun L, Santavirta S, Karlsson HK, Hudson M, Hirvonen J, Sams M, Scott S, Putkinen V. Decoding brain basis of laughter and crying in natural scenes. Neuroimage 2023;273:120082. PMID: 37030414; DOI: 10.1016/j.neuroimage.2023.120082.
Abstract
Laughter and crying are universal signals of prosociality and distress, respectively. Here we investigated the functional brain basis of perceiving laughter and crying using a naturalistic functional magnetic resonance imaging (fMRI) approach. We measured hemodynamic brain activity evoked by laughter and crying in three experiments with 100 subjects each, in which the subjects (i) viewed a 20-minute medley of short video clips, (ii) viewed 30 minutes of a full-length feature film, and (iii) listened to 15 minutes of a radio play, all of which contained bursts of laughter and crying. The intensity of laughing and crying in the videos and the radio play was annotated by independent observers, and the resulting time series were used to predict hemodynamic activity during laughter and crying episodes. Multivariate pattern analysis (MVPA) was used to test for regional selectivity in the activations evoked by laughter and crying. Laughter induced widespread activity in the ventral visual cortex and in superior and middle temporal and motor cortices. Crying activated the thalamus, the cingulate cortex along its anterior-posterior axis, the insula, and the orbitofrontal cortex. Both laughter and crying could be decoded accurately (66-77%, depending on the experiment) from the BOLD signal, and the voxels contributing most significantly to classification were located in the superior temporal cortex. These results suggest that perceiving laughter and crying engages distinct neural networks whose activity suppresses each other to manage appropriate behavioral responses to others' bonding and distress signals.
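The annotation-based regression can be sketched as follows (a minimal illustration with placeholder data; the HRF parameters and fitting details are assumptions, not the authors' pipeline): the intensity time series is convolved with a canonical hemodynamic response function and entered as a regressor in a voxelwise general linear model.

```python
import numpy as np
from scipy.stats import gamma

tr = 2.0                                   # repetition time (s), assumed
n_scans = 300

# Hypothetical annotation: laughter intensity, one value per scan.
laughter = np.random.rand(n_scans)

# Canonical double-gamma HRF sampled at the TR (SPM-style parameters).
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.sum()

# Convolve the annotation with the HRF and trim to the scan length.
regressor = np.convolve(laughter, hrf)[:n_scans]

# Fit a one-regressor GLM (plus intercept) to one voxel's time series.
y = np.random.randn(n_scans)               # placeholder BOLD signal
X = np.column_stack([regressor, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated laughter beta: {beta[0]:.3f}")
```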
8. Steiner F, Fernandez N, Dietziker J, Stämpfli P, Seifritz E, Rey A, Frühholz S. Affective speech modulates a cortico-limbic network in real time. Prog Neurobiol 2022;214:102278. DOI: 10.1016/j.pneurobio.2022.102278.
9. Interaction effects of the 5-HTT and MAOA-uVNTR gene variants on pre-attentive EEG activity in response to threatening voices. Commun Biol 2022;5:340. PMID: 35396540; PMCID: PMC8993814; DOI: 10.1038/s42003-022-03297-w.
Abstract
Both the serotonin transporter polymorphism (5-HTTLPR) and the monoamine oxidase A gene variant (MAOA-uVNTR) are considered genetic contributors to anxiety-related symptomatology and aggressive behavior. Nevertheless, an interaction between these genes and the pre-attentive processing of threatening voices, a biological marker for anxiety-related conditions, has not yet been assessed. Among the participants in the study with valid genotyping and electroencephalographic (EEG) data (N = 140), here we show that men with low-activity MAOA-uVNTR variants who were not homozygous for the 5-HTTLPR short (s) allele (n = 11) had significantly larger fearful mismatch negativity (MMN) amplitudes, driven by significantly larger event-related potentials (ERPs) to fearful stimuli, than men with high-activity MAOA-uVNTR variants (n = 20). This contrasts with previous studies, in which significantly reduced fearful MMN amplitudes, driven by increased ERPs to neutral stimuli, were observed in those homozygous for the 5-HTT s-allele. In conclusion, using genetic, neurophysiological, and behavioral measurements, this study illustrates how the intricate interaction between the 5-HTT and MAOA-uVNTR variants affects threat processing and social cognition in male individuals (n = 62).
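For reference, the fearful mismatch negativity reported here follows the standard oddball convention; assuming the usual computation (not spelled out in this abstract), it is the difference wave between the ERPs evoked by deviant (fearful) and standard (neutral) stimuli:

```latex
\mathrm{MMN}(t) = \mathrm{ERP}_{\mathrm{deviant}}(t) - \mathrm{ERP}_{\mathrm{standard}}(t)
```

A larger (more negative) fearful MMN thus reflects a stronger pre-attentive response to the fearful deviants relative to the neutral standards.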
10. Domínguez-Borràs J, Vuilleumier P. Amygdala function in emotion, cognition, and behavior. Handb Clin Neurol 2022;187:359-380. PMID: 35964983; DOI: 10.1016/b978-0-12-823493-8.00015-8.
Abstract
The amygdala is a core structure in the anterior medial temporal lobe, with an important role in several brain functions involving memory, emotion, perception, social cognition, and even awareness. As a key brain structure for saliency detection, it triggers and controls widespread modulatory signals onto multiple areas of the brain, with a great impact on numerous aspects of adaptive behavior. Here we discuss the neural mechanisms underlying these functions, as established by animal and human research, including insights provided in both healthy and pathological conditions.
Affiliation(s)
- Judith Domínguez-Borràs: Department of Clinical Psychology and Psychobiology & Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Patrik Vuilleumier: Department of Neuroscience and Center for Affective Sciences, University of Geneva, Geneva, Switzerland
11. Holz N, Larrouy-Maestri P, Poeppel D. The paradoxical role of emotional intensity in the perception of vocal affect. Sci Rep 2021;11:9663. PMID: 33958630; PMCID: PMC8102532; DOI: 10.1038/s41598-021-88431-0.
Abstract
Vocalizations including laughter, cries, moans, or screams constitute a potent source of information about the affective states of others. It is typically conjectured that the higher the intensity of the expressed emotion, the better the classification of affective information. However, attempts to map the relation between affective intensity and inferred meaning have produced controversial results. Using a newly developed stimulus database of carefully validated non-speech expressions spanning the entire intensity spectrum from low to peak, we show that this intuition is false. In three experiments (N = 90), we demonstrate that intensity in fact plays a paradoxical role. Participants rated and classified the authenticity, intensity, and emotion, as well as the valence and arousal, of a wide range of vocalizations. Listeners are clearly able to infer expressed intensity and arousal; in contrast, and surprisingly, emotion category and valence have a perceptual sweet spot: moderate and strong emotions are clearly categorized, but peak emotions are maximally ambiguous. This finding, which converges with related observations from visual experiments, raises interesting theoretical challenges for the emotion communication literature.
Affiliation(s)
- N Holz: Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- P Larrouy-Maestri: Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany; Max Planck NYU Center for Language, Music, and Emotion, Frankfurt/M, Germany
- D Poeppel: Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany; Max Planck NYU Center for Language, Music, and Emotion, Frankfurt/M, Germany; Department of Psychology, New York University, New York, NY, USA
12. Michael V, Goffinet J, Pearson J, Wang F, Tschida K, Mooney R. Circuit and synaptic organization of forebrain-to-midbrain pathways that promote and suppress vocalization. eLife 2020;9:e63493. PMID: 33372655; PMCID: PMC7793624; DOI: 10.7554/elife.63493.
Abstract
Animals vocalize only in certain behavioral contexts, but the circuits and synapses through which forebrain neurons trigger or suppress vocalization remain unknown. Here, we used transsynaptic tracing to identify two populations of inhibitory neurons that lie upstream of neurons in the periaqueductal gray (PAG) that gate the production of ultrasonic vocalizations (USVs) in mice (i.e. PAG-USV neurons). Activating PAG-projecting neurons in the preoptic area of the hypothalamus (POA-PAG neurons) elicited USV production in the absence of social cues. In contrast, activating PAG-projecting neurons in the central-medial boundary zone of the amygdala (AmgC/M-PAG neurons) transiently suppressed USV production without disrupting non-vocal social behavior. Optogenetics-assisted circuit mapping in brain slices revealed that POA-PAG neurons directly inhibit PAG interneurons, which in turn inhibit PAG-USV neurons, whereas AmgC/M-PAG neurons directly inhibit PAG-USV neurons. These experiments identify two major forebrain inputs to the PAG that trigger and suppress vocalization, respectively, while also establishing the synaptic mechanisms through which these neurons exert opposing behavioral effects.
Affiliation(s)
- Valerie Michael: Department of Neurobiology, Duke University Medical Center, Durham, United States
- Jack Goffinet: Department of Neurobiology, Duke University Medical Center, Durham, United States
- John Pearson: Department of Neurobiology, Duke University Medical Center, Durham, United States; Department of Biostatistics & Bioinformatics, Duke University Medical Center, Durham, United States
- Fan Wang: Department of Neurobiology, Duke University Medical Center, Durham, United States
- Richard Mooney: Department of Neurobiology, Duke University Medical Center, Durham, United States
13. Berntsen MB, Cooper NR, Romei V. Emotional Valence Modulates Low Beta Suppression and Recognition of Social Interactions. J Psychophysiol 2020. DOI: 10.1027/0269-8803/a000251.
Abstract
Emotional valence may serve evolutionarily adaptive purposes, as negative stimuli can be related to survival against threat and positive stimuli to facilitating relationships. This can be seen in the different impact positive and negative stimuli have on human health and well-being, and in the valence-specific cortical activity and neurophysiological patterns reported; for example, negative stimuli are processed more rapidly than positive ones. Valence-specific patterns are affected by individual differences and personality traits such as empathy, where levels of empathy relate to different reactivity patterns to valence. Here we investigated the effect of valence on neurophysiological responses to, and interpretation of, social interactions depicted by point-light biological motion (PLBM) displays. The meaning of each PLBM display is revealed as the sequence unfolds and is therefore not readily available for snap assessments such as fight-or-flight responses. We compared electroencephalogram (EEG) reactivity during observation of the displays between individuals with low, moderate, or high levels of empathy. Results indicated that positive displays induced significantly larger suppression in lower beta (13–20 Hz) compared to control displays, while negative displays revealed no difference in suppression compared to their scrambled versions. However, no difference between positive and negative displays was observed, suggesting that the rapid processing of negative displays may have been minimized by revealing meaning more slowly. Positive displays were interpreted more accurately, while levels of empathy modulated neither neurophysiological responses nor interpretation, suggesting that, under these conditions, empathy did not influence the way in which valence was processed or interpreted.
Affiliation(s)
- Monica B. Berntsen: Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK
- Nicholas R. Cooper: Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK
- Vincenzo Romei: Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK; Department of Psychology and Centre for Studies and Researches in Cognitive Neuroscience, Cesena Campus, University of Bologna, Cesena, Italy
14. Affect-biased attention and predictive processing. Cognition 2020;203:104370. DOI: 10.1016/j.cognition.2020.104370.
15. Beaurenaut M, Tokarski E, Dezecache G, Grèzes J. The 'Threat of Scream' paradigm: a tool for studying sustained physiological and subjective anxiety. Sci Rep 2020;10:12496. PMID: 32719491; PMCID: PMC7385655; DOI: 10.1038/s41598-020-68889-0.
Abstract
Progress in understanding the emergence of pathological anxiety depends on the availability of paradigms effective in inducing anxiety in a simple, consistent and sustained manner. The Threat-of-Shock paradigm has typically been used to elicit anxiety, but poses ethical issues when testing vulnerable populations. Moreover, it is not clear from past studies whether anxiety can be sustained in experiments of longer duration. Here, we present empirical support for an alternative approach, the 'Threat-of-Scream' paradigm, in which shocks are replaced by screams. In two studies, participants were repeatedly exposed to blocks in which they were at risk of hearing aversive screams at any time vs. blocks in which they were safe from screams. Contrary to previous 'Threat-of-Scream' studies, we ensured that our screams were neither harmful nor intolerable by presenting them at low intensity. We found higher subjective reports of anxiety, higher skin conductance levels, and a positive correlation between the two measures, in threat compared to safe blocks. These results were reproducible and we found no significant change over time. The unpredictable delivery of low-intensity screams could become an essential part of the psychology toolkit, particularly when investigating the impact of anxiety on a diversity of cognitive functions and populations.
Affiliation(s)
- Morgan Beaurenaut: Laboratoire de Neurosciences Cognitives et Computationnelles, ENS, PSL Research University, INSERM, Département d'études Cognitives, Paris, France
- Elliot Tokarski: Laboratoire de Neurosciences Cognitives et Computationnelles, ENS, PSL Research University, INSERM, Département d'études Cognitives, Paris, France
- Guillaume Dezecache: Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK; Université Clermont Auvergne, CNRS, LAPSCO, Clermont-Ferrand, France
- Julie Grèzes: Laboratoire de Neurosciences Cognitives et Computationnelles, ENS, PSL Research University, INSERM, Département d'études Cognitives, Paris, France
16. Adam-Darque A, Pittet MP, Grouiller F, Rihs TA, Leuchter RHV, Lazeyras F, Michel CM, Hüppi PS. Neural Correlates of Voice Perception in Newborns and the Influence of Preterm Birth. Cereb Cortex 2020;30:5717-5730. PMID: 32518940; DOI: 10.1093/cercor/bhaa144.
Abstract
Maternal voice is a highly relevant stimulus for newborns. In adults, voice processing occurs in specific brain regions. Voice-specific brain areas in newborns, and the relevance of early vocal exposure for these networks, have not been defined. This study investigates voice perception in newborns and the impact of prematurity on the underlying cerebral processes. Functional magnetic resonance imaging (fMRI) and high-density electroencephalography (EEG) were used to explore brain responses to maternal and stranger female voices in full-term newborns and preterm infants at term-equivalent age (TEA). fMRI results and an EEG oddball paradigm showed enhanced voice processing in preterm infants at TEA compared with full-term newborns. Preterm infants showed additional cortical regions involved in voice processing in fMRI, and a late mismatch response to the maternal voice, considered a first trace of a recognition process based on memory representation. Full-term newborns showed increased cerebral activity in response to the stranger's voice. Results from fMRI, oddball, and standard auditory EEG paradigms highlighted important change-detection responses to novelty after birth. These findings suggest that the main components of the adult voice-processing networks emerge early in development. Moreover, early postnatal exposure to voices in premature infants might enhance their capacity to process voices.
Affiliation(s)
- Alexandra Adam-Darque: Division of Development and Growth, Department of Pediatrics, Geneva University Hospitals, 1205 Geneva, Switzerland; Laboratory of Cognitive Neurorehabilitation, Division of Neurorehabilitation, Department of Clinical Neuroscience, Geneva University Hospitals, 1205 Geneva, Switzerland
- Marie P Pittet: Division of Development and Growth, Department of Pediatrics, Geneva University Hospitals, 1205 Geneva, Switzerland
- Frédéric Grouiller: Department of Radiology and Medical Informatics, University of Geneva, 1205 Geneva, Switzerland; Swiss Centre for Affective Sciences, University of Geneva, 1205 Geneva, Switzerland
- Tonia A Rihs: Functional Brain Mapping Laboratory, Department of Neurosciences, University of Geneva, 1205 Geneva, Switzerland
- Russia Ha-Vinh Leuchter: Division of Development and Growth, Department of Pediatrics, Geneva University Hospitals, 1205 Geneva, Switzerland
- François Lazeyras: Department of Radiology and Medical Informatics, University of Geneva, 1205 Geneva, Switzerland
- Christoph M Michel: Functional Brain Mapping Laboratory, Department of Neurosciences, University of Geneva, 1205 Geneva, Switzerland
- Petra S Hüppi: Division of Development and Growth, Department of Pediatrics, Geneva University Hospitals, 1205 Geneva, Switzerland
17. Young AW, Frühholz S, Schweinberger SR. Face and Voice Perception: Understanding Commonalities and Differences. Trends Cogn Sci 2020;24:398-410. DOI: 10.1016/j.tics.2020.02.001.
18. An integrative analysis of 5HTT-mediated mechanism of hyperactivity to non-threatening voices. Commun Biol 2020;3:113. PMID: 32157156; PMCID: PMC7064530; DOI: 10.1038/s42003-020-0850-3.
Abstract
The tonic model delineating the modulatory effect of the serotonin transporter polymorphism (5-HTTLPR) on anxiety points towards a universal underlying mechanism involving an elevated baseline level of arousal even to non-threatening stimuli. However, to our knowledge, this mechanism has never been observed in non-clinical cohorts exhibiting high anxiety. Moreover, empirical support for this association is mixed, potentially because of publication bias combined with relatively small sample sizes. Hence, how the 5-HTTLPR modulates neural correlates remains controversial. Here we show that 5-HTTLPR short-allele carriers had significantly increased baseline ERPs and reduced fearful MMN, phenomena which can nevertheless be reversed by acute anxiolytic treatment. This provides evidence that the 5-HTT affects the automatic processing of threatening and non-threatening voices, impacts broadly on social cognition, and establishes a heightened baseline arousal level as the universal underlying neural mechanism for anxiety-related susceptibilities, functioning as a spectrum-like distribution from non-patients with high trait anxiety to anxiety patients. Chen et al. apply a multi-level approach to show that serotonin signaling modulates neuronal responses to both threatening and non-threatening voices even at the pre-attentive stage. They show that 5-HTTLPR short-allele carriers had higher baseline event-related potentials and lower fearful mismatch negativity, which could be reversed by acute anxiolytic treatment.
19. Lin H, Müller-Bardorff M, Gathmann B, Brieke J, Mothes-Lasch M, Bruchmann M, Miltner WHR, Straube T. Stimulus arousal drives amygdalar responses to emotional expressions across sensory modalities. Sci Rep 2020;10:1898. PMID: 32024891; PMCID: PMC7002496; DOI: 10.1038/s41598-020-58839-1.
Abstract
The factors that drive amygdalar responses to emotionally significant stimuli remain a matter of debate; in particular, the proneness of the amygdala to respond to negatively valenced stimuli remains controversial. Furthermore, it is uncertain whether the amygdala responds in a modality-general fashion or whether modality-specific idiosyncrasies exist. The present functional magnetic resonance imaging (fMRI) study therefore systematically investigated amygdalar responses to the valence and arousal of emotional expressions across visual and auditory modalities. During scanning, participants performed a gender judgment task while prosodic and facial emotional expressions were presented. The stimuli varied in valence and arousal, comprising neutral, happy, and angry expressions of high and low emotional intensity. Results demonstrate amygdalar activation as a function of stimulus arousal and the associated emotional intensity, regardless of stimulus valence. Furthermore, arousal-driven amygdalar responding did not depend on the modality (visual or auditory) of the emotional expressions. Thus, the current results are consistent with the notion that the amygdala codes general stimulus relevance across visual and auditory modalities, irrespective of valence. In addition, whole-brain analyses revealed that effects in visual and auditory areas were driven mainly by highly intense emotional facial and vocal stimuli, respectively, suggesting modality-specific representations of emotional expressions in auditory and visual cortices.
Affiliation(s)
- Huiyan Lin: Institute of Applied Psychology, School of Public Administration, Guangdong University of Finance, 510521 Guangzhou, China; Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Muenster, Germany
- Miriam Müller-Bardorff: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Muenster, Germany
- Bettina Gathmann: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Muenster, Germany
- Jaqueline Brieke: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Muenster, Germany
- Martin Mothes-Lasch: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Muenster, Germany
- Maximilian Bruchmann: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Muenster, Germany
- Wolfgang H R Miltner: Department of Clinical Psychology, Friedrich Schiller University of Jena, 07743 Jena, Germany
- Thomas Straube: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Muenster, Germany
20. What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019;209:116509. PMID: 31899288; DOI: 10.1016/j.neuroimage.2019.116509.
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
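The abstract does not give the form of the index; a common formulation (an assumption here, not taken from the paper) contrasts activation measures in homologous left- and right-hemisphere regions:

```latex
\mathrm{LI} = \frac{A_{L} - A_{R}}{A_{L} + A_{R}}, \qquad \mathrm{LI} \in [-1, 1]
```

where A_L and A_R are, for example, suprathreshold voxel counts or summed beta values; LI > 0 indicates left-lateralization and LI < 0 right-lateralization.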
21. Gao C, Weber CE, Shinkareva SV. The brain basis of audiovisual affective processing: Evidence from a coordinate-based activation likelihood estimation meta-analysis. Cortex 2019;120:66-77. DOI: 10.1016/j.cortex.2019.05.016.
22. Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019;224:2487-2504. DOI: 10.1007/s00429-019-01912-x.
23. Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019;131:9-24. PMID: 31158367; DOI: 10.1016/j.neuropsychologia.2019.05.027.
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory- and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented as voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual, and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the underlying mechanism of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways and their convergence within the limbic system.
Affiliation(s)
- Judith Domínguez-Borràs: Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Raphaël Guex: Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Guillaume Legendre: Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Laurent Spinelli: Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Stephan Moratti: Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain
- Sascha Frühholz: Department of Psychology, University of Zurich, Switzerland
- Pierre Mégevand: Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Luc Arnal: Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Bryan Strange: Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain
- Margitta Seeck: Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Patrik Vuilleumier: Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
24. Zhang D, Chen Y, Hou X, Wu YJ. Near-infrared spectroscopy reveals neural perception of vocal emotions in human neonates. Hum Brain Mapp 2019;40:2434-2448. PMID: 30697881; DOI: 10.1002/hbm.24534.
Abstract
Processing affective prosody, that is, the emotional tone of a speaker, is fundamental to human communication and adaptive behaviors. Previous studies have mainly focused on adults and infants; thus, the neural mechanisms underlying the processing of affective prosody in newborns remain unclear. Here, we used near-infrared spectroscopy to examine the ability of 0-to-4-day-old neonates to discriminate emotions conveyed by speech prosody in their maternal language and a foreign language. Happy, fearful, and angry prosodies enhanced neural activation in the right superior temporal gyrus relative to neutral prosody in the maternal but not the foreign language. Happy prosody elicited greater activation than negative prosody in the left superior frontal gyrus and the left angular gyrus, regions that have not been associated with affective prosody processing in infants or adults. These findings suggest that sensitivity to affective prosody is formed through prenatal exposure to vocal stimuli of the maternal language. Furthermore, the sensitive neural correlates appeared more distributed in neonates than in infants, indicating a high level of neural specialization between the neonatal stage and early infancy. Finally, neonates showed preferential neural responses to positive over negative prosody, contrary to the "negativity bias" phenomenon established in adult and infant studies.
Affiliation(s)
- Dandan Zhang: College of Psychology and Sociology, Shenzhen University, Shenzhen, China; Shenzhen Key Laboratory of Affective and Social Cognitive Science, Shenzhen University, Shenzhen, China
- Yu Chen: College of Psychology and Sociology, Shenzhen University, Shenzhen, China
- Xinlin Hou: Department of Pediatrics, Peking University First Hospital, Beijing, China
- Yan Jing Wu: Faculty of Foreign Languages, Ningbo University, Ningbo, China
25. Whitehead JC, Armony JL. Multivariate fMRI pattern analysis of fear perception across modalities. Eur J Neurosci 2019;49:1552-1563. DOI: 10.1111/ejn.14322.
Affiliation(s)
- Jocelyne C. Whitehead
- Douglas Mental Health University Institute, Verdun, Quebec, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
| | - Jorge L. Armony
- Douglas Mental Health University Institute, Verdun, Quebec, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Department of Psychiatry, McGill University, Montreal, Quebec, Canada
| |
Collapse
|
26
|
Sliwa J, Takahashi D, Shepherd S. Mécanismes neuronaux pour la communication chez les primates [Neural mechanisms for communication in primates]. REVUE DE PRIMATOLOGIE 2018. [DOI: 10.4000/primatologie.2950] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
|
27
|
Schirmer A. Is the voice an auditory face? An ALE meta-analysis comparing vocal and facial emotion processing. Soc Cogn Affect Neurosci 2018; 13:1-13. [PMID: 29186621 PMCID: PMC5793823 DOI: 10.1093/scan/nsx142] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2017] [Accepted: 11/19/2017] [Indexed: 11/13/2022] Open
Abstract
This meta-analysis compares the brain structures and mechanisms involved in facial and vocal emotion recognition. Neuroimaging studies contrasting emotional with neutral (face: N = 76, voice: N = 34) and explicit with implicit emotion processing (face: N = 27, voice: N = 20) were collected to shed light on stimulus- and goal-driven mechanisms, respectively. Activation likelihood estimations were conducted on the full data sets for the separate modalities and on reduced, modality-matched data sets for modality comparison. Stimulus-driven emotion processing engaged large networks with significant modality differences in the superior temporal (voice-specific) and the medial temporal (face-specific) cortex. Goal-driven processing was associated with only a small cluster in the dorsomedial prefrontal cortex for voices but not faces. Neither stimulus- nor goal-driven processing showed significant modality overlap. Together, these findings suggest that stimulus-driven processes shape activity in the social brain more powerfully than goal-driven processes in both the visual and the auditory domains. Yet, whereas faces emphasize subcortical emotional and mnemonic mechanisms, voices emphasize cortical mechanisms associated with perception and effortful stimulus evaluation (e.g. via subvocalization). These differences may be due to sensory stimulus properties and highlight the need for a modality-specific perspective when modeling emotion processing in the brain.
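For readers unfamiliar with the method, a minimal sketch of the core activation likelihood estimation (ALE) computation, assuming study foci are given in voxel coordinates on a common grid; the grid size, smoothing width, and toy foci are placeholders, and real ALE additionally thresholds the map against a permutation-based null distribution:

```python
# ALE in miniature: per-study modeled-activation maps (Gaussian kernels,
# combined by max), then union across studies: ALE = 1 - prod(1 - MA_i).
import numpy as np

def modeled_activation(shape, foci, sigma_vox=2.0):
    """Gaussian-blurred probability map for one study's peak coordinates."""
    grid = np.indices(shape).reshape(3, -1).T      # (n_voxels, 3)
    ma = np.zeros(shape).ravel()
    for focus in foci:
        d2 = ((grid - focus) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma_vox ** 2)))
    return ma.reshape(shape)

def ale_map(shape, studies):
    not_active = np.ones(shape)
    for foci in studies:
        not_active *= 1.0 - modeled_activation(shape, np.asarray(foci))
    return 1.0 - not_active

studies = [[(10, 12, 8)], [(11, 12, 9), (30, 20, 15)]]  # toy foci, voxel units
print(ale_map((40, 40, 30), studies).max())
```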
Collapse
Affiliation(s)
- Annett Schirmer
- Department of Psychology and Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, Hong Kong; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| |
Collapse
|
28
|
Categorical emotion recognition from voice improves during childhood and adolescence. Sci Rep 2018; 8:14791. [PMID: 30287837 PMCID: PMC6172235 DOI: 10.1038/s41598-018-32868-3] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2017] [Accepted: 08/20/2018] [Indexed: 11/16/2022] Open
Abstract
Converging evidence demonstrates that emotion processing from facial expressions continues to improve throughout childhood and part of adolescence. Here we investigated whether this is also the case for emotions conveyed by non-linguistic vocal expressions, another key aspect of social interactions. We tested 225 children and adolescents (age 5–17) and 30 adults in a forced-choice labeling task using vocal bursts expressing four basic emotions (anger, fear, happiness and sadness). Mixed-model logistic regressions revealed a small but highly significant change with age, mainly driven by changes in the ability to identify anger and fear. Adult-level performance was reached between 14 and 15 years of age. Also, across ages, female participants obtained better scores than male participants, with no significant interaction between age and sex effects. These results expand the findings showing that affective prosody understanding improves during childhood; they document, for the first time, continued improvement in vocal affect recognition from early childhood to mid-adolescence, a pivotal period for social maturation.
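A minimal fixed-effects stand-in for the mixed-model logistic regressions described above, assuming a trial-level table with hypothetical columns 'correct', 'age', and 'sex'; the authors' models additionally included random effects (e.g., per participant and item), omitted here for brevity:

```python
# Logistic regression of labeling accuracy on age, sex, and their
# interaction. Simulated data; coefficients are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.uniform(5, 17, n),
    "sex": rng.choice(["F", "M"], n),
})
# Simulate a small positive age effect on accuracy.
p = 1 / (1 + np.exp(-(-0.5 + 0.12 * (df.age - 11))))
df["correct"] = (rng.random(n) < p).astype(int)

model = smf.logit("correct ~ age + sex + age:sex", data=df).fit(disp=0)
print(model.summary().tables[1])   # age slope and age-by-sex interaction
```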
Collapse
|
29
|
Lateralized Brainstem and Cervical Spinal Cord Responses to Aversive Sounds: A Spinal fMRI Study. Brain Sci 2018; 8:brainsci8090165. [PMID: 30200289 PMCID: PMC6162493 DOI: 10.3390/brainsci8090165] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Revised: 08/25/2018] [Accepted: 08/29/2018] [Indexed: 12/22/2022] Open
Abstract
Previous research has delineated the networks of brain structures involved in the perception of emotional auditory stimuli. These include the amygdala, insula, and auditory cortices, as well as frontal-lobe, basal ganglia, and cerebellar structures involved in the planning and execution of motoric behaviors. The aim of the current research was to examine whether emotional sounds also influence activity in the brainstem and cervical spinal cord. Seventeen undergraduate participants completed a spinal functional magnetic resonance imaging (fMRI) study consisting of two fMRI runs. One run consisted of three one-minute blocks of aversive sounds taken from the International Affective Digitized Sounds (IADS) stimulus set; these blocks were separated by 40-s rest periods. The other run consisted of emotionally neutral stimuli also drawn from the IADS. The results indicated a stark pattern of lateralization. Aversive sounds elicited greater activity than neutral sounds in the right midbrain and brainstem, and in right dorsal and ventral regions of the cervical spinal cord. Neutral stimuli, on the other hand, elicited less neural activity than aversive sounds overall; these responses were left lateralized and were found in the medial midbrain and the dorsal sensory regions of the cervical spinal cord. Together, these results demonstrate that aversive auditory stimuli elicit increased sensorimotor responses in brainstem and cervical spinal cord structures.
Collapse
|
30
|
Mother's recorded voice on emergence can decrease postoperative emergence delirium from general anaesthesia in paediatric patients: a prospective randomised controlled trial. Br J Anaesth 2018; 121:483-489. [DOI: 10.1016/j.bja.2018.01.042] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2017] [Revised: 01/11/2018] [Accepted: 01/31/2018] [Indexed: 11/18/2022] Open
|
31
|
Crespo-Llado MM, Vanderwert RE, Geangu E. Individual differences in infants’ neural responses to their peers’ cry and laughter. Biol Psychol 2018; 135:117-127. [DOI: 10.1016/j.biopsycho.2018.03.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2017] [Revised: 12/08/2017] [Accepted: 03/21/2018] [Indexed: 12/11/2022]
|
32
|
Koch K, Stegmaier S, Schwarz L, Erb M, Reinl M, Scheffler K, Wildgruber D, Ethofer T. Neural correlates of processing emotional prosody in unipolar depression. Hum Brain Mapp 2018; 39:3419-3427. [PMID: 29682814 DOI: 10.1002/hbm.24185] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 03/15/2018] [Accepted: 04/09/2018] [Indexed: 12/11/2022] Open
Abstract
Major depressive disorder (MDD) is characterized by biased emotion perception. In the auditory domain, MDD patients have been shown to exhibit attenuated processing of positive emotions expressed by speech melody (prosody). So far, no neuroimaging studies examining the neural basis of altered processing of emotional prosody in MDD are available. In this study, we addressed this issue by examining the emotion bias in MDD during evaluation of happy, neutral, and angry prosodic stimuli on a five-point Likert scale during functional magnetic resonance imaging (fMRI). As expected, MDD patients rated happy prosody as less intense than did healthy controls (HC). At the neural level, stronger activation in the middle superior temporal gyrus (STG) and the amygdala was found in all participants when processing emotional as compared to neutral prosody. MDD patients exhibited increased activation of the amygdala during prosody processing irrespective of valence, while no significant differences between groups were found for the STG, indicating that altered processing of prosodic emotions in MDD occurs within the amygdala rather than in auditory areas. Concurring with the valence-specific behavioral effect of attenuated evaluation of positive prosodic stimuli, activation within the left amygdala of MDD patients correlated with ratings of happy, but not neutral or angry, prosody. Our study provides first insights into the neural basis of reduced experience of positive information and an abnormally increased amygdala activity during prosody processing.
Collapse
Affiliation(s)
- Katharina Koch
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Sophia Stegmaier
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Lena Schwarz
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Michael Erb
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany
| | - Maren Reinl
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Klaus Scheffler
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany; Max-Planck-Institute for Biological Cybernetics, University of Tuebingen, Tuebingen, Germany
| | - Dirk Wildgruber
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany; Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany
| |
Collapse
|
33
|
Schirmer A, Gunter TC. Temporal signatures of processing voiceness and emotion in sound. Soc Cogn Affect Neurosci 2018; 12:902-909. [PMID: 28338796 PMCID: PMC5472162 DOI: 10.1093/scan/nsx020] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2016] [Accepted: 02/07/2017] [Indexed: 12/22/2022] Open
Abstract
This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited to unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential) with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance.
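Spectral rotation, the control manipulation used here to create matched non-vocal sounds, can be sketched as flipping the spectrum about a cutoff; this simplifies the standard method (which band-limits before rotating), and the cutoff frequency is an assumption:

```python
# Spectral rotation sketch: flip the low-frequency band of the spectrum,
# destroying intelligibility/voiceness while keeping overall energy.
import numpy as np

def spectrally_rotate(signal, sr, cutoff_hz=4000):
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    band = freqs <= cutoff_hz
    spec[band] = spec[band][::-1]        # mirror low/high within the band
    return np.fft.irfft(spec, n=len(signal))

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)       # 440 Hz tone...
rotated = spectrally_rotate(tone, sr)
peak_hz = np.abs(np.fft.rfft(rotated)).argmax()
print(peak_hz)                           # ...lands near 3560 Hz after rotation
```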
Collapse
Affiliation(s)
- Annett Schirmer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, Chinese University of Hong Kong, Hong Kong
| | - Thomas C Gunter
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
34
|
Koelsch S, Skouras S, Lohmann G. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy. PLoS One 2018; 13:e0190057. [PMID: 29385142 PMCID: PMC5791961 DOI: 10.1371/journal.pone.0190057] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Accepted: 12/07/2017] [Indexed: 01/12/2023] Open
Abstract
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions.
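Eigenvector centrality, the graph measure behind the "influential node" claims above, reduces to a power iteration on a connectivity matrix. A minimal sketch, assuming a region-by-region correlation matrix as input; taking absolute correlations is one common choice, not necessarily the authors':

```python
# Eigenvector centrality of a functional connectivity matrix via power
# iteration; larger values mark more 'influential' network nodes.
import numpy as np

def eigenvector_centrality(conn, n_iter=200, tol=1e-9):
    A = np.abs(conn)                 # ensure non-negativity (one common choice)
    c = np.ones(A.shape[0])
    for _ in range(n_iter):
        c_new = A @ c
        c_new /= np.linalg.norm(c_new)
        if np.linalg.norm(c_new - c) < tol:
            break
        c = c_new
    return c

rng = np.random.default_rng(2)
ts = rng.normal(size=(200, 6))       # 200 timepoints, 6 toy regions
conn = np.corrcoef(ts.T)             # region-by-region correlations
np.fill_diagonal(conn, 0)
print(eigenvector_centrality(conn).round(3))
```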
Collapse
Affiliation(s)
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
| | - Stavros Skouras
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Gabriele Lohmann
- Department of Biomedical Magnetic Resonance, University Clinic Tübingen, Tübingen, Germany
- Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|
35
|
Kryklywy JH, Macpherson EA, Mitchell DGV. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques. Exp Brain Res 2018; 236:945-953. [PMID: 29374776 PMCID: PMC5887003 DOI: 10.1007/s00221-018-5185-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2017] [Accepted: 01/22/2018] [Indexed: 11/27/2022]
Abstract
Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory ‘what’ but not ‘where’ processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
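As a rough illustration of the searchlight logic, a sketch assuming trialwise beta images in a 4D array; the radius, classifier, and toy data are assumptions rather than the authors' exact pipeline:

```python
# Searchlight MVPA: for each voxel, cross-validated decoding accuracy
# from the local sphere of voxels around it.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight(betas, labels, radius=2):
    X, Y, Z, _ = betas.shape
    acc = np.zeros((X, Y, Z))
    offsets = [(i, j, k)
               for i in range(-radius, radius + 1)
               for j in range(-radius, radius + 1)
               for k in range(-radius, radius + 1)
               if i*i + j*j + k*k <= radius*radius]
    for x in range(radius, X - radius):
        for y in range(radius, Y - radius):
            for z in range(radius, Z - radius):
                sphere = np.array([betas[x+i, y+j, z+k, :]
                                   for i, j, k in offsets]).T
                acc[x, y, z] = cross_val_score(
                    LinearSVC(dual=False), sphere, labels, cv=5).mean()
    return acc   # voxelwise decoding-accuracy map

rng = np.random.default_rng(1)
betas = rng.normal(size=(8, 8, 8, 40))   # toy volume, 40 trials
labels = np.repeat([0, 1], 20)           # e.g., two sound locations
print(searchlight(betas, labels).max())
```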
Collapse
Affiliation(s)
- James H Kryklywy
- Department of Psychology, University of British Columbia, Vancouver, V6T 1Z4, Canada; Graduate Program in Neuroscience, University of Western Ontario, London, ON, N6A 5A5, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, N6A 5B7, Canada
| | - Ewan A Macpherson
- School of Communication Sciences and Disorders, University of Western Ontario, London, ON, N6G 1H1, Canada; National Centre for Audiology, University of Western Ontario, London, ON, N6G 1H1, Canada
| | - Derek G V Mitchell
- Graduate Program in Neuroscience, University of Western Ontario, London, ON, N6A 5A5, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, N6A 5B7, Canada; Department of Anatomy and Cell Biology, University of Western Ontario, London, ON, N6A 3K7, Canada; Department of Psychiatry, University of Western Ontario, London, ON, N6A 5A5, Canada
| |
Collapse
|
36
|
Speech Prosodies of Different Emotional Categories Activate Different Brain Regions in Adult Cortex: an fNIRS Study. Sci Rep 2018; 8:218. [PMID: 29317758 PMCID: PMC5760650 DOI: 10.1038/s41598-017-18683-2] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 12/14/2017] [Indexed: 11/12/2022] Open
Abstract
Emotional expressions of others embedded in speech prosodies are important for social interactions. This study used functional near-infrared spectroscopy to investigate how speech prosodies of different emotional categories are processed in the cortex. The results demonstrated several cerebral areas critical for emotional prosody processing. We confirmed that the superior temporal cortex, especially the right middle and posterior parts of the superior temporal gyrus (BA 22/42), primarily works to discriminate between emotional and neutral prosodies. Furthermore, the results suggested that categorization of emotions occurs within a high-level brain region, the frontal cortex, since the brain activation patterns were distinct when positive (happy) prosody was contrasted with negative (fearful and angry) prosody in the left middle part of the inferior frontal gyrus (BA 45) and the frontal eye field (BA 8), and when angry was contrasted with neutral prosody in bilateral orbital frontal regions (BA 10/11). These findings verified and extended previous fMRI findings in the adult brain and also provided a "developed version" of brain activation for our following neonatal study.
Collapse
|
37
|
Tang X, Chen N, Zhang S, Jones JA, Zhang B, Li J, Liu P, Liu H. Predicting auditory feedback control of speech production from subregional shape of subcortical structures. Hum Brain Mapp 2017; 39:459-471. [PMID: 29058356 DOI: 10.1002/hbm.23855] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 09/27/2017] [Accepted: 10/11/2017] [Indexed: 11/06/2022] Open
Abstract
Although a growing body of research has focused on the cortical sensorimotor mechanisms that support auditory feedback control of speech production, much less is known about the subcortical contributions to this control process. This study examined whether subregional anatomy of subcortical structures assessed by statistical shape analysis is associated with vocal compensations and cortical event-related potentials in response to pitch feedback errors. The results revealed significant negative correlations between the magnitudes of vocal compensations and subregional shape of the right thalamus, between the latencies of vocal compensations and subregional shape of the left caudate and pallidum, and between the latencies of cortical N1 responses and subregional shape of the left putamen. These associations indicate that smaller local volumes of the basal ganglia and thalamus are predictive of slower and larger neurobehavioral responses to vocal pitch errors. Furthermore, increased local volumes of the left hippocampus and right amygdala were predictive of larger vocal compensations, suggesting that there is an interplay between the memory-related subcortical structures and auditory-vocal integration. These results, for the first time, provide evidence for differential associations of subregional morphology of the basal ganglia, thalamus, hippocampus, and amygdala with neurobehavioral processing of vocal pitch errors, suggesting that subregional shape measures of subcortical structures can predict behavioral outcome of auditory-vocal integration and associated neural features.
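The brain-behavior step described here is, at heart, a cross-participant correlation. A toy sketch, with hypothetical variable names and simulated values:

```python
# Correlate a per-participant subregional shape score with vocal
# compensation magnitude. All numbers are simulated for illustration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
shape_metric = rng.normal(size=28)                          # shape score
compensation = -0.5 * shape_metric + rng.normal(0, 1, 28)   # toy cents values

r, p = pearsonr(shape_metric, compensation)
print(f"r = {r:.2f}, p = {p:.3f}")   # negative r, as reported for right thalamus
```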
Collapse
Affiliation(s)
- Xiaoying Tang
- Sun Yat-sen University-Carnegie Mellon University (SYSU-CMU) Joint Institute of Engineering, Sun Yat-sen University, Guangzhou, 510006, China; Sun Yat-sen University-Carnegie Mellon University (SYSU-CMU) Shunde International Joint Research Institute, Shunde, 528300, China; School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, 510006, China; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, 15213, Pennsylvania
| | - Na Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
| | - Siyun Zhang
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
| | - Jeffery A Jones
- Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada
| | - Baofeng Zhang
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
| | - Jingyuan Li
- Sun Yat-sen University-Carnegie Mellon University (SYSU-CMU) Joint Institute of Engineering, Sun Yat-sen University, Guangzhou, 510006, China; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, 15213, Pennsylvania
| | - Peng Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
| | - Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China
| |
Collapse
|
38
|
Nolden S, Rigoulot S, Jolicoeur P, Armony JL. Effects of musical expertise on oscillatory brain activity in response to emotional sounds. Neuropsychologia 2017; 103:96-105. [DOI: 10.1016/j.neuropsychologia.2017.07.014] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2016] [Revised: 07/05/2017] [Accepted: 07/14/2017] [Indexed: 10/19/2022]
|
39
|
Faragó T, Takács N, Miklósi Á, Pongrácz P. Dog growls express various contextual and affective content for human listeners. ROYAL SOCIETY OPEN SCIENCE 2017; 4:170134. [PMID: 28573021 PMCID: PMC5451822 DOI: 10.1098/rsos.170134] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2017] [Accepted: 04/11/2017] [Indexed: 06/07/2023]
Abstract
Vocal expressions of emotions follow simple rules to encode the inner state of the caller into acoustic parameters, not just within species, but also in cross-species communication. Humans use these structural rules to attribute emotions to dog vocalizations, especially to barks, which match with their contexts. In contrast, humans were found to be unable to differentiate between playful and threatening growls, probably because the aggression level of single growls was assessed based on acoustic size cues. To resolve this contradiction, we played back natural growl bouts from three social contexts (food guarding, threatening and playing) to humans, who had to rate the emotional load and guess the context of the playbacks. Listeners attributed emotions to growls according to their social contexts. Within the threatening and playful contexts, bouts of shorter, more slowly pulsing growls that conveyed a smaller apparent body size were rated as less aggressive and fearful, but more playful and happy. Participants associated the correct contexts with the growls above chance. Moreover, women and participants experienced with dogs scored higher in this task. Our results indicate that dogs may communicate their size and inner state honestly in a serious contest situation, but manipulatively in more uncertain defensive and playful contexts.
Collapse
Affiliation(s)
- T. Faragó
- Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
| | - N. Takács
- Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
| | - Á. Miklósi
- Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
- MTA-ELTE Comparative Ethology Research Group, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
| | - P. Pongrácz
- Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
| |
Collapse
|
40
|
What is the Melody of That Voice? Probing Unbiased Recognition Accuracy with the Montreal Affective Voices. JOURNAL OF NONVERBAL BEHAVIOR 2017. [DOI: 10.1007/s10919-017-0253-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
|
41
|
Webster PJ, Skipper-Kallal LM, Frum CA, Still HN, Ward BD, Lewis JW. Divergent Human Cortical Regions for Processing Distinct Acoustic-Semantic Categories of Natural Sounds: Animal Action Sounds vs. Vocalizations. Front Neurosci 2017; 10:579. [PMID: 28111538 PMCID: PMC5216875 DOI: 10.3389/fnins.2016.00579] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2016] [Accepted: 12/05/2016] [Indexed: 11/13/2022] Open
Abstract
A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI) we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less semantically complex natural sounds produced by non-conspecific animals rather than humans. Our results revealed a striking double-dissociation of activated networks bilaterally. This included a previously well described pathway preferential for processing vocalization signals directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds.
Collapse
Affiliation(s)
- Paula J. Webster
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
| | - Laura M. Skipper-Kallal
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
- Department of Neurology, Georgetown University Medical Campus, Washington, DC, USA
| | - Chris A. Frum
- Department of Physiology and Pharmacology, West Virginia University, Morgantown, WV, USA
| | - Hayley N. Still
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
| | - B. Douglas Ward
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI, USA
| | - James W. Lewis
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
| |
Collapse
|
42
|
Durai M, O'Keeffe MG, Searchfield GD. Examining the short term effects of emotion under an Adaptation Level Theory model of tinnitus perception. Hear Res 2016; 345:23-29. [PMID: 28027920 DOI: 10.1016/j.heares.2016.12.013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2016] [Revised: 12/11/2016] [Accepted: 12/16/2016] [Indexed: 11/30/2022]
Abstract
OBJECTIVES Existing evidence suggests a strong relationship between tinnitus and emotion. The objective of this study was to examine the effects of short-term emotional changes along valence and arousal dimensions on tinnitus outcomes. Emotional stimuli were presented in two different modalities: auditory and visual. The authors hypothesized that (1) negative valence (unpleasant) stimuli and/or high arousal stimuli will lead to greater tinnitus loudness and annoyance than positive valence and/or low arousal stimuli, and (2) auditory emotional stimuli, which are in the same modality as the tinnitus, will exhibit a greater effect on tinnitus outcome measures than visual stimuli. STUDY DESIGN Auditory and visual emotive stimuli were administered to 22 participants (12 females and 10 males) with chronic tinnitus, recruited via email invitations sent out to the University of Auckland Tinnitus Research Volunteer Database. Emotional stimuli used were taken from the International Affective Digitized Sounds-Version 2 (IADS-2) and the International Affective Picture System (IAPS) (Bradley and Lang, 2007a, 2007b). The Emotion Regulation Questionnaire (Gross and John, 2003) was administered alongside subjective ratings of tinnitus loudness and annoyance, and psychoacoustic sensation level matches to external sounds. RESULTS Males had significantly different emotional regulation scores than females. Negative valence emotional auditory stimuli led to higher tinnitus loudness ratings in males and females, and higher annoyance ratings in males only; loudness matches of tinnitus remained unchanged. The visual stimuli did not have an effect on tinnitus ratings. The results are discussed relative to the Adaptation Level Theory Model of Tinnitus. CONCLUSIONS The results indicate that the negative valence dimension of emotion is associated with increased tinnitus magnitude judgements and that gender effects may also be present, but only when the emotional stimulus is in the auditory modality. Sounds with emotional associations may be used in sound therapy for tinnitus relief; it is of interest to determine whether the emotional component of sound treatments can play a role in reversing the negative responses discussed in this paper.
Collapse
Affiliation(s)
- Mithila Durai
- Department of Audiology, University of Auckland, Auckland, New Zealand; Center for Brain Research, University of Auckland, Auckland, New Zealand
| | - Mary G O'Keeffe
- Department of Audiology, University of Auckland, Auckland, New Zealand
| | - Grant D Searchfield
- Department of Audiology, University of Auckland, Auckland, New Zealand; Center for Brain Research, University of Auckland, Auckland, New Zealand.
| |
Collapse
|
43
|
Gruber T, Grandjean D. A comparative neurological approach to emotional expressions in primate vocalizations. Neurosci Biobehav Rev 2016; 73:182-190. [PMID: 27993605 DOI: 10.1016/j.neubiorev.2016.12.004] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Revised: 12/01/2016] [Accepted: 12/03/2016] [Indexed: 12/20/2022]
Abstract
Different approaches from different research domains have crystallized debate over primate emotional processing and vocalizations in recent decades. On one side, researchers disagree about whether emotional states or processes in animals truly compare to those in humans. On the other, a long-held assumption is that primate vocalizations are innate communicative signals over which nonhuman primates have limited control, and that they mirror the emotional state of the individuals producing them, despite growing evidence of intentional production for some vocalizations. Our goal is to connect both sides of the discussion in deciphering how the emotional content of primate calls compares with emotional vocal signals in humans. We focus particularly on the neural bases of primate emotions and vocalizations to identify cerebral structures underlying emotion, vocal production, and comprehension in primates, and discuss whether particular structures or neuronal networks evolved solely for specific functions in the human brain. Finally, we propose a model to classify emotional vocalizations in primates according to four dimensions (learning, control, emotional, meaning) to allow calls to be compared across species.
Collapse
Affiliation(s)
- Thibaud Gruber
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland.
| | - Didier Grandjean
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland
| |
Collapse
|
44
|
Abstract
Previous studies have shown that the amygdala is more involved in processing animate categories, such as humans and animals, than inanimate objects, but little is known regarding whether this animate advantage applies to auditory stimuli. To address this issue, we performed a functional Magnetic Resonance Imaging (fMRI) study with emotion and category as factors, in which subjects heard sounds from different categories (i.e., humans, animals, and objects) in negative and neutral dimensions. Emotional levels and semantic familiarity were matched across categories. The results showed that the amygdala responded more to human vocalizations than to animal vocalizations and sounds of inanimate objects in both negative and neutral valences, and more to animal sounds than to objects in the neutral condition. In addition, the amygdala, together with the insula and the right superior temporal sulcus, further distinguished human voices from animal sounds. These data indicated that the amygdala is prepared to respond to animate sources, especially human vocalizations, in the auditory modality.
Collapse
Affiliation(s)
- Yanbing Zhao
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| | - Qing Sun
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| | - Gang Chen
- Scientific and Statistical Computing Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| | - Jiongjiong Yang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| |
Collapse
|
45
|
Neural correlates of the affective properties of spontaneous and volitional laughter types. Neuropsychologia 2016; 95:30-39. [PMID: 27940151 DOI: 10.1016/j.neuropsychologia.2016.12.012] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Revised: 12/06/2016] [Accepted: 12/07/2016] [Indexed: 11/23/2022]
Abstract
Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), where this region responded most strongly to laughs rated at the extremes of the authenticity scale. While previous studies claimed a role for the right STG in bipolar representation of emotional valence, we instead argue that this region may in fact exhibit a relatively categorical response to emotional signals, whether positive or negative.
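The reported quadratic profile corresponds to a parametric-modulation regression with linear and quadratic terms. A toy sketch with simulated itemwise ratings and responses (names and values are illustrative, not the study's data):

```python
# Regress a region's itemwise response on authenticity and its square;
# a positive quadratic coefficient indicates a U-shaped response profile.
import numpy as np

rng = np.random.default_rng(3)
authenticity = rng.uniform(1, 7, 120)                  # itemwise ratings
bold = 0.3 * (authenticity - 4) ** 2 + rng.normal(0, 0.5, 120)

X = np.column_stack([np.ones_like(authenticity),
                     authenticity,
                     authenticity ** 2])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"quadratic coefficient: {beta[2]:.3f}")         # > 0 => U-shaped
```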
Collapse
|
46
|
Liebenthal E, Silbersweig DA, Stern E. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception. Front Neurosci 2016; 10:506. [PMID: 27877106 PMCID: PMC5099784 DOI: 10.3389/fnins.2016.00506] [Citation(s) in RCA: 49] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Accepted: 10/24/2016] [Indexed: 11/24/2022] Open
Abstract
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala—a subcortical center for emotion perception—are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody which evolves on longer time scales and is conveyed by fine-grained spectral cues appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.
Collapse
Affiliation(s)
- Einat Liebenthal
- Department of Psychiatry, Brigham and Women's Hospital, Boston, MA, USA
| | | | - Emily Stern
- Department of Psychiatry, Brigham and Women's Hospital, Boston, MA, USA; Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
| |
Collapse
|
47
|
Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions. Cortex 2016; 85:116-125. [PMID: 27855282 DOI: 10.1016/j.cortex.2016.10.013] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Revised: 09/19/2016] [Accepted: 10/19/2016] [Indexed: 11/23/2022]
Abstract
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions.
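The repetition-suppression logic used here can be reduced to a simple contrast: a region sensitive to a voice feature should respond less when that feature repeats than when it changes. A toy sketch with simulated region-of-interest responses (all values hypothetical):

```python
# Repetition suppression in miniature: compare mean ROI responses on
# trials where a feature (e.g., pitch) repeats vs changes across pairs.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(8)
repeat_pitch = rng.normal(0.8, 0.3, 25)   # per-participant ROI response
change_pitch = rng.normal(1.0, 0.3, 25)   # higher when the feature changes

t, p = ttest_rel(repeat_pitch, change_pitch)
print(f"suppression effect: t = {t:.2f}, p = {p:.3f}")
```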
Collapse
|
48
|
Rigoulot S, Armony JL. Early selectivity for vocal and musical sounds: electrophysiological evidence from an adaptation paradigm. Eur J Neurosci 2016; 44:2786-2794. [PMID: 27600697 DOI: 10.1111/ejn.13391] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2016] [Revised: 08/29/2016] [Accepted: 08/31/2016] [Indexed: 11/27/2022]
Abstract
There is growing interest in characterizing the neural basis of music perception and, in particular, assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in the N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a reduction in amplitude when speech sounds were preceded by speech, compared to music. No such decrease was found when we compared the responses to music sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.
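Adaptation effects of this kind are typically quantified as mean ERP amplitude in a component window, compared across preceding-category conditions. A minimal sketch on simulated epochs; the sampling rate, window, and condition names are assumptions:

```python
# N100 adaptation sketch: mean amplitude in an 80-120 ms window,
# compared between same- and different-category preceding sounds.
import numpy as np

sr, pre = 250, 0.1                        # sampling rate, 100 ms baseline
rng = np.random.default_rng(7)
music_after_music = rng.normal(size=(60, 150))    # trials x samples
music_after_speech = rng.normal(size=(60, 150))
music_after_speech[:, 45:55] -= 1.0       # simulate a larger (less adapted) N1

def n100_amplitude(epochs):
    t = np.arange(epochs.shape[1]) / sr - pre
    window = (t >= 0.08) & (t <= 0.12)    # 80-120 ms post-onset
    return epochs[:, window].mean()

# Positive difference = smaller (adapted) N1 after a same-category sound.
print(n100_amplitude(music_after_music) - n100_amplitude(music_after_speech))
```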
Collapse
Affiliation(s)
- Simon Rigoulot
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
| | - Jorge L Armony
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
| |
Collapse
|
49
|
De Dreu CKW, Kret ME, Sauter DA. Assessing Emotional Vocalizations From Cultural In-Group and Out-Group Depends on Oxytocin. SOCIAL PSYCHOLOGICAL AND PERSONALITY SCIENCE 2016. [DOI: 10.1177/1948550616657596] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Group-living animals, humans included, produce vocalizations like screams, growls, laughs, and victory calls. Accurately decoding such emotional vocalizations serves both individual and group functioning, suggesting that (i) vocalizations from in-group members may be privileged, in terms of speed and accuracy of processing, and (ii) such processing may depend on evolutionary ancient neural circuitries that sustain and enable cooperation with and protection of the in-group against outside threat. Here, we examined this possibility and focused on the neuropeptide oxytocin. Dutch participants self-administered oxytocin or placebo (double-blind, placebo-controlled study design) and responded to emotional vocalizations produced by cultural in-group members (Native Dutch) and cultural out-group members (Namibian Himba). In-group vocalizations were recognized faster and more accurately than out-group vocalizations, and oxytocin enhanced accurate decoding of specific vocalizations from one’s cultural out-group—triumph and anger. We discuss possible explanations and suggest avenues for new research.
Collapse
Affiliation(s)
- Carsten K. W. De Dreu
- Institute of Psychology, Leiden University, Leiden, the Netherlands
- Center for Experimental Economics and Political Decision Making (CREED), University of Amsterdam, the Netherlands
| | - Mariska E. Kret
- Institute of Psychology, Leiden University, Leiden, the Netherlands
| | - Disa A. Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
| |
Collapse
|
50
|
Symons AE, El-Deredy W, Schwartze M, Kotz SA. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication. Front Hum Neurosci 2016; 10:239. [PMID: 27252638 PMCID: PMC4879141 DOI: 10.3389/fnhum.2016.00239] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Accepted: 05/09/2016] [Indexed: 12/18/2022] Open
Abstract
Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations.
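For readers who want the mechanics, theta-band power and inter-trial phase coherence, two measures recurring in this literature, can be sketched with a bandpass filter plus Hilbert transform; all parameters below are illustrative assumptions:

```python
# Theta-band (4-8 Hz) power and inter-trial phase coherence (ITC) from
# epoched EEG. Toy data; filter order and band edges are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_power_and_itc(trials, sr):
    """trials: (n_trials, n_samples) EEG epochs for one channel."""
    b, a = butter(4, [4, 8], btype="bandpass", fs=sr)
    analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)
    power = np.abs(analytic) ** 2
    itc = np.abs(np.mean(analytic / np.abs(analytic), axis=0))
    return power.mean(axis=0), itc    # ITC near 1 = phase-locked across trials

sr = 250
trials = np.random.default_rng(4).normal(size=(30, 2 * sr))
mean_power, itc = theta_power_and_itc(trials, sr)
print(itc.max())
```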
Collapse
Affiliation(s)
- Ashley E. Symons
- School of Psychological Sciences, University of Manchester, Manchester, UK
| | - Wael El-Deredy
- School of Psychological Sciences, University of Manchester, Manchester, UK
- School of Biomedical Engineering, Universidad de Valparaiso, Valparaiso, Chile
| | - Michael Schwartze
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
| | - Sonja A. Kotz
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
| |
Collapse
|