1. Matthews TE, Lumaca M, Witek MAG, Penhune VB, Vuust P. Music reward sensitivity is associated with greater information transfer capacity within dorsal and motor white matter networks in musicians. Brain Struct Funct 2024;229:2299-2313. PMID: 39052097; PMCID: PMC11611946; DOI: 10.1007/s00429-024-02836-x.
Abstract
There are pronounced differences in the degree to which individuals experience music-induced pleasure, which are linked to variations in structural connectivity between auditory and reward areas. However, previous studies exploring the link between white matter structure and music reward sensitivity (MRS) have relied on standard diffusion tensor imaging methods, which present challenges in terms of anatomical accuracy and interpretability. Further, the link between MRS and connectivity in regions outside of auditory-reward networks, as well as the role of musical training, has yet to be investigated. Therefore, we investigated the relation between MRS and structural connectivity in a large number of directly segmented and anatomically verified white matter tracts in musicians (n = 24) and non-musicians (n = 23) using state-of-the-art tract reconstruction and fixel-based analysis. Using a manual tract-of-interest approach, we additionally tested MRS-white matter associations in auditory-reward networks seen in previous studies. Within the musician group, there was a significant positive relation between MRS and fiber density and cross section in the right middle longitudinal fascicle connecting auditory and inferior parietal cortices. There were also positive relations between MRS and fiber-bundle cross-section in tracts connecting the left thalamus to the ventral precentral gyrus and connecting the right thalamus to the right supplementary motor area; however, these did not survive FDR correction. These results suggest that, within musicians, dorsal auditory and motor networks are crucial to MRS, possibly via their roles in top-down predictive processing and auditory-motor transformations.
Affiliation(s)
- Tomas E Matthews: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Hospital, Nørrebrogade 44, Building 1A, Aarhus C, 8000, Denmark
- Massimo Lumaca: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Hospital, Nørrebrogade 44, Building 1A, Aarhus C, 8000, Denmark
- Maria A G Witek: Department of Music, School of Languages, Cultures, Art History and Music, University of Birmingham, Birmingham, B15 2TT, UK
- Virginia B Penhune: Department of Psychology, Concordia University, 7141 Sherbrooke St W, Montreal, QC, H4B 1R6, Canada
- Peter Vuust: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Hospital, Nørrebrogade 44, Building 1A, Aarhus C, 8000, Denmark; Royal Academy of Music, Skovgaardsgade 2C, Aarhus C, DK-8000, Denmark
2. Nussbaum C, Schirmer A, Schweinberger SR. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024;115:206-225. PMID: 37851369; DOI: 10.1111/bjop.12684.
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition indicating that musicians excel at perceiving the melody (F0), but not the timbre of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Stefan R Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
3. Nussbaum C, Schirmer A, Schweinberger SR. Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians. Brain Sci 2023;13:1563. PMID: 38002523; PMCID: PMC10670383; DOI: 10.3390/brainsci13111563.
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Institute of Psychology, University of Innsbruck, 6020 Innsbruck, Austria
- Stefan R. Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
4. Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022;22:1044-1062. PMID: 35501427; DOI: 10.3758/s13415-022-01007-x.
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
- César F Lima: Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
5. The Time Course of Emotional Authenticity Detection in Nonverbal Vocalizations. Cortex 2022;151:116-132. DOI: 10.1016/j.cortex.2022.02.016.
6. Whitehead JC, Armony JL. Intra-individual Reliability of Voice- and Music-elicited Responses and their Modulation by Expertise. Neuroscience 2022;487:184-197. PMID: 35182696; DOI: 10.1016/j.neuroscience.2022.02.011.
Abstract
A growing number of functional neuroimaging studies have identified regions within the temporal lobe, particularly along the planum polare and planum temporale, that respond more strongly to music than other types of acoustic stimuli, including voice. These "music-preferred" regions have been reported using a variety of stimulus sets, paradigms and analysis approaches, and their consistency across studies has been confirmed through meta-analyses. However, the critical question of intra-subject reliability of these responses has received less attention. Here, we directly assessed this important issue by contrasting brain responses to musical vs. vocal stimuli in the same subjects across three consecutive fMRI runs, using different types of stimuli. Moreover, we investigated whether these music- and voice-preferred responses were reliably modulated by expertise. Results demonstrated that music-preferred activity previously reported in temporal regions, and its modulation by expertise, exhibits a high intra-subject reliability. However, we also found that activity in some extra-temporal regions, such as the precentral and middle frontal gyri, did depend on the particular stimuli employed, which may explain why these are less consistently reported in the literature. Taken together, our findings confirm and extend the notion that specific regions in the brain consistently respond more strongly to certain socially-relevant stimulus categories, such as faces, voices and music, but that some of these responses appear to depend, at least to some extent, on the specific features of the paradigm employed.
Affiliation(s)
- Jocelyne C Whitehead: Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada
- Jorge L Armony: Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
7. Caballero JA, Mauchand M, Jiang X, Pell MD. Cortical processing of speaker politeness: Tracking the dynamic effects of voice tone and politeness markers. Soc Neurosci 2021;16:423-438. PMID: 34102955; DOI: 10.1080/17470919.2021.1938667.
Abstract
Information in the tone of voice alters social impressions and underlying brain activity as listeners evaluate the interpersonal relevance of utterances. Here, we presented requests that expressed politeness distinctions through the voice (polite/rude) and explicit linguistic markers (half of the requests began with Please). Thirty participants performed a social perception task (rating friendliness) while their electroencephalogram was recorded. Behaviorally, vocal politeness strategies had a much stronger influence on the perceived friendliness than the linguistic marker. Event-related potentials revealed rapid effects of (im)polite voices on cortical activity prior to ~300 ms; P200 amplitudes increased for polite versus rude voices, suggesting that the speaker's polite stance was registered as more salient in our task. At later stages, politeness distinctions encoded by the speaker's voice and their use of Please interacted, modulating activity in the N400 (300-500 ms) and late positivity (600-800 ms) time windows. Patterns of results suggest that initial attention deployment to politeness cues is rapidly influenced by the motivational significance of a speaker's voice. At later stages, processes for integrating vocal and lexical information resulted in increased cognitive effort to reevaluate utterances with ambiguous/contradictory cues. The potential influence of social anxiety on the P200 effect is also discussed.
Affiliation(s)
- Jonathan A Caballero: School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
- Maël Mauchand: School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
- Xiaoming Jiang: Institute of Linguistics (IoL), Shanghai International Studies University, Shanghai, China
- Marc D Pell: School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
8. Olszewska AM, Gaca M, Herman AM, Jednoróg K, Marchewka A. How Musical Training Shapes the Adult Brain: Predispositions and Neuroplasticity. Front Neurosci 2021;15:630829. PMID: 33776638; PMCID: PMC7987793; DOI: 10.3389/fnins.2021.630829.
Abstract
Learning to play a musical instrument is a complex task that integrates multiple sensory modalities and higher-order cognitive functions. Therefore, musical training is considered a useful framework for research on training-induced neuroplasticity. However, the classical nature-or-nurture question remains: are the differences observed between musicians and non-musicians due to predispositions, or do they result from the training itself? Here we present a review of recent publications with a strong focus on experimental designs to better understand both brain reorganization and the neuronal markers of predispositions when learning to play a musical instrument. Cross-sectional studies identified structural and functional differences between the brains of musicians and non-musicians, especially in regions related to motor control and auditory processing. A few longitudinal studies showed functional changes related to training while listening to and producing music, in the motor network and its connectivity with the auditory system, in line with the outcomes of cross-sectional studies. Parallel changes within the motor system and between the motor and auditory systems were revealed for structural connectivity. In addition, potential predictors of musical learning success were found, including increased brain activation in the auditory and motor systems during listening, the microstructure of the arcuate fasciculus, and the functional connectivity between the auditory and the motor systems. We show that “the musical brain” is a product of both natural human neurodiversity and training practice.
Affiliation(s)
- Alicja M Olszewska: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Maciej Gaca: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Aleksandra M Herman: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Jednoróg: Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Artur Marchewka: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
9. Sorati M, Behne DM. Considerations in Audio-Visual Interaction Models: An ERP Study of Music Perception by Musicians and Non-musicians. Front Psychol 2021;11:594434. PMID: 33551911; PMCID: PMC7854916; DOI: 10.3389/fpsyg.2020.594434.
Abstract
Previous research with speech and non-speech stimuli suggested that in audiovisual perception, visual information starting prior to the onset of the corresponding sound can provide visual cues and form a prediction about the upcoming auditory sound. This prediction leads to audiovisual (AV) interaction. Auditory and visual perception interact and induce suppression and speeding up of the early auditory event-related potentials (ERPs) such as N1 and P2. To investigate AV interaction, previous research examined N1 and P2 amplitudes and latencies in response to audio only (AO), video only (VO), audiovisual, and control (CO) stimuli, and compared AV with auditory perception based on four AV interaction models (AV vs. AO+VO, AV-VO vs. AO, AV-VO vs. AO-CO, AV vs. AO). The current study addresses how different models of AV interaction express N1 and P2 suppression in music perception. Furthermore, the current study took one step further and examined whether previous musical experience, which can potentially lead to higher N1 and P2 amplitudes in auditory perception, influenced AV interaction in different models. Musicians and non-musicians were presented with the recordings (AO, AV, VO) of a keyboard /C4/ key being played, as well as CO stimuli. Results showed that AV interaction models differ in their expression of N1 and P2 amplitude and latency suppression. The calculation of the (AV-VO vs. AO) and (AV-VO vs. AO-CO) models has consequences for the resulting N1 and P2 difference waves. Furthermore, while musicians, compared to non-musicians, showed higher N1 amplitude in auditory perception, suppression of amplitudes and latencies for N1 and P2 was similar for the two groups across the AV models. Collectively, these results suggest that when visual cues from finger and hand movements predict the upcoming sound in AV music perception, suppression of early ERPs is similar for musicians and non-musicians. Notably, the calculation differences across models do not lead to the same pattern of results for N1 and P2, demonstrating that the four models are not interchangeable and are not directly comparable.
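For readers who want to see what these model contrasts amount to in practice, the sketch below works through them on simulated ERP arrays; all shapes, values, and the N1 window are illustrative assumptions, not the study's data or code.

```python
import numpy as np

# Simulated trial-averaged ERPs (channels x time samples); purely illustrative.
n_ch, n_t = 64, 500
rng = np.random.default_rng(0)
ao, vo, av, co = (rng.standard_normal((n_ch, n_t)) for _ in range(4))

# The four audiovisual interaction models named in the abstract:
models = {
    "AV vs. AO+VO":    (av, ao + vo),        # additive model
    "AV-VO vs. AO":    (av - vo, ao),        # subtract visual-only activity
    "AV-VO vs. AO-CO": (av - vo, ao - co),   # additionally subtract the control
    "AV vs. AO":       (av, ao),             # direct comparison
}

# Assumed N1 window (roughly 90-150 ms at a 1 kHz sampling rate); the
# "suppression" discussed above is the difference between each model's terms.
n1 = slice(90, 150)
for name, (left, right) in models.items():
    print(f"{name}: mean N1-window difference = {(left - right)[:, n1].mean():+.3f}")
```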
Affiliation(s)
- Marzieh Sorati: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn M Behne: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
10.
Abstract
Neurocognitive models (e.g., Schirmer & Kotz, 2006) have helped to characterize how listeners incrementally derive meaning from vocal expressions of emotion in spoken language, what neural mechanisms are involved at different processing stages, and their relative time course. But how can these insights be applied to communicative situations in which prosody serves a predominantly interpersonal function? This comment examines recent data highlighting the dynamic interplay of prosody and language, when vocal attributes serve the sociopragmatic goals of the speaker or reveal interpersonal information that listeners use to construct a mental representation of what is being communicated. Our comment serves as a beacon to researchers interested in how the neurocognitive system “makes sense” of socioemotive aspects of prosody.
Affiliation(s)
- Marc D. Pell: School of Communication Sciences and Disorders, McGill University, Canada
- Sonja A. Kotz: Department of Neuropsychology and Psychopharmacology, Maastricht University, The Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Germany
11. Nair PS, Raijas P, Ahvenainen M, Philips AK, Ukkola-Vuoti L, Järvelä I. Music-listening regulates human microRNA expression. Epigenetics 2020;16:554-566. PMID: 32867562; DOI: 10.1080/15592294.2020.1809853.
Abstract
Music-listening and performance have been shown to affect human gene expression. In order to further elucidate the biological basis of the effects of music on the human body, we studied the effects of music-listening on gene regulation by sequencing microRNAs of the listeners (Music Group) and their controls (Control Group) without music exposure. We identified upregulation of six microRNAs (hsa-miR-132-3p, hsa-miR-361-5p, hsa-miR-421, hsa-miR-23a-3p, hsa-miR-23b-3p, hsa-miR-25-3p) and downregulation of two microRNAs (hsa-miR-378a-3p, hsa-miR-16-2-3p) in the Music Group with high musical aptitude. Some upregulated microRNAs were reported to be responsive to neuronal activity (miR-132, miR-23a, miR-23b) and modulators of neuronal plasticity, CNS myelination, and cognitive functions like long-term potentiation and memory. miR-132 plays a critical role in regulating TAU protein levels and is important for preventing tau protein aggregation that causes Alzheimer's disease. miR-132 and DICER, upregulated after music-listening, protect dopaminergic neurons and are important for retaining striatal dopamine levels. Some of the transcriptional regulators (FOS, CREB1, JUN, EGR1, and BDNF) of the upregulated microRNAs were immediate early genes and top candidates associated with musical traits. BDNF and SNCA, co-expressed and upregulated in music-listening and music-performance, are both activated by GATA2, which is associated with musical aptitude. Several miRNAs were associated with song-learning, singing, and seasonal plasticity networks in songbirds. We did not detect any significant changes in microRNA expression associated with music education or low musical aptitude. Our data thereby show the importance of inherent musical aptitude for music appreciation and for eliciting the human microRNA response to music-listening.
Affiliation(s)
- Minna Ahvenainen: Department of Medical Genetics, University of Helsinki, Helsinki, Finland
- Anju K Philips: Department of Medical Genetics, University of Helsinki, Helsinki, Finland
- Liisa Ukkola-Vuoti: Department of Medical Genetics, University of Helsinki, Helsinki, Finland
- Irma Järvelä: Department of Medical Genetics, University of Helsinki, Helsinki, Finland
12. Paquette S, Rigoulot S, Grunewald K, Lehmann A. Temporal decoding of vocal and musical emotions: Same code, different timecourse? Brain Res 2020;1741:146887. PMID: 32422128; DOI: 10.1016/j.brainres.2020.146887.
Abstract
From a baby's cry to a piece of music, we perceive emotions from our auditory environment every day. Many theories bring forward the concept of common neural substrates for the perception of vocal and musical emotions. It has been proposed that, for us to perceive emotions, music recruits emotional circuits that evolved for the processing of biologically relevant vocalizations (e.g., screams, laughs). Although some studies have found similarities between voice and instrumental music in terms of acoustic cues and neural correlates, little is known about their processing timecourse. To further understand how vocal and instrumental emotional sounds are perceived, we used EEG to compare the neural processing timecourse of both stimulus types expressed with a varying degree of complexity (vocal/musical affect bursts and emotion-embedded speech/music). Vocal stimuli in general, as well as musical/vocal bursts, were associated with a more concise sensory trace at initial stages of analysis (smaller N1), although vocal bursts had shorter latencies than the musical ones. As for the P2, vocal affect bursts and emotion-embedded musical stimuli were associated with earlier P2s. These results support the idea that emotional vocal stimuli are differentiated early from other sources and provide insight into the common neurobiological underpinnings of auditory emotions.
Affiliation(s)
- S Paquette: Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- S Rigoulot: Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; Department of Psychology, Université du Québec à Trois-Rivières, Trois-Rivières, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- K Grunewald: Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- A Lehmann: Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
13. Sorati M, Behne DM. Audiovisual Modulation in Music Perception for Musicians and Non-musicians. Front Psychol 2020;11:1094. PMID: 32547458; PMCID: PMC7273518; DOI: 10.3389/fpsyg.2020.01094.
Abstract
In audiovisual music perception, visual information from a musical instrument being played is available prior to the onset of the corresponding musical sound and consequently allows a perceiver to form a prediction about the upcoming audio music. This prediction in audiovisual music perception, compared to auditory music perception, leads to lower N1 and P2 amplitudes and latencies. Although previous research suggests that audiovisual experience, such as previous musical experience, may enhance this prediction, a remaining question is to what extent musical experience modifies N1 and P2 amplitudes and latencies. Furthermore, corresponding event-related phase modulations quantified as inter-trial phase coherence (ITPC) have not previously been reported for audiovisual music perception. In the current study, audio-video recordings of a keyboard key being played were presented to musicians and non-musicians in audio only (AO), video only (VO), and audiovisual (AV) conditions. With predictive movements from playing the keyboard isolated from AV music perception (AV-VO), the current findings demonstrated that, compared to the AO condition, both groups had a similar decrease in N1 amplitude and latency, and P2 amplitude, along with correspondingly lower ITPC values in the delta, theta, and alpha frequency bands. However, while musicians showed lower ITPC values in the beta band in AV-VO compared to AO, non-musicians did not show this pattern. Findings indicate that AV perception may be broadly correlated with auditory perception, and differences between musicians and non-musicians further indicate musical experience to be a specific factor influencing AV perception. Predicting an upcoming sound in AV music perception may involve visual predictive processes, as well as beta-band oscillations, which may be influenced by years of musical training. This study highlights possible interconnectivity in AV perception as well as potential modulation with experience.
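As background on the ITPC measure mentioned above, it is conventionally computed as the magnitude of the across-trial mean of unit-length phase vectors. The sketch below illustrates this on toy complex time-frequency data; the array shapes and values are assumptions, not the study's pipeline.

```python
import numpy as np

def itpc(complex_tfr):
    """complex_tfr: (n_trials, n_freqs, n_times) complex spectral estimates.
    ITPC is the magnitude of the across-trial mean of unit-length phase vectors."""
    phase_vectors = complex_tfr / np.abs(complex_tfr)   # project onto the unit circle
    return np.abs(phase_vectors.mean(axis=0))           # (n_freqs, n_times), in [0, 1]

# Toy data: 60 trials, 30 frequencies, 400 time points of random complex values,
# so ITPC should hover near 0 (no phase locking across trials).
rng = np.random.default_rng(1)
tfr = rng.standard_normal((60, 30, 400)) + 1j * rng.standard_normal((60, 30, 400))
print(itpc(tfr).shape, float(itpc(tfr).max()))
```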
Affiliation(s)
- Marzieh Sorati: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn Marie Behne: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
14. Neurophysiological Differences in Emotional Processing by Cochlear Implant Users, Extending Beyond the Realm of Speech. Ear Hear 2020;40:1197-1209. PMID: 30762600; DOI: 10.1097/aud.0000000000000701.
Abstract
OBJECTIVE: Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations. DESIGN: Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music) that were for the most part correctly identified behaviorally. RESULTS: Compared to controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged in music compared to voice, in both populations. The N1-P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this was also true in both populations. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal hearing but not in CI listeners. CONCLUSIONS: The early portion of the ERP waveform reflected primarily the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing (by CI users) could be found in the later portion of the ERP and extended beyond the realm of speech.
15. Ogg M, Carlson TA, Slevc LR. The Rapid Emergence of Auditory Object Representations in Cortex Reflect Central Acoustic Attributes. J Cogn Neurosci 2019;32:111-123. PMID: 31560265; DOI: 10.1162/jocn_a_01472.
Abstract
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
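To make the decoding approach concrete, here is a minimal, hypothetical sketch of time-resolved multivariate decoding on simulated sensor data; the data shapes, classifier, and cross-validation scheme are assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated MEG epochs: trials x sensors x time samples (random values only).
rng = np.random.default_rng(2)
n_tokens, trials_per_token, n_sensors, n_times = 36, 5, 64, 40
X = rng.standard_normal((n_tokens * trials_per_token, n_sensors, n_times))
y = rng.permutation(np.repeat(np.arange(n_tokens), trials_per_token))  # 36 sound tokens

# Fit and cross-validate a classifier independently at each time point.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                     for t in range(n_times)])

# The time point with peak accuracy indexes when token identity is most
# decodable (the study reports a peak near 155 msec post-onset).
print("peak decoding at sample", int(accuracy.argmax()))
```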
16. Whitehead JC, Armony JL. Multivariate fMRI pattern analysis of fear perception across modalities. Eur J Neurosci 2019;49:1552-1563. DOI: 10.1111/ejn.14322.
Affiliation(s)
- Jocelyne C. Whitehead: Douglas Mental Health University Institute, Verdun, Quebec, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
- Jorge L. Armony: Douglas Mental Health University Institute, Verdun, Quebec, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Department of Psychiatry, McGill University, Montreal, Quebec, Canada
17. Wang L, Tsao Y, Chen F. Congruent Visual Stimulation Facilitates Auditory Frequency Change Detection: An ERP Study. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:2446-2449. PMID: 30440902; DOI: 10.1109/embc.2018.8512835.
Abstract
Exploring effective methods to improve the ability to detect frequency changes in normal-hearing listeners and hearing-impaired patients is important for enhancing their auditory perception, particularly in noise. This work studied the effect of congruent visual stimulation on facilitating auditory frequency change detection. Specifically, an event-related potential (ERP) experiment was designed to investigate the functional mechanism underlying audiovisual integration. Subjects were stimulated in three modalities, i.e., auditory-only, visual-only, and audiovisual. ERP components (e.g., N1 and P2) were compared among the three modalities. Results showed that congruent visual stimulation significantly improved the perceptual ability of auditory frequency change detection. Compared with the two unimodal conditions, the audiovisual modality yielded larger amplitudes in the N1 and P2 components. This work provided neurophysiological evidence that auditory frequency change detection can be facilitated by congruent visual stimulation.
18. Hamada M, Zaidan BB, Zaidan AA. A Systematic Review for Human EEG Brain Signals Based Emotion Classification, Feature Extraction, Brain Condition, Group Comparison. J Med Syst 2018;42:162. PMID: 30043178; DOI: 10.1007/s10916-018-1020-8.
Abstract
The study of electroencephalography (EEG) signals is not a new topic. However, the analysis of human emotions upon exposure to music is considered an important research direction. Although distributed across various academic databases, research on this concept is limited. To extend research in this area, the researchers explored and analysed the academic articles published within the mentioned scope. Thus, in this paper a systematic review is carried out to map the research landscape of EEG-based human emotion into a taxonomy. All articles on EEG-based human emotion and music were systematically searched in three main databases, ScienceDirect, Web of Science and IEEE Xplore, from 1999 to 2016. These databases feature academic studies that used EEG to measure brain signals, with a focus on the effects of music on human emotions. The screening and filtering of articles were performed in three iterations. In the first iteration, duplicate articles were excluded. In the second iteration, the articles were filtered according to their titles and abstracts, and articles outside the scope of the domain were excluded. In the third iteration, the articles were filtered by reading the full text and excluding articles outside the scope of the domain or that did not meet the criteria. Based on these inclusion and exclusion criteria, 100 articles were selected and separated into five classes. The first class, comprising 39 articles (39%), concerns emotion, wherein various emotions are classified using artificial intelligence (AI). The second class, comprising 21 articles (21%), is composed of studies that use EEG techniques; this class is named 'brain condition'. The third class, comprising eight articles (8%), relates to feature extraction, a step that precedes emotion classification. It should be noted that this process makes use of classifiers; however, these articles are not listed under the first class because they focus on feature extraction rather than classifier accuracy. The fourth class, comprising 26 articles (26%), consists of studies that compare two or more groups to identify and characterize human emotion based on EEG. The final class, comprising six articles (6%), represents articles that study music as a stimulus and its impact on brain signals. The review then discusses five main categories: action types, age of the participants, sample size, duration of recording and of listening to music, and the countries or nationalities of the authors who published these previous studies. It afterwards identifies the main characteristics of this promising area of science: the motivation for using EEG to measure human brain signals, the open challenges obstructing its employment, and recommendations to improve the utilization of EEG.
Affiliation(s)
- Mohamed Hamada: Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia
- B B Zaidan: Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia
- A A Zaidan: Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia
19. Sachs ME, Habibi A, Damasio A, Kaplan JT. Decoding the neural signatures of emotions expressed through sound. Neuroimage 2018;174:1-10. DOI: 10.1016/j.neuroimage.2018.02.058.
20. Ahmed DG, Paquette S, Zeitouni A, Lehmann A. Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation. Clin EEG Neurosci 2018;49:143-151. PMID: 28958161; DOI: 10.1177/1550059417733386.
Abstract
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI simulator. We recorded 16 normal-hearing participants' electroencephalographic activity while they listened to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency for the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implants' limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
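For orientation, CI simulation of this kind is often implemented with a noise vocoder. The sketch below is a rough, generic illustration only; the channel count, filter design, and synthetic input are assumptions and do not reproduce the processing used in this study.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def noise_vocode(signal, sr, n_channels=8, lo=100.0, hi=7000.0):
    """Crude noise vocoder: band-split, extract envelopes, remodulate noise."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                      # band amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                             # envelope-modulated noise
    return out

# Toy input: an amplitude-modulated 150 Hz tone standing in for a vocal burst.
sr = 16000
t = np.arange(sr) / sr
burst = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
simulated = noise_vocode(burst, sr)
print(simulated.shape)
```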
Affiliation(s)
- Duha G Ahmed: International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
- Sebastian Paquette: International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Anthony Zeitouni: Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
- Alexandre Lehmann: International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
21. Schirmer A, Gunter TC. Temporal signatures of processing voiceness and emotion in sound. Soc Cogn Affect Neurosci 2018;12:902-909. PMID: 28338796; PMCID: PMC5472162; DOI: 10.1093/scan/nsx020.
Abstract
This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited to unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential) with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance.
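As background on the control stimuli, spectral rotation is commonly implemented by mirroring the spectrum of a band-limited signal around the band's midpoint. The sketch below is a simplified, hypothetical illustration; the band limit, input signal, and implementation details are assumptions, not the stimuli used in this study.

```python
import numpy as np

def spectrally_rotate(signal, sr, band_hz=4000.0):
    """Mirror the complex spectrum within 0..band_hz (crude spectral rotation)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    in_band = freqs <= band_hz
    rotated = spectrum.copy()
    rotated[in_band] = spectrum[in_band][::-1]   # flip the band end-for-end
    return np.fft.irfft(rotated, n=len(signal))

# Toy "vocal-like" input: two harmonics of a 220 Hz fundamental, 1 s long.
sr = 16000
t = np.arange(sr) / sr
original = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
control = spectrally_rotate(original, sr)        # similar energy, scrambled spectrum
print(original.shape, control.shape)
```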
Affiliation(s)
- Annett Schirmer: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, Chinese University of Hong Kong, Hong Kong
- Thomas C Gunter: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
22. Ogg M, Slevc LR, Idsardi WJ. The time course of sound category identification: Insights from acoustic features. J Acoust Soc Am 2017;142:3459. PMID: 29289109; DOI: 10.1121/1.5014057.
Abstract
Humans have an impressive, automatic capacity for identifying and organizing sounds in their environment. However, little is known about the timescales that sound identification functions on, or the acoustic features that listeners use to identify auditory objects. To better understand the temporal and acoustic dynamics of sound category identification, two go/no-go perceptual gating studies were conducted. Participants heard speech, musical instrument, and human-environmental sounds ranging from 12.5 to 200 ms in duration. Listeners could reliably identify sound categories with just 25 ms of duration. In experiment 1, participants' performance on instrument sounds showed a distinct processing advantage at shorter durations. Experiment 2 revealed that this advantage was largely dependent on regularities in instrument onset characteristics relative to the spectrotemporal complexity of environmental sounds and speech. Models of participant responses indicated that listeners used spectral, temporal, noise, and pitch cues in the task. Aspects of spectral centroid were associated with responses for all categories, while noisiness and spectral flatness were associated with environmental and instrument responses, respectively. Responses for speech and environmental sounds were also associated with spectral features that varied over time. Experiment 2 indicated that variability in fundamental frequency was useful in identifying steady state speech and instrument stimuli.
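To give a concrete sense of these descriptors, the sketch below computes spectral centroid, spectral flatness (noisiness), and frame-wise F0 for a synthetic tone using librosa; the signal and parameter choices are illustrative assumptions, not the study's stimuli or feature pipeline.

```python
import numpy as np
import librosa

# Synthetic 200 ms test signal: a 440 Hz tone plus a little noise.
sr = 22050
t = np.linspace(0, 0.2, int(sr * 0.2), endpoint=False)
rng = np.random.default_rng(3)
y = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.05 * rng.standard_normal(t.size)

centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # "brightness" per frame
flatness = librosa.feature.spectral_flatness(y=y)          # noisiness per frame
f0 = librosa.yin(y, fmin=80, fmax=1000, sr=sr)             # frame-wise F0 estimates

print("mean spectral centroid (Hz):", float(centroid.mean()))
print("mean spectral flatness:", float(flatness.mean()))
print("F0 variability (SD, Hz):", float(np.std(f0)))
```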
Affiliation(s)
- Mattson Ogg: Neuroscience and Cognitive Science Program, University of Maryland, 4090 Union Drive, College Park, Maryland 20742, USA
- L Robert Slevc: Department of Psychology, University of Maryland, 4094 Campus Drive, College Park, Maryland 20742, USA
- William J Idsardi: Department of Linguistics, University of Maryland, 1401 Marie Mount Hall, College Park, Maryland 20742, USA
23. Nolden S, Rigoulot S, Jolicoeur P, Armony JL. Effects of musical expertise on oscillatory brain activity in response to emotional sounds. Neuropsychologia 2017;103:96-105. DOI: 10.1016/j.neuropsychologia.2017.07.014.
24. Zioga I, Di Bernardi Luft C, Bhattacharya J. Musical training shapes neural responses to melodic and prosodic expectation. Brain Res 2016;1650:267-282. PMID: 27622645; PMCID: PMC5069926; DOI: 10.1016/j.brainres.2016.09.015.
Abstract
Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training. The present study investigated how music and language interact on the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances which were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants' (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded. Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians presented a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial aspect of expertise could be attributed to its strengthening effect on general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing on the pitch dimension, and further demonstrate a potential modulation by musical expertise.
Highlights
- Melodic expectancy influences the processing of prosodic expectancy.
- Musical expertise modulates pitch processing in music and language.
- Musicians have a more refined response to pitch.
- Musicians' neural responses are proportional to their level of musical expertise.
- Possible association between the P200 neural component and behavioural facilitation.
Affiliation(s)
- Ioanna Zioga: Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom
- Caroline Di Bernardi Luft: Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom; School of Biological and Chemical Sciences, Queen Mary, University of London, Mile End Rd, London E1 4NS, United Kingdom
- Joydeep Bhattacharya: Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom
25. Rigoulot S, Armony JL. Early selectivity for vocal and musical sounds: electrophysiological evidence from an adaptation paradigm. Eur J Neurosci 2016;44:2786-2794. PMID: 27600697; DOI: 10.1111/ejn.13391.
Abstract
There is growing interest in characterizing the neural basis of music perception and, in particular, assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in the N100 component magnitude in response to musical sounds when they were preceded by music (either the same or different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a diminution of amplitude when speech sounds were preceded by speech, compared to music. No such decrease was found when we compared the responses to music sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.
Affiliation(s)
- Simon Rigoulot: Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
- Jorge L Armony: Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
26. Jiang X, Pell MD. Neural responses towards a speaker's feeling of (un)knowing. Neuropsychologia 2015;81:79-93. PMID: 26700458; DOI: 10.1016/j.neuropsychologia.2015.12.008.
Abstract
During interpersonal communication, listeners must rapidly evaluate verbal and vocal cues to arrive at an integrated meaning about the utterance and about the speaker, including a representation of the speaker's 'feeling of knowing' (i.e., how confident they are in relation to the utterance). In this study, we investigated the time course and neural responses underlying a listener's ability to evaluate speaker confidence from combined verbal and vocal cues. We recorded real-time brain responses as listeners judged statements conveying three levels of confidence with the speaker's voice (confident, close-to-confident, unconfident), which were preceded by meaning-congruent lexical phrases (e.g. I am positive, Most likely, Perhaps). Event-related potentials to utterances with combined lexical and vocal cues about speaker confidence were compared to responses elicited by utterances without the verbal phrase in a previous study (Jiang and Pell, 2015). Utterances with combined cues about speaker confidence elicited reduced, N1, P2 and N400 responses when compared to corresponding utterances without the phrase. When compared to confident statements, close-to-confident and unconfident expressions elicited reduced N1 and P2 responses and a late positivity from 900 to 1250 ms; unconfident and close-to-confident expressions were differentiated later in the 1250-1600 ms time window. The effect of lexical phrases on confidence processing differed for male and female participants, with evidence that female listeners incorporated information from the verbal and vocal channels in a distinct manner. Individual differences in trait empathy and trait anxiety also moderated neural responses during confidence processing. Our findings showcase the cognitive processing mechanisms and individual factors governing how we infer a speaker's mental (knowledge) state from the speech signal.
Affiliation(s)
- Xiaoming Jiang: School of Communication Sciences and Disorders and Center for Research in Brain, Language and Music, McGill University, Montréal, Canada
- Marc D Pell: School of Communication Sciences and Disorders and Center for Research in Brain, Language and Music, McGill University, Montréal, Canada
27. Pell MD, Rothermich K, Liu P, Paulmann S, Sethi S, Rigoulot S. Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody. Biol Psychol 2015;111:14-25. PMID: 26307467; DOI: 10.1016/j.biopsycho.2015.08.008.
Abstract
This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100) and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450-700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
Affiliation(s)
- M D Pell: School of Communication Sciences and Disorders, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Montreal, Canada
- K Rothermich: School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- P Liu: School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- S Paulmann: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- S Sethi: School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- S Rigoulot: International Laboratory for Brain, Music, and Sound Research, Montreal, Canada