1
Sugii N, Matsuda M, Ishikawa E. Prosody Disorder and Sing-Song Speech in a Patient With Recurrent Glioblastoma: A Case Report. Cureus 2024; 16:e76385. PMID: 39867057; PMCID: PMC11761159; DOI: 10.7759/cureus.76385.
Abstract
Dysprosody affects rhythm and intonation in speech, impairing the expression of emotion or attitude, and usually presents as a negative symptom with a monotonous tone. We herein report a rare case of recurrent glioblastoma (GBM) with dysprosody featuring sing-song speech. A 68-year-old man, formerly left-handed, with right temporal GBM underwent gross total resection. After chemoradiation therapy, he was discharged without any deficits. Nineteen months later, the patient exhibited recurrence and presented a peculiar way of speaking with excessive melodic intonation. Head magnetic resonance imaging revealed new enhancing lesions in the residual right temporal lobe and the splenium of the corpus callosum, surrounded by an extensive T2-hyperintense area. The case highlights the bilateral hemispheric network underlying prosody and the compensatory failure caused by tumor progression and connectivity disruption. This first account of sing-song dysprosody in a GBM patient underscores the complexity of the language network and the need for further case accumulation to elucidate the pathophysiology of such rare presentations.
Affiliation(s)
- Narushi Sugii
- Department of Neurosurgery, University of Tsukuba, Tsukuba, JPN
- Eiichi Ishikawa
- Department of Neurosurgery, University of Tsukuba Hospital, Tsukuba, JPN
2
Gao P, Jiang Z, Yang Y, Zheng Y, Feng G, Li X. Temporal neural dynamics of understanding communicative intentions from speech prosody. Neuroimage 2024; 299:120830. PMID: 39245398; DOI: 10.1016/j.neuroimage.2024.120830.
Abstract
Understanding the correct intention of a speaker is critical for social interaction, and speech prosody is an important source for inferring speakers' intentions during verbal communication. However, the neural dynamics by which the human brain translates prosodic cues into a mental representation of communicative intentions in real time remain unclear. Here, we recorded the electroencephalogram (EEG) while participants listened to dialogues. The prosodic features of the critical words at the end of sentences were manipulated to signal suggestion, warning, or neutral intentions. Suggestion and warning intentions evoked enhanced late positive event-related potentials (ERPs) compared to the neutral condition. Linear mixed-effects model (LMEM) regression and representational similarity analysis (RSA) revealed that these ERP effects were distinctively correlated with prosodic acoustic analysis, emotional valence evaluation, and intention interpretation in different time windows; the onset latency increased significantly with the level of abstractness and communicative intentionality. Neural representations of intention and emotional information emerged and persisted in parallel over a long time window, guiding the correct identification of communicative intention. These results provide new insights into the structural components of intention processing and the temporal neural dynamics underlying the comprehension of communicative intention from speech prosody in online social interactions.
Affiliation(s)
- Panke Gao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Zhufang Jiang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
- Yuanyi Zheng
- School of Psychology, Shenzhen University, Shenzhen, Guangdong, China
- Gangyi Feng
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
- Xiaoqing Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
3
Nault DR, Bonar RJT, Ilyaz E, Dirks MA, Morningstar M. Fast and friendly: The role of vocal cues in adolescents' responses to and perceptions of peer provocation. J Res Adolesc 2024; 34:1054-1068. PMID: 38888263; DOI: 10.1111/jora.12992.
Abstract
Adolescents self-report using different strategies to respond to peer provocation. However, we have a limited understanding of how these responses are behaviorally enacted and perceived by peers. This study examined the extent to which adolescents' self-reported responses to peer provocation (i.e., aggressive, assertive, and withdrawn) predicted how their vocal enactments of standardized responses to peer provocation were perceived by other adolescents. Three vocal cues relevant to the communication of emotional intent (average pitch, average intensity, and speech rate) were explored as moderators of these associations. Adolescent speakers (n = 39; mean age = 12.67 years; 66.7% girls) completed a self-report measure of how they would choose to respond to scenarios involving peer provocation; they also enacted standardized vocal responses to hypothetical peer provocation scenarios. Recordings of speakers' vocal responses were presented to a separate sample of adolescent listeners (n = 129; mean age = 12.12 years; 52.7% girls) in an online listening task. Speakers who self-reported greater use of assertive response strategies enacted standardized vocal responses that were rated as significantly friendlier by listeners. Vocal responses enacted with faster speech rates were also rated as significantly friendlier by listeners. Speakers' self-reported use of aggression and withdrawal was not significantly related to listeners' ratings of their standardized vocal responses. These findings suggest that adolescents may be perceived differently by their peers depending on the way in which their response is enacted; specifically, a faster speech rate may be perceived as friendlier and thus de-escalate peer conflict. Future studies should consider not only what youth say and/or do when responding to peer provocation but also how they say it.
Affiliation(s)
- Daniel R Nault
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Riley J T Bonar
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Emma Ilyaz
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Melanie A Dirks
- Department of Psychology, McGill University, Montreal, Québec, Canada
4
Ross ED. Affective Prosody and Its Impact on the Neurology of Language, Depression, Memory and Emotions. Brain Sci 2023; 13:1572. PMID: 38002532; PMCID: PMC10669595; DOI: 10.3390/brainsci13111572.
Abstract
Based on the seminal publications of Paul Broca and Carl Wernicke, who established that aphasic syndromes (disorders of the verbal-linguistic aspects of communication) are predominantly the result of focal left-hemisphere lesions, "language" is traditionally viewed as a lateralized function of the left hemisphere. This, in turn, has diminished and delayed acceptance that the right hemisphere also has a vital role in language, specifically in modulating affective prosody, which is essential for communicative competency and psychosocial well-being. Focal lesions of the right hemisphere may result in disorders of affective prosody (aprosodic syndromes) that are functionally and anatomically analogous to the aphasic syndromes occurring after focal left-hemisphere lesions. This paper reviews the deductive research published over the last four decades that has elucidated the neurology of affective prosody and, in turn, led to a more complete and nuanced understanding of the neurology of language, depression, emotions and memory. It also presents the serendipitous clinical observations (inductive research) and fortuitous inter-disciplinary collaborations that were crucial in guiding the deductive research, culminating in the concept that primary emotions and related display behaviors are a lateralized function of the right hemisphere, whereas social emotions and related display behaviors are a lateralized function of the left hemisphere.
Affiliation(s)
- Elliott D. Ross
- Department of Neurology, University of Oklahoma Health Science Center, Oklahoma City, OK 73104, USA
- Department of Neurology, University of Colorado School of Medicine, Aurora, CO 80045, USA
5
Baglione H, Coulombe V, Martel-Sauvageau V, Monetta L. The impacts of aging on the comprehension of affective prosody: A systematic review. Appl Neuropsychol Adult 2023:1-16. PMID: 37603689; DOI: 10.1080/23279095.2023.2245940.
Abstract
Recent clinical reports have suggested a possible decline with aging in the ability to understand emotions in speech (affective prosody comprehension). The present study aims to further examine the differences in affective prosody comprehension between older and younger adults. Following a recent cognitive model that divides affective prosody comprehension into perceptual and lexico-semantic components, a cognitive approach targeting these components was adopted. The influence of emotion valence and category on aging performance was also investigated. A systematic review of the literature was carried out using six databases. Twenty-one articles, presenting 25 experiments, were included. All experiments analyzed the affective prosody comprehension performance of older versus younger adults. The results confirmed that older adults identify emotions in speech less accurately than younger adults. They also indicated that affective prosody comprehension could be modulated by emotion category but not by emotional valence. Various theories account for this difference in performance, namely auditory perception, brain aging, and socioemotional selectivity theory, which suggests that older people tend to neglect negative emotions. However, the deficits underlying the decline in affective prosody comprehension remain incompletely explained.
Affiliation(s)
- Héloïse Baglione
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
- Valérie Coulombe
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
- Vincent Martel-Sauvageau
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
- Laura Monetta
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
6
Hu N, Chen A, Quené H, Sanders TJM. The role of prosody in interpreting causality in English discourse. PLoS One 2023; 18:e0286003. PMID: 37267347; DOI: 10.1371/journal.pone.0286003.
Abstract
Previous studies have established that certain causal connectives encode information about the semantic-pragmatic distinction between types of causal relations, such as CAUSE-CONSEQUENCE versus CLAIM-ARGUMENT relations. These "specialized" causal connectives assist listeners in discerning different types of causality. Research has also demonstrated that utterances expressing CLAIM-ARGUMENT relations exhibit distinct prosodic characteristics compared to utterances expressing CAUSE-CONSEQUENCE relations. However, it remains unknown whether the prosodic characteristics of utterances expressing causality can aid listeners in determining the specific type of causality being conveyed. To address this gap, this study investigates the impact of prosody, specifically the prosody of the English causal connective so, on listeners' interpretation of the type of causality expressed. We conducted a perception experiment employing a forced-choice discourse completion task, in which participants selected a continuation for each sound clip they heard. Each sound clip consisted of factual events followed by the causal connective so. The odds of listeners choosing subjective continuations over objective continuations increased when the connective so at the end of the sound clip was pronounced with subjective-causality prosodic features, such as prolonged duration and a concave f0 contour. This finding suggests that the prosody of the connective so plays a role in conveying subjectivity in causality, guiding listeners in interpreting causal relations. Our data also revealed individual variation among listeners in their interpretation of prosodic information related to the subjective-objective causality contrast.
Affiliation(s)
- Na Hu
- Department of Language, Literature and Communication, Institute for Language Sciences, Utrecht University, Utrecht, the Netherlands
- Aoju Chen
- Department of Language, Literature and Communication, Institute for Language Sciences, Utrecht University, Utrecht, the Netherlands
- Hugo Quené
- Department of Language, Literature and Communication, Institute for Language Sciences, Utrecht University, Utrecht, the Netherlands
- Ted J M Sanders
- Department of Language, Literature and Communication, Institute for Language Sciences, Utrecht University, Utrecht, the Netherlands
7
Ukaegbe OC, Holt BE, Keator LM, Brownell H, Blake ML, Lundgren K. Aprosodia Following Focal Brain Damage: What's Right and What's Left? Am J Speech Lang Pathol 2022; 31:2313-2328. PMID: 35868292; DOI: 10.1044/2022_ajslp-21-00302.
Abstract
Purpose: Hemispheric specialization for the comprehension and expression of linguistic and emotional prosody is typically attributed to the right hemisphere. This study used techniques adapted from meta-analysis to critically examine the strength of existing evidence for hemispheric lateralization of prosody following brain damage. Method: Twenty-one databases were searched for articles published from 1970 to 2020 addressing differences in prosody performance between groups defined by right hemisphere damage and left hemisphere damage. Hedges's g effect sizes were calculated for all possible prosody comparisons. Primary analyses summarize effects for four types: linguistic production, linguistic comprehension, emotion comprehension, and emotion production. Within each primary analysis, Hedges's g values were averaged across comparisons (usually from a single article) based on the same sample of individuals. Secondary analyses explore more specific classifications of comparisons. Results: Of the 113 articles investigating comprehension and production of emotional and linguistic prosody, 62 were deemed appropriate for data extraction, but only 21 met inclusion criteria, passed quality reviews, and provided sufficient information for analysis. Evidence from this review illustrates the heterogeneity of methods and results across studies of aprosodia. The review provides inconsistent support for a selective contribution of the two cerebral hemispheres to prosody comprehension and production; the strongest finding suggests that right hemisphere lesions disrupt emotional prosody comprehension more than left hemisphere lesions. Conclusion: This review highlights the impoverished nature of the existing literature, offers suggestions for future research, and notes clinical implications for the prognostication, evaluation, and treatment of aprosodia. Supplemental material: https://doi.org/10.23641/asha.20334987.
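For readers unfamiliar with the review's effect-size metric: Hedges's g is Cohen's d (the pooled-standard-deviation standardized mean difference between two groups) multiplied by a small-sample bias-correction factor. The sketch below is illustrative only, not the review's analysis code, and the example numbers are invented:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference between two groups."""
    df = n1 + n2 - 2
    # Pooled standard deviation across both groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sp            # Cohen's d
    j = 1 - 3 / (4 * df - 1)      # Hedges's small-sample correction factor
    return d * j

# Hypothetical example: an RHD group scoring below an LHD group on an
# emotion comprehension task yields a negative g (first group lower).
g = hedges_g(14.0, 4.0, 20, 18.0, 4.0, 20)
```

The correction factor matters most for the small lesion samples typical of this literature; with n = 20 per group it shrinks d by roughly 2%.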
Affiliation(s)
- Onyinyechi C Ukaegbe
- Department of Communication Sciences and Disorders, The University of North Carolina Greensboro
- Brooke E Holt
- Department of Communication Sciences and Disorders, The University of North Carolina Greensboro
- Lynsey M Keator
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia
- Hiram Brownell
- Department of Psychology and Neuroscience, Boston College, MA
- Kristine Lundgren
- Department of Communication Sciences and Disorders, The University of North Carolina Greensboro
8
Morningstar M, Grannis C, Mattson WI, Nelson EE. Functional patterns of neural activation during vocal emotion recognition in youth with and without refractory epilepsy. Neuroimage Clin 2022; 34:102966. PMID: 35182929; PMCID: PMC8859003; DOI: 10.1016/j.nicl.2022.102966.
Abstract
Epilepsy has been associated with deficits in the social cognitive ability to decode others' nonverbal cues to infer their emotional intent (emotion recognition). Studies have begun to identify potential neural correlates of these deficits, but have focused primarily on one type of nonverbal cue (facial expressions) to the detriment of other crucial social signals that inform the tenor of social interactions (e.g., tone of voice). Less is known about how individuals with epilepsy process these forms of social stimuli, with a particular gap in knowledge about representation of vocal cues in the developing brain. The current study compared vocal emotion recognition skills and functional patterns of neural activation to emotional voices in youth with and without refractory focal epilepsy. We made novel use of inter-subject pattern analysis to determine brain areas in which activation to emotional voices was predictive of epilepsy status. Results indicated that youth with epilepsy were comparatively less able to infer emotional intent in vocal expressions than their typically developing peers. Activation to vocal emotional expressions in regions of the mentalizing and/or default mode network (e.g., right temporo-parietal junction, right hippocampus, right medial prefrontal cortex, among others) differentiated youth with and without epilepsy. These results are consistent with emerging evidence that pediatric epilepsy is associated with altered function in neural networks subserving social cognitive abilities. Our results contribute to ongoing efforts to understand the neural markers of social cognitive deficits in pediatric epilepsy, in order to better tailor and funnel interventions to this group of youth at risk for poor social outcomes.
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, Kingston, ON, Canada; Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States
- C Grannis
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
- W I Mattson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
- E E Nelson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States
9
Morningstar M, Mattson WI, Nelson EE. Longitudinal Change in Neural Response to Vocal Emotion in Adolescence. Soc Cogn Affect Neurosci 2022; 17:890-903. PMID: 35323933; PMCID: PMC9527472; DOI: 10.1093/scan/nsac021.
Abstract
Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth’s neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants’ age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.
Affiliation(s)
- Michele Morningstar
- Correspondence should be addressed to Michele Morningstar, Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON K7L 3L3, Canada
- Whitney I Mattson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH 43205, USA
- Eric E Nelson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH 43205, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH 43205, USA
10
Caballero JA, Mauchand M, Jiang X, Pell MD. Cortical processing of speaker politeness: Tracking the dynamic effects of voice tone and politeness markers. Soc Neurosci 2021; 16:423-438. PMID: 34102955; DOI: 10.1080/17470919.2021.1938667.
Abstract
Information in the tone of voice alters social impressions and underlying brain activity as listeners evaluate the interpersonal relevance of utterances. Here, we presented requests that expressed politeness distinctions through the voice (polite/rude) and through an explicit linguistic marker (half of the requests began with Please). Thirty participants performed a social perception task (rating friendliness) while their electroencephalogram was recorded. Behaviorally, vocal politeness strategies had a much stronger influence on perceived friendliness than the linguistic marker. Event-related potentials revealed rapid effects of (im)polite voices on cortical activity prior to ~300 ms; P200 amplitudes increased for polite versus rude voices, suggesting that the speaker's polite stance was registered as more salient in our task. At later stages, politeness distinctions encoded by the speaker's voice and their use of Please interacted, modulating activity in the N400 (300-500 ms) and late positivity (600-800 ms) time windows. These patterns suggest that initial attention deployment to politeness cues is rapidly influenced by the motivational significance of a speaker's voice, and that later integration of vocal and lexical information entails increased cognitive effort to reevaluate utterances with ambiguous or contradictory cues. The potential influence of social anxiety on the P200 effect is also discussed.
Affiliation(s)
- Jonathan A Caballero
- School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
- Maël Mauchand
- School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
- Xiaoming Jiang
- Institute of Linguistics (IoL), Shanghai International Studies University, Shanghai, China
- Marc D Pell
- School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
11
Luo H, Zhao Y, Fan F, Fan H, Wang Y, Qu W, Wang Z, Tan Y, Zhang X, Tan S. A bottom-up model of functional outcome in schizophrenia. Sci Rep 2021; 11:7577. PMID: 33828168; PMCID: PMC8027854; DOI: 10.1038/s41598-021-87172-4.
Abstract
Schizophrenia results in poor functional outcomes owing to numerous factors. This study provides the first test of a bottom-up causal model of functional outcome in schizophrenia, using neurocognition, vocal emotional cognition, alexithymia, and negative symptoms as predictors. We investigated a cross-sectional sample of 135 individuals with schizophrenia and 78 controls. Using a series of structural equation modelling analyses, a single pathway was generated among scores from the MATRICS Consensus Cognitive Battery (MCCB), a vocal emotion recognition test, the Toronto Alexithymia Scale (TAS), the Brief Negative Symptom Scale, and the Personal and Social Performance Scale. Scores on each dimension of the MCCB were significantly lower in the schizophrenia group than in the control group. Recognition accuracy for the different emotions (anger, disgust, fear, sadness, surprise, and satire, but not calm) was significantly lower in the schizophrenia group than in the control group. Moreover, scores on the three dimensions of the TAS were significantly higher in the schizophrenia group than in the control group. In path analysis modelling, the proposed bottom-up causal model showed a strong fit with the data and formed a single pathway, from neurocognition to vocal emotional cognition, to alexithymia, to negative symptoms, and to poor functional outcomes. These results strongly support the proposed model, which could be used to better understand the causal factors related to functional outcome and to develop intervention strategies to improve functional outcomes in schizophrenia.
Affiliation(s)
- Hongge Luo
- School of Public Health, North China University of Science and Technology, Tangshan, China; College of Psychology, North China University of Science and Technology, Tangshan, China
- Yanli Zhao
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Fengmei Fan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Hongzhen Fan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Yunhui Wang
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Wei Qu
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Zhiren Wang
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Yunlong Tan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Xiujun Zhang
- School of Public Health, North China University of Science and Technology, Tangshan, China
- Shuping Tan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
12
Vergis N, Jiang X, Pell MD. Neural responses to interpersonal requests: Effects of imposition and vocally-expressed stance. Brain Res 2020; 1740:146855. DOI: 10.1016/j.brainres.2020.146855.
13
Sumathi TA, Spinola O, Singh NC, Chakrabarti B. Perceived Closeness and Autistic Traits Modulate Interpersonal Vocal Communication. Front Psychiatry 2020; 11:50. PMID: 32180734; PMCID: PMC7059848; DOI: 10.3389/fpsyt.2020.00050.
Abstract
Vocal modulation is a critical component of interpersonal communication. It not only serves as a dynamic and flexible tool for self-expression and linguistic information but also plays a key role in social behavior. Variation in vocal modulation can be driven by individual traits of interlocutors as well as factors relating to the dyad, such as the perceived closeness between interlocutors. In this study we examine both of these sources of variation. At an individual level, we examine the impact of autistic traits, since lack of appropriate vocal modulation has often been associated with Autism Spectrum Disorders. At a dyadic level, we examine the role of perceived closeness between interlocutors on vocal modulation. The study was conducted in three separate samples from India, Italy, and the UK. Articulatory features were extracted from recorded conversations between a total of 85 same-sex pairs of participants, and the articulation space calculated. A larger articulation space corresponds to a greater number of spectro-temporal modulations (articulatory variations) sampled by the speaker. Articulation space showed a positive association with interpersonal closeness and a weak negative association with autistic traits. This study thus provides novel insights into individual and dyadic variation that can influence interpersonal vocal communication.
Affiliation(s)
- T. A. Sumathi
- National Brain Research Centre, Language, Literacy and Music Laboratory, Manesar, India
| | - Olivia Spinola
- Department of Psychology, Università degli Studi di Milano Bicocca, Milan, Italy
- Centre for Autism, School of Psychology & Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Department of Psychology, Sapienza University of Rome, Rome, Italy
| | | | - Bhismadev Chakrabarti
- Centre for Autism, School of Psychology & Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Inter University Centre for Biomedical Research, Mahatma Gandhi University, Kottayam, India
- India Autism Center, Kolkata, India
| |
|
14
|
Age-related differences in neural activation and functional connectivity during the processing of vocal prosody in adolescence. Cogn Affect Behav Neurosci 2019; 19:1418-1432. [PMID: 31515750] [DOI: 10.3758/s13415-019-00742-y]
Abstract
The ability to recognize others' emotions based on vocal emotional prosody follows a protracted developmental trajectory during adolescence. However, little is known about the neural mechanisms supporting this maturation. The current study investigated age-related differences in neural activation during a vocal emotion recognition (ER) task. Listeners aged 8 to 19 years old completed the vocal ER task while undergoing functional magnetic resonance imaging. The task of categorizing vocal emotional prosody elicited activation primarily in temporal and frontal areas. Age was associated with a) greater activation in regions in the superior, middle, and inferior frontal gyri, b) greater functional connectivity between the left precentral and inferior frontal gyri and regions in the bilateral insula and temporo-parietal junction, and c) greater fractional anisotropy in the superior longitudinal fasciculus, which connects frontal areas to posterior temporo-parietal regions. Many of these age-related differences in brain activation and connectivity were associated with better performance on the ER task. Increased activation in, and connectivity between, areas typically involved in language processing and social cognition may facilitate the development of vocal ER skills in adolescence.
|
15
|
Oechslin MS, Gschwind M, James CE. Tracking Training-Related Plasticity by Combining fMRI and DTI: The Right Hemisphere Ventral Stream Mediates Musical Syntax Processing. Cereb Cortex 2019; 28:1209-1218. [PMID: 28203797] [DOI: 10.1093/cercor/bhx033]
Abstract
As a functional homolog for left-hemispheric syntax processing in language, neuroimaging studies evidenced involvement of right prefrontal regions in musical syntax processing, of which underlying white matter connectivity remains unexplored so far. In the current experiment, we investigated the underlying pathway architecture in subjects with 3 levels of musical expertise. Employing diffusion tensor imaging tractography, departing from seeds from our previous functional magnetic resonance imaging study on music syntax processing in the same participants, we identified a pathway in the right ventral stream that connects the middle temporal lobe with the inferior frontal cortex via the extreme capsule, and corresponds to the left hemisphere ventral stream, classically attributed to syntax processing in language comprehension. Additional morphometric consistency analyses allowed dissociating tract core from more dispersed fiber portions. Musical expertise related to higher tract consistency of the right ventral stream pathway. Specifically, tract consistency in this pathway predicted the sensitivity for musical syntax violations. We conclude that enduring musical practice sculpts ventral stream architecture. Our results suggest that training-related pathway plasticity facilitates the right hemisphere ventral stream information transfer, supporting an improved sound-to-meaning mapping in music.
Affiliation(s)
- Mathias S Oechslin
- Faculty of Psychology and Educational Sciences, University of Geneva, CH-1211 Geneva, Switzerland
- Department of Education and Culture of the Canton of Thurgau, CH-8500 Frauenfeld, Switzerland
| | - Markus Gschwind
- Department of Neurology, Geneva University Hospitals, CH-1211 Geneva, Switzerland
- Department of Neuroscience, Campus Biotech, University of Geneva, CH-1202 Geneva, Switzerland
| | - Clara E James
- Faculty of Psychology and Educational Sciences, University of Geneva, CH-1211 Geneva, Switzerland
- Geneva Neuroscience Center, University of Geneva, CH-1211 Geneva, Switzerland
- HES-SO University of Applied Sciences and Arts Western Switzerland, School of Health Sciences, CH-1206 Geneva, Switzerland
| |
|
16
|
Burred JJ, Ponsot E, Goupil L, Liuni M, Aucouturier JJ. CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition. PLoS One 2019; 14:e0205943. [PMID: 30947281] [PMCID: PMC6448843] [DOI: 10.1371/journal.pone.0205943]
Abstract
Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of e.g. words, sentences or music, by lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings in small successive time segments (e.g. every successive 100 milliseconds in a spoken utterance), and applying a random parametric transformation of each segment’s pitch, duration or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present here two applications of the tool to generate stimuli for studying intonation processing of interrogative vs declarative speech, and rhythm processing of sung melodies.
Affiliation(s)
| | - Emmanuel Ponsot
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
- Laboratoire des Systèmes Perceptifs (CNRS UMR 8248) and Département d’études cognitives, École Normale Supérieure, PSL Research University, Paris, France
| | - Louise Goupil
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
| | - Marco Liuni
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
| | - Jean-Julien Aucouturier
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
| |
|
17
|
Cowen AS, Laukka P, Elfenbein HA, Liu R, Keltner D. The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures. Nat Hum Behav 2019; 3:369-382. [PMID: 30971794] [PMCID: PMC6687085] [DOI: 10.1038/s41562-019-0533-6]
Abstract
Central to emotion science is the degree to which categories, such as Awe, or broader affective features, such as Valence, underlie the recognition of emotional expression. To explore the processes by which people recognize emotion from prosody, US and Indian participants were asked to judge the emotion categories or affective features communicated by 2,519 speech samples produced by 100 actors from 5 cultures. With large-scale statistical inference methods, we find that prosody can communicate at least 12 distinct kinds of emotion that are preserved across the 2 cultures. Analyses of the semantic and acoustic structure of the recognition of emotions reveal that emotion categories drive the recognition of emotions more so than affective features, including Valence. In contrast to discrete emotion theories, however, emotion categories are bridged by gradients representing blends of emotions. Our findings, visualized within an interactive map, reveal a complex, high-dimensional space of emotional states recognized cross-culturally in speech prosody.
Affiliation(s)
- Alan S Cowen
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA.
| | - Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
| | | | - Runjing Liu
- Department of Statistics, University of California, Berkeley, Berkeley, CA, USA
| | - Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
| |
|
18
|
Paulmann S, Weinstein N, Zougkou K. Now listen to this! Evidence from a cross-spliced experimental design contrasting pressuring and supportive communications. Neuropsychologia 2019; 124:192-201. [DOI: 10.1016/j.neuropsychologia.2018.12.011]
|
19
|
Jones M, Corcoran A, Jorge RE. The psychopharmacology of brain vascular disease/poststroke depression. Psychopharmacology of Neurologic Disease 2019; 165:229-241. [DOI: 10.1016/b978-0-444-64012-3.00013-7]
|
20
|
Cowen AS, Elfenbein HA, Laukka P, Keltner D. Mapping 24 emotions conveyed by brief human vocalization. Am Psychol 2018; 74:698-712. [PMID: 30570267] [DOI: 10.1037/amp0000399]
Abstract
Emotional vocalizations are central to human social life. Recent studies have documented that people recognize at least 13 emotions in brief vocalizations. This capacity emerges early in development, is preserved in some form across cultures, and informs how people respond emotionally to music. What is poorly understood is how emotion recognition from vocalization is structured within what we call a semantic space, the study of which addresses questions critical to the field: How many distinct kinds of emotions can be expressed? Do expressions convey emotion categories or affective appraisals (e.g., valence, arousal)? Is the recognition of emotion expressions discrete or continuous? Guided by a new theoretical approach to emotion taxonomies, we apply large-scale data collection and analysis techniques to judgments of 2,032 emotional vocal bursts produced in laboratory settings (Study 1) and 48 found in the real world (Study 2) by U.S. English speakers (N = 1,105). We find that vocal bursts convey at least 24 distinct kinds of emotion. Emotion categories (sympathy, awe), more so than affective appraisals (including valence and arousal), organize emotion recognition. In contrast to discrete emotion theories, the emotion categories conveyed by vocal bursts are bridged by smooth gradients with continuously varying meaning. We visualize the complex, high-dimensional space of emotion conveyed by brief human vocalization within an online interactive map.
|
21
|
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018; 90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019]
|
22
|
Zougkou K, Weinstein N, Paulmann S. ERP correlates of motivating voices: quality of motivation and time-course matters. Soc Cogn Affect Neurosci 2018; 12:1687-1700. [PMID: 28525641] [PMCID: PMC5647802] [DOI: 10.1093/scan/nsx064]
Abstract
Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g. ‘You absolutely have to do it my way’ spoken in a controlling tone of voice), or lacked motivationally biasing words (e.g. ‘Why don’t we meet again tomorrow’ spoken in a motivational tone of voice). Event-related brain potentials (ERPs) in response to motivations conveyed through words and prosody showed that listeners rapidly distinguished between motivations and neutral forms of communication as shown in enhanced P2 amplitudes in response to motivational when compared with neutral speech. This early detection mechanism is argued to help determine the importance of incoming information. Once assessed, motivational language is continuously monitored and thoroughly evaluated. When compared with neutral speech, listening to controlling (but not autonomy-supportive) speech led to enhanced late potential ERP mean amplitudes, suggesting that listeners are particularly attuned to controlling messages. The importance of controlling motivation for listeners is mirrored in effects observed for motivations expressed through prosody only. Here, an early rapid appraisal, as reflected in enhanced P2 amplitudes, is only found for sentences spoken in controlling (but not autonomy-supportive) prosody. Once identified as sounding pressuring, the message seems to be preferentially processed, as shown by enhanced late potential amplitudes in response to controlling prosody. 
Taken together, results suggest that motivational and neutral language are differentially processed; further, the data suggest that listening to cues signaling pressure and control cannot be ignored and lead to preferential, and more in-depth processing mechanisms.
Affiliation(s)
- Konstantina Zougkou
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester CO4 3SQ, UK
| | - Netta Weinstein
- School of Psychology, Cardiff University, Cardiff CF10 3AT, UK
| | - Silke Paulmann
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester CO4 3SQ, UK
| |
|
23
|
Abstract
In speech, social evaluations of a speaker’s dominance or trustworthiness are conveyed by distinguishing, but little-understood, pitch variations. This work describes how to combine state-of-the-art vocal pitch transformations with the psychophysical technique of reverse correlation and uses this methodology to uncover the prosodic prototypes that govern such social judgments in speech. This finding is of great significance, because the exact shape of these prototypes, and how they vary with sex, age, and culture, is virtually unknown, and because prototypes derived with the method can then be reapplied to arbitrary spoken utterances, thus providing a principled way to modulate personality impressions in speech. Human listeners excel at forming high-level social representations about each other, even from the briefest of utterances. In particular, pitch is widely recognized as the auditory dimension that conveys most of the information about a speaker’s traits, emotional states, and attitudes. While past research has primarily looked at the influence of mean pitch, almost nothing is known about how intonation patterns, i.e., finely tuned pitch trajectories around the mean, may determine social judgments in speech. Here, we introduce an experimental paradigm that combines state-of-the-art voice transformation algorithms with psychophysical reverse correlation and show that two of the most important dimensions of social judgments, a speaker’s perceived dominance and trustworthiness, are driven by robust and distinguishing pitch trajectories in short utterances like the word “Hello,” which remained remarkably stable whether male or female listeners judged male or female speakers. These findings reveal a unique communicative adaptation that enables listeners to infer social traits regardless of speakers’ physical characteristics, such as sex and mean pitch. 
By characterizing how any given individual’s mental representations may differ from this generic code, the method introduced here opens avenues to explore dysprosody and social-cognitive deficits in disorders like autism spectrum and schizophrenia. In addition, once derived experimentally, these prototypes can be applied to novel utterances, thus providing a principled way to modulate personality impressions in arbitrary speech signals.
|
24
|
Morningstar M, Ly VY, Feldman L, Dirks MA. Mid-Adolescents’ and Adults’ Recognition of Vocal Cues of Emotion and Social Intent: Differences by Expression and Speaker Age. J Nonverbal Behav 2018. [DOI: 10.1007/s10919-018-0274-7]
|
25
|
Jiang X, Sanford R, Pell MD. Neural systems for evaluating speaker (Un)believability. Hum Brain Mapp 2017; 38:3732-3749. [PMID: 28462535] [DOI: 10.1002/hbm.23630]
Abstract
Our voice provides salient cues about how confident we sound, which promotes inferences about how believable we are. However, the neural mechanisms involved in these social inferences are largely unknown. Employing functional magnetic resonance imaging, we examined the brain networks and individual differences underlying the evaluation of speaker believability from vocal expressions. Participants (n = 26) listened to statements produced in a confident, unconfident, or "prosodically unmarked" (neutral) voice, and judged how believable the speaker was on a 4-point scale. We found frontal-temporal networks were activated for different levels of confidence, with the left superior and inferior frontal gyrus more activated for confident statements, the right superior temporal gyrus for unconfident expressions, and bilateral cerebellum for statements in a neutral voice. Based on listener's believability judgment, we observed increased activation in the right superior parietal lobule (SPL) associated with higher believability, while increased left posterior central gyrus (PoCG) was associated with less believability. A psychophysiological interaction analysis found that the anterior cingulate cortex and bilateral caudate were connected to the right SPL when higher believability judgments were made, while supplementary motor area was connected with the left PoCG when lower believability judgments were made. Personal characteristics, such as interpersonal reactivity and the individual tendency to trust others, modulated the brain activations and the functional connectivity when making believability judgments. In sum, our data pinpoint neural mechanisms that are involved when inferring one's believability from a speaker's voice and establish ways that these mechanisms are modulated by individual characteristics of a listener.
Affiliation(s)
- Xiaoming Jiang
- School of Communication Sciences and Disorders, McGill University, Montréal, Canada
| | - Ryan Sanford
- McConnell Brain Imaging Center, Montréal Neurological Institute, McGill University, Montréal, Canada
| | - Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montréal, Canada
- McConnell Brain Imaging Center, Montréal Neurological Institute, McGill University, Montréal, Canada
| |
|
26
|
Vocal Cues Underlying Youth and Adult Portrayals of Socio-emotional Expressions. J Nonverbal Behav 2017. [DOI: 10.1007/s10919-017-0250-7]
|
27
|
Matsui T, Nakamura T, Utsumi A, Sasaki AT, Koike T, Yoshida Y, Harada T, Tanabe HC, Sadato N. The role of prosody and context in sarcasm comprehension: Behavioral and fMRI evidence. Neuropsychologia 2016; 87:74-84. [DOI: 10.1016/j.neuropsychologia.2016.04.031]
|
28
|
Abstract
Failure to recognize sarcasm can lead to important miscommunications. Few previous studies have identified brain lesions associated with impaired recognition of sarcasm. We tested the hypothesis that percent damage to specific white matter tracts, age, and education together predict accuracy in sarcasm recognition. Using multivariable linear regression, with age, education, and percent damage to each of eight white matter tracts as independent variables, and percent accuracy on sarcasm recognition as the dependent variable, we developed a model for predicting sarcasm recognition. Percent damage to the sagittal stratum had the greatest weight and was the only independent predictor of sarcasm recognition.
Affiliation(s)
- Cameron L Davis
- Departments of Neurology, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Kenichi Oishi
- Radiology, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Andreia V Faria
- Radiology, Johns Hopkins University, Baltimore, MD 21287, USA
| | - John Hsu
- Radiology, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Yessenia Gomez
- Departments of Neurology, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Susumu Mori
- Radiology, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Argye E Hillis
- Departments of Neurology, Johns Hopkins University, Baltimore, MD 21287, USA
- Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Johns Hopkins University, Baltimore, MD 21287, USA
- Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
| |
|
29
|
What Do You Mean by That?! An Electrophysiological Study of Emotional and Attitudinal Prosody. PLoS One 2015; 10:e0132947. [PMID: 26176622] [PMCID: PMC4503638] [DOI: 10.1371/journal.pone.0132947]
Abstract
The use of prosody during verbal communication is pervasive in everyday language and whilst there is a wealth of research examining the prosodic processing of emotional information, much less is known about the prosodic processing of attitudinal information. The current study investigated the online neural processes underlying the prosodic processing of non-verbal emotional and attitudinal components of speech via the analysis of event-related brain potentials related to the processing of anger and sarcasm. To examine these, sentences with prosodic expectancy violations created by cross-splicing a prosodically neutral head (‘he has’) and a prosodically neutral, angry, or sarcastic ending (e.g., ‘a serious face’) were used. Task demands were also manipulated, with participants in one experiment performing prosodic classification and participants in another performing probe-verification. Overall, whilst minor differences were found across the tasks, the results suggest that angry and sarcastic prosodic expectancy violations follow a similar processing time-course underpinned by similar neural resources.
|
30
|
On how the brain decodes vocal cues about speaker confidence. Cortex 2015; 66:9-34. [DOI: 10.1016/j.cortex.2015.02.002]
|
31
|
McGettigan C. The social life of voices: studying the neural bases for the expression and perception of the self and others during spoken communication. Front Hum Neurosci 2015; 9:129. [PMID: 25852517] [PMCID: PMC4365687] [DOI: 10.3389/fnhum.2015.00129]
Affiliation(s)
- Carolyn McGettigan
- Department of Psychology, Royal Holloway, University of London, Egham, UK
| |
|