1
Zipse L, Gallée J, Shattuck-Hufnagel S. A targeted review of prosodic production in agrammatic aphasia. Neuropsychol Rehabil 2024:1-41. PMID: 38848458. DOI: 10.1080/09602011.2024.2362243.
Abstract
It is unclear whether individuals with agrammatic aphasia have particularly disrupted prosody, or in fact have relatively preserved prosody that they can use in a compensatory way. A targeted literature review was undertaken to examine the evidence regarding the capacity of speakers with agrammatic aphasia to produce prosody. The aim was to answer the question: how much prosody can a speaker "do" with limited syntax? A systematic search of the literature for articles examining the production of grammatical prosody in people with agrammatism yielded 16 studies that were ultimately included in this review. Participant inclusion criteria, spoken language tasks, and analysis procedures vary widely across studies. The evidence indicates that timing aspects of prosody are disrupted in people with agrammatic aphasia, while the use of pitch and amplitude cues is more likely to be preserved in this population. Some, but not all, of these timing differences may be attributable to motor speech programming deficits (apraxia of speech, AOS) rather than aphasia, as these conditions frequently co-occur. Many of the included studies do not address AOS and its possible role in any observed effects. Finally, the available evidence indicates that even speakers with severe aphasia show a degree of preserved prosody in functional communication.
Affiliation(s)
- Lauryn Zipse
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, USA
- Jeanne Gallée
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, USA
- Department of Medicine, University of Washington, Seattle, WA, USA
2
Lausberg H, Dvoretska D, Ptito A. Production of co-speech gestures in the right hemisphere: Evidence from individuals with complete or anterior callosotomy. Neuropsychologia 2023;180:108484. PMID: 36638861. DOI: 10.1016/j.neuropsychologia.2023.108484.
Abstract
INTRODUCTION A right-hand preference for co-speech gestures in right-handed neurotypical individuals, as well as the co-occurrence of speech and gesture, has led neuropsychological research to primarily target the left hemisphere when investigating co-speech gesture production. However, the substantial number of spontaneous left-hand gestures in right-handed individuals has, thus far, been unexplained. Recent studies in individuals with complete callosotomy and exclusive left hemisphere speech production show a reliable left-hand preference for co-speech gestures, indicating right hemispheric generation. However, the findings raise the question of whether the separated right hemisphere is also able to generate representational gestures. The present study tests the proposition of a specific right hemispheric contribution to gesture production by differentiating gesture types, including representational ones, in individuals with complete callosotomy, and by including individuals with anterior callosotomy, in whom neural reorganization is less extensive. METHODS Three right-handed individuals with complete commissurotomy (A.A., N.G., G.C.) and three right-handed individuals with anterior callosotomy (C.E., S.R., L.D.), all with left hemisphere language dominance, and a matched right-handed neurotypical control group (n = 10) were examined in an experimental setting that included re-narration of a nonverbal animated cartoon and responding to intelligence questions. The participants' video-taped hand movement behavior was analyzed by two independent certified raters with the NEUROGES-ELAN system for nonverbal behavior and gesture. Unimanual right-hand and left-hand gestures were classified into eight gesture types. RESULTS The individuals with complete and anterior callosotomy performed unimanual co-speech gestures with the left as well as the right hand, with no significant preference for one hand for gestures overall. Concerning the specific gesture types, the group with complete callosotomy showed a significant right-hand preference for pantomime gestures, which also applied to the callosotomy total group. The group with anterior callosotomy displayed a significant left-hand preference for form presentation gestures. As a trend, the callosotomy total group differed from the neurotypical group in that they performed more left-hand egocentric deictic and left-hand form presentation gestures. DISCUSSION The present study replicates the finding of substantial left-hand use for unimanual co-speech gestures in individuals with complete callosotomy. The proposition of a right hemispheric contribution to gesture production independent of left hemispheric language production is corroborated by the finding that individuals with anterior callosotomy show a similar pattern of hand use for gestures. Representational gestures were displayed with either hand, suggesting that right hemispheric spatial cognition in particular can be directly expressed in gesture. The significant right-hand preference for pantomime gestures was striking and is compatible with the established left hemispheric specialization for tool use praxis. The findings shed new light on the left-hand gestures of neurotypical individuals, suggesting that these can be generated in the right hemisphere.
Affiliation(s)
- Hedda Lausberg
- Department of Neurology, Psychosomatic Medicine, and Psychiatry, German Sport University, Cologne, Germany
- Daniela Dvoretska
- Department of Neurology, Psychosomatic Medicine, and Psychiatry, German Sport University, Cologne, Germany
- Alain Ptito
- Montreal Neurological Institute, McGill University and McGill University Health Centre Research Institute, Montreal, Quebec, Canada
3
de Beer C, Wartenburger I, Huttenlauch C, Hanne S. A systematic review on production and comprehension of linguistic prosody in people with acquired language and communication disorders resulting from unilateral brain lesions. J Commun Disord 2023;101:106298. PMID: 36623377. DOI: 10.1016/j.jcomdis.2022.106298.
Abstract
BACKGROUND Prosody serves central functions in language processing, including linguistic functions (linguistic prosody) such as structuring the speech signal. Impairments in production and comprehension of linguistic prosody have been described for persons with unilateral right (RHDP) or left hemisphere damage (LHDP). However, reported results differ with respect to the characteristics and severities of these impairments. AIMS We conducted a systematic literature review focusing on production and comprehension of linguistic prosody at the prosody-syntax interface (i.e., phrase or sentence level) in LHDP and RHDP. METHODS & PROCEDURES In a systematic literature search we included: (i) empirical studies with (ii) adult RHDP and/or LHDP (iii) investigating production and/or comprehension of linguistic prosody at the (iv) phrase or sentence level and (v) reporting quantitative data on prosodic measures. We excluded overview papers; studies involving participants with dysarthria, apraxia of speech, foreign accent syndrome, psychiatric diseases, and/or neurodegenerative diseases; studies focusing primarily on emotional prosody or on lexical stress at the word level; and studies for which no full text was available and/or that were published in a language other than English. We searched the databases BIOSIS, MEDLINE, EMBASE, PubMed, Web of Science, CINAHL, Cochrane Library, PSYNDEX, PsycINFO, and speechBITE, last searched on January 13th, 2022. We found 2,631 studies after removing duplicates and identified 43 studies for inclusion in our systematic review. For data extraction and synthesis of results, we grouped studies by (i) modality (production vs. comprehension), (ii) function (syntactic structure vs. information structure), and (iii) experimental task. For production studies, outcome measures were defined as the productive use of the different prosodic cues (lengthening, pause, f0, amplitude). For comprehension studies, performance measures (accuracy and reaction times) were defined as outcome measures. In accordance with the PRISMA 2020 statement (Page et al., 2021), we conducted a quality check to assess study risk of bias. Our review was pre-registered with PROSPERO (CRD42019120308). OUTCOMES & RESULTS Of the 43 studies reviewed, 30 involved RHDP (n = 309), assessing production of prosody in 15 studies and comprehension in 16 studies (one study investigated both). LHDP (n = 438) were included in 35 studies, of which 15 studied production and 21 evaluated comprehension of prosody (one study investigated both). Despite the heterogeneity of results in the studies reviewed, our synthesis suggests that both LHDP and RHDP show limitations, but no complete impairment, in their production and/or comprehension of linguistic prosody. Prosodic limitations are evident in different areas of processing linguistic prosody, such as syntactic disambiguation or the distinction between sentence types. There is a tendency towards more severe limitations in LHDP as compared to RHDP. CONCLUSIONS We only included published studies in our review and did not perform an assessment of risk of reporting bias or systematic certainty assessments of the outcomes. Despite these limitations, we conclude that both groups show deficits in production and comprehension of linguistic prosody, but neither LHDP nor RHDP are completely impaired in their prosodic processing. This suggests that prosody is a relevant communicative resource for LHDP and RHDP worth addressing in speech-language therapy.
Affiliation(s)
- Carola de Beer
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany; Faculty of Linguistics and Literary Studies & Medical School OWL, University of Bielefeld, Germany
- Isabell Wartenburger
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
- Clara Huttenlauch
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
- Sandra Hanne
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
4
Abstract
OBJECTIVE To identify which aspects of prosody are negatively affected subsequent to right hemisphere brain damage (RHD) and to evaluate the methodological quality of the constituent studies. METHOD Twenty-one electronic databases were searched by keyword to identify articles from 1970 to February 2020. Eligibility criteria included a focus on adults with acquired RHD, prosody as the primary research topic, and publication in a peer-reviewed journal. A quality appraisal was conducted using a rubric adapted from Downs and Black (1998). RESULTS Of the 113 articles appraised as eligible and appropriate for inclusion, 71 were selected to undergo data extraction for both meta-analyses of population effect size estimates and qualitative synthesis. Across all domains of prosody, the effect estimate was g = 2.51 [95% CI (1.94, 3.09), t = 8.66, p < 0.0001], based on 129 contrasts between RHD participants and non-brain-damaged healthy controls (NBD), indicating a significant overall effect under a random-effects model. This effect size was driven by findings in emotional prosody, g = 2.48 [95% CI (1.76, 3.20), t = 6.88, p < 0.0001]. Overall, studies of higher quality (rpb = 0.18, p < 0.001) and higher sample size/contrast ratio (rpb = 0.25, p < 0.001) were more likely to report significant differences between RHD and NBD participants. CONCLUSIONS The results confirm consistent evidence for emotional prosody deficits in the RHD population. Inconsistent evidence was observed across linguistic prosody domains, and pervasive methodological issues were identified across studies, regardless of their prosody focus. These findings highlight the need for more rigorous and sufficiently high-powered designs to examine prosody subsequent to RHD, particularly within the linguistic prosody domain.
5
Yang SY. Acoustic cues associated with Korean sarcastic utterances produced by right- and left-hemisphere damaged individuals. J Commun Disord 2022;98:106229. PMID: 35688010. DOI: 10.1016/j.jcomdis.2022.106229.
Abstract
PURPOSE Sarcasm, prevalent in everyday conversation, refers to the use of words that express negative attitudes toward persons or events. Acoustic cues associated with sarcasm have been reported to vary across studies, and the relative importance of particular acoustic parameters for signaling sarcasm has not been fully determined. The hemispheric specialization for the production of these acoustic cues has been a matter of controversy. This study investigated the possible prosodic cues associated with Korean sarcastic utterances and the differential effect of left hemisphere damage (LHD) or right hemisphere damage (RHD) on the production of acoustic features of Korean sarcastic utterances. METHOD Twenty-one native speakers of Korean (7 individuals with LHD, 7 individuals with RHD, and 7 healthy controls (HC)) produced six Korean utterances in two different modes: sarcastic and literal. Utterances validated by sarcasm ratings from native listeners were analyzed acoustically using durational and fundamental frequency (F0) measures. RESULTS Listeners' ratings and acoustic analyses indicated that sarcastic utterances in Korean were produced with a combination of multiple acoustic cues. Discriminant function analyses and multiple linear regression showed that LHD and RHD differentially affected the production of acoustic cues associated with sarcasm. CONCLUSION LHD negatively affects the production of durational cues, while RHD negatively affects the production of F0 cues.
Affiliation(s)
- Seung-Yun Yang
- Department of Communication Arts, Sciences, and Disorders, Brooklyn College / CUNY, 2900 Bedford Avenue, Brooklyn, NY 11210, United States; Brain and Behavior Laboratory, Geriatrics Division, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Road, Building 35, Orangeburg, NY 10962, United States
6
van de Ven V, Waldorp L, Christoffels I. Hippocampus plays a role in speech feedback processing. Neuroimage 2020;223:117319. PMID: 32882376. DOI: 10.1016/j.neuroimage.2020.117319.
Abstract
There is increasing evidence that the hippocampus is involved in language production and verbal communication, although little is known about its possible role. According to one view, the hippocampus contributes semantic memory to spoken language. Alternatively, the hippocampus is involved in processing the (mis)match between the expected sensory consequences of speaking and the perceived speech feedback. In the current study, we re-analysed functional magnetic resonance imaging (fMRI) data from two overt picture-naming studies to test whether the hippocampus is involved in speech production and, if so, whether the results can distinguish between a "pure memory" versus a "prediction" account of hippocampal involvement. In both studies, participants overtly named pictures during scanning while hearing their own speech feedback either unimpeded or impaired by a superimposed noise mask. Results showed decreased hippocampal activity when speech feedback was impaired, compared to when feedback was unimpeded. Further, we found increased functional coupling between auditory cortex and hippocampus during unimpeded speech feedback, compared to impaired feedback. Finally, we found significant functional coupling between a hippocampal/supplementary motor area (SMA) interaction term and auditory cortex, anterior cingulate cortex, and cerebellum during overt picture naming, but not during listening to one's own pre-recorded voice. These findings indicate that the hippocampus plays a role in speech production that is in accordance with a "prediction" view of hippocampal functioning.
Affiliation(s)
- Vincent van de Ven
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, the Netherlands
- Ingrid Christoffels
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, the Netherlands
7
Baqué L. How do persons with apraxia of speech deal with morphological stress in Spanish? A preliminary study. Clin Linguist Phon 2019;34:131-168. PMID: 31146601. DOI: 10.1080/02699206.2019.1622155.
Abstract
Equal stress across adjacent syllables and extended syllable durations are amongst the most salient features of acquired Apraxia of Speech (AOS). Most studies conclude that there is a deficit in durational cue processing, whereas the other acoustic stress correlates remain relatively unimpaired. Spanish is a free-stress language in which stress patterns are contrastive, especially in verbal forms (e.g. lavo /'labo/ '[I] wash' vs lavó /la'bo/ '[He/she] washed'). The aim of this preliminary study is to determine whether persons with AOS are able to make the intended stress pattern identifiable and, if so, to determine which acoustic cues they use to avoid the 'equal stress' phenomenon. The results show that, for each parameter considered (duration, intensity, fundamental frequency), apraxic participants' productions differed from those of controls to varying degrees depending on the task. However, 91.7% of the apraxic participants' realisations were perceived as corresponding to the intended tense and person. These results are interpreted as deriving from a motoric deficit affecting morphological stress processing by subjects with AOS combined with an idiosyncratic compensatory use of the stress cues in order to avoid 'equal stress'.
Affiliation(s)
- Lorraine Baqué
- Department Filologia Francesa i Romànica, Universitat Autònoma de Barcelona, Bellaterra (Cerdanyola del Vallès, Barcelona), Spain
8
Multi-task prioritization during the performance of a postural–manual and communication task. Exp Brain Res 2019;237:927-938. DOI: 10.1007/s00221-019-05473-7.
9
Blake ML. Right-Hemisphere Pragmatic Disorders. Perspectives in Pragmatics, Philosophy & Psychology 2017. DOI: 10.1007/978-3-319-47489-2_10.
10
Sowman PF, Ryan M, Johnson BW, Savage G, Crain S, Harrison E, Martin E, Burianová H. Grey matter volume differences in the left caudate nucleus of people who stutter. Brain Lang 2017;164:9-15. PMID: 27693846. DOI: 10.1016/j.bandl.2016.08.009.
Abstract
The cause of stuttering has many theoretical explanations. A number of research groups have suggested changes in the volume and/or function of the striatum as a causal agent. Two recent studies in children and one in adults who stutter (AWS) report differences in striatal volume compared with that seen in controls; however, the laterality and nature of this anatomical volume difference is not consistent across studies. The current study investigated whether a reduction in striatal grey matter volume, comparable to that seen in children who stutter (CWS), would be found in AWS. Such a finding would support claims that an anatomical striatal anomaly plays a causal role in stuttering. We used voxel-based morphometry to examine the structure of the striatum in a group of AWS and compared it to that of a group of matched adult control subjects. Results showed a statistically significant group difference for the left caudate nucleus, with smaller mean volume in the group of AWS. The caudate nucleus, one of three main structures within the striatum, is thought to be critical for the planning and modulation of movement sequencing. The difference in striatal volume found here aligns with theoretical accounts of stuttering which suggest it is a motor control disorder arising from deficient articulatory movement selection and sequencing. Whilst the current study provides further evidence of a striatal volume difference in stuttering at the group level, the substantial overlap between AWS and controls suggests this difference is unlikely to be diagnostic of stuttering.
Affiliation(s)
- Paul F Sowman
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; Australian Research Council Centre of Excellence in Cognition and Its Disorders, Australia; Perception and Action Research Centre, Faculty of Human Sciences, Macquarie University, New South Wales 2109, Australia
- Margaret Ryan
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; Australian Research Council Centre of Excellence in Cognition and Its Disorders, Australia
- Blake W Johnson
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; Australian Research Council Centre of Excellence in Cognition and Its Disorders, Australia
- Greg Savage
- Australian Research Council Centre of Excellence in Cognition and Its Disorders, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia
- Stephen Crain
- Australian Research Council Centre of Excellence in Cognition and Its Disorders, Australia; Department of Linguistics, Macquarie University, New South Wales 2109, Australia
- Elisabeth Harrison
- Department of Linguistics, Macquarie University, New South Wales 2109, Australia
- Erin Martin
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia
- Hana Burianová
- Centre for Advanced Imaging, The University of Queensland, Queensland 4072, Australia
11
Wright AE, Davis C, Gomez Y, Posner J, Rorden C, Hillis AE, Tippett DC. Acute Ischemic Lesions Associated with Impairments in Expression and Recognition of Affective Prosody. Perspect ASHA Spec Interest Groups 2016. PMID: 28626799. DOI: 10.1044/persp1.sig2.82.
Abstract
PURPOSE We aimed to: (a) review existing data on the neural basis of affective prosody; (b) test the hypothesis that there are double dissociations in impairments of expression and recognition of affective prosody; and (c) identify areas of infarct associated with impaired expression and/or recognition of affective prosody after acute right hemisphere (RH) ischemic stroke. METHODS Participants were tested on recognition of emotional prosody in content-neutral sentences. Expression was evaluated by measuring variability in fundamental frequency. Voxel-based symptom mapping was used to identify areas associated with severity of expressive deficits. RESULTS We found that 9/23 patients had expressive prosody impairments; 5/9 of these patients also had impaired recognition of affective prosody; 2/9 had selective deficits in expressive prosody; recognition was not tested in 2/9. Another 6/23 patients had selective impairment in recognition of affective prosody. Severity of expressive deficits was associated with lesions in the right temporal pole; patients with temporal pole lesions had deficits in both expression and recognition. CONCLUSIONS Expression and recognition of prosody can be selectively impaired. Damage to the right anterior temporal pole is associated with impairment of both, indicating a role for this structure in a mechanism shared by expression and recognition of affective prosody.
Affiliation(s)
- Amy E Wright
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD
- Cameron Davis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD
- Yessenia Gomez
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD
- Joseph Posner
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD
- Christopher Rorden
- Center for Aphasia Research and Rehabilitation, University of South Carolina, Columbia, SC
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD
- Donna C Tippett
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
12
Yang SY, Van Lancker Sidtis D. Production of Korean Idiomatic Utterances Following Left- and Right-Hemisphere Damage: Acoustic Studies. J Speech Lang Hear Res 2016;59:267-280. PMID: 26556625. DOI: 10.1044/2015_jslhr-l-15-0109.
Abstract
PURPOSE This study investigates the effects of left- and right-hemisphere damage (LHD and RHD) on the production of idiomatic or literal expressions, utilizing acoustic analyses. METHOD Twenty-one native speakers of Korean, with LHD, with RHD, or in a healthy control (HC) group, produced 6 ditropically ambiguous (idiomatic or literal) sentences in 2 different speech tasks: elicitation and repetition. Utterances were analyzed using durational and fundamental-frequency (F0) measures. Listeners' goodness ratings (how well each utterance represented its category: idiomatic or literal) were correlated with acoustic measures. RESULTS During the elicitation tasks, the LHD group differed significantly from the HC group in durational measures. Significant differences between the RHD and HC groups were seen in F0 measures. However, for the repetition tasks, the LHD and RHD groups produced utterances comparable to the HC group's performance. Using regression analysis, selected F0 cues were found to be significant predictors of listeners' goodness ratings. CONCLUSIONS Under elicitation, speakers in the LHD group were deficient in producing durational cues, whereas RHD negatively affected the production of F0 cues. Performance differed between elicitation and repetition, indicating a task effect. Listeners' goodness ratings were highly correlated with the production of certain acoustic cues. Both the acoustic and functional hypotheses of hemispheric specialization were supported for idiom production.
13
Joukar F, Mansour-Ghanaei F, Naghipour MR, Asgharnezhad M. Immune Responses to Single-Dose Versus Double-Dose Hepatitis B Vaccines in Healthcare Workers not Responding to the Primary Vaccine Series: A Randomized Clinical Trial. Hepat Mon 2016;16:e32799. PMID: 27148385. PMCID: PMC4852093. DOI: 10.5812/hepatmon.32799.
Abstract
BACKGROUND Recommendations to immunize healthcare workers (HCWs) against hepatitis B are well known. However, a proportion of individuals do not respond to the primary standard three-dose HB vaccination schedule. OBJECTIVES The current study aimed to evaluate whether a double-dose HB booster vaccine could induce better protective anti-HB titers than a single-dose booster in non-protected HCWs. MATERIALS AND METHODS This was a randomized clinical trial. A total of 91 HCWs not responding to the primary vaccine series in 2014 were enrolled. The participants were randomized into two groups that received a double dose of the HB vaccine containing 40 µg of antigen or a single dose of the HB vaccine containing 20 µg of antigen in three doses (at zero, one and six months after vaccination). Blood samples were collected before vaccinations and 28 days after the third dose to assess the seroconversion rate, according to the anti-HB antibody titer threshold of > 10 mIU/mL. RESULTS The seroconversion rates were 93.2% and 87.2% after the first booster doses of the double-dose and single-dose HB vaccines, respectively (P = 0.64). In the double-dose HB vaccine group, the seroconversion rate was 97.8% compared with 89.6% in the single-dose group following the second vaccine dose (P = 0.83). All of the participants in both groups were seroprotected after the third HB vaccine dose. CONCLUSIONS Both the single- and double-dose HB vaccines were adequately immunogenic, and the double-dose HB vaccine was not significantly more immunogenic than the single-dose vaccine in terms of the seroconversion rates of HCWs who had not responded to the primary vaccine series.
Affiliation(s)
- Farahnaz Joukar
- Gastrointestinal and Liver Diseases Research Center (GLDRC), Guilan University of Medical Sciences, Rasht, IR Iran
- Fariborz Mansour-Ghanaei
- Gastrointestinal and Liver Diseases Research Center (GLDRC), Guilan University of Medical Sciences, Rasht, IR Iran
- Corresponding author: Fariborz Mansour-Ghanaei, Gastrointestinal and Liver Diseases Research Center (GLDRC), Guilan University of Medical Sciences, Rasht, IR Iran
- Mohammad-Reza Naghipour
- Gastrointestinal and Liver Diseases Research Center (GLDRC), Guilan University of Medical Sciences, Rasht, IR Iran
- Mehrnaz Asgharnezhad
- Gastrointestinal and Liver Diseases Research Center (GLDRC), Guilan University of Medical Sciences, Rasht, IR Iran
14
|
Guranski K, Podemski R. Emotional prosody expression in acoustic analysis in patients with right hemisphere ischemic stroke. Neurol Neurochir Pol 2015; 49:113-20. [DOI: 10.1016/j.pjnns.2015.03.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Received: 01/25/2015] [Accepted: 03/12/2015] [Indexed: 10/23/2022]
15
Dietrich S, Hertrich I, Ackermann H. Training of ultra-fast speech comprehension induces functional reorganization of the central-visual system in late-blind humans. Front Hum Neurosci 2013; 7:701. [PMID: 24167485 PMCID: PMC3805979 DOI: 10.3389/fnhum.2013.00701] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Received: 07/30/2013] [Accepted: 10/03/2013] [Indexed: 11/13/2022]
Abstract
Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second (s), exceeding by far the maximum performance level of untrained listeners (ca. 8 syl/s). Previous findings indicate that the central-visual system contributes to the processing of accelerated speech in blind subjects. As an extension, the present training study addresses the question of whether acquisition of ultra-fast (18 syl/s) speech perception skills induces de novo central-visual hemodynamic activation in late-blind participants. Furthermore, we asked to what extent subjects with normal or residual vision can improve understanding of accelerated verbal utterances by means of specific training measures. To these ends, functional magnetic resonance imaging (fMRI) was performed while subjects were listening to forward and reversed sentence utterances of moderately fast and ultra-fast syllable rates (8 or 18 syl/s) prior to and after a training period of ca. 6 months. Four of six participants showed, independently of residual visual functions, considerable enhancement of ultra-fast speech perception (an increase of about 70 percentage points in correctly repeated words), whereas behavioral performance did not change in the two remaining participants. Only subjects with very low visual acuity displayed training-induced hemodynamic activation of the central-visual system. By contrast, participants with moderately impaired or even normal visual acuity showed, instead, increased right-hemispheric frontal or bilateral anterior temporal lobe responses after training. All subjects with significant training effects displayed a concomitant increase of hemodynamic activation of the left-hemispheric supplementary motor area (SMA). In spite of similar behavioral performance, trained "experts" appear to use distinct strategies of ultra-fast speech processing depending on whether the occipital cortex is still deployed for visual processing.
Affiliation(s)
- Susanne Dietrich, Department of General Neurology, Center for Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany

16
Dietrich S, Hertrich I, Ackermann H. Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar - a functional magnetic resonance imaging (fMRI) study. BMC Neurosci 2013; 14:74. [PMID: 23879896 PMCID: PMC3847124 DOI: 10.1186/1471-2202-14-74] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Received: 04/11/2012] [Accepted: 07/17/2013] [Indexed: 11/30/2022]
Abstract
Background Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate. Results Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv). Conclusions Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory.
Affiliation(s)
- Susanne Dietrich, Center for Neurology/Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany

17
Rhys CS, Ulbrich C, Ordin M. Adaptation to aphasia: grammar, prosody and interaction. Clin Linguist Phon 2013; 27:46-71. [PMID: 23237417 DOI: 10.3109/02699206.2012.736010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Indexed: 06/01/2023]
Abstract
This paper investigates recurrent use of the phrase very good by a speaker with non-fluent agrammatic aphasia. Informal observation of the speaker's interaction reveals that she appears to be an effective conversational partner despite very severe word retrieval difficulties that result in extensive reliance on variants of the phrase very good. The question that this paper addresses using an essentially conversation analytic framework is: What is the speaker achieving through these variants of very good and what are the linguistic and interactional resources that she draws on to achieve these communicative effects? Tokens of very good in the corpus were first analyzed in a bottom-up fashion, attending to sequential position, structure and participant orientation. This revealed distinct uses that were subsequently subjected to detailed acoustic analysis in order to investigate specific prosodic characteristics within and across the interactional variants. We identified specific clusters of prosodic cues that were exploited by the speaker to differentiate interactional uses of very good. The analysis thus shows how, in the adaptation to aphasia, the speaker exploits the rich interface between prosody, grammar and interaction both to manage the interactional demands of conversation and to communicate propositional content.
Affiliation(s)
- Catrin S Rhys, School of Communication and Institute for Research in Social Science, University of Ulster, Newtownabbey BT37 0QB, UK

18
Huber JE, Darling M, Francis EJ, Zhang D. Impact of typical aging and Parkinson's disease on the relationship among breath pausing, syntax, and punctuation. Am J Speech Lang Pathol 2012; 21:368-79. [PMID: 22846880 PMCID: PMC3804060 DOI: 10.1044/1058-0360(2012/11-0059)] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Indexed: 05/22/2023]
Abstract
PURPOSE The present study examines the impact of typical aging and Parkinson's disease (PD) on the relationship among breath pausing, syntax, and punctuation. METHOD Thirty young adults, 25 typically aging older adults, and 15 individuals with PD participated. Fifteen participants were age- and sex-matched to the individuals with PD. Participants read a passage aloud 2 times. Utterance length, location of breath pauses relative to punctuation and syntax, and number of disfluencies and mazes were measured. RESULTS Older adults produced shorter utterances, a smaller percentage of breaths at major boundaries, and a greater percentage of breaths at minor boundaries than did young adults, but there was no significant difference between older adults and individuals with PD on these measures. Individuals with PD took a greater percentage of breaths at locations unrelated to a syntactic boundary than did control participants. Individuals with PD produced more mazes than did control participants. Breaths were significantly correlated with punctuation for all groups. CONCLUSIONS Changes in breath-pausing patterns in older adults are likely due to changes in respiratory physiology. However, in individuals with PD, such changes appear to result from a combination of changes to respiratory physiology and cognition.
19
Schirmer A, Fox PM, Grandjean D. On the spatial organization of sound processing in the human temporal lobe: a meta-analysis. Neuroimage 2012; 63:137-47. [PMID: 22732561 DOI: 10.1016/j.neuroimage.2012.06.025] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.2] [Received: 01/08/2012] [Revised: 06/15/2012] [Accepted: 06/18/2012] [Indexed: 12/19/2022]
Abstract
In analogy to visual object recognition, proposals have been made that auditory object recognition is organized by sound class (e.g., vocal/non-vocal, linguistic/non-linguistic) and linked to several pathways or processing streams with specific functions. To test these proposals, we analyzed temporal lobe activations from 297 neuroimaging studies on vocal, musical and environmental sound processing. We found that all sound classes elicited activations anteriorly, posteriorly and ventrally of primary auditory cortex. However, rather than being sound class (e.g., voice) or attribute (e.g., complexity) specific, these processing streams correlated with sound knowledge or experience. Specifically, an anterior stream seemed to support general, sound class independent sound recognition and discourse-level semantic processing. A posterior stream could be best explained as supporting the embodiment of sound associated actions and a ventral stream as supporting multimodal conceptual representations. Vocalizations and music engaged these streams evenly in the left and right hemispheres, whereas environmental sounds produced a left-lateralized pattern. Together, these results both challenge and confirm existing proposals of temporal lobe specialization. Moreover, they suggest that the temporal lobe maintains the neuroanatomical building blocks for an all-purpose sound comprehension system that, instead of being preset for a particular sound class, is shaped in interaction with an individual's sonic environment.
Affiliation(s)
- Annett Schirmer, Department of Psychology, National University of Singapore, Singapore

20
Perrone-Bertolotti M, Dohen M, Lœvenbruck H, Sato M, Pichat C, Baciu M. Neural correlates of the perception of contrastive prosodic focus in French: a functional magnetic resonance imaging study. Hum Brain Mapp 2012; 34:2574-91. [PMID: 22488985 DOI: 10.1002/hbm.22090] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Received: 09/07/2011] [Revised: 02/18/2012] [Accepted: 03/06/2012] [Indexed: 11/11/2022]
Abstract
This functional magnetic resonance imaging (fMRI) study aimed at examining the cerebral regions involved in the auditory perception of prosodic focus using a natural focus detection task. Two conditions testing the processing of simple utterances in French were explored, narrow-focused versus broad-focused. Participants performed a correction detection task. The utterances in both conditions had exactly the same segmental, lexical, and syntactic contents, and only differed in their prosodic realization. The comparison between the two conditions therefore allowed us to examine processes strictly associated with prosodic focus processing. To assess the specific effect of pitch on hemispheric specialization, a parametric analysis was conducted using a parameter reflecting pitch variations specifically related to focus. The comparison between the two conditions reveals that brain regions recruited during the detection of contrastive prosodic focus can be described as a right-hemisphere dominant dual network consisting of (a) ventral regions which include the right posterosuperior temporal and bilateral middle temporal gyri and (b) dorsal regions including the bilateral inferior frontal, inferior parietal and left superior parietal gyri. Our results argue for a dual stream model of focus perception compatible with the asymmetric sampling in time hypothesis. They suggest that the detection of prosodic focus involves an interplay between the right and left hemispheres, in which the computation of slowly changing prosodic cues in the right hemisphere dynamically feeds an internal model concurrently used by the left hemisphere, which carries out computations over shorter temporal windows.
Affiliation(s)
- Marcela Perrone-Bertolotti, Laboratoire de Psychologie et NeuroCognition, UMR CNRS 5105, Université Pierre Mendès-France, Grenoble, France

21
Abstract
Emotional inferences from speech require the integration of verbal and vocal emotional expressions. We asked whether this integration is comparable when listeners are exposed to their native language and when they listen to a language learned later in life. To this end, we presented native and non-native listeners with positive, neutral and negative words that were spoken with a happy, neutral or sad tone of voice. In two separate tasks, participants judged word valence and ignored tone of voice or judged emotional tone of voice and ignored word valence. While native listeners outperformed non-native listeners in the word valence task, performance was comparable in the voice task. More importantly, both native and non-native listeners responded faster and more accurately when verbal and vocal emotional expressions were congruent as compared to when they were incongruent. Given that the size of the latter effect did not differ as a function of language proficiency, one can conclude that the integration of verbal and vocal emotional expressions occurs as readily in one's second language as it does in one's native language.
Affiliation(s)
- Chua Shi Min, Department of Psychology, National University of Singapore, Singapore

22
Bélanger N, Baum SR, Titone D. Use of prosodic cues in the production of idiomatic and literal sentences by individuals with right- and left-hemisphere damage. Brain Lang 2009; 110:38-42. [PMID: 19339042 DOI: 10.1016/j.bandl.2009.02.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Received: 04/28/2008] [Revised: 01/14/2009] [Accepted: 02/17/2009] [Indexed: 05/27/2023]
Abstract
The neural bases of prosody during the production of literal and idiomatic interpretations of literally plausible idioms were investigated. Left- and right-hemisphere-damaged participants and normal controls produced literal and idiomatic versions of idioms (e.g., He hit the books). All groups modulated duration to distinguish the interpretations. LHD patients, however, showed typical speech timing difficulties. RHD patients did not differ from the normal controls. The results partially support a differential lateralization of prosodic cues in the two cerebral hemispheres [Van Lancker, D., & Sidtis, J. J. (1992). The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. Journal of Speech and Hearing Research, 35, 963-970]. Furthermore, extended final word lengthening appears to mark idiomaticity.
Affiliation(s)
- Nathalie Bélanger, School of Communication Sciences and Disorders, McGill University, 1266 Pine Avenue West, Montreal, Quebec H3G 1A8, Canada

23
Walker JP, Joseph L, Goodman J. The production of linguistic prosody in subjects with aphasia. Clin Linguist Phon 2009; 23:529-549. [PMID: 19585312 DOI: 10.1080/02699200902946944] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Indexed: 05/28/2023]
Abstract
This study investigated the production of linguistic prosody in subjects with left hemisphere damage (LHD). Three experiments involving the production of lexical stress in nouns vs verbs, compound nouns vs tag constructions, and echo questions vs statements were conducted. Acoustic measurements (fundamental frequency (F0), duration and amplitude) of the prosodic structures were examined, and naive listeners were asked to identify the meanings of the utterances. The results of the acoustic measurements indicated that LHD subjects did not produce prosodic structures comparable to those of control subjects to convey different linguistic meanings in all three experiments. Naive listeners had greater difficulty identifying the intended meanings of the utterances produced by the LHD subjects than those produced by control subjects in all three experiments. The results suggest that the left hemisphere plays a role in the production of linguistic prosody.
Affiliation(s)
- Judy P Walker, Department of Communication Sciences and Disorders, University of Maine, Orono, ME, USA

24
Gregory SW, Kalkhoff W, Harkness SK, Paull JL. Targeted high and low speech frequency bands to right and left ears respectively improve task performance and perceived sociability in dyadic conversations. Laterality 2009; 14:423-40. [DOI: 10.1080/13576500802598181] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Indexed: 10/21/2022]
25
Carlson K, Frazier L, Clifton C. How prosody constrains comprehension: A limited effect of prosodic packaging. Lingua 2009; 119:1066-1082. [PMID: 21461181 PMCID: PMC3066009 DOI: 10.1016/j.lingua.2008.11.003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Indexed: 05/30/2023]
Abstract
Prosody has a large impact on language processing. We contrast two views of how prosody and intonation might exert their effects. On a 'prosodic packaging' approach, prosodic boundaries structure the linguistic input into perceptual and memory units, with the consequence that material in earlier packages is less accessible for linguistic processing than material in the current package. This approach claims that such lessened accessibility holds true for the comprehension of all constructions, regardless of the particular kind of linguistic dependency that needs to be established using the earlier constituent. A 'specialized role' approach, by contrast, attributes to prosodic boundaries a role in making grouping decisions when building hierarchical structure, but attributes to pitch accents the major role in determining the accessibility of a constituent. The results of four listening studies with replacive sentences (Diane thought Patrick was entertaining, not Louise) support the predictions of the specialized role hypothesis over the prosodic packaging approach.
Affiliation(s)
- Katy Carlson, 419 Combs, Morehead State University, Morehead, KY 40371, USA
- Lyn Frazier, Department of Linguistics, University of Massachusetts, Amherst, MA 01003, USA
- Charles Clifton, Department of Psychology, University of Massachusetts, Amherst, MA 01003, USA

26
Lausberg H, Zaidel E, Cruz RF, Ptito A. Speech-independent production of communicative gestures: Evidence from patients with complete callosal disconnection. Neuropsychologia 2007; 45:3092-104. [PMID: 17651766 DOI: 10.1016/j.neuropsychologia.2007.05.010] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.6] [Received: 01/31/2006] [Revised: 05/09/2007] [Accepted: 05/28/2007] [Indexed: 10/23/2022]
Abstract
Recent neuropsychological, psycholinguistic, and evolutionary theories on language and gesture associate communicative gesture production exclusively with left hemisphere language production. An argument for this approach is the finding that right-handers with left hemisphere language dominance prefer the right hand for communicative gestures. However, several studies have reported distinct patterns of hand preferences for different gesture types, such as deictics, batons, or physiographs, and this calls for an alternative hypothesis. We investigated hand preference and gesture types in spontaneous gesticulation during three semi-standardized interviews of three right-handed patients and one left-handed patient with complete callosal disconnection, all with left hemisphere dominance for praxis. Three of them, with left hemisphere language dominance, exhibited a reliable left-hand preference for spontaneous communicative gestures despite their left hand agraphia and apraxia. The fourth patient, with presumed bihemispheric language representation, revealed a consistent right-hand preference for gestures. All four patients displayed batons, tosses, and shrugs more often with the left hand/shoulder, but exhibited a right hand preference for pantomime gestures. We conclude that the hand preference for certain gesture types cannot be predicted by hemispheric dominance for language or by handedness. We found distinct hand preferences for specific gesture types. This suggests a conceptual specificity of the left and right hand gestures. We propose that left hand gestures are related to specialized right hemisphere functions, such as prosody or emotion, and that they are generated independently of left hemisphere language production. Our findings challenge the traditional neuropsychological and psycholinguistic view on communicative gesture production.
Affiliation(s)
- Hedda Lausberg, Department of Neurology, Charité Campus Benjamin Franklin, Berlin, Germany

27
Yuan W, Szaflarski JP, Schmithorst VJ, Schapiro M, Byars AW, Strawsburg RH, Holland SK. fMRI shows atypical language lateralization in pediatric epilepsy patients. Epilepsia 2006; 47:593-600. [PMID: 16529628 PMCID: PMC1402337 DOI: 10.1111/j.1528-1167.2006.00474.x] [Citation(s) in RCA: 87] [Impact Index Per Article: 4.8] [Indexed: 11/28/2022]
Abstract
PURPOSE The goal of this study was to compare language lateralization between pediatric epilepsy patients and healthy children. METHODS Two groups of subjects were evaluated with functional magnetic resonance imaging (fMRI) by using a silent verb-generation task. The first group included 18 pediatric epilepsy patients, whereas the control group consisted of 18 age/gender/handedness-matched healthy subjects. RESULTS A significant difference in hemispheric lateralization index (LI) was found between children with epilepsy (mean LI = -0.038) and the age/gender/handedness-matched healthy control subjects (mean LI = 0.257; t = 6.490, p < 0.0001). A dramatic difference also was observed in the percentage of children with epilepsy (77.78%) who had atypical LI (right-hemispheric or bilateral, LI < 0.1) when compared with the age/gender/handedness-matched group (11.11%; χ² = 16.02, p < 0.001). A linear regression analysis showed a trend toward increasing language lateralization with age in healthy controls (R² = 0.152; p = 0.108). This association was not observed in pediatric epilepsy subjects (R² = 0.004, p = 0.80). A significant association between language LI and epilepsy duration also was found (R² = 0.234, p < 0.05). CONCLUSIONS This study shows that epilepsy during childhood is associated with neuroplasticity and reorganization of language function.
Affiliation(s)
- Weihong Yuan (corresponding author), Cincinnati Children’s Hospital Medical Center, Imaging Research Center, Cincinnati, OH, USA
- Vincent J. Schmithorst, Cincinnati Children’s Hospital Medical Center, Imaging Research Center, Cincinnati, OH, USA
- Mark Schapiro, Cincinnati Children’s Hospital Medical Center, Division of Neurology, Cincinnati, OH, USA
- Anna W. Byars, Cincinnati Children’s Hospital Medical Center, Division of Neurology, Cincinnati, OH, USA
- Scott K. Holland, Cincinnati Children’s Hospital Medical Center, Imaging Research Center, Cincinnati, OH, USA

28
Shah AP, Baum SR, Dwivedi VD. Neural substrates of linguistic prosody: evidence from syntactic disambiguation in the productions of brain-damaged patients. Brain Lang 2006; 96:78-89. [PMID: 15922444 DOI: 10.1016/j.bandl.2005.04.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Received: 09/08/2004] [Revised: 02/22/2005] [Accepted: 04/15/2005] [Indexed: 05/02/2023]
Abstract
The present investigation focussed on the neural substrates underlying linguistic distinctions that are signalled by prosodic cues. A production experiment was conducted to examine the ability of left- (LHD) and right- (RHD) hemisphere-damaged patients and normal controls to use temporal and fundamental frequency cues to disambiguate sentences which include one or more Intonational Phrase level prosodic boundaries. Acoustic analyses of subjects' productions of three sentence types (parentheticals, appositives, and tags) showed that LHD speakers, compared to RHD speakers and normal controls, exhibited impairments in the control of temporal parameters signalling phrase boundaries, including inconsistent patterns of pre-boundary lengthening and longer-than-normal pause durations in non-boundary positions. Somewhat surprisingly, a perception test presented to a group of normal native listeners showed that listeners experienced the greatest difficulty in identifying the presence or absence of boundaries in the productions of the RHD speakers. The findings support a cue lateralization hypothesis in which prosodic domain plays an important role.
Affiliation(s)
- Amee P Shah, School of Communication Sciences and Disorders, McGill University, Montreal, Canada

29
Schirmer A, Lui M, Maess B, Escoffier N, Chan M, Penney TB. Task and sex modulate the brain response to emotional incongruity in Asian listeners. Emotion 2006; 6:406-17. [PMID: 16938082 DOI: 10.1037/1528-3542.6.3.406] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.6] [Indexed: 11/08/2022]
Abstract
In order to recognize banter or sarcasm in social interactions, listeners must integrate verbal and vocal emotional expressions. Here, we investigated event-related potential correlates of this integration in Asian listeners. We presented emotional words spoken with congruous or incongruous emotional prosody. When listeners classified word meaning as positive or negative and ignored prosody, incongruous trials elicited a larger late positivity than congruous trials in women but not in men. Sex differences were absent when listeners evaluated the congruence between word meaning and emotional prosody. The similarity of these results to those obtained in Western listeners suggests that sex differences in emotional speech processing depend on attentional focus and may reflect culturally independent mechanisms.
Affiliation(s)
- Annett Schirmer, Department of Psychology, University of Georgia, Athens, GA, USA

30
Wildgruber D, Ackermann H, Kreifelts B, Ethofer T. Cerebral processing of linguistic and emotional prosody: fMRI studies. Prog Brain Res 2006; 156:249-68. [PMID: 17015084 DOI: 10.1016/s0079-6123(06)56013-3] [Citation(s) in RCA: 196] [Impact Index Per Article: 10.9] [Indexed: 12/12/2022]
Abstract
During acoustic communication in humans, information about a speaker's emotional state is predominantly conveyed by modulation of the tone of voice (emotional or affective prosody). Based on lesion data, a right hemisphere superiority for cerebral processing of emotional prosody has been assumed. However, the available clinical studies do not yet provide a coherent picture with respect to interhemispheric lateralization effects of prosody recognition and intrahemispheric localization of the respective brain regions. To further delineate the cerebral network engaged in the perception of emotional tone, a series of experiments was carried out based upon functional magnetic resonance imaging (fMRI). The findings obtained from these investigations allow for the separation of three successive processing stages during recognition of emotional prosody: (1) extraction of suprasegmental acoustic information predominantly subserved by right-sided primary and higher order acoustic regions; (2) representation of meaningful suprasegmental acoustic sequences within posterior aspects of the right superior temporal sulcus; (3) explicit evaluation of emotional prosody at the level of the bilateral inferior frontal cortex. Moreover, implicit processing of affective intonation seems to be bound to subcortical regions mediating automatic induction of specific emotional reactions such as activation of the amygdala in response to fearful stimuli. As concerns lower level processing of the underlying suprasegmental acoustic cues, linguistic and emotional prosody seem to share the same right hemisphere neural resources. Explicit judgment of linguistic aspects of speech prosody, however, appears to be linked to left-sided language areas whereas bilateral orbitofrontal cortex has been found involved in explicit evaluation of emotional prosody. 
These differences in hemispheric lateralization effects might explain why specific impairments in nonverbal emotional communication subsequent to focal brain lesions are relatively rare clinical observations compared with the more frequent aphasic disorders.
Affiliation(s)
- D Wildgruber, Department of Psychiatry, University of Tübingen, Osianderstr. 24, 72076 Tübingen, Germany

31
Abstract
Time is a fundamental dimension of behavior and as such underlies the perception and production of speech. This paper reviews patient and neuroimaging studies that investigated brain structures that support temporal aspects of speech. The left-frontal cortex, the basal ganglia, and the cerebellum represent structures that have been implicated repeatedly. A comparison with the structures involved in the timing of non-speech events (e.g., tones, lights, finger movements) suggests both commonalities and differences: while the basal ganglia and the cerebellum contribute to the timing of speech and non-speech events, the contribution of left-frontal cortex seems to be specific to speech or rapidly changing acoustic information. Motivated by these commonalities and differences, this paper presents assumptions about the function of basal ganglia, cerebellum, and cortex in the timing of speech.
Affiliation(s)
- Annett Schirmer
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany.
32
Astésano C, Besson M, Alter K. Brain potentials during semantic and prosodic processing in French. Cogn Brain Res 2004; 18:172-84. [PMID: 14736576 DOI: 10.1016/j.cogbrainres.2003.10.002] [Citation(s) in RCA: 84] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
The present experiment was aimed at investigating the on-line processing of semantic and prosodic information. We recorded event-related brain potentials (ERPs) to semantically and/or prosodically congruous and incongruous sentences that were presented aurally, to study the time course of semantic and prosodic processing, and to determine whether these two processes are independent or interactive. The prosodic mismatch was produced by cross-splicing the beginning of statements with the end of questions, and vice versa. Subjects had to decide whether the sentences were semantically or prosodically congruous in two different attention conditions. Results showed that a right centro-parietal negative component (N400) was associated with semantic mismatch, and a left temporo-parietal positive component (P800) was associated with prosodic mismatch. Thus, these two electrophysiological markers of semantic and prosodic processing differed in their polarity, latency, and scalp distribution. These differences may indicate that the two processes stem from different underlying generators. However, the finding that the P800 elicited by prosodic mismatch was larger when the sentences were semantically incongruous than congruous suggests that the two processes may be interactive.
Affiliation(s)
- Corine Astésano
- Institut de Neurosciences Physiologiques et Cognitives, CNRS, 31, Chemin Joseph Aiguier, 13402 Marseille cedex 20, France.
33
Valaki CE, Maestu F, Simos PG, Zhang W, Fernandez A, Amo CM, Ortiz TM, Papanicolaou AC. Cortical organization for receptive language functions in Chinese, English, and Spanish: a cross-linguistic MEG study. Neuropsychologia 2004; 42:967-79. [PMID: 14998711 DOI: 10.1016/j.neuropsychologia.2003.11.019] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2003] [Revised: 10/01/2003] [Accepted: 11/11/2003] [Indexed: 11/28/2022]
Abstract
Chinese differs from Indo-European languages in both its written and spoken forms. Because Chinese is a tonal language, tones convey lexically meaningful information. The current study examines patterns of neurophysiological activity in temporal and temporoparietal brain areas as speakers of two Indo-European languages (Spanish and English) and speakers of Mandarin-Chinese were engaged in a spoken-word recognition task that is used clinically for the presurgical determination of hemispheric dominance for receptive language functions. Brain magnetic activation profiles were obtained from 92 healthy adult volunteers: 30 monolingual native speakers of Mandarin-Chinese, 20 native speakers of Spanish, and 42 native speakers of American English. Activation scans were acquired in two different whole-head MEG systems using identical testing methods. Results indicate that (a) the degree of hemispheric asymmetry in the duration of neurophysiological activity in temporal and temporoparietal regions was reduced in the Chinese group, (b) the proportion of individuals who showed bilaterally symmetric activation was significantly higher in this group, and (c) group differences in functional hemispheric asymmetry were first noted after the initial sensory processing of the word stimuli. Furthermore, group differences in the degree of hemispheric asymmetry were primarily due to a greater degree of activation in the right temporoparietal region in the Chinese group, suggesting increased participation of this region in spoken-word recognition in Mandarin-Chinese.
Affiliation(s)
- C E Valaki
- Facultad de Medicina, Centro de Magnetoencefalografia Dr. Perez Modrego, Universidad Complutense de Madrid, Pabellon No. 8, Avenida Complutense, Madrid, Spain
34
Abstract
Psycholinguistic models of sentence parsing are primarily based on reading rather than auditory processing data. Moreover, both prosodic information and its potential orthographic equivalent, i.e., punctuation, have been largely ignored until recently. The unavailability of experimental online methods is one likely reason for this neglect. Here I give an overview of six event-related brain potential (ERP) studies demonstrating that the processing of both prosodic boundaries in natural speech and commas during silent reading can immediately determine syntactic parsing. In ERPs, speech boundaries and commas reliably elicit a similar online brain response, termed the Closure Positive Shift (CPS). This finding points to a common mechanism, suggesting that commas serve as visual triggers for covert phonological phrasing. Alternative CPS accounts are tested, and the relationship between the CPS and other ERP components, including the P600/SPS, is addressed.
35
Gandour J, Dzemidzic M, Wong D, Lowe M, Tong Y, Hsieh L, Satthamnuwong N, Lurito J. Temporal integration of speech prosody is shaped by language experience: an fMRI study. Brain Lang 2003; 84:318-336. [PMID: 12662974 DOI: 10.1016/s0093-934x(02)00505-9] [Citation(s) in RCA: 83] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Differences in hemispheric functions underlying speech perception may be related to the size of the temporal integration windows over which prosodic features (e.g., pitch) extend in the speech signal. Chinese tone and intonation, both signaled by variations in pitch contours, span shorter (local) and longer (global) temporal domains, respectively. This cross-linguistic (Chinese and English) study uses functional magnetic resonance imaging to show that pitch contours associated with tones are processed in the left hemisphere by Chinese listeners only, whereas pitch contours associated with intonation are processed predominantly in the right hemisphere. These findings argue against the view that all aspects of speech prosody are lateralized to the right hemisphere, and promote the idea that varying-sized temporal integration windows reflect a neurobiological adaptation to meet the 'prosodic needs' of a particular language.
Affiliation(s)
- Jack Gandour
- Department of Audiology and Speech Sciences, Purdue University, Heavilon Hall, West Lafayette, IN 47907-1353, USA.
36
Meyer M, Alter K, Friederici AD, Lohmann G, von Cramon DY. FMRI reveals brain regions mediating slow prosodic modulations in spoken sentences. Hum Brain Mapp 2002; 17:73-88. [PMID: 12353242 PMCID: PMC6871847 DOI: 10.1002/hbm.10042] [Citation(s) in RCA: 245] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
By means of fMRI measurements, the present study identifies brain regions in left and right peri-sylvian areas that subserve grammatical or prosodic processing. Normal volunteers heard (1) normal sentences; (2) so-called syntactic sentences comprising syntactic, but no lexical-semantic, information; and (3) manipulated speech signals comprising only prosodic information, i.e., speech melody. For all conditions, significant blood oxygenation signals were recorded from the supratemporal plane bilaterally. Left hemisphere areas surrounding Heschl's gyrus responded more strongly during the two sentence conditions than to speech melody. This finding suggests that the anterior and posterior portions of the superior temporal region (STR) support lexical-semantic and syntactic aspects of sentence processing. In contrast, the right superior temporal region, especially the planum temporale, responded more strongly to speech melody. Significant brain activation in the fronto-opercular cortices was observed when participants heard pseudo sentences and was strongest during the speech melody condition. In contrast, the fronto-opercular area is not prominently involved in listening to normal sentences. Thus, the functional activation in fronto-opercular regions increases as the grammatical information available in the sentence decreases. Generally, brain responses to speech melody were stronger in right than left hemisphere sites, suggesting a particular role of right cortical areas in the processing of slow prosodic modulations.
Affiliation(s)
- Martin Meyer
- Max-Planck-Institute of Cognitive Neuroscience, Leipzig, Germany.