1. Exploring individual differences in native phonetic perception and their link to nonnative phonetic perception. J Exp Psychol Hum Percept Perform 2024; 50:370-394. PMID: 38300566. DOI: 10.1037/xhp0001191.
Abstract
Adults differ considerably in their perception of both native and nonnative phonemes. For instance, when presented with continua of native phonemes on two-alternative forced choice (2AFC) or visual analog scaling (VAS) tasks, some people show sudden changes in responses (i.e., steep identification slopes) and others show gradual changes (i.e., shallow identification slopes). Moreover, some adults are more successful than others at learning unfamiliar phonemes. The predictors of these individual differences and the relationships between them are poorly understood. It also remains unclear to what extent different tasks (2AFC vs. VAS) may reflect distinct individual differences in perception. In two experiments, we addressed these questions by examining the relationships between individual differences in performance on native and nonnative phonetic perception tasks. We found that shallow 2AFC identification slopes were not related to shallow VAS identification slopes but were related to inconsistent VAS responses. Additionally, our results suggest that consistent native perception may play a role in promoting successful nonnative perception. These findings help characterize the nature of individual differences in phonetic perception and contribute to our understanding of how to measure such differences. This work also has implications for encouraging successful acquisition of new languages in adulthood.
2. Phrase parsing in a second language as indexed by the closure positive shift: The impact of language experience and acoustic cue salience. Eur J Neurosci 2023; 58:3838-3858. PMID: 37667595. DOI: 10.1111/ejn.16134.
Abstract
Despite the importance of prosodic processing in utterance parsing, a majority of studies investigating boundary localization in a second language focus on word segmentation. The goal of the present study was to investigate the parsing of phrase boundaries in first and second languages from different prosodic typologies (stress-timed vs. syllable-timed). Fifty English-French bilingual adults who varied in native language (French or English) and second language proficiency listened to English and French utterances with different prosodic structures while event-related brain potentials were recorded. The utterances were built around target words presented either in phrase-final position (bearing phrase-final lengthening) or in penultimate position. Each participant listened to both English and French stimuli, providing data in their native language (used as reference) and their second language. Target words in phrase-final position elicited closure positive shifts across listeners in both languages, regardless of the language-specific acoustic cues associated with phrase-final lengthening (shorter phrase-final lengthening in English compared to French). Interestingly, directional effects were observed, where learning to parse English as a second language in a native-like manner seemed to require a higher proficiency level than learning to parse French as a second language. This pattern of results supports the idea that L2 listeners need to learn to recognize L2-specific phrase-final lengthening regardless of the apparent similarity across languages and that some language combinations might present greater challenges than others.
3. Age of Acquisition Modulates Alpha Power During Bilingual Speech Comprehension in Noise. Front Psychol 2022; 13:865857. PMID: 35548507. PMCID: PMC9083356. DOI: 10.3389/fpsyg.2022.865857.
Abstract
Research on bilingualism has grown exponentially in recent years. However, the comprehension of speech in noise, given the ubiquity of both bilingualism and noisy environments, has seen only limited focus. Electroencephalogram (EEG) studies in monolinguals show an increase in alpha power when listening to speech in noise, which, in the theoretical context where alpha power indexes attentional control, is thought to reflect an increase in attentional demands. In the current study, English/French bilinguals with similar second language (L2) proficiency and who varied in terms of age of L2 acquisition (AoA) from 0 (simultaneous bilinguals) to 15 years completed a speech perception in noise task. Participants were required to identify the final word of high and low semantically constrained auditory sentences such as "Stir your coffee with a spoon" vs. "Bob could have known about the spoon" in both of their languages and in both noise (multi-talker babble) and quiet during electrophysiological recording. We examined the effects of language, AoA, semantic constraint, and listening condition on participants' induced alpha power during speech comprehension. Our results show an increase in alpha power when participants were listening in their L2, suggesting that listening in an L2 requires additional attentional control compared to the first language, particularly early in processing during word identification. Additionally, despite similar proficiency across participants, our results suggest that under difficult processing demands, AoA modulates the amount of attention required to process the second language.
4. Bilingual language experience and the neural underpinnings of working memory. Neuropsychologia 2021; 163:108081. PMID: 34728242. DOI: 10.1016/j.neuropsychologia.2021.108081.
Abstract
A longstanding question in cognitive neuroscience and in the bilingualism literature is how early language experience influences brain development and cognitive outcomes, and whether these effects are global or specific to language-related processes. The current investigation examined the effect of the timing of language learning on the performance and neural correlates of phonological and non-verbal working memory, subcomponents of executive function. Three groups of bilinguals, who varied in terms of the timing of second language learning (i.e., simultaneous bilinguals learned their two languages from birth; early and late bilinguals who learned their second language before or after 5 years of age, respectively), performed phonological and non-verbal working memory tasks in the magnetic resonance imaging scanner. Results showed that there were no group differences in performance on either of the tasks, or in the neural correlates of performance of the non-verbal task. However, critically, we showed that despite similar behavioural performance, the groups differed in the patterns of neural recruitment during performance of the phonological working memory task. The pattern of group differences was non-linear, demonstrating similar neural recruitment for simultaneous and late bilinguals that differed from early bilinguals. Findings from the current study suggest a dynamic mapping between the brain and cognition, contributing to our current understanding of the effect of the timing of language learning on cognitive processes and demonstrating a specific effect on language-related executive function.
5. Spoken Word Segmentation in First and Second Language: When ERP and Behavioral Measures Diverge. Front Psychol 2021; 12:705668. PMID: 34603133. PMCID: PMC8485064. DOI: 10.3389/fpsyg.2021.705668.
Abstract
Previous studies of word segmentation in a second language have yielded equivocal results. This is not surprising given the differences in the bilingual experience and proficiency of the participants and the varied experimental designs that have been used. The present study tried to account for a number of relevant variables to determine if bilingual listeners are able to use native-like word segmentation strategies. Here, 61 French-English bilingual adults who varied in L1 (French or English) and language dominance took part in an audiovisual integration task while event-related brain potentials (ERPs) were recorded. Participants listened to sentences built around ambiguous syllable strings (which could be disambiguated based on different word segmentation patterns), during which an illustration was presented on screen. Participants were asked to determine if the illustration was related to the heard utterance or not. Each participant listened to both English and French utterances, providing segmentation patterns that included both their native language (used as reference) and their L2. Interestingly, different patterns of results were observed in the event-related potentials (online) and behavioral (offline) results, suggesting that L2 participants showed signs of being able to adapt their segmentation strategies to the specifics of the L2 (online ERP results), but that the extent of the adaptation varied as a function of listeners' language experience (offline behavioral results).
6. Visual feedback of the tongue influences speech adaptation to a physical modification of the oral cavity. J Acoust Soc Am 2021; 150:718. PMID: 34470311. DOI: 10.1121/10.0005520.
Abstract
Studies examining sensorimotor adaptation of speech to changing sensory conditions have demonstrated a central role for both auditory and somatosensory feedback in speech motor learning. The potential influence of visual feedback of oral articulators, which is not typically available during speech production but may nonetheless enhance oral motor control, remains poorly understood. The present study explores the influence of ultrasound visual feedback of the tongue on adaptation of speech production (focusing on the sound /s/) to a physical perturbation of the oral articulators (prosthesis altering the shape of the hard palate). Two visual feedback groups were tested that differed in the two-dimensional plane being imaged (coronal or sagittal) during practice producing /s/ words, along with a no-visual-feedback control group. Participants in the coronal condition were found to adapt their speech production across a broader range of acoustic spectral moments and syllable contexts than the no-feedback controls. In contrast, the sagittal group showed reduced adaptation compared to no-feedback controls. The results indicate that real-time visual feedback of the tongue is spontaneously integrated during speech motor adaptation, with effects that can enhance or interfere with oral motor learning depending on compatibility of the visual articulatory information with requirements of the speaking task.
7. Near native-like stress pattern perception in English-French bilinguals as indexed by the mismatch negativity. Brain Lang 2021; 213:104892. PMID: 33333337. DOI: 10.1016/j.bandl.2020.104892.
Abstract
We examined lexical stress processing in English-French bilinguals. Auditory mismatch negativity (MMN) responses were recorded in response to English and French pseudowords, whose primary stress occurred either on a language-consistent "usual" or language-inconsistent "unusual" syllable. In most conditions, the pseudowords elicited two consecutive MMNs, and somewhat surprisingly, these MMNs were not systematically modulated by bilingual experience. This suggests that it is possible to achieve native-like pre-attentive processing of lexical stress, even in a language that one has not learned since birth.
8. Earlier age of second language learning induces more robust speech encoding in the auditory brainstem in adults, independent of amount of language exposure during early childhood. Brain Lang 2020; 207:104815. PMID: 32535187. DOI: 10.1016/j.bandl.2020.104815.
Abstract
Learning a second language (L2) at a young age is a driving factor of functional neuroplasticity in the auditory brainstem. To date, it remains unclear whether these effects remain stable until adulthood and to what degree the amount of exposure to the L2 in early childhood might affect their outcome. We compared three groups of adult English-French bilinguals in their ability to categorize English vowels in relation to their frequency following responses (FFR) evoked by the same vowels. At the time of testing, cognitive abilities as well as fluency in both languages were matched between the (1) simultaneous bilinguals (SIM, N = 18); (2) sequential bilinguals with L1-English (N = 14); and (3) sequential bilinguals with L1-French (N = 11). Our results show that the L1-English group showed sharper category boundaries in identification of the vowels compared to the L1-French group. The same pattern was reflected in the FFRs (i.e., larger FFR responses in L1-English > SIM > L1-French), although only the difference between the L1-English and L1-French groups was statistically significant; nonetheless, there was a trend towards larger FFRs in SIM compared to L1-French. Our data extend previous literature showing that exposure to a language during the first years of life induces functional neuroplasticity in the auditory brainstem that remains stable until at least young adulthood. Furthermore, the findings suggest that the amount of exposure (i.e., 100% vs. 50%) to that language does not differentially shape the robustness of perceptual abilities or the auditory brainstem encoding of the language's phonetic categories.
Statement of significance: Previous studies have indicated that early age of L2 acquisition induces functional neuroplasticity in the auditory brainstem during processing of the L2. This study compared three groups of adult bilinguals who differed in their age of L2 acquisition as well as the amount of exposure to the L2 during early childhood. We demonstrate for the first time that the neuroplastic effect in the brainstem remains stable until young adulthood and that the amount of L2 exposure does not influence behavioral or brainstem plasticity. Our study provides novel insights into low-level auditory plasticity as a function of varying bilingual experience.
9. Sensorimotor adaptation across the speech production workspace in response to a palatal perturbation. J Acoust Soc Am 2020; 147:1163. PMID: 32113266. DOI: 10.1121/10.0000672.
Abstract
Talkers have been shown to adapt the production of multiple vowel sounds simultaneously in response to altered auditory feedback. The present study extends this work by exploring the adaptation of speech production to a physical alteration of the vocal tract involving a palatal prosthesis that impacts both somatosensory and auditory feedback during the production of a range of consonants and vowels. Acoustic and kinematic measures of the tongue were used to examine the impact of the physical perturbation across the various speech sounds, and to assess learned changes following 20 min of speech practice involving the production of complex, variable sentences. As in prior studies, acoustic analyses showed perturbation and adaptation effects primarily for sounds directly involving interaction with the palate. Analyses of tongue kinematics, however, revealed systematic, robust effects of the perturbation and subsequent motor learning across the full range of speech sounds. The results indicate that speakers are able to reconfigure oral motor patterns during the production of multiple speech sounds spanning the articulatory workspace following a physical alteration of the vocal tract.
10. The Relationship Between Speech Perceptual Discrimination and Speech Production in Parkinson's Disease. J Speech Lang Hear Res 2019; 62:4256-4268. PMID: 31738857. DOI: 10.1044/2019_jslhr-s-18-0425.
Abstract
Purpose: We recently demonstrated that individuals with Parkinson's disease (PD) respond differentially to specific altered auditory feedback parameters during speech production. Participants with PD respond more robustly to pitch and less robustly to formant manipulations compared to control participants. In this study, we investigated whether differences in perceptual processing may in part underlie these compensatory differences in speech production.
Methods: Pitch and formant feedback manipulations were presented under 2 conditions: production and listening. In the production condition, 15 participants with PD and 15 age- and gender-matched healthy control participants judged whether their own speech output was manipulated in real time. During the listening task, participants judged whether paired tokens of their previously recorded speech samples were the same or different.
Results: Under listening, 1st formant manipulation discrimination was significantly reduced for the PD group compared to the control group. There was a trend toward better discrimination of pitch in the PD group, but the group difference was not significant. Under the production condition, the ability of participants with PD to identify pitch manipulations was greater than that of the controls.
Conclusion: The findings suggest perceptual processing differences associated with acoustic parameters of fundamental frequency and 1st formant perturbations in PD. These findings extend our previous results, indicating that different patterns of compensation to pitch and 1st formant shifts may reflect a combination of sensory and motor mechanisms that are differentially influenced by basal ganglia dysfunction.
11. Adaptive and selective production of syllable duration and fundamental frequency as word segmentation cues by French-English bilinguals. J Acoust Soc Am 2019; 146:4255. PMID: 31893706. DOI: 10.1121/1.5134781.
Abstract
Despite the significant impact of prosody on L2 speakers' intelligibility, few studies have examined the production of prosodic cues associated with word segmentation in non-native or non-dominant languages. Here, 62 French-English bilingual adults, who varied in L1 (French or English) and language dominance, produced sentences built around syllable strings that can be produced either as one bisyllabic word or two monosyllabic words. Each participant produced both English and French utterances, providing both native productions (used as reference) and L2 productions. Acoustic analyses of the mean fundamental frequency (F0) and duration of both syllables of the ambiguous string revealed that speakers' relative language dominance affected the speakers' prosodic cue production over and above L1. Speakers also produced different prosodic patterns in English and French, suggesting that the production of prosodic cues associated with word-segmentation is both adaptive (modified by language experience) and selective (specific to each language).
12. How Do French-English Bilinguals Pull Verb Particle Constructions Off? Factors Influencing Second Language Processing of Unfamiliar Structures at the Syntax-Semantics Interface. Front Psychol 2018; 9:1885. PMID: 30405466. PMCID: PMC6202929. DOI: 10.3389/fpsyg.2018.01885.
Abstract
An important challenge in bilingualism research is to understand the mechanisms underlying sentence processing in a second language and whether they are comparable to those underlying native processing. Here, we focus on verb-particle constructions (VPCs), which are among the most difficult elements to acquire in L2 English. The verb and the particle form a unit, which often has a non-compositional meaning (e.g., look up or chew out), making the combined structure semantically opaque. However, bilinguals with higher levels of English proficiency can develop a good knowledge of the semantic properties of VPCs (Blais and Gonnerman, 2013). A second difficulty is that in a sentence context, the particle can be shifted after the direct object of the verb (e.g., The professor looked it up). Processing is more challenging when the object is long (e.g., The professor looked the student's last name up). This shifted structure favors syntactic processing at the expense of VPC semantic processing. We sought to determine whether or not bilinguals' reading time (RT) patterns would be similar to those observed for native monolinguals (Gonnerman and Hayes, 2005) when reading VPCs in sentential contexts. French-English bilinguals were tested for English language proficiency, working memory and explicit VPC semantic knowledge. During a self-paced reading task, participants read 78 sentences with VPCs that varied according to parameters that influence native speakers' reading dynamics: verb-particle transparency, particle adjacency and length of the object noun phrase (NP; 2, 3, or 5 words). RTs in a critical region that included verbs, NPs and particles were measured. Results revealed that RTs were modulated by participants' English proficiency, with higher proficiency associated with shorter RTs. When participants' explicit semantic knowledge of VPCs and working memory were taken into account, only readers with more native-like knowledge of VPCs and high working memory showed RT patterns similar to those of monolinguals. Therefore, given the necessary lexical and computational resources, bilingual processing of novel structures at the syntax-semantics interface follows the principles influencing native processing. The findings are in keeping with theories that postulate similar representations and processing in L1 and L2 modulated by processing difficulty.
13. Bilingualism in the real world: How proficiency, emotion, and personality in a second language impact communication in clinical and legal settings. Transl Issues Psychol Sci 2017. DOI: 10.1037/tps0000103.
14. Punctuation and Implicit Prosody in Silent Reading: An ERP Study Investigating English Garden-Path Sentences. Front Psychol 2016; 7:1375. PMID: 27695428. PMCID: PMC5023661. DOI: 10.3389/fpsyg.2016.01375.
Abstract
This study presents the first two ERP reading studies of comma-induced effects of covert (implicit) prosody on syntactic parsing decisions in English. The first experiment used a balanced 2 × 2 design in which the presence/absence of commas determined plausibility (e.g., John, said Mary, was the nicest boy at the party vs. John said Mary was the nicest boy at the party). The second reading experiment replicated a previous auditory study investigating the role of overt prosodic boundaries in closure ambiguities (Pauker et al., 2011). In both experiments, commas reliably elicited CPS components and generally played a dominant role in determining parsing decisions in the face of input ambiguity. The combined set of findings provides further evidence supporting the claim that mechanisms subserving speech processing play an active role during silent reading.
15. The role of the ventro-lateral prefrontal cortex in idiom comprehension: An rTMS study. Neuropsychologia 2016; 91:360-370. PMID: 27609125. DOI: 10.1016/j.neuropsychologia.2016.09.003.
Abstract
Previous research is equivocal with respect to the neural substrates of idiom processing. Particularly elusive is the role of the left ventro-lateral prefrontal cortex (VLPFC), a region implicated in semantic control generally. Although fMRI studies have shown that the left VLPFC is active during idiom processing (see Rapp et al. (2012), for review), rTMS studies have failed to corroborate a clear role of this prefrontal region (e.g., Oliveri et al., 2004). We investigated this issue using a semantic meaningfulness judgment task that compared idiom comprehension following rTMS-stimulation to the left VLPFC relative to a control site (vertex). We also investigated whether individual differences in general cognitive capacity among comprehenders modulated the effects of rTMS. The results indicate that left VLPFC stimulation particularly affected the processing of low-familiar idioms, possibly because these items involve a maximal semantic conflict between a salient literal and less-known figurative meaning. Of note, this pattern only emerged for comprehenders with higher cognitive control capacity, possibly because they were more likely to activate or maintain multiple semantic representations during idiom processing, which required VLPFC integrity. Taken together, the results support the importance of the left VLPFC to idiom processing.
16. Please say what this word is: Vowel-extrinsic normalization in the sensorimotor control of speech. J Exp Psychol Hum Percept Perform 2016; 42:1039-1047. PMID: 26820250. DOI: 10.1037/xhp0000209.
Abstract
The extent to which the adaptive nature of speech perception influences the acoustic targets underlying speech production is not well understood. For example, listeners can rapidly accommodate to talker-dependent phonetic properties-a process known as vowel-extrinsic normalization-without altering their speech output. Recent evidence, however, shows that reinforcement-based learning in vowel perception alters the processing of speech auditory feedback, impacting sensorimotor control during vowel production. This suggests that more automatic and ubiquitous forms of perceptual plasticity, such as those characterizing perceptual talker normalization, may also impact the sensorimotor control of speech. To test this hypothesis, we set out to examine the possible effects of vowel-extrinsic normalization on experimental subjects' interpretation of their own speech outcomes. By combining a well-known manipulation of vowel-extrinsic normalization with speech auditory-motor adaptation, we show that exposure to different vowel spectral properties subsequently alters auditory feedback processing during speech production, thereby influencing speech motor adaptation. These findings extend the scope of perceptual normalization processes to include auditory feedback and support the idea that naturally occurring adaptations found in speech perception impact speech production.
17. Phonological processing in speech perception: What do sonority differences tell us? Brain Lang 2015; 149:77-83. PMID: 26186232. DOI: 10.1016/j.bandl.2015.06.008.
Abstract
Previous research has associated the inferior frontal and posterior temporal brain regions with a number of phonological processes. In order to identify how these specific brain regions contribute to phonological processing, we manipulated subsyllabic phonological complexity and stimulus modality during speech perception using fMRI. Subjects passively attended to visual or auditory pseudowords. Similar to previous studies, a bilateral network of cortical regions was recruited during the presentation of visual and auditory stimuli. Moreover, pseudowords recruited a similar network of regions as words and letters. Few regions in the whole-brain results revealed neural processing differences associated with phonological complexity independent of modality of presentation. In an ROI analysis, the only region sensitive to phonological complexity was the posterior part of the inferior frontal gyrus (IFGpo), with the complexity effect only present for print. In sum, the sensitivity of phonological brain areas depends on the modality of stimulus presentation and task demands.
18. Misleading Bias-Driven Expectations in Referential Processing and the Facilitative Role of Contrastive Accent. J Psycholinguist Res 2015; 44:623-650. PMID: 25015025. DOI: 10.1007/s10936-014-9306-6.
Abstract
Probabilistic preferences are often facilitative in language processing and may assist in discourse prediction. However, occasionally these sources of information may lead to inaccurate expectations. The current study investigated a test case of this scenario. An eye-tracking experiment examined the interpretation of ambiguous personal pronouns in the context of implicit causality biases. We tested whether reference resolution may be facilitated online by contrastive accent in cases of a bias-inconsistent referent. Implicit causality biases directed looks to the biased noun phrase; however, when the name of the bias-inconsistent antecedent was accented (e.g., JOHN envied Bill because he ...), this tendency was modulated. Contrastive accent seems to dampen the occasionally confusing prediction of implicit causality biases in referential processing. This demonstrates one way in which the spoken language comprehension system copes with occasional misguidance of otherwise helpful probabilistic information.
19
Individual differences in executive control relate to metaphor processing: an eye movement study of sentence reading. Front Hum Neurosci 2015; 8:1057. [PMID: 25628557] [PMCID: PMC4292575] [DOI: 10.3389/fnhum.2014.01057]
Abstract
Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing, using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar to or different from that in Experiment 1. When metaphors were of low familiarity, all readers were slower to read verbs used as metaphors than verbs used literally (this difference was smaller for highly familiar metaphors). Executive control capacity modulated this pattern in that high-executive-control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for highly familiar idioms than for literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, at least for the particular metaphor and idiom manipulations presented here.
20
Lexical-perceptual integration influences sensorimotor adaptation in speech. Front Hum Neurosci 2014; 8:208. [PMID: 24860460] [PMCID: PMC4029003] [DOI: 10.3389/fnhum.2014.00208]
Abstract
A combination of lexical bias and altered auditory feedback was used to investigate the influence of higher-order linguistic knowledge on the perceptual aspects of speech motor control. Subjects produced monosyllabic real words or pseudo-words containing the vowel [ε] (as in “head”) under conditions of altered auditory feedback involving a decrease in vowel first formant (F1) frequency. This manipulation had the effect of making the vowel sound more similar to [I] (as in “hid”), affecting the lexical status of produced words in two Lexical-Change (LC) groups (either changing them from real words to pseudo-words: e.g., less—liss, or pseudo-words to real words: e.g., kess—kiss). Two Non-Lexical-Change (NLC) control groups underwent the same auditory feedback manipulation during the production of [ε] real- or pseudo-words, only without any resulting change in lexical status (real words to real words: e.g., mess—miss, or pseudo-words to pseudo-words: e.g., ness—niss). The results from the LC groups indicate that auditory-feedback-based speech motor learning is sensitive to the lexical status of the stimuli being produced, in that speakers tend to keep their acoustic speech outcomes within the auditory-perceptual space corresponding to the task-related side of the word/non-word boundary (real words or pseudo-words). For the NLC groups, however, no such effect of lexical status is observed.
21
On the role of the supramarginal gyrus in phonological processing and verbal working memory: Evidence from rTMS studies. Neuropsychologia 2014; 53:39-46. [DOI: 10.1016/j.neuropsychologia.2013.10.015]
22
Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults. J Acoust Soc Am 2013; 134:2975-2987. [PMID: 24116433] [DOI: 10.1121/1.4818740]
Abstract
In a previous paper [Ménard et al., J. Acoust. Soc. Am. 126, 1406-1414 (2009)], it was demonstrated that, despite enhanced auditory discrimination abilities for synthesized vowels, blind adult French speakers produced vowels that were closer together in the acoustic space than those produced by sighted adult French speakers, suggesting finer control of speech production in the sighted speakers. The goal of the present study is to further investigate the articulatory effects of visual deprivation on vowels produced by 11 blind and 11 sighted adult French speakers. Synchronous ultrasound, acoustic, and video recordings of the participants articulating the ten French oral vowels were made. Results show that sighted speakers produce vowels that are spaced significantly farther apart in the acoustic vowel space than blind speakers. Furthermore, blind speakers use smaller differences in lip protrusion but larger differences in tongue position and shape than their sighted peers to produce rounding and place of articulation contrasts. Trade-offs between lip and tongue positions were examined. Results are discussed in the light of the perception-for-action control theory.
23
Abstract
Music and speech are often cited as characteristically human forms of communication. Both share the features of hierarchical structure, complex sound systems, and sensorimotor sequencing demands, and both are used to convey and influence emotions, among other functions [1]. Both music and speech also prominently use acoustical frequency modulations, perceived as variations in pitch, as part of their communicative repertoire. Given these similarities, and the fact that pitch perception and production involve the same peripheral transduction system (cochlea) and the same production mechanism (vocal tract), it might be natural to assume that pitch processing in speech and music would also depend on the same underlying cognitive and neural mechanisms. In this essay we argue that the processing of pitch information differs significantly for speech and music; specifically, we suggest that there are two pitch-related processing systems, one for more coarse-grained, approximate analysis and one for more fine-grained, accurate representation, and that the latter is unique to music. More broadly, this dissociation offers clues about the interface between sensory and motor systems, and highlights the idea that multiple processing streams are a ubiquitous feature of neuro-cognitive architectures. Pitch changes are an integral part of both spoken language and song; although the two domains share some of the same psychological and neural mechanisms, the authors conclude that there are fundamental differences between them.
24
Abstract
Sensorimotor integration is important for motor learning. The inferior parietal lobe, through its connections with the frontal lobe and cerebellum, has been associated with multisensory integration and sensorimotor adaptation for motor behaviors other than speech. In the present study, the contribution of the inferior parietal cortex to speech motor learning was evaluated using repetitive transcranial magnetic stimulation (rTMS) prior to a speech motor adaptation task. Subjects' auditory feedback was altered in a manner consistent with the auditory consequences of an unintended change in tongue position during speech production, and adaptation performance was used to evaluate sensorimotor plasticity and short-term learning. Prior to the feedback alteration, rTMS or sham stimulation was applied over the left supramarginal gyrus (SMG). Subjects who underwent the sham stimulation exhibited a robust adaptive response to the feedback alteration whereas subjects who underwent rTMS exhibited a diminished adaptive response. The results suggest that the inferior parietal region, in and around SMG, plays a role in sensorimotor adaptation for speech. The interconnections of the inferior parietal cortex with inferior frontal cortex, cerebellum and primary sensory areas suggest that this region may be an important component in learning and adapting sensorimotor patterns for speech.
25
Articulatory and acoustic adaptation to palatal perturbation. J Acoust Soc Am 2011; 129:2112-2120. [PMID: 21476667] [DOI: 10.1121/1.3557030]
Abstract
Previous work has established that speakers have difficulty making rapid compensatory adjustments in consonant production (especially in fricatives) for structural perturbations of the vocal tract induced by artificial palates with thicker-than-normal alveolar regions. The present study used electromagnetic articulography and simultaneous acoustic recordings to estimate tongue configurations during production of [s š t k] in the presence of a thin and a thick palate, before and after a practice period. Ten native speakers of English participated in the study. In keeping with previous acoustic studies, fricatives were more affected by the palate than were the stops. With the thick palate, the spectral center of gravity was lower, the jaw was lower, and the tongue moved farther backward and downward. Center-of-gravity measures revealed complete adaptation after training, and with practice, subjects decreased interlabial distance. The fact that adaptation effects were found for [k], which is produced with an articulatory gesture not directly impeded by the palatal perturbation, suggests a more global sensorimotor recalibration that extends beyond the specific articulatory target.
26
Effects of cooperating and conflicting prosody in spoken English garden path sentences: ERP evidence for the boundary deletion hypothesis. J Cogn Neurosci 2011; 23:2731-2751. [PMID: 21281091] [DOI: 10.1162/jocn.2011.21610]
Abstract
In reading, a comma in the wrong place can cause more severe misunderstandings than the lack of a required comma. Here, we used ERPs to demonstrate that a similar effect holds for prosodic boundaries in spoken language. Participants judged the acceptability of temporarily ambiguous English "garden path" sentences whose prosodic boundaries were either in line or in conflict with the actual syntactic structure. Sentences with incongruent boundaries were accepted less often than those with missing boundaries and elicited a stronger on-line brain response in ERPs (N400/P600 components). Our results support the notion that mentally deleting an overt prosodic boundary is more costly than postulating a new one, and they extend previous findings suggesting an immediate role of prosody in sentence comprehension. Importantly, our study also provides new details on the profile and temporal dynamics of the closure positive shift (CPS), an ERP component assumed to reflect prosodic phrasing in speech and music in real time. We show that the CPS is reliably elicited at the onset of prosodic boundaries in English sentences and is preceded by negative components. Its early onset distinguishes the speech CPS in adults both from prosodic ERP correlates in infants and from the "music CPS" previously reported for trained musicians.
27
The neural underpinnings of semantic ambiguity and anaphora. Brain Res 2010; 1311:93-109. [DOI: 10.1016/j.brainres.2009.09.102]
28
Production and perception of French vowels by congenitally blind adults and sighted adults. J Acoust Soc Am 2009; 126:1406-1414. [PMID: 19739754] [DOI: 10.1121/1.3158930]
Abstract
The goal of this study is to investigate the production and perception of French vowels by blind and sighted speakers. Twelve blind adults and 12 sighted adults served as subjects. The auditory-perceptual abilities of each subject were evaluated by discrimination tests (AXB). At the production level, ten repetitions of the ten French oral vowels were recorded. Formant values and fundamental frequency values were extracted from the acoustic signal. Measures of contrasts between vowel categories were computed and compared for each feature (height, place of articulation, roundedness) and group (blind, sighted). The results reveal a significant effect of group (blind vs. sighted) on production, with sighted speakers producing vowels that are spaced farther apart in the vowel space than those of blind speakers. A group effect emerged for a subset of the perceptual contrasts examined, with blind speakers having higher peak discrimination scores than sighted speakers. Results suggest an important role of visual input in determining speech goals.
29
Use of prosodic cues in the production of idiomatic and literal sentences by individuals with right- and left-hemisphere damage. Brain Lang 2009; 110:38-42. [PMID: 19339042] [DOI: 10.1016/j.bandl.2009.02.001]
Abstract
The neural bases of prosody during the production of literal and idiomatic interpretations of literally plausible idioms were investigated. Left- and right-hemisphere-damaged participants and normal controls produced literal and idiomatic versions of idioms (e.g., He hit the books). All groups modulated duration to distinguish the interpretations. LHD patients, however, showed typical speech timing difficulties. RHD patients did not differ from the normal controls. The results partially support a differential lateralization of prosodic cues in the two cerebral hemispheres [Van Lancker, D., & Sidtis, J. J. (1992). The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. Journal of Speech and Hearing Research, 35, 963-970]. Furthermore, extended final word lengthening appears to mark idiomaticity.
30
Perceptual recalibration of speech sounds following speech motor learning. J Acoust Soc Am 2009; 125:1103-1113. [PMID: 19206885] [DOI: 10.1121/1.3058638]
Abstract
The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s/-/ʃ/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/-production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s/-/ʃ/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/-stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.
31
[Oxidized fibrinogen and its relationship with hemostasis disturbances and endothelial dysfunction in coronary heart disease and myocardial infarction]. Kardiologiia 2009; 49:4-8. [PMID: 19772496]
Abstract
The highest oxidative modification of fibrinogen was found in men with acute myocardial infarction (MI); it was 1.26 and 1.56 times higher than in men with coronary heart disease (CHD) and a history of MI and in men without CHD, respectively. Increased oxidized fibrinogen levels correlated with increased levels of plasma lipid peroxidation products, von Willebrand factor, and fibrin degradation products, with accelerated leukocyte-platelet aggregation, and with decreased levels of plasma NO metabolites. Associations were revealed between oxidized fibrinogen and MI, typical parameters of thrombosis and hypercoagulatory hemostasis disturbances, and endothelial dysfunction.
32
The Effects of Contextual Strength on Phonetic Identification in Younger and Older Listeners. Exp Aging Res 2008; 34:232-250. [DOI: 10.1080/03610730802070183]
33
Comprehension of grammatical and emotional prosody is impaired in Alzheimer's disease. Neuropsychology 2008; 22:188-195. [PMID: 18331161] [DOI: 10.1037/0894-4105.22.2.188]
Abstract
Previous research has demonstrated impairment in comprehension of emotional prosody in individuals diagnosed with Alzheimer's disease (AD). The present pilot study further explored the prosodic processing impairment in AD, aiming to extend our knowledge to encompass both grammatical and emotional prosody processing. As expected, impairments were seen in emotional prosody. AD individuals were also found to be impaired in detecting sentence modality, suggesting that impairments in affective prosody processing in AD may be ascribed to a more general prosodic processing impairment, specifically in comprehending prosodic information signaled across the sentence level. AD participants were at a very mild stage of the disease, suggesting that prosody impairments occur early in the disease course.
34
An electrophysiological study of mood, modal context, and anaphora. Brain Res 2006; 1117:135-153. [PMID: 16997288] [DOI: 10.1016/j.brainres.2006.07.048]
Abstract
We investigated whether modal information elicited empirical effects with regard to discourse processing. That is, like tense information, one of the linguistic factors shown to be relevant in organizing a discourse representation is modality, where the mood of an utterance indicates whether or not it is asserted. Event-related potentials (ERPs) were used in order to address the question of the qualitative nature of discourse processing, as well as the time course of this process. This experiment investigated pronoun resolution in two-sentence discourses, where context sentences contained either a hypothetical or an actual noun phrase antecedent. The other factor in this 2 x 2 experiment was the type of continuation sentence, which included or excluded a modal auxiliary (e.g., must, should) and contained a pronoun. Intuitions suggest that hypothetical antecedents followed by pronouns asserted to exist present ungrammaticality, unlike actual antecedents followed by such pronouns. Results confirmed the grammatical intuition that the former discourse displays anomaly, unlike the latter (control) discourse. That is, at the verb position in continuation sentences, we found a frontal positivity, consistent with the family of P600 components, and not an N400 effect, which suggests that the anomalous target sentences caused a revision in discourse structure. Furthermore, sentences exhibiting modal information resulted in negative-going waveforms at other points in the continuation sentence, indicating that modality affects the overall structural complexity of discourse representation.
35
Electropalatographic, acoustic, and perceptual data on adaptation to a palatal perturbation. J Acoust Soc Am 2006; 119:2372-2381. [PMID: 16642850] [DOI: 10.1121/1.2173520]
Abstract
Exploring the compensatory responses of the speech production system to perturbation has provided valuable insights into speech motor control. The present experiment was conducted to examine compensation for one such perturbation: a palatal perturbation in the production of the fricative /s/. Subjects wore a specially designed electropalatographic (EPG) appliance with a buildup of acrylic over the alveolar ridge as well as a normal EPG palate. In this way, compensatory tongue positioning could be assessed during a period of target-specific and intense practice and compared to nonperturbed conditions. Electropalatographic, acoustic, and perceptual analyses of productions of /asa/ elicited from nine speakers over the course of a one-hour practice period were conducted. Acoustic and perceptual results confirmed earlier findings, which showed improvement in production with a thick artificial palate in place over the practice period; the EPG data showed overall increased maximum contact as well as increased medial and posterior contact for speakers with the thick palate in place, but little change over time. Negative aftereffects were observed in the productions with the thin palate, indicating recalibration of sensorimotor processes in the face of the oral-articulatory perturbation. Findings are discussed with regard to the nature of adaptive articulatory skills.
36
Neural substrates of linguistic prosody: evidence from syntactic disambiguation in the productions of brain-damaged patients. Brain Lang 2006; 96:78-89. [PMID: 15922444] [DOI: 10.1016/j.bandl.2005.04.005]
Abstract
The present investigation focussed on the neural substrates underlying linguistic distinctions that are signalled by prosodic cues. A production experiment was conducted to examine the ability of left- (LHD) and right- (RHD) hemisphere-damaged patients and normal controls to use temporal and fundamental frequency cues to disambiguate sentences which include one or more Intonational Phrase-level prosodic boundaries. Acoustic analyses of subjects' productions of three sentence types (parentheticals, appositives, and tags) showed that LHD speakers, compared to RHD speakers and normal controls, exhibited impairments in the control of temporal parameters signalling phrase boundaries, including inconsistent patterns of pre-boundary lengthening and longer-than-normal pause durations in non-boundary positions. Somewhat surprisingly, a perception test presented to a group of normal native listeners showed that listeners experienced the greatest difficulty in identifying the presence or absence of boundaries in the productions of the RHD speakers. The findings support a cue lateralization hypothesis in which prosodic domain plays an important role.
37
Processing homonymy and polysemy: effects of sentential context and time-course following unilateral brain damage. Brain Lang 2005; 95:365-382. [PMID: 16298667] [DOI: 10.1016/j.bandl.2005.03.001]
Abstract
The present study investigated the abilities of left-hemisphere-damaged (LHD) non-fluent aphasic, right-hemisphere-damaged (RHD), and normal control individuals to access, in sentential biasing contexts, the multiple meanings of three types of ambiguous words, namely homonyms (e.g., "punch"), metonymies (e.g., "rabbit"), and metaphors (e.g., "star"). Furthermore, the predictions of the "suppression deficit" and "coarse semantic coding" hypotheses, which have been proposed to account for RH language function/dysfunction, were tested. Using an auditory semantic priming paradigm, ambiguous words were incorporated in dominant- or subordinate-biasing sentence-primes followed after a short (100 ms) or long (1,000 ms) interstimulus interval (ISI) by dominant-meaning-related, subordinate-meaning-related or unrelated target words. For all three types of ambiguous words, both the effects of context and ISI were obvious in the performance of normal control subjects, who showed multiple meaning activation at the short ISI, but eventually, at the long ISI, contextually appropriate meaning selection. Largely similar performance was exhibited by the LHD non-fluent aphasic patients as well. In contrast, RHD patients showed limited effects of context, and no effects of the time-course of processing. In addition, although homonymous and metonymous words showed similar patterns of activation (i.e., both meanings were activated at both ISIs), RHD patients had difficulties activating the subordinate meanings of metaphors, suggesting a selective problem with figurative meanings. Although the present findings do not provide strong support for either the "coarse semantic coding" or the "suppression deficit" hypotheses, they are viewed as being more consistent with the latter, according to which RH damage leads to deficits suppressing alternative meanings of ambiguous words that become incompatible with the context.
38
Unilateral brain damage effects on processing homonymous and polysemous words. Brain Lang 2005; 93:308-326. [PMID: 15862856] [DOI: 10.1016/j.bandl.2004.10.011]
Abstract
Using an auditory semantic priming paradigm, the present study investigated the abilities of left-hemisphere-damaged (LHD) non-fluent aphasic, right-hemisphere-damaged (RHD) and normal control individuals to access, out of context, the multiple meanings of three types of ambiguous words, namely homonyms (e.g., "punch"), metonymies (e.g., "rabbit"), and metaphors (e.g., "star"). In addition, the study tested certain predictions of the "suppression deficit" and "coarse semantic coding" hypotheses that have been proposed to account for the linguistic deficits typically observed after RH damage. Homonymous, metonymous, and metaphorical words were used as primes followed after a short (100 ms) or a long (1000 ms) inter-stimulus interval (ISI) by dominant-meaning-related, subordinate-meaning-related or unrelated target words. No significant group effects were found, and for both ISIs, dominant- and subordinate-related targets were facilitated relative to unrelated control targets for the homonymy and metonymy conditions. In contrast, for the metaphor condition, only targets related to the dominant meaning were facilitated. These findings provide only partial support for the "suppression deficit" hypothesis and no support for the "coarse semantic coding" hypothesis (as interpreted herein) indicating that patients with focal LH or RH damage can access the multiple meanings of ambiguous words and exhibit processing abilities comparable to those of older normal control subjects, at least at the single-word level.
39
Hemispheric contributions to lexical ambiguity resolution in a discourse context: Evidence from individuals with unilateral left and right hemisphere lesions. Brain Cogn 2005; 57:70-83. [PMID: 15629218] [DOI: 10.1016/j.bandc.2004.08.023]
Abstract
In the present study, a cross-modal semantic priming task was used to investigate the ability of left-hemisphere-damaged (LHD) nonfluent aphasic, right-hemisphere-damaged (RHD) and non-brain-damaged (NBD) control subjects to use a discourse context to resolve lexically ambiguous words. Subjects first heard four-sentence discourse passages ending in ambiguous words and after an inter-stimulus interval (ISI) of either 0 or 750 ms, made lexical decisions on first- or second-meaning related visual targets. NBD control subjects, at the 0 ms ISI, only activated contextually appropriate meanings, though significant effects, as a group, were only seen in second-meaning biased contexts. Surprisingly, at the 750 ms ISI, these subjects activated both appropriate and inappropriate meanings in first-meaning biased contexts. With respect to the LHD nonfluent aphasic patients, the majority activated first meanings regardless of context at the 0 ms ISI, though effects for the group were not significant. At the 750 ms ISI, these patients again activated first meanings regardless of context, with significant effects for the group only seen in first-meaning biased contexts. With regard to the RHD patients, the majority activated second meanings regardless of context at the 0 ms ISI and first meanings regardless of context at the 750 ms ISI, though, as a group, the effects were not significant. In light of our previous findings (Grindrod & Baum, 2003, submitted), the present data are interpreted as supporting the notion that damage to the left hemisphere disrupts either lexical access processes or the time course of lexical activation, whereas damage to the right hemisphere impairs the use of context and leads to activation of ambiguous word meanings based on meaning frequency.
Collapse
|
40
|
[Prevalence of electrocardiographic signs of pulmonary hypertension in a sample of male population of Novosibirsk]. KARDIOLOGIIA 2003; 42:57-9. [PMID: 12494076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 02/28/2023]
Abstract
The prevalence of ECG signs of right ventricular hypertrophy and pulmonary hypertension was assessed in a representative sample (n=715) of the general male population (aged 25-64 years) of Novosibirsk, studied within the framework of the WHO MONICA project. Other methods of investigation included the Rose questionnaire, anthropometry, ECG interpreted with the Minnesota code, and echocardiography. Echocardiographic data served as the reference standard for determining the sensitivity and specificity of the ECG criteria for pulmonary hypertension.
Collapse
|
41
|
Temporal parameters as cues to phrasal boundaries: a comparison of processing by left- and right-hemisphere brain-damaged individuals. BRAIN AND LANGUAGE 2003; 87:385-399. [PMID: 14642541 DOI: 10.1016/s0093-934x(03)00138-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Two experiments were conducted to examine the ability of left- (LHD) and right-hemisphere-damaged (RHD) patients and normal controls to use temporal cues in rendering phrase grouping decisions. The phrase "pink and black and green" was manipulated to signal a boundary after "pink" or after "black" by altering pre-boundary word durations and pause durations at the boundary in a stepwise fashion. Stimuli were presented to listeners auditorily along with a card with three alternative groupings of colored squares from which to select the presented alternative. Results revealed that normal controls were able to use both temporal cues to identify the intended grouping. In contrast, LHD patients required longer than normal pause durations to consistently identify the intended grouping, suggesting a higher than normal threshold for perception of temporal prosodic cues. Surprisingly, the RHD patients exhibited great difficulty with the task, perhaps due to the limited acoustic cues available in the stimuli.
Collapse
|
42
|
Sensitivity to prosodic structure in left- and right-hemisphere-damaged individuals. BRAIN AND LANGUAGE 2003; 87:278-289. [PMID: 14585297 DOI: 10.1016/s0093-934x(03)00109-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
An experiment was conducted in order to determine whether left- (LHD) and right-hemisphere-damaged (RHD) patients exhibit sensitivity to prosodic information that is used in syntactic disambiguation. Building on earlier work, a cross-modal lexical decision task was performed by LHD and RHD subjects, as well as by adults without brain pathology (NC). Subjects listened to sentences with attachment ambiguities with either congruent or incongruent prosody, while performing a visual lexical decision task. Results showed that each of the unilaterally damaged populations differed from each other, as well as from the NCs, in their sensitivity to prosodic cues. Specifically, the RHD group was insensitive to sentence prosody as a whole. This was in contrast to the LHD patients, who responded to the prosodic manipulation, but in the unexpected direction. Results are discussed in terms of current hypotheses regarding the hemispheric lateralization of prosodic cues.
Collapse
|
43
|
Sensitivity to local sentence context information in lexical ambiguity resolution: evidence from left- and right-hemisphere-damaged individuals. BRAIN AND LANGUAGE 2003; 85:503-523. [PMID: 12744960 DOI: 10.1016/s0093-934x(03)00072-5] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Using a cross-modal semantic priming paradigm, the present study investigated the ability of left-hemisphere-damaged (LHD) nonfluent aphasic, right-hemisphere-damaged (RHD) and non-brain-damaged (NBD) control subjects to use local sentence context information to resolve lexically ambiguous words. Critical sentences were manipulated such that they were either unbiased, or biased toward one of two meanings of sentence-final equibiased ambiguous words. Sentence primes were presented auditorily, followed after a short (0 ms) or long (750 ms) interstimulus interval (ISI) by the presentation of a first- or second-meaning related visual target, on which subjects made a lexical decision. At the short ISI, neither patient group appeared to be influenced by context, in sharp contrast to the performance of the NBD control subjects. LHD nonfluent aphasic subjects activated both meanings of ambiguous words regardless of context, whereas RHD subjects activated only the first meaning in unbiased and second-meaning biased contexts. At the long ISI, LHD nonfluent aphasic subjects failed to show evidence of activation of either meaning, while RHD individuals activated first meanings in unbiased contexts and contextually appropriate meanings in second-meaning biased contexts. These findings suggest that both left (LH) and right hemisphere (RH) damage lead to deficits in using local contextual information to complete the process of ambiguity resolution. LH damage seems to spare initial access to word meanings, but initially impairs the ability to use context and results in a faster than normal decay of lexical activation. RH damage appears to initially disrupt access to context, resulting in an over-reliance on frequency in the activation of ambiguous word meanings.
Collapse
|
44
|
Sensitivity to sub-syllabic constituents in brain-damaged patients: evidence from word games. BRAIN AND LANGUAGE 2002; 83:237-248. [PMID: 12387796 DOI: 10.1016/s0093-934x(02)00034-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Two experiments were conducted to examine whether left- (LHD) and right-hemisphere-damaged (RHD) patients exhibit sensitivity to sub-syllabic constituents (i.e., onsets and codas) in the generation of nonwords, using a word games paradigm adapted from earlier work. Four groups of individuals (including LHD fluent and nonfluent aphasic patients, RHD patients, and normal controls) were trained to add syllables to monosyllabic CVC nonwords either after the initial consonant (Experiment 1) or prior to the final consonant (Experiment 2) to create bisyllabic nonwords. Experimental stimuli consisting of CCVC or CVCC nonwords tested whether participants would preserve or split the onset and coda constituents in producing the novel bisyllabic nonwords. Results revealed that the majority of subjects demonstrated sensitivity to the sub-syllabic constituents, preserving the onsets and codas. The fluent aphasic patients exhibited a greater than normal tendency to split the onset and coda constituents; however, the small number of individuals in that group whose data met inclusion criteria limits the conclusions that may be drawn from these findings. The results are discussed in relation to theories of phonological deficits in aphasia.
Collapse
|
45
|
Association of the CCR2 chemokine receptor gene polymorphism with myocardial infarction. DOKLADY BIOLOGICAL SCIENCES : PROCEEDINGS OF THE ACADEMY OF SCIENCES OF THE USSR, BIOLOGICAL SCIENCES SECTIONS 2002; 385:367-70. [PMID: 12469616 DOI: 10.1023/a:1019973120289] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
|
46
|
Sentence context effects and the timecourse of lexical ambiguity resolution in nonfluent aphasia. Brain Cogn 2002; 48:381-5. [PMID: 12030472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Abstract
In the current neurolinguistic literature, two proposals have been put forth to account for the deficit in ambiguous word processing observed in nonfluent aphasic patients. One proposal maintains that these individuals are impaired in the selective access of ambiguous word meanings, while the other argues that they are impaired in the process of contextual selection. In the present study, nonfluent aphasic patients and elderly control subjects participated in a semantic priming experiment, in which ambiguous words were presented in different biasing contexts at both a 0- and 750-ms ISI. At both ISIs, control subjects showed context-selective access, in that only contextually appropriate meanings were activated. In contrast, at the 0-ms ISI, the nonfluent aphasic patients showed activation of both meanings of the ambiguous word regardless of context. Only at the 750-ms ISI did they exhibit context-selective access. These results are consistent with the proposal that left hemisphere damage causing nonfluent aphasia results in an impairment in rapidly integrating lexical-semantic information into context.
Collapse
|
47
|
Production of stress retraction by left- and right-hemisphere-damaged patients. BRAIN AND LANGUAGE 2001; 79:482-494. [PMID: 11781055 DOI: 10.1006/brln.2001.2562] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
An acoustic-perceptual investigation of a phonological phenomenon in which stress is retracted in double-stressed words (e.g., thirTEEN vs THIRteen MEN) was undertaken to identify the locus of functional impairments in speech prosody. Subjects included left-hemisphere-damaged (LHD) and right-hemisphere-damaged (RHD) patients and nonneurological controls. They were instructed to read sentences containing double-stressed target words in the presence or absence of a clause boundary. Whereas all three groups of subjects were capable of manipulating the acoustic parameters that signal a shift in stress, there were some differences between the performance of the patient groups and that of the normal controls. Further, stress production deficits were more severe in LHD aphasic patients than in RHD patients. LHD speakers exhibited deficits in the control of both temporal and F0 cues. Their F0 disturbance appears to be secondary to a primary deficit in temporal control at the phrase or sentence level, as an increased number of continuation rises found for the LHD patients seemed to arise from lengthy pauses within sentences. Findings are highlighted to address the nature of breakdown in speech prosody and the competing views of prosodic lateralization.
Collapse
|
48
|
Abstract
The ability of RHD patients to use context under conditions of increased processing demands was examined. Subjects monitored for words in auditorily presented sentences of three context types (normal, semantically anomalous, and random) at three rates of speech: normal, 70% compressed (Experiment 1), and 60% compressed (Experiment 2). Effects of semantics and syntax were found for the RHD and normal groups under the normal rate of speech condition. At compressed rates of speech, the effect of syntax disappeared, but the effect of semantics remained. Importantly, and contrary to expectations, the RHD group was similar to normals in continuing to demonstrate an effect of semantic context under conditions of increased processing demands. Results are discussed relative to contemporary theories of laterality, based on studies with normals, which suggest that the involvement of the left versus right hemisphere in context use may depend upon the type of contextual information being processed.
Collapse
|
49
|
Contextual influences on phonetic identification in aphasia: the effects of speaking rate and semantic bias. BRAIN AND LANGUAGE 2001; 76:266-281. [PMID: 11247645 DOI: 10.1006/brln.2000.2386] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Two experiments examined the influence of context on stop-consonant voicing identification in fluent and nonfluent aphasic patients and normal controls. Listeners were required to label the initial stop in a target word varying along a voice onset time (VOT) continuum as either voiced or voiceless ([b]/[p] or [d]/[t]). Target stimuli were presented in sentence contexts in which the rate of speech of the sentence context (Experiment 1) or the semantic bias of the context (Experiment 2) was manipulated. The results revealed that all subject groups were sensitive to the contextual influences, although the extent of the context effects varied somewhat across groups and across experiments. In addition, a number of patients in both the fluent and nonfluent aphasic groups could not consistently identify even endpoint stimuli, confirming phonetic categorization impairments previously shown in such individuals. Results are discussed with respect to the potential reliance by aphasic patients on higher level context to compensate for phonetic perception deficits.
Collapse
|
50
|
Context use by right-hemisphere-damaged individuals under a compressed speech condition. Brain Cogn 2000; 43:315-9. [PMID: 10857716] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
Abstract
The effect of increased processing demands on context use by RHD individuals was examined using a word-monitoring task. Subjects were required to monitor for a target word in sentences that were either normal, semantically anomalous, or both syntactically and semantically anomalous. Stimuli were presented at two rates of speech: normal and compressed to 70% of normal. Contrary to expectations, the RHD group performed similarly to normals in demonstrating an effect of context at both rates of speech. Results are discussed relative to recent studies of normal brain functioning that suggest that the involvement of the LH versus the RH in context use depends upon the type of contextual information being processed.
Collapse
|