1
Attout L, Grégoire C, Querella P, Majerus S. Neural evidence for a separation of semantic and phonological control processes. Neuropsychologia 2022;176:108377. PMID: 36183802. DOI: 10.1016/j.neuropsychologia.2022.108377.
Abstract
There remain major doubts about the nature and domain specificity of inhibitory control processes, both within and between cognitive domains. This study examined inhibitory processes within the language domain by contrasting semantic versus phonological inhibitory control. In an fMRI experiment, elderly participants performed phonological and semantic inhibitory control tasks involving resistance to highly or weakly interfering stimuli. In the semantic domain, inhibitory control effects, contrasting high vs. low interference control levels, were observed at univariate and multivariate levels in all fronto-parieto-temporal regions of interest. In the phonological domain, inhibitory control effects were observed only at the multivariate level and were restricted to the pars triangularis of the bilateral inferior frontal gyrus and to the left middle temporal gyrus. Critically, no reliable multivariate cross-domain prediction of the neural patterns associated with inhibitory control was observed. This study supports a functional dissociation of the neural substrates associated with inhibitory control in the phonological vs. semantic domains.
Affiliation(s)
- Lucie Attout
- Psychology and Neuroscience of Cognition Research Unit, University of Liège, Belgium; Fund for Scientific Research FNRS, 1000 Brussels, Belgium
- Coline Grégoire
- Psychology and Neuroscience of Cognition Research Unit, University of Liège, Belgium
- Pauline Querella
- Psychology and Neuroscience of Cognition Research Unit, University of Liège, Belgium
- Steve Majerus
- Psychology and Neuroscience of Cognition Research Unit, University of Liège, Belgium; Fund for Scientific Research FNRS, 1000 Brussels, Belgium
2
Mechtenberg H, Xie X, Myers EB. Sentence predictability modulates cortical response to phonetic ambiguity. Brain Lang 2021;218:104959. PMID: 33930722. PMCID: PMC8513138. DOI: 10.1016/j.bandl.2021.104959.
Abstract
Phonetic categories have undefined edges, such that individual tokens that belong to different speech sound categories may occupy the same region in acoustic space. In continuous speech, there are multiple sources of top-down information (e.g., lexical, semantic) that help to resolve the identity of an ambiguous phoneme. Of interest is how these top-down constraints interact with ambiguity at the phonetic level. In the current fMRI study, participants passively listened to sentences that varied in semantic predictability and in the amount of naturally occurring phonetic competition. The left middle frontal gyrus, angular gyrus, and anterior inferior frontal gyrus were sensitive to both semantic predictability and the degree of phonetic competition. Notably, greater phonetic competition within non-predictive contexts resulted in a negatively graded neural response. We suggest that uncertainty at the phonetic-acoustic level interacts with uncertainty at the semantic level, perhaps due to a failure of the network to construct a coherent meaning.
Affiliation(s)
- Hannah Mechtenberg
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Mansfield, CT 06269, USA.
- Xin Xie
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA.
- Emily B Myers
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Mansfield, CT 06269, USA; Department of Psychological Sciences, University of Connecticut, Storrs, Mansfield, CT 06269, USA.
3
Guediche S, de Bruin A, Caballero-Gaudes C, Baart M, Samuel AG. Second-language word recognition in noise: Interdependent neuromodulatory effects of semantic context and crosslinguistic interactions driven by word form similarity. Neuroimage 2021;237:118168. PMID: 34000398. DOI: 10.1016/j.neuroimage.2021.118168.
Abstract
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise, such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects the degree to which they influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how prime-target word pair semantic relationships interact with the target word's form similarity (cognate status) to the translation equivalent in the dominant language (L1) during accurate word recognition of a non-dominant (L2) language. We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets matching L1 translation-equivalent phonological word forms were preceded by unrelated semantic contexts that drive lexical competition, a flexible language control (fronto-parietal-subcortical) network was upregulated, whereas when they were preceded by related semantic contexts that reduce lexical competition, it was downregulated. We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition in noise.
Affiliation(s)
- Sara Guediche
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain
- Martijn Baart
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, the Netherlands
- Arthur G Samuel
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Stony Brook University, NY 11794-2500, United States; Ikerbasque Foundation, Spain
4
de Zubicaray GI, McMahon KL, Arciuli J. A Sound Explanation for Motor Cortex Engagement during Action Word Comprehension. J Cogn Neurosci 2020;33:129-145. PMID: 33054555. DOI: 10.1162/jocn_a_01640.
Abstract
Comprehending action words often engages similar brain regions to those involved in perceiving and executing actions. This finding has been interpreted as support for grounding of conceptual processing in motor representations or that conceptual processing involves motor simulation. However, such demonstrations cannot confirm the nature of the mechanism(s) responsible, as word comprehension involves multiple processes (e.g., lexical, semantic, morphological, phonological). In this study, we tested whether this motor cortex engagement instead reflects processing of statistical regularities in sublexical phonological features. Specifically, we measured brain activity in healthy participants using functional magnetic resonance imaging while they performed an auditory lexical decision paradigm involving monosyllabic action words associated with specific effectors (face, arm, and leg). We show that nonwords matched to the action words in terms of their phonotactic probability elicit common patterns of activation. In addition, we show that a measure of the action words' phonological typicality, the extent to which a word's phonology is typical of other words in the grammatical category to which it belongs (i.e., more or less verb-like), is responsible for their activating a significant portion of primary and premotor cortices. These results indicate motor cortex engagement during action word comprehension is more likely to reflect processing of statistical regularities in sublexical phonological features than conceptual processing. We discuss the implications for current neurobiological models of language, all of which implicitly or explicitly assume that the relationship between the sound of a word and its meaning is arbitrary.
Affiliation(s)
- Katie L McMahon
- Queensland University of Technology (QUT), Brisbane, Australia; Royal Brisbane & Women's Hospital, Australia
5
Luthra S, Guediche S, Blumstein SE, Myers EB. Neural substrates of subphonemic variation and lexical competition in spoken word recognition. Lang Cogn Neurosci 2019;34:151-169. PMID: 31106225. PMCID: PMC6516505. DOI: 10.1080/23273798.2018.1531140.
Abstract
In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g., peacock). Eyetracking data indicated that participants were slowed when a voiced onset competitor (e.g., beaker) was also displayed, and this effect was amplified when acoustic-phonetic competition was increased. Simultaneously collected fMRI data showed that several brain regions were sensitive to the presence of the onset competitor, including the supramarginal, middle temporal, and inferior frontal gyri, and functional connectivity analyses revealed that the coordinated activity of left frontal regions depends on both acoustic-phonetic and lexical factors. Taken together, the results suggest a role for frontal brain structures in resolving lexical competition, particularly as atypical acoustic-phonetic information maps onto the lexicon.
Affiliation(s)
- Sahil Luthra
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT 06269, USA
- Sara Guediche
- BCBL, Basque Center on Cognition, Brain and Language, Mikeletegi Pasealekua 69, 20009 Donostia, Gipuzkoa, Spain
- Sheila E Blumstein
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, 190 Thayer Street, Providence, RI 02912, USA
- Brown Institute for Brain Science, Brown University, 2 Stimson Ave, Providence, RI 02912, USA
- Emily B Myers
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT 06269, USA
- Department of Speech, Language & Hearing Sciences, University of Connecticut, 850 Bolton Road, Unit 1085, Storrs, CT 06269, USA
- Haskins Laboratories, 300 George Street, Suite 900, New Haven, CT 06511, USA
6
Guo J, Li D, Bi Y, Chen C. Modulating Effects of Contextual Emotions on the Neural Plasticity Induced by Word Learning. Front Hum Neurosci 2018;12:464. PMID: 30532700. PMCID: PMC6266032. DOI: 10.3389/fnhum.2018.00464.
Abstract
Recently, numerous studies have investigated the neurocognitive mechanisms of learning words in isolation or in semantic contexts. However, emotion, an important factor influencing novel word learning, has not been fully considered in previous studies, and the effects of emotion on word learning and the underlying neural mechanisms have not been systematically investigated. Sixteen participants were trained to learn novel concrete or abstract words under negative, neutral, and positive contextual emotions over 3 days; fMRI scanning was then performed during the testing sessions on day 1 and day 3. We compared brain activations on day 1 and day 3 to investigate the role of contextual emotions in learning different types of words and the corresponding changes in neural plasticity. Behaviorally, performance for words learned in the negative context was lower than for those learned in the neutral and positive contexts, indicating that contextual emotions had a significant impact on novel word learning. Correspondingly, the functional plasticity changes of the right angular gyrus (AG), bilateral insula, and anterior cingulate cortex (ACC) induced by word learning were modulated by the contextual emotions. The insula was also sensitive to the concreteness of the learned words. More importantly, the functional plasticity changes of the left inferior frontal gyrus (left IFG) and left fusiform gyrus (left FG) were interactively influenced by contextual emotion and concreteness, suggesting that contextual emotional information has a discriminable effect on different types of words at the neural level. These results demonstrate that emotional information in contexts is inevitably involved in word learning. The role of contextual emotions in brain plasticity for learning is discussed.
Affiliation(s)
- Jingjing Guo
- Key Laboratory of Behavior and Cognitive Neuroscience in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
- Correspondence: Jingjing Guo
- Dingding Li
- Key Laboratory of Behavior and Cognitive Neuroscience in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
- Yanling Bi
- Key Laboratory of Behavior and Cognitive Neuroscience in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
- Chunhui Chen
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
7
Xie X, Myers E. Left Inferior Frontal Gyrus Sensitivity to Phonetic Competition in Receptive Language Processing: A Comparison of Clear and Conversational Speech. J Cogn Neurosci 2017;30:267-280. PMID: 29160743. DOI: 10.1162/jocn_a_01208.
Abstract
The speech signal is rife with variations in phonetic ambiguity. For instance, when talkers speak in a conversational register, they demonstrate less articulatory precision, leading to greater potential for confusability at the phonetic level compared with a clear speech register. Current psycholinguistic models assume that ambiguous speech sounds activate more than one phonological category and that competition at prelexical levels cascades to lexical levels of processing. Imaging studies have shown that the left inferior frontal gyrus (LIFG) is modulated by phonetic competition between simultaneously activated categories, with increases in activation for more ambiguous tokens. Yet, these studies have often used artificially manipulated speech and/or metalinguistic tasks, which arguably may recruit neural regions that are not critical for natural speech recognition. Indeed, a prominent model of speech processing, the dual-stream model, posits that the LIFG is not involved in prelexical processing in receptive language processing. In the current study, we exploited natural variation in phonetic competition in the speech signal to investigate the neural systems sensitive to phonetic competition as listeners engage in a receptive language task. Participants heard nonsense sentences spoken in either a clear or conversational register as neural activity was monitored using fMRI. Conversational sentences contained greater phonetic competition, as estimated by measures of vowel confusability, and these sentences also elicited greater activation in a region in the LIFG. Sentence-level phonetic competition metrics uniquely correlated with LIFG activity as well. This finding is consistent with the hypothesis that the LIFG responds to competition at multiple levels of language processing and that recruitment of this region does not require an explicit phonological judgment.
8
Kwok VPY, Dan G, Yakpo K, Matthews S, Fox PT, Li P, Tan LH. A Meta-Analytic Study of the Neural Systems for Auditory Processing of Lexical Tones. Front Hum Neurosci 2017;11:375. PMID: 28798670. PMCID: PMC5526909. DOI: 10.3389/fnhum.2017.00375.
Abstract
The neural systems supporting lexical tone processing have been studied for many years. However, previous findings have been mixed with regard to hemispheric specialization for the perception of linguistic pitch patterns in native speakers of tonal languages. In this study, we performed two activation likelihood estimation (ALE) meta-analyses, one on neuroimaging studies of auditory processing of lexical tones in tonal languages (17 studies), and the other on auditory processing of lexical information in non-tonal languages as a control analysis for comparison (15 studies). The lexical tone ALE analysis showed significant activations in bilateral inferior prefrontal regions, bilateral superior temporal regions, and the right caudate, while the control ALE analysis showed significant cortical activity in the left inferior frontal gyrus and left temporo-parietal regions. However, the contrast analysis between the two auditory conditions yielded no significant differences, which may reflect the limited number of studies available for comparison. Although the current study lacks evidence for a lexical-tone-specific activation pattern, our results provide clues and directions for future investigations on this topic; more sophisticated methods will be needed to explore this question in greater depth.
Affiliation(s)
- Veronica P Y Kwok
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Guo Dan
- Neuroimaging Laboratory, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; Guangdong Key Laboratory of Biomedical Information Detection and Ultrasound Imaging, Shenzhen, China
- Kofi Yakpo
- Department of Linguistics, School of Humanities, University of Hong Kong, Hong Kong
- Stephen Matthews
- Department of Linguistics, School of Humanities, University of Hong Kong, Hong Kong
- Peter T Fox
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China; Neuroimaging Laboratory, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; Research Imaging Institute, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States; South Texas Veterans Health Care System, San Antonio, TX, United States
- Ping Li
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China; Department of Psychology, and Center for Brain, Behavior, and Cognition, Pennsylvania State University, University Park, PA, United States
- Li-Hai Tan
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China; Neuroimaging Laboratory, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; Guangdong Key Laboratory of Biomedical Information Detection and Ultrasound Imaging, Shenzhen, China
9
Rogers JC, Davis MH. Inferior Frontal Cortex Contributions to the Recognition of Spoken Words and Their Constituent Speech Sounds. J Cogn Neurosci 2017;29:919-936. PMID: 28129061. DOI: 10.1162/jocn_a_01096.
Abstract
Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.
Affiliation(s)
- Jack C Rogers
- MRC Cognition & Brain Sciences Unit, Cambridge, UK; University of Birmingham
10
Myers EB, Mesite LM. Neural Systems Underlying Perceptual Adjustment to Non-Standard Speech Tokens. J Mem Lang 2014;76:80-93. PMID: 25092949. PMCID: PMC4118215. DOI: 10.1016/j.jml.2014.06.007.
Abstract
It has long been noted that listeners use top-down information from context to guide perception of speech sounds. A recent line of work employing a phenomenon termed 'perceptual learning for speech' shows that listeners use top-down information to not only resolve the identity of perceptually ambiguous speech sounds, but also to adjust perceptual boundaries in subsequent processing of speech from the same talker. Even so, the neural mechanisms that underlie this process are not well understood. Of particular interest is whether this type of adjustment comes about because of a retuning of sensitivities to phonetic category structure early in the neural processing stream or whether the boundary shift results from decision-related or attentional mechanisms further downstream. In the current study, neural activation was measured using fMRI as participants categorized speech sounds that were perceptually shifted as a result of exposure to these sounds in lexically-unambiguous contexts. Sensitivity to lexically-mediated shifts in phonetic categorization emerged in right hemisphere frontal and middle temporal regions, suggesting that the perceptual learning for speech phenomenon relies on the adjustment of perceptual criteria downstream from primary auditory cortex. By the end of the session, this same sensitivity was seen in left superior temporal areas, which suggests that a rapidly-adapting system may be accompanied by more slowly evolving shifts in regions of the brain related to phonetic processing.
Affiliation(s)
- Emily B. Myers
- University of Connecticut, Department of Speech, Language, and Hearing Sciences, 850 Bolton Road, Storrs, CT 06269
- University of Connecticut, Department of Psychology, 406 Babbidge Road, Storrs, CT 06269
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912
- Haskins Laboratories, 300 George Street #900, New Haven, CT 06511
- Corresponding Author: Emily Myers, Department of Speech, Language, and Hearing Sciences, University of Connecticut, 850 Bolton Road, Storrs, CT 06269, 860-486-2630
| | - Laura M. Mesite
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912
- Haskins Laboratories, 300 George Street #900, New Haven, CT 06511
11
Alpha and theta brain oscillations index dissociable processes in spoken word recognition. Neuroimage 2014;97:387-395. DOI: 10.1016/j.neuroimage.2014.04.005.