1
Gavino MF, Goldrick M. The perception of code-switched speech in noise. JASA Express Lett 2024; 4:035204. PMID: 38501961. DOI: 10.1121/10.0025375.
Abstract
This study investigates heritage bilingual speakers' perception of naturalistic code-switched sentences (i.e., use of both languages in one sentence). Studies of single word perception suggest that code-switching is more difficult to perceive than single language speech. However, such difficulties may not extend to more naturalistic sentences, where predictability and other cues may serve to ameliorate such difficulties. Fifty-four Mexican-American Spanish heritage bilinguals transcribed sentences in noise in English, Spanish, and code-switched blocks. Participants were better at perceiving speech in single language blocks than code-switched blocks. The results indicate that increased language co-activation when perceiving code-switching results in significant processing costs.
Affiliation(s)
- Maria Fernanda Gavino
- Department of Psychiatry, University of California, San Diego, La Jolla, California 92093, USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
2
Kim SE, Chernyak BR, Seleznova O, Keshet J, Goldrick M, Bradlow AR. Automatic recognition of second language speech-in-noise. JASA Express Lett 2024; 4:025204. PMID: 38350077. DOI: 10.1121/10.0024877.
Abstract
Measuring how well human listeners recognize speech under varying environmental conditions (speech intelligibility) is a challenge for theoretical, technological, and clinical approaches to speech communication. The current gold standard, human transcription, is time- and resource-intensive. Recent advances in automatic speech recognition (ASR) systems raise the possibility of automating intelligibility measurement. This study tested four state-of-the-art ASR systems with second language speech-in-noise and found that one, Whisper, performed at or above human listener accuracy. However, the content of Whisper's responses diverged substantially from human responses, especially at lower signal-to-noise ratios, suggesting both opportunities and limitations for ASR-based speech intelligibility modeling.
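Comparisons of ASR output against human transcription of the kind described here are conventionally scored by word error rate (WER): the word-level edit distance between hypothesis and reference, normalized by reference length. A minimal sketch of such scoring (the function and its normalization choices are illustrative, not the study's actual pipeline):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # d[i][j]: edit distance between the first i reference and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution / match
                d[i - 1][j] + 1,                               # deletion
                d[i][j - 1] + 1,                               # insertion
            )
    return d[-1][-1] / len(ref)
```

For example, `wer("the cat sat on the mat", "the cat sat on a mat")` scores one substitution over six reference words, about 0.167.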
Affiliation(s)
- Seung-Eun Kim
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Bronya R Chernyak
- Faculty of Electrical & Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
- Olga Seleznova
- Faculty of Electrical & Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
- Joseph Keshet
- Faculty of Electrical & Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Ann R Bradlow
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
3
Aly M, Colunga E, Crockett MJ, Goldrick M, Gomez P, Kung FYH, McKee PC, Pérez M, Stilwell SM, Diekman AB. Changing the culture of peer review for a more inclusive and equitable psychological science. J Exp Psychol Gen 2023; 152:3546-3565. PMID: 37676130. DOI: 10.1037/xge0001461.
Abstract
Peer review is a core component of scientific practice. Although peer review ideally improves research and promotes rigor, it also has consequences for what types of research are published and cited and who wants to (and is able to) advance in research-focused careers. Despite these consequences, few reviewers or editors receive training or oversight to ensure their feedback is helpful, professional, and culturally sensitive. Here, we critically examine the peer-review system in psychology and neuroscience at multiple levels, from ideas to institutions, interactions, and individuals. We highlight initiatives that aim to change the normative negativity of peer review and provide authors with constructive, actionable feedback that is sensitive to diverse identities, methods, topics, and environments. We conclude with a call to action for how individuals, groups, and organizations can improve the culture of peer review. We provide examples of how changes in the peer-review system can be made with an eye to diversity (increasing the range of identities and experiences constituting the field), equity (fair processes and outcomes across groups), and inclusion (experiences that promote belonging across groups). These changes can improve scientists' experience of peer review, promote diverse perspectives and identities, and enhance the quality and impact of science.
Affiliation(s)
- Mariam Aly
- Department of Psychology, Columbia University
- Eliana Colunga
- Department of Psychology and Neuroscience, University of Colorado Boulder
- Pablo Gomez
- Department of Psychology, California State University San Bernardino
- Paul C McKee
- Department of Psychology and Neuroscience, Duke University
- Sarah M Stilwell
- Department of Health Behavior and Health Education, University of Michigan
- Amanda B Diekman
- Department of Psychological and Brain Sciences, Indiana University Bloomington
4
Hitczenko K, Segal Y, Keshet J, Goldrick M, Mittal VA. Speech characteristics yield important clues about motor function: Speech variability in individuals at clinical high-risk for psychosis. Schizophrenia (Heidelb) 2023; 9:60. PMID: 37717025. PMCID: PMC10505148. DOI: 10.1038/s41537-023-00382-9.
Abstract
BACKGROUND AND HYPOTHESIS: Motor abnormalities are predictive of psychosis onset in individuals at clinical high risk (CHR) for psychosis and are tied to its progression. We hypothesize that these motor abnormalities also disrupt speech production (a highly complex motor behavior) and predict that CHR individuals will produce more variable speech than healthy controls, and that this variability will relate to symptom severity, motor measures, and psychosis-risk calculator risk scores.
STUDY DESIGN: We measure variability in speech production (variability in consonants, vowels, speech rate, and pausing/timing) in N = 58 CHR participants and N = 67 healthy controls. Three different tasks are used to elicit speech: diadochokinetic speech (rapidly repeated syllables, e.g., papapa..., pataka...), read speech, and spontaneously generated speech.
STUDY RESULTS: Individuals in the CHR group produced more variable consonants and exhibited greater speech rate variability than healthy controls in two of the three speech tasks (diadochokinetic and read speech). While there were no significant correlations between speech measures and remotely obtained motor measures, symptom severity, or conversion risk scores, these comparisons may be underpowered (in part due to challenges of remote data collection during the COVID-19 pandemic).
CONCLUSION: This study provides a thorough and theory-driven first look at how speech production is affected in this at-risk population and speaks to both the promise and the challenges facing this approach moving forward.
Affiliation(s)
- Kasia Hitczenko
- Laboratoire de Sciences Cognitives et Psycholinguistique, Département d'Études Cognitives, ENS, EHESS, CNRS, PSL University, Paris, France.
- Yael Segal
- Faculty of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa, Israel
- Joseph Keshet
- Faculty of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa, Israel
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Cognitive Science Program, Northwestern University, Evanston, IL, USA
- Institute for Policy Research, Northwestern University, Evanston, IL, USA
- Vijay A Mittal
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Cognitive Science Program, Northwestern University, Evanston, IL, USA
- Institute for Policy Research, Northwestern University, Evanston, IL, USA
- Department of Psychiatry, Northwestern University, Evanston, IL, USA
- Medical Social Sciences, Northwestern University, Chicago, IL, USA
- Institute for Innovations in Developmental Sciences, Evanston/Chicago, IL, USA
5
Abstract
Theories of speech production have proposed that in contexts where multiple languages are produced, bilinguals inhibit the dominant language with the goal of making both languages equally accessible. This process often overshoots that goal, leading to a surprising pattern: better performance in the nondominant than the dominant language, or reversed language dominance effects. However, the reliability of this effect in single word production studies with cued language switches has been challenged by a recent meta-analysis. Correcting for errors in this analysis, we find that dominance effects are reliably reduced and reversed during language mixing. Reversed dominance has also consistently been reported in the production of connected speech elicited by reading mixed-language paragraphs aloud. When switching, bilinguals produced translation-equivalent intrusion errors (e.g., saying pero instead of but) more often when intending to produce words in the dominant language. We show this dominant language vulnerability is not exclusive to switching out of the nondominant language and extends to non-switch words, linking connected speech results to patterns first reported in single word studies. Reversed language dominance is a robust phenomenon that reflects the tip of the iceberg of inhibitory control of the dominant language in bilingual language production.
Affiliation(s)
- Tamar H. Gollan
- Department of Psychiatry, University of California, San Diego
6
Smolensky P, McCoy RT, Fernandez R, Goldrick M, Gao J. Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems. AI Mag 2022. DOI: 10.1002/aaai.12065.
Affiliation(s)
- Paul Smolensky
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, USA
- Microsoft Research, Redmond, Washington, USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, Illinois, USA
7
Goldrick M. An Impoverished Epistemology Holds Back Cognitive Science Research. Cogn Sci 2022; 46:e13199. PMID: 36070855. DOI: 10.1111/cogs.13199.
Affiliation(s)
- Matthew Goldrick
- Department of Linguistics, Northwestern University
- Department of Psychology, Northwestern University
- Cognitive Science Program, Northwestern University
8
Affiliation(s)
- Kasia Hitczenko
- Department of Linguistics, Northwestern University, Evanston, IL, USA (corresponding author)
- Henry R Cowan
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Institute for Innovations in Developmental Sciences, Northwestern University, Evanston/Chicago, IL, USA
- Vijay A Mittal
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Institute for Innovations in Developmental Sciences, Northwestern University, Evanston/Chicago, IL, USA
- Department of Psychiatry, Northwestern University, Chicago, IL, USA
- Institute for Policy Research, Northwestern University, Evanston, IL, USA
- Medical Social Sciences, Northwestern University, Chicago, IL, USA
9
Alderete J, Baese-Berk M, Leung K, Goldrick M. Cascading activation in phonological planning and articulation: Evidence from spontaneous speech errors. Cognition 2021; 210:104577. PMID: 33609911. PMCID: PMC8009837. DOI: 10.1016/j.cognition.2020.104577.
Abstract
Speaking involves both retrieving the sounds of a word (phonological planning) and realizing these selected sounds in fluid speech (articulation). Recent phonetic research on speech errors has argued that multiple candidate sounds in phonological planning can influence articulation because the pronunciation of mis-selected error sounds is slightly skewed towards unselected target sounds. Yet research to date has only examined these phonetic distortions in experimentally-elicited errors, leaving doubt as to whether they reflect tendencies in spontaneous speech. Here, we analyzed the pronunciation of speech errors of English-speaking adults in natural conversations relative to matched correct words by the same speakers, and found the conjectured phonetic distortions. Comparison of these data with a larger set of experimentally-elicited errors failed to reveal significant differences between the two types of errors. These findings provide ecologically-valid data supporting models that allow for information about multiple planning representations to simultaneously influence speech articulation.
10
Abstract
The language and speech of individuals with psychosis reflect their impairments in cognition and motor processes. These language disturbances can be used to identify individuals with and at high risk for psychosis, as well as to help track and predict symptom progression, allowing for early intervention and improved outcomes. However, current methods of language assessment (manual annotations and/or clinical rating scales) are time intensive, expensive, subject to bias, and difficult to administer on a wide scale, limiting this area from reaching its full potential. Computational methods that can automatically perform linguistic analysis have started to be applied to this problem and could drastically improve our ability to use linguistic information clinically. In this article, we first review how these automated, computational methods work and how they have been applied to the field of psychosis. We show that across domains, these methods have captured differences between individuals with psychosis and healthy controls and can classify individuals with high accuracy, demonstrating the promise of these methods. We then consider the obstacles that need to be overcome before these methods can play a significant role in the clinical process and provide suggestions for how the field should address them. In particular, while much of the work thus far has focused on demonstrating the successes of these methods, we argue that a better understanding of when and why these models fail will be crucial to ensuring these methods reach their potential in the field of psychosis.
Affiliation(s)
- Kasia Hitczenko
- Department of Linguistics, Northwestern University, Evanston, IL (corresponding author)
- Vijay A Mittal
- Department of Psychology, Northwestern University, Evanston, IL
- Department of Psychiatry, Northwestern University, Chicago, IL
- Institute for Policy Research, Northwestern University, Evanston, IL
- Medical Social Sciences, Northwestern University, Chicago, IL
- Institute for Innovations in Developmental Sciences, Northwestern University, Evanston and Chicago, IL
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL
- Institute for Innovations in Developmental Sciences, Northwestern University, Evanston and Chicago, IL
11
Goldrick M, Shrem Y, Kilbourn-Ceron O, Baus C, Keshet J. Using automated acoustic analysis to explore the link between planning and articulation in second language speech production. Lang Cogn Neurosci 2020; 36:824-839. PMID: 34485588. PMCID: PMC8411898. DOI: 10.1080/23273798.2020.1805118.
Abstract
Speakers learning a second language show systematic differences from native speakers in the retrieval, planning, and articulation of speech. A key challenge in examining the interrelationship between these differences at various stages of production is the need for manual annotation of fine-grained properties of speech. We introduce a new method for automatically analyzing voice onset time (VOT), a key phonetic feature indexing differences in sound systems cross-linguistically. In contrast to previous approaches, our method allows reliable measurement of prevoicing, a dimension of VOT variation used by many languages. Analysis of VOTs, word durations, and reaction times from German-speaking learners of Spanish (Baus et al., 2013) suggest that while there are links between the factors impacting planning and articulation, these two processes also exhibit some degree of independence. We discuss the implications of these findings for theories of speech production and future research in bilingual language processing.
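The measurement convention underlying this work is that VOT is a signed lag between the stop burst and the onset of voicing, with prevoicing expressed as a negative value. A purely illustrative helper (not the authors' detection algorithm, which locates these events automatically in the waveform):

```python
def voice_onset_time(burst_onset_s, voicing_onset_s):
    """Signed VOT in milliseconds.

    Negative values indicate prevoicing (voicing begins before the burst,
    as in many Spanish voiced stops); positive values indicate a voicing lag.
    """
    return (voicing_onset_s - burst_onset_s) * 1000.0
```

For example, a burst at 0.100 s with voicing starting at 0.060 s yields a VOT of about -40 ms, i.e., 40 ms of prevoicing.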
12
Sichlinger L, Cibelli E, Goldrick M, Mittal VA. Clinical correlates of aberrant conversational turn-taking in youth at clinical high-risk for psychosis. Schizophr Res 2019; 204:419-420. PMID: 30172593. PMCID: PMC6395525. DOI: 10.1016/j.schres.2018.08.009.
Affiliation(s)
- Laura Sichlinger
- Northwestern University, Department of Psychology, Evanston, IL, USA; Northwestern University, Department of Linguistics, Evanston, IL, USA; King's College London, Institute of Psychiatry, Psychology and Neuroscience, London, UK.
- Emily Cibelli
- Northwestern University, Department of Linguistics, Evanston, IL, USA
- Matthew Goldrick
- Northwestern University, Department of Linguistics, Evanston, IL, USA
- Vijay A. Mittal
- Northwestern University, Department of Psychology, Evanston, IL, USA
- Northwestern University, Department of Psychiatry, Chicago, IL, USA
- Northwestern University, Institute for Policy Research, Evanston, IL, USA
- Northwestern University, Medical Social Sciences, Chicago, IL, USA
- Institute for Innovations in Developmental Sciences, Evanston/Chicago, IL, USA
13
Abstract
The current study investigated how aging affects production and self-correction of errors in connected speech elicited via a read aloud task. Thirty-five cognitively healthy older and 56 younger participants read aloud 6 paragraphs in each of three conditions increasing in difficulty: (a) normal, (b) nouns-swapped (in which nouns were shuffled across pairs of sentences in each paragraph), and (c) exchange (in which adjacent words in every two sentences were reversed in order). Reading times and errors increased with task difficulty, but self-correction rates were lowest in the nouns-swapped condition. Older participants read aloud more slowly, and after controlling for aging-related advantages in vocabulary knowledge, produced more speech errors (especially in the normal condition), and self-corrected errors less often than younger participants. Exploratory analysis of error types revealed that aging increased the rate of function word substitution errors (saying the instead of a), whereas younger participants omitted content words more often than did older participants. This pattern of aging deficits reveals powerful effects of vocabulary knowledge on speech production and suggests aging speakers can compensate for aging-related decline in control over speech production with their higher vocabulary knowledge and careful attention to speech planning in more difficult speaking conditions. These results suggest a model of speech production in which planning of speech is relatively automatic, whereas monitoring and self-correction are more attention-demanding, in turn leaving speech production relatively intact in aging.
14
Goldrick M, McClain R, Cibelli E, Adi Y, Gustafson E, Moers C, Keshet J. The influence of lexical selection disruptions on articulation. J Exp Psychol Learn Mem Cogn 2018; 45:1107-1141. PMID: 30024252. DOI: 10.1037/xlm0000633.
Abstract
Interactive models of language production predict that it should be possible to observe long-distance interactions: effects that arise at one level of processing influence multiple subsequent stages of representation and processing. We examine the hypothesis that disruptions arising in non-form-based levels of planning (specifically, lexical selection) should modulate articulatory processing. A novel automatic phonetic analysis method was used to examine productions in a paradigm yielding both general disruptions to formulation processes and, more specifically, overt errors during lexical selection. This analysis method allowed us to examine articulatory disruptions at multiple levels of analysis, from whole words to individual segments. Baseline performance by young adults was contrasted with young speakers' performance under time pressure (which previous work has argued increases interaction between planning and articulation) and performance by older adults (who may have difficulties inhibiting nontarget representations, leading to heightened interactive effects). The results revealed the presence of interactive effects. Our new analysis techniques revealed these effects were strongest in initial portions of responses, suggesting that speech is initiated as soon as the first segment has been planned. Interactive effects did not increase under response pressure, suggesting interaction between planning and articulation is relatively fixed. Unexpectedly, lexical selection disruptions appeared to yield some degree of facilitation in articulatory processing (possibly reflecting semantic facilitation of target retrieval), and older adults showed weaker, not stronger interactive effects (possibly reflecting weakened connections between lexical and form-level representations).
15
Abstract
Romani, Galuzzi, Guariglia, and Goslin (Comparing phoneme frequency, age of acquisition and loss in aphasia: Implications for phonological universals. Cognitive Neuropsychology) used speech error data from individuals with acquired impairments to argue that independent from articulatory complexity, within-language distributional regularities influence the processing of sound structure in speech production. Converging evidence from unimpaired speakers is reviewed, focusing on speech errors in language production. Future research should examine how articulatory and frequency factors are integrated in language processing.
Affiliation(s)
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL, USA
16
Abstract
Speakers track the probability that a word will occur in a particular context and utilize this information during phonetic processing. For example, content words that have high probability within a discourse tend to be realized with reduced acoustic/articulatory properties. Such probabilistic information may influence L1 and L2 speech processing in distinct ways (reflecting differences in linguistic experience across groups and the overall difficulty of L2 speech processing). To examine this issue, L1 and L2 speakers performed a referential communication task, describing sequences of simple actions. The two groups of speakers showed similar effects of discourse-dependent probabilistic information on production, suggesting that L2 speakers can successfully track discourse-dependent probabilities and use such information to modulate phonetic processing.
Affiliation(s)
- Erin Gustafson
- Department of Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208, USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208, USA
17
Adi Y, Keshet J, Cibelli E, Goldrick M. Sequence segmentation using joint RNN and structured prediction models. Proc IEEE Int Conf Acoust Speech Signal Process 2017; 2017:2422-2426. PMID: 29033692. DOI: 10.1109/icassp.2017.7952591.
Abstract
We describe and analyze a simple and effective algorithm for sequence segmentation applied to speech processing tasks. We propose a neural architecture that is composed of two modules trained jointly: a recurrent neural network (RNN) module and a structured prediction model. The RNN outputs are considered as feature functions to the structured model. The overall model is trained with a structured loss function which can be designed to the given segmentation task. We demonstrate the effectiveness of our method by applying it to two simple tasks commonly used in phonetic studies: word segmentation and voice onset time segmentation. Results suggest the proposed model is superior to previous methods, obtaining state-of-the-art results on the tested datasets.
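The search component of such a model can be illustrated with a small dynamic program: given per-frame boundary scores (in the full model these would come from the RNN feature functions; here they are a hand-written list), it returns the highest-scoring set of ordered boundary frames. This is a simplified sketch of the structured decoding step, not the authors' implementation:

```python
def best_boundaries(scores, k):
    """Choose k strictly increasing frame indices maximizing the summed score.

    scores: per-frame boundary scores (a stand-in for RNN outputs).
    Returns the list of chosen frame indices.
    """
    T = len(scores)
    NEG = float("-inf")
    # dp[j][t]: best total score placing j boundaries within the first t frames
    dp = [[NEG] * (T + 1) for _ in range(k + 1)]
    dp[0] = [0.0] * (T + 1)
    take = [[False] * (T + 1) for _ in range(k + 1)]
    for j in range(1, k + 1):
        for t in range(1, T + 1):
            skip_score = dp[j][t - 1]                     # no boundary at frame t-1
            take_score = dp[j - 1][t - 1] + scores[t - 1]  # boundary j at frame t-1
            if take_score >= skip_score:
                dp[j][t] = take_score
                take[j][t] = True
            else:
                dp[j][t] = skip_score
    # Backtrace the chosen boundary frames
    bounds, t, j = [], T, k
    while j > 0:
        if take[j][t]:
            bounds.append(t - 1)
            j -= 1
        t -= 1
    return bounds[::-1]
```

For the voice onset time task described above, k = 2 would correspond to the two segment boundaries (e.g., burst and voicing onsets); the actual model scores whole segmentations jointly with a task-specific structured loss rather than summing independent frame scores.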
Affiliation(s)
- Yossi Adi
- Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
- Joseph Keshet
- Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
- Emily Cibelli
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL, USA
18
Abstract
Phonotactics (constraints on the position and combination of speech sounds within syllables) are subject to statistical differences that gradiently affect speaker and listener behavior (e.g., Vitevitch & Luce, 1999). What statistical properties drive the acquisition of such constraints? Because they are naturally highly correlated, previous work has been unable to dissociate the contribution of two properties: contextual variability (the number of unique phonological contexts in which a phonotactic pattern appears) and exemplar strength (the overall number of times the pattern appears). Using an artificial language learning paradigm, three experiments disentangled the effects of variability and strength, indexed by type and token frequency, respectively, on the learning of gradient phonotactics. When the two factors were decorrelated (Experiment 2), participants showed greater generalization of patterns advantaged for contextual variability, but not those advantaged for exemplar strength. When the two factors were anticorrelated (Experiment 3), participants preferred patterns advantaged in contextual variability, even though they were disadvantaged for exemplar strength. These results suggest that contextual variability is the key force driving phonotactic learning, as it allows learners to home in on the invariant features of the input.
Affiliation(s)
- Thomas Denby
- Department of Linguistics, Northwestern University
- Sean Arn
- Department of Linguistics, Northwestern University
19
Abstract
The current study investigated the possibility that language switches could be relatively automatically triggered by context. Single-word switches, in which bilinguals switched languages on a single word in midsentence and then immediately switched back, were contrasted with more complete whole-language switches, in which bilinguals completed a full phrase (or more) in the switched-to language before switching back. Speech production was elicited by asking Spanish-English bilinguals to read aloud mixed-language paragraphs that manipulated switch type (single word, whole language), part of speech (switches on function or content words), and default language (dominant language English or nondominant Spanish). Switching difficulty was measured by production of translation-equivalent language intrusion errors (e.g., mistakenly saying pero instead of but). Controlling for word length (more errors on short vs. long words), intrusions were produced most often with function word targets in the single-word switch condition, and whole-language switches reduced production of intrusion errors for function but not content word targets. Speakers were also more likely to produce intrusions when intending to produce words in the dominant language, a reversed dominance effect. Finally, switches out of the default language elicited many errors, but switches back into the default language rarely elicited errors. The context-sensitivity of switching difficulty, particularly for function words, implies that some language switches are triggered automatically by control processes involving selection of a default language at a syntactic level. At a later processing stage, an independent form-level monitoring process prevents production of some planned intrusion errors before they are produced overtly.
Affiliation(s)
- Tamar H Gollan
- Department of Psychiatry, University of California, San Diego
20
Fink A, Oppenheim GM, Goldrick M. Interactions between Lexical Access and Articulation. Lang Cogn Neurosci 2017; 33:12-24. [PMID: 29399594 PMCID: PMC5793891 DOI: 10.1080/23273798.2017.1348529] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Received: 06/23/2016] [Accepted: 06/23/2017] [Indexed: 05/31/2023]
Abstract
This study investigates the interaction of lexical access and articulation in spoken word production, examining two dimensions along which theories vary. First, does articulatory variation reflect a fixed plan, or do lexical access-articulatory interactions continue after response initiation? Second, to what extent are interactive mechanisms hard-wired properties of the production system, as opposed to flexible? In two picture-naming experiments, we used semantic neighbor manipulations to induce lexical and conceptual co-activation. Our results provide evidence for multiple sources of interaction, both before and after response initiation. While interactive effects can vary across participants, we do not find strong evidence of variation of effects within individuals, suggesting that these interactions are relatively fixed features of each individual's production system.
Affiliation(s)
- Angela Fink
- Department of Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208
- Gary M Oppenheim
- Bangor University, School of Psychology, Adeilad Brigantia, Penrallt Road, Bangor, Gwynedd LL57 2AS, UK
- Rice University, Department of Psychology, Houston, TX 77251
- University of California San Diego, Center for Research in Language, 9500 Gilman Dr, La Jolla, CA 92037
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208
21
Abstract
Though bilinguals know many more words than monolinguals, within each language bilinguals exhibit some processing disadvantages, extending to sublexical processes specifying the sound structure of words (Gollan & Goldrick, Cognition, 125(3), 491-497, 2012). This study investigated the source of this bilingual disadvantage. Spanish-English bilinguals, Mandarin-English bilinguals, and English monolinguals repeated tongue twisters composed of English nonwords. Twister materials were made up of sound sequences that are unique to the English language (nonoverlapping) or sound sequences that are highly similar-yet phonetically distinct-in the two languages for the bilingual groups (overlapping). If bilingual disadvantages in tongue-twister production result from competition between phonetic representations in their two languages, bilinguals should have more difficulty selecting an intended target when similar sounds are activated in the overlapping sound sequences. Alternatively, if bilingual disadvantages reflect the relatively reduced frequency of use of sound sequences, bilinguals should have greater difficulty in the nonoverlapping condition (as the elements of such sound sequences are limited to a single language). Consistent with the frequency-lag account, but not the competition account, both Spanish-English and Mandarin-English bilinguals were disadvantaged in tongue-twister production only when producing twisters with nonoverlapping sound sequences. Thus, the bilingual disadvantage in tongue-twister production likely reflects reduced frequency of use of sound sequences specific to each language.
Affiliation(s)
- Chuchu Li
- Department of Psychiatry, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA, 92093-0948, USA.
- Tamar H Gollan
- Department of Psychiatry, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA, 92093-0948, USA
22
Brehm L, Goldrick M. Distinguishing discrete and gradient category structure in language: Insights from verb-particle constructions. J Exp Psychol Learn Mem Cogn 2017; 43:1537-1556. [PMID: 28287766 DOI: 10.1037/xlm0000390] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Indexed: 11/08/2022]
Abstract
The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing.
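The piecewise-regression logic behind the cline vs. discrete-class comparison can be sketched in a few lines: grid-search a single breakpoint and fit a separate least-squares line to each side. This is an illustrative reconstruction, not the authors' analysis code; the breakpoint search and the toy data are assumptions.

```python
def ols(xs, ys):
    """Closed-form simple linear regression; returns (intercept, slope, SSE)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    intercept = my - slope * mx
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    return intercept, slope, sse

def fit_breakpoint(xs, ys):
    """Grid-search one breakpoint for a two-segment regression (xs sorted).

    Returns (total SSE, breakpoint x, (a1, b1), (a2, b2)). A cline predicts
    two nonzero slopes meeting near the breakpoint; discrete classes would
    instead predict a step between near-flat segments.
    """
    best = None
    for k in range(3, len(xs) - 2):  # keep at least 3 points per segment
        a1, b1, s1 = ols(xs[:k], ys[:k])
        a2, b2, s2 = ols(xs[k:], ys[k:])
        if best is None or s1 + s2 < best[0]:
            best = (s1 + s2, xs[k], (a1, b1), (a2, b2))
    return best
```

On noiseless toy data with a slope change at x = 10, the search recovers that breakpoint exactly.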
Affiliation(s)
- Laurel Brehm
- Department of Linguistics, Northwestern University
23
Adi Y, Keshet J, Cibelli E, Gustafson E, Clopper C, Goldrick M. Automatic measurement of vowel duration via structured prediction. J Acoust Soc Am 2016; 140:4517. [PMID: 28040034 PMCID: PMC5392101 DOI: 10.1121/1.4972527] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Received: 03/06/2016] [Revised: 10/04/2016] [Accepted: 12/06/2016] [Indexed: 06/06/2023]
Abstract
A key barrier to making phonetic studies scalable and replicable is the need to rely on subjective, manual annotation. To help meet this challenge, a machine learning algorithm was developed for automatic measurement of a widely used phonetic measure: vowel duration. Manually annotated data were used to train a model that takes as input an arbitrary-length segment of the acoustic signal containing a single vowel that is preceded and followed by consonants and outputs the duration of the vowel. The model is based on the structured prediction framework. The input signal and a hypothesized vowel onset and offset are mapped to an abstract vector space by a set of acoustic feature functions. The learning algorithm is trained in this space to minimize the difference in expectations between predicted and manually measured vowel durations. The trained model can then automatically estimate vowel durations without phonetic or orthographic transcription. Results comparing the model to three sets of manually annotated data suggest it outperformed the current gold standard for duration measurement, a hidden Markov model-based forced aligner (which requires orthographic or phonetic transcription as input).
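The structured-prediction search described above can be caricatured as follows: score every candidate (onset, offset) pair with a weighted sum of feature functions and return the best-scoring pair. The single energy-contrast feature, the weights, and the toy signal below are invented for illustration; they are not the paper's learned model or acoustic features.

```python
def predict_vowel_span(signal, feature_fns, weights):
    """Exhaustively score all (onset, offset) candidates with a linear model
    over feature functions; return (onset, offset, duration in frames)."""
    best = None
    for onset in range(len(signal)):
        for offset in range(onset + 1, len(signal) + 1):
            score = sum(w * f(signal, onset, offset)
                        for w, f in zip(weights, feature_fns))
            if best is None or score > best[0]:
                best = (score, onset, offset)
    _, onset, offset = best
    return onset, offset, offset - onset

def energy_contrast(signal, onset, offset):
    """Toy feature: mean 'energy' inside the span minus mean energy outside."""
    inside = signal[onset:offset]
    outside = signal[:onset] + signal[offset:]
    mean_out = sum(outside) / len(outside) if outside else 0.0
    return sum(inside) / len(inside) - mean_out
```

On a toy signal whose high-energy frames are 2 through 4, the search recovers the span (2, 5), i.e. a 3-frame vowel.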
Affiliation(s)
- Yossi Adi
- Department of Computer Science, Bar-Ilan University, Ramat-Gan, 52900, Israel
- Joseph Keshet
- Department of Computer Science, Bar-Ilan University, Ramat-Gan, 52900, Israel
- Emily Cibelli
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Erin Gustafson
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Cynthia Clopper
- Department of Linguistics, Ohio State University, Columbus, Ohio 43210, USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
24
Houlden A, Goldrick M, Brough D, Vizi E, Lénárt N, Martinecz B, Roberts I, Denes A. Brain injury induces specific changes in the caecal microbiota of mice via altered autonomic activity and mucoprotein production. Brain Behav Immun 2016; 57:10-20. [PMID: 27060191 PMCID: PMC5021180 DOI: 10.1016/j.bbi.2016.04.003] [Citation(s) in RCA: 228] [Impact Index Per Article: 28.5] [Received: 01/13/2016] [Revised: 03/09/2016] [Accepted: 04/05/2016] [Indexed: 12/28/2022]
Abstract
Intestinal microbiota are critical for health, with changes associated with diverse human diseases. Research suggests that altered intestinal microbiota can profoundly affect brain function. However, whether altering brain function directly affects the microbiota is unknown. Since it is currently unclear how brain injury induces clinical complications such as infections or paralytic ileus, key contributors to prolonged hospitalization and death post-stroke, we tested in mice the hypothesis that brain damage induces changes in the intestinal microbiota. Experimental stroke altered the composition of caecal microbiota, with specific changes in Peptococcaceae and Prevotellaceae correlating with the extent of injury. These effects are mediated by noradrenaline release from the autonomic nervous system with altered caecal mucoprotein production and goblet cell numbers. Traumatic brain injury also caused changes in the gut microbiota, confirming that brain injury affects the gut microbiota. Changes in intestinal microbiota after brain injury may affect recovery, and the treatment of patients should take such changes into account.
Affiliation(s)
- A. Houlden
- Faculty of Life Sciences, University of Manchester, Manchester, UK
- M. Goldrick
- Faculty of Life Sciences, University of Manchester, Manchester, UK
- D. Brough
- Faculty of Life Sciences, University of Manchester, Manchester, UK
- E.S. Vizi
- Laboratory of Drug Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, P.O.B. 67, H-1450 Budapest, Hungary; Department of Pharmacology and Pharmacotherapy, Semmelweis University, Budapest, Hungary
- N. Lénárt
- Laboratory of Neuroimmunology, Institute of Experimental Medicine, Budapest, Hungary
- B. Martinecz
- Laboratory of Neuroimmunology, Institute of Experimental Medicine, Budapest, Hungary
- I.S. Roberts
- Faculty of Life Sciences, University of Manchester, Manchester, UK (corresponding author)
- A. Denes
- Faculty of Life Sciences, University of Manchester, Manchester, UK; Laboratory of Neuroimmunology, Institute of Experimental Medicine, Budapest, Hungary (corresponding author)
25
Abstract
The current study investigated the roles of grammaticality and executive control in bilingual language selection by examining production speed and failures of language control, or intrusion errors (e.g., saying el instead of the), in young and aging bilinguals. Production of mixed-language connected speech was elicited by asking Spanish-English bilinguals to read aloud paragraphs that had mostly grammatical (conforming to naturally occurring constraints) or mostly ungrammatical (haphazard mixing) language switches, and a low or high switching rate. Mixed-language speech was slower and less accurate when the switching rate was high, but especially (for speed) or only (for intrusion errors) if switches were also ungrammatical. Lower executive function ability (measured with a variety of tasks in young bilinguals in Experiment 1 and aging bilinguals in Experiment 2) slowed production and increased intrusion rates in a generalized fashion, but with little or no interaction with grammaticality. Aging effects appeared to reflect reduced monitoring ability (evidenced by a lower rate of self-corrected intrusions). These results demonstrate robust effects of grammatical encoding on language selection, and imply that executive control influences bilingual language production only after sentence planning and lexical selection.
26
Mugler EM, Goldrick M, Rosenow JM, Tate MC, Slutzky MW. Decoding of articulatory gestures during word production using speech motor and premotor cortical activity. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:5339-42. [PMID: 26737497 DOI: 10.1109/embc.2015.7319597] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Indexed: 11/08/2022]
Abstract
Brain-machine interfaces that directly translate attempted speech from the speech motor areas could change the lives of people with complete paralysis. However, it remains uncertain exactly how speech production is encoded in cortex. Improving this understanding could greatly improve brain-machine interface design. Specifically, it is not clear to what extent the different levels of speech production (phonemes, or speech sounds, and articulatory gestures, which describe the movements of the articulator muscles) are represented in the motor cortex. Using electrocorticographic (ECoG) electrodes on the cortical surface, we recorded neural activity from speech motor and premotor areas during speech production. We decoded both gestures and phonemes using the neural signals. Overall classification accuracy was higher for gestures than phonemes. In particular, gestures were better represented in the primary sensorimotor cortices, while phonemes were better represented in more anterior areas.
27
Abstract
During language production planning, multiple candidate representations are implicitly activated prior to articulation. Lexical representations that are phonologically related to the target (phonological neighbors) are known to influence phonetic properties of the target word. However, the question of which dimensions of phonological similarity contribute to such lexical-phonetic effects remains unanswered. In the present study, we reanalyze phonetic data from a previous study, examining the contrasting predictions of different definitions of phonological similarity. Our results suggest that similarity at the level of position-specific phonological segments best predicts the influence of neighbor activation on phonetic properties of initial consonants.
Affiliation(s)
- Melinda Fricke
- Psychology Department, Center for Language Science, Pennsylvania State University, 112 Moore Building, University Park, PA 16802 USA
- Melissa M Baese-Berk
- Department of Linguistics, University of Oregon, 279 Straub Hall, 1290 University of Oregon, Eugene, OR 97403-1290 USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, IL 60208 USA
28
Abstract
When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German bilinguals, German-English bilinguals and English monolinguals listened for target words in spoken English sentences while their eye-movements were recorded. Bilinguals' eye-movements reflected weaker lexical access relative to monolinguals; furthermore, the effect of semantic constraint differed across first versus second language processing. Specifically, English-native bilinguals showed fewer overall looks to target items, regardless of sentence constraint; German-native bilinguals activated target items more slowly and maintained target activation over a longer period of time in the low-constraint condition compared with monolinguals. No eye movements to cross-linguistic competitors were observed, suggesting that these lexical access disadvantages were present during bilingual spoken sentence comprehension even in the absence of overt interlingual competition.
Affiliation(s)
- Anthony Shook
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA.
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Caroline Engstler
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
29
Abstract
Over the past several decades, an increasing number of empirical studies have documented the interaction of information across the traditional linguistic modules of phonetics, phonology, and the lexicon. For example, the frequency with which a word occurs influences the phonetic properties of its sounds; high-frequency words tend to be reduced relative to low-frequency words. Lexicalist Exemplar Models have been successful in accounting for this body of results through a single mechanism, exemplars: memory representations that integrate lexical, phonological, and phonetic information into a single structure. We review recent studies that suggest there are critical limitations to assuming that phonetic variation solely reflects the storage of word labels and sound structure in exemplars. Specifically, these studies show that factors related to the on-line retrieval and planning of lexical items also influence phonetic variation. The implications of these findings for exemplar models are discussed; the relationship of exemplar storage to the broader cognitive system is examined, as well as alternative theoretical frameworks incorporating gradience at all levels of linguistic representation.
Affiliation(s)
- Angela Fink
- Department of Linguistics, Northwestern University
30
Mugler EM, Goldrick M, Slutzky MW. Cortical encoding of phonemic context during word production. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:6790-3. [PMID: 25571555 DOI: 10.1109/embc.2014.6945187] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Indexed: 11/06/2022]
Abstract
Brain-computer interfaces that directly decode speech could restore communication to locked-in individuals. However, decoding speech from brain signals still faces many challenges. We investigated decoding of phonemes - the smallest separable parts of speech - from ECoG signals during word production. We expanded on previous efforts to identify specific phonemes by classifying phonemes according to their position within the word. We evaluated how the context of phonemes in words affects classification results using linear discriminant analysis. The decoding accuracy of our linear classifier indicated that the context of a phoneme can be determined from the cortical signal at levels significantly greater than chance. Further, we identified the spectrotemporal features that contributed most to successful decoding of phonemic classes. Finally, we discuss how this can augment speech decoding for neural interfaces.
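The paper reports decoding with linear discriminant analysis; as a rough stand-in, the sketch below uses a nearest-class-mean rule, which is equivalent to LDA under a shared spherical covariance and equal priors. The feature vectors and class labels are invented placeholders, not ECoG data, and comparing accuracy to the 1/n_classes chance level only mirrors the abstract's significance logic in spirit.

```python
def fit_class_means(X, y):
    """Per-class mean feature vectors (the 'model' of a nearest-mean decoder)."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def classify(x, means):
    """Assign x to the class whose mean is nearest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(means, key=lambda c: dist2(x, means[c]))

def accuracy(X, y, means):
    hits = sum(classify(x, means) == label for x, label in zip(X, y))
    return hits / len(y)
```

In practice the accuracy would be computed on held-out trials and compared against a chance baseline.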
31
Abstract
Vowel duration measurements are widely used in studies addressing specific issues in phonetics, but such work has thus far been hampered by a reliance on subjective, labor-intensive manual annotation. Our goal is to build an algorithm for automatic, accurate measurement of vowel duration, where the input to the algorithm is a speech segment containing one vowel preceded and followed by consonants (CVC). Our algorithm is based on a deep neural network trained at the frame level on manually annotated data from a phonetic study. Specifically, we try two deep-network architectures, a convolutional neural network (CNN) and a deep belief network (DBN), and compare their accuracy to that of an HMM-based forced aligner. Results suggest that the CNN outperforms the DBN, and that the CNN and the HMM-based forced aligner are comparable in accuracy, but neither yielded the same predictions as models fit to manually annotated data.
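Whatever network produces them, frame-level vowel posteriors still have to be reduced to a single duration. One simple decision rule, shown here as a sketch rather than the paper's actual decoder, takes the longest contiguous run of frames whose posterior exceeds 0.5; the frame step and threshold are assumptions.

```python
def duration_from_posteriors(posteriors, frame_step_s=0.01):
    """posteriors: per-frame P(vowel) from some frame classifier (hypothetical).
    Returns the duration, in seconds, of the longest run of frames above 0.5."""
    best = cur = 0
    for p in posteriors:
        cur = cur + 1 if p > 0.5 else 0
        best = max(best, cur)
    return best * frame_step_s
```

With a 10 ms frame step, three consecutive above-threshold frames yield a 30 ms vowel estimate.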
Affiliation(s)
- Yossi Adi
- Dept. of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
- Joseph Keshet
- Dept. of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
- Matthew Goldrick
- Dept. of Linguistics, Northwestern University, Evanston, IL, USA
32
Abstract
It is well known that multilingual speakers' nonnative productions are accented. Do these deviations from monolingual productions simply reflect the mislearning of nonnative sound categories, or can difficulties in processing speech sounds also contribute to a speaker's accent? Such difficulties are predicted by interactive theories of production, which propose that nontarget representations, partially activated during lexical access, influence phonetic processing. We examined this possibility using language switching, a task that is well known to disrupt multilingual speech production. We found that these disruptions extend to the articulation of individual speech sounds. When native Spanish speakers are required to unexpectedly switch the language of production between Spanish and English, their speech becomes more accented than when they do not switch languages (particularly for cognate targets). These findings suggest that accents reflect not only difficulty in acquiring second-language speech sounds but also the influence of representations partially activated during on-line speech processing.
33
Abstract
Research with speakers with acquired production difficulties has suggested phonetic processing is more difficult in tasks that require semantic processing. The current research examined whether similar effects are found in bilingual phonetic processing. English-French bilinguals' productions in picture naming (which requires semantic processing) were compared to those elicited by repetition (which does not require semantic processing). Picture naming elicited slower, more accented speech than repetition. These results provide additional support for theories integrating cognitive and phonetic processes in speech production and suggest that bilingual speech research must take cognitive factors into account when assessing the structure of non-native sound systems.
Affiliation(s)
- Erin Gustafson
- Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, Illinois 60208
- Caroline Engstler
- Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, Illinois 60208
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, Illinois 60208
34
Abstract
Lexical neighbors (words sharing phonological structure with a target word) have been shown to influence the expression of phonetic contrasts for vowels and initial voiceless consonants. Focusing on minimal pair neighbors (e.g., bud-but), this research extends this work by examining the production of voiced as well as voiceless stops in both initial and final syllable/word position. The results show that minimal pair neighbors can result in both enhancement and reduction of voicing contrasts (in initial vs. final position), and differentially affect voiced vs. voiceless consonants. These diverse effects of minimal pair neighbors serve to constrain interactive theories of language processing.
Affiliation(s)
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, Illinois 60208, USA.
35
Smolensky P, Goldrick M, Mathis D. Optimization and Quantization in Gradient Symbol Systems: A Framework for Integrating the Continuous and the Discrete in Cognition. Cogn Sci 2013; 38:1102-38. [DOI: 10.1111/cogs.12047] [Citation(s) in RCA: 55] [Impact Index Per Article: 5.0] [Received: 11/01/2010] [Revised: 05/09/2012] [Accepted: 05/24/2012] [Indexed: 11/30/2022]
Affiliation(s)
- Paul Smolensky
- Department of Cognitive Science, Johns Hopkins University
- Donald Mathis
- Department of Cognitive Science, Johns Hopkins University
36
37
Abstract
Most theories of spelling propose two major processes for translating between orthography and phonology: a lexical process for retrieving the spellings of familiar words and a sublexical process for assembling the spellings of unfamiliar letter strings based on knowledge of the systematic correspondences between phonemes and graphemes. We investigated how the lexical and sublexical processes function and interact in spelling by selectively interfering with the sublexical process in a dysgraphic individual. By comparing spelling performance under normal conditions and under conditions of sublexical disruption we were able to gain insight into the functioning and the unique contributions of the sublexical process. The results support the hypothesis that the sublexical process serves to strengthen a target word and provide it with a competitive advantage over orthographically and phonologically similar word neighbours that are in competition with the target for selection.
38
39
Goldrick M, Baker HR, Murphy A, Baese-Berk M. Interaction and representational integration: evidence from speech errors. Cognition 2011; 121:58-72. [PMID: 21669409 DOI: 10.1016/j.cognition.2011.05.006] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.7] [Received: 05/07/2010] [Revised: 04/21/2011] [Accepted: 05/19/2011] [Indexed: 10/18/2022]
Abstract
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.
Affiliation(s)
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208, USA.
40
Hayes KS, Bancroft AJ, Goldrick M, Portsmouth C, Roberts IS, Grencis RK. Exploitation of the intestinal microflora by the parasitic nematode Trichuris muris. Science 2010; 328:1391-4. [PMID: 20538949 DOI: 10.1126/science.1187703] [Citation(s) in RCA: 222] [Impact Index Per Article: 15.9] [Indexed: 01/08/2023]
Abstract
The inhabitants of the mammalian gut are not always relatively benign commensal bacteria but may also include larger and more parasitic organisms, such as worms and protozoa. At some level, all these organisms are capable of interacting with each other. We found that successful establishment of the chronically infecting parasitic nematode Trichuris muris in the large intestine of mice is dependent on microflora and coincident with modulation of the host immune response. By reducing the number of bacteria in the host animal, we significantly reduced the number of hatched T. muris eggs. Critical interactions between bacteria (microflora) and parasites (macrofauna) introduced a new dynamic to the intestinal niche, which has fundamental implications for our current concepts of intestinal homeostasis and regulation of immunity.
Affiliation(s)
- K S Hayes
- Faculty of Life Sciences, University of Manchester, Manchester M13 9PT, UK
41
Peramunage D, Blumstein SE, Myers EB, Goldrick M, Baese-Berk M. Phonological neighborhood effects in spoken word production: an fMRI study. J Cogn Neurosci 2010; 23:593-603. [PMID: 20350185 DOI: 10.1162/jocn.2010.21489] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.9] [Indexed: 11/04/2022]
Abstract
The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the stimulus set. Behavioral results showed longer voice-onset time for MP target words, replicating earlier behavioral results [Baese-Berk, M., & Goldrick, M. Mechanisms of interaction in speech production. Language and Cognitive Processes, 24, 527-554, 2009]. fMRI results revealed reduced activation for MP words compared to NMP words in a network including left posterior superior temporal gyrus, the supramarginal gyrus, inferior frontal gyrus, and precentral gyrus. These findings support cascade models of spoken word production and show that neural activation at the lexical level modulates activation in those brain regions involved in lexical selection, phonological planning, and, ultimately, motor plans for production. The facilitatory effects for words with MP neighbors suggest that competition effects reflect the overlap inherent in the phonological representation of the target word and its MP neighbor.
42
Abstract
Many theories of language production and perception assume that in the normal course of processing a word, additional non-target words (lexical neighbors) become active. The properties of these neighbors can provide insight into the structure of representations and processing mechanisms in the language processing system. To infer the properties of neighbors, we examined the non-semantic errors produced in both spoken and written word production by four individuals who suffered neurological injury. Using converging evidence from multiple language tasks, we first demonstrate that the errors originate in disruption to the processes involved in the retrieval of word form representations from long-term memory. The targets and errors produced were then examined for their similarity along a number of dimensions. A novel statistical simulation procedure was developed to determine the significance of the observed similarities between targets and errors relative to multiple chance baselines. The results reveal that in addition to position-specific form overlap (the only consistent claim of traditional definitions of neighborhood structure) the dimensions of lexical frequency, grammatical category, target length and initial segment independently contribute to the activation of non-target words in both spoken and written production. Additional analyses confirm the relevance of these dimensions for word production showing that, in both written and spoken modalities, the retrieval of a target word is facilitated by increasing neighborhood density, as defined by the results of the target-error analyses.
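The "novel statistical simulation procedure" is described only abstractly above. A generic Monte Carlo version of the idea, comparing observed target-error similarity against similarity under shuffled pairings, might look like the sketch below; the similarity function and materials are placeholders, not the authors' corpus or procedure.

```python
import random

def permutation_baseline(targets, errors, similarity, n_iter=2000, seed=1):
    """Monte Carlo p-value: how often does a random re-pairing of targets and
    errors match or exceed the observed mean target-error similarity?"""
    rng = random.Random(seed)
    observed = sum(similarity(t, e) for t, e in zip(targets, errors)) / len(targets)
    at_least = 0
    for _ in range(n_iter):
        shuffled = errors[:]
        rng.shuffle(shuffled)  # break the target-error pairing
        mean = sum(similarity(t, e) for t, e in zip(targets, shuffled)) / len(targets)
        if mean >= observed:
            at_least += 1
    # add-one smoothing keeps the p-value strictly positive
    return observed, (at_least + 1) / (n_iter + 1)
```

For instance, with similarity defined as sharing the initial segment, errors that all preserve their target's initial segment come out well above the shuffled baseline.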
Affiliation(s)
- Matthew Goldrick, Department of Cognitive Science, Johns Hopkins University; Department of Linguistics, Northwestern University
- Brenda Rapp, Department of Cognitive Science, Johns Hopkins University
43
Abstract
Many theories predict the presence of interactive effects involving information represented by distinct cognitive processes in speech production. There is considerably less agreement regarding the precise cognitive mechanisms that underlie these interactive effects. For example, are they driven by purely production-internal mechanisms (e.g., Dell, 1986), or do they reflect the influence of perceptual monitoring mechanisms on production processes (e.g., Roelofs, 2004)? Acoustic analyses reveal that the phonetic realization of words is influenced by their word-specific properties, supporting the presence of interaction between lexical-level and phonetic information in speech production. A second experiment examines which mechanisms are responsible for this interactive effect. The results suggest the effect occurs online and is not purely driven by listener modeling. These findings are consistent with the presence of an interactive mechanism that is online and internal to the production system.
44
Abstract
Phonological grammars characterize distinctions between relatively well-formed (unmarked) and relatively ill-formed (marked) phonological structures. We review evidence that markedness influences speech error probabilities. Specifically, although errors result in both unmarked as well as marked structures, there is a markedness asymmetry: errors are more likely to produce unmarked outcomes. We show that stochastic disruption to the computational mechanisms realizing a Harmonic Grammar (HG) can account for the broad empirical patterns of speech errors. We demonstrate that our proposal can account for the general markedness asymmetry. We also develop methods for linking particular HG proposals to speech error distributions, and illustrate these methods using a simple HG and a set of initial consonant errors in English.
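The link between a stochastically disrupted Harmonic Grammar and error probabilities can be made concrete with a small sketch: each constraint weight is perturbed by Gaussian noise on every production attempt, and the candidate with maximal harmony under the noisy weights is produced. The mini-grammar below (candidates, constraint names, and weights) is invented for illustration and is not the grammar analyzed in the article:

```python
import random

def noisy_hg_choice(candidates, weights, noise_sd, rng):
    """Return the candidate with maximal harmony after perturbing each
    constraint weight with Gaussian noise (stochastic Harmonic Grammar).
    candidates maps an output form to its constraint-violation counts."""
    noisy = [w + rng.gauss(0.0, noise_sd) for w in weights]
    def harmony(violations):
        return -sum(w * v for w, v in zip(noisy, violations))
    return max(candidates, key=lambda c: harmony(candidates[c]))

# Invented mini-grammar: *ComplexOnset (markedness) vs. Max (faithfulness).
candidates = {
    "ski": [1, 0],  # faithful output; violates *ComplexOnset (marked structure)
    "si":  [0, 1],  # deletes a segment; violates Max (unmarked outcome)
}
weights = [2.0, 3.0]  # faithfulness outweighs markedness, so "ski" usually wins

def error_rate(n=10_000, seed=1):
    """Proportion of trials on which noise flips the outcome to the
    unmarked error 'si'."""
    rng = random.Random(seed)
    return sum(noisy_hg_choice(candidates, weights, 2.0, rng) == "si"
               for _ in range(n)) / n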
45
Affiliation(s)
- Matthew Goldrick, Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, IL 60208, USA.
46
Goldrick M. Does like attract like? Exploring the relationship between errors and representational structure in connectionist networks. Cogn Neuropsychol 2008; 25:287-313. [DOI: 10.1080/02643290701417939] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Matthew Goldrick, Department of Linguistics, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL 60208, USA.
47
Goldrick M, Rapp B. Lexical and post-lexical phonological representations in spoken production. Cognition 2007; 102:219-60. [PMID: 16483561 DOI: 10.1016/j.cognition.2005.12.010] [Citation(s) in RCA: 129] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2005] [Revised: 12/13/2005] [Accepted: 12/22/2005] [Indexed: 12/01/2022]
Abstract
Theories of spoken word production generally assume a distinction between at least two types of phonological processes and representations: lexical phonological processes that recover relatively arbitrary aspects of word forms from long-term memory and post-lexical phonological processes that specify the predictable aspects of phonological representations. In this work we examine the spoken production of two brain-damaged individuals. We use their differential patterns of accuracy across the tasks of spoken naming and repetition to establish that they suffer from distinct deficits originating fairly selectively within lexical or post-lexical processes. Independent and detailed analyses of their spoken productions reveal contrasting patterns that provide clear support for a distinction between two types of phonological representations: those that lack syllabic and featural information and are sensitive to lexical factors such as lexical frequency and neighborhood density, and those that include syllabic and featural information and are sensitive to detailed properties of phonological structure such as phoneme frequency and syllabic constituency.
49
Goldrick M, Blumstein SE. Cascading activation from phonological planning to articulatory processes: Evidence from tongue twisters. Lang Cogn Process 2006. [DOI: 10.1080/01690960500181332] [Citation(s) in RCA: 65] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
50