1. Luthra S, Crinnion AM, Saltzman D, Magnuson JS. Do They Know It's Christmash? Lexical Knowledge Directly Impacts Speech Perception. Cogn Sci 2024; 48:e13449. PMID: 38773754. DOI: 10.1111/cogs.13449.
Abstract
We recently reported strong, replicable (i.e., replicated) evidence for lexically mediated compensation for coarticulation (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for interactive models of cognition that include top-down feedback and is inconsistent with autonomous models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.
Affiliation(s)
- Sahil Luthra: Department of Psychology, Carnegie Mellon University
- David Saltzman: Department of Psychological Sciences, University of Connecticut
- James S Magnuson: Department of Psychological Sciences, University of Connecticut; BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
2. Magnuson JS, You H, Hannagan T. Lexical Feedback in the Time-Invariant String Kernel (TISK) Model of Spoken Word Recognition. J Cogn 2024; 7:38. PMID: 38681820. PMCID: PMC11049678. DOI: 10.5334/joc.362. Open access.
Abstract
The Time-Invariant String Kernel (TISK) model of spoken word recognition (Hannagan, Magnuson & Grainger, 2013; You & Magnuson, 2018) is an interactive activation model with many similarities to TRACE (McClelland & Elman, 1986). However, by replacing most time-specific nodes in TRACE with time-invariant open-diphone nodes, TISK uses orders of magnitude fewer nodes and connections than TRACE. Although TISK performed remarkably similarly to TRACE in simulations reported by Hannagan et al., the original TISK implementation did not include lexical feedback, precluding simulation of top-down effects, and leaving open the possibility that adding feedback to TISK might fundamentally alter its performance. Here, we demonstrate that when lexical feedback is added to TISK, it gains the ability to simulate top-down effects without losing the ability to simulate the fundamental phenomena tested by Hannagan et al. Furthermore, with feedback, TISK demonstrates graceful degradation when noise is added to input, although parameters can be found that also promote (less) graceful degradation without feedback. We review arguments for and against feedback in cognitive architectures, and conclude that feedback provides a computationally efficient basis for robust constraint-based processing.
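The mechanism this abstract describes, lexical feedback combined with lateral inhibition boosting lexically coherent sublexical patterns over input noise, can be sketched generically. This is not the TISK or TRACE implementation; the two-word lexicon, unit names, and all parameter values below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sublexical units and a two-word lexicon (names invented for illustration).
units = ["k", "ae", "t", "d", "o", "g"]
words = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"]}
W = np.array([[1.0 if u in phons else 0.0 for u in units]
              for phons in words.values()])      # word <-> unit connections

# Noisy bottom-up input that weakly favors "cat".
x = np.clip(np.array([1, 1, 1, 0, 0, 0.0]) + rng.normal(0, 0.3, 6), 0, None)

act = x.copy()
for _ in range(20):                              # settle toward a fixed point
    word = np.maximum(W @ act, 0.0)              # feedforward to word layer
    word = np.maximum(word - 0.5 * (word.sum() - word), 0.0)  # lateral inhibition
    act = 0.5 * x + 0.2 * (W.T @ word)           # top-down lexical feedback

# Units of the lexically coherent word end up boosted relative to noise units.
boosted = act[:3].mean() > act[3:].mean()
```

With the feedback gain set to zero, the sublexical layer simply tracks the noisy input; with feedback on, units belonging to the lexically consistent word settle at higher activation than the noise-driven units, which is the sense in which feedback "sharpens signal over noise" in these architectures.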
Affiliation(s)
- James S. Magnuson: BCBL, Basque Center on Cognition, Brain & Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Heejo You: Department of Psychological Sciences and CT Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, USA
- Thomas Hannagan: Hyundai Motor Group Robotics LAB, Uiwang, South Korea; Stellantis Group, The Netherlands
3. Crinnion AM, Luthra S, Gaston P, Magnuson JS. Resolving competing predictions in speech: How qualitatively different cues and cue reliability contribute to phoneme identification. Atten Percept Psychophys 2024; 86:942-961. PMID: 38383914. DOI: 10.3758/s13414-024-02849-y.
Abstract
Listeners have many sources of information available in interpreting speech. Numerous theoretical frameworks and paradigms have established that various constraints impact the processing of speech sounds, but it remains unclear how listeners might simultaneously consider multiple cues, especially those that differ qualitatively (i.e., with respect to timing and/or modality) or quantitatively (i.e., with respect to cue reliability). Here, we establish that cross-modal identity priming can influence the interpretation of ambiguous phonemes (Exp. 1, N = 40) and show that two qualitatively distinct cues - namely, cross-modal identity priming and auditory co-articulatory context - have additive effects on phoneme identification (Exp. 2, N = 40). However, we find no effect of quantitative variation in a cue - specifically, changes in the reliability of the priming cue did not influence phoneme identification (Exp. 3a, N = 40; Exp. 3b, N = 40). Overall, we find that qualitatively distinct cues can additively influence phoneme identification. While many existing theoretical frameworks address constraint integration to some degree, our results provide a step towards understanding how information that differs in both timing and modality is integrated in online speech perception.
Affiliation(s)
- James S Magnuson: University of Connecticut, Storrs, CT, USA; BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
4. Magnuson JS, Crinnion AM, Luthra S, Gaston P, Grubb S. Contra assertions, feedback improves word recognition: How feedback and lateral inhibition sharpen signals over noise. Cognition 2024; 242:105661. PMID: 37944313. DOI: 10.1016/j.cognition.2023.105661.
Abstract
Whether top-down feedback modulates perception has deep implications for cognitive theories. Debate has been vigorous in the domain of spoken word recognition, where competing computational models and agreement on at least one diagnostic experimental paradigm suggest that the debate may eventually be resolvable. Norris and Cutler (2021) revisit arguments against lexical feedback in spoken word recognition models. They also incorrectly claim that recent computational demonstrations that feedback promotes accuracy and speed under noise (Magnuson et al., 2018) were due to the use of the Luce choice rule rather than adding noise to inputs (noise was in fact added directly to inputs). They also claim that feedback cannot improve word recognition because feedback cannot distinguish signal from noise. We have two goals in this paper. First, we correct the record about the simulations of Magnuson et al. (2018). Second, we explain how interactive activation models selectively sharpen signals via joint effects of feedback and lateral inhibition that boost lexically-coherent sublexical patterns over noise. We also review a growing body of behavioral and neural results consistent with feedback and inconsistent with autonomous (non-feedback) architectures, and conclude that parsimony supports feedback. We close by discussing the potential for synergy between autonomous and interactive approaches.
Affiliation(s)
- James S Magnuson: University of Connecticut, Storrs, CT, USA; BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
5. Luthra S, Mechtenberg H, Giorio C, Theodore RM, Magnuson JS, Myers EB. Using TMS to evaluate a causal role for right posterior temporal cortex in talker-specific phonetic processing. Brain Lang 2023; 240:105264. PMID: 37087863. PMCID: PMC10286152. DOI: 10.1016/j.bandl.2023.105264.
Abstract
Theories suggest that speech perception is informed by listeners' beliefs of what phonetic variation is typical of a talker. A previous fMRI study found right middle temporal gyrus (RMTG) sensitivity to whether a phonetic variant was typical of a talker, consistent with literature suggesting that the right hemisphere may play a key role in conditioning phonetic identity on talker information. The current work used transcranial magnetic stimulation (TMS) to test whether the RMTG plays a causal role in processing talker-specific phonetic variation. Listeners were exposed to talkers who differed in how they produced voiceless stop consonants while TMS was applied to RMTG, left MTG, or scalp vertex. Listeners subsequently showed near-ceiling performance in indicating which of two variants was typical of a trained talker, regardless of previous stimulation site. Thus, even though the RMTG is recruited for talker-specific phonetic processing, modulation of its function may have only modest consequences.
Affiliation(s)
- James S Magnuson: University of Connecticut, United States; BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
6. Brown KS, Yee E, Joergensen G, Troyer M, Saltzman E, Rueckl J, Magnuson JS, McRae K. Investigating the Extent to which Distributional Semantic Models Capture a Broad Range of Semantic Relations. Cogn Sci 2023; 47:e13291. PMID: 37183557. DOI: 10.1111/cogs.13291.
Abstract
Distributional semantic models (DSMs) are a primary method for distilling semantic information from corpora. However, a key question remains: What types of semantic relations among words do DSMs detect? Prior work typically has addressed this question using limited human data that are restricted to semantic similarity and/or general semantic relatedness. We tested eight DSMs that are popular in current cognitive and psycholinguistic research (positive pointwise mutual information; global vectors; and three variations each of Skip-gram and continuous bag of words (CBOW) using word, context, and mean embeddings) on a theoretically motivated, rich set of semantic relations involving words from multiple syntactic classes and spanning the abstract-concrete continuum (19 sets of ratings). We found that, overall, the DSMs are best at capturing overall semantic similarity and also can capture verb-noun thematic role relations and noun-noun event-based relations that play important roles in sentence comprehension. Interestingly, Skip-gram and CBOW performed the best in terms of capturing similarity, whereas GloVe dominated the thematic role and event-based relations. We discuss the theoretical and practical implications of our results, make recommendations for users of these models, and demonstrate significant differences in model performance on event-based relations.
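For readers unfamiliar with these model classes, the count-based end of the spectrum can be sketched compactly. The toy corpus below is invented, and this is a minimal positive-PMI-weighted co-occurrence model (one of the eight classes the abstract names), not the authors' actual pipeline.

```python
import numpy as np

# Tiny invented corpus; each sentence is one co-occurrence window.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the chef cooked the meal",
    "the cook prepared the meal",
]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within each sentence.
C = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    ws = s.split()
    for i, w in enumerate(ws):
        for v in ws[:i] + ws[i + 1:]:
            C[idx[w], idx[v]] += 1

# Positive pointwise mutual information weighting.
total = C.sum()
p_w = C.sum(1, keepdims=True) / total
p_c = C.sum(0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C / total) / (p_w * p_c))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0), 0)

def cosine(a, b):
    """Cosine similarity between two row vectors of the PPMI matrix."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_cat_dog = cosine(ppmi[idx["cat"]], ppmi[idx["dog"]])
sim_cat_chef = cosine(ppmi[idx["cat"]], ppmi[idx["chef"]])
```

Here `sim_cat_dog` exceeds `sim_cat_chef` because "cat" and "dog" share chase-event contexts while "cat" and "chef" share only function words; distinguishing such similarity and event-based relations at scale is what the paper evaluates across model classes.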
Affiliation(s)
- Kevin S Brown: Department of Pharmaceutical Sciences, Oregon State University; School of Chemical, Biological, and Environmental Engineering, Oregon State University
- Eiling Yee: Department of Psychological Sciences, University of Connecticut
- Jay Rueckl: Department of Psychological Sciences, University of Connecticut
- James S Magnuson: Department of Psychological Sciences, University of Connecticut; BCBL, Basque Center on Cognition, Brain, & Language; Ikerbasque, Basque Foundation for Science
- Ken McRae: Department of Psychology, University of Western Ontario
7. Luthra S, Magnuson JS, Myers EB. Right Posterior Temporal Cortex Supports Integration of Phonetic and Talker Information. Neurobiol Lang (Camb) 2023; 4:145-177. PMID: 37229142. PMCID: PMC10205075. DOI: 10.1162/nol_a_00091.
Abstract
Though the right hemisphere has been implicated in talker processing, it is thought to play a minimal role in phonetic processing, at least relative to the left hemisphere. Recent evidence suggests that the right posterior temporal cortex may support learning of phonetic variation associated with a specific talker. In the current study, listeners heard a male talker and a female talker, one of whom produced an ambiguous fricative in /s/-biased lexical contexts (e.g., epi?ode) and one who produced it in /∫/-biased contexts (e.g., friend?ip). Listeners in a behavioral experiment (Experiment 1) showed evidence of lexically guided perceptual learning, categorizing ambiguous fricatives in line with their previous experience. Listeners in an fMRI experiment (Experiment 2) showed differential phonetic categorization as a function of talker, allowing for an investigation of the neural basis of talker-specific phonetic processing, though they did not exhibit perceptual learning (likely due to characteristics of our in-scanner headphones). Searchlight analyses revealed that the patterns of activation in the right superior temporal sulcus (STS) contained information about who was talking and what phoneme they produced. We take this as evidence that talker information and phonetic information are integrated in the right STS. Functional connectivity analyses suggested that the process of conditioning phonetic identity on talker information depends on the coordinated activity of a left-lateralized phonetic processing system and a right-lateralized talker processing system. Overall, these results clarify the mechanisms through which the right hemisphere supports talker-specific phonetic processing.
Affiliation(s)
- Sahil Luthra: Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- James S. Magnuson: Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Basque Center on Cognition Brain and Language (BCBL), Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Emily B. Myers: Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
8. Saltzman D, Luthra S, Myers EB, Magnuson JS. Attention, task demands, and multitalker processing costs in speech perception. J Exp Psychol Hum Percept Perform 2021; 47:1673-1680. PMID: 34881952. DOI: 10.1037/xhp0000963.
Abstract
Determining how human listeners achieve phonetic constancy despite a variable mapping between the acoustics of speech and phonemic categories is the longest-standing challenge in speech perception. A clue comes from studies where the talker changes randomly between stimuli, which slows processing compared with a single-talker baseline. These multitalker processing costs have been observed most often in speeded monitoring paradigms, where participants respond whenever a specific item occurs. Notably, the conventional paradigm imposes attentional demands via two forms of varied mapping in mixed-talker conditions. First, target recycling (i.e., allowing items to serve as targets on some trials but as distractors on others) potentially prevents the development of task automaticity. Second, in mixed trials, participants must respond to two unique stimuli (i.e., one target produced by each talker), whereas in blocked conditions, they need only respond to one unique stimulus (i.e., multiple tokens of a single target). We seek to understand how attentional demands influence talker normalization, as measured by multitalker processing costs. Across four experiments, multitalker processing costs persisted when target recycling was not allowed but diminished when only one stimulus served as the target on mixed trials. We discuss the logic of using varied mapping to elicit attentional effects and implications for theories of speech perception. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
Affiliation(s)
- David Saltzman: Department of Psychological Sciences, University of Connecticut
- Sahil Luthra: Department of Psychological Sciences, University of Connecticut
- Emily B Myers: Department of Psychological Sciences, University of Connecticut
9. Luthra S, Saltzman D, Myers EB, Magnuson JS. Listener expectations and the perceptual accommodation of talker variability: A pre-registered replication. Atten Percept Psychophys 2021; 83:2367-2376. PMID: 33948883. PMCID: PMC8096357. DOI: 10.3758/s13414-021-02317-x.
Abstract
Researchers have hypothesized that in order to accommodate variability in how talkers produce their speech sounds, listeners must perform a process of talker normalization. Consistent with this proposal, several studies have shown that spoken word recognition is slowed when speech is produced by multiple talkers compared with when all speech is produced by one talker (a multitalker processing cost). Nusbaum and colleagues have argued that talker normalization is modulated by attention (e.g., Nusbaum & Morin, 1992, Speech Perception, Production and Linguistic Structure, pp. 113-134). Some of the strongest evidence for this claim is from a speeded monitoring study where a group of participants who expected to hear two talkers showed a multitalker processing cost, but a separate group who expected one talker did not (Magnuson & Nusbaum, 2007, Journal of Experimental Psychology, 33[2], 391-409). In that study, however, the sample size was small and the crucial interaction was not significant. In this registered report, we present the results of a well-powered attempt to replicate those findings. In contrast to the previous study, we did not observe multitalker processing costs in either of our groups. To rule out the possibility that the null result was due to task constraints, we conducted a second experiment using a speeded classification task. As in Experiment 1, we found no influence of expectations on talker normalization, with no multitalker processing cost observed in either group. Our data suggest that the previous findings of Magnuson and Nusbaum (2007) be regarded with skepticism and that talker normalization may not be permeable to high-level expectations.
Affiliation(s)
- Sahil Luthra: Department of Psychological Sciences, University of Connecticut, Storrs, CT, 06269-1020, USA
- David Saltzman: Department of Psychological Sciences, University of Connecticut, Storrs, CT, 06269-1020, USA
- Emily B Myers: Department of Psychological Sciences, University of Connecticut, Storrs, CT, 06269-1020, USA; Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs, CT, USA
- James S Magnuson: Department of Psychological Sciences, University of Connecticut, Storrs, CT, 06269-1020, USA; BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
10.

Abstract
A challenge for listeners is to learn the appropriate mapping between acoustics and phonetic categories for an individual talker. Lexically guided perceptual learning (LGPL) studies have shown that listeners can leverage lexical knowledge to guide this process. For instance, listeners learn to interpret ambiguous /s/-/∫/ blends as /s/ if they have previously encountered them in /s/-biased contexts like epi?ode. Here, we examined whether the degree of preceding lexical support might modulate the extent of perceptual learning. In Experiment 1, we first demonstrated that perceptual learning could be obtained in a modified LGPL paradigm where listeners were first biased to interpret ambiguous tokens as one phoneme (e.g., /s/) and then later as another (e.g., /∫/). In subsequent experiments, we tested whether the extent of learning differed depending on whether targets encountered predictive contexts or neutral contexts prior to the auditory target (e.g., epi?ode). Experiment 2 used auditory sentence contexts (e.g., "I love The Walking Dead and eagerly await every new . . ."), whereas Experiment 3 used written sentence contexts. In Experiment 4, participants did not receive sentence contexts but rather saw the written form of the target word (episode) or filler text (########) prior to hearing the critical auditory token. While we consistently observed effects of context on in-the-moment processing of critical words, the size of the learning effect was not modulated by the type of context. We hypothesize that boosting lexical support through preceding context may not strongly influence perceptual learning when ambiguous speech sounds can be identified solely from lexical information.
11. Luthra S, Peraza-Santiago G, Beeson K, Saltzman D, Crinnion AM, Magnuson JS. Robust Lexically Mediated Compensation for Coarticulation: Christmash Time Is Here Again. Cogn Sci 2021; 45:e12962. PMID: 33877697. PMCID: PMC8243960. DOI: 10.1111/cogs.12962.
Abstract
A long-standing question in cognitive science is how high-level knowledge is integrated with sensory input. For example, listeners can leverage lexical knowledge to interpret an ambiguous speech sound, but do such effects reflect direct top-down influences on perception or merely postperceptual biases? A critical test case in the domain of spoken word recognition is lexically mediated compensation for coarticulation (LCfC). Previous LCfC studies have shown that a lexically restored context phoneme (e.g., /s/ in Christma#) can alter the perceived place of articulation of a subsequent target phoneme (e.g., the initial phoneme of a stimulus from a tapes-capes continuum), consistent with the influence of an unambiguous context phoneme in the same position. Because this phoneme-to-phoneme compensation for coarticulation is considered sublexical, scientists agree that evidence for LCfC would constitute strong support for top-down interaction. However, results from previous LCfC studies have been inconsistent, and positive effects have often been small. Here, we conducted extensive piloting of stimuli prior to testing for LCfC. Specifically, we ensured that context items elicited robust phoneme restoration (e.g., that the final phoneme of Christma# was reliably identified as /s/) and that unambiguous context-final segments (e.g., a clear /s/ at the end of Christmas) drove reliable compensation for coarticulation for a subsequent target phoneme. We observed robust LCfC in a well-powered, preregistered experiment with these pretested items (N = 40) as well as in a direct replication study (N = 40). These results provide strong evidence in favor of computational models of spoken word recognition that include top-down feedback.
Affiliation(s)
- James S. Magnuson: Psychological Sciences, University of Connecticut; BCBL, Basque Center on Cognition Brain and Language; Ikerbasque, Basque Foundation for Science
12. Luthra S, You H, Rueckl JG, Magnuson JS. Friends in Low-Entropy Places: Orthographic Neighbor Effects on Visual Word Identification Differ Across Letter Positions. Cogn Sci 2020; 44:e12917. PMID: 33274485. PMCID: PMC8211392. DOI: 10.1111/cogs.12917.
Abstract
Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over-time sequence of phonemes as output. VOISeR produces a similar pattern of results as in the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, such work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.
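The two quantities this abstract leans on, enemy counts at each letter position and positional entropy, are easy to state concretely. The six-word lexicon below is invented for illustration; the paper's analyses used a full English lexicon.

```python
from collections import Counter
from math import log2

lexicon = ["cat", "can", "cap", "bat", "bit", "dog"]  # toy lexicon (invented)

def positional_entropy(words, pos):
    """Shannon entropy (bits) of the letter distribution at one position."""
    counts = Counter(w[pos] for w in words)
    n = len(words)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def enemies(word, words, pos):
    """Neighbors that mismatch `word` by a single substitution at `pos`."""
    return [w for w in words
            if len(w) == len(word) and w[pos] != word[pos]
            and all(w[i] == word[i] for i in range(len(word)) if i != pos)]

# "cat" has two final-position enemies ("can", "cap") and one at position 0 ("bat").
cat_enemies_final = enemies("cat", lexicon, 2)
h0 = positional_entropy(lexicon, 0)  # entropy over initial letters {c, c, c, b, b, d}
```

Relating naming latencies to enemy counts, position by position, while conditioning on each position's entropy is the form of analysis the abstract describes.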
Affiliation(s)
- Sahil Luthra: Department of Psychological Sciences, University of Connecticut; Connecticut Institute for the Brain and Cognitive Sciences
- Heejo You: Department of Psychological Sciences, University of Connecticut
- Jay G. Rueckl: Department of Psychological Sciences, University of Connecticut; Connecticut Institute for the Brain and Cognitive Sciences; Haskins Laboratories
- James S. Magnuson: Department of Psychological Sciences, University of Connecticut; Connecticut Institute for the Brain and Cognitive Sciences; Haskins Laboratories
13. Malins JG, Landi N, Ryherd K, Frijters JC, Magnuson JS, Rueckl JG, Pugh KR, Sevcik R, Morris R. Is that a pibu or a pibo? Children with reading and language deficits show difficulties in learning and overnight consolidation of phonologically similar pseudowords. Dev Sci 2020; 24:e13023. PMID: 32691904. PMCID: PMC7988620. DOI: 10.1111/desc.13023.
Abstract
Word learning is critical for the development of reading and language comprehension skills. Although previous studies have indicated that word learning is compromised in children with reading disability (RD) or developmental language disorder (DLD), it is less clear how word learning difficulties manifest in children with comorbid RD and DLD. Furthermore, it is unclear whether word learning deficits in RD or DLD include difficulties with offline consolidation of newly learned words. In the current study, we employed an artificial lexicon learning paradigm with an overnight design to investigate how typically developing (TD) children (N = 25), children with only RD (N = 93), and children with both RD and DLD (N = 34) learned and remembered a set of phonologically similar pseudowords. Results showed that compared to TD children, children with RD exhibited: (i) slower growth in discrimination accuracy for cohort item pairs sharing an onset (e.g. pibu‐pibo), but not for rhyming item pairs (e.g. pibu‐dibu); and (ii) lower discrimination accuracy for both cohort and rhyme item pairs on Day 2, even when accounting for differences in Day 1 learning. Moreover, children with comorbid RD and DLD showed learning and retention deficits that extended to unrelated item pairs that were phonologically dissimilar (e.g. pibu‐tupa), suggestive of broader impairments compared to children with only RD. These findings provide insights into the specific learning deficits underlying RD and DLD and motivate future research concerning how children use phonological similarity to guide the organization of new word knowledge.
Affiliation(s)
- Jeffrey G Malins: Department of Psychology, Georgia State University, Atlanta, GA, USA; Haskins Laboratories, New Haven, CT, USA
- Nicole Landi: Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Kayleigh Ryherd: Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Jan C Frijters: Faculty of Social Sciences, Department of Child and Youth Studies, Brock University, St. Catharines, ON, Canada
- James S Magnuson: Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Jay G Rueckl: Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Kenneth R Pugh: Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Department of Linguistics, Yale University, New Haven, CT, USA; Department of Diagnostic Radiology, Yale University School of Medicine, New Haven, CT, USA
- Rose Sevcik: Department of Psychology, Georgia State University, Atlanta, GA, USA
- Robin Morris: Department of Psychology, Georgia State University, Atlanta, GA, USA
14
|
Magnuson JS, You H, Luthra S, Li M, Nam H, Escabí M, Brown K, Allopenna PD, Theodore RM, Monto N, Rueckl JG. EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. Cogn Sci 2020; 44:e12823. [DOI: 10.1111/cogs.12823] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Revised: 12/11/2019] [Accepted: 02/05/2020] [Indexed: 11/28/2022]
Affiliation(s)
- James S. Magnuson
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Psychological Sciences University of Connecticut
| | - Heejo You
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Psychological Sciences University of Connecticut
| | - Sahil Luthra
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Psychological Sciences University of Connecticut
| | - Monica Li
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Psychological Sciences University of Connecticut
- Haskins Laboratories
| | - Hosung Nam
- Haskins Laboratories
- Department of English Language and Literature Korea University
| | - Monty Escabí
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Psychological Sciences University of Connecticut
- Electrical and Computer Engineering University of Connecticut
- Biomedical Engineering University of Connecticut
| | - Kevin Brown
- Departments of Pharmaceutical Sciences and Chemical, Biological, and Environmental Engineering Oregon State University
| | - Paul D. Allopenna
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Psychological Sciences University of Connecticut
| | - Rachel M. Theodore
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Speech, Language, and Hearing Sciences University of Connecticut
| | - Nicholas Monto
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Speech, Language, and Hearing Sciences University of Connecticut
| | - Jay G. Rueckl
- Connecticut Institute for the Brain and Cognitive Sciences University of Connecticut
- Psychological Sciences University of Connecticut
- Haskins Laboratories
| |
Collapse
|
15
|
Rakhlin N, Landi N, Lee M, Magnuson JS, Naumova OY, Ovchinnikova IV, Grigorenko EL. Cohesion of Cortical Language Networks During Word Processing Is Predicted by a Common Polymorphism in the SETBP1 Gene. New Dir Child Adolesc Dev 2020; 2020:131-155. [DOI: 10.1002/cad.20331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Affiliation(s)
| | | | | | | | | | | | - Elena L. Grigorenko
- Haskins Laboratories
- Yale University
- University of Houston
- Saint-Petersburg State University
- Moscow State University for Psychology and Education
| |
Collapse
|
16
|
Li MY, Braze D, Kukona A, Johns CL, Tabor W, Van Dyke JA, Mencl WE, Shankweiler DP, Pugh KR, Magnuson JS. Individual differences in subphonemic sensitivity and phonological skills. J Mem Lang 2019; 107:195-215. [PMID: 31431796 PMCID: PMC6701851 DOI: 10.1016/j.jml.2019.03.008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Many studies have established a link between phonological abilities (indexed by phonological awareness and phonological memory tasks) and typical and atypical reading development. Individuals who perform poorly on phonological assessments have been mostly assumed to have underspecified (or "fuzzy") phonological representations, with typical phonemic categories, but with greater category overlap due to imprecise encoding. An alternative posits that poor readers have overspecified phonological representations, with speech sounds perceived allophonically (phonetically distinct variants of a single phonemic category). On both accounts, mismatch between phonological categories and orthography leads to reading difficulty. Here, we consider the implications of these accounts for online speech processing. We used eye tracking and an individual differences approach to assess sensitivity to subphonemic detail in a community sample of young adults with a wide range of reading-related skills. Subphonemic sensitivity inversely correlated with meta-phonological task performance, consistent with overspecification.
Collapse
Affiliation(s)
- Monica Y.C. Li
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, CT 06269-1271, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
| | - David Braze
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
| | - Anuenue Kukona
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- School of Applied Social Sciences, De Montfort University, The Gateway, Leicester, LE1 9BH, UK
| | | | - Whitney Tabor
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
| | - Julie A. Van Dyke
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
| | - W. Einar Mencl
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- Department of Linguistics, Yale University, New Haven, CT 06520, USA
| | - Donald P. Shankweiler
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
| | - Kenneth R. Pugh
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, CT 06269-1271, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- Department of Linguistics, Yale University, New Haven, CT 06520, USA
| | - James S. Magnuson
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, CT 06269-1271, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
| |
Collapse
|
17
|
Brown KS, Allopenna PD, Hunt WR, Steiner R, Saltzman E, McRae K, Magnuson JS. Universal Features in Phonological Neighbor Networks. Entropy (Basel) 2018; 20:e20070526. [PMID: 33265615 PMCID: PMC7513050 DOI: 10.3390/e20070526] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Revised: 06/29/2018] [Accepted: 07/10/2018] [Indexed: 11/16/2022]
Abstract
Human speech perception involves transforming a continuous acoustic signal into discrete linguistically meaningful units (phonemes) while simultaneously causing a listener to activate words that are similar to the spoken utterance and to each other. The Neighborhood Activation Model posits that phonological neighbors (two forms [words] that differ by one phoneme) compete significantly for recognition as a spoken word is heard. This definition of phonological similarity can be extended to an entire corpus of forms to produce a phonological neighbor network (PNN). We study PNNs for five languages: English, Spanish, French, Dutch, and German. Consistent with previous work, we find that the PNNs share a consistent set of topological features. Using an approach that generates random lexicons with increasing levels of phonological realism, we show that even random forms with minimal relationship to any real language, combined with only the empirical distribution of language-specific phonological form lengths, are sufficient to produce the topological properties observed in the real language PNNs. The resulting pseudo-PNNs are insensitive to the level of linguistic realism in the random lexicons but quite sensitive to the shape of the form length distribution. We therefore conclude that “universal” features seen across multiple languages are really string universals, not language universals, and arise primarily due to limitations in the kinds of networks generated by the one-step neighbor definition. Taken together, our results indicate that caution is warranted when linking the dynamics of human spoken word recognition to the topological properties of PNNs, and that the investigation of alternative similarity metrics for phonological forms should be a priority.
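The one-step neighbor definition in this abstract (two forms differing by a single phoneme) is concrete enough to sketch. The following is an illustrative construction of a small PNN, not the authors' code; the toy lexicon and phoneme transcriptions are my assumptions:

```python
# Sketch: build a phonological neighbor network (PNN) under the one-step
# definition -- two forms are neighbors if they differ by exactly one
# phoneme substitution, insertion, or deletion.
from itertools import combinations

def one_phoneme_apart(a, b):
    """True if phoneme sequences a and b differ by one substitution,
    insertion, or deletion (the one-step neighbor definition)."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    # deletion/insertion: removing one segment from the longer form
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def build_pnn(lexicon):
    """Adjacency dict mapping each form to its set of phonological neighbors."""
    pnn = {w: set() for w in lexicon}
    for a, b in combinations(lexicon, 2):
        if one_phoneme_apart(a, b):
            pnn[a].add(b)
            pnn[b].add(a)
    return pnn

# Forms as phoneme tuples (toy ARPABET-style transcriptions, my choice):
# "cat", "bat", "cap", "cats"
lex = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ae", "p"), ("k", "ae", "t", "s")]
net = build_pnn(lex)
print(net[("k", "ae", "t")])  # "cat" neighbors: bat (sub), cap (sub), cats (del)
```

Topological properties such as degree distributions and clustering can then be computed on the adjacency dict with a standard graph library.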
Collapse
Affiliation(s)
- Kevin S. Brown
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Physics, University of Connecticut, Storrs, CT 06269, USA
- Institute for Systems Genomics, University of Connecticut, Storrs, CT 06269, USA
- Connecticut Institute for the Brain & Cognitive Sciences, Storrs, CT 06269, USA
- Correspondence: ; Tel.: +1-860-486-6975
| | - Paul D. Allopenna
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
| | - William R. Hunt
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
| | - Rachael Steiner
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
| | - Elliot Saltzman
- Department of Physical Therapy and Athletic Training, Boston University, Boston, MA 02215, USA
| | - Ken McRae
- Department of Psychology, University of Western Ontario, London, ON N6A 5C2, Canada
- Brain & Mind Institute, University of Western Ontario, London, ON N6A 5C2, Canada
| | - James S. Magnuson
- Connecticut Institute for the Brain & Cognitive Sciences, Storrs, CT 06269, USA
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
| |
Collapse
|
18
|
Johns CL, Jahn AA, Jones HR, Kush D, Molfese PJ, Van Dyke JA, Magnuson JS, Tabor W, Mencl WE, Shankweiler DP, Braze D. Individual differences in decoding skill, print exposure, and cortical structure in young adults. Lang Cogn Neurosci 2018; 33:1275-1295. [PMID: 30505876 PMCID: PMC6258201 DOI: 10.1080/23273798.2018.1476727] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Accepted: 05/04/2018] [Indexed: 06/09/2023]
Abstract
This exploratory study investigated relations between individual differences in cortical grey matter structure and young adult readers' cognitive profiles. Whole-brain analyses revealed neuroanatomical correlations with word and nonword reading ability (decoding), and experience with printed matter. Decoding was positively correlated with grey matter volume (GMV) in left superior temporal sulcus, and thickness (GMT) in right superior temporal gyrus. Print exposure was negatively correlated with GMT in left inferior frontal gyrus (pars opercularis) and left fusiform gyrus (including the visual word form area). Both measures also correlated with supramarginal gyrus (SMG), but in spatially distinct subregions: decoding was positively associated with GMV in left anterior SMG, and print exposure was negatively associated with GMT in left posterior SMG. Our comprehensive approach to assessment both confirms and refines our understanding of the novel relation between the structure of pSMG and proficient reading, and unifies previous research relating cortical structure and reading skill.
Collapse
Affiliation(s)
- Clinton L. Johns
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
| | - Andrew A. Jahn
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
| | - Hannah R. Jones
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Melora Hall, P.O. Box 270266, Rochester, NY, 14627-0266, U.S.A
| | - Dave Kush
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
- Department of Language and Literature, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
| | - Peter J. Molfese
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institutes of Mental Health, National Institutes of Health, Department of Health and Human Services
| | - Julie A. Van Dyke
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 337 Mansfield Road, Unit 1272, Storrs, CT, 06269-1272, U.S.A
| | - James S. Magnuson
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT, 06269-1020, U.S.A
- Brain Imaging Research Center, University of Connecticut, 850 Bolton Road, Unit 1271, Storrs, CT, 06269-1271, U.S.A
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 337 Mansfield Road, Unit 1272, Storrs, CT, 06269-1272, U.S.A
| | - Whitney Tabor
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT, 06269-1020, U.S.A
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 337 Mansfield Road, Unit 1272, Storrs, CT, 06269-1272, U.S.A
| | - W. Einar Mencl
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
| | - Donald P. Shankweiler
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT, 06269-1020, U.S.A
| | - David Braze
- Haskins Laboratories, 300 George St., Suite 900, New Haven, CT, 06511, U.S.A
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 337 Mansfield Road, Unit 1272, Storrs, CT, 06269-1272, U.S.A
| |
Collapse
|
19
|
Magnuson JS, Mirman D, Luthra S, Strauss T, Harris HD. Interaction in Spoken Word Recognition Models: Feedback Helps. Front Psychol 2018; 9:369. [PMID: 29666593 PMCID: PMC5891609 DOI: 10.3389/fpsyg.2018.00369] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Accepted: 03/06/2018] [Indexed: 11/13/2022] Open
Abstract
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
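The feedback-helps-under-noise claim can be illustrated with a deliberately tiny interactive-activation sketch. This is my construction, not TRACE: two word units bidirectionally connected to four phoneme units, with all weights, rates, and the noisy input chosen for illustration:

```python
# Sketch: with lexical feedback, degraded bottom-up evidence is reinforced
# toward the best-matching word; without feedback, word activation can only
# track the (weak) input. Architecture and parameters are illustrative.
import numpy as np

W = np.array([[1.0, 1.0, 0.0, 0.0],    # word A expects phonemes 0 and 1
              [0.0, 0.0, 1.0, 1.0]])   # word B expects phonemes 2 and 3

def recognize(inp, feedback=True, steps=20, rate=0.2, fb=0.1):
    """Run a few settling steps; return final word activations."""
    phon = inp.astype(float).copy()
    words = np.zeros(2)
    for _ in range(steps):
        words += rate * (W @ phon - words)        # bottom-up drive
        words = np.clip(words, 0.0, 1.0)
        if feedback:
            # top-down support: words boost their expected phonemes
            phon = np.clip(phon + fb * (W.T @ words), 0.0, 1.0)
    return words

noisy = np.array([0.3, 0.2, 0.05, 0.05])          # degraded evidence for word A
with_fb = recognize(noisy, feedback=True)
no_fb = recognize(noisy, feedback=False)
print(with_fb, no_fb)  # word A settles higher with feedback than without
```

The qualitative point, not the specific numbers, is what matters: the same noisy input yields stronger activation of the consistent word when feedback is on.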
Collapse
Affiliation(s)
- James S. Magnuson
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, United States
| | - Daniel Mirman
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL, United States
| | - Sahil Luthra
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, United States
| | - Ted Strauss
- McConnell Brain Imaging Centre, McGill University, Montreal, QC, Canada
| | - Harlan D. Harris
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, United States
| |
Collapse
|
20
|
Landi N, Malins JG, Frost SJ, Magnuson JS, Molfese P, Ryherd K, Rueckl JG, Mencl WE, Pugh KR. Neural representations for newly learned words are modulated by overnight consolidation, reading skill, and age. Neuropsychologia 2018; 111:133-144. [PMID: 29366948 PMCID: PMC5866766 DOI: 10.1016/j.neuropsychologia.2018.01.011] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2017] [Revised: 01/11/2018] [Accepted: 01/12/2018] [Indexed: 11/22/2022]
Abstract
Word learning depends not only on efficient online binding of phonological, orthographic and lexical information, but also on consolidation of new word representations into permanent lexical memory. Work on word learning under a variety of contexts indicates that reading and language skill impact facility of word learning in both print and speech. In addition, recent research finds that individuals with language impairments show deficits in both initial word form learning and in maintaining newly learned representations over time, implicating mechanisms associated with maintenance that may be driven by deficits in overnight consolidation. Although several recent studies have explored the neural bases of overnight consolidation of newly learned words, no extant work has examined individual differences in overnight consolidation at the neural level. The current study addresses this gap in the literature by investigating how individual differences in reading and language skills modulate patterns of neural activation associated with newly learned words following a period of overnight consolidation. Specifically, a community sample of adolescents and young adults with significant variability in reading and oral language (vocabulary) ability was trained on two spoken artificial lexicons, one in the evening on the day before fMRI scanning and one in the morning just prior to scanning. Comparisons of activation between words that were trained and consolidated vs. those that were trained but not consolidated revealed increased cortical activation in a number of language-associated and memory-associated regions. In addition, individual differences in age, reading skill and vocabulary modulated learning rate in our artificial lexicon learning task and the size of the cortical consolidation effect in the precuneus/posterior cingulate, such that older readers and more skilled readers had larger cortical consolidation effects in this learning-critical region. These findings suggest that age (even into late adolescence) and reading and language skills are important individual differences that affect overnight consolidation of newly learned words. They have significant implications for understanding reading and language disorders and should inform pedagogical models.
Collapse
Affiliation(s)
- Nicole Landi
- University of Connecticut & Haskins Laboratories, United States.
| | | | | | | | | | - Kayleigh Ryherd
- University of Connecticut & Haskins Laboratories, United States
| | - Jay G Rueckl
- University of Connecticut & Haskins Laboratories, United States
| | | | - Kenneth R Pugh
- University of Connecticut & Haskins Laboratories, United States
| |
Collapse
|
21
|
Kukona A, Braze D, Johns CL, Mencl WE, Van Dyke JA, Magnuson JS, Pugh KR, Shankweiler DP, Tabor W. The real-time prediction and inhibition of linguistic outcomes: Effects of language and literacy skill. Acta Psychol (Amst) 2016; 171:72-84. [PMID: 27723471 PMCID: PMC5138490 DOI: 10.1016/j.actpsy.2016.09.009] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2016] [Revised: 09/16/2016] [Accepted: 09/21/2016] [Indexed: 11/15/2022] Open
Abstract
Recent studies have found considerable individual variation in language comprehenders' predictive behaviors, as revealed by their anticipatory eye movements during language comprehension. The current study investigated the relationship between these predictive behaviors and the language and literacy skills of a diverse, community-based sample of young adults. We found that rapid automatized naming (RAN) was a key determinant of comprehenders' prediction ability (e.g., as reflected in predictive eye movements to a white cake on hearing "The boy will eat the white…"). Simultaneously, comprehension-based measures predicted participants' ability to inhibit eye movements to objects that shared features with predictable referents but were implausible completions (e.g., as reflected in eye movements to a white but inedible white car). These findings suggest that the excitatory and inhibitory mechanisms that support prediction during language processing are closely linked with specific cognitive abilities that support literacy. We show that a self-organizing cognitive architecture captures this pattern of results.
Collapse
Affiliation(s)
- Anuenue Kukona
- Division of Psychology, De Montfort University, Leicester, United Kingdom.
| | - David Braze
- Haskins Laboratories, New Haven, CT, United States
| | | | | | | | - James S Magnuson
- Haskins Laboratories, New Haven, CT, United States; Department of Psychology, University of Connecticut, Storrs, CT, United States
| | - Kenneth R Pugh
- Haskins Laboratories, New Haven, CT, United States; Department of Psychology, University of Connecticut, Storrs, CT, United States
| | - Donald P Shankweiler
- Haskins Laboratories, New Haven, CT, United States; Department of Psychology, University of Connecticut, Storrs, CT, United States
| | - Whitney Tabor
- Haskins Laboratories, New Haven, CT, United States; Department of Psychology, University of Connecticut, Storrs, CT, United States
| |
Collapse
|
22
|
Olmstead AJ, Viswanathan N, Magnuson JS. Direct and Real: Carol A. Fowler's Theory and Approach to Science. Ecological Psychology 2016. [DOI: 10.1080/10407413.2016.1195176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
23
|
Kornilov SA, Rakhlin N, Koposov R, Lee M, Yrigollen C, Caglayan AO, Magnuson JS, Mane S, Chang JT, Grigorenko EL. Genome-Wide Association and Exome Sequencing Study of Language Disorder in an Isolated Population. Pediatrics 2016; 137:peds.2015-2469. [PMID: 27016271 PMCID: PMC4811310 DOI: 10.1542/peds.2015-2469] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/26/2016] [Indexed: 01/07/2023] Open
Abstract
BACKGROUND AND OBJECTIVE Developmental language disorder (DLD) is a highly prevalent neurodevelopmental disorder associated with negative outcomes in different domains; the etiology of DLD is unknown. To investigate the genetic underpinnings of DLD, we performed genome-wide association and whole exome sequencing studies in a geographically isolated population with a substantially elevated prevalence of the disorder (ie, the AZ sample). METHODS DNA samples were collected from 359 individuals for the genome-wide association study and from 12 severely affected individuals for whole exome sequencing. Multifaceted phenotypes, representing major domains of expressive language functioning, were derived from collected speech samples. RESULTS Gene-based analyses revealed a significant association between SETBP1 and complexity of linguistic output (P = 5.47 × 10(-7)). The analysis of exome variants revealed coding sequence variants in 14 genes, most of which play a role in neural development. Targeted enrichment analysis implicated myocyte enhancer factor-2 (MEF2)-regulated genes in DLD in the AZ population. The main findings were successfully replicated in an independent cohort of children at risk for related disorders (n = 372). CONCLUSIONS MEF2-regulated pathways were identified as potential candidate pathways in the etiology of DLD. Several genes (including the candidate SETBP1 and other MEF2-related genes) seem to jointly influence certain, but not all, facets of the DLD phenotype. Even when genetic and environmental diversity is reduced, DLD is best conceptualized as etiologically complex. Future research should establish whether the signals detected in the AZ population can be replicated in other samples and languages and provide further characterization of the identified pathway.
Collapse
Affiliation(s)
- Sergey A. Kornilov
- Child Study Center, School of Medicine, Yale University, New Haven, Connecticut; Department of Psychology, University of Connecticut, Storrs, Connecticut; Haskins Laboratories, New Haven, Connecticut; Department of Psychology, Moscow State University, Moscow, Russia; Department of Psychology, Saint Petersburg State University, Saint Petersburg, Russia
| | - Natalia Rakhlin
- Child Study Center, School of Medicine, Yale University, New Haven, Connecticut; Department of Communication Sciences and Disorders, Wayne State University, Detroit, Michigan
| | - Roman Koposov
- Regional Centre for Child and Youth Mental Health and Child Welfare, UiT The Arctic University of Norway, Tromsø, Norway
| | - Maria Lee
- Child Study Center, School of Medicine, Yale University, New Haven, Connecticut
| | - Carolyn Yrigollen
- The Children's Hospital of Philadelphia, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Ahmet Okay Caglayan
- Child Study Center, School of Medicine, Yale University, New Haven, Connecticut; Department of Medical Genetics, Istanbul Bilim University, Istanbul, Turkey
| | - James S. Magnuson
- Department of Psychology, University of Connecticut, Storrs, Connecticut; Haskins Laboratories, New Haven, Connecticut
| | - Shrikant Mane
- Child Study Center, School of Medicine, Yale University, New Haven, Connecticut
| | - Joseph T. Chang
- Child Study Center, School of Medicine, Yale University, New Haven, Connecticut
| | - Elena L. Grigorenko
- Child Study Center, School of Medicine, Yale University, New Haven, Connecticut; Haskins Laboratories, New Haven, Connecticut; Department of Psychology, Saint Petersburg State University, Saint Petersburg, Russia; Moscow State University for Psychology and Education, Moscow, Russia
| |
Collapse
|
24
|
Braze D, Katz L, Magnuson JS, Mencl WE, Tabor W, Van Dyke JA, Gong T, Johns CL, Shankweiler DP. Vocabulary does not complicate the simple view of reading. Read Writ 2015; 29:435-451. [PMID: 26941478 PMCID: PMC4761369 DOI: 10.1007/s11145-015-9608-6] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Gough and Tunmer's (1986) simple view of reading (SVR) proposed that reading comprehension (RC) is a function of language comprehension (LC) and word recognition/decoding. Braze et al. (2007) presented data suggesting an extension of the SVR in which knowledge of vocabulary (V) affected RC over and above the effects of LC. Tunmer and Chapman (2012) found a similar independent contribution of V to RC when the data were analyzed by hierarchical regression. However, additional analysis by factor analysis and structural equation modeling indicated that the effect of V on RC was, in fact, completely captured by LC itself and there was no need to posit a separate direct effect of V on RC. In the present study, we present new data from young adults with sub-optimal reading skill (N = 286). Latent variable and regression analyses support Gough and Tunmer's original proposal and the conclusions of Tunmer and Chapman that V can be considered a component of LC and not an independent contributor to RC.
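The methodological point above (hierarchical regression shows an "independent" contribution of V even when one latent factor generates both V and LC) can be demonstrated with a small simulation. This is an illustrative sketch of my own, not the study's data or analysis; the sample size matches the paper but all other parameters are assumptions:

```python
# Sketch: when vocabulary (V) and language comprehension (LC) are both noisy
# indicators of one latent comprehension factor, V still adds incremental R^2
# over LC in predicting reading comprehension (RC) -- even though no separate
# V pathway exists in the generative model.
import numpy as np

rng = np.random.default_rng(1)
n = 286                                        # sample size from the study
latent = rng.normal(size=n)                    # single latent comprehension factor
decoding = rng.normal(size=n)
LC = latent + rng.normal(scale=0.7, size=n)    # noisy indicator 1
V = latent + rng.normal(scale=0.7, size=n)     # noisy indicator 2
RC = latent + decoding + rng.normal(scale=0.5, size=n)

def r_squared(predictors, y):
    """OLS R^2 with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

base = r_squared([decoding, LC], RC)
full = r_squared([decoding, LC, V], RC)
print(f"R^2 without V: {base:.3f}  with V: {full:.3f}")  # full > base
```

The incremental R² here reflects only better measurement of the one latent factor, which is why latent-variable analyses (as in Tunmer and Chapman) rather than hierarchical regression are needed to adjudicate.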
Collapse
Affiliation(s)
| | - Leonard Katz
- Haskins Laboratories, New Haven, CT USA
- University of Connecticut, Storrs, CT USA
| | - James S. Magnuson
- Haskins Laboratories, New Haven, CT USA
- University of Connecticut, Storrs, CT USA
| | | | - Whitney Tabor
- Haskins Laboratories, New Haven, CT USA
- University of Connecticut, Storrs, CT USA
| | | | - Tao Gong
- Haskins Laboratories, New Haven, CT USA
| | | | - Donald P. Shankweiler
- Haskins Laboratories, New Haven, CT USA
- University of Connecticut, Storrs, CT USA
| |
Collapse
|
25
|
Sadat J, Martin CD, Magnuson JS, Alario FX, Costa A. Breaking Down the Bilingual Cost in Speech Production. Cogn Sci 2015; 40:1911-1940. [PMID: 26498431 DOI: 10.1111/cogs.12315] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2015] [Revised: 06/26/2015] [Accepted: 08/19/2015] [Indexed: 11/30/2022]
Abstract
Bilinguals have been shown to perform worse than monolinguals in a variety of verbal tasks. This study investigated this bilingual verbal cost in a large-scale picture-naming study conducted in Spanish. We explored how individual characteristics of the participants and the linguistic properties of the words being spoken influence this performance cost. In particular, we focused on the contributions of lexical frequency and phonological similarity across translations. The naming performance of Spanish-Catalan bilinguals speaking in their dominant and non-dominant language was compared to that of Spanish monolinguals. Single trial naming latencies were analyzed by means of linear mixed models accounting for individual effects at the participant and item level. While decreasing lexical frequency was shown to increase naming latencies in all groups, this variable by itself did not account for the bilingual cost. In turn, our results showed that the bilingual cost disappeared when naming words with high phonological similarity across translations. In short, our results show that frequency of use can play a role in the emergence of the bilingual cost, but that phonological similarity across translations should be regarded as one of the most important variables that determine the bilingual cost in speech production. Low phonological similarity across translations yields worse performance in bilinguals and promotes the bilingual cost in naming performance. The implications of our results for the effect of phonological similarity across translations within the bilingual speech production system are discussed.
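"Phonological similarity across translations" is commonly operationalized as a normalized edit distance between the two translation-equivalent forms. As a sketch only (the paper's exact metric may differ, and the orthographic rather than phonemic comparison here is my simplification):

```python
# Sketch: score similarity of translation pairs, e.g. Spanish "gato" vs
# Catalan "gat" (cognates, high similarity) vs Spanish "perro" / Catalan
# "gos" (non-cognates, low similarity). Uses standard Levenshtein distance
# normalized by the longer form's length.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[-1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def translation_similarity(w1: str, w2: str) -> float:
    """1.0 = identical forms, 0.0 = maximally different."""
    return 1 - levenshtein(w1, w2) / max(len(w1), len(w2))

print(translation_similarity("gato", "gat"))   # → 0.75 (cognate pair)
print(translation_similarity("perro", "gos"))  # → 0.0 (non-cognate pair)
```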
Collapse
Affiliation(s)
- Jasmin Sadat
- Department of Information and Communications Technologies, Pompeu Fabra University; Laboratory of Cognitive Psychology, CNRS and Aix-Marseille University
| | - Clara D Martin
- Basque Center on Cognition, Brain and Language; IKERBASQUE, Basque Foundation for Science
| | - James S Magnuson
- Department of Psychology, University of Connecticut; Haskins Laboratories
| | | | - Albert Costa
- Department of Information and Communications Technologies, Pompeu Fabra University; Catalan Institution for Research and Advanced Studies (ICREA)
| |
Collapse
|
26
|
Zhang C, Pugh KR, Mencl WE, Molfese PJ, Frost SJ, Magnuson JS, Peng G, Wang WSY. Functionally integrated neural processing of linguistic and talker information: An event-related fMRI and ERP study. Neuroimage 2015; 124:536-549. [PMID: 26343322 DOI: 10.1016/j.neuroimage.2015.08.064] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2014] [Revised: 08/15/2015] [Accepted: 08/28/2015] [Indexed: 11/16/2022] Open
Abstract
Speech signals contain information of both linguistic content and a talker's voice. Conventionally, linguistic and talker processing are thought to be mediated by distinct neural systems in the left and right hemispheres respectively, but there is growing evidence that linguistic and talker processing interact in many ways. Previous studies suggest that talker-related vocal tract changes are processed integrally with phonetic changes in the bilateral posterior superior temporal gyrus/superior temporal sulcus (STG/STS), because the vocal tract parameter influences the perception of phonetic information. It is yet unclear whether the bilateral STG is also activated by the integral processing of another parameter - pitch, which influences the perception of lexical tone information and is related to talker differences in tone languages. In this study, we conducted separate functional magnetic resonance imaging (fMRI) and event-related potential (ERP) experiments to examine the spatial and temporal loci of interactions of lexical tone and talker-related pitch processing in Cantonese. We found that the STG was activated bilaterally during the processing of talker changes when listeners attended to lexical tone changes in the stimuli and during the processing of lexical tone changes when listeners attended to talker changes, suggesting that lexical tone and talker processing are functionally integrated in the bilateral STG. It extends the previous study, providing evidence for a general neural mechanism of integral phonetic and talker processing in the bilateral STG. The ERP results show interactions of lexical tone and talker processing 500-800ms after auditory word onset (a simultaneous posterior P3b and a frontal negativity). Moreover, there is some asymmetry in the interaction, such that unattended talker changes affect linguistic processing more than vice versa, which may be related to the ambiguity that talker changes cause in speech perception and/or attention bias to talker changes. Our findings have implications for understanding the neural encoding of linguistic and talker information.
Collapse
Affiliation(s)
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China; Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
| | - Kenneth R Pugh
- Haskins Laboratories, New Haven, CT, USA; Department of Psychology, University of Connecticut, Storrs, CT, USA; Department of Linguistics, Yale University, New Haven, CT, USA
| | - W Einar Mencl
- Haskins Laboratories, New Haven, CT, USA; Department of Linguistics, Yale University, New Haven, CT, USA
| | - Peter J Molfese
- Haskins Laboratories, New Haven, CT, USA; Department of Psychology, University of Connecticut, Storrs, CT, USA
| | | | - James S Magnuson
- Haskins Laboratories, New Haven, CT, USA; Department of Psychology, University of Connecticut, Storrs, CT, USA
| | - Gang Peng
- Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; CUHK-PKU-UST Joint Research Centre for Language and Human Complexity, The Chinese University of Hong Kong, Hong Kong, China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China.
| | - William S-Y Wang
- Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; CUHK-PKU-UST Joint Research Centre for Language and Human Complexity, The Chinese University of Hong Kong, Hong Kong, China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China; Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
27
|
Magnuson JS. Phoneme restoration and empirical coverage of interactive activation and adaptive resonance models of human speech processing. J Acoust Soc Am 2015; 137:1481-92. [PMID: 25786959 PMCID: PMC4368586 DOI: 10.1121/1.4904543] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2014] [Revised: 11/17/2014] [Accepted: 11/25/2014] [Indexed: 06/04/2023]
Abstract
Grossberg and Kazerounian [(2011). J. Acoust. Soc. Am. 130, 440-460] present a model of sequence representation for spoken word recognition, the cARTWORD model, which simulates essential aspects of phoneme restoration. Grossberg and Kazerounian also include simulations with the TRACE model presented by McClelland and Elman [(1986). Cognit. Psychol. 18, 1-86] that seem to indicate that TRACE cannot simulate phoneme restoration. Grossberg and Kazerounian also claim cARTWORD should be preferred to TRACE because of TRACE's implausible approach to sequence representation (reduplication of time-specific units) and use of non-modulatory feedback (i.e., without position-specific bottom-up support). This paper responds to Grossberg and Kazerounian first with TRACE simulations that account for phoneme restoration when appropriately constructed noise is used (and with minor changes to TRACE phoneme definitions), then reviews the case for reduplicated units and feedback as implemented in TRACE, as well as TRACE's broad and deep coverage of empirical data. Finally, it is argued that cARTWORD is not comparable to TRACE because cARTWORD cannot represent sequences with repeated elements, has only been implemented with small phoneme and lexical inventories, and has been applied to only one phenomenon (phoneme restoration). Without evidence that cARTWORD captures a similar range and detail of human spoken language processing as alternative models, it is premature to prefer cARTWORD to TRACE.
Collapse
Affiliation(s)
- James S Magnuson
- Department of Psychology, University of Connecticut, Storrs, Connecticut 06269
| |
Collapse
|
28
|
Scarf D, Terrace H, Colombo M, Magnuson JS. Eye movements reveal planning in humans: A comparison with Scarf and Colombo's (2009) monkeys. J Exp Psychol Anim Learn Cogn 2015; 40:178-84. [PMID: 24364670 DOI: 10.1037/xan0000008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
On sequential response tasks, a long pause preceding the first response is thought to reflect participants taking time to plan a sequence of responses. By tracking the eye movements of two monkeys (Macaca fascicularis), Scarf and Colombo (2009, Eye Movements During List Execution Reveal No Planning in Monkeys [Macaca fascicularis], Journal of Experimental Psychology: Animal Behavior Processes, Vol. 35, pp. 587-592) demonstrated that, at least with respect to monkeys, the long pause preceding the first response is not necessarily the product of planning. In the present experiment, we tracked the eye movements of adult humans using the paradigm employed by Scarf and Colombo and found that, in contrast to monkeys, the pause preceding the first item is indicative of planning in humans. These findings highlight the fact that similar response time profiles, displayed by human and nonhuman animals, do not necessarily reflect similar underlying cognitive operations.
Collapse
|
29
|
Collisson BA, Grela B, Spaulding T, Rueckl JG, Magnuson JS. Individual differences in the shape bias in preschool children with specific language impairment and typical language development: theoretical and clinical implications. Dev Sci 2014; 18:373-88. [DOI: 10.1111/desc.12219] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2013] [Accepted: 05/22/2014] [Indexed: 11/28/2022]
Affiliation(s)
| | - Bernard Grela
- Department of Speech, Language and Hearing Sciences, University of Connecticut, USA
| | - Tammie Spaulding
- Department of Speech, Language and Hearing Sciences, University of Connecticut, USA
| | - Jay G. Rueckl
- Department of Psychology, University of Connecticut, and Haskins Laboratories, USA
| | - James S. Magnuson
- Department of Psychology, University of Connecticut, and Haskins Laboratories, USA
| |
Collapse
|
30
|
Viswanathan N, Magnuson JS, Fowler CA. Information for coarticulation: Static signal properties or formant dynamics? J Exp Psychol Hum Percept Perform 2014; 40:1228-36. [PMID: 24730744 DOI: 10.1037/a0036214] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Perception of a speech segment changes depending on properties of surrounding segments in a phenomenon called compensation for coarticulation (Mann, 1980). The nature of information that drives these perceptual changes is a matter of debate. One account attributes perceptual shifts to low-level auditory system contrast effects based on static portions of the signal (e.g., third formant [F3] center or average frequency; Lotto & Kluender, 1998). An alternative account is that listeners' perceptual shifts result from listeners attuning to the acoustic effects of gestural overlap and that this information for coarticulation is necessarily dynamic (Fowler, 2006). In a pair of experiments, we used sinewave speech precursors to investigate the nature of information for compensation for coarticulation. In Experiment 1, as expected by both accounts, we found that sinewave speech precursors produce shifts in following segments. In Experiment 2, we investigated whether effects in Experiment 1 were driven by static F3 offsets of sinewave speech precursors, or by dynamic relationships among their formants. We temporally reversed F1 and F2 in sinewave precursors, preserving static F3 offset and average F1, F2 and F3 frequencies, but disrupting dynamic formant relationships. Despite having identical F3s, selectively reversed precursors produced effects that were significantly smaller and restricted to only a small portion of the continuum. We conclude that dynamic formant relations rather than static properties of the precursor provide information for compensation for coarticulation.
Collapse
|
31
|
Kukona A, Cho PW, Magnuson JS, Tabor W. Lexical interference effects in sentence processing: evidence from the visual world paradigm and self-organizing models. J Exp Psychol Learn Mem Cogn 2014; 40:326-47. [PMID: 24245535 PMCID: PMC4033295 DOI: 10.1037/a0034903] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports nonintegration or late integration. Here we report on a self-organizing neural network framework that addresses 1 aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In 2 simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report 2 experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like "The boy will eat the white …" while viewing visual displays with objects like a white cake (i.e., a predictable direct object of "eat"), white car (i.e., an object not predicted by "eat," but consistent with "white"), and distractors. In line with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context.
Collapse
|
32
|
Britt AE, Mirman D, Kornilov SA, Magnuson JS. Effect of repetition proportion on language-driven anticipatory eye movements. Acta Psychol (Amst) 2014; 145:128-38. [PMID: 24345674 PMCID: PMC4067486 DOI: 10.1016/j.actpsy.2013.10.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2013] [Revised: 08/28/2013] [Accepted: 10/04/2013] [Indexed: 10/25/2022] Open
Abstract
Previous masked priming research in word recognition has demonstrated that repetition priming is influenced by experiment-wise information structure, such as proportion of target repetition. Research using naturalistic tasks and eye-tracking has shown that people use linguistic knowledge to anticipate upcoming words. We examined whether the proportion of target repetition within an experiment can have a similar effect on anticipatory eye movements. We used a word-to-picture matching task (i.e., the visual world paradigm) with target repetition proportion carefully controlled. Participants' eye movements were tracked starting when the pictures appeared, one second prior to the onset of the target word. Targets repeated from the previous trial were fixated more than other items during this preview period when target repetition proportion was high and less than other items when target repetition proportion was low. These results indicate that linguistic anticipation can be driven by short-term within-experiment trial structure, with implications for the generalization of priming effects, the bases of anticipatory eye movements, and experiment design.
Collapse
Affiliation(s)
- Allison E Britt
- Moss Rehabilitation Research Institute, Elkins Park, PA 19027, USA.
| | - Daniel Mirman
- Moss Rehabilitation Research Institute, Elkins Park, PA 19027, USA
| | - Sergey A Kornilov
- Department of Psychology, University of Connecticut, Storrs, CT 06269, USA; Haskins Laboratories, New Haven, CT 06511, USA
| | - James S Magnuson
- Department of Psychology, University of Connecticut, Storrs, CT 06269, USA; Haskins Laboratories, New Haven, CT 06511, USA
| |
Collapse
|
33
|
Kornilov SA, Landi N, Rakhlin N, Fang SY, Grigorenko EL, Magnuson JS. Attentional but not pre-attentive neural measures of auditory discrimination are atypical in children with developmental language disorder. Dev Neuropsychol 2014; 39:543-67. [PMID: 25350759 PMCID: PMC4399717 DOI: 10.1080/87565641.2014.960964] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n = 23) and typically developing (n = 16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ in two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder.
Collapse
Affiliation(s)
- Sergey A. Kornilov
- University of Connecticut, Storrs, Connecticut, USA
- Yale University, New Haven, Connecticut, USA
- Haskins Laboratories, New Haven, Connecticut, USA
- Moscow State University, Moscow, Russia
| | - Nicole Landi
- Yale University, New Haven, Connecticut, USA
- Haskins Laboratories, New Haven, Connecticut, USA
| | | | - Shin-Yi Fang
- Pennsylvania State University, State College, Pennsylvania, USA
| | - Elena L. Grigorenko
- Yale University, New Haven, Connecticut, USA
- Haskins Laboratories, New Haven, Connecticut, USA
- Moscow City University for Psychology and Education, Moscow, Russia
| | - James S. Magnuson
- University of Connecticut, Storrs, Connecticut, USA
- Haskins Laboratories, New Haven, Connecticut, USA
| |
Collapse
|
34
|
Abstract
How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition-including visual word recognition-have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power.
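The time-invariant coding described in the abstract can be illustrated with a minimal sketch (the function name and representation are illustrative assumptions, not the authors' implementation): an open-diphone string kernel activates one unit per ordered pair of a word's phonemes, with no position-specific copies of any unit.

```python
from itertools import combinations

def open_diphones(phonemes):
    """Return the open (ordered, possibly non-adjacent) diphones of a word.

    A hypothetical sketch of time-invariant string-kernel coding: each
    ordered phoneme pair activates a single position-independent unit,
    in contrast to TRACE's reduplicated time-specific units.
    """
    # combinations() preserves the input order, so each pair (a, b)
    # has a preceding b somewhere in the word.
    return [a + b for a, b in combinations(phonemes, 2)]

# /bat/ activates three diphone units regardless of where it occurs in time
print(open_diphones(["b", "a", "t"]))  # ['ba', 'bt', 'at']
```

Because the same three units fire wherever /bat/ appears in the input, the inventory grows with the number of phoneme pairs rather than with trace length, which is the source of the computational savings the abstract describes.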
Collapse
Affiliation(s)
- Thomas Hannagan
- Laboratoire de Psychologie Cognitive, CNRS/Aix-Marseille University, Marseille, France
| | - James S. Magnuson
- Department of Psychology, University of Connecticut, Storrs, CT, USA
- Haskins Laboratories, New Haven, CT, USA
| | - Jonathan Grainger
- Laboratoire de Psychologie Cognitive, CNRS/Aix-Marseille University, Marseille, France
| |
Collapse
|
35
|
Viswanathan N, Magnuson JS, Fowler CA. Similar response patterns do not imply identical origins: an energetic masking account of nonspeech effects in compensation for coarticulation. J Exp Psychol Hum Percept Perform 2012; 39:1181-92. [PMID: 23148469 DOI: 10.1037/a0030735] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Nonspeech materials are widely used to identify basic mechanisms underlying speech perception. For instance, they have been used to examine the origin of compensation for coarticulation, the observation that listeners' categorization of phonetic segments depends on neighboring segments (Mann, 1980). Specifically, nonspeech precursors matched to critical formant frequencies of speech precursors have been shown to produce similar categorization shifts as speech contexts. This observation has been interpreted to mean that spectrally contrastive frequency relations between neighboring segments underlie the categorization shifts observed after speech, as well as nonspeech precursors (Lotto & Kluender, 1998). From the gestural perspective, however, categorization shifts in speech contexts occur because of listeners' sensitivity to acoustic information for coarticulatory gestural overlap in production; in nonspeech contexts, this occurs because of energetic masking of acoustic information for gestures. In 2 experiments, we distinguish the energetic masking and spectral contrast accounts. In Experiment 1, we investigated the effects of varying precursor tone frequency on speech categorization. Consistent only with the masking account, tonal effects were greater for frequencies close enough to those in the target syllables for masking to occur. In Experiment 2, we filtered the target stimuli to simulate effects of masking and obtained behavioral outcomes that closely resemble those with nonspeech tones. We conclude that masking provides the more plausible account of nonspeech context effects. More generally, we suggest that similar results from the use of speech and nonspeech materials do not automatically imply identical origins and that the use of nonspeech in speech studies entails careful examination of the nature of information in the nonspeech materials.
Collapse
Affiliation(s)
- Navin Viswanathan
- Department of Psychology, State University of New York, New Paltz, NY 12561-2440, USA.
| | | | | |
Collapse
|
36
|
Mirman D, Yee E, Blumstein SE, Magnuson JS. Theories of spoken word recognition deficits in aphasia: evidence from eye-tracking and computational modeling. Brain Lang 2011; 117:53-68. [PMID: 21371743 PMCID: PMC3076537 DOI: 10.1016/j.bandl.2011.01.004] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/02/2010] [Revised: 01/10/2011] [Accepted: 01/23/2011] [Indexed: 05/06/2023]
Abstract
We used eye-tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot-parrot) and cohort (e.g., beaker-beetle) competitors. Broca's aphasic participants exhibited larger rhyme competition effects than age-matched controls. A re-analysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke's aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke's aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control.
Collapse
Affiliation(s)
- Daniel Mirman
- Moss Rehabilitation Research Institute, 50 Township Line Rd., Elkins Park, PA 19027, USA.
| | | | | | | |
Collapse
|
37
|
Kukona A, Fang SY, Aicher KA, Chen H, Magnuson JS. The time course of anticipatory constraint integration. Cognition 2011; 119:23-42. [PMID: 21237450 DOI: 10.1016/j.cognition.2010.12.002] [Citation(s) in RCA: 75] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2008] [Revised: 11/27/2010] [Accepted: 12/01/2010] [Indexed: 11/19/2022]
Abstract
Several studies have demonstrated that as listeners hear sentences describing events in a scene, their eye movements anticipate upcoming linguistic items predicted by the unfolding relationship between scene and sentence. While this may reflect active prediction based on structural or contextual expectations, the influence of local thematic priming between words has not been fully examined. In Experiment 1, we presented verbs (e.g., arrest) in active (Subject-Verb-Object) sentences with displays containing verb-related patients (e.g., crook) and agents (e.g., policeman). We examined patient and agent fixations following the verb, after the agent role had been filled by another entity, but prior to bottom-up specification of the object. Participants were nearly as likely to fixate agents "anticipatorily" as patients, even though the agent role was already filled. However, the patient advantage suggested simultaneous influences of both local priming and active prediction. In Experiment 2, using passive sentences (Object-Verb-Subject), we found stronger, but still graded influences of role prediction when more time elapsed between verb and target, and more syntactic cues were available. We interpret anticipatory fixations as emerging from constraint-based processes that involve both non-predictive thematic priming and active prediction.
Collapse
Affiliation(s)
- Anuenue Kukona
- Department of Psychology, University of Connecticut, Storrs, CT 06269, USA.
| | | | | | | | | |
Collapse
|
38
|
Viswanathan N, Magnuson JS, Fowler CA. Compensation for coarticulation: disentangling auditory and gestural theories of perception of coarticulatory effects in speech. J Exp Psychol Hum Percept Perform 2010; 36:1005-15. [PMID: 20695714 DOI: 10.1037/a0018391] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different explanations for the phenomenon of compensation for coarticulation (CfC). An example of CfC is that if a speaker produces a gesture with a front place of articulation, it may be pulled slightly backwards if it follows a back place of articulation, and listeners' category boundaries shift (compensate) accordingly. The gestural account appeals to direct attunement to coarticulation to explain CfC, whereas the auditory account explains it by spectral contrast. In previous studies, spectral contrast and gestural consequences of coarticulation have been correlated, such that both accounts made identical predictions. We identify a liquid context in Tamil that disentangles contrast and coarticulation, such that the two accounts make different predictions. In a standard CfC task in Experiment 1, gestural coarticulation rather than spectral contrast determined the direction of CfC. Experiments 2, 3, and 4 demonstrated that tone analogues of the speech precursors failed to produce the same effects observed in Experiment 1, suggesting that simple spectral contrast cannot account for the findings of Experiment 1.
Collapse
Affiliation(s)
- Navin Viswanathan
- Department of Psychology, University of Connecticut and Haskins Laboratories, New Haven, Connecticut, USA.
| | | | | |
Collapse
|
39
|
|
40
|
|
41
|
|
42
|
Mirman D, Graf Estes K, Magnuson JS. Computational Modeling of Statistical Learning: Effects of Transitional Probability Versus Frequency and Links to Word Learning. Infancy 2010; 15:471-486. [DOI: 10.1111/j.1532-7078.2009.00023.x] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
43
|
Braze D, Mencl WE, Tabor W, Pugh KR, Constable RT, Fulbright RK, Magnuson JS, Van Dyke JA, Shankweiler DP. Unification of sentence processing via ear and eye: an fMRI study. Cortex 2010; 47:416-31. [PMID: 20117764 DOI: 10.1016/j.cortex.2009.11.005] [Citation(s) in RCA: 57] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2008] [Revised: 06/15/2009] [Accepted: 10/10/2009] [Indexed: 11/30/2022]
Abstract
We present new evidence based on fMRI for the existence and neural architecture of an abstract supramodal language system that can integrate linguistic inputs arising from different modalities such that speech and print each activate a common code. Working with sentence material, our aim was to find out where the putative supramodal system is located and how it responds to comprehension challenges. To probe these questions we examined BOLD activity in experienced readers while they performed a semantic categorization task with matched written or spoken sentences that were either well-formed or contained anomalies of syntactic form or pragmatic content. On whole-brain scans, both anomalies increased net activity over non-anomalous baseline sentences, chiefly at left frontal and temporal regions of heteromodal cortex. The anomaly-sensitive sites correspond approximately to those that previous studies (Michael et al., 2001; Constable et al., 2004) have found to be sensitive to other differences in sentence complexity (object relative minus subject relative). Regions of interest (ROIs) were defined by peak response to anomaly averaging over modality conditions. Each anomaly-sensitive ROI showed the same pattern of response across sentence types in each modality. Voxel-by-voxel exploration over the whole brain based on a cosine similarity measure of common function confirmed the specificity of supramodal zones.
Affiliation(s)
- David Braze
- Haskins Laboratories, 300 George Street, New Haven, Connecticut 06511, USA.
44
Abstract
Semantic similarity effects provide critical insight into the organization of semantic knowledge and the nature of semantic processing. In the present study, we examined the dynamics of semantic similarity effects by using the visual world eyetracking paradigm. Four objects were shown on a computer monitor, and participants were instructed to click on a named object, during which time their gaze position was recorded. The likelihood of fixating competitor objects was predicted by the degree of semantic similarity to the target concept. We found reliable, graded competition that depended on degree of target-competitor similarity, even for distantly related items for which priming has not been found in previous priming studies. Time course measures revealed a consistently earlier fixation peak for near semantic neighbors relative to targets. Computational investigations with an attractor dynamical model, a spreading activation model, and a decision model revealed that a combination of excitatory and inhibitory mechanisms is required to obtain such peak timing, providing new constraints on models of semantic processing.
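A toy sketch of the excitatory-plus-inhibitory dynamics the modeling work implicates: each unit is excited by its input (graded by similarity to the target) and inhibited by the other units. The parameters and update rule here are illustrative only, not those of the article's attractor, spreading-activation, or decision models.

```python
def step(act, inputs, excite=0.3, inhibit=0.1, decay=0.1):
    # Each unit gains activation from its input and loses activation
    # to decay and to lateral inhibition from the other units.
    total = sum(act)
    new = []
    for a, inp in zip(act, inputs):
        a = a + excite * inp - decay * a - inhibit * (total - a)
        new.append(min(1.0, max(0.0, a)))  # clamp to [0, 1]
    return new

# Input strength graded by semantic similarity to the target:
# [target, near neighbor, distant neighbor, unrelated]
inputs = [1.0, 0.6, 0.3, 0.0]
act = [0.0] * 4
for _ in range(30):
    act = step(act, inputs)
print(act)  # activations ordered by similarity: graded competition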
Affiliation(s)
- Daniel Mirman
- Moss Rehabilitation Research Institute, Philadelphia, Pennsylvania, USA.
45
Abstract
Previous research indicates that mental representations of word meanings are distributed along both semantic and syntactic dimensions such that nouns and verbs are relatively distinct from one another. Two experiments examined the effect of representational distance between meanings on recognition of ambiguous spoken words by comparing recognition of unambiguous words, noun-verb homonyms, and noun-noun homonyms. In Experiment 1, auditory lexical decision was fastest for unambiguous words, slower for noun-verb homonyms, and slowest for noun-noun homonyms. In Experiment 2, response times for matching spoken words to pictures followed the same pattern, and eye fixation time courses revealed converging, gradual time course differences between conditions. These results indicate greater competition between meanings of ambiguous words when the meanings are from the same grammatical class (noun-noun homonyms) than when they are from different grammatical classes (noun-verb homonyms).
Affiliation(s)
- Daniel Mirman
- Moss Rehabilitation Research Institute, Philadelphia, PA
46
Stephen DG, Mirman D, Magnuson JS, Dixon JA. Lévy-like diffusion in eye movements during spoken-language comprehension. Phys Rev E Stat Nonlin Soft Matter Phys 2009; 79:056114. [PMID: 19518528] [PMCID: PMC3694355] [DOI: 10.1103/physreve.79.056114] [Citations in RCA: 18]
Abstract
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
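One way to see why a heavy-tailed (Lévy-like) model can beat a Gaussian in a maximum-likelihood comparison is to fit both to heavy-tailed step data. The sketch below simulates Cauchy-distributed steps (the Cauchy is the heavy-tailed Lévy-stable special case) and uses crude closed-form parameter estimates; it illustrates the model-comparison logic only and is not the article's analysis.

```python
import math
import random

def gauss_loglik(xs, mu, sigma):
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

def cauchy_loglik(xs, x0, gamma):
    return sum(-math.log(math.pi * gamma * (1 + ((x - x0) / gamma)**2))
               for x in xs)

random.seed(1)
# Cauchy samples = ratio of two standard normals (heavy-tailed steps)
steps = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(2000)]

# Crude estimates: mean/std for the Gaussian; median and
# half-interquartile-range for the Cauchy (robust to outliers)
mu = sum(steps) / len(steps)
sigma = math.sqrt(sum((x - mu)**2 for x in steps) / len(steps))
s = sorted(steps)
x0 = s[len(s) // 2]
gamma = (s[3 * len(s) // 4] - s[len(s) // 4]) / 2

print(cauchy_loglik(steps, x0, gamma) > gauss_loglik(steps, mu, sigma))
```

The Gaussian's variance estimate is inflated by the rare huge steps, so its likelihood collapses on the bulk of the data, while the heavy-tailed model accommodates both.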
Affiliation(s)
- Damian G Stephen
- Department of Psychology, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, Connecticut 06269-1020, USA
47
Magnuson JS, Tanenhaus MK, Aslin RN. Immediate effects of form-class constraints on spoken word recognition. Cognition 2008; 108:866-73. [PMID: 18675408] [DOI: 10.1016/j.cognition.2008.06.005] [Citations in RCA: 26]
Abstract
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar "nouns" and "adjectives" did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration.
Affiliation(s)
- James S Magnuson
- Department of Psychology, University of Connecticut, Storrs, CT 06269-1020, USA.
48
Mirman D, Dixon JA, Magnuson JS. Statistical and computational models of the visual world paradigm: Growth curves and individual differences. J Mem Lang 2008; 59:475-494. [PMID: 19060958] [PMCID: PMC2593828] [DOI: 10.1016/j.jml.2007.11.006] [Citations in RCA: 184]
Abstract
Time course estimates from eye tracking during spoken language processing (the "visual world paradigm", or VWP) have enabled progress on debates regarding fine-grained details of activation and competition over time. There are, however, three gaps in current analyses of VWP data: consideration of time in a statistically rigorous manner, quantification of individual differences, and distinguishing linguistic effects from non-linguistic effects. To address these gaps, we have developed an approach combining statistical and computational modeling. The statistical approach (growth curve analysis, a technique explicitly designed to assess change over time at group and individual levels) provides a rigorous means of analyzing time course data. We introduce the method and its application to VWP data. We also demonstrate the potential for assessing whether differences in group or individual data are best explained by linguistic processing or decisional aspects of VWP tasks through comparison of growth curve analyses and computational modeling, and discuss the potential benefits for studying typical and atypical language processing.
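Growth curve analysis models fixation time courses with orthogonal polynomial time terms, so intercept, linear, and quadratic effects can be estimated independently. A minimal sketch of the orthogonal-basis idea on toy data (not the article's mixed-effects implementation, which also estimates subject-level random effects):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ortho_basis(times, degree=2):
    # Gram-Schmidt on [1, t, t^2, ...] so the time terms are uncorrelated
    basis = []
    for d in range(degree + 1):
        v = [t**d for t in times]
        for b in basis:
            c = dot(v, b) / dot(b, b)
            v = [vi - c * bi for vi, bi in zip(v, b)]
        basis.append(v)
    return basis

def fit(y, basis):
    # With an orthogonal basis each coefficient is a simple projection
    return [dot(y, b) / dot(b, b) for b in basis]

times = list(range(10))              # e.g., 10 time bins
y = [0.2 + 0.05 * t for t in times]  # toy fixation proportions
coefs = fit(y, ortho_basis(times))
print(coefs)  # intercept = mean of y, linear term = 0.05, quadratic ~ 0
```

Because the terms are orthogonal, the estimated linear slope does not change when the quadratic term is added or removed, which is what makes group and individual effects on each term separately interpretable.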
49
Withrow KP, Newman JR, Skipper JB, Gleysteen JP, Magnuson JS, Zinn K, Rosenthal EL. Assessment of bevacizumab conjugated to Cy5.5 for detection of head and neck cancer xenografts. Technol Cancer Res Treat 2008; 7:61-6. [PMID: 18198926] [DOI: 10.1177/153303460800700108] [Citations in RCA: 36]
Abstract
Optical fluorescent technology has the potential to deliver real-time imaging of cancer into the operating room and the clinic. To determine the efficacy of fluorescently labeled anti-vascular endothelial growth factor (VEGF) antibody as a cancer-specific optical contrast agent to guide surgical resections, we evaluated the sensitivity and specificity of this agent for detecting microscopic residual disease in a preclinical model of head and neck squamous cell carcinoma (HNSCC). Using a flank murine model, mice were xenografted with SCC-1 tumor cells and injected with anti-VEGF antibody (bevacizumab) conjugated to an optically active fluorophore (Cy5.5). Tumors underwent sub-total resections and were assessed for the presence of residual disease by fluorescent stereomicroscopy. Expected positive and negative biopsies were taken according to the presence or absence of fluorescence, respectively. Histology was used to confirm the presence or absence of disease. Biopsies taken from areas of fluorescence within the wound bed (n=18) were found to be histologically malignant in all but one case. Samples taken from a non-fluorescing tumor bed (n=15) were found to be histologically benign in 11 of 15. These findings corresponded to a sensitivity of 80.9% and a specificity of 91.7%. These data support previous findings from this group and warrant further investigation of fluorescently labeled anti-tumor antibodies to detect disease in the surgical setting.
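The reported accuracy figures follow directly from the biopsy counts in the abstract (17 of 18 fluorescent biopsies malignant, 11 of 15 non-fluorescent biopsies benign); the abstract's 80.9% truncates 80.95%:

```python
tp = 17  # fluorescent biopsies confirmed malignant (18 - 1)
fp = 1   # fluorescent but histologically benign
tn = 11  # non-fluorescent and histologically benign
fn = 4   # non-fluorescent but histologically malignant (15 - 11)

sensitivity = tp / (tp + fn)  # 17 / 21 = 0.8095...
specificity = tn / (tn + fp)  # 11 / 12 = 0.9166...
print(sensitivity, specificity)
```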
Affiliation(s)
- K P Withrow
- Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Alabama at Birmingham, Birmingham, AL 35294-0012, USA
50
Mirman D, Magnuson JS, Estes KG, Dixon JA. The link between statistical segmentation and word learning in adults. Cognition 2008; 108:271-80. [PMID: 18355803] [DOI: 10.1016/j.cognition.2008.02.003] [Citations in RCA: 60]
Abstract
Many studies have shown that listeners can segment words from running speech based on conditional probabilities of syllable transitions, suggesting that this statistical learning could be a foundational component of language learning. However, few studies have shown a direct link between statistical segmentation and word learning. We examined this possible link in adults by following a statistical segmentation exposure phase with an artificial lexicon learning phase. Participants were able to learn all novel object-label pairings, but pairings were learned faster when labels contained high probability (word-like) or non-occurring syllable transitions from the statistical segmentation phase than when they contained low probability (boundary-straddling) syllable transitions. This suggests that, for adults, labels inconsistent with expectations based on statistical learning are harder to learn than consistent or neutral labels. In contrast, a previous study found that infants learn consistent labels, but not inconsistent or neutral labels.
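The conditional probabilities of syllable transitions that drive statistical segmentation are just bigram counts normalized by first-syllable counts. A minimal sketch with a made-up syllable stream (the syllables and stream are illustrative, not the study's stimuli):

```python
from collections import Counter

def transitional_probs(syllables):
    # TP(b | a) = count(a followed by b) / count(a in first position)
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Toy stream: "bi da" always occurs together (word-internal, high TP),
# while transitions out of "da" straddle word boundaries (low TP).
stream = "bi da ku ti bi da go la bi da ku ti".split()
tp = transitional_probs(stream)
print(tp[("bi", "da")])  # 1.0  (word-internal)
print(tp[("da", "ku")])  # 2/3  (boundary-straddling)
```

Labels built from high-TP pairs correspond to the "word-like" condition in the learning phase; labels built from low-TP pairs correspond to the boundary-straddling condition.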
Affiliation(s)
- Daniel Mirman
- Department of Psychology, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT 06269-1020, USA; Haskins Laboratories, New Haven, CT 06511, USA.