1. Magnuson JS, Luthra S. Simple Recurrent Networks are Interactive. Psychon Bull Rev 2025;32:1032-1040. PMID: 39537950; DOI: 10.3758/s13423-024-02608-y.
Abstract
There is disagreement among cognitive scientists as to whether a key computational framework - the Simple Recurrent Network (SRN; Elman, Machine Learning, 7(2), 195-225, 1991; Elman, Cognitive Science, 14(2), 179-211, 1990) - is a feedforward system. SRNs have been essential tools in advancing theories of learning, development, and processing in cognitive science for more than three decades. If SRNs were feedforward systems, there would be pervasive theoretical implications: Anything an SRN can do would therefore be explainable without interaction (feedback). However, despite claims that SRNs (and by extension recurrent neural networks more generally) are feedforward (Norris, 1993), this is not the case. Feedforward networks by definition are acyclic graphs - they contain no loops. SRNs contain loops - from hidden units back to hidden units with a time delay - and are therefore cyclic graphs. As we demonstrate, they are interactive in the sense normally implied for networks with feedback connections between layers: In an SRN, bottom-up inputs are inextricably mixed with previous model-internal computations. Inputs are transmitted to hidden units by multiplying them by input-to-hidden weights. However, hidden units simultaneously receive their own previous activations as input via hidden-to-hidden connections with a one-step time delay (typically via context units). These are added to the input-to-hidden values, and the sums are transformed by an activation function. Thus, bottom-up inputs are mixed with the products of potentially many preceding transformations of inputs and model-internal states. We discuss theoretical implications through a key example from psycholinguistics where the status of SRNs as feedforward or interactive has crucial ramifications.
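The cyclic computation described above is easy to see in code. Below is a minimal sketch of one Elman-style SRN time step (an illustration of the standard formulation, not code from the paper; dimensions and weights are arbitrary):

    import numpy as np

    # One Elman SRN step: h_t = tanh(W_ih @ x_t + W_hh @ h_{t-1} + b).
    # The hidden-to-hidden term gives the graph a (time-delayed) loop,
    # so the network is cyclic rather than feedforward.
    rng = np.random.default_rng(0)
    n_in, n_hid = 5, 8
    W_ih = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
    W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # hidden -> hidden (context units)
    b = np.zeros(n_hid)

    def srn_step(x_t, h_prev):
        # Bottom-up input and the previous internal state are summed before
        # the nonlinearity, so their contributions are inextricably mixed.
        return np.tanh(W_ih @ x_t + W_hh @ h_prev + b)

    h = np.zeros(n_hid)
    for t in range(3):          # a short input sequence
        x = rng.normal(size=n_in)
        h = srn_step(x, h)      # h now reflects all preceding inputs and states

After a few steps, h is a nonlinear function of every earlier input and internal state, which is the sense in which the authors argue SRNs are interactive.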
Affiliation(s)
- James S Magnuson
- BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain.
- University of Connecticut, Storrs, CT, USA.
2. Norris D, McQueen JM. Why might there be lexical-prelexical feedback in speech recognition? Cognition 2025;255:106025. PMID: 39616821; DOI: 10.1016/j.cognition.2024.106025.
Abstract
In reply to Magnuson, Crinnion, Luthra, Gaston, and Grubb (2023), we challenge their conclusion that on-line activation feedback improves word recognition. This type of feedback is instantiated in the TRACE model (McClelland & Elman, 1986) as top-down spread of activation from lexical to phoneme nodes. We give two main reasons why Magnuson et al.'s demonstration that activation feedback speeds up word recognition in TRACE is not informative about whether activation feedback helps humans recognize words. First, the same speed-up could be achieved by changing other parameters in TRACE. Second, more fundamentally, there is room for improvement in TRACE's performance only because the model, unlike Bayesian models, is suboptimal. We also challenge Magnuson et al.'s claim that the available empirical data support activation feedback. The data they base this claim on are open to alternative explanations, and there are data against activation feedback that they do not discuss. We argue, therefore, that there are no computational or empirical grounds to conclude that activation feedback benefits human spoken-word recognition, and indeed no theoretical reason why activation feedback would exist. Other types of feedback, for example feedback to support perceptual learning, likely do exist, precisely because they can help listeners recognize words.
Affiliation(s)
- Dennis Norris
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.
- James M McQueen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
3. Sarrett ME, McMurray B. Timecourse of bottom-up and top-down language processing during a picture-based semantic priming task. Lang Cogn Neurosci 2024;40:122-144. PMID: 40308946; PMCID: PMC12040426; DOI: 10.1080/23273798.2024.2409136.
Abstract
Understanding spoken language requires rapid analysis of incoming information at multiple levels. Information at lower levels (e.g. acoustic/phonetic) cascades forward to affect processing at higher levels (e.g. lexical/semantic), and higher-level information may feed back to influence lower-level processing. Most studies have sought to examine a single stage of processing in isolation. Consequently, there is a poor understanding of how different stages relate temporally. In the present study, we characterise multiple stages of linguistic processing simultaneously as they unfold. Listeners (N = 30) completed a priming task while we collected their EEG; on each trial, a picture (e.g. of a peach) biased them to expect a target word from a minimal pair (e.g. beach/peach). We examine the processes of perceptual gradiency, semantic integration, and top-down feedback to yield a more complete understanding of how these processes relate in time. Then, we discuss how the results from simplified priming paradigms may compare to more naturalistic settings.
Affiliation(s)
- Bob McMurray
- Department of Psychological & Brain Sciences, University of Iowa
4. Luthra S, Crinnion AM, Saltzman D, Magnuson JS. Do They Know It's Christmash? Lexical Knowledge Directly Impacts Speech Perception. Cogn Sci 2024;48:e13449. PMID: 38773754; PMCID: PMC11228965; DOI: 10.1111/cogs.13449.
Abstract
We recently reported strong, replicable (i.e., replicated) evidence for lexically mediated compensation for coarticulation (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for interactive models of cognition that include top-down feedback and is inconsistent with autonomous models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.
Affiliation(s)
- Sahil Luthra
- Department of Psychology, Carnegie Mellon University
- David Saltzman
- Department of Psychological Sciences, University of Connecticut
- James S Magnuson
- Department of Psychological Sciences, University of Connecticut
- BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
5. Crinnion AM, Luthra S, Gaston P, Magnuson JS. Resolving competing predictions in speech: How qualitatively different cues and cue reliability contribute to phoneme identification. Atten Percept Psychophys 2024;86:942-961. PMID: 38383914; PMCID: PMC11233028; DOI: 10.3758/s13414-024-02849-y.
Abstract
Listeners have many sources of information available in interpreting speech. Numerous theoretical frameworks and paradigms have established that various constraints impact the processing of speech sounds, but it remains unclear how listeners might simultaneously consider multiple cues, especially those that differ qualitatively (i.e., with respect to timing and/or modality) or quantitatively (i.e., with respect to cue reliability). Here, we establish that cross-modal identity priming can influence the interpretation of ambiguous phonemes (Exp. 1, N = 40) and show that two qualitatively distinct cues - namely, cross-modal identity priming and auditory co-articulatory context - have additive effects on phoneme identification (Exp. 2, N = 40). However, we find no effect of quantitative variation in a cue - specifically, changes in the reliability of the priming cue did not influence phoneme identification (Exp. 3a, N = 40; Exp. 3b, N = 40). Overall, we find that qualitatively distinct cues can additively influence phoneme identification. While many existing theoretical frameworks address constraint integration to some degree, our results provide a step towards understanding how information that differs in both timing and modality is integrated in online speech perception.
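The additivity result can be pictured as independent shifts in the log-odds of a phoneme response. The sketch below is purely illustrative; the cue weights are invented, not fitted to the paper's data:

    import numpy as np

    # Hypothetical additive cue combination in log-odds space: each cue
    # contributes a fixed shift, with no interaction term.
    def p_response(continuum_step, prime_shift, context_shift):
        logit = -2.0 + 0.8 * continuum_step + prime_shift + context_shift
        return 1.0 / (1.0 + np.exp(-logit))

    print(p_response(3, 0.0, 0.0))  # ambiguous token, no cues
    print(p_response(3, 0.6, 0.0))  # cross-modal identity prime only
    print(p_response(3, 0.0, 0.5))  # coarticulatory context only
    print(p_response(3, 0.6, 0.5))  # both cues: the log-odds shifts sum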
Affiliation(s)
- James S Magnuson
- University of Connecticut, Storrs, CT, USA
- BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
6. Steffman J, Sundara M. Disentangling the Role of Biphone Probability From Neighborhood Density in the Perception of Nonwords. Lang Speech 2024;67:166-202. PMID: 37161351; DOI: 10.1177/00238309231164982.
Abstract
In six experiments we explored how biphone probability and lexical neighborhood density influence listeners' categorization of vowels embedded in nonword sequences. We found independent effects of each. Listeners shifted categorization of a phonetic continuum to create a higher probability sequence, even when neighborhood density was controlled. Similarly, listeners shifted categorization to create a nonword from a denser neighborhood, even when biphone probability was controlled. Next, using a visual world eye-tracking task, we determined that biphone probability information is used rapidly by listeners in perception. In contrast, task complexity and irrelevant variability in the stimuli interfere with neighborhood density effects. These results support a model in which both biphone probability and neighborhood density independently affect word recognition, but only biphone probability effects are observed early in processing.
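To make the two statistics concrete, here is a toy computation of each for a nonword, using letters as stand-ins for phonemes and an invented mini-lexicon:

    # Toy versions of the two lexical statistics contrasted in the paper.
    lexicon = ["bat", "bad", "bag", "mat", "map", "cap"]  # invented mini-lexicon

    def biphone_probability(item, lexicon):
        # Mean probability of each adjacent segment pair, estimated from
        # all segment pairs in the lexicon (a sequence-level statistic).
        pairs = [w[i:i + 2] for w in lexicon for i in range(len(w) - 1)]
        probs = [pairs.count(item[i:i + 2]) / len(pairs)
                 for i in range(len(item) - 1)]
        return sum(probs) / len(probs)

    def neighborhood_density(item, lexicon):
        # Count words differing by a single substitution (full definitions
        # also allow additions and deletions; a word-level statistic).
        return sum(len(w) == len(item) and
                   sum(a != b for a, b in zip(w, item)) == 1
                   for w in lexicon)

    print(biphone_probability("bap", lexicon))   # ~0.21
    print(neighborhood_density("bap", lexicon))  # 5 neighbors

Because the two measures are computed over different units (segment pairs vs. whole words), they can be manipulated independently, which is what the experiments exploit.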
7. Magnuson JS, Crinnion AM, Luthra S, Gaston P, Grubb S. Contra assertions, feedback improves word recognition: How feedback and lateral inhibition sharpen signals over noise. Cognition 2024;242:105661. PMID: 37944313; PMCID: PMC11238470; DOI: 10.1016/j.cognition.2023.105661.
Abstract
Whether top-down feedback modulates perception has deep implications for cognitive theories. Debate has been vigorous in the domain of spoken word recognition, where competing computational models and agreement on at least one diagnostic experimental paradigm suggest that the debate may eventually be resolvable. Norris and Cutler (2021) revisit arguments against lexical feedback in spoken word recognition models. They incorrectly claim that recent computational demonstrations that feedback promotes accuracy and speed under noise (Magnuson et al., 2018) were due to the use of the Luce choice rule rather than adding noise to inputs (noise was in fact added directly to inputs). They also claim that feedback cannot improve word recognition because feedback cannot distinguish signal from noise. We have two goals in this paper. First, we correct the record about the simulations of Magnuson et al. (2018). Second, we explain how interactive activation models selectively sharpen signals via joint effects of feedback and lateral inhibition that boost lexically-coherent sublexical patterns over noise. We also review a growing body of behavioral and neural results consistent with feedback and inconsistent with autonomous (non-feedback) architectures, and conclude that parsimony supports feedback. We close by discussing the potential for synergy between autonomous and interactive approaches.
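The sharpening mechanism can be sketched with toy interactive-activation dynamics (a schematic of the general principle, not the TRACE implementation; all constants are invented):

    import numpy as np

    # Two competing phoneme units get noisy bottom-up input; a word unit
    # consistent with phoneme 0 feeds activation back down, and lateral
    # inhibition converts that top-down support into a growing advantage.
    rng = np.random.default_rng(1)
    phon = np.zeros(2)                # competing phoneme units
    word = 0.0                        # word unit consistent with phoneme 0
    signal = np.array([0.55, 0.45])   # weak bottom-up edge for phoneme 0

    for step in range(30):
        noisy_input = signal + rng.normal(scale=0.05, size=2)
        feedback = np.array([0.3 * word, 0.0])   # top-down support for phoneme 0
        inhibition = 0.4 * phon[::-1]            # lateral inhibition between phonemes
        phon = np.clip(phon + 0.2 * (noisy_input + feedback - inhibition - phon), 0, 1)
        word = np.clip(word + 0.2 * (phon[0] - word), 0, 1)

    print(phon)  # the lexically supported phoneme is now well separated

Without the feedback term, lateral inhibition alone yields a smaller separation; with it, the lexically coherent unit is boosted relative to noise, which is the joint feedback/inhibition effect the authors describe.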
Affiliation(s)
- James S Magnuson
- University of Connecticut, Storrs, CT, USA; BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
8. McQueen JM, Jesse A, Mitterer H. Lexically Mediated Compensation for Coarticulation Still as Elusive as a White Christmash. Cogn Sci 2023;47:e13342. PMID: 37715483; DOI: 10.1111/cogs.13342.
Abstract
Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.
Affiliation(s)
- Alexandra Jesse
- Department of Psychological and Brain Sciences, University of Massachusetts
9. Shen J, Wu J. Speech Recognition in Noise Performance Measured Remotely Versus In-Laboratory From Older and Younger Listeners. J Speech Lang Hear Res 2022;65:2391-2397. PMID: 35442717; PMCID: PMC9567433; DOI: 10.1044/2022_jslhr-21-00557.
Abstract
Purpose: This study examined the performance difference between remote and in-laboratory test modalities with a speech recognition in noise task in older and younger adults. Method: Four groups of participants (younger remote, younger in-laboratory, older remote, and older in-laboratory) were tested on a speech recognition in noise protocol with 72 sentences. Results: While the younger remote group performed more poorly than the younger in-laboratory group, older participants' performance was comparable between the two modality groups, particularly in the easy to moderately difficult conditions. These results persisted after controlling for demographic variables (e.g., age, gender, and education). Conclusion: While these findings generally support the feasibility of remote data collection with older participants for research on speech perception, they also suggest that technological proficiency is an important factor that affects performance on remote testing in the aging population.
Affiliation(s)
- Jing Shen
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA
- Jingwei Wu
- Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA
10.
Abstract
The human brain exhibits the remarkable ability to categorize speech sounds into distinct, meaningful percepts, even in challenging tasks like learning non-native speech categories in adulthood and hearing speech in noisy listening conditions. In these scenarios, there is substantial variability in perception and behavior, both across individual listeners and individual trials. While there has been extensive work characterizing stimulus-related and contextual factors that contribute to variability, recent advances in neuroscience are beginning to shed light on another potential source of variability that has not been explored in speech processing. Specifically, there are task-independent, moment-to-moment variations in neural activity in broadly-distributed cortical and subcortical networks that affect how a stimulus is perceived on a trial-by-trial basis. In this review, we discuss factors that affect speech sound learning and moment-to-moment variability in perception, particularly arousal states—neurotransmitter-dependent modulations of cortical activity. We propose that a more complete model of speech perception and learning should incorporate subcortically-mediated arousal states that alter behavior in ways that are distinct from, yet complementary to, top-down cognitive modulations. Finally, we discuss a novel neuromodulation technique, transcutaneous auricular vagus nerve stimulation (taVNS), which is particularly well-suited to investigating causal relationships between arousal mechanisms and performance in a variety of perceptual tasks. Together, these approaches provide novel testable hypotheses for explaining variability in classically challenging tasks, including non-native speech sound learning.
11. Brodbeck C, Bhattasali S, Cruz Heredia AAL, Resnik P, Simon JZ, Lau E. Parallel processing in speech perception with local and global representations of linguistic context. eLife 2022;11:72056. PMID: 35060904; PMCID: PMC8830882; DOI: 10.7554/elife.72056.
Abstract
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.
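The local/unified contrast can be cartooned as different conditioning sets for phoneme surprisal. In the toy sketch below, the mini-lexicon and the sentence-level prior are invented for illustration:

    import numpy as np

    # Surprisal of the next phoneme given the phonemes heard so far in the
    # current word. A "local" model weights cohort candidates by frequency
    # alone; a "unified" model also weights them by sentence-level fit.
    lexicon = {"cat": 0.25, "cap": 0.25, "dog": 0.50}    # invented frequencies
    sentence_fit = {"cat": 0.7, "cap": 0.1, "dog": 0.2}  # invented context prior

    def surprisal(next_ph, prefix, weights):
        cohort = {w: p for w, p in weights.items() if w.startswith(prefix)}
        total = sum(cohort.values())
        p_next = sum(p for w, p in cohort.items()
                     if len(w) > len(prefix) and w[len(prefix)] == next_ph) / total
        return -np.log2(p_next)

    # After hearing "ca", how surprising is "t"?
    print(surprisal("t", "ca", lexicon))  # local model: 1.0 bit
    unified = {w: lexicon[w] * sentence_fit[w] for w in lexicon}
    print(surprisal("t", "ca", unified))  # unified model: ~0.19 bits

The paper's finding that early responses reflect both kinds of predictors corresponds, in this cartoon, to both surprisal series contributing unique variance.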
Affiliation(s)
- Ellen Lau
- Department of Linguistics, University of Maryland
12. Kapnoula EC, McMurray B. Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking. Brain Lang 2021;223:105031. PMID: 34628259; PMCID: PMC11251822; DOI: 10.1016/j.bandl.2021.105031.
Abstract
Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control; lexical inhibition; and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting a fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing.
Affiliation(s)
- Efthymia C Kapnoula
- Dept. of Psychological and Brain Sciences, University of Iowa, United States; DeLTA Center, University of Iowa, United States; Basque Center on Cognition, Brain and Language, Spain.
- Bob McMurray
- Dept. of Psychological and Brain Sciences, University of Iowa, United States; DeLTA Center, University of Iowa, United States; Dept. of Communication Sciences and Disorders, DeLTA Center, University of Iowa, United States; Dept. of Linguistics, DeLTA Center, University of Iowa, United States
13. Falandays JB, Nguyen B, Spivey MJ. Is prediction nothing more than multi-scale pattern completion of the future? Brain Res 2021;1768:147578. PMID: 34284021; DOI: 10.1016/j.brainres.2021.147578.
Abstract
While the notion of the brain as a prediction machine has been extremely influential and productive in cognitive science, there are competing accounts of how best to model and understand the predictive capabilities of brains. One prominent framework is of a "Bayesian brain" that explicitly generates predictions and uses resultant errors to guide adaptation. We suggest that the prediction-generation component of this framework may involve little more than a pattern completion process. We first describe pattern completion in the domain of visual perception, highlighting its temporal extension, and show how this can entail a form of prediction in time. Next, we describe the forward momentum of entrained dynamical systems as a model for the emergence of predictive processing in non-predictive systems. Then, we apply this reasoning to the domain of language, where explicitly predictive models are perhaps most popular. Here, we demonstrate how a connectionist model, TRACE, exhibits hallmarks of predictive processing without any representations of predictions or errors. Finally, we present a novel neural network model, inspired by reservoir computing models, that is entirely unsupervised and memoryless, but nonetheless exhibits prediction-like behavior in its pursuit of homeostasis. These explorations demonstrate that brain-like systems can get prediction "for free," without the need to posit formal logical representations with Bayesian probabilities or an inference machine that holds them in working memory.
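The "forward momentum" idea can be illustrated with a random recurrent network: entrain it with a periodic input, remove the input, and the state briefly continues along the entrained trajectory. This is a schematic of the general idea, not the authors' model; all parameters are invented:

    import numpy as np

    # Drive a random recurrent "reservoir" with a sine wave, then silence the
    # input. Near-critical recurrence lets the entrained rhythm persist for a
    # while with no training, stored predictions, or error signal.
    rng = np.random.default_rng(2)
    n = 100
    W = rng.normal(scale=1.1 / np.sqrt(n), size=(n, n))  # recurrent weights
    w_in = rng.normal(size=n)
    leak = 0.3

    x = np.zeros(n)
    trace = []
    for t in range(300):
        u = np.sin(2 * np.pi * t / 20) if t < 200 else 0.0  # input off at t = 200
        x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u)
        trace.append(x[0])

    # The first unit keeps moving near the entrained rhythm for a short time
    # after input offset before the pattern decays or drifts.
    print(np.round(trace[200:220], 3))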
Affiliation(s)
- J Benjamin Falandays
- Department of Cognitive and Information Sciences, University of California, Merced, United States
- Benjamin Nguyen
- Department of Cognitive and Information Sciences, University of California, Merced, United States
- Michael J Spivey
- Department of Cognitive and Information Sciences, University of California, Merced, United States.
14. Kapnoula EC. On the Locus of L2 Lexical Fuzziness: Insights From L1 Spoken Word Recognition and Novel Word Learning. Front Psychol 2021;12:689052. PMID: 34305748; PMCID: PMC8295481; DOI: 10.3389/fpsyg.2021.689052.
Abstract
The examination of how words are learned can offer valuable insights into the nature of lexical representations. For example, a common assessment of novel word learning is based on its ability to interfere with other words; given that words are known to compete with each other (Luce and Pisoni, 1998; Dahan et al., 2001), we can use the capacity of a novel word to interfere with the activation of other lexical representations as a measure of the degree to which it is integrated into the mental lexicon (Leach and Samuel, 2007). This measure allows us to assess not only novel word learning in L1 or L2, but also the degree to which representations from the two lexica interact with each other (Marian and Spivey, 2003). Despite the somewhat independent lines of research on L1 and L2 word learning, common patterns emerge across the two literatures (Lindsay and Gaskell, 2010; Palma and Titone, 2020). In both cases, lexicalization appears to follow a similar trajectory. In L1, newly encoded words often fail at first to engage in competition with known words, but they do so later, after they have been better integrated into the mental lexicon (Gaskell and Dumay, 2003; Dumay and Gaskell, 2012; Bakker et al., 2014). Similarly, L2 words generally have a facilitatory effect, which can, however, become inhibitory in the case of more robust (high-frequency) lexical representations. Despite the similar pattern, L1 lexicalization is described in terms of inter-lexical connections (Leach and Samuel, 2007) leading to more automatic processing (McMurray et al., 2016), whereas in L2 word learning, lack of lexical inhibition is attributed to less robust (i.e., fuzzy) L2 lexical representations. Here, I point to these similarities and use them to argue that a common mechanism may underlie similar patterns across the two literatures.