1. Bosker HR, Badaya E, Corley M. Discourse Markers Activate Their, Like, Cohort Competitors. Discourse Processes 2021. DOI: 10.1080/0163853x.2021.1924000
Affiliation(s)
- Hans Rutger Bosker
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Martin Corley
- Psychology, PPLS, University of Edinburgh, Edinburgh, UK

2. Does signal reduction imply predictive coding in models of spoken word recognition? Psychon Bull Rev 2021; 28:1381-1389. PMID: 33852158. PMCID: PMC8367925. DOI: 10.3758/s13423-021-01924-x
Abstract
Pervasive behavioral and neural evidence for predictive processing has led to claims that language processing depends upon predictive coding. Formally, predictive coding is a computational mechanism where only deviations from top-down expectations are passed between levels of representation. In many cognitive neuroscience studies, a reduction of signal for expected inputs is taken as being diagnostic of predictive coding. In the present work, we show that despite not explicitly implementing prediction, the TRACE model of speech perception exhibits this putative hallmark of predictive coding, with reductions in total lexical activation, total lexical feedback, and total phoneme activation when the input conforms to expectations. These findings may indicate that interactive activation is functionally equivalent or approximant to predictive coding or that caution is warranted in interpreting neural signal reduction as diagnostic of predictive coding.
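The reduction-for-expected-input mechanism described above can be caricatured in a few lines. This is a hedged toy sketch of generic predictive coding, not of TRACE or of the paper's simulations; the function name, the single-level loop, and the learning rate are all illustrative assumptions:

```python
# Toy predictive-coding loop (illustrative only): just the deviation between a
# top-down prediction and the input is passed upward, so an input that comes to
# be expected produces a progressively smaller upward "signal".

def predictive_coding_step(input_signal, prediction, learning_rate=0.5):
    """Return the upward-propagated error and the updated prediction."""
    error = input_signal - prediction      # only the deviation is passed up
    prediction = prediction + learning_rate * error
    return error, prediction

signal, prediction = 1.0, 0.0              # novel input, no expectation yet
for _ in range(10):
    error, prediction = predictive_coding_step(signal, prediction)
# once the input is expected, the error signal is close to zero
```

On this caricature, the signal reduction treated as diagnostic of predictive coding is simply the shrinking error term; the paper's point is that interactive-activation models without any explicit error computation can show the same reduction.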

3. Ayasse ND, Wingfield A. The Two Sides of Linguistic Context: Eye-Tracking as a Measure of Semantic Competition in Spoken Word Recognition Among Younger and Older Adults. Front Hum Neurosci 2020; 14:132. PMID: 32327987. PMCID: PMC7161414. DOI: 10.3389/fnhum.2020.00132
Abstract
Studies of spoken word recognition have reliably shown that both younger and older adults' recognition of acoustically degraded words is facilitated by the presence of a linguistic context. Against this benefit, older adults' word recognition can be differentially hampered by interference from other words that could also fit the context. These prior studies have primarily used off-line response measures such as the signal-to-noise ratio needed for a target word to be correctly identified. Less clear is the locus of these effects: whether facilitation and interference have their influence primarily during response selection, or whether their effects begin to operate even before a sentence-final target word has been uttered. This question was addressed by tracking 20 younger and 20 older adults' eye fixations on a visually presented target word that corresponded to the final word of a contextually constraining or neutral sentence, accompanied by a second word on the computer screen that in some cases could also fit the sentence context. Growth curve analysis of the time-course of eye-gaze on a target word showed that facilitation and inhibition effects begin to appear even as a spoken sentence is unfolding in time. Consistent with an age-related inhibition deficit, older adults' word recognition was slowed by the presence of a semantic competitor to a degree not observed for younger adults, with this effect operating early in the recognition process.
Affiliation(s)
- Nicolai D Ayasse
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States

4.

5. Li MY, Braze D, Kukona A, Johns CL, Tabor W, Van Dyke JA, Mencl WE, Shankweiler DP, Pugh KR, Magnuson JS. Individual differences in subphonemic sensitivity and phonological skills. Journal of Memory and Language 2019; 107:195-215. PMID: 31431796. PMCID: PMC6701851. DOI: 10.1016/j.jml.2019.03.008
Abstract
Many studies have established a link between phonological abilities (indexed by phonological awareness and phonological memory tasks) and typical and atypical reading development. Individuals who perform poorly on phonological assessments have been mostly assumed to have underspecified (or "fuzzy") phonological representations, with typical phonemic categories, but with greater category overlap due to imprecise encoding. An alternative posits that poor readers have overspecified phonological representations, with speech sounds perceived allophonically (phonetically distinct variants of a single phonemic category). On both accounts, mismatch between phonological categories and orthography leads to reading difficulty. Here, we consider the implications of these accounts for online speech processing. We used eye tracking and an individual differences approach to assess sensitivity to subphonemic detail in a community sample of young adults with a wide range of reading-related skills. Subphonemic sensitivity inversely correlated with meta-phonological task performance, consistent with overspecification.
Affiliation(s)
- Monica Y.C. Li
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, CT 06269-1271, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- David Braze
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- Anuenue Kukona
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- School of Applied Social Sciences, De Montfort University, The Gateway, Leicester, LE1 9BH, UK
- Whitney Tabor
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- Julie A. Van Dyke
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- W. Einar Mencl
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- Department of Linguistics, Yale University, New Haven, CT 06520, USA
- Donald P. Shankweiler
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- Kenneth R. Pugh
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, CT 06269-1271, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA
- Department of Linguistics, Yale University, New Haven, CT 06520, USA
- James S. Magnuson
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269-1020, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269-1272, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, CT 06269-1271, USA
- Haskins Laboratories, 300 George St., New Haven, CT 06510, USA

6. Viebahn MC, McQueen JM, Ernestus M, Frauenfelder UH, Bürki A. How much does orthography influence the processing of reduced word forms? Evidence from novel-word learning about French schwa deletion. Q J Exp Psychol (Hove) 2018; 71:2378-2394. DOI: 10.1177/1747021817741859
Abstract
This study examines the influence of orthography on the processing of reduced word forms. For this purpose, we compared the impact of phonological variation with the impact of spelling-sound consistency on the processing of words that may be produced with or without the vowel schwa. Participants learnt novel French words in which the vowel schwa was present or absent in the first syllable. In Experiment 1, the words were consistently produced without schwa or produced in a variable manner (i.e., sometimes produced with and sometimes produced without schwa). In Experiment 2, words were always produced in a consistent manner, but an orthographic exposure phase was included in which words that were produced without schwa were either spelled with or without the letter <e>. Results from naming and eye-tracking tasks suggest that both phonological variation and spelling-sound consistency influence the processing of spoken novel words. However, the influence of phonological variation outweighs the effect of spelling-sound consistency. Our findings therefore suggest that the influence of orthography on the processing of reduced word forms is relatively small.
Affiliation(s)
- Malte C Viebahn
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Laboratoire de Psycholinguistique Expérimentale, University of Geneva, Geneva, Switzerland
- James M McQueen
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Mirjam Ernestus
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Ulrich H Frauenfelder
- Laboratoire de Psycholinguistique Expérimentale, University of Geneva, Geneva, Switzerland
- Audrey Bürki
- Laboratoire de Psycholinguistique Expérimentale, University of Geneva, Geneva, Switzerland
- Department of Linguistics, University of Potsdam, Potsdam, Germany

7. Increased exposure and phonetic context help listeners recognize allophonic variants. Atten Percept Psychophys 2018; 80:1539-1558. DOI: 10.3758/s13414-018-1525-8

8. Uddin S, Heald SLM, Van Hedger SC, Klos S, Nusbaum HC. Understanding environmental sounds in sentence context. Cognition 2018; 172:134-143. PMID: 29272740. PMCID: PMC6309373. DOI: 10.1016/j.cognition.2017.12.009
Abstract
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions.
Affiliation(s)
- Sophia Uddin
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Shannon L M Heald
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Stephen C Van Hedger
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Serena Klos
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Howard C Nusbaum
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA

9. The interplay of local attraction, context and domain-general cognitive control in activation and suppression of semantic distractors during sentence comprehension. Psychon Bull Rev 2017; 23:1942-1953. PMID: 27230894. DOI: 10.3758/s13423-016-1068-8
Abstract
During sentence comprehension, real-time identification of a referent is driven both by local, context-independent lexical information and by more global sentential information related to the meaning of the utterance as a whole. This paper investigates the cognitive factors that limit the consideration of referents that are supported by local lexical information but not supported by more global sentential information. In an eye-tracking paradigm, participants heard sentences like "She will eat the red pear" while viewing four black-and-white (colorless) line-drawings. In the experimental condition, the display contained a "local attractor" (e.g., a heart), which was locally compatible with the adjective but incompatible with the context ("eat"). In the control condition, the local attractor was replaced by a picture which was incompatible with the adjective (e.g., "igloo"). A second factor manipulated contextual constraint, by using either a constraining verb (e.g., "eat"), or a non-constraining one (e.g., "see"). Results showed consideration of the local attractor, the magnitude of which was modulated by verb constraint, but also by each subject's cognitive control abilities, as measured in a separate Flanker task run on the same subjects. The findings are compatible with a processing model in which the interplay between local attraction, context, and domain-general control mechanisms determines the consideration of possible referents.

10. Zhuang J, Devereux BJ. Phonological and syntactic competition effects in spoken word recognition: evidence from corpus-based statistics. Language, Cognition and Neuroscience 2017; 32:221-235. PMID: 28164141. PMCID: PMC5214227. DOI: 10.1080/23273798.2016.1241886
Abstract
As spoken language unfolds over time the speech input transiently activates multiple candidates at different levels of the system - phonological, lexical, and syntactic - which in turn leads to short-lived between-candidate competition. In an fMRI study, we investigated how different kinds of linguistic competition may be modulated by the presence or absence of a prior context (Tyler 1984; Tyler et al. 2008). We found significant effects of lexico-phonological competition for isolated words, but not for words in short phrases, with high competition yielding greater activation in left inferior frontal gyrus (LIFG) and posterior temporal regions. This suggests that phrasal contexts reduce lexico-phonological competition by eliminating form-class inconsistent cohort candidates. A corpus-derived measure of lexico-syntactic competition was associated with greater activation in LIFG for verbs in phrases, but not for isolated verbs, indicating that lexico-syntactic information is boosted by the phrasal context. Together, these findings indicate that LIFG plays a general role in resolving different kinds of linguistic competition.
Affiliation(s)
- Jie Zhuang
- Brain Imaging and Analysis Center, Duke University, Durham, NC 27710, USA
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge, UK
- Barry J. Devereux
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge, UK

11. Norris D, McQueen JM, Cutler A. Prediction, Bayesian inference and feedback in speech recognition. Language, Cognition and Neuroscience 2016; 31:4-18. PMID: 26740960. PMCID: PMC4685608. DOI: 10.1080/23273798.2015.1081703
Abstract
Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models.
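The Bayesian view sketched in this abstract can be illustrated with a toy posterior computation; the candidate words and the probability values below are hypothetical, not taken from the paper:

```python
# Toy Bayesian word recognition (hypothetical numbers): a contextual prior
# P(word) combines with an acoustic likelihood P(input | word) to yield a
# posterior over candidates, with no activation feedback required.

priors = {"cat": 0.6, "cap": 0.3, "cab": 0.1}        # contextual expectations
likelihoods = {"cat": 0.2, "cap": 0.5, "cab": 0.5}   # fit to the acoustic input

unnormalized = {w: priors[w] * likelihoods[w] for w in priors}
evidence = sum(unnormalized.values())                # P(input)
posterior = {w: p / evidence for w, p in unnormalized.items()}
```

Here "cap" ends up most probable: its strong acoustic fit outweighs its weaker prior, which is the sense in which prediction and input are jointly weighed optimally under Bayes' rule.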
Affiliation(s)
- Dennis Norris
- MRC Cognition and Brain Sciences Unit, Cambridge, UK
- James M. McQueen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Anne Cutler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- MARCS Institute, University of Western Sydney, Penrith South, NSW 2751, Australia

12. Brusini P, Brun M, Brunet I, Christophe A. Listeners Exploit Syntactic Structure On-Line to Restrict Their Lexical Search to a Subclass of Verbs. Front Psychol 2015; 6:1841. PMID: 26696917. PMCID: PMC4678230. DOI: 10.3389/fpsyg.2015.01841
Abstract
Many experiments have shown that listeners actively build expectations about upcoming words, rather than simply waiting for information to accumulate. The online construction of a syntactic structure is one of the cues that listeners may use to construct strong expectations about the possible words they will be exposed to. For example, speakers of verb-final languages use pre-verbal arguments to predict on-line the kind of arguments that are likely to occur next (e.g., Kamide, 2008, for a review). Although in SVO languages information about a verb's arguments typically follows the verb, some languages use pre-verbal object pronouns, potentially allowing listeners to build on-line expectations about the nature of the upcoming verb. For instance, if a pre-verbal direct object pronoun is heard, then the following verb has to be able to enter a transitive structure, thus excluding intransitive verbs. To test this, we used French, in which object pronouns have to appear pre-verbally, to investigate whether listeners use this cue to predict the occurrence of a transitive verb. In a word detection task, we measured the number of false alarms to sentences that contained a transitive verb whose first syllable was homophonous to the target monosyllabic verb (e.g., target "dort" /dɔʁ/ to sleep and false alarm verb "dorlote" /dɔʁlɔt/ to cuddle). The crucial comparison involved two sentence types: one without a pre-verbal object clitic, for which an intransitive verb was temporarily a plausible option (e.g., "Il dorlote" / He cuddles), and the other with a pre-verbal object clitic, which made the appearance of an intransitive verb impossible ("Il le dorlote" / He cuddles it). Results showed a lower rate of false alarms for sentences with a pre-verbal object pronoun (3%) compared to locally ambiguous sentences (about 20%), suggesting that participants rapidly incorporate information about a verb's argument structure to constrain lexical access to verbs that match the expected subcategorization frame.
Affiliation(s)
- Perrine Brusini
- Language, Cognition and Development Lab, Cognitive Neuroscience Department, Scuola Internazionale Superiore di Studi Avanzati, Trieste, Italy
- Laboratoire de Sciences Cognitives et de Psycholinguistique, École des Hautes Études en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique, École Normale Supérieure (ENS), Paris, France
- Mélanie Brun
- Laboratoire de Sciences Cognitives et de Psycholinguistique, École des Hautes Études en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique, École Normale Supérieure (ENS), Paris, France
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Isabelle Brunet
- Laboratoire de Sciences Cognitives et de Psycholinguistique, École des Hautes Études en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique, École Normale Supérieure (ENS), Paris, France
- Département d'Etudes Cognitives, Ecole Normale Supérieure - PSL Research University, Paris, France
- Anne Christophe
- Laboratoire de Sciences Cognitives et de Psycholinguistique, École des Hautes Études en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique, École Normale Supérieure (ENS), Paris, France
- Département d'Etudes Cognitives, Ecole Normale Supérieure - PSL Research University, Paris, France

13.
Abstract
When perceiving spoken language, listeners must match the incoming acoustic phonetic input to lexical representations in memory. Models that quantify this process propose that the input activates multiple lexical representations in parallel and that these activated representations compete for recognition (Weber & Scharenborg, 2012). In two experiments, we assessed how grammatically constraining contexts alter the process of lexical competition. The results suggest that grammatical context constrains the lexical candidates that are activated to grammatically appropriate competitors. Stimulus words with little competition from items of the same grammatical class benefit more from the addition of grammatical context than do words with more within-class competition. The results provide evidence that top-down contextual information is integrated in the early stages of word recognition. We propose adding a grammatical class level of analysis to existing models of word recognition to account for these findings.
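The proposed grammatical-class constraint on lexical competition can be sketched as a filter on the activated cohort. The lexicon, word forms, and class labels below are hypothetical, chosen only to illustrate the mechanism:

```python
# Toy cohort activation with an optional grammatical-class filter: context that
# predicts a word class restricts competition to class-appropriate candidates.

lexicon = {"run": "verb", "runs": "verb", "rung": "noun", "runner": "noun"}

def cohort(prefix, expected_class=None):
    """Candidates matching the unfolding input, optionally filtered by class."""
    matches = [w for w in lexicon if w.startswith(prefix)]
    if expected_class is not None:
        matches = [w for w in matches if lexicon[w] == expected_class]
    return sorted(matches)

# without context all four candidates compete; a verb-predicting context
# leaves only the two verbs in the race
```

A target with few same-class competitors gains more from such a filter than one with many, matching the abstract's within-class-competition result.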

14. Farmer TA, Yan S, Bicknell K, Tanenhaus MK. Form-to-expectation matching effects on first-pass eye movement measures during reading. J Exp Psychol Hum Percept Perform 2015; 41:958-976. PMID: 25915072. PMCID: PMC4516711. DOI: 10.1037/xhp0000054
Abstract
Recent Electroencephalography/Magnetoencephalography (EEG/MEG) studies suggest that when contextual information is highly predictive of some property of a linguistic signal, expectations generated from context can be translated into surprisingly low-level estimates of the physical form-based properties likely to occur in subsequent portions of the unfolding signal. Whether form-based expectations are generated and assessed during natural reading, however, remains unclear. We monitored eye movements while participants read phonologically typical and atypical nouns in noun-predictive contexts (Experiment 1), demonstrating that when a noun is strongly expected, fixation durations on first-pass eye movement measures, including first fixation duration, gaze duration, and go-past times, are shorter for nouns with category typical form-based features. In Experiments 2 and 3, typical and atypical nouns were placed in sentential contexts normed to create expectations of variable strength for a noun. Context and typicality interacted significantly at gaze duration. These results suggest that during reading, form-based expectations that are translated from higher-level category-based expectancies can facilitate the processing of a word in context, and that their effect on lexical processing is graded based on the strength of category expectancy.

15. Chen Q, Mirman D. Interaction between phonological and semantic representations: time matters. Cogn Sci 2015; 39:538-558. PMID: 25155249. PMCID: PMC4607034. DOI: 10.1111/cogs.12156
Abstract
Computational modeling and eye-tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences activation of semantically related concepts during spoken word recognition (Apfelbaum, Blumstein, & McMurray, 2011). The model made a novel prediction: Semantic input modulates the effect of phonological neighbors on target word processing, producing an approximately inverted-U-shaped pattern with a high phonological density advantage at an intermediate level of semantic input, in contrast to the typical disadvantage for high phonological density words in spoken word recognition. This prediction was confirmed with a new analysis of the Apfelbaum et al. data and in a visual world paradigm experiment with preview duration serving as a manipulation of strength of semantic input. These results are consistent with our previous claim that strongly active neighbors produce net inhibitory effects and weakly active neighbors produce net facilitative effects.
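The closing claim, that weakly active neighbors help while strongly active neighbors hurt, implies an inverted-U net effect. A hypothetical toy curve (not the authors' model; the coefficients are invented) makes the shape concrete:

```python
# Toy net neighbor effect: if facilitation grows linearly with a neighbor's
# activation while inhibition grows quadratically, the net contribution is an
# inverted U: positive for weak activation, negative for strong activation.

def net_neighbor_effect(activation, facil=1.0, inhib=2.0):
    """Net support a neighbor lends the target at a given activation level."""
    return facil * activation - inhib * activation ** 2

weak, intermediate, strong = 0.1, 0.25, 0.6
effects = [net_neighbor_effect(a) for a in (weak, intermediate, strong)]
```

With these invented coefficients the effect peaks at intermediate activation, is smaller but still positive for weak activation, and turns negative for strong activation.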
Affiliation(s)
- Qi Chen
- Center for Studies of Psychological Application and School of Psychology, South China Normal University
- Moss Rehabilitation Research Institute
- Daniel Mirman
- Moss Rehabilitation Research Institute
- Department of Psychology, Drexel University

16. Brown M, Salverda AP, Gunlogson C, Tanenhaus MK. Interpreting prosodic cues in discourse context. Language, Cognition and Neuroscience 2015; 30:149-166. PMID: 25599081. PMCID: PMC4294268. DOI: 10.1080/01690965.2013.862285
Abstract
Two visual-world experiments investigated whether and how quickly discourse-based expectations about the prosodic realization of spoken words modulate interpretation of acoustic-prosodic cues. Experiment 1 replicated effects of segmental lengthening on activation of onset-embedded words (e.g. pumpkin) using resynthetic manipulation of duration and fundamental frequency (F0). In Experiment 2, the same materials were preceded by instructions establishing information-structural differences between competing lexical alternatives (i.e. repeated vs. newly-assigned thematic roles) in critical instructions. Eye-movements generated upon hearing the critical target word revealed a significant interaction between information structure and target-word realization: Segmental lengthening and pitch excursion elicited more fixations to the onset-embedded competitor when the target word remained in the same thematic role, but not when its thematic role changed. These results suggest that information structure modulates the interpretation of acoustic-prosodic cues by influencing expectations about fine-grained acoustic-phonetic properties of the unfolding utterance.
Affiliation(s)
- Meredith Brown
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, US
- Anne Pier Salverda
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, US
- Christine Gunlogson
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, US
- Department of Linguistics, University of Rochester, Rochester, NY, US
- Michael K. Tanenhaus
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, US
- Department of Linguistics, University of Rochester, Rochester, NY, US

17. McClelland JL, Mirman D, Bolger DJ, Khaitan P. Interactive Activation and Mutual Constraint Satisfaction in Perception and Cognition. Cogn Sci 2014; 38:1139-1189. DOI: 10.1111/cogs.12146
Affiliation(s)
- Daniel Mirman
- Department of Psychology; Drexel University and Moss Rehabilitation Research Institute

18. Kukona A, Cho PW, Magnuson JS, Tabor W. Lexical interference effects in sentence processing: evidence from the visual world paradigm and self-organizing models. J Exp Psychol Learn Mem Cogn 2014; 40:326-347. PMID: 24245535. PMCID: PMC4033295. DOI: 10.1037/a0034903
Abstract
Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports nonintegration or late integration. Here we report on a self-organizing neural network framework that addresses one aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In two simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report two experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like "The boy will eat the white …" while viewing visual displays with objects like a white cake (i.e., a predictable direct object of "eat"), a white car (i.e., an object not predicted by "eat," but consistent with "white"), and distractors. In line with our simulation predictions, we found that while listeners fixated the white cake most, they also fixated the white car more than unrelated distractors in this highly constraining sentence (and visual) context.
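The graded pattern reported in this abstract (most fixations to the predictable cake, but reliably more to the cohort-consistent car than to unrelated distractors) can be illustrated with a toy interactive-activation-style sketch. To be clear, this is not the authors' self-organizing model: the update rule, weights, and candidate names below are invented for illustration, with bottom-up phonological support and top-down contextual support simply combined additively before normalization.

```python
# Toy sketch of graded lexical competition under sentence context
# (illustrative only; not the published self-organizing model).

def normalize(act):
    """Luce-choice normalization: activations become proportions."""
    total = sum(act.values())
    return {w: a / total for w, a in act.items()}

def step(act, bottom_up, context, rate=0.5):
    # Each candidate's activation grows with the sum of its bottom-up
    # (speech-signal) support and its top-down (context) support.
    return normalize({
        w: act[w] + rate * (bottom_up[w] + context[w])
        for w in act
    })

# Candidates after hearing "The boy will eat the white ...":
words = ["white_cake", "white_car", "distractor"]
act = {w: 1 / len(words) for w in words}            # uniform start
bottom_up = {"white_cake": 1.0, "white_car": 1.0,   # both match "white"
             "distractor": 0.0}
context = {"white_cake": 1.0, "white_car": 0.0,     # only cake fits "eat"
           "distractor": 0.0}

for _ in range(10):
    act = step(act, bottom_up, context)

# The ordering mirrors the reported fixation pattern:
# cake > car > distractor, with the car never fully suppressed.
assert act["white_cake"] > act["white_car"] > act["distractor"]
print({w: round(a, 3) for w, a in act.items()})
```

Because context contributes additively rather than gating the input, the contextually inappropriate cohort member ("white car") retains partial activation, which is the qualitative signature the experiments tested.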
19
Brock J, Nation K. The Hardest Butter to Button: Immediate Context Effects in Spoken Word Identification. Q J Exp Psychol (Hove) 2014; 67:114-23. [DOI: 10.1080/17470218.2013.791331] [Citation(s) in RCA: 2]
Abstract
According to some theories, the context in which a spoken word is heard has no impact on the earliest stages of word identification. This view has been challenged by recent studies indicating an interactive effect of context and acoustic similarity on language-mediated eye movements. However, an alternative explanation for these results is that participants looked less at acoustically similar objects in constraining contexts simply because they were looking more at other objects that were cued by the context. The current study addressed this concern whilst providing a much finer-grained analysis of the temporal evolution of context effects. Thirty-two adults listened to sentences while viewing a computer display showing four objects. As expected, shortly after the onset of a target word (e.g., “button”) in a neutral context, participants saccaded preferentially towards a cohort competitor of the word (e.g., butter). This effect was significantly reduced when the preceding verb made the competitor an unlikely referent (e.g., “Sam fastened the button”), even though there were no other contextually congruent objects in the display. Moreover, the time-course of these two effects was identical to within approximately 30 ms, indicating that certain forms of contextual information can have a near-immediate effect on word identification.
Affiliation(s)
- Jon Brock
- Australian Research Council Centre of Excellence in Cognition and its Disorders, Department of Cognitive Science, Macquarie University, Sydney, Australia
- Kate Nation
- Department of Experimental Psychology, University of Oxford, Oxford, UK
20
Shook A, Marian V. Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition 2012; 124:314-24. [PMID: 22770677] [DOI: 10.1016/j.cognition.2012.05.014] [Citation(s) in RCA: 58]
Abstract
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals' and English monolinguals' eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at the competing item than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension.
Affiliation(s)
- Anthony Shook
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
21
Huettig F, Rommers J, Meyer AS. Using the visual world paradigm to study language processing: a review and critical evaluation. Acta Psychol (Amst) 2011; 137:151-71. [PMID: 21288498] [DOI: 10.1016/j.actpsy.2010.11.003] [Citation(s) in RCA: 325]
Abstract
We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
Affiliation(s)
- Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, The Netherlands
22
Kukona A, Fang SY, Aicher KA, Chen H, Magnuson JS. The time course of anticipatory constraint integration. Cognition 2011; 119:23-42. [PMID: 21237450] [DOI: 10.1016/j.cognition.2010.12.002] [Citation(s) in RCA: 65]
Abstract
Several studies have demonstrated that as listeners hear sentences describing events in a scene, their eye movements anticipate upcoming linguistic items predicted by the unfolding relationship between scene and sentence. While this may reflect active prediction based on structural or contextual expectations, the influence of local thematic priming between words has not been fully examined. In Experiment 1, we presented verbs (e.g., arrest) in active (Subject-Verb-Object) sentences with displays containing verb-related patients (e.g., crook) and agents (e.g., policeman). We examined patient and agent fixations following the verb, after the agent role had been filled by another entity, but prior to bottom-up specification of the object. Participants were nearly as likely to fixate agents "anticipatorily" as patients, even though the agent role was already filled. However, the patient advantage suggested simultaneous influences of both local priming and active prediction. In Experiment 2, using passive sentences (Object-Verb-Subject), we found stronger, but still graded influences of role prediction when more time elapsed between verb and target, and more syntactic cues were available. We interpret anticipatory fixations as emerging from constraint-based processes that involve both non-predictive thematic priming and active prediction.
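The abstract's central claim, that anticipatory fixations reflect both non-predictive thematic priming and active role prediction, amounts to a simple additive-evidence account, which can be sketched as follows. This is an illustrative sketch only, not the authors' constraint-based model; the two weights and the object labels are invented for the example.

```python
# Illustrative additive-constraint sketch (not the published model).
# After hearing "... arrest ..." in an active sentence, the verb-related
# agent (policeman) receives only thematic-priming support because the
# agent role is already filled, while the verb-related patient (crook)
# receives priming plus graded support from active role prediction.

PRIMING = 1.0      # local thematic association with the verb (invented)
PREDICTION = 0.4   # graded support for the still-open patient role (invented)

support = {
    "crook (patient)": PRIMING + PREDICTION,
    "policeman (agent)": PRIMING,
    "unrelated distractor": 0.0,
}

# Luce-choice normalization turns raw support into fixation-like proportions.
total = sum(support.values())
fixation = {obj: s / total for obj, s in support.items()}

# Agents are fixated nearly as much as patients, but a graded patient
# advantage remains, matching the Experiment 1 pattern.
assert fixation["crook (patient)"] > fixation["policeman (agent)"]
assert fixation["policeman (agent)"] > fixation["unrelated distractor"]
```

Raising the prediction weight relative to the priming weight reproduces, in this toy scheme, the stronger role-prediction effects the abstract reports for passive sentences, where more time and more syntactic cues are available before the target.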
Affiliation(s)
- Anuenue Kukona
- Department of Psychology, University of Connecticut, Storrs, CT 06269, USA.
23
Revill KP, Tanenhaus MK, Aslin RN. Context and spoken word recognition in a novel lexicon. J Exp Psychol Learn Mem Cogn 2008; 34:1207-23. [PMID: 18763901] [DOI: 10.1037/a0012796] [Citation(s) in RCA: 12]
Abstract
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting three models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models.