1. Guarino KF, Wakefield EM, Morrison RG, Richland LE. Why do children struggle on analogical reasoning tasks? Considering the role of problem format by measuring visual attention. Acta Psychol (Amst) 2022; 224:103505. [PMID: 35091207] [DOI: 10.1016/j.actpsy.2022.103505]
Abstract
Given the importance of analogical reasoning to bootstrapping children's understanding of the world, why is this ability so challenging for children? Two common sources of error have been implicated: (1) children's inability to prioritize relational information during initial problem solving, and (2) children's inability to disengage from salient distractors. Here, we use eye tracking to examine children's and adults' looking patterns when solving scene analogies, finding that children and adults attended differently to distractors and that this attention predicted performance. These results provide the most direct evidence to date that feature-based distraction is an important way children and adults differ during early analogical reasoning. In contrast to recent work using propositional analogies, we find no differences in children's and adults' prioritization of relational information during problem solving, and although there are some differences in general attentional strategies across age groups, neither prioritization of relational information nor attentional strategy predicts successful problem solving. Together, our results suggest that analogy problem format should be taken into account when considering developmental factors in children's analogical reasoning.
2. Rhythmic priming of grammaticality judgments in children: Duration matters. J Exp Child Psychol 2020; 197:104885. [DOI: 10.1016/j.jecp.2020.104885]
3. Berglund-Barraza A, Tian F, Basak C, Evans JL. Word Frequency Is Associated With Cognitive Effort During Verbal Working Memory: A Functional Near Infrared Spectroscopy (fNIRS) Study. Front Hum Neurosci 2019; 13:433. [PMID: 31920592] [PMCID: PMC6923201] [DOI: 10.3389/fnhum.2019.00433]
Abstract
PURPOSE Psycholinguistic models traditionally view verbal working memory capacity as independent of linguistic features; connectionist models suggest otherwise. Moreover, lexical processing studies show that high-frequency words differ in cognitive effort from low-frequency words, although these effects during concurrent processing of words in working memory are unknown. This novel study examines potential differences in cognitive effort, as measured by differences in HbO2 and Hb, for high-frequency versus low-frequency words during a working memory paradigm. METHODS A total of 21 neurologically typical participants (ages 18-23) completed an auditory n-back working memory task comparing performance for high- versus low-frequency words. Hemodynamic changes in the prefrontal cortex were recorded with a continuous-wave functional near-infrared spectroscopy (fNIRS) device. Behavioral data (accuracy, reaction time) were recorded using E-Prime. RESULTS Differences in word frequency were evident at both the behavioral and neurological levels. Participants were more accurate, albeit slower, in identifying the target two back in a sequence for low- as compared to high-frequency words. Patterns of hemodynamic change also differed significantly between the high-frequency and low-frequency conditions. CONCLUSION The results indicate that the behavioral and neurological signatures of holding high- versus low-frequency words in working memory differ significantly. Specifically, words differing in frequency place different demands on cognitive processing load in memory-updating tasks.
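The n-back target rule used in this paradigm is easy to state precisely. The sketch below is illustrative only: the function name and word stimuli are hypothetical, not taken from the study, which used auditory presentation and also recorded reaction times.

```python
def nback_targets(sequence, n=2):
    """Return one flag per item: True when it matches the item n positions back.

    In a 2-back task, the participant responds whenever the current
    stimulus repeats the one presented two trials earlier.
    """
    return [i >= n and sequence[i] == sequence[i - n]
            for i in range(len(sequence))]

# Positions 2 and 3 repeat the items presented two trials earlier.
hits = nback_targets(["cat", "dog", "cat", "dog", "fish"], n=2)
```

Accuracy in the task is then the proportion of trials on which the participant's response matches these flags.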
Affiliation(s)
- Amy Berglund-Barraza
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, TX, United States
- Fenghua Tian
- Department of Bioengineering, The University of Texas at Arlington, Arlington, TX, United States
- Chandramalika Basak
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, TX, United States
- Julia L. Evans
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, TX, United States
4.
Abstract
OBJECTIVES The present study investigated presentation modality differences in lexical encoding and working memory representations of spoken words of older, hearing-impaired adults. Two experiments were undertaken: a memory-scanning experiment and a stimulus gating experiment. The primary objective of experiment 1 was to determine whether memory encoding and retrieval and scanning speeds are different for easily identifiable words presented in auditory-visual (AV), auditory-only (AO), and visual-only (VO) modalities. The primary objective of experiment 2 was to determine if memory encoding and retrieval speed differences observed in experiment 1 could be attributed to the early availability of AV speech information compared with AO or VO conditions. DESIGN Twenty-six adults over age 60 years with bilateral mild to moderate sensorineural hearing loss participated in experiment 1, and 24 adults who took part in experiment 1 participated in experiment 2. An item recognition reaction-time paradigm (memory-scanning) was used in experiment 1 to measure (1) lexical encoding speed, that is, the speed at which an easily identifiable word was recognized and placed into working memory, and (2) retrieval speed, that is, the speed at which words were retrieved from memory and compared with similarly encoded words (memory scanning) presented in AV, AO, and VO modalities. Experiment 2 used a time-gated word identification task to test whether the time course of stimulus information available to participants predicted the modality-related memory encoding and retrieval speed results from experiment 1. RESULTS The results of experiment 1 revealed significant differences among the modalities with respect to both memory encoding and retrieval speed, with AV fastest and VO slowest. These differences motivated an examination of the time course of stimulus information available as a function of modality. 
Results from experiment 2 indicated that the encoding and retrieval speed advantages for AV and AO words compared with VO words were mostly driven by the time course of stimulus information. The AV advantage seen in encoding and retrieval speeds is likely due to a combination of robust stimulus information available to the listener earlier in time and lower attentional demands compared with AO or VO encoding and retrieval. CONCLUSIONS Significant differences in lexical encoding and memory retrieval speeds were observed across modalities. The memory-scanning speed advantage observed for AV compared with AO or VO modalities was strongly related to the time course of stimulus information. In contrast, lexical encoding and retrieval speeds for VO words could not be explained by the time course of stimulus information alone. Working memory processes in the VO modality may be affected by greater attentional demands and less information availability compared with the AV and AO modalities. Overall, these results support the hypothesis that the presentation modality of speech inputs (AV, AO, or VO) affects how older adult listeners with hearing loss encode, remember, and retrieve what they hear.
5. Hoover JR. Neighborhood Density and Syntactic Class Effects on Spoken Word Recognition: Specific Language Impairment and Typical Development. J Speech Lang Hear Res 2018; 61:1226-1237. [PMID: 29800356] [PMCID: PMC6195083] [DOI: 10.1044/2018_jslhr-l-17-0099]
Abstract
PURPOSE The purpose of the current study was to determine the effects of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD). METHOD Fifteen children with SLI (M age = 6;5 [years;months]) and 15 with TD (M age = 6;4) completed a forward gating task that presented dense and sparse (neighborhood density) consonant-vowel-consonant nouns and verbs (syntactic class). RESULTS On all dependent variables, the SLI group performed like the TD group. Recognition performance was highest for dense words and nouns. The majority of first nontarget responses shared the first phoneme with the target (i.e., were in the target's cohort). When ranking word types from easiest to most difficult, children showed equivalent recognition performance for dense verbs and sparse nouns, which were both easier to recognize than sparse verbs but more difficult than dense nouns. CONCLUSION The current study yields new insight into how children access lexical-phonological information and syntactic class during spoken word recognition. Given the identical pattern of results for the SLI and TD groups, we hypothesize that accessing lexical-phonological information may be a strength for children with SLI. We also discuss implications for using the forward gating paradigm as a measure of word recognition.
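Neighborhood density is standardly defined as the number of real words differing from the target by a single phoneme substitution, addition, or deletion. A minimal sketch of that definition, using orthographic strings as a stand-in for phoneme sequences and a toy lexicon (both are assumptions for illustration, not the study's materials):

```python
def neighbors(word, lexicon):
    """Return the phonological neighbors of `word`: lexicon entries at
    edit distance 1 (one segment substituted, added, or deleted)."""
    def edit1(a, b):
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):
            # exactly one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        # one addition/deletion: dropping one segment of the longer
        # form must yield the shorter form
        short, long = (a, b) if len(a) < len(b) else (b, a)
        return any(long[:i] + long[i + 1:] == short for i in range(len(long)))
    return [w for w in lexicon if edit1(word, w)]

lexicon = ["cat", "bat", "cot", "cast", "at", "dog"]
print(neighbors("cat", lexicon))  # ['bat', 'cot', 'cast', 'at']
```

A word's density is simply `len(neighbors(word, lexicon))`; "dense" and "sparse" items in such studies are drawn from the high and low ends of that count.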
Affiliation(s)
- Jill R. Hoover
- Department of Communication Disorders, University of Massachusetts Amherst
6.
Abstract
OBJECTIVES The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability (HP, LP) sentences, in which semantic cues were manipulated. The findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of the linguistic cues provided by HP sentences to support word recognition. CHH were expected to require more acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. DESIGN Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (ages 5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively longer by gated increments. Stimuli were presented monaurally through headphones, and children were asked to identify the target word at each successive gate. They were also asked to rate their confidence in their word choice on a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. RESULTS Analysis of the language measures revealed that both the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition.
Although CHH performed comparably with CNH in successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition: CHH needed more gates than CNH to identify words in the LP condition. CNH rated their confidence significantly lower in the LP condition than in the HP condition, whereas CHH showed no significant difference in confidence between conditions. Error patterns for incorrect word responses across gates and predictability conditions varied depending on hearing status. CONCLUSIONS The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with normal hearing suggest variations in how these groups use limited acoustic information to select word candidates.
7. Patro C, Mendel LL. Gated Word Recognition by Postlingually Deafened Adults With Cochlear Implants: Influence of Semantic Context. J Speech Lang Hear Res 2018; 61:145-158. [PMID: 29242894] [DOI: 10.1044/2017_jslhr-h-17-0141]
Abstract
PURPOSE The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and to investigate the facilitative effects of semantic context on IPs. METHOD Listeners with CIs as well as listeners with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli, and individuals with NH listened to full-spectrum or vocoder-processed speech. IPs were determined for both groups from gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words. RESULTS The results indicated that spectrotemporal degradation adversely affected IPs for gated words: CI users, as well as participants with NH listening to vocoded speech, had longer IPs than participants with NH who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups, regardless of the spectral composition of the target speech (full-spectrum or vocoded). Finally, we showed that CI users (and listeners with NH hearing vocoded speech) can overcome such word-processing difficulties with the help of semantic context and perform as well as listeners with NH. CONCLUSION Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation in the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.
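In gating studies, the isolation point is commonly operationalized as the first gate at which the listener's response is correct and remains correct through the final gate. A minimal sketch under that assumption (the function, gate responses, and scoring rule are illustrative, not the authors' code; gate durations in such studies are fixed increments, e.g., tens of milliseconds):

```python
def isolation_point(responses, target):
    """Return the 1-indexed gate at which `target` is first identified
    correctly with no change of response at any later gate, or None if
    the word is never isolated.

    `responses` lists the listener's guess at each successive gate.
    """
    for i, guess in enumerate(responses):
        if guess == target and all(r == target for r in responses[i:]):
            return i + 1
    return None

# Listener guesses across six gates of the word "candle":
ip = isolation_point(["can", "can", "candy", "candle", "candle", "candle"],
                     "candle")
```

Multiplying the gate index by the gate increment converts the IP into the stimulus duration needed for recognition, which is how IPs are reported as times.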
Affiliation(s)
- Lisa Lucks Mendel
- School of Communication Sciences & Disorders, University of Memphis, TN
8. Moradi S, Lidestam B, Rönnberg J. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli. Trends Hear 2016; 20:2331216516653355. [PMID: 27317667] [PMCID: PMC5562342] [DOI: 10.1177/2331216516653355]
Abstract
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs; the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of the audiovisual speech stimuli from the present study with auditory-only IPs extracted from a previous study, to determine the impact of adding visual cues. Both participant groups reached ceiling levels of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group: audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences, whereas in the ENH group, audiovisual presentation shortened the IPs only for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group performed worse than the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of semantic context.
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Sweden
- Björn Lidestam
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Sweden
9. Molis MR, Kampel SD, McMillan GP, Gallun FJ, Dann SM, Konrad-Martin D. Effects of hearing and aging on sentence-level time-gated word recognition. J Speech Lang Hear Res 2015; 58:481-496. [PMID: 25815688] [PMCID: PMC4635971] [DOI: 10.1044/2015_jslhr-h-14-0098]
Abstract
PURPOSE Aging is known to influence temporal processing, but its relationship to speech perception has not been clearly defined. To examine listeners' use of contextual and phonetic information, the Revised Speech Perception in Noise test (R-SPIN) was used to develop a time-gated word (TGW) task. METHOD In Experiment 1, R-SPIN sentence lists were matched on context, target-word length, and median word segment length necessary for target recognition. In Experiment 2, TGW recognition was assessed in quiet and in noise among adults of various ages with normal hearing to moderate hearing loss. Linear regression models of the minimum word duration necessary for correct identification and identification failure rates were developed. Age and hearing thresholds were modeled as continuous predictors with corrections for correlations among multiple measurements of the same participants. RESULTS While aging and hearing loss both had significant impacts on task performance in the most adverse listening condition (low context, in noise), for most conditions, performance was limited primarily by hearing loss. CONCLUSION Whereas hearing loss was strongly related to target-word recognition, the effect of aging was only weakly related to task performance. These results have implications for the design and evaluation of studies of hearing and aging.
10. Rispens J, Baker A, Duinmeijer I. Word recognition and nonword repetition in children with language disorders: the effects of neighborhood density, lexical frequency, and phonotactic probability. J Speech Lang Hear Res 2015; 58:78-92. [PMID: 25421294] [DOI: 10.1044/2014_jslhr-l-12-0393]
Abstract
PURPOSE The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. METHOD Tasks measuring NWR and word recognition were administered to 5 groups of children: 2 groups of TD children (5 and 8 years old), children with specific language impairment (SLI), children with reading impairment (RI), and children with SLI+RI (all 7-8 years old). RESULTS High ND had a negative effect on word recognition in the older TD children and in the children with RI only. There was no ND effect in the younger children or in the children with SLI, who all had lower receptive vocabulary scores than the age-matched TD children and the RI groups. For all groups, NWR items with low PP were more difficult to repeat than items with high PP. This effect was especially pronounced in children with RI. CONCLUSION Both the stage of vocabulary development and the type of language impairment (SLI or RI) impact the way ND and PP affect word recognition and NWR.
11. Moradi S, Lidestam B, Hällgren M, Rönnberg J. Gated auditory speech perception in elderly hearing aid users and elderly normal-hearing individuals: effects of hearing impairment and cognitive capacity. Trends Hear 2014; 18:2331216514545406. [PMID: 25085610] [PMCID: PMC4227697] [DOI: 10.1177/2331216514545406]
Abstract
This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on the identification of auditory speech stimuli (consonants, words, and final words in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite using their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final-word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in IPs or accuracy for final-word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of either IPs or accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context.
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Sweden
- Björn Lidestam
- Department of Behavioral Sciences and Learning, Linköping University, Sweden
- Mathias Hällgren
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Sweden; Department of Otorhinolaryngology/Section of Audiology, Linköping University Hospital, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Sweden
12. Moradi S, Lidestam B, Saremi A, Rönnberg J. Gated auditory speech perception: effects of listening conditions and cognitive capacity. Front Psychol 2014; 5:531. [PMID: 24926274] [PMCID: PMC4040882] [DOI: 10.3389/fpsyg.2014.00531]
Abstract
This study aimed to measure the initial portion of the signal required for correct identification of auditory speech stimuli (isolation points, IPs) in silence and in noise, and to investigate the relationships between auditory and cognitive functions in the two conditions. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in sentences of high and low predictability. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Addition Test were also administered to measure speech-in-noise ability, working memory, and attentional capacity, respectively. The results showed that noise delayed the identification of consonants, words, and final words in both highly and less predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attentional capacity is needed for the early identification of consonants and words in noise.
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lidestam
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Amin Saremi
- Division of Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Cluster of Excellence "Hearing4all", Department for Neuroscience, Computational Neuroscience Group, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Jerker Rönnberg
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
13. Moradi S, Lidestam B, Rönnberg J. Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy. Front Psychol 2013; 4:359. [PMID: 23801980] [PMCID: PMC3685792] [DOI: 10.3389/fpsyg.2013.00359]
Abstract
This study investigated the degree to which audiovisual presentation (compared with auditory-only presentation) affected isolation points (IPs, the amount of time required for correct identification of speech stimuli in a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli but presented audiovisually instead of in an auditory-only manner. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy) but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimulus identification in both silence and noise. The implications of the results are discussed in terms of models of speech understanding.
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
14.
Abstract
In the present study, the gating paradigm was used to measure how much of a musical excerpt needs to be heard to support judgments of familiarity and of emotionality. Nonmusicians heard segments of increasing duration (250, 500, 1,000 msec, etc.). The stimuli were segments from familiar and unfamiliar musical excerpts in Experiment 1, and from very moving and emotionally neutral musical excerpts in Experiment 2. Participants judged how familiar (Experiment 1) or how moving (Experiment 2) each excerpt was to them. The results show that a feeling of familiarity can be triggered by 500-msec segments, and that the distinction between moving and neutral excerpts can be made from 250-msec segments. This finding extends the observation of fast-acting cognitive and emotional processes from face and voice perception to music perception.
15. Mainela-Arnold E, Evans JL, Coady J. Beyond capacity limitations II: effects of lexical processes on word recall in verbal working memory tasks in children with and without specific language impairment. J Speech Lang Hear Res 2010; 53:1656-1672. [PMID: 20705747] [PMCID: PMC2982928] [DOI: 10.1044/1092-4388(2010/08-0240)]
Abstract
PURPOSE This study investigated the impact of lexical processes on target word recall in sentence span tasks in children with and without specific language impairment (SLI). METHOD Participants were 42 children (ages 8;2-12;3 [years;months]): 21 with SLI and 21 typically developing peers matched on age and nonverbal IQ. Children completed a sentence span task in which the target words to be recalled varied in word frequency and neighborhood density. Two measures of lexical processes were examined: the number of nontarget competitor words activated during a gating task (lexical cohort competition) and word definitions. RESULTS Neighborhood density had no effect on word recall for either group. However, both groups recalled significantly more high- than low-frequency words. Lexical cohort competition and the specificity of semantic representations accounted for unique variance in the number of target words recalled in the combined SLI and chronological-age-matched (CA) groups. CONCLUSIONS Performance on verbal working memory span tasks for both SLI and CA children is influenced by word frequency, lexical cohorts, and semantic representations. Future studies need to examine the extent to which verbal working memory capacity is a cognitive construct independent of extant language knowledge representations.
Collapse
|
16
|
Daltrozzo J, Tillmann B, Platel H, Schön D. Temporal aspects of the feeling of familiarity for music and the emergence of conceptual processing. J Cogn Neurosci 2010; 22:1754-69. [PMID: 19580391 DOI: 10.1162/jocn.2009.21311]
Abstract
We tested whether the emergence of familiarity to a melody may trigger or co-occur with the processing of the concept(s) conveyed by the emotions attached to, or the semantic associations with, the melody. With this objective, we recorded ERPs while participants were presented with highly familiar and less familiar melodies in a gating paradigm. The ERPs time-locked to a tone of the melody called the "familiarity emergence point" showed a larger fronto-central negativity for highly familiar compared with less familiar melodies between 200 and 500 msec, with a peak latency around 400 msec. This latency and the sensitivity to the degree of familiarity/conceptual information suggest that this component was an N400, a marker of conceptual processing. Our data suggest that the feeling of familiarity evoked by a musical excerpt could be accompanied by other processing mechanisms at the conceptual level. Coupling the gating paradigm with ERP analyses might become a new avenue for investigating the neurocognitive basis of implicit musical knowledge.
Affiliation(s)
- Jérôme Daltrozzo
- Mediterranean Institute of Cognitive Neurosciences, CNRS, Marseille, France.
|
17
|
Mainela-Arnold E, Evans JL, Coady JA. Lexical representations in children with SLI: evidence from a frequency-manipulated gating task. J Speech Lang Hear Res 2008; 51:381-93. [PMID: 18367684 PMCID: PMC4707012 DOI: 10.1044/1092-4388(2008/028)]
Abstract
PURPOSE This study investigated lexical representations of children with specific language impairment (SLI) and typically developing, chronological age-matched (CA) peers on a frequency-manipulated gating task. The study tested the hypothesis that children with SLI have holistic phonological representations of words, that is, that children with SLI would exhibit smaller effects of neighborhood density on gating durations than CA peers and that children with SLI would be as efficient as CA peers in accessing high-frequency words but that they would differ from their age-matched peers in accessing low-frequency words. METHOD Thirty-two children (ages 8;5-12;3 [years;months]) participated: 16 children with SLI and 16 typically developing peers matched on age and nonverbal IQ. Children's word guesses after different gating durations were investigated. RESULTS Contrary to predictions, no group differences in effects of distributional regularity were found: Children in both groups required equally longer acoustic chunks to access words that were low in frequency and came from dense neighborhoods. However, children with SLI appeared to vacillate between multiple word candidates at significantly later gates when compared with children in the CA group. CONCLUSIONS Children with SLI did not exhibit evidence for phonologically holistic lexical representations. Instead, they appeared more vulnerable to competing words.
Affiliation(s)
- Elina Mainela-Arnold
- Department of Communication Sciences and Disorders, Pennsylvania State University, 401K Ford Building, University Park, PA 16802-3100, USA.
|
18
|
Barkhuysen P, Krahmer E, Swerts M. The interplay between the auditory and visual modality for end-of-utterance detection. J Acoust Soc Am 2008; 123:354-365. [PMID: 18177165 DOI: 10.1121/1.2816561]
Abstract
The existence of auditory cues such as intonation, rhythm, and pausing that facilitate end-of-utterance detection is by now well established. It has been argued repeatedly that speakers may also employ visual cues to indicate that they are at the end of their utterance. This raises at least two questions, which are addressed in the current paper. First, which modalities do speakers use for signalling finality and nonfinality, and second, how sensitive observers are to these signals. Our goal is to investigate the relative contribution of three different conditions to end-of-utterance detection: the two unimodal ones, vision only and audio only, and their bimodal combination. Speaker utterances were collected via a novel semicontrolled production experiment, in which participants provided lists of words in an interview setting. The data thus collected were used in two perception experiments, which systematically compared responses to unimodal (audio only and vision only) and bimodal (audio-visual) stimuli. Experiment I is a reaction time experiment, which revealed that humans are significantly quicker in end-of-utterance detection when confronted with bimodal or audio-only stimuli than with vision-only stimuli. No significant differences in reaction times were found between the bimodal and audio-only conditions, and therefore a second experiment was conducted. Experiment II is a classification experiment, which showed that participants perform significantly better in the bimodal condition than in the two unimodal ones. Both experiments revealed interesting differences between speakers in the various conditions, which indicates that some speakers are more expressive in the visual and others in the auditory modality.
Affiliation(s)
- Pashiera Barkhuysen
- Communication & Cognition, Faculty of Arts, Tilburg University, P.O. Box 90153, NL-5000 LE Tilburg, The Netherlands
|
19
|
Ventura P, Kolinsky R, Fernandes S, Querido L, Morais J. Lexical restructuring in the absence of literacy. Cognition 2007; 105:334-61. [PMID: 17113063 DOI: 10.1016/j.cognition.2006.10.002]
Abstract
Vocabulary growth was suggested to prompt the implementation of increasingly finer-grained lexical representations of spoken words in children (e.g., [Metsala, J. L., & Walley, A. C. (1998). Spoken vocabulary growth and the segmental restructuring of lexical representations: precursors to phonemic awareness and early reading ability. In J. L. Metsala & L. C. Ehri (Eds.), Word recognition in beginning literacy (pp. 89-120). Hillsdale, NJ: Erlbaum.]). Although literacy was not explicitly mentioned in this lexical restructuring hypothesis, the process of learning to read and spell might also have a significant impact on the specification of lexical representations (e.g., [Carroll, J. M., & Snowling, M. J. (2001). The effects of global similarity between stimuli on children's judgments of rime and alliteration. Applied Psycholinguistics, 22, 327-342.]; [Goswami, U. (2000). Phonological representations, reading development and dyslexia: Towards a cross-linguistic theoretical framework. Dyslexia, 6, 133-151.]). This is what we checked in the present study. We manipulated word frequency and neighborhood density in a gating task (Experiment 1) and a word-identification-in-noise task (Experiment 2) presented to Portuguese literate and illiterate adults. Ex-illiterates were also tested in Experiment 2 in order to disentangle the effects of vocabulary size and literacy. There was an interaction between word frequency and neighborhood density, which was similar in the three groups. These did not differ even for the words that are supposed to undergo lexical restructuring the latest (low frequency words from sparse neighborhoods). Thus, segmental lexical representations seem to develop independently of literacy. While segmental restructuring is not affected by literacy, it constrains the development of phoneme awareness as shown by the fact that, in Experiment 3, neighborhood density modulated the phoneme deletion performance of both illiterates and ex-illiterates.
Affiliation(s)
- Paulo Ventura
- Faculdade de Psicologia e de Ciências da Educação, Universidade de Lisboa, Portugal.
|
20
|
Bruno JL, Manis FR, Keating P, Sperling AJ, Nakamoto J, Seidenberg MS. Auditory word identification in dyslexic and normally achieving readers. J Exp Child Psychol 2007; 97:183-204. [PMID: 17359994 PMCID: PMC1952214 DOI: 10.1016/j.jecp.2007.01.005]
Abstract
The integrity of phonological representation/processing in dyslexic children was explored with a gating task in which children listened to successively longer segments (gates) of a word. At each gate, the task was to decide what the entire word was. Responses were scored for overall accuracy as well as the children's sensitivity to coarticulation from the final consonant. As a group, dyslexic children were less able than normally achieving readers to detect coarticulation present in the vowel portion of the word, particularly on the most difficult items, namely those ending in a nasal sound. Hierarchical regression and path analyses indicated that phonological awareness mediated the relation of gating and general language ability to word and pseudoword reading ability.
Affiliation(s)
- Jennifer L. Bruno
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
- Franklin R. Manis
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
- Corresponding author. Fax: 213-746-9082. E-mail address: (F. R. Manis)
- Patricia Keating
- Department of Linguistics, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Anne J. Sperling
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057
- Jonathan Nakamoto
- Department of Developmental Psychology, University of Southern California, Los Angeles, CA 90089, USA
- Mark S. Seidenberg
- Department of Psychology, University of Wisconsin–Madison, Madison, WI 53706
|
21
|
Jescheniak JD, Hahne A, Hoffmann S, Wagner V. Phonological activation of category coordinates during speech planning is observable in children but not in adults: evidence for cascaded processing. J Exp Psychol Learn Mem Cogn 2006; 32:373-86. [PMID: 16569153 DOI: 10.1037/0278-7393.32.3.373]
Abstract
There is a long-standing debate in the area of speech production on the question of whether only words selected for articulation are phonologically activated (as maintained by serial-discrete models) or whether this is also true for their semantic competitors (as maintained by forward-cascading and interactive models). Past research has addressed this issue by testing whether retrieval of a target word (e.g., cat) affects--or is affected by--the processing of a word that is phonologically related to a semantic category coordinate of the target (e.g., doll, related to dog) and has consistently failed to obtain such mediated effects in adult speakers. The authors present a series of experiments demonstrating that mediated effects are present in children (around age 7) and diminish with increasing age. This observation provides further evidence for cascaded models of lexical retrieval.
|
22
|
Bowey JA, Hirakis E. Testing the protracted lexical restructuring hypothesis: the effects of position and acoustic-phonetic clarity on sensitivity to mispronunciations in children and adults. J Exp Child Psychol 2006; 95:1-17. [PMID: 16546204 DOI: 10.1016/j.jecp.2006.02.001]
Abstract
Although developmental increases in the size of the position effect within a mispronunciation detection task have been interpreted as consistent with a view of the lexical restructuring process as protracted, the position effect itself might not be reliable. The current research examined the effects of position and clarity of acoustic-phonetic information on sensitivity to mispronounced onsets in 5- and 6-year-olds and adults. Both children and adults showed a position effect only when mispronunciations also differed in the amount of relevant acoustic-phonetic information. Adults' sensitivity to mispronounced second-syllable onsets also reflected the availability of acoustic-phonetic information. The implications of these findings are discussed in relation to the lexical restructuring hypothesis.
Affiliation(s)
- Judith A Bowey
- School of Psychology, University of Queensland, Brisbane, Qld 4072, Australia.
|
23
|
Sutherland D, Gillon GT. Assessment of Phonological Representations in Children With Speech Impairment. Lang Speech Hear Serv Sch 2005; 36:294-307. [PMID: 16389702 DOI: 10.1044/0161-1461(2005/030)]
Abstract
Purpose:
This study explored the use of assessment tasks to examine underlying phonological representations in preschool children with speech impairment. The study also investigated the association between performance on phonological representation tasks and phonological awareness development.
Method:
The performance of 9 children (aged 3;09 [years;months] to 5;03) with moderate or severe speech impairment and 17 children of the same age with typical speech development was investigated on a range of novel receptive-based assessment tasks designed to tap underlying phonological representations.
Results:
Preschool children with speech impairment experienced more difficulty judging correct and incorrect speech productions of familiar multisyllable words and showed inferior performance in the ability to learn nonwords as compared to children without speech impairment. Performance on these tasks was moderately correlated with phonological awareness ability.
Clinical Implications:
Factors such as the precision and accessibility of underlying phonological representations of spoken words may contribute to problems in phonological awareness and subsequent reading development for young children with speech impairment. Receptive-based assessments that examine underlying phonological representations provide clinically relevant information for children with speech impairment.
Affiliation(s)
- Dean Sutherland
- Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand.
|
24
|
Collison EA, Munson B, Carney AE. Relations among linguistic and cognitive skills and spoken word recognition in adults with cochlear implants. J Speech Lang Hear Res 2004; 47:496-508. [PMID: 15212564 DOI: 10.1044/1092-4388(2004/039)]
Abstract
This study examined spoken word recognition in adults with cochlear implants (CIs) to determine the extent to which linguistic and cognitive abilities predict variability in speech-perception performance. Both a traditional consonant-vowel-consonant (CVC)-repetition measure and a gated-word recognition measure (F. Grosjean, 1996) were used. Stimuli in the gated-word-recognition task varied in neighborhood density. Adults with CIs repeated CVC words less accurately than did age-matched adults with normal hearing sensitivity (NH). In addition, adults with CIs required more acoustic information to recognize gated words than did adults with NH. Neighborhood density had a smaller influence on gated-word recognition by adults with CIs than on recognition by adults with NH. With the exception of 1 outlying participant, standardized, norm-referenced measures of cognitive and linguistic abilities were not correlated with word-recognition measures. Taken together, these results do not support the hypothesis that cognitive and linguistic abilities predict variability in speech-perception performance in a heterogeneous group of adults with CIs. Findings are discussed in light of the potential role of auditory perception in mediating relations among cognitive and linguistic skill and spoken word recognition.
|
25
|
Maillart C, Schelstraete MA, Hupet M. Phonological representations in children with SLI: a study of French. J Speech Lang Hear Res 2004; 47:187-198. [PMID: 15072538 DOI: 10.1044/1092-4388(2004/016)]
Abstract
The present research examined the quality of the phonological representations of French children with specific language impairment (SLI) and those with normal language development (NLD). Twenty-five children with SLI and 50 children with NLD matched on lexical age level participated in an auditory lexical decision task. The observations gathered in our study can be summarized as follows. First, children with a higher receptive lexical level performed better, and this was true both for children with NLD and children with SLI. Second, both children with NLD and those with SLI were more likely to reject pseudowords resulting from a modification affecting the number of syllables of a word than pseudowords resulting from a slight modification with the number of syllables unchanged. This difference, however, was greater for the children with SLI, who appeared to have much difficulty rejecting pseudowords resulting from slight modifications. Finally, the performance of children with SLI was particularly poor when presented with pseudowords resulting from a slight modification at the beginning or the end of a word. These findings are interpreted as supporting the hypothesis of an under-specification of phonological representations in children with SLI.
Affiliation(s)
- Christelle Maillart
- Unité Cognition & Développement, Faculté de Psychologie et des Sciences de l'Éducation, Université catholique de Louvain, Louvain-la-Neuve, Belgium.
|
26
|
Dalla Bella S, Peretz I, Aronoff N. Time course of melody recognition: A gating paradigm study. Percept Psychophys 2003; 65:1019-28. [PMID: 14674630 DOI: 10.3758/bf03194831]
Abstract
Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.
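The note-by-note gating used here (the first note, then the first two notes, and so forth) amounts to taking incremental prefixes of the note sequence. A minimal sketch; the function name and the example melody are hypothetical, not taken from the study's stimuli:

```python
def melody_gates(notes):
    """Incremental note-prefix gates: first note, first two notes, ...

    notes: an ordered sequence of note labels for a melody.
    Returns one gate per added note, each starting at the melody's onset.
    """
    return [notes[:i] for i in range(1, len(notes) + 1)]

# Hypothetical opening of a familiar melody
anthem = ["G4", "G4", "A4", "G4", "C5", "B4"]
for gate in melody_gates(anthem):
    print(gate)
```

After each gate, recognition would be probed by a familiarity judgment (as in Experiment 1) or a sung continuation (as in Experiment 2); the measure of interest is how many notes are needed before the correct melody is identified.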
|
27
|
Hanauer JB, Brooks PJ. Developmental change in the cross-modal Stroop effect. Percept Psychophys 2003; 65:359-66. [PMID: 12785066 DOI: 10.3758/bf03194567]
Abstract
E. M. Elliott, Cowan, and Valle-Inclan (1998) reported a cross-modal Stroop-like interference effect in adults when an auditory distractor (a color or noncolor word) occurred simultaneously with a color patch to be named. Response times were slower with color as opposed to noncolor distractors. To distinguish two accounts of this phenomenon, we tested 4- to 11-year-olds and adults. The suppression hypothesis posits that the irrelevant word enters a phonological buffer and is injurious to color naming if the participant is unable to suppress its representation in time. The concurrent processing hypothesis states that interference occurs when the distractor and the color name are lexically accessed at the same time. Our finding that the cross-modal Stroop effect occurred in young children even with a distractor presented 500 msec in advance of the color patch favors the suppression account. Development in executive functioning may also contribute to the interference effect's becoming progressively weaker with age.
Affiliation(s)
- Julie B Hanauer
- City University of New York, Staten Island, New York 10314, USA
|
28
|
Abstract
Speech processing in adults is continuous: as acoustic-phonetic information is heard, listeners' interpretation of the speech is updated incrementally. The present studies used a visual fixation technique to examine whether young children also interpret speech continuously. In Experiments 1 and 2, 24-month-old children looked at visual displays while hearing sentences. Sentences each contained a target word labeling one of the two displayed pictures. Children's latency to fixate the labeled picture was measured. Children's responses were delayed when the competing distractor picture's label overlapped phonetically with the target at onset (dog-doll), but not when the pictures' labels rhymed (ball-doll), showing that children monitored the speech stream incrementally for acoustic-phonetic information specifying the correct picture. In Experiment 3, adults' responses in the same task were found to be very similar to those of the 24-month-olds. This research shows that by 24 months, children can interpret speech continuously.
Affiliation(s)
- D Swingley
- Department of Psychology, Stanford University, Stanford, CA 94305, USA.
|
29
|
Erdeljac V, Mildner V. Temporal structure of spoken-word recognition in Croatian in light of the cohort theory. Brain Lang 1999; 68:95-103. [PMID: 10433745 DOI: 10.1006/brln.1999.2076]
Abstract
This article addresses two issues related to spoken-word recognition: the relationship between the amount of acoustic material and the degree of recognizability at the lexical and phonemic levels and the influence of segmentation ambiguities on the speed and success of the recognition process. The analyses were done on Croatian language materials using the gating paradigm. The results indicate that the degree of recognizability is directly proportional to signal duration and inversely proportional to the complexity of segmentation. A combination of bottom-up and top-down processing is involved in successful word recognition.
Affiliation(s)
- V Erdeljac
- Faculty of Philosophy, University of Zagreb, Zagreb, Croatia.
|
30
|
Metsala JL. An examination of word frequency and neighborhood density in the development of spoken-word recognition. Mem Cognit 1997; 25:47-56. [PMID: 9046869 DOI: 10.3758/bf03197284]
Abstract
In this study, the effects of word-frequency and phonological similarity relations in the development of spoken-word recognition were examined. Seven-, 9-, and 11-year-olds and adults listened to increasingly longer segments of high- and low-frequency monosyllabic words with many or few word neighbors that sounded similar (neighborhood density). Older children and adults required less of the acoustic-phonetic information to recognize words with few neighbors and low-frequency words than did younger children. Adults recognized high-frequency words with few neighbors on the basis of less input than did all three of the children's groups. All subjects showed a higher proportion of different-word guesses for words with many versus few neighbors. A frequency x neighborhood density interaction revealed that recognition is facilitated for high-frequency words with few versus many neighbors; the opposite was found for low-frequency words. Results are placed within a developmental framework on the emergence of the phoneme as a unit in perceptual processing.
Affiliation(s)
- J L Metsala
- Department of Human Development, University of Maryland, College Park 20742, USA.
|
31
|
|