1
Brosseau-Lapré F, Schumaker J, Kluender KR. Perception of medial consonants by preschoolers with and without speech sound disorders. J Speech Lang Hear Res 2020; 63:3600-3610. [PMID: 32976079] [PMCID: PMC8582902] [DOI: 10.1044/2020_jslhr-20-00146]
Abstract
Purpose: This study compared perception of consonants in medial position by preschoolers with and without speech sound disorder (SSD) who had similar vocabulary and language skills. In addition, we investigated the association between speech perception and production skills. Method: Participants were 36 monolingual English-speaking children with similar vocabulary and language skills, half with SSD and half with typical speech and language development (TD). Participants completed a speech perception task targeting the phonemes /p, k, s, ɹ/ in /aCa/ disyllables and a comprehensive battery of speech and language measures. Results: Children with SSD were significantly less accurate in perceiving speech sound distinctions than peers with TD. The phoneme /p/ was perceived significantly more accurately than the three other target phonemes. The correlation between overall perceptual accuracy and overall production accuracy was significant. Furthermore, perceptual accuracy for the targets /k, s, ɹ/ was significantly correlated with production accuracy for these phonemes. Conclusions: Many children with SSD have greater difficulty perceiving the specific speech sounds they misarticulate. Nonetheless, most children with SSD present with broader perceptual difficulties than peers with TD who have similar vocabulary and language skills.
Affiliation(s)
- Jennifer Schumaker
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
- Keith R. Kluender
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
2
Filippi P, Laaha S, Fitch WT. Utterance-final position and pitch marking aid word learning in school-age children. R Soc Open Sci 2017; 4:161035. [PMID: 28878961] [PMCID: PMC5579076] [DOI: 10.1098/rsos.161035]
Abstract
We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word-meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
Affiliation(s)
- Piera Filippi
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Sabine Laaha
- Department of Linguistics, University of Vienna, Vienna, Austria
- W. Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
3
Endress AD, Langus A. Transitional probabilities count more than frequency, but might not be used for memorization. Cogn Psychol 2016; 92:37-64. [PMID: 27907807] [DOI: 10.1016/j.cogpsych.2016.11.004]
Abstract
Learners often need to extract recurring items from continuous sequences, in both vision and audition. The best-known example is probably found in word-learning, where listeners have to determine where words start and end in fluent speech. This could be achieved through universal and experience-independent statistical mechanisms, for example by relying on Transitional Probabilities (TPs). Further, these mechanisms might allow learners to store items in memory. However, previous investigations have yielded conflicting evidence as to whether a sensitivity to TPs is diagnostic of the memorization of recurring items. Here, we address this issue in the visual modality. Participants were familiarized with a continuous sequence of visual items (i.e., arbitrary or everyday symbols), and then had to choose between (i) high-TP items that appeared in the sequence, (ii) high-TP items that did not appear in the sequence, and (iii) low-TP items that appeared in the sequence. Items matched in TPs but differing in (chunk) frequency were much harder to discriminate than items differing in TPs (with no significant sensitivity to chunk frequency), and learners preferred unattested high-TP items over attested low-TP items. Contrary to previous claims, these results cannot be explained on the basis of the similarity of the test items. Learners thus weigh within-item TPs higher than the frequency of the chunks, even when the TP differences are relatively subtle. We argue that these results are problematic for distributional clustering mechanisms that analyze continuous sequences, and provide supporting computational results. We suggest that the role of TPs might not be to memorize items per se, but rather to prepare learners to memorize recurring items once they are presented in subsequent learning situations with richer cues.
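The transitional probabilities this literature appeals to have a standard definition, TP(a → b) = count(ab) / count(a), which contrasts directly with raw chunk frequency. A minimal illustrative sketch (the toy stream and all names are ours, not from the paper):

```python
from collections import Counter

def transitional_probabilities(seq):
    """Forward transitional probability: TP(a -> b) = count(ab) / count(a)."""
    pair_counts = Counter(zip(seq, seq[1:]))   # frequency of each adjacent pair
    first_counts = Counter(seq[:-1])           # frequency of each pair-initial unit
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream built from two recurring "items" (ab and cd):
stream = list("ababcdcdabcd")
tps = transitional_probabilities(stream)

# Within-item transitions have high TPs; transitions that span an
# item boundary have lower TPs, which is the cue to where items end.
print(tps[("a", "b")])  # 1.0 (a is always followed by b)
print(tps[("c", "d")])  # 1.0
print(round(tps[("b", "c")], 2))  # 0.67, a boundary transition
```

Note that a chunk such as "bc" can be frequent in the stream while still having a low within-chunk TP, which is exactly the dissociation the experiments exploit.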
Affiliation(s)
- Alan Langus
- Cognitive Neuroscience Sector, International School for Advanced Studies, Trieste, Italy
4
White KS, Chambers KE, Miller Z, Jethava V. Listeners learn phonotactic patterns conditioned on suprasegmental cues. Q J Exp Psychol (Hove) 2016; 70:2560-2576. [PMID: 27734753] [DOI: 10.1080/17470218.2016.1247896]
Abstract
Language learners are sensitive to phonotactic patterns from an early age and can acquire both simple and second-order positional restrictions contingent on segment identity (e.g., /f/ is an onset with /æ/ but a coda with /ɪ/). The present study explored the learning of phonotactic patterns conditioned on a suprasegmental cue: lexical stress. Adults first heard non-words in which trochaic and iambic items had different consonant restrictions. In Experiment 1, participants trained with phonotactic patterns involving natural classes of consonants later falsely recognized novel items that were consistent with the training patterns (legal items), demonstrating that they had learned the stress-conditioned phonotactic patterns. However, this was true only for iambic items. In Experiment 2, participants completed a forced-choice test between novel legal and novel illegal items and were again successful only for the iambic items. Experiment 3 demonstrated learning for trochaic items when they were presented alone. Finally, in Experiment 4, in which the training phase was lengthened, participants successfully learned both sets of phonotactic patterns. These experiments provide evidence that learners consider more global phonological properties in the computation of phonotactic patterns, and that learners can acquire multiple sets of patterns simultaneously, even contradictory ones.
Affiliation(s)
- Katherine S White
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Kyle E Chambers
- Department of Psychology, Gustavus Adolphus College, St Peter, MN, USA
- Zachary Miller
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Vibhuti Jethava
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
5
Zebra finches are able to learn affixation-like patterns. Anim Cogn 2015; 19:65-73. [PMID: 26297477] [PMCID: PMC4701768] [DOI: 10.1007/s10071-015-0913-x]
Abstract
Adding an affix to transform a word is common across the world's languages, and affixes are more likely to occur at word edges. However, detection of affixation patterns is also observed in learning tasks outside the domain of language, suggesting that the underlying mechanism from which affixation patterns have arisen may not be specific to language, or even to humans. We addressed whether a songbird, the zebra finch, is able to discriminate between, and generalize, affixation-like patterns. Zebra finches were trained and tested in a Go/Nogo paradigm to discriminate artificial song element sequences resembling prefixed and suffixed ‘words.’ The ‘stems’ of the ‘words’ consisted of different combinations of a triplet of song elements, to which a fourth element was added as either a ‘prefix’ or a ‘suffix.’ After training, the birds were tested with novel stems, consisting of either rearranged familiar element types or novel element types. The birds were able to generalize the affixation patterns to novel stems with both familiar and novel element types. Hence, the discrimination resulting from the training was not based on memorization of individual stimuli, but on a property shared among Go or Nogo stimuli, i.e., the affixation patterns. Remarkably, birds trained with suffixation as the Go pattern showed clear evidence of using both prefix and suffix, while those trained with the prefix as the Go stimulus used primarily the prefix. This finding illustrates that an asymmetry in attending to different affixations is not restricted to human languages.
6
Minagawa-Kawai Y, Cristia A, Long B, Vendelin I, Hakuno Y, Dutat M, Filippin L, Cabrol D, Dupoux E. Insights on NIRS sensitivity from a cross-linguistic study on the emergence of phonological grammar. Front Psychol 2013; 4:170. [PMID: 23596428] [PMCID: PMC3627311] [DOI: 10.3389/fpsyg.2013.00170]
Abstract
Each language has a unique set of phonemic categories and phonotactic rules which determine permissible sound sequences in that language. Behavioral research demonstrates that one's native language shapes the perception of both sound categories and sound sequences in adults, and neuroimaging results further indicate that the processing of native phonemes and phonotactics involves a left-dominant perisylvian brain network. Recent work using a novel technique, functional near-infrared spectroscopy (NIRS), has suggested that a left-dominant network becomes evident toward the end of the first year of life as infants process phonemic contrasts. The present research project attempted to assess whether the same pattern would be seen for native phonotactics. We measured brain responses in Japanese- and French-learning infants to two contrasts: Abuna vs. Abna (a phonotactic contrast that is native in French, but not in Japanese) and Abuna vs. Abuuna (a vowel length contrast that is native in Japanese, but not in French). Results did not show a significant response to either contrast in either group, unlike both previous behavioral research on phonotactic processing and NIRS work on phonemic processing. To understand these null results, we performed similar NIRS experiments with Japanese adult participants. These data suggest that the infant null results arise from an interaction of multiple factors, involving the suitability of the experimental paradigm for NIRS measurements and stimulus perceptibility. We discuss the challenges facing this novel technique, focusing in particular on which stimulus presentation parameters can yield sufficiently strong hemodynamic responses in a change detection paradigm.
Affiliation(s)
- Yasuyo Minagawa-Kawai
- Graduate School of Human Relations, Keio University, Tokyo, Japan
- Institut d’Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
- Alejandrina Cristia
- Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Bria Long
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Inga Vendelin
- Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS, ENS, CNRS, Paris, France
- Yoko Hakuno
- Graduate School of Human Relations, Keio University, Tokyo, Japan
- Michel Dutat
- Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS, ENS, CNRS, Paris, France
- Luca Filippin
- Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS, ENS, CNRS, Paris, France
- Emmanuel Dupoux
- Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS, ENS, CNRS, Paris, France
7
Mitchel AD, Weiss DJ. Learning across senses: cross-modal effects in multisensory statistical learning. J Exp Psychol Learn Mem Cogn 2011; 37:1081-91. [PMID: 21574745] [PMCID: PMC4041380] [DOI: 10.1037/a0023700]
Abstract
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
Affiliation(s)
- Aaron D Mitchel
- Department of Psychology and Program in Linguistics, Pennsylvania State University, 643 Moore Building, University Park, PA 16802, USA.
8
Skoruppa K, Peperkamp S. Adaptation to novel accents: feature-based learning of context-sensitive phonological regularities. Cogn Sci 2010; 35:348-66. [PMID: 21429003] [DOI: 10.1111/j.1551-6709.2010.01152.x]
Abstract
This paper examines whether adults can adapt to novel accents of their native language that contain unfamiliar context-dependent phonological alternations. In two experiments, French participants listen to short stories read in accented speech. Their knowledge of the accents is then tested in a forced-choice identification task. In Experiment 1, two groups of listeners are exposed to newly created French accents in which certain vowels harmonize or disharmonize, respectively, to the rounding of the preceding vowel. Despite the cross-linguistic predominance of vowel harmony over disharmony, the two groups adapt equally well to both accents, suggesting that this typological difference is not reflected in perceptual learning. Experiment 2 further explores the mechanism underlying this type of phonological learning. Participants are exposed to an accent in which some vowels harmonize and others disharmonize, yielding an increased featural complexity. They adapt less well to this regularity, showing that adaptation to novel accents involves feature-based inferences.
Affiliation(s)
- Katrin Skoruppa
- Department of Speech, Hearing and Phonetic Sciences, UCL Division of Psychology and Language Sciences, London, UK.