1. Ribas-Prats T, Cordero G, Lip-Sosa DL, Arenillas-Alcón S, Costa-Faidella J, Gómez-Roig MD, Escera C. Developmental Trajectory of the Frequency-Following Response During the First 6 Months of Life. J Speech Lang Hear Res 2023; 66:4785-4800. [PMID: 37944057 DOI: 10.1044/2023_jslhr-23-00104]
Abstract
PURPOSE The aim of the present study is to characterize the maturational changes during the first 6 months of life in the neural encoding of two speech sound features relevant for early language acquisition: the stimulus fundamental frequency (fo), related to stimulus pitch, and the vowel formant composition, particularly F1. The frequency-following response (FFR) was used as a snapshot into the neural encoding of these two stimulus attributes. METHOD FFRs to a consonant-vowel stimulus /da/ were retrieved from electroencephalographic recordings in a sample of 80 healthy infants (45 at birth and 35 at the age of 1 month). Thirty-two infants (16 recorded at birth and 16 recorded at 1 month) returned for a second recording at 6 months of age. RESULTS Stimulus fo and F1 encoding showed improvements from birth to 6 months of age. Most remarkably, a significant improvement in the F1 neural encoding was observed during the first month of life. CONCLUSION Our results highlight the rapid and sustained maturation of the basic neural machinery necessary for the phoneme discrimination ability during the first 6 months of age.
Affiliation(s)
- Teresa Ribas-Prats
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Barcelona, Spain
- Gaël Cordero
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Barcelona, Spain
- Diana Lucia Lip-Sosa
- Institut de Recerca Sant Joan de Déu, Barcelona, Spain
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Spain
- Sonia Arenillas-Alcón
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Barcelona, Spain
- Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Barcelona, Spain
- María Dolores Gómez-Roig
- Institut de Recerca Sant Joan de Déu, Barcelona, Spain
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Spain
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Barcelona, Spain
2. Rudge AM, Coto J, Oster MM, Brooks BM, Soman U, Rufsvold R, Cejas I. Vocabulary Outcomes for 5-Year-Old Children Who Are Deaf or Hard of Hearing: Impact of Age at Enrollment in Specialized Early Intervention. J Deaf Stud Deaf Educ 2022; 27:262-268. [PMID: 35552664 DOI: 10.1093/deafed/enac009]
Abstract
The aims of this study were to examine vocabulary scores of 5-year-old children who are deaf or hard of hearing (DHH), as well as the impact of early enrollment in specialized intervention on vocabulary outcomes. Receptive and expressive vocabulary scores were analyzed for 342 five-year-old children who are DHH enrolled in specialized listening and spoken language intervention programs. Regression analyses were utilized to examine the effects of age at enrollment on vocabulary outcomes. Overall, participants achieved scores within normal test limits on receptive and expressive measures of vocabulary. Children who enrolled in intervention prior to 28 months of age had better vocabulary skills at 5 years old. The findings support that children who are DHH can understand and produce vocabulary at skill levels commensurate with their typically hearing peers, regardless of severity of hearing loss. Results highlight the crucial impact of specialized programs on children's lexical readiness to participate in general education settings by kindergarten.
Affiliation(s)
- Jennifer Coto
- University of Miami, Miller School of Medicine, Miami, USA
- Uma Soman
- Carle Auditory Oral School, Carle Foundation Hospital, Urbana, USA
- Ivette Cejas
- University of Miami, Miller School of Medicine, Miami, USA
3. Millasseau J, Bruggeman L, Yuen I, Demuth K. Temporal cues to onset voicing contrasts in Australian English-speaking children. J Acoust Soc Am 2021; 149:348. [PMID: 33514122 DOI: 10.1121/10.0003060]
Abstract
Voicing contrasts are lexically important for differentiating words in many languages (e.g., "bear" vs "pear"). Temporal differences in the voice onset time (VOT) and closure duration (CD) contribute to the voicing contrast in word-onset position. However, little is known about the acoustic realization of these voicing contrasts in Australian English-speaking children. This is essential for understanding the challenges faced by those with language delay. Therefore, the present study examined the VOT and CD values for word-initial stops as produced by 20 Australian English-speaking 4-5-year-olds. As anticipated, these children produced a systematic distinction between voiced and voiceless stops at all places of articulation (PoAs). However, although the children's VOT values for voiced stops were similar to those of adults, their VOTs for voiceless stops were longer. Like adults, the children also had different CD values for voiced and voiceless categories; however, these were systematically longer than those of adults. Even after adjusting for temporal differences by computing proportional ratios for the VOT and CD, children's voicing contrasts were not yet adultlike. These results suggest that children of this age are still developing appropriate timing and articulatory adjustments for voicing contrasts in the word-initial position.
Affiliation(s)
- Julien Millasseau
- Department of Linguistics, Macquarie University, Sydney, 16 University Avenue, New South Wales 2109, Australia
- Laurence Bruggeman
- Australian Research Council Centre of Excellence for the Dynamics of Language, The MARCS Institute, Western Sydney University, Australia
- Ivan Yuen
- Department of Linguistics, Macquarie University, Sydney, 16 University Avenue, New South Wales 2109, Australia
- Katherine Demuth
- Department of Linguistics, Macquarie University, Sydney, 16 University Avenue, New South Wales 2109, Australia
4. Kokkinaki T, Vasdekis VGS. Beyond the Words: Comparing Interpersonal Engagement Between Maternal and Paternal Infant-Directed Speech Acts. Front Psychol 2020; 11:523551. [PMID: 33343435 PMCID: PMC7744289 DOI: 10.3389/fpsyg.2020.523551]
Abstract
The present study investigates the way infants express their emotions in relation to parental feelings between maternal and paternal questions and direct requests. We therefore compared interpersonal engagement accompanying parental questions and direct requests between infant–mother and infant–father interactions. We video-recorded spontaneous communication between 11 infant–mother and 11 infant–father dyads—from the 2nd to the 6th month—in their home. The main results of this study are summarized as follows: (a) there are similarities in the way preverbal infants use their affections in spontaneous interactions with their mothers and fathers to express signs of sensitivity in sharing knowledge through questions and direct requests; and (b) the developmental trajectories of face-to-face emotional coordination in the course of parental questions descend in a similar way for both parents across the age range of this study. Regarding the developmental trajectories of emotional non-coordination, there is evidence of a linear trend in terms of age difference between the parents’ gender with fathers showing the steeper slope. The results are discussed in relation to the theory of intersubjectivity.
Affiliation(s)
- Theano Kokkinaki
- Laboratory of Applied Psychology, Department of Psychology, University of Crete, Rethymnon, Greece
- Vassilis G S Vasdekis
- Department of Statistics, Athens University of Economics and Business, Athens, Greece
5. Moskowitz HS, Lee WW, Sussman ES. Response Advantage for the Identification of Speech Sounds. Front Psychol 2020; 11:1155. [PMID: 32655436 PMCID: PMC7325938 DOI: 10.3389/fpsyg.2020.01155]
Abstract
The ability to distinguish among different types of sounds in the environment and to identify sound sources is a fundamental skill of the auditory system. This study tested responses to sounds by stimulus category (speech, music, and environmental) in adults with normal hearing to determine under what task conditions there was a processing advantage for speech. We hypothesized that speech sounds would be processed faster and more accurately than non-speech sounds under specific listening conditions and different behavioral goals. Thus, we used three different task conditions allowing us to compare detection and identification of sound categories in an auditory oddball paradigm and in a repetition-switch category paradigm. We found that response time and accuracy were modulated by the specific task demands. The sound category itself had no effect on sound detection outcomes but had a pronounced effect on sound identification. Faster and more accurate responses to speech were found only when identifying sounds. We demonstrate a speech processing "advantage" when identifying the sound category among non-categorical sounds and when detecting and identifying among categorical sounds. Thus, overall, our results are consistent with a theory of speech processing that relies on specialized systems distinct from music and other environmental sounds.
Affiliation(s)
- Howard S Moskowitz
- Department of Otorhinolaryngology-Head and Neck Surgery, Albert Einstein College of Medicine, New York, NY, United States
- Wei Wei Lee
- Department of Neuroscience, Albert Einstein College of Medicine, New York, NY, United States
- Elyse S Sussman
- Department of Otorhinolaryngology-Head and Neck Surgery, Albert Einstein College of Medicine, New York, NY, United States; Department of Neuroscience, Albert Einstein College of Medicine, New York, NY, United States
6
7. Sundara M, Ngon C, Skoruppa K, Feldman NH, Onario GM, Morgan JL, Peperkamp S. Young infants' discrimination of subtle phonetic contrasts. Cognition 2018; 178:57-66. [PMID: 29777983 DOI: 10.1016/j.cognition.2018.05.009]
Abstract
It is generally accepted that infants initially discriminate native and non-native contrasts and that perceptual reorganization within the first year of life results in decreased discrimination of non-native contrasts, and improved discrimination of native contrasts. However, recent findings from Narayan, Werker, and Beddor (2010) surprisingly suggested that some acoustically subtle native-language contrasts might not be discriminated until the end of the first year of life. We first provide countervailing evidence that young English-learning infants can discriminate the Filipino contrast tested by Narayan et al. when tested in a more sensitive paradigm. Next, we show that young infants learning either English or French can also discriminate comparably subtle non-native contrasts from Tamil. These findings show that Narayan et al.'s null findings were due to methodological choices and indicate that young infants are sensitive to even subtle acoustic contrasts that cue phonetic distinctions cross-linguistically. Based on experimental results and acoustic analyses, we argue that instead of specific acoustic metrics, infant discrimination results themselves are the most informative about the salience of phonetic distinctions.
Affiliation(s)
- Megha Sundara
- Dept. of Linguistics, University of California, Los Angeles, United States.
- Céline Ngon
- Laboratoire de Sciences Cognitives et Psycholinguistique (ENS - EHESS - CNRS), Département d'Études Cognitives, École Normale Supérieure - PSL Research University, France
- Katrin Skoruppa
- Institute of Language Sciences and Communication, University of Neuchâtel, Switzerland
- Naomi H Feldman
- Dept. of Linguistics and UMIACS, University of Maryland, United States
- Glenda Molina Onario
- Dept. of Cognitive, Linguistic, and Psychological Sciences, Brown University, United States
- James L Morgan
- Dept. of Cognitive, Linguistic, and Psychological Sciences, Brown University, United States
- Sharon Peperkamp
- Laboratoire de Sciences Cognitives et Psycholinguistique (ENS - EHESS - CNRS), Département d'Études Cognitives, École Normale Supérieure - PSL Research University, France; Maternité Port-Royal, APHP, France
8. MacDonald J. Hearing Lips and Seeing Voices: the Origins and Development of the 'McGurk Effect' and Reflections on Audio-Visual Speech Perception Over the Last 40 Years. Multisens Res 2018; 31:7-18. [PMID: 31264593 DOI: 10.1163/22134808-00002548]
Abstract
In 1976 Harry McGurk and I published a paper in Nature, entitled 'Hearing Lips and Seeing Voices'. The paper described a new audio-visual illusion we had discovered that showed the perception of auditorily presented speech could be influenced by the simultaneous presentation of incongruent visual speech. This hitherto unknown effect has since had a profound impact on audiovisual speech perception research. The phenomenon has come to be known as the 'McGurk effect', and the original paper has been cited in excess of 4800 times. In this paper I describe the background to the discovery of the effect, the rationale for the generation of the initial stimuli, the construction of the exemplars used and the serendipitous nature of the finding. The paper will also cover the reaction (and non-reaction) to the Nature publication, the growth of research on, and utilizing the 'McGurk effect' and end with some reflections on the significance of the finding.
Affiliation(s)
- John MacDonald
- Department of Psychology, University of the West of Scotland, Paisley, PA1 2BE, UK
9. Hochmann JR, Benavides-Varela S, Fló A, Nespor M, Mehler J. Bias for Vocalic Over Consonantal Information in 6-Month-Olds. Infancy 2017. [DOI: 10.1111/infa.12203]
Affiliation(s)
- Jean-Rémy Hochmann
- CNRS, Institut des Sciences Cognitives Marc Jeannerod, UMR 5304, Univ Lyon
- Ana Fló
- Cognitive Neuroscience Department; SISSA, International School for Advanced Studies
- Marina Nespor
- Cognitive Neuroscience Department; SISSA, International School for Advanced Studies
- Jacques Mehler
- Cognitive Neuroscience Department; SISSA, International School for Advanced Studies
10
Abstract
Evidence is presented that 3- and 4-month-old infants are able to integrate two sounds with different sources and locations to form a coherent speech percept. Synthetic speech patterns were presented dichotically so that one ear received the third-formant transition appropriate for the syllable [da] or [ga], and the other ear received the base, that is, the remaining acoustic information necessary for syllabic perception. Adults typically perceive these stimuli as a birdlike chirp at the ear receiving the transition and, depending on which transition is presented, as the syllable [da] or [ga] at the ear receiving the base. Infants discriminated the two dichotic patterns. They also discriminated them when the third-formant transitions were attenuated to the extent that infant listeners could not discriminate them when they were presented in isolation. The results support the contention that the infants integrated the two disparate sources of acoustic information into a coherent percept that is presumably phonetic in nature, and they are also consistent with the view that this organization arises from a specialized system for the perception of speech.
11. Richardson K, Sussman JE. Discrimination and Identification of a Third Formant Frequency Cue to Place of Articulation by Young Children and Adults. Lang Speech 2017; 60:27-47. [PMID: 28326988 DOI: 10.1177/0023830915625680]
Abstract
Typically-developing children, 4 to 6 years of age, and adults participated in discrimination and identification speech perception tasks using a synthetic consonant-vowel continuum ranging from /da/ to /ga/. The seven-step synthetic /da/-/ga/ continuum was created by adjusting the first 40 ms of the third formant frequency transition. For the discrimination task, listeners participated in a Change/No-Change paradigm with four different stimuli compared to the endpoint-1 /da/ token. For the identification task, listeners labeled each token along the /da/-/ga/ continuum as either "DA" or "GA." Results of the discrimination experiment showed that sensitivity to the third-formant transition cue improved for the adult listeners as the stimulus contrast increased, whereas the performance of the children remained poor across all stimulus comparisons. Results of the identification experiment support previous hypotheses of age-related differences in phonetic categorization. Results have implications for normative data on identification and discrimination tasks. These norms provide a metric against which children with auditory-based speech sound disorders can be compared. Furthermore, the results provide some insight into the developmental nature of categorical and non-categorical speech perception.
12. McGurk H, MacDonald J. Auditory-Visual Coordination in the First Year of Life. Int J Behav Dev 2016. [DOI: 10.1177/016502547800100303]
Abstract
Two competing hypotheses concerning the nature of inter-modal development are outlined. The first sees the development proceeding from a state of discrete, independent sensory systems towards integration and synthesis between modalities. This is contrasted with a second viewpoint in which the early responsiveness of the organism is seen as a-modal and the developmental sequence as one of increasing sensory differentiation. A number of studies which have investigated auditory-visual co-ordination in young human infants are reviewed. It is concluded that the data support the notion of ontogenetic development being a process of integration between sensory systems that are initially relatively independent.
13. Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Sloan AM, Kilgard MP. Similarity of cortical activity patterns predicts generalization behavior. PLoS One 2013; 8:e78607. [PMID: 24147140 PMCID: PMC3797841 DOI: 10.1371/journal.pone.0078607]
Abstract
Humans and animals readily generalize previously learned knowledge to new situations. Determining similarity is critical for assigning category membership to a novel stimulus. We tested the hypothesis that category membership is initially encoded by the similarity of the activity pattern evoked by a novel stimulus to the patterns from known categories. We provide behavioral and neurophysiological evidence that activity patterns in primary auditory cortex contain sufficient information to explain behavioral categorization of novel speech sounds by rats. Our results suggest that category membership might be encoded by the similarity of the activity pattern evoked by a novel speech sound to the patterns evoked by known sounds. Categorization based on featureless pattern matching may represent a general neural mechanism for ensuring accurate generalization across sensory and cognitive systems.
Affiliation(s)
- Crystal T. Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
- Claudia A. Perez
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
- Ryan S. Carraway
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
- Kevin Q. Chang
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
- Jarod L. Roland
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
- Andrew M. Sloan
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
- Michael P. Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
14. White KS, Yee E, Blumstein SE, Morgan JL. Adults show less sensitivity to phonetic detail in unfamiliar words, too. J Mem Lang 2013; 68:362-378. [PMID: 24065868 PMCID: PMC3779480 DOI: 10.1016/j.jml.2013.01.003]
Abstract
Young word learners fail to discriminate phonetic contrasts in certain situations, an observation that has been used to support arguments that the nature of lexical representation and lexical processing changes over development. An alternative possibility, however, is that these failures arise naturally as a result of how word familiarity affects lexical processing. In the present work, we explored the effects of word familiarity on adults' use of phonetic detail. Participants' eye movements were monitored as they heard single-segment onset mispronunciations of words drawn from a newly learned artificial lexicon. In Experiment 1, single-feature onset mispronunciations were presented; in Experiment 2, participants heard two-feature onset mispronunciations. Word familiarity was manipulated in both experiments by presenting words with various frequencies during training. Both word familiarity and degree of mismatch affected adults' use of phonetic detail: in their looking behavior, participants did not reliably differentiate single-feature mispronunciations and correct pronunciations of low frequency words. For higher frequency words, participants differentiated both 1- and 2-feature mispronunciations from correct pronunciations. However, responses were graded such that 2-feature mispronunciations had a greater effect on looking behavior. These experiments demonstrate that the use of phonetic detail in adults, as in young children, is affected by word familiarity. Parallels between the two populations suggest continuity in the architecture underlying lexical representation and processing throughout development.
Affiliation(s)
- Eiling Yee
- Basque Center on Cognition, Brain and Language, Spain
- Sheila E. Blumstein
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, United States
- James L. Morgan
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, United States
15. Zevin JD. A sensitive period for shibboleths: the long tail and changing goals of speech perception over the course of development. Dev Psychobiol 2012; 54:632-42. [PMID: 22714710 DOI: 10.1002/dev.20611]
Abstract
It is clear that the ability to learn new speech contrasts changes over development, such that learning to categorize speech sounds as native speakers of a language do is more difficult in adulthood than it is earlier in development. There is also a wealth of data concerning changes in the perception of speech sounds during infancy, such that infants quite rapidly progress from language-general to more language-specific perceptual biases. It is often suggested that the perceptual narrowing observed during infancy plays a causal role in the loss of plasticity observed in adulthood, but the relationship between these two phenomena is complicated. Here I consider the relationship of changes in sensitivity to speech sound categorization over the first 2 years of life, when they appear to reorganize quite rapidly, to the "long tail" of development throughout childhood, in the context of understanding the sensitive period for speech perception.
Affiliation(s)
- Jason D Zevin
- Sackler Institute for Developmental Psychobiology, Weill Cornell Medical College, 1300 York Ave., Box 140, New York, New York 10065, USA.
16. Holt RF, Lalonde K. Assessing toddlers' speech-sound discrimination. Int J Pediatr Otorhinolaryngol 2012; 76:680-92. [PMID: 22402014 PMCID: PMC3335986 DOI: 10.1016/j.ijporl.2012.02.020]
Abstract
OBJECTIVE Valid and reliable methods for assessing speech perception in toddlers are lacking in the field, leading to conspicuous gaps in understanding how speech perception develops and limited clinical tools for assessing sensory aid benefit in toddlers. The objective of this investigation was to evaluate speech-sound discrimination in toddlers using modifications to the Change/No-Change procedure [1]. METHODS Normal-hearing 2- and 3-year-olds' discrimination of acoustically dissimilar ("easy") and similar ("hard") speech-sound contrasts was evaluated in a combined repeated measures and factorial design. Performance was measured in d'. Effects of contrast difficulty and age were examined, as was test-retest reliability, using repeated measures ANOVAs, planned post hoc tests, and correlation analyses. RESULTS The easy contrast (M=2.53) was discriminated better than the hard contrast (M=1.72) across all ages (p<.0001). The oldest group of children (M=3.13) discriminated the contrasts better than the youngest (M=1.04; p<.0001) and the mid-age children (M=2.20; p=.037), who in turn discriminated the contrasts better than the youngest children (p=.010). Test-retest reliability was excellent (r=.886, p<.0001). Almost 90% of the children met the teaching criterion. The vast majority demonstrated the ability to be tested with the modified procedure and discriminated the contrasts. The few who did not were 2.5 years of age and younger. CONCLUSIONS The modifications implemented resulted, at least preliminarily, in a procedure that is reliable and sensitive to contrast difficulty and age in this young group of children, suggesting that these modifications are appropriate for this age group. With further development, the procedure holds promise for use in clinical populations who are believed to have core deficits in rapid phonological encoding, such as children with hearing loss or specific language impairment, children who are struggling to read, and second-language learners.
Affiliation(s)
- Rachael Frush Holt
- Department of Speech and Hearing Sciences, Indiana University, 200 South Jordan Avenue, Bloomington, IN 47405, USA.
17. Song JY, Demuth K, Shattuck-Hufnagel S. The development of acoustic cues to coda contrasts in young children learning American English. J Acoust Soc Am 2012; 131:3036-50. [PMID: 22501078 PMCID: PMC3339504 DOI: 10.1121/1.3687467]
Abstract
Research on children's speech perception and production suggests that consonant voicing and place contrasts may be acquired early in life, at least in word-onset position. However, little is known about the development of the acoustic correlates of later-acquired, word-final coda contrasts. This is of particular interest in languages like English where many grammatical morphemes are realized as codas. This study therefore examined how various non-spectral acoustic cues vary as a function of stop coda voicing (voiced vs. voiceless) and place (alveolar vs. velar) in the spontaneous speech of 6 American-English-speaking mother-child dyads. The results indicate that children as young as 1;6 exhibited many adult-like acoustic cues to voicing and place contrasts, including longer vowels and more frequent use of voice bar with voiced codas, and a greater number of bursts and longer post-release noise for velar codas. However, 1;6-year-olds overall exhibited longer durations and more frequent occurrence of these cues compared to mothers, with decreasing values by 2;6. Thus, English-speaking 1;6-year-olds already exhibit adult-like use of some of the cues to coda voicing and place, though implementation is not yet fully adult-like. Physiological and contextual correlates of these findings are discussed.
Affiliation(s)
- Jae Yung Song
- Department of Communication Sciences and Disorders, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin 53211, USA.
18
19
20. Hayes RA, Slater AM, Longmore CA. Rhyming abilities in 9-month-olds: The role of the vowel and coda explored. Cogn Dev 2009. [DOI: 10.1016/j.cogdev.2008.11.002]
21. Maye J, Weiss DJ, Aslin RN. Statistical phonetic learning in infants: facilitation and feature generalization. Dev Sci 2008; 11:122-34. [PMID: 18171374 DOI: 10.1111/j.1467-7687.2007.00653.x]
Abstract
Over the course of the first year of life, infants develop from being generalized listeners, capable of discriminating both native and non-native speech contrasts, into specialized listeners whose discrimination patterns closely reflect the phonetic system of the native language(s). Recent work by Maye, Werker and Gerken (2002) has proposed a statistical account for this phenomenon, showing that infants may lose the ability to discriminate some foreign language contrasts on the basis of their sensitivity to the statistical distribution of sounds in the input language. In this paper we examine the process of enhancement in infant speech perception, whereby initially difficult phonetic contrasts become better discriminated when they define two categories that serve a functional role in the native language. In particular, we demonstrate that exposure to a bimodal statistical distribution in 8-month-old infants' phonetic input can lead to increased discrimination of difficult contrasts. In addition, this exposure also facilitates discrimination of an unfamiliar contrast sharing the same phonetic feature as the contrast presented during familiarization, suggesting that infants extract acoustic/phonetic information that is invariant across an abstract featural representation.
Affiliation(s)
- Jessica Maye
- Department of Communication Sciences & Disorders and the Northwestern Institute on Complex Systems, Northwestern University, USA.
22. Mattock K, Molnar M, Polka L, Burnham D. The developmental course of lexical tone perception in the first year of life. Cognition 2008; 106:1367-81. [PMID: 17707789 DOI: 10.1016/j.cognition.2007.07.002]
Abstract
Perceptual reorganisation of infants' speech perception has been found from 6 months for consonants and earlier for vowels. Recently, similar reorganisation has been found for lexical tone between 6 and 9 months of age. Given that there is a close relationship between vowels and tones, this study investigates whether the perceptual reorganisation for tone begins earlier than 6 months. Non-tone language English and French infants were tested with the Thai low vs. rising lexical tone contrast, using the stimulus alternating preference procedure. Four- and 6-month-old infants discriminated the lexical tones, and there was no decline in discrimination performance across these ages. However, 9-month-olds failed to discriminate the lexical tones. This particular pattern of decline in nonnative tone discrimination over age indicates that perceptual reorganisation for tone does not parallel the developmentally prior decline observed in vowel perception. The findings converge with previous developmental cross-language findings on tone perception in English-language infants [Mattock, K., & Burnham, D. (2006). Chinese and English infants' tone perception: Evidence for perceptual reorganization. Infancy, 10(3)], and extend them by showing similar perceptual reorganisation for non-tone language infants learning rhythmically different non-tone languages (English and French).
Affiliation(s)
- Karen Mattock
- School of Communication Sciences & Disorders, McGill University, 1266 Pine Avenue West, Montreal, Que., Canada H3G 1A8.
23. Distributed representation of perceptual categories in the auditory cortex. J Comput Neurosci 2007; 24:277-90. [PMID: 17917802 DOI: 10.1007/s10827-007-0055-5]
Abstract
Categorical perception is a process by which a continuous stimulus space is partitioned to represent discrete sensory events. Early experience has been shown to shape categorical perception and enlarge cortical representations of experienced stimuli in the sensory cortex. The present study examines the hypothesis that enlargement in cortical stimulus representations is a mechanism of categorical perception. Perceptual discrimination and identification behaviors were analyzed in model auditory cortices that incorporated sound exposure-induced plasticity effects. The model auditory cortex with over-representations of specific stimuli exhibited categorical perception behaviors for those specific stimuli. These results indicate that enlarged stimulus representations in the sensory cortex may be a mechanism for categorical perceptual learning.
24
25
26. McMurray B, Aslin RN. Infants are sensitive to within-category variation in speech perception. Cognition 2005; 95:B15-26. [PMID: 15694642 DOI: 10.1016/j.cognition.2004.07.005]
Abstract
Previous research on speech perception in both adults and infants has supported the view that consonants are perceived categorically; that is, listeners are relatively insensitive to variation below the level of the phoneme. More recent work, on the other hand, has shown adults to be systematically sensitive to within category variation [McMurray, B., Tanenhaus, M., & Aslin, R. (2002). Gradient effects of within-category phonetic variation on lexical access, Cognition, 86 (2), B33-B42.]. Additionally, recent evidence suggests that infants are capable of using within-category variation to segment speech and to learn phonetic categories. Here we report two studies of 8-month-old infants, using the head-turn preference procedure, that examine more directly infants' sensitivity to within-category variation. Infants were exposed to 80 repetitions of words beginning with either /b/ or /p/. After exposure, listening times to tokens of the same category with small variations in VOT were significantly different than to both the originally exposed tokens and to the cross-category-boundary competitors. Thus infants, like adults, show systematic sensitivity to fine-grained, within-category detail in speech perception.
Affiliation(s)
- Bob McMurray
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA.
27. Holt LL, Lotto AJ, Diehl RL. Auditory discontinuities interact with categorization: implications for speech perception. J Acoust Soc Am 2004; 116:1763-1773. [PMID: 15478443 DOI: 10.1121/1.1778838]
Abstract
Behavioral experiments with infants, adults, and nonhuman animals converge with neurophysiological findings to suggest that there is a discontinuity in auditory processing of stimulus components differing in onset time by about 20 ms. This discontinuity has been implicated as a basis for boundaries between speech categories distinguished by voice onset time (VOT). Here, it is investigated how this discontinuity interacts with the learning of novel perceptual categories. Adult listeners were trained to categorize nonspeech stimuli that mimicked certain temporal properties of VOT stimuli. One group of listeners learned categories with a boundary coincident with the perceptual discontinuity. Another group learned categories defined such that the perceptual discontinuity fell within a category. Listeners in the latter group required significantly more experience to reach criterion categorization performance. Evidence of interactions between the perceptual discontinuity and the learned categories extended to generalization tests as well. It has been hypothesized that languages make use of perceptual discontinuities to promote distinctiveness among sounds within a language inventory. The present data suggest that discontinuities interact with category learning. As such, "learnability" may play a predictive role in selection of language sound inventories.
Affiliation(s)
- Lori L Holt
- Department of Psychology and the Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA.
28. Tsao FM, Liu HM, Kuhl PK. Speech Perception in Infancy Predicts Language Development in the Second Year of Life: A Longitudinal Study. Child Dev 2004; 75:1067-84. [PMID: 15260865 DOI: 10.1111/j.1467-8624.2004.00726.x]
Abstract
Infants' early phonetic perception is hypothesized to play an important role in language development. Previous studies have not assessed this potential link in the first 2 years of life. In this study, speech discrimination was measured in 6-month-old infants using a conditioned head-turn task. At 13, 16, and 24 months of age, language development was assessed in these same children using the MacArthur Communicative Development Inventory. Results demonstrated significant correlations between speech perception at 6 months of age and later language (word understanding, word production, phrase understanding). The finding that speech perception performance at 6 months predicts language at 2 years supports the idea that phonetic perception may play an important role in language acquisition.
Affiliation(s)
- Feng-Ming Tsao
- Institute for Learning and Brain Sciences, University of Washington, USA.
29
Abstract
This chapter focuses on one of the first steps in comprehending spoken language: How do listeners extract the most fundamental linguistic elements (consonants and vowels, or the distinctive features which compose them) from the acoustic signal? We begin by describing three major theoretical perspectives on the perception of speech. Then we review several lines of research that are relevant to distinguishing these perspectives. The research topics surveyed include categorical perception, phonetic context effects, learning of speech and related nonspeech categories, and the relation between speech perception and production. Finally, we describe challenges facing each of the major theoretical perspectives on speech perception.
Affiliation(s)
- Randy L Diehl
- Department of Psychology and Center for Perceptual Systems, University of Texas, Austin, Texas 78712-0187, USA.
30
Abstract
OBJECTIVE The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 years. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective but, we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.
Affiliation(s)
- Peter W Jusczyk
- Department of Psychology and Cognitive Science, Johns Hopkins University, Baltimore, Maryland, USA
31. Nittrouer S. Challenging the notion of innate phonetic boundaries. J Acoust Soc Am 2001; 110:1598-1605. [PMID: 11572369 DOI: 10.1121/1.1379078]
Abstract
Numerous studies of infants' speech perception abilities have demonstrated that these young listeners have access to acoustic detail in the speech signal. Because these studies have used stimuli that could be described in terms of adult-defined phonetic categories, authors have concluded that infants innately recognize stimuli as members of these categories, as adults do. In fact, the predominant, current view of speech perception holds that infants are born with sensitivities for the universal set of phonetic boundaries, and that those boundaries supported by the ambient language are maintained, while those not supported by the ambient language dissolve. In this study, discrimination abilities of 46 infants and 75 3-year-olds were measured for several phonetic contrasts occurring in their native language, using natural and synthetic speech. The proportion of children who were able to discriminate any given contrast varied across contrasts, and no one contrast was discriminated by anything close to all of the children. While these results did not differ from those reported by others, the interpretation here is that we should reconsider the notion of innate phonetic categories and/or boundaries. Moreover, success rates did not differ for natural and synthetic speech, and so a minor conclusion was that children are not adversely affected by the use of synthetic stimuli in speech experiments.
Affiliation(s)
- S Nittrouer
- Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
32
Abstract
At the forefront of debates on language are new data demonstrating infants' early acquisition of information about their native language. The data show that infants perceptually "map" critical aspects of ambient language in the first year of life before they can speak. Statistical properties of speech are picked up through exposure to ambient language. Moreover, linguistic experience alters infants' perception of speech, warping perception in the service of language. Infants' strategies are unexpected and unpredicted by historical views. A new theoretical position has emerged, and six postulates of this position are described.
Affiliation(s)
- P K Kuhl
- Department of Speech and Hearing Sciences, University of Washington, Box 357920, Seattle, WA 98195, USA.
33
34. Nittrouer S, Miller ME, Crowther CS, Manhart MJ. The effect of segmental order on fricative labeling by children and adults. Percept Psychophys 2000; 62:266-84. [PMID: 10723207 DOI: 10.3758/bf03205548]
Abstract
We examined whether children modify their perceptual weighting strategies for speech on the basis of the order of segments within a syllable, as adults do. To this end, fricative-vowel (FV) and vowel-fricative (VF) syllables were constructed with synthetic noises from an /ʃ/-to-/s/ continuum combined with natural /a/ and /u/ portions with transitions appropriate for a preceding or a following /ʃ/ or /s/. Stimuli were played in their original order to adults and children (ages of 7 and 5 years) in Experiment 1 and in reversed order in Experiment 2. The results for adults and, to a lesser extent, those for 7-year-olds replicated earlier results showing that adults assign different perceptual weights to acoustic properties, depending on segmental order. In contrast, results for 5-year-olds suggested that these listeners applied the same strategies during fricative labeling, regardless of segmental order. Thus, the flexibility to modify perceptual weighting strategies for speech according to segmental order apparently emerges with experience.
Affiliation(s)
- S Nittrouer
- Boys Town National Research Hospital, Omaha, NE 68131, USA.
35. Jusczyk PW. Narrowing the distance to language: one step at a time. J Commun Disord 1999; 32:207-222. [PMID: 10466094 DOI: 10.1016/s0021-9924(99)00014-3]
Abstract
Infants' earliest attempts at word segmentation appear to be guided by a single source of information (e.g., English-learners initially rely on the predominant stress pattern of words). This initial strategy successfully identifies many potential words in the input, but mis-segments others. However, simply breaking the input into smaller chunks helps learners to identify other possible cues to the location of word boundaries in utterances. Because no one source of information is completely reliable, listeners must eventually rely on multiple cues to segment words. The development of such skills is critical not only for developing a native language vocabulary, but also for acquiring the grammatical organization of utterances. Tracking familiar sound patterns, such as function words and grammatical morphemes, may help in learning about syntactic organization. One factor that facilitates learning about the distribution of such elements is sensitivity to boundaries of prosodic phrases. Access to such linguistically-relevant chunks also helps in tracking the distribution of words in the input.
Affiliation(s)
- P W Jusczyk
- Department of Psychology, Johns Hopkins University, Baltimore, MD 21218-2686, USA.
36. Mattys SL, Jusczyk PW, Luce PA, Morgan JL. Phonotactic and prosodic effects on word segmentation in infants. Cogn Psychol 1999; 38:465-94. [PMID: 10334878 DOI: 10.1006/cogp.1999.0721]
Abstract
This research examines the issue of speech segmentation in 9-month-old infants. Two cues known to carry probabilistic information about word boundaries were investigated: Phonotactic regularity and prosodic pattern. The stimuli used in four head turn preference experiments were bisyllabic CVC.CVC nonwords bearing primary stress in either the first or the second syllable (strong/weak vs. weak/strong). Stimuli also differed with respect to the phonotactic nature of their cross-syllabic C.C cluster. Clusters had either a low probability of occurring at a word juncture in fluent speech and a high probability of occurring inside of words ("within-word" clusters) or a high probability of occurring at a word juncture and a low probability of occurring inside of words ("between-word" clusters). Our results show that (1) 9-month-olds are sensitive to how phonotactic sequences typically align with word boundaries, (2) altering the stress pattern of the stimuli reverses infants' preference for phonotactic cluster types, (3) the prosodic cue to segmentation is more strongly relied upon than the phonotactic cue, and (4) a preference for high-probability between-word phonotactic sequences can be obtained either by placing stress on the second syllable of the stimuli or by inserting a pause between syllables. The implications of these results are discussed in light of an integrated multiple-cue approach to speech segmentation in infancy.
Affiliation(s)
- S L Mattys
- Departments of Psychology and Cognitive Science, Johns Hopkins University, Baltimore, Maryland 21218-2686, USA.
37. Peperkamp S, Mehler J. Signed and spoken language: a unique underlying system? Lang Speech 1999; 42(Pt 2-3):333-346. [PMID: 10767993 DOI: 10.1177/00238309990420020901]
Abstract
Sign language has only recently become a topic of investigation in cognitive neuroscience and psycholinguistics. In this paper, we review research from these two fields; in particular, we compare spoken and signed language by looking at data concerning either cortical representations or early acquisition. As to cognitive neuroscience, we show that clinical neuropsychological data regarding sign language is partially inconsistent with imaging data. Indeed, whereas both clinical neuropsychology and imaging show the involvement of the left hemisphere in sign language processing, only the latter highlights the importance of the right hemisphere. We discuss several possible interpretations of these contrasting findings. As to psycholinguistics, we survey research on the earliest stages of the acquisition of spoken language, and consider these stages in the acquisition of sign language. We conjecture that under favorable circumstances, deaf children exploit sign input to gain entry into the language system with the same facility as hearing children do with spoken input. More data, however, are needed in order to gain a fuller understanding of the relation of different kinds of natural languages to both the underlying anatomical representations and their early acquisition.
Affiliation(s)
- S Peperkamp
- Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS-CNRS, Paris, France.
38
Abstract
To comprehend and produce language, we must be able to recognize the sound patterns of our language and the rules for how these sounds "map on" to meaning. Human infants are born with a remarkable array of perceptual sensitivities that allow them to detect the basic properties that are common to the world's languages. During the first year of life, these sensitivities undergo modification reflecting an exquisite tuning to just that phonological information that is needed to map sound to meaning in the native language. We review this transition from language-general to language-specific perceptual sensitivity that occurs during the first year of life and consider whether the changes propel the child into word learning. To account for the broad-based initial sensitivities and subsequent reorganizations, we offer an integrated transactional framework based on the notion of a specialized perceptual-motor system that has evolved to serve human speech, but which functions in concert with other developing abilities. In so doing, we highlight the links between infant speech perception, babbling, and word learning.
Affiliation(s)
- J F Werker
- Department of Psychology, University of British Columbia, Vancouver, Canada.
| | | |
Collapse
|
39
|
|
40
|
Abstract
Infants' long-term retention of the sound patterns of words was explored by exposing them to recordings of three children's stories for 10 days during a 2-week period when they were 8 months old. After an interval of 2 weeks, the infants heard lists of words that either occurred frequently or did not occur in the stories. The infants listened significantly longer to the lists of story words. By comparison, a control group of infants who had not been exposed to the stories showed no such preference. The findings suggest that 8-month-olds are beginning to engage in long-term storage of words that occur frequently in speech, which is an important prerequisite for learning language.
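The familiarization materials hinge on a simple frequency criterion: test words either occurred frequently in the stories or did not occur at all. As an illustration, here is a minimal Python sketch of pulling high-frequency candidate test items from a text; the toy story text and the cutoff of three occurrences are assumptions of this example, not the study's actual stories or selection criteria.

    import re
    from collections import Counter

    # Toy text standing in for the three children's stories (illustrative only).
    story_text = """
    the little bear found a red hat . the bear wore the red hat every day .
    one day the hat blew away and the bear ran after it .
    """

    # Count word tokens and keep words that occur at least three times.
    counts = Counter(re.findall(r"[a-z]+", story_text.lower()))
    story_words = [w for w, c in counts.most_common() if c >= 3]
    print("candidate story-word test items:", story_words)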
Collapse
Affiliation(s)
- P W Jusczyk
- Department of Psychology, Johns Hopkins University, Baltimore, MD 21218, USA
| | | |
Collapse
|
41
|
Abstract
Long before they start talking, children are skilled at using eye contact, facial expression, and nonverbal gestures to communicate with other people. They are also able to discriminate speech sounds from an early age. Vocabulary learning builds on the child's knowledge about objects, actions, locations, properties, and states gained as a result of sensorimotor development. Early word combinations allow children to express semantic relationships between these various referents. During the period from 2 to 4 years of age, children move from expressing their ideas in simple telegraphic speech to being able to ask questions, use negation, talk about past and future events, and describe complicated situations using sentences constructed according to complex grammatical rules.
Collapse
Affiliation(s)
- L Rescorla
- Department of Psychology, Bryn Mawr College, PA 19010, USA
| | | |
Collapse
|
42
|
Infant Speech Perception: Processing Characteristics, Representational Units, and The Learning of Words. PSYCHOLOGY OF LEARNING AND MOTIVATION 1997. [DOI: 10.1016/s0079-7421(08)60283-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register]
|
43
|
Abstract
Most speech research with infants occurs in quiet laboratory rooms with no outside distractions. However, in the real world, speech directed to infants often occurs in the presence of other competing acoustic signals. To learn language, infants need to attend to their caregiver's speech even under less than ideal listening conditions. We examined 7.5-month-old infants' abilities to selectively attend to a female talker's voice when a male voice was talking simultaneously. In three experiments, infants heard a target voice repeating isolated words while a distractor voice spoke fluently at one of three different intensities. Subsequently, infants heard passages produced by the target voice containing either the familiar words or novel words. Infants listened longer to the familiar words when the target voice was 10 dB or 5 dB more intense than the distractor, but not when the two voices were equally intense. In a fourth experiment, the assignment of words and passages to the familiarization and testing phases was reversed so that the passages and distractors were presented simultaneously during familiarization, and the infants were tested on the familiar and unfamiliar isolated words. During familiarization, the passages were 10 dB more intense than the distractors. The results suggest that this may be at the limits of what infants at this age can do in separating two different streams of speech. In conclusion, infants have some capacity to extract information from speech even in the face of a competing acoustic voice.
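The three listening conditions differ only in the relative level of the two voices: a level difference of N dB corresponds to an amplitude ratio of 10^(N/20). Here is a minimal Python sketch of constructing such a mix; the synthetic tones standing in for the two talkers are an assumption of this example, not the study's recordings.

    import numpy as np

    def mix_with_advantage(target, distractor, advantage_db):
        """Equalize RMS levels, then boost the target by advantage_db before mixing."""
        def rms(x):
            return np.sqrt(np.mean(x ** 2))
        gain = 10 ** (advantage_db / 20.0)   # dB difference -> amplitude ratio
        return gain * (target / rms(target)) + distractor / rms(distractor)

    sr = 16000
    t = np.linspace(0.0, 1.0, sr, endpoint=False)
    target_voice = np.sin(2 * np.pi * 220 * t)      # stand-in for the female talker
    distractor_voice = np.sin(2 * np.pi * 120 * t)  # stand-in for the male talker

    for adv in (10, 5, 0):
        mixed = mix_with_advantage(target_voice, distractor_voice, adv)
        print(f"{adv} dB target advantage -> peak amplitude {np.abs(mixed).max():.2f}")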
Collapse
Affiliation(s)
- R S Newman
- Department of Psychology, Park Hall, SUNY at Buffalo 14260, USA.
| | | |
Collapse
|
44
|
|
45
|
Hohne EA, Jusczyk PW. Two-month-old infants' sensitivity to allophonic differences. PERCEPTION & PSYCHOPHYSICS 1994; 56:613-23. [PMID: 7816532 DOI: 10.3758/bf03208355] [Citation(s) in RCA: 49] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
The present study investigated 2-month-olds' abilities to discriminate allophonic differences that are potentially useful in segmenting fluent speech. Experiment 1 investigated infants' sensitivity to the kind of distinction that may signal the presence or absence of a word boundary. When tested with the high-amplitude sucking procedure, infants discriminated pairs of items, such as "nitrate" versus "night rate" and "nikrate" versus "nike rate". By greatly reducing the potential contribution of prosodic differences to these contrasts, Experiment 2 evaluated whether the allophonic differences for /t/ and /r/ were sufficient for infants to distinguish the "nitrate" versus "night rate" pair. Infants distinguished "nitrate" from a cross-spliced version of "night rate," which differed only in the allophones for /t/ and /r/ that it included. Thus, infants appear to possess one of the prerequisite capacities (i.e., the ability to discriminate allophonic distinctions) necessary to use allophonic information in segmenting fluent speech.
Collapse
Affiliation(s)
- E A Hohne
- Department of Psychology, SUNY at Buffalo 14260-4110
| | | |
Collapse
|
46
|
Abstract
We review recent work that shows that, during the early stages of language acquisition, molar properties such as prosody are important to the infant. We argue that the specification of these structures allows the infant to learn the language processing routines that adults employ.
Collapse
|
47
|
López-Bascuas LE. Procesamiento auditivo general y procesamiento específico en la percepción del habla (I): efectos derivados de la asignación de fronteras perceptivas [General auditory processing and specific processing in speech perception (I): effects derived from the assignment of perceptual boundaries]. STUDIES IN PSYCHOLOGY 1994. [DOI: 10.1174/02109399460578971] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
|
48
|
Affiliation(s)
- P K Kuhl
- Department of Speech and Hearing Sciences, University of Washington, Seattle 98195
| |
Collapse
|
49
|
Abstract
Hemispheric asymmetry in phoneme perception was analyzed, and three basic mechanisms underlying phoneme perception are proposed. The left temporal lobe is proposed to be specialized in: (1) ultrashort auditory (echoic) memory; (2) higher resolving power for certain language frequencies; and (3) recognition of rapidly changing, time-dependent auditory signals. An attempt was made to apply some neurophysiological mechanisms described for the visual system to phoneme recognition in the auditory system.
Collapse
Affiliation(s)
- A Ardila
- Instituto Colombiano de Neuropsicologia, Bogota
| |
Collapse
|
50
|
Hygge S, Rönnberg J, Larsby B, Arlinger S. Normal-hearing and hearing-impaired subjects' ability to just follow conversation in competing speech, reversed speech, and noise backgrounds. JOURNAL OF SPEECH AND HEARING RESEARCH 1992; 35:208-215. [PMID: 1370969 DOI: 10.1044/jshr.3501.208] [Citation(s) in RCA: 72] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
The performance on a conversation-following task by 24 hearing-impaired persons was compared with that of 24 matched controls with normal hearing in the presence of three background noises: (a) speech-spectrum random noise, (b) a male voice, and (c) the male voice played in reverse. The subjects' task was to readjust the sound level of a female voice (the signal), each time it was attenuated, back to the subjective level at which it was just possible to understand what was being said. To assess the benefit of lipreading, half of the material was presented audiovisually and half auditorily only. It was predicted that background speech would have a greater masking effect than reversed speech, and that reversed speech would in turn have a lesser masking effect than random noise. It was also predicted that hearing-impaired subjects would perform more poorly than the normal-hearing controls in a background of speech, whereas the influence of lipreading was expected to be constant across groups and conditions. The results showed that the hearing-impaired subjects were equally affected by the three background noises, whereas the normal-hearing persons were less affected by the background speech than by the noise. The performance of the normal-hearing persons was superior to that of the hearing-impaired subjects, and the prediction about lipreading was confirmed. The results were explained in terms of the reduced temporal resolution of the hearing-impaired subjects.
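The "just follow conversation" procedure is an adjustment task: the signal voice is repeatedly attenuated, and the listener turns it back up until it is just intelligible; the adjusted levels are then averaged. Below is a toy Python simulation of that loop; the threshold values, attenuation depth, and step size are illustrative assumptions, not parameters from the study.

    import random

    def just_follow_conversation(threshold_db, n_adjustments=10,
                                 attenuation_db=15.0, step_db=1.0):
        """Average the levels to which a listener readjusts an attenuated signal voice."""
        adjusted = []
        for _ in range(n_adjustments):
            trial_threshold = threshold_db + random.uniform(-0.5, 0.5)  # trial-to-trial variability
            level = trial_threshold - attenuation_db                    # signal voice attenuated
            while level < trial_threshold:                              # listener turns it back up
                level += step_db
            adjusted.append(level)
        return sum(adjusted) / len(adjusted)

    random.seed(0)
    print("noise background:", round(just_follow_conversation(-8.0), 1), "dB SNR")
    print("competing-speech background:", round(just_follow_conversation(-3.0), 1), "dB SNR")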
Collapse
Affiliation(s)
- S Hygge
- National Swedish Institute for Building Research, Gävle
| | | | | | | |
Collapse
|