1. Wordform variability in infants' language environment and its effects on early word learning. Cognition 2024; 245:105694. [PMID: 38309042] [DOI: 10.1016/j.cognition.2023.105694]
Abstract
Most research regarding early word learning in English tends to make the simplifying assumption that there exists a one-to-one mapping between concrete objects and their labels. In the current work, we provide evidence that runs counter to this assumption, aligning English with more morphologically rich languages. We suggest that even in a morphologically poor language like English, real-world language input to infants does not provide tidy one-to-one mappings. Instead, infants encounter many variant wordforms for familiar nouns (e.g., dog∼doggy∼dogs). We explore this wordform variability in 44 English-learning infants' naturalistic environments using a longitudinal corpus of infant-available speech, looking at both the frequency and composition of wordform variability. We find two broad categories of variability: referent-changing alterations, where words were pluralized or compounded (e.g., coat∼raincoats); and wordplay, where words changed form without a notable change in referent (e.g., bird∼birdie). We further find that wordplay occurs with a limited number of lemmas that are usually early-learned, high-frequency, and shorter. Looking across all wordform variability, we find that individual words with higher levels of wordform variability are learned earlier than words with fewer wordforms, over and above the effect of frequency.
2. A Brief Intervention to Teach Parents Naturalistic Language Facilitation Strategies. American Journal of Speech-Language Pathology 2024; 33:990-1003. [PMID: 38286034] [DOI: 10.1044/2023_ajslp-23-00146]
Abstract
PURPOSE: This proof-of-concept study assessed the feasibility of a novel approach to teaching parents naturalistic language facilitation strategies in a single session. We investigated whether parents could learn to use the See and Say Sequence, which integrates responsive and language modeling strategies, and measured the impacts that this intervention had on features of their input. METHOD: Fourteen parent-child dyads participated in the study. Children ranged from 15 to 23 months of age and produced between 3 and 135 words. Five parents had concerns about their children's rate of language development. Parents were taught the See and Say Sequence during a brief single session (M = 18.98 min, SD = 2.65 min) using the Teach-Model-Coach-Review instructional process. We analyzed parents' use of the three See and Say Sequence components, total number of utterances, and mean turn length, as well as responsive and linguistic features of parent input, before and after the brief intervention. RESULTS: Following intervention, parents significantly increased their use of the three See and Say Sequence components and decreased their total number of utterances and mean turn length. In addition, the use of the See and Say Sequence components substantially altered the overall composition of parent input. CONCLUSIONS: The results of this preliminary study demonstrate the feasibility of the See and Say Sequence for teaching responsive and language modeling strategies in a single session. We discuss the potential use and future evaluation of the See and Say Sequence as an option for early intervention service delivery.
3. Parentese in infancy predicts 5-year language complexity and conversational turns. Journal of Child Language 2024; 51:359-384. [PMID: 36748287] [DOI: 10.1017/s0305000923000077]
Abstract
Parental input is considered a key predictor of language achievement during the first years of life, yet relatively few studies have assessed its effects on longer-term outcomes. We assess the effects of parental quantity of speech, use of parentese (an acoustically exaggerated, clear, and higher-pitched speech register), and turn-taking in infancy on child language at 5 years. Using a longitudinal dataset of daylong LENA recordings collected with the same group of English-speaking infants (N = 44) at 6, 10, 14, 18, and 24 months and then again at 5 years, we demonstrate that parents' consistent (defined as stable and high) use of parentese in infancy was a potent predictor of lexical diversity, mean length of utterance, and frequency of conversational turn-taking between children and adults at kindergarten entry. Together, these findings highlight the potential importance of a high-quality language learning environment in infancy for success at the start of formal schooling.
4. Hebbian learning can explain rhythmic neural entrainment to statistical regularities. Dev Sci 2024:e13487. [PMID: 38372153] [DOI: 10.1111/desc.13487]
Abstract
In many domains, learners extract recurring units from continuous sequences. For example, in unknown languages, fluent speech is perceived as a continuous signal. Learners need to extract the underlying words from this continuous signal and then memorize them. One prominent candidate mechanism is statistical learning, whereby learners track how predictive syllables (or other items) are of one another. Syllables within the same word predict each other better than syllables straddling word boundaries. But does statistical learning lead to memories of the underlying words, or just to pairwise associations among syllables? Electrophysiological results provide the strongest evidence for the memory view. Electrophysiological responses can be time-locked to statistical word boundaries (e.g., N400s) and show rhythmic activity with a periodicity of word durations. Here, I reproduce such results with a simple Hebbian network. When exposed to statistically structured syllable sequences (and when the underlying words are not excessively long), the network activation is rhythmic with the periodicity of a word duration, with activation maxima on word-final syllables. This is because word-final syllables receive more excitation from the earlier syllables with which they are associated than do less predictable syllables occurring earlier in words. The network is also sensitive to information whose electrophysiological correlates were used to support the encoding of ordinal positions within words. Hebbian learning can thus explain rhythmic neural activity in statistical learning tasks without any memory representations of words. Learners might thus need to rely on cues beyond statistical associations to learn the words of their native language.
RESEARCH HIGHLIGHTS: Statistical learning may be utilized to identify recurring units in continuous sequences (e.g., words in fluent speech) but may not generate explicit memory for words. Exposure to statistically structured sequences leads to rhythmic activity with a period matching the duration of the underlying units (e.g., words). I show that a memory-less Hebbian network model can reproduce this rhythmic neural activity, as well as putative encodings of ordinal positions observed in earlier research. Direct tests are needed to establish whether statistical learning leads to declarative memories for words.
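The mechanism this abstract describes can be illustrated with a toy simulation. This is my own sketch, not the paper's model or code: the syllables, the three "words", and the decay parameter are all invented. It shows how purely Hebbian pairwise associations, learned from a statistically structured syllable stream, make activation accumulate within a word and peak on word-final syllables, with no stored word representations.

```python
import random

# Toy statistically structured stream: three trisyllabic "words"
# concatenated in random order with no immediate repetition, as in
# classic statistical-learning streams. All items are illustrative.
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
rng = random.Random(1)
stream, prev = [], None
for _ in range(300):
    w = rng.choice([x for x in words if x is not prev])
    stream.extend(w)
    prev = w

# Hebbian association between successive syllables, normalised per
# predecessor (equivalent to the forward transitional probability).
pair_counts, pred_counts = {}, {}
for a, b in zip(stream, stream[1:]):
    pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    pred_counts[a] = pred_counts.get(a, 0) + 1
weight = {k: v / pred_counts[k[0]] for k, v in pair_counts.items()}

# Activation of each incoming syllable = constant bottom-up input plus
# decayed excitation relayed from the previous syllable through the
# learned association. Within-word transitions are strong, so excitation
# accumulates across the word and peaks on its final syllable: rhythmic
# activity with the periodicity of a word, without any word memory.
decay, act, trace = 0.5, 1.0, []
for a, b in zip(stream, stream[1:]):
    act = 1.0 + decay * act * weight.get((a, b), 0.0)
    trace.append((b, act))

finals = {w[-1] for w in words}
mean = lambda xs: sum(xs) / len(xs)
m_final = mean([a for s, a in trace if s in finals])
m_other = mean([a for s, a in trace if s not in finals])
print(m_final > m_other)  # word-final syllables carry the activation maxima
```

Because within-word transitions are (near-)deterministic while between-word transitions split probability across continuations, word-final units inherit the most relayed excitation, reproducing the word-periodic rhythm without a lexicon.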
5. Relating the prosody of infant-directed speech to children's vocabulary size. Journal of Child Language 2024; 51:217-233. [PMID: 36756779] [DOI: 10.1017/s0305000923000041]
Abstract
This study examines correlations between the prosody of infant-directed speech (IDS) and children's vocabulary size. We collected longitudinal speech data and vocabulary information from Dutch mother-child dyads with children aged 18 (N = 49) and 24 (N = 27) months old. We took speech context into consideration and distinguished between prosody when mothers introduce familiar vs. unfamiliar words to their children. The results show that IDS mean pitch predicts children's vocabulary growth between 18 and 24 months. In addition, the degree of prosodic modification when mothers introduce unfamiliar words to their children correlates with children's vocabulary growth during this period. These findings suggest that the prosody of IDS, especially in word-learning contexts, may serve linguistic purposes.
6. Utterance-Initial Prosodic Differences Between Statements and Questions in Infant-Directed Speech. Journal of Child Language 2024; 51:137-167. [PMID: 36286327] [DOI: 10.1017/s0305000922000460]
Abstract
Cross-linguistically, statements and questions broadly differ in syntactic organization. To learn the syntactic properties of each sentence type, learners might first rely on non-syntactic information. This paper analyzed prosodic differences between infant-directed wh-questions and statements to determine what kinds of cues might be available. We predicted there would be a significant difference depending on the first words that appear in wh-questions (e.g., two closed-class words, i.e., words from a category that rarely changes) compared to the variety of first words found in statements. We measured F0, duration, and intensity of the first two words in statements and wh-questions in naturalistic speech from 13 mother-child dyads in the Brent corpus of the CHILDES database. We found larger differences between sentence types when the second word was an open-class rather than a closed-class word, suggesting a relationship between prosodic and syntactic information in utterance-initial position that infants may use to make sentence-type distinctions.
7. Finding Structure in Modern Dance. Cogn Sci 2023; 47:e13375. [PMID: 37950547] [DOI: 10.1111/cogs.13375]
Abstract
Research has shown that both adults and children organize familiar activity into discrete units with consistent boundaries, despite the dynamic, continuous nature of everyday experiences. However, less is known about how observers segment unfamiliar event sequences. In the current study, we took advantage of the novelty that is inherent in modern dance. Modern dance features natural human motion but does not contain canonical goals; therefore, observers cannot recruit prior goal-related knowledge to segment it. Our main aims were to identify whether observers segment modern dance into the steps intended by the dancers, and what types of cues contribute to segmentation under these circumstances. Experiment 1 used a classic event segmentation task and found that adults were able to consistently identify only a few of the dancers' intended steps. Experiment 2 tested adults in an offline labeling task. Results showed that steps which could more easily be labeled offline in Experiment 2 were more likely to be segmented online in Experiment 1.
8. Speech Segmentation and Cross-Situational Word Learning in Parallel. Open Mind (Camb) 2023; 7:510-533. [PMID: 37637304] [PMCID: PMC10449405] [DOI: 10.1162/opmi_a_00095]
Abstract
Language learners track conditional probabilities to find words in continuous speech and to map words and objects across ambiguous contexts. It remains unclear, however, whether learners can leverage the structure of the linguistic input to do both tasks at the same time. To explore this question, we combined speech segmentation and cross-situational word learning into a single task. In Experiment 1, when adults (N = 60) simultaneously segmented continuous speech and mapped the newly segmented words to objects, they demonstrated better performance than when either task was performed alone. However, when the speech stream had conflicting statistics, participants were able to correctly map words to objects, but were at chance level on speech segmentation. In Experiment 2, we used a more sensitive speech segmentation measure to find that adults (N = 35), exposed to the same conflicting speech stream, correctly identified non-words as such, but were still unable to discriminate between words and part-words. Again, mapping was above chance. Our study suggests that learners can track multiple sources of statistical information to find and map words to objects in noisy environments. It also prompts questions on how to effectively measure the knowledge arising from these learning experiences.
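The conditional-probability tracking behind cross-situational word learning can be sketched as follows. This toy is illustrative only, not the authors' stimuli or analysis: the pseudo-words and objects are invented. Each trial is ambiguous on its own, but the correct word-object mapping emerges from co-occurrence counts accumulated across trials.

```python
from collections import Counter

# Each trial pairs a set of heard pseudo-words with a set of visible
# objects; no single trial reveals which word labels which object.
# (Invented examples, purely for illustration.)
trials = [
    ({"bola", "tiku"}, {"ball", "cup"}),
    ({"bola", "migo"}, {"ball", "dog"}),
    ({"tiku", "migo"}, {"cup", "dog"}),
    ({"bola", "tiku"}, {"cup", "ball"}),
]

# Accumulate word-object co-occurrence counts across all trials.
cooc = Counter()
for heard, seen in trials:
    for w in heard:
        for o in seen:
            cooc[(w, o)] += 1

all_objects = {o for _, seen in trials for o in seen}

def best_referent(word):
    # Map each word to the object it co-occurred with most often.
    return max(all_objects, key=lambda o: cooc[(word, o)])

print(best_referent("bola"))  # -> ball
```

Here "bola" co-occurs with "ball" on three of four trials but with every other object at most twice, so the spurious pairings wash out, which is the core statistical logic the study combines with speech segmentation.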
9. Language in educational apps for pre-schoolers: a comparison of grammatical constructions and psycholinguistic features in apps, books and child-directed speech. Journal of Child Language 2023; 50:895-921. [PMID: 35481491] [DOI: 10.1017/s0305000922000198]
Abstract
Language in touchscreen apps could be useful as an additional source of children's language input, alongside child-directed speech (CDS) and books. Here we performed the first analysis of language in apps, as compared with books and CDS. We analysed the language in 18 of the most popular educational apps targeting pre-schoolers and compared their language content to children's books and CDS with respect to types of constructions and psycholinguistic features of words. We found that apps contained lower-frequency words and had lower lexical diversity compared to CDS, and shorter utterances compared to books. Apps may thus provide an enriched supplementary form of input for young children, owing to their less frequent words. However, apps do not expose children to a high proportion of questions and complex sentences, both of which are crucial for supporting children's development of structurally rich constructions.
10. English-Learning 12-Month-Olds Do Not Map Function-Like Words to Objects. Journal of Cognition and Development 2023. [DOI: 10.1080/15248372.2023.2197067]
11. Consequences of phonological variation for algorithmic word segmentation. Cognition 2023; 235:105401. [PMID: 36787685] [PMCID: PMC10085835] [DOI: 10.1016/j.cognition.2023.105401]
Abstract
Over the first year, infants begin to learn the words of their language. Previous work suggests that certain statistical regularities in speech could help infants segment the speech stream into words, thereby forming a proto-lexicon that could support learning of the eventual vocabulary. However, computational models of word segmentation have typically been tested using language input that is much less variable than actual speech is. We show that using actual, transcribed pronunciations rather than dictionary pronunciations of the same speech leads to worse segmentation performance across models. We also find that phonologically variable input poses serious problems for lexicon building, because even correctly segmented word forms exhibit a complex, many-to-many relationship with speakers' intended words. Many phonologically distinct word forms were actually the same intended word, and many identical transcriptions came from different intended words. The fact that previous models appear to have substantially overestimated the utility of simple statistical heuristics suggests a need to consider the formation of the lexicon in infancy differently.
12. Infants' short-term memory for consonant-vowel syllables. J Exp Child Psychol 2023; 226:105567. [PMID: 36244079] [PMCID: PMC9691597] [DOI: 10.1016/j.jecp.2022.105567]
Abstract
This research examined whether the auditory short-term memory (STM) capacity for speech sounds differs from that for nonlinguistic sounds in 11-month-old infants. Infants were presented with streams composed of repeating sequences of either 2 or 4 syllables, akin to prior work by Ross-Sheehy and Newman (2015) using nonlinguistic musical instruments. These syllable sequences either stayed the same for every repetition (constant) or changed by one syllable each time they repeated (varying). Using the head-turn preference procedure, we measured infant listening time to each type of stream (constant vs varying, and 2 vs 4 syllables). Longer listening to the varying stream was taken as evidence for STM because this required remembering all syllables in the sequence. We found that infants listened longer to the varying streams for 2-syllable sequences but not for 4-syllable sequences. This capacity limitation is comparable to that found previously for nonlinguistic instrument tones, suggesting that young infants have similar STM limitations for speech and nonspeech stimuli.
13. Understanding why infant-directed speech supports learning: A dynamic attention perspective. Developmental Review 2022. [DOI: 10.1016/j.dr.2022.101047]
14. Immature Vocalizations Simplify the Speech of Tseltal Mayan and U.S. Caregivers. Top Cogn Sci 2022; 15:315-328. [PMID: 36426721] [DOI: 10.1111/tops.12632]
Abstract
What is the function of immature vocalizing in early learning environments? Previous work on infants in the United States indicates that prelinguistic vocalizations elicit caregiver speech which is simplified in its linguistic structure. However, there is substantial cross-cultural variation in the extent to which children's vocalizations elicit responses from caregivers. In the current study, we ask whether children's vocalizations elicit similar changes in their immediate caregivers' speech structure across two cultural sites with differing perspectives on how to interact with infants and young children. Here, we compare Tseltal Mayan and U.S. caregivers' verbal responses to their children's vocalizations. Similar to findings from U.S. dyads, we found that children from the Tseltal community regulate the statistical structure of caregivers' speech simply by vocalizing. Following the interaction burst hypothesis, where clusters of child-adult contingent response alternations facilitate learning from limited input, we reveal a stable source of information that may facilitate language learning within ongoing interaction.
15. Statistical Learning of Language: A Meta-Analysis Into 25 Years of Research. Cogn Sci 2022; 46:e13198. [PMID: 36121309] [DOI: 10.1111/cogs.13198]
Abstract
Statistical learning is a key concept in our understanding of language acquisition. Ample work has highlighted its role in numerous linguistic functions, yet statistical learning is not a unitary construct, and its consistency across different language properties remains unclear. In a meta-analysis of auditory-linguistic statistical learning research spanning the last 25 years, we evaluated how learning varies across different language properties in infants, children, and adults and surveyed the methodological trends in the literature. We found robust learning across stimuli (syllables, words, etc.) in infants, and across stimuli and structures (adjacent dependencies, non-adjacent dependencies, etc.) in adults, with larger effect sizes when multiple cues were present. However, the analysis also showed significant publication bias and revealed a tendency toward using a narrow range of simplified language properties, including in the strength of the transitional probabilities used during training. Bayes factor analyses revealed prevalent data insensitivity for moderators commonly hypothesized to impact learning, such as the amount of exposure and transitional probability strength, which contradicts core theoretical assumptions in the field. Methodological factors, such as the tasks used at test, also significantly impacted effect sizes in adults and children, suggesting that choice of task may critically constrain current theories of how statistical learning operates. Collectively, our results suggest that auditory-linguistic statistical learning has the kind of robustness needed to play a foundational role in language acquisition, but that more research is warranted to reveal its full potential.
16. Are translation equivalents special? Evidence from simulations and empirical data from bilingual infants. Cognition 2022; 225:105084. [DOI: 10.1016/j.cognition.2022.105084]
17. Distributional Lattices as a Model for Discovering Syntactic Categories in Child-Directed Speech. Journal of Psycholinguistic Research 2022; 51:917-931. [PMID: 35348946] [DOI: 10.1007/s10936-022-09872-w]
Abstract
Distributional information plays an important role in word categorization. In this paper, we present a novel distributional model, the distributional lattice, for discovering syntactic categories in child-directed speech. A distributional lattice is a hierarchy formed by closed sets of words that are distributionally similar. Such a hierarchy is potentially useful for capturing syntactic categories by clustering words with the patterns in which they occur. To empirically support the suggestion that the distributional lattice is effective at categorizing words, we present a distributional lattice analysis of the Brent corpus of child-directed speech. The results show that distributional lattices are able to yield extremely accurate syntactic categories.
18. A mechanism for punctuating equilibria during mammalian vocal development. PLoS Comput Biol 2022; 18:e1010173. [PMID: 35696441] [PMCID: PMC9232141] [DOI: 10.1371/journal.pcbi.1010173]
Abstract
Evolution and development are typically characterized as the outcomes of gradual changes, but sometimes states of equilibrium can be punctuated by sudden change. Here, we studied the early vocal development of three different mammals: common marmoset monkeys, Egyptian fruit bats, and humans. Consistent with the notion of punctuated equilibria, we found that all three species undergo at least one sudden transition in the acoustics of their developing vocalizations. To understand the mechanism, we modeled different developmental landscapes. We found that the transition was best described as a shift in the balance of two vocalization landscapes. We show that the natural dynamics of these two landscapes are consistent with the dynamics of energy expenditure and information transmission. By using them as constraints for each species, we predicted the differences in transition timing from immature to mature vocalizations. Using marmoset monkeys, we were able to manipulate both infant energy expenditure (vocalizing in an environment with lighter air) and information transmission (closed-loop contingent parental vocal playback). These experiments support the importance of energy and information in leading to punctuated equilibrium states of vocal development. Species can sometimes evolve suddenly; their appearance is preceded and followed by long periods of stability. This process is known as "punctuated equilibrium". Our data show that for three mammalian species (marmoset monkeys, fruit bats, and humans), early vocal development trajectories can also be characterized as different equilibrium states punctuated by sharp transitions; transitions indicate the advent of a new vocal behavior. To better understand the putative mechanism behind such transitions, we show that a balance model, in which variables trade off in their importance over time, captured this change by accurately simulating the shape of the developmental trajectory and predicting the timing of the transition between immature and mature vocal states for all three species. Two variables, energy and information, were hypothesized to trade off during development. We tested and found support for this hypothesis in analyses of two marmoset monkey experiments, one of which manipulated metabolic energy costs and the other of which manipulated information transmission.
19. Lexical development in the context of intellectual disability: a review of the question [Développement lexical dans le cadre d'une déficience intellectuelle : le point sur la question]. Psychologie Française 2022. [DOI: 10.1016/j.psfr.2022.03.001]
20. Preverbal infants' sensitivity to grammatical dependencies. Infancy 2022; 27:648-662. [PMID: 35353438] [DOI: 10.1111/infa.12466]
Abstract
During their first months of life, infants can already distinguish function words (e.g., pronouns and determiners) from content words (e.g., verbs and nouns). Little research has explored preverbal infants' sensitivity to the relationships between these word categories. This preregistered study examines whether French-learning 8- and 11-month-olds track the grammatical dependencies between determiners and nouns as well as pronouns and verbs. Using the Visual Fixation Procedure, infants were presented with lists containing either grammatical (e.g., tu manges "you eat", des biberons "some bottles") or ungrammatical (e.g., des manges "some eat", tu biberons "you bottle") phrases. In Experiment 1 (N = 59), the lists involved common nouns and verbs, while in Experiment 2 (N = 28), only common verbs were used. Eleven-month-olds showed a clear preference for correct over incorrect co-occurrences in both experiments, while 8-month-olds showed a trend in the same direction. These results suggest that before their first birthday, infants' storage and access of words and word sequences are sufficiently sophisticated to include the means to track categorical dependencies. This early sensitivity to co-occurrence patterns may be greatly beneficial for constraining lexical access and later on for learning novel words' syntactic and semantic properties.
21. Adult listeners can extract age-related cues from child-directed speech. Q J Exp Psychol (Hove) 2022; 75:2244-2255. [PMID: 35272517] [DOI: 10.1177/17470218221089634]
Abstract
This study investigated adult listeners' ability to detect age-related cues in child-directed speech (CDS). Participants (N = 186) listened to two speech recordings directed at children between the ages of 6 and 44 months and guessed which had addressed a younger or an older child. The recordings came from North American English-speaking mothers, and listeners were native speakers of Turkish with varying degrees of English knowledge. Participants were randomly assigned to listen either to the original recordings or to low-pass filtered versions. Accuracy was above chance level across all groups. Participants' English level, age, and the age difference between the addressees significantly predicted accuracy. After controlling for these variables, we found a significant effect of condition. Participants' accuracy tended to be better in the unfiltered condition, with the exception of male participants without children. These results suggest that age-related variations in child-directed speech are perceptually available to adult listeners. Further, even though sensitivity to the age-related cues is facilitated by the availability of content-related cues in speech, it does not seem to be solely dependent on these cues, providing further support for the form-function relations in CDS.
22.
Abstract
To acquire language, infants must learn to segment words from running speech. A significant body of experimental research shows that infants use multiple cues to do so; however, little research has comprehensively examined the distribution of such cues in naturalistic speech. We conducted a comprehensive corpus analysis of German child-directed speech (CDS) using data from the Child Language Data Exchange System (CHILDES) database, investigating the availability of word stress, transitional probabilities (TPs), and lexical and sublexical frequencies as potential cues for word segmentation. Seven hours of data (~15,000 words) were coded, representing around an average day of speech to infants. The analysis revealed that for 97% of words, primary stress was carried by the initial syllable, implicating stress as a reliable cue to word onset in German CDS. Word identity was also marked by TPs between syllables, which were higher within than between words, and higher for backwards than forwards transitions. Words followed a Zipfian-like frequency distribution, and over two-thirds of words (78%) were monosyllabic. Of the 50 most frequent words, 82% were function words, which accounted for 47% of word tokens in the entire corpus. Finally, 15% of all utterances comprised single words. These results give rich novel insights into the availability of segmentation cues in German CDS, and support the possibility that infants draw on multiple converging cues to segment their input. The data, which we make openly available to the research community, will help guide future experimental investigations on this topic.
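The transitional probabilities (TPs) described above can be computed as follows. This is a toy sketch with a few invented pseudo-German items, not the study's corpus or code; it shows how forward and backward TPs between syllables are derived from bigram and unigram counts, and why within-word transitions come out higher than those straddling word boundaries.

```python
from collections import Counter

# Toy stand-in for a CDS corpus: utterances as lists of words, each word
# a list of syllables. (Invented items; the study used ~15,000 words of
# German CHILDES data.)
utterances = [
    [["du"], ["hast"], ["ei", "nen"], ["ba", "ren"]],
    [["wo"], ["ist"], ["der"], ["ba", "ren"]],
    [["ei", "nen"], ["ap", "fel"], ["hast"], ["du"]],
]

# Count syllable bigrams and unigrams per utterance, recording whether
# each transition falls inside a word or straddles a word boundary.
bigrams, unigrams, within, between = Counter(), Counter(), [], []
for utt in utterances:
    sylls, finals = [], set()
    for word in utt:
        sylls.extend(word)
        finals.add(len(sylls) - 1)  # index of each word-final syllable
    unigrams.update(sylls)
    for i, (a, b) in enumerate(zip(sylls, sylls[1:])):
        bigrams[(a, b)] += 1
        (between if i in finals else within).append((a, b))

def forward_tp(a, b):
    # P(b | a): how well a syllable predicts its successor
    return bigrams[(a, b)] / unigrams[a]

def backward_tp(a, b):
    # P(a | b): how well a syllable predicts its predecessor
    return bigrams[(a, b)] / unigrams[b]

mean = lambda pairs, tp: sum(tp(a, b) for a, b in pairs) / len(pairs)
print(mean(within, forward_tp) > mean(between, forward_tp))  # within > between
```

In this miniature corpus, syllables within a word (e.g. "ei"-"nen") always co-occur, so their TPs approach 1, while boundary-straddling pairs share probability mass across many continuations, which is the dip a TP-tracking learner could exploit as a boundary cue.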
|
23
|
Does morphological complexity affect word segmentation? Evidence from computational modeling. Cognition 2021; 220:104960. [PMID: 34920298 DOI: 10.1016/j.cognition.2021.104960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 06/01/2021] [Accepted: 11/15/2021] [Indexed: 11/21/2022]
Abstract
How can infants detect where words or morphemes start and end in the continuous stream of speech? Previous computational studies have investigated this question mainly for English, where morpheme and word boundaries are often isomorphic. Yet in many languages, words are often multimorphemic, such that word and morpheme boundaries do not align. Our study employed corpora of two languages that differ in the complexity of inflectional morphology, Chintang (Sino-Tibetan) and Japanese (in Experiment 1), as well as corpora of artificial languages ranging in morphological complexity, as measured by the ratio and distribution of morphemes per word (in Experiments 2 and 3). We used two baselines and three conceptually diverse word segmentation algorithms, two of which rely purely on sublexical information using distributional cues, and one that builds a lexicon. The algorithms' performance was evaluated on both word- and morpheme-level representations of the corpora. Segmentation results were better for the morphologically simpler languages than for the morphologically more complex languages, in line with the hypothesis that languages with greater inflectional complexity could be more difficult to segment into words. We further show that the effect of morphological complexity is relatively small, compared to that of algorithm and evaluation level. We therefore recommend that infant researchers look for signatures of the different segmentation algorithms and strategies, before looking for differences in infant segmentation landmarks across languages varying in complexity.
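Segmentation output of this kind is typically scored with a boundary-level F-score against a gold standard; the sketch below (function names and conventions are mine, not the paper's) shows how evaluating one prediction against word-level versus morpheme-level gold segmentations yields the two evaluation levels described:

```python
def boundaries(words):
    """Cumulative word-end positions, excluding the final (trivial) one."""
    pos, out = 0, set()
    for w in words[:-1]:
        pos += len(w)
        out.add(pos)
    return out

def boundary_f1(gold_words, predicted_words):
    """Boundary-level F-score between a gold and a predicted segmentation."""
    g, p = boundaries(gold_words), boundaries(predicted_words)
    if not g or not p:
        return 0.0
    tp = len(g & p)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec)
```

An undersegmented prediction (e.g., `["thedog", "runs"]` against gold `["the", "dog", "runs"]`) keeps perfect precision but loses recall, which the F-score penalizes.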
|
24
|
The effects of maternal input on language in the absence of genetic confounds: Vocabulary development in internationally adopted children. Child Dev 2021; 93:237-253. [PMID: 34882780 DOI: 10.1111/cdev.13688] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Previous studies have found correlations between parent input and child language outcomes, providing prima facie evidence for a causal relation. However, this could also reflect the effects of shared genes. The present study removed this genetic confound by measuring English vocabulary growth in 29 preschool-aged children (21 girls) aged 31-73 months and 17 infants (all girls) aged 15-32 months adopted from China and Eastern Europe, and comparing it to the speech produced by their adoptive mothers. Vocabulary growth in both groups was correlated with features of maternal input: in infants, with the mean length of maternal utterances; in preschoolers, with both mean length of utterance and lexical diversity. Thus, input effects on language outcomes persist even in the absence of genetic confounds.
|
25
|
Fine-tuning language discrimination: Bilingual and monolingual infants' detection of language switching. INFANCY 2021; 26:1037-1056. [PMID: 34482624 PMCID: PMC8530864 DOI: 10.1111/infa.12429] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 06/13/2021] [Accepted: 07/14/2021] [Indexed: 11/27/2022]
Abstract
The ability to differentiate between two languages sets the stage for bilingual learning. Infants can discriminate languages when hearing long passages, but language switches often occur on short time scales with few cues to language identity. As bilingual infants begin learning sequences of sounds and words, how do they detect the dynamics of two languages? In two studies using the head-turn preference procedure, we investigated whether infants (n = 44) can discriminate languages at the level of individual words. In Study 1, bilingual and monolingual 8- to 12-month-olds were tested on their detection of single-word language switching in lists of words (e.g., "dog… lait [fr. milk]"). In Study 2, they were tested on language switching within sentences (e.g., "Do you like the lait?"). We found that infants were unable to detect language switching in lists of words, but the results were inconclusive about infants' ability to detect language switching within sentences. No differences were observed between bilinguals and monolinguals. Given that bilingual proficiency eventually requires detection of sound sequences across two languages, more research will be needed to conclusively understand when and how this skill emerges. Materials, data, and analysis scripts are available at https://osf.io/9dtwn/.
|
26
|
Child-directed and overheard input from different speakers in two distinct cultures. JOURNAL OF CHILD LANGUAGE 2021; 49:1-20. [PMID: 34663486 DOI: 10.1017/s0305000921000623] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Although in most communities interaction occurs between the child and multiple speakers, most previous research on input to children has focused on input from mothers. We annotated recordings of Sesotho-learning toddlers living in non-industrialized Lesotho, a country surrounded by South Africa, and French-learning toddlers living in urban regions of France. We examined who produced the input (mothers, other children, adults), how much input was child-directed, and whether and how it varied across speakers. As expected, mothers contributed most of the input in the French recordings. However, in the Sesotho recordings, input from other children was more common than input from mothers or other adults. Child-directed speech from all speakers in both cultural groups showed similar qualitative modifications. Our findings suggest that input from other children is prevalent and has features similar to the child-directed speech from adults described in previous work, inviting cross-cultural research into the effects of input from other children.
|
27
|
Child-directed speech is optimized for syntax-free semantic inference. Sci Rep 2021; 11:16527. [PMID: 34400656 PMCID: PMC8368066 DOI: 10.1038/s41598-021-95392-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 07/22/2021] [Indexed: 02/07/2023] Open
Abstract
The way infants learn language is a highly complex adaptive behavior. This behavior chiefly relies on the ability to extract information from the speech they hear and combine it with information from the external environment. Most theories assume that this ability critically hinges on the recognition of at least some syntactic structure. Here, we show that child-directed speech allows for semantic inference without relying on explicit structural information. We simulate the process of semantic inference with machine learning applied to large text collections of two different types of speech, child-directed speech versus adult-directed speech. Taking the core meaning of causality as a test case, we find that in child-directed speech causal meaning can be successfully inferred from simple co-occurrences of neighboring words. By contrast, semantic inference in adult-directed speech fundamentally requires additional access to syntactic structure. These results suggest that child-directed speech is ideally shaped for a learner who has not yet mastered syntactic structure.
|
28
|
Abstract
Young children learn language at an incredible rate. Although children come prepared with powerful statistical-learning mechanisms, the statistics they encounter are also prepared for them: Children learn from caregivers motivated to communicate with them. How precisely do parents tune their speech to their children's individual language knowledge? To answer this question, we asked parent-child pairs (N = 41) to play a reference game in which the parents' goal was to guide their child to select a target animal from a set of three. Parents fine-tuned their referring expressions to their children's knowledge at the lexical level, producing more informative references for animals they thought their children did not know. Further, parents learned about their children's knowledge over the course of the game and tuned their referring expressions accordingly. Child-directed speech may thus support children's learning not because it is uniformly simplified but because it is tuned to individual children's language development.
|
29
|
Tailoring the Input to Children's Needs: The Use of Fine Lexical Tuning in Speech Directed to Normally Hearing Children and Children With Cochlear Implants. Front Psychol 2021; 12:676664. [PMID: 34220646 PMCID: PMC8245684 DOI: 10.3389/fpsyg.2021.676664] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 05/24/2021] [Indexed: 12/04/2022] Open
Abstract
Purpose: The aim of the present study was to explore fine lexical tuning in Dutch infant-directed speech (IDS) addressed to congenitally deaf infants who received a cochlear implant (CI) early in life (<2 years of age), in comparison with children with normal hearing (NH). The longitudinal pattern of parents' utterance length in the initial stages of the child's lexical development was examined. Parents' utterances containing the words the children eventually acquired in the earliest developmental stages were selected, and their MLU (mean length of utterance) was measured. Method: Transcriptions of monthly recordings of spontaneous interactions of 10 CI children and 30 NH children with their parents were analyzed. The children with CI were followed from the moment their device was switched on, and the NH children from the age of 6 months onwards. A total of 57,846 utterances of parents of CI children and 149,468 utterances of parents of NH children were analyzed. Results: IDS addressed to children with NH and children with CI exhibits fine lexical tuning: parents adjust the MLU of the utterances that contain the words that children are on the verge of producing themselves. More specifically, the parents' mean length of those utterances decreased in relation to the point when the children began using the item. Consequently, the number of occurrences of the lexical item in isolation increased. The speech addressed to all the children exhibited this phenomenon, but it was significantly more strongly present in speech addressed to the children with CI. Conclusions: The speech addressed to children with NH and CI is characterized by fine lexical tuning and a high incidence of single-word utterances in the period leading up to the children's first use of words in speech production. Notwithstanding striking commonalities, IDS addressed to children with a hearing impairment is markedly different, which suggests that parents take the specific characteristics of these children into account.
|
30
|
Fine lexical tuning in infant directed speech to typically developing children. JOURNAL OF CHILD LANGUAGE 2021; 48:591-604. [PMID: 32698914 DOI: 10.1017/s0305000920000379] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Do parents fine-tune the MLU of utterances with a particular word as the word is on the verge of appearing in the child's production? We analyzed a corpus of spontaneous interactions of 30 dyads. The children were in the initial stages of their lexical development, and the parents' utterances containing the words the children eventually acquired were selected. The main finding is that the MLU of the parental utterances containing the target words gradually decreased up to the point of the children's first production of those words. This suggests that parents fine-tune their utterances to support the children's linguistic development.
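The tuning measure described here, the MLU of just the parental utterances containing a target word, can be sketched as follows (a simplification that counts words rather than morphemes; the function names are illustrative, not from the study):

```python
def mlu(utterances):
    """Mean length of utterance, approximated in words per utterance."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def target_mlu(parent_utterances, target):
    """MLU of only the parental utterances containing the target word.
    Tracking this value month by month would reveal the gradual decrease
    the study reports up to the child's first production of the word."""
    hits = [u for u in parent_utterances if target in u.split()]
    return mlu(hits) if hits else None
```

Comparing `target_mlu` across recording sessions, word by word, is one plausible way to operationalize the fine-tuning analysis.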
|
31
|
A Vocabulary Acquisition and Usage for Late Talkers Treatment Efficacy Study: The Effect of Input Utterance Length and Identification of Responder Profiles. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:1235-1255. [PMID: 33784467 PMCID: PMC8608147 DOI: 10.1044/2020_jslhr-20-00525] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 11/09/2020] [Accepted: 12/10/2020] [Indexed: 06/12/2023]
Abstract
Purpose This study examined the efficacy of the Vocabulary Acquisition and Usage for Late Talkers (VAULT) treatment in a version that manipulated the length of clinician utterance in which a target word was presented (dose length). The study also explored ways to characterize treatment responders versus nonresponders. Method Nineteen primarily English-speaking late-talking toddlers (aged 24-34 months at treatment onset) received VAULT and were quasi-randomly assigned to have target words presented in grammatical utterances matching one of two lengths: brief (four words or fewer) or extended (five words or more). Children were measured on their pre- and posttreatment production of (a) target and control words specific to treatment and (b) words not specific to treatment. Classification and Regression Tree (CART) analysis was used to classify responders versus nonresponders. Results VAULT was successful as a whole (i.e., treatment effect sizes greater than 0), with no difference between the brief and extended conditions. Despite the overall significant treatment effect, the treatment was not successful for all participants. CART results (using participants from the current study and a previous iteration of VAULT) provided a dual-node decision tree for classifying treatment responders versus nonresponders. Conclusions The input-based VAULT treatment protocol is efficacious and offers some flexibility in terms of utterance length. When VAULT works, it works well. The CART decision tree uses pretreatment vocabulary levels and performance in the first two treatment sessions to provide clinicians with promising guidelines for who is likely to be a nonresponder and thus might need a modified treatment plan. Supplemental Material https://doi.org/10.23641/asha.14226641.
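A dual-node decision tree of the kind CART produces can be read as two nested threshold tests. The sketch below is purely illustrative: the feature names and cutoffs are hypothetical placeholders, not the study's actual splits:

```python
def predict_responder(pretreatment_vocab, early_session_gain,
                      vocab_cut=50, gain_cut=2):
    """Hypothetical dual-node decision rule in the spirit of a CART
    result: split first on pretreatment vocabulary, then on performance
    in the earliest sessions. All thresholds are illustrative."""
    if pretreatment_vocab >= vocab_cut:
        return "responder"
    return "responder" if early_session_gain >= gain_cut else "nonresponder"
```

The clinical appeal of such a rule is that both features are available early, before most of the treatment has been delivered.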
|
32
|
When statistics collide: The use of transitional and phonotactic probability cues to word boundaries. Mem Cognit 2021; 49:1300-1310. [PMID: 33751490 DOI: 10.3758/s13421-021-01163-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/01/2021] [Indexed: 11/08/2022]
Abstract
Statistical regularities in linguistic input, such as transitional probability and phonotactic probability, have been shown to promote speech segmentation. It remains unclear, however, whether or how the combination of transitional probabilities and subtle phonotactic probabilities influence segmentation. The present study provides a fine-grained investigation of the effects of such combined statistics. Adults (N = 81) were tested in one of two conditions. In the Anchor condition, they heard a continuous stream of words with small differences in phonotactic probabilities. In the Uniform condition, all words had comparable phonotactic probabilities. In both conditions, transitional probability was stronger in words than in part-words. Only participants from the Anchor condition preferred words at test, indicating that the combination of transitional probabilities and subtle phonotactic probabilities may facilitate speech segmentation. We discuss the methodological implications of our findings, which demonstrate that even small phonotactic variations should be accounted for when investigating statistical speech segmentation.
|
33
|
Stem similarity modulates infants' acquisition of phonological alternations. Cognition 2021; 209:104573. [PMID: 33406462 DOI: 10.1016/j.cognition.2020.104573] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Revised: 11/14/2020] [Accepted: 12/22/2020] [Indexed: 10/22/2022]
Abstract
Phonemes have variant pronunciations depending on context. For instance, in American English, the [t] in pat [pæt] and the [d] in pad [pæd] are both realized with a tap [ɾ] when the -ing suffix is attached, [pæɾɪŋ]. We show that despite greater distributional and acoustic support for the [t]-tap alternation, 12-month-olds successfully relate taps to stems with a perceptually-similar final [d], not the dissimilar final-[t]. Thus, distributional learning of phonological alternations is constrained by infants' preference for the alternation of perceptually-similar segments. Further, the ability to relate variant surface forms emerges between 8- and 12-months. Our findings of biased learning provide further empirical support for a role for perceptual similarity in the acquisition of linguistically-relevant categories. We discuss the implications of our findings for phonological theory, language acquisition and models of the mental lexicon.
|
34
|
Using lexical context to discover the noun category: Younger children have it easier. PSYCHOLOGY OF LEARNING AND MOTIVATION 2021. [DOI: 10.1016/bs.plm.2021.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
35
|
Adjacent and Non-Adjacent Word Contexts Both Predict Age of Acquisition of English Words: A Distributional Corpus Analysis of Child-Directed Speech. Cogn Sci 2020; 44:e12899. [PMID: 33164262 DOI: 10.1111/cogs.12899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Revised: 07/27/2020] [Accepted: 08/04/2020] [Indexed: 12/01/2022]
Abstract
Children show a remarkable degree of consistency in learning some words earlier than others. What patterns of word usage predict variations among words in age of acquisition? We use distributional analysis of a naturalistic corpus of child-directed speech to create quantitative features representing natural variability in word contexts. We evaluate two sets of features: One set is generated from the distribution of words into frames defined by the two adjacent words. These features primarily encode syntactic aspects of word usage. The other set is generated from non-adjacent co-occurrences between words. These features encode complementary thematic aspects of word usage. Regression models using these distributional features to predict age of acquisition of 656 early-acquired English words indicate that both types of features improve predictions over simpler models based on frequency and appearance in salient or simple utterance contexts. Syntactic features were stronger predictors of children's production than comprehension, whereas thematic features were stronger predictors of comprehension. Overall, earlier acquisition was predicted by features representing frames that select for nouns and verbs, and by thematic content related to food and face-to-face play topics; later acquisition was predicted by features representing frames that select for pronouns and question words, and by content related to narratives and object play.
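The adjacent-word frame features can be sketched as counts of words occurring in "A _ B" slots; the `#` boundary padding below is one plausible convention, not necessarily the authors':

```python
from collections import Counter

def frame_counts(utterances):
    """Count word occurrences inside frames defined by the two adjacent
    words, with '#' padding utterance boundaries (an illustrative choice)."""
    frames = Counter()
    for u in utterances:
        toks = ["#"] + u.split() + ["#"]
        for i in range(1, len(toks) - 1):
            frames[((toks[i - 1], toks[i + 1]), toks[i])] += 1
    return frames

f = frame_counts(["the dog runs", "the cat runs"])
```

Here "dog" and "cat" share the frame `the _ runs`, illustrating how such frames can select for a syntactic category like nouns.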
|
36
|
A computational theory of child overextension. Cognition 2020; 206:104472. [PMID: 33091729 DOI: 10.1016/j.cognition.2020.104472] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Revised: 09/02/2020] [Accepted: 09/24/2020] [Indexed: 11/25/2022]
Abstract
Overextension, the phenomenon whereby children extend known words to describe referents outside their vocabulary, is a hallmark of lexical innovation in early childhood. Overextension is a subject of extensive inquiry in linguistics and developmental psychology, but there exists no coherent formal account of this phenomenon. We develop a general computational framework that captures important properties of overextension reported separately in the previous literature. We operationalize overextension as probabilistic inference over a conceptual space that draws on a fusion of knowledge from lexical semantics, deep neural networks, and psychological experiments to support both production and comprehension. We show how this minimally parameterized framework explains overextension in young children over a comprehensive set of noun-referent pairs previously reported in child speech, and it also predicts the behavioral asymmetry in children's overextensional production and comprehension reported in lab settings. Our work offers a computational theory for the origins of word meaning extension and supports a single-system view of language production and comprehension.
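Probabilistic inference over a conceptual space can be illustrated with a toy similarity-based chooser. The 2-D vectors and the temperature parameter below are stand-ins for the paper's fused representation, not its actual model:

```python
import math

def production_probs(referent, vocabulary, beta=1.0):
    """Soft choice among known words by proximity in a conceptual space:
    closer concepts get exponentially higher production probability.
    Vectors and beta are toy stand-ins, not the paper's parameters."""
    scores = {w: math.exp(-beta * math.dist(referent, v))
              for w, v in vocabulary.items()}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

# A wolf-like referent outside the vocabulary: the nearest known word wins
probs = production_probs((1.0, 0.0), {"dog": (0.0, 0.0), "ball": (5.0, 5.0)})
```

With only "dog" and "ball" known, the wolf-like referent is overwhelmingly labeled "dog", which is the overextension pattern in miniature.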
|
37
|
Dual language statistical word segmentation in infancy: Simulating a language-mixing bilingual environment. Dev Sci 2020; 24:e13050. [PMID: 33063938 DOI: 10.1111/desc.13050] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Revised: 08/24/2020] [Accepted: 10/08/2020] [Indexed: 11/26/2022]
Abstract
Infants are sensitive to syllable co-occurrence probabilities when segmenting words from fluent speech. However, segmenting two languages overlapping at the syllabic level is challenging because the statistical cues across the languages are incongruent. Successful segmentation, thus, relies on infants' ability to separate language inputs and track the statistics of each language. Here, we report three experiments investigating how infants statistically segment words from two overlapping languages in a simulated language-mixing bilingual environment. In the first two experiments, we investigated whether 9.5-month-olds can use French and English phonetic markers to segment words from two overlapping artificial languages produced by one individual. After showing that infants could segment the languages when the languages were presented in isolation (Experiment 1), we presented infants with two interleaved languages differing in phonetic cues (Experiment 2). Both monolingual and bilingual infants successfully segmented words from one of the two languages-the language heard last during familiarization. In Experiment 3, a conceptual replication, we replicated the findings of Experiment 2 with a different population and with different cues. As before, when 12-month-old monolingual infants heard two interleaved languages differing in English and Finnish phonetic cues, they learned only the last language heard during familiarization. Together, our findings suggest that segmenting words in a language-mixing environment is challenging, but infants possess a nascent ability to recruit phonetic cues to segment words from one of two overlapping languages in a bilingual-like environment. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=92pNcpxZguw.
|
38
|
Abstract
BACKGROUND How the brain develops accurate models of the external world and generates appropriate behavioral responses is a vital question of widespread multidisciplinary interest. It is increasingly understood that brain signal variability, posited to enhance perception, facilitate flexible cognitive representations, and improve behavioral outcomes, plays an important role in neural and cognitive development. The ability to perceive, interpret, and respond to complex and dynamic social information is particularly critical for the development of adaptive learning and behavior. Social perception relies on oxytocin-regulated neural networks that emerge early in development. METHODS We tested the hypothesis that individual differences in the endogenous oxytocinergic system early in life may influence social behavioral outcomes by regulating variability in brain signaling during social perception. In study 1, 55 infants provided a saliva sample at 5 months of age for analysis of individual differences in the oxytocinergic system and underwent electroencephalography (EEG) while listening to human vocalizations at 8 months of age for the assessment of brain signal variability. Infant behavior was assessed via parental report. In study 2, 60 infants provided a saliva sample and underwent EEG while viewing faces and objects and listening to human speech and water sounds at 4 months of age. Infant behavior was assessed via parental report and eye tracking. RESULTS We show in two independent infant samples that increased brain signal entropy during social perception is in part explained by an epigenetic modification to the oxytocin receptor gene (OXTR) and accounts for significant individual differences in social behavior in the first year of life. These results are measure-, context-, and modality-specific: entropy, not standard deviation, links OXTR methylation and infant behavior; entropy evoked specifically during social perception explains only social behavior; and only entropy evoked during social auditory perception predicts infant vocalization behavior. CONCLUSIONS Demonstrating these associations in infancy is critical for elucidating the neurobiological mechanisms accounting for individual differences in cognition and behavior relevant to neurodevelopmental disorders. Our results suggest that an epigenetic modification to the oxytocin receptor gene and brain signal entropy are useful indicators of social development and may hold potential diagnostic, therapeutic, and prognostic value.
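As a rough illustration of the entropy-versus-standard-deviation contrast, Shannon entropy of an amplitude-binned signal is one simple variability proxy; the study's actual measure may differ (e.g., multiscale entropy), so treat this strictly as a sketch:

```python
import math
from collections import Counter

def signal_entropy(samples, n_bins=8):
    """Shannon entropy (bits) of an amplitude-binned signal: a simple
    proxy for 'brain signal variability', distinct from the standard
    deviation (two signals can share an SD but differ in entropy)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0          # guard constant signals
    bins = Counter(min(int((s - lo) / width), n_bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```

A constant signal has zero entropy; a signal alternating evenly between two amplitude levels carries exactly one bit.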
|
39
|
14-month-olds exploit verbs' syntactic contexts to build expectations about novel words. INFANCY 2020; 25:719-733. [DOI: 10.1111/infa.12354] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Revised: 04/13/2020] [Accepted: 04/29/2020] [Indexed: 11/30/2022]
|
40
|
Abstract
Learners often need to identify and remember recurring units in continuous sequences, but the underlying mechanisms are debated. A particularly prominent candidate mechanism relies on distributional statistics such as Transitional Probabilities (TPs). However, it is unclear what the outputs of statistical segmentation mechanisms are, and if learners store these outputs as discrete chunks in memory. We critically review the evidence for the possibility that statistically coherent items are stored in memory and outline difficulties in interpreting past research. We use Slone and Johnson's (2018) experiments as a case study to show that it is difficult to delineate the different mechanisms learners might use to solve a learning problem. Slone and Johnson (2018) reported that 8-month-old infants learned coherent chunks of shapes in visual sequences. Here, we describe an alternate interpretation of their findings based on a multiple-cue integration perspective. First, when multiple cues to statistical structure were available, infants' looking behavior seemed to track with the strength of the strongest one - backward TPs, suggesting that infants process multiple cues simultaneously and select the strongest one. Second, like adults, infants are exquisitely sensitive to chunks, but may require multiple cues to extract them. In Slone and Johnson's (2018) experiments, these cues were provided by immediate chunk repetitions during familiarization. Accordingly, infants showed strongest evidence of chunking following familiarization sequences in which immediate repetitions were more frequent. These interpretations provide a strong argument for infants' processing of multiple cues and the potential importance of multiple cues for chunk recognition in infancy.
|
41
|
Look who's talking: A comparison of automated and human-generated speaker tags in naturalistic day-long recordings. Behav Res Methods 2020; 52:641-653. [PMID: 31342467 PMCID: PMC6980911 DOI: 10.3758/s13428-019-01265-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The LENA system has revolutionized research on language acquisition, providing both a wearable device to collect day-long recordings of children's environments, and a set of automated outputs that process, identify, and classify speech using proprietary algorithms. This output includes information about input sources (e.g., adult male, electronics). While this system has been tested across a variety of settings, here we delve deeper into validating the accuracy and reliability of LENA's automated diarization, i.e., tags of who is talking. Specifically, we compare LENA's output with a gold standard set of manually generated talker tags from a dataset of 88 day-long recordings, taken from 44 infants at 6 and 7 months, which includes 57,983 utterances. We compare accuracy across a range of classifications from the original LENA Technical Report, alongside a set of analyses examining classification accuracy by utterance type (e.g., declarative, singing). Consistent with previous validations, we find overall high agreement between the human and LENA-generated speaker tags for adult speech in particular, with poorer performance identifying child, overlap, noise, and electronic speech (accuracy range across all measures: 0-92%). We discuss several clear benefits of using this automated system alongside potential caveats based on the error patterns we observe, concluding with implications for research using LENA-generated speaker tags.
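At its core, this validation amounts to per-category agreement between human and automated tags over aligned utterances; a toy sketch with hypothetical tag labels (not LENA's actual category names):

```python
from collections import Counter

def per_category_agreement(gold, auto):
    """Proportion of each human-tagged category that the automated
    tagger labels identically, over aligned utterance-level tags."""
    totals, hits = Counter(gold), Counter()
    for g, a in zip(gold, auto):
        hits[g] += g == a
    return {cat: hits[cat] / totals[cat] for cat in totals}

agreement = per_category_agreement(
    ["adult", "adult", "child", "noise"],
    ["adult", "child", "child", "adult"])
```

Reporting agreement per gold category, rather than a single overall accuracy, is what exposes the uneven performance pattern (strong for adult speech, weaker for child, overlap, noise, and electronic speech).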
|
42
|
Infants Segment Words from Songs-An EEG Study. Brain Sci 2020; 10:E39. [PMID: 31936586 PMCID: PMC7017257 DOI: 10.3390/brainsci10010039] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 12/25/2019] [Accepted: 01/06/2020] [Indexed: 12/15/2022] Open
Abstract
Children's songs are omnipresent and highly attractive stimuli in infants' input. Previous work suggests that infants process linguistic-phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children's songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
|
43
|
Abstract
When referring to objects, adults package words, sentences, and gestures in ways that shape children's learning. Here, to understand how continuity of reference shapes word learning, an adult taught new words to 4-year-old children (N = 120) using either clusters of references to the same object or no sequential references to each object. In three experiments, the adult used a combination of labels and other object references, which provided informative discourse (e.g., This is small and green), neutral discourse (e.g., This is really great), or no verbal discourse. Switching verbal references from one object to another interfered with learning relative to providing clustered references to a particular object, revealing that discontinuity in discourse hinders children's encoding of new words.
|
44
|
Infant-directed speech as a simplified but not simple register: a longitudinal study of lexical and syntactic features. JOURNAL OF CHILD LANGUAGE 2020; 47:22-44. [PMID: 31663485 DOI: 10.1017/s0305000919000643] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Infant-directed speech (IDS) is a specific register that adults use to address infants, and it is characterised by prosodic exaggeration and lexical and syntactic simplification. Several authors have underlined that this simplified speech becomes more complex according to the infant's age. However, there is a lack of studies on lexical and syntactic modifications in Italian IDS during the first year of an infant's life. In the present study, 80 mother-infant dyads were longitudinally observed at 3, 6, 9, and 12 months during free-play interactions. Maternal vocal productions were subsequently coded. The results show an overall low lexical variability and syntactic complexity that identify speech to infants as a simplified register; however, the high occurrence of complex items and well-structured utterances suggests that IDS is not simple speech. Moreover, maternal IDS becomes more complex over time, but not linearly, with a maximum simplification in the second half of the first year.
|
45
|
Dónde está la ball? Examining the effect of code switching on bilingual children's word recognition. JOURNAL OF CHILD LANGUAGE 2019; 46:1238-1248. [PMID: 31405393 PMCID: PMC7592264 DOI: 10.1017/s0305000919000400] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Hearing words in sentences facilitates word recognition in monolingual children. Many children grow up receiving input in multiple languages - including exposure to sentences that 'mix' the languages. We explored Spanish-English bilingual toddlers' (n = 24) ability to identify familiar words in three conditions: (i) single word (ball!); (ii) same-language sentence (Where's the ball?); or (iii) mixed-language sentence (Dónde está la ball?). Children successfully identified words across conditions; however, the advantage linked to hearing words in sentences was present only in the same-language condition. This work thus suggests that language mixing plays an important role in bilingual children's ability to recognize spoken words.
|
46
|
Differences in sentence complexity in the text of children's picture books and child-directed speech. FIRST LANGUAGE 2019; 39:527-546. [PMID: 31564759 PMCID: PMC6764450 DOI: 10.1177/0142723719849996] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Reading picture books to pre-literate children is associated with improved language outcomes, but the causal pathways of this relationship are not well understood. The present analyses focus on several syntactic differences between the text of children's picture books and typical child-directed speech, with the aim of understanding ways in which picture book text may systematically differ from typical child-directed speech. The analyses show that picture books contain more rare and complex sentence types, including passive sentences and sentences containing relative clauses, than does child-directed speech. These differences in the patterns of language contained in picture books and typical child-directed speech suggest that one important means by which picture book reading may come to be associated with improved language outcomes is by providing children with types of complex language that might be otherwise rare in their input.
|
47
|
The ecology of prelinguistic vocal learning: parents simplify the structure of their speech in response to babbling. JOURNAL OF CHILD LANGUAGE 2019; 46:998-1011. [PMID: 31307565 DOI: 10.1017/s0305000919000291] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
What is the function of babbling in language learning? We examined the structure of parental speech as a function of contingency on infants' non-cry prelinguistic vocalizations. We analyzed several acoustic and linguistic measures of caregivers' speech. Contingent speech was less lexically diverse and shorter in utterance length than non-contingent speech. We also found that only the lexical diversity of contingent parental speech predicted infant vocal maturity. These findings illustrate a new form of influence infants have over their ambient language in everyday learning environments. By vocalizing, infants catalyze the production of simplified, more easily learnable language from caregivers.
|
48
|
Monolingual and bilingual infants’ word segmentation abilities in an inter‐mixed dual‐language task. INFANCY 2019; 24:718-737. [DOI: 10.1111/infa.12296] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 04/03/2019] [Accepted: 04/09/2019] [Indexed: 11/28/2022]
|
49
|
The most frequently used words: Comparing child-directed speech and young children's speech to inform vocabulary selection for aided input. Augment Altern Commun 2019; 35:120-131. [DOI: 10.1080/07434618.2019.1576225] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022] Open
|
50
|
Mapping non-native pitch contours to meaning: Perceptual and experiential factors. JOURNAL OF MEMORY AND LANGUAGE 2019; 105:131-140. [PMID: 31244505 PMCID: PMC6594708 DOI: 10.1016/j.jml.2018.12.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Infants show interesting patterns of flexibility and constraint early in word learning. Here, we explore perceptual and experiential factors that drive associative learning of labels that differ in pitch contour. Contrary to the salience hypothesis proposed in Experiment 1, English-learning 14-month-olds failed to map acoustically distinctive level and dipping labels to novel referents, even though they discriminated the labels when no potential referents were present. Conversely, infants readily mapped the less distinctive rising and dipping labels. In Experiment 2, we found that the degree of pitch variation in labels also did not account for learning. Instead, English-learning infants only learned if one of the labels had a rising pitch contour. We argue that experience with hearing and/or producing native language prosody may lead infants to initially over-interpret the role rising pitch plays in differentiating words. Together, our findings suggest that multiple factors contribute to whether specific acoustic forms will function as candidate object labels.
|