1
Monaghan P, Donnelly S, Alcock K, Bidgood A, Cain K, Durrant S, Frost RLA, Jago LS, Peter MS, Pine JM, Turnbull H, Rowland CF. Learning to generalise but not segment an artificial language at 17 months predicts children's language skills 3 years later. Cogn Psychol 2023; 147:101607. PMID: 37804784. DOI: 10.1016/j.cogpsych.2023.101607.
Abstract
We investigated whether learning an artificial language at 17 months predicted children's natural-language vocabulary and grammar skills at 54 months. At 17 months, children listened to an artificial language containing non-adjacent dependencies and were then tested on their ability to segment the language and to generalise its structure. At 54 months, the same children completed a range of standardised natural-language tasks assessing receptive and expressive vocabulary and grammar. A structural equation model showed that generalisation of the artificial language at 17 months predicted language ability - a composite of vocabulary and grammar skills - at 54 months, whereas segmentation of the artificial language at 17 months did not. Artificial language learning tasks - especially those that probe grammar learning - provide a valuable tool for uncovering the mechanisms driving children's early language development.
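As an illustration only, a structural equation model of this general shape could be specified in Python with the semopy package (assuming its lavaan-style model syntax); every column name below is a hypothetical placeholder, not one of the study's variables.

import pandas as pd
import semopy

# Hypothetical dataset: one row per child, with 17-month artificial-language
# scores and 54-month standardised language scores (all column names invented).
data = pd.read_csv("child_language_measures.csv")

# Latent language ability at 54 months, indicated by vocabulary and grammar
# measures, regressed on the two 17-month artificial-language measures.
model_desc = """
language_54m =~ receptive_vocab + expressive_vocab + receptive_grammar + expressive_grammar
language_54m ~ generalisation_17m + segmentation_17m
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # loadings and regression path estimates

Under such a specification, a reliable path from generalisation_17m but not from segmentation_17m to the latent factor would correspond to the pattern reported in the abstract.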
Affiliation(s)
- Lana S Jago
- Lancaster University, UK; Liverpool John Moores University, UK
- Caroline F Rowland
- Liverpool University, UK; Max Planck Institute for Psycholinguistics, the Netherlands; Radboud University, the Netherlands
2
Does Where You Live Predict What You Say? Associations between Neighborhood Factors, Child Sleep, and Language Development. Brain Sci 2022; 12:223. PMID: 35203986. PMCID: PMC8870121. DOI: 10.3390/brainsci12020223.
Abstract
Language ability is strongly related to important child developmental outcomes. Family-level socioeconomic status influences child language ability, but it is unclear whether, and through which mechanisms, neighborhood-level factors affect child language. The current study investigated the association between neighborhood factors (deprivation and disorder) assessed before birth and child language outcomes at age 5, with sleep duration as a potential underlying pathway. Secondary analysis was conducted on data collected between 2008 and 2018 on a subsample of 2444 participants from the All Our Families cohort study (Calgary, Canada) for whom neighborhood information from pregnancy could be geocoded. Neighborhood deprivation was determined using the Vancouver Area Neighborhood Deprivation Index (VANDIX), and disorder was assessed using crime reports. Mothers reported on their children's sleep duration and language ability. Multilevel modeling indicated that greater neighborhood deprivation and disorder during pregnancy predicted lower scores on the Children's Communication Checklist-2 (CCC-2) at 5 years. Path analyses revealed an indirect effect of neighborhood disorder on language through child sleep duration at 12 months. These results add to growing evidence that child development should be considered within the context of multiple systems. Sleep duration, as an underlying link between environmental factors and child language ability, warrants further study as a potential target for intervention.
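For illustration, a multilevel model of this general shape could be fitted in Python with statsmodels; the data file and column names below (ccc2_score, deprivation, disorder, sleep_12m, neighbourhood_id) are assumptions for the sketch, not the study's variables.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical child-level data with a neighbourhood identifier for clustering.
df = pd.read_csv("neighbourhood_language_data.csv")

# Random intercept for neighbourhood; fixed effects for prenatal deprivation,
# disorder, and 12-month sleep duration (the candidate mediator).
model = smf.mixedlm(
    "ccc2_score ~ deprivation + disorder + sleep_12m",
    data=df,
    groups=df["neighbourhood_id"],
)
result = model.fit()
print(result.summary())

The indirect path reported in the abstract would additionally require a model predicting sleep_12m from disorder, with the indirect effect estimated as the product of the disorder-to-sleep and sleep-to-language coefficients, or via a dedicated mediation routine.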
3
Trotter AS, Monaghan P, Beckers GJL, Christiansen MH. Exploring Variation Between Artificial Grammar Learning Experiments: Outlining a Meta-Analysis Approach. Top Cogn Sci 2020; 12:875-893. PMID: 31495072. PMCID: PMC7496870. DOI: 10.1111/tops.12454.
Abstract
Artificial grammar learning (AGL) has become an important tool for understanding aspects of human language learning and whether the abilities underlying learning may be unique to humans or found in other species. Successful learning is typically assumed when human or animal participants are able to distinguish stimuli generated by the grammar from those that are not, at a level better than chance. However, the question remains as to what subjects actually learn in these experiments. Previous AGL studies have frequently introduced multiple potential contributors to performance in the training and testing stimuli, but meta-analysis techniques now enable us to consider these multiple information sources for their contribution to learning, allowing intended and unintended structures to be assessed simultaneously. We present a blueprint for meta-analysis approaches to appraise the effect of learning in human and other animal studies for a series of artificial grammar learning experiments, focusing on studies that examine the auditory and visual modalities. We identify a series of variables that differ across these studies, concerning both structural and surface properties of the grammar and characteristics of the training and test regimes, and provide a first step in assessing the relative contribution of these design features of artificial grammars, as well as species-specific effects, to learning.
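As a minimal sketch of the pooling step such a meta-analysis involves, the snippet below computes a DerSimonian-Laird random-effects estimate from per-experiment effect sizes; the numbers are invented placeholders, not values from the reviewed AGL studies.

import numpy as np

effects = np.array([0.42, 0.15, 0.60, 0.05, 0.33])    # per-experiment effect sizes (hypothetical)
variances = np.array([0.04, 0.02, 0.09, 0.03, 0.05])  # their sampling variances (hypothetical)

# Fixed-effect weights and Q statistic for heterogeneity
w = 1.0 / variances
fe_mean = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fe_mean) ** 2)
dof = len(effects) - 1

# DerSimonian-Laird estimate of between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - dof) / c)

# Random-effects weights, pooled effect, and its standard error
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled effect = {pooled:.3f}, 95% CI half-width = {1.96 * se:.3f}")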
Affiliation(s)
- Antony S. Trotter
- Department of Speech, Hearing & Phonetic Sciences, University College London
- Padraic Monaghan
- Department of Psychology, Lancaster University
- Department of English, University of Amsterdam
- Gabriël J. L. Beckers
- Department of Psychology, Cognitive Neurobiology and Helmholtz Institute, Utrecht University
- Morten H. Christiansen
- Department of Psychology, Cornell University
- Interacting Minds Centre and School of Communication and Culture, Aarhus University
- Haskins Laboratories
4
Frost RLA, Jessop A, Durrant S, Peter MS, Bidgood A, Pine JM, Rowland CF, Monaghan P. Non-adjacent dependency learning in infancy, and its link to language development. Cogn Psychol 2020; 120:101291. PMID: 32197131. DOI: 10.1016/j.cogpsych.2020.101291.
Abstract
To acquire language, infants must learn how to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants' capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants' statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants' vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, as demonstrated by a significant difference in looking times to words versus part-words. Infants' segmentation performance was significantly related to their vocabulary size (receptive and expressive) both concurrently and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest that infants may have developed sensitivity to generalised structure, indicating that similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this sensitivity was not related to vocabulary size.
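The snippet below is a toy illustration of the kind of non-adjacent-dependency language and test items such studies use (a_X_b frames in which the first element predicts the last across a variable middle); the syllables and the part-word construction are invented for illustration and are not the study's stimuli.

import random

frames = [("pel", "jic"), ("vot", "rud")]        # non-adjacent a...b pairs (hypothetical)
middles_trained = ["wadim", "kicey", "puser"]    # middle elements heard during training
middles_novel = ["balip", "gentif"]              # withheld middles for the generalisation test

def make_word(frame, middle):
    a, b = frame
    return f"{a} {middle} {b}"

# Training stream: concatenated words built from trained middles
training = [make_word(random.choice(frames), random.choice(middles_trained)) for _ in range(20)]

# Segmentation test: a word (intact a_X_b frame) versus a part-word that spans a
# word boundary, so the non-adjacent dependency is broken
word_item = make_word(frames[0], middles_trained[0])                     # "pel wadim jic"
part_word_item = f"{middles_trained[0]} {frames[0][1]} {frames[1][0]}"   # "wadim jic vot"

# Generalisation test: a trained frame carrying a novel middle element
generalisation_item = make_word(frames[0], middles_novel[0])             # "pel balip jic"

print(word_item, "|", part_word_item, "|", generalisation_item)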
Affiliation(s)
- Andrew Jessop
- Max Planck Institute for Psycholinguistics, Netherlands
- Caroline F Rowland
- Max Planck Institute for Psycholinguistics, Netherlands; University of Liverpool, UK
5
Casillas M, Brown P, Levinson SC. Early Language Experience in a Tseltal Mayan Village. Child Dev 2019; 91:1819-1835. PMID: 31891183. DOI: 10.1111/cdev.13349.
Abstract
Daylong at-home audio recordings from 10 Tseltal Mayan children (0;2-3;0; Southern Mexico) were analyzed for how often children engaged in verbal interaction with others and whether their speech environment changed with age, time of day, household size, and number of speakers present. Children were infrequently spoken to directly; most directed speech came from adults, and it did not increase with age. Most directed speech occurred in the mornings, and interactional peaks contained nearly four times the baseline rate of directed speech. Coarse indicators of children's language development (babbling, first words, first word combinations) suggest that Tseltal children manage to extract the linguistic information they need despite minimal directed speech. Multiple proposals for how they might do so are discussed.
6
Jost E, Brill-Schuetz K, Morgan-Short K, Christiansen MH. Input Complexity Affects Long-Term Retention of Statistically Learned Regularities in an Artificial Language Learning Task. Front Hum Neurosci 2019; 13:358. PMID: 31680911. PMCID: PMC6803473. DOI: 10.3389/fnhum.2019.00358.
Abstract
Statistical learning (SL), involving sensitivity to distributional regularities in the environment, has been suggested to be an important factor in many aspects of cognition, including language. However, the degree to which statistically learned information is retained over time is not well understood. To establish whether learners preserve such regularities over time, we examined performance on an artificial second language learning task both immediately after training and at a follow-up session 2 weeks later. Participants were exposed to an artificial language (Brocanto2), half of them receiving simplified training items in which only 20% of sequences contained complex structures, whereas the other half were exposed to a training set in which 80% of the items were composed of complex sequences. Overall, participants showed signs of learning at the first session and retention at the second, but the degree of learning was affected by the nature of the training they received. Participants exposed to the simplified input outperformed those in the more complex training condition. A generalized linear mixed model (GLMM) was used to model the relationship between stimulus properties and participants' endorsement strategies across both sessions. The results indicate that participants in the complex training condition relied more on an item's chunk strength than those in the simple training condition. Taken together, this set of findings shows that statistically learned regularities are retained over the course of 2 weeks. The results also demonstrate that training on input featuring simple items leads to improved learning and retention of grammatical regularities.
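To make the chunk-strength idea concrete, the sketch below computes a test item's chunk strength as the mean training frequency of its bigram and trigram chunks; the element sequences are made-up placeholders rather than Brocanto2 stimuli.

from collections import Counter

def chunks(seq, sizes=(2, 3)):
    """All contiguous sub-sequences (chunks) of the given sizes."""
    return [tuple(seq[i:i + n]) for n in sizes for i in range(len(seq) - n + 1)]

training_items = [
    ["noun", "adj", "verb"],
    ["noun", "adj", "noun", "adj", "verb", "adv"],
    ["noun", "verb", "adv"],
]

# Frequency table of every chunk appearing in the training input
chunk_freq = Counter(c for item in training_items for c in chunks(item))

def chunk_strength(test_item):
    test_chunks = chunks(test_item)
    return sum(chunk_freq[c] for c in test_chunks) / len(test_chunks)

print(chunk_strength(["noun", "adj", "verb"]))   # high overlap with training
print(chunk_strength(["adv", "noun", "adj"]))    # lower overlap

Endorsement decisions driven by chunk strength rather than by the grammar itself would show up as a stronger effect of this score on responses, which is the contrast the GLMM in the abstract evaluates.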
Affiliation(s)
- Ethan Jost
- Department of Psychology, Cornell University, Ithaca, NY, United States
- Kara Morgan-Short
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, United States
7
Lerner I, Gluck MA. Sleep and the extraction of hidden regularities: A systematic review and the importance of temporal rules. Sleep Med Rev 2019; 47:39-50. PMID: 31252335. DOI: 10.1016/j.smrv.2019.05.004.
Abstract
As part of its role in memory consolidation, sleep has been repeatedly identified as critical for the extraction of regularities from wake experiences. However, many null results have been published as well, with no clear consensus emerging regarding the conditions that yield this sleep effect. Here, we systematically review the role of sleep in the extraction of hidden regularities, specifically those involving associative relations embedded in newly learned information. We found that the specific behavioral task used in a study had far more impact on whether a sleep effect was discovered than either the category of the cognitive processes targeted or the particular experimental design employed. One emerging pattern, however, was that the explicit detection of hidden rules is more likely when the rules are of a temporal nature (i.e., event A at time t predicts a later event B) than when they are non-temporal. We discuss this sensitivity to temporal rules in reference to the compressed memory replay occurring in the hippocampus during slow-wave sleep, and compare this effect to what happens when the extraction of regularities depends on prior knowledge and relies on structures other than the hippocampus.
Affiliation(s)
- Itamar Lerner
- Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Avenue, Newark, NJ 07102, USA
- Mark A Gluck
- Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Avenue, Newark, NJ 07102, USA
8
The role of consolidation in learning context-dependent phonotactic patterns in speech and digital sequence production. Proc Natl Acad Sci U S A 2018; 115:3617-3622. PMID: 29555766. DOI: 10.1073/pnas.1721107115.
Abstract
Speakers implicitly learn novel phonotactic patterns by producing strings of syllables, and the learning is revealed in their speech errors. First-order patterns, such as "/f/ must be a syllable onset," can be distinguished from contingent, or second-order, patterns, such as "/f/ must be an onset if the vowel is /a/, but a coda if the vowel is /o/." A meta-analysis of 19 experiments clearly demonstrated that first-order patterns strongly affect speech errors within a single experimental session, whereas second-order vowel-contingent patterns only affect errors on the second day of testing, suggesting the need for a consolidation period. Two experiments tested an analogue to these studies involving sequences of button pushes, with fingers as "consonants" and thumbs as "vowels." The button-push errors revealed two of the key speech-error findings: first-order patterns are learned quickly, but second-order thumb-contingent patterns are only strongly revealed in the errors on the second day of testing. The influence of computational complexity on the implicit learning of phonotactic patterns in speech production may be a general feature of sequence production.
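A toy encoding of the two constraint types described above, purely for illustration (the rules and the example syllable are modelled on the description, not taken from the experiments):

def violates_first_order(syllable):
    """First-order rule: /f/ may only be a syllable onset."""
    return syllable["coda"] == "f"

def violates_second_order(syllable):
    """Second-order rule: /f/ must be an onset if the vowel is /a/,
    but a coda if the vowel is /o/."""
    if syllable["vowel"] == "a":
        return syllable["coda"] == "f"
    if syllable["vowel"] == "o":
        return syllable["onset"] == "f"
    return False

# A produced (possibly erroneous) syllable: onset + vowel + coda
produced = {"onset": "h", "vowel": "o", "coda": "f"}   # "hof"
print(violates_first_order(produced))   # True: /f/ appears as a coda
print(violates_second_order(produced))  # False: with vowel /o/, /f/ belongs in the coda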
9
Schapiro AC, McDevitt EA, Chen L, Norman KA, Mednick SC, Rogers TT. Sleep Benefits Memory for Semantic Category Structure While Preserving Exemplar-Specific Information. Sci Rep 2017; 7:14869. PMID: 29093451. PMCID: PMC5665979. DOI: 10.1038/s41598-017-12884-5.
Abstract
Semantic memory encompasses knowledge about both the properties that typify concepts (e.g. robins, like all birds, have wings) and the properties that individuate conceptually related items (e.g. robins, in particular, have red breasts). We investigated the impact of sleep on new semantic learning using a property inference task in which both kinds of information are initially acquired equally well. Participants learned about three categories of novel objects possessing some properties that were shared among category exemplars and others that were unique to an exemplar, with exposure frequency varying across categories. In Experiment 1, memory for shared properties improved and memory for unique properties was preserved across a night of sleep, while memory for both feature types declined over a day awake. In Experiment 2, memory for shared properties improved across a nap, but only for the lower-frequency category, suggesting a prioritization of weakly learned information early in a sleep period. The increase was significantly correlated with the amount of REM sleep, but was also observed in participants who did not enter REM, suggesting involvement of both REM and NREM sleep. The results provide the first evidence that sleep improves memory for the shared structure of object categories while simultaneously preserving object-unique information.
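The reported correlation analysis amounts to relating each napper's improvement on shared properties to their minutes of REM sleep; a trivial sketch with fabricated placeholder numbers (not the study's data):

from scipy import stats

rem_minutes = [0, 4, 9, 12, 18, 25, 31]                            # hypothetical REM durations
shared_improvement = [-0.02, 0.01, 0.05, 0.04, 0.08, 0.10, 0.12]   # hypothetical change scores

r, p = stats.pearsonr(rem_minutes, shared_improvement)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")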
Affiliation(s)
- Anna C Schapiro
- Department of Psychiatry, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
- Elizabeth A McDevitt
- Department of Psychology, University of California-Riverside, Riverside, CA, USA
- Lang Chen
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
- Kenneth A Norman
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Sara C Mednick
- Department of Psychology, University of California-Riverside, Riverside, CA, USA
- Timothy T Rogers
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA