1
Thao PN, Nishijo M, Tai PT, Nghi TN, Hoa VT, Anh TH, Tien TV, Nishino Y, Nishijo H. Impacts of perinatal dioxin exposure on gaze behavior in 2-year-old children in the largest dioxin-contaminated area in Vietnam. Sci Rep 2023; 13:20679. [PMID: 38001134] [PMCID: PMC10673870] [DOI: 10.1038/s41598-023-47893-0]
Abstract
Fifty-five children aged 2 years from a birth cohort in the largest dioxin-contaminated area, in Bien Hoa City, Vietnam, participated in this survey to examine gaze behavior. Exposure levels were indexed by 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and toxic equivalents of polychlorinated dibenzo-p-dioxins and dibenzofurans (TEQ-PCDD/Fs) in maternal breast milk. The percentages of total fixation duration on the face (% Face), mouth (% Mouth), and eye areas (% Eyes) when viewing silent and conversation scenes were used as gaze behavior indices. When the children reached 3 years of age, autistic behavior was assessed using the Autism Spectrum Rating Scale (ASRS). A general linear model adjusted for confounding factors was used to compare gaze indices and ASRS scores between high and low dioxin exposure groups. Effects of perinatal dioxin exposure on gaze behavior were found only when viewing conversation scenes, indicated by lower % Face for boys in the high TCDD exposure group and lower % Eyes for girls in the high TEQ-PCDD/Fs group. Increased autistic traits, indicated by higher ASRS scores at 3 years of age, were found in both sexes in the high TCDD exposure group. These findings indicate that perinatal TCDD and TEQ-PCDD/Fs exposure may reduce gaze toward faces in 2-year-old children, predicting increased autistic traits at 3 years of age.
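As a rough illustration of the kind of analysis described in this abstract, the sketch below fits a general linear model comparing one gaze index between exposure groups while adjusting for confounders. This is not the authors' code; the file name, column names, and choice of confounders are all hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per child, with gaze indices, exposure
# group, and confounders (all names are illustrative only).
df = pd.read_csv("gaze_cohort.csv")

# Compare % Face during conversation scenes between high and low TCDD
# exposure groups, adjusting for example confounders.
model = smf.ols(
    "pct_face_conversation ~ C(tcdd_group) + maternal_age + parity + C(sex)",
    data=df,
).fit()
print(model.summary())
```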
Affiliation(s)
- Pham Ngoc Thao
- Department of Functional Diagnosis, 103 Military Hospital, Vietnam Military Medical University, 160 Phung Hung, Ha Dong, 12108, Ha Noi, Vietnam
- Department of Functional Diagnosis, 103 Military Hospital, Vietnam Military Medical University, 261 Phung Hung Street, Phuc La Commune, Ha Dong District, Ha Noi, Vietnam
- Muneko Nishijo
- Department of Epidemiology and Public Health, Kanazawa Medical University, Ishikawa, 920-0293, Japan
- Pham The Tai
- Institute of Biomedicine and Pharmacy, Vietnam Military Medical University, 12108, Ha Noi, Vietnam
- Tran Ngoc Nghi
- Ministry of Health, Vietnamese Government, Hanoi, Vietnam
- Vu Thi Hoa
- Department of Epidemiology and Public Health, Kanazawa Medical University, Ishikawa, 920-0293, Japan
- Tran Hai Anh
- Department of Physiology, Vietnam Military Medical University, 12108, Ha Noi, Vietnam
- Tran Viet Tien
- Department of Tropical and Infectious Diseases, 103 Military Hospital, Vietnam Military Medical University, 12108, Ha Noi, Vietnam
- Yoshikazu Nishino
- Department of Epidemiology and Public Health, Kanazawa Medical University, Ishikawa, 920-0293, Japan
- Hisao Nishijo
- Department of Sport and Health Sciences, Faculty of Human Sciences, University of East Asia, Shimonoseki-Shi, Yamaguchi, 751-8503, Japan
2
Edgar EV, Todd JT, Bahrick LE. Intersensory processing of faces and voices at 6 months predicts language outcomes at 18, 24, and 36 months of age. Infancy 2023; 28:569-596. [PMID: 36760157] [PMCID: PMC10564323] [DOI: 10.1111/infa.12533]
Abstract
Intersensory processing of social events (e.g., matching the sights and sounds of audiovisual speech) is a critical foundation for language development. Two recently developed protocols, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP), assess individual differences in intersensory processing at a sufficiently fine-grained level for predicting developmental outcomes. Recent research using the MAAP demonstrates that 12-month intersensory processing of face-voice synchrony predicts language outcomes at 18 and 24 months, holding traditional predictors (parent language input, SES) constant. Here, we build on these findings by testing younger infants using the IPEP, a more comprehensive, fine-grained index of intersensory processing. Using a longitudinal sample of 103 infants, we tested whether intersensory processing (speed, accuracy) of faces and voices at 3 and 6 months predicts language outcomes at 12, 18, 24, and 36 months, holding traditional predictors constant. Results demonstrate that intersensory processing of faces and voices at 6 months (but not 3 months) accounted for significant unique variance in language outcomes at 18, 24, and 36 months, beyond that of traditional predictors. Findings highlight the importance of intersensory processing of face-voice synchrony as a foundation for language development as early as 6 months and reveal that individual differences assessed by the IPEP predict language outcomes even 2.5 years later.
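A minimal sketch of the incremental-variance logic described above: fit a baseline model with the traditional predictors, add the 6-month intersensory measures, and test the R^2 increment with a nested-model F-test. All file and variable names are hypothetical, not taken from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ipep_longitudinal.csv")  # hypothetical file and columns

# Baseline: traditional predictors only.
base = smf.ols("language_24m ~ parent_language_input + ses", data=df).fit()
# Full: add 6-month intersensory processing (accuracy, speed).
full = smf.ols(
    "language_24m ~ parent_language_input + ses + ipep_accuracy_6m + ipep_rt_6m",
    data=df,
).fit()

# Unique variance attributable to the intersensory measures.
print(f"delta R^2 = {full.rsquared - base.rsquared:.3f}")
print(full.compare_f_test(base))  # (F, p, df_diff) for the nested comparison
```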
3
Quinones JF, Hildebrandt A, Pavan T, Thiel CM, Heep A. Preterm birth and neonatal white matter microstructure in in-vivo reconstructed fiber tracts among audiovisual integration brain regions. Dev Cogn Neurosci 2023; 60:101202. [PMID: 36731359] [PMCID: PMC9894786] [DOI: 10.1016/j.dcn.2023.101202]
Abstract
Individuals born preterm are at risk of developing a variety of sequelae. Audiovisual integration (AVI) has received little attention despite its facilitating role in the development of socio-cognitive abilities. The present study assessed the association between prematurity and in-vivo reconstructed fiber bundles among brain regions relevant for AVI. We retrieved data from 63 preterm neonates enrolled in the Developing Human Connectome Project (http://www.developingconnectome.org/) and matched them with 63 term-born neonates from the same study by means of propensity score matching. We performed probabilistic tractography and DTI and NODDI analyses on the traced fibers. We found that specific DTI and NODDI metrics are significantly associated with prematurity in neonates matched for postmenstrual age at scan. We investigated the spatial overlap and developmental order of the reconstructed tractograms between preterm and full-term neonates. Permutation-based analysis revealed significant group-level differences in Dice similarity coefficients and developmental order between preterm and full-term neonates. In contrast, no group differences in the amount of interindividual variability of DTI and NODDI metrics were observed. We conclude that microstructural detriment in the reconstructed fiber bundles, along with developmental and morphological differences, is likely to contribute to disadvantages in AVI in preterm individuals.
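The propensity score matching step mentioned above can be illustrated as follows. This is a generic 1:1 nearest-neighbor sketch, not the authors' pipeline; the data file, covariate names, and coding (e.g., numeric sex) are assumptions.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("dhcp_neonates.csv")  # hypothetical file
covariates = ["postmenstrual_age_at_scan", "birth_weight", "sex_coded"]

# Propensity score: probability of being preterm given the covariates.
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["preterm"])
df["ps"] = ps.predict_proba(df[covariates])[:, 1]

preterm = df[df["preterm"] == 1]
term = df[df["preterm"] == 0]

# For each preterm neonate, take the term neonate closest in propensity
# score (matching with replacement, for simplicity).
nn = NearestNeighbors(n_neighbors=1).fit(term[["ps"]])
_, idx = nn.kneighbors(preterm[["ps"]])
matched_term = term.iloc[idx.ravel()]
```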
Affiliation(s)
- Juan F Quinones
- Psychological Methods and Statistics, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Andrea Hildebrandt
- Psychological Methods and Statistics, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Germany
- Tommaso Pavan
- Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
- Christiane M Thiel
- Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Germany; Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Axel Heep
- Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Germany; Klinik für Neonatologie, Intensivmedizin und Kinderkardiologie, Oldenburg, Germany
4
Tan SHJ, Kalashnikova M, Burnham D. Seeing a talking face matters: Infants' segmentation of continuous auditory-visual speech. Infancy 2023; 28:277-300. [PMID: 36217702] [DOI: 10.1111/infa.12509]
Abstract
Visual speech cues from a speaker's talking face aid speech segmentation in adults, but despite the importance of speech segmentation in language acquisition, little is known about the possible influence of visual speech on infants' speech segmentation. Here, to investigate whether there is facilitation of speech segmentation by visual information, two groups of English-learning 7-month-old infants were presented with continuous speech passages, one group with auditory-only (AO) speech and the other with auditory-visual (AV) speech. Additionally, the possible relation between infants' relative attention to the speaker's mouth versus eye regions and their segmentation performance was examined. Both the AO and the AV groups of infants successfully segmented words from the continuous speech stream, but segmentation performance persisted for longer for infants in the AV group. Interestingly, while AV group infants showed no significant relation between the relative amount of time spent fixating the speaker's mouth versus eyes and word segmentation, their attention to the mouth was greater than that of AO group infants, especially early in test trials. The results are discussed in relation to the possible pathways through which visual speech cues aid speech perception.
Affiliation(s)
- Sok Hui Jessica Tan
- The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Milperra, New South Wales, Australia
- Office of Education Research, National Institute of Education, Nanyang Technological University, Singapore, Singapore
- Marina Kalashnikova
- The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Milperra, New South Wales, Australia
- The Basque Centre on Cognition, Brain and Language, San Sebastián, Basque Country, Spain
- IKERBASQUE, Basque Foundation for Science, San Sebastián, Basque Country, Spain
- Denis Burnham
- The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Milperra, New South Wales, Australia
5
Belteki Z, van den Boomen C, Junge C. Face-to-face contact during infancy: How the development of gaze to faces feeds into infants' vocabulary outcomes. Front Psychol 2022; 13:997186. [PMID: 36389540] [PMCID: PMC9650530] [DOI: 10.3389/fpsyg.2022.997186]
Abstract
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potentially strong social cue for facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces selectively, in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but that its relevance may depend on the developmental level of the infant and the type of task with which they are presented. Gaze to faces is relevant to vocabulary: gaze to the eyes can signal the communicative nature of the situation or identify the labeled object, while gaze to the mouth can improve word processing, all of which are key cues for highlighting word-to-world pairings. We also identify gaps in the literature regarding how infants' gaze to faces (versus objects) or to different types of faces relates to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
6
Processing third-party social interactions in the human infant brain. Infant Behav Dev 2022; 68:101727. [PMID: 35667276] [DOI: 10.1016/j.infbeh.2022.101727]
Abstract
Our understanding of developing social brain functions during infancy relies on research that has focused on how infants engage in first-person social interactions or view individual agents and their actions. Behavioral research suggests that observing and learning from third-party social interactions plays a foundational role in early social and moral development. However, the brain systems involved in observing third-party social interactions during infancy are unknown. The current study tested the hypothesis that brain systems in the prefrontal and temporal cortex, previously identified in adults and children, begin to specialize in third-party social interaction processing during infancy. Infants (N = 62), ranging from 6 to 13 months in age, had their brain responses measured using functional near-infrared spectroscopy (fNIRS) while viewing third-party social interactions and two control conditions (individual actions and inverted social interactions). The results show that infants preferentially engage brain regions localized within the dorsomedial prefrontal cortex when viewing third-party social interactions. These findings suggest that brain systems processing third-party social interactions begin to develop early in human ontogeny and may thus play a foundational role in supporting the interpretation of and learning from social interactions.
7
Fiber tracing and microstructural characterization among audiovisual integration brain regions in neonates compared with young adults. Neuroimage 2022; 254:119141. [PMID: 35342006] [DOI: 10.1016/j.neuroimage.2022.119141]
Abstract
Audiovisual integration (AVI) has been related to cognitive-processing and behavioral advantages, as well as to various socio-cognitive disorders. While some studies have identified brain regions instantiating this ability shortly after birth, little is known about the structural pathways connecting them. The goal of the present study was to reconstruct fiber tracts linking AVI regions in the newborn in-vivo brain and assess their adult-likeness by comparing them with analogous fiber tracts of young adults. We performed probabilistic tractography and compared connective probabilities between a sample of term-born neonates (N = 311; the Developing Human Connectome Project, http://www.developingconnectome.org) and young adults (N = 311; the Human Connectome Project, https://www.humanconnectome.org/) by means of a classification algorithm. Furthermore, we computed Dice coefficients to assess between-group spatial similarity of the reconstructed fibers and used diffusion metrics to characterize neonates' AVI brain network in terms of microstructural properties, interhemispheric differences, and associations with perinatal covariates and biological sex. Overall, our results indicate that the AVI fiber bundles were successfully reconstructed in a vast majority of neonates, similarly to adults. Connective probability distributional similarities and spatial overlaps of AVI fibers between the two groups differed across the reconstructed fibers. There was a rank-order correspondence of the fibers' connective strengths across the groups. Additionally, the study revealed patterns of diffusion metrics in line with early white matter developmental trajectories and a developmental advantage for females. Altogether, these findings deliver evidence of meaningful structural connections among AVI regions in the newborn in-vivo brain.
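The Dice coefficient used here to quantify spatial similarity has a simple closed form, Dice = 2|A ∩ B| / (|A| + |B|). Below is a generic sketch for binary tract masks in a shared reference space, with simulated placeholder volumes rather than real tractography output.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary volumes."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Placeholder binary tract masks in a shared reference space.
rng = np.random.default_rng(0)
neonate_tract = rng.random((64, 64, 64)) > 0.7
adult_tract = rng.random((64, 64, 64)) > 0.7
print(dice_coefficient(neonate_tract, adult_tract))
```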
8
Buchanan-Worster E, Hulme C, Dennan R, MacSweeney M. Speechreading in hearing children can be improved by training. Dev Sci 2021; 24:e13124. [PMID: 34060185] [PMCID: PMC7612880] [DOI: 10.1111/desc.13124]
Abstract
Visual information conveyed by a speaking face aids speech perception. In addition, children’s ability to comprehend visual-only speech (speechreading ability) is related to phonological awareness and reading skills in both deaf and hearing children. We tested whether training speechreading would improve speechreading, phoneme blending, and reading ability in hearing children. Ninety-two hearing 4- to 5-year-old children were randomised into two groups: business-as-usual controls, and an intervention group, who completed three weeks of computerised speechreading training. The intervention group showed greater improvements in speechreading than the control group at post-test both immediately after training and 3 months later. This was the case for both trained and untrained words. There were no group effects on the phonological awareness or single-word reading tasks, although those with the lowest phoneme blending scores did show greater improvements in blending as a result of training. The improvement in speechreading in hearing children following brief training is encouraging. The results are also important in suggesting a hypothesis for future investigation: that a focus on visual speech information may contribute to phonological skills, not only in deaf children but also in hearing children who are at risk of reading difficulties. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=bBdpliGkbkY.
Affiliation(s)
- Elizabeth Buchanan-Worster
- Institute of Cognitive Neuroscience, University College London, London, UK
- Deafness, Cognition and Language Research Centre, University College London, London, UK
- Charles Hulme
- Department of Education, University of Oxford, Oxford, Oxfordshire, UK
- Rachel Dennan
- Institute of Cognitive Neuroscience, University College London, London, UK
- Mairéad MacSweeney
- Institute of Cognitive Neuroscience, University College London, London, UK
- Deafness, Cognition and Language Research Centre, University College London, London, UK
9
Gut microbiota composition is associated with newborn functional brain connectivity and behavioral temperament. Brain Behav Immun 2021; 91:472-486. [PMID: 33157257] [DOI: 10.1016/j.bbi.2020.11.003]
Abstract
The gut microbiome appears to play an important role in human health and disease. However, little is known about how variability in the gut microbiome contributes to individual differences during early and sensitive stages of brain and behavioral development. The current study examined the link between gut microbiome, brain, and behavior in newborn infants (N = 63; M [age] = 25 days). Infant gut microbiome diversity was measured from stool samples using metagenomic sequencing, infant functional brain network connectivity was assessed using a resting-state functional near-infrared spectroscopy (rs-fNIRS) procedure, and infant behavioral temperament was assessed using parental report. Our results show that gut microbiota composition is linked to individual variability in brain network connectivity, which in turn mediated individual differences in behavioral temperament, specifically negative emotionality, among infants. Furthermore, virulence factors, possibly indexing pathogenic activity, were associated with differences in brain network connectivity linked to negative emotionality. These findings provide novel insights into the early developmental origins of the gut microbiome-brain axis and its association with variability in important behavioral traits. This suggests that the gut microbiome is an important biological factor to consider when studying human development and health.
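A bare-bones sketch of the mediation logic reported above (predictor -> mediator -> outcome, via the product of coefficients). This is not the authors' analysis; column names are hypothetical, and covariates and bootstrap inference are omitted for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("infant_gut_brain.csv")  # hypothetical file and columns

# Path a: predictor -> mediator.
path_a = smf.ols("brain_connectivity ~ microbiome_diversity", data=df).fit()
# Paths b and c': outcome regressed on mediator and predictor.
outcome = smf.ols(
    "negative_emotionality ~ brain_connectivity + microbiome_diversity",
    data=df,
).fit()

a = path_a.params["microbiome_diversity"]
b = outcome.params["brain_connectivity"]
print(f"indirect effect a*b = {a * b:.3f}")
# In practice the indirect effect is bootstrapped for inference.
```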
10
Buchanan-Worster E, MacSweeney M, Pimperton H, Kyle F, Harris M, Beedie I, Ralph-Lewis A, Hulme C. Speechreading ability is related to phonological awareness and single-word reading in both deaf and hearing children. J Speech Lang Hear Res 2020; 63:3775-3785. [PMID: 33108258] [PMCID: PMC8530507] [DOI: 10.1044/2020_jslhr-20-00159]
Abstract
Purpose: Speechreading (lipreading) is a correlate of reading ability in both deaf and hearing children. We investigated whether the relationship between speechreading and single-word reading is mediated by phonological awareness in deaf and hearing children. Method: In two separate studies, 66 deaf children and 138 hearing children, aged 5-8 years, were assessed on measures of speechreading, phonological awareness, and single-word reading. We assessed the concurrent relationships between latent variables measuring speechreading, phonological awareness, and single-word reading. Results: In both deaf and hearing children, there was a strong relationship between speechreading and single-word reading, which was fully mediated by phonological awareness. Conclusions: These results are consistent with ideas from previous studies that visual speech information contributes to the development of phonological representations in both deaf and hearing children, which, in turn, support learning to read. Future longitudinal and training studies are required to establish whether these relationships reflect causal effects.
Affiliation(s)
- Elizabeth Buchanan-Worster
- Institute of Cognitive Neuroscience, University College London, United Kingdom
- Deafness, Cognition and Language Research Centre, University College London, United Kingdom
- Mairéad MacSweeney
- Institute of Cognitive Neuroscience, University College London, United Kingdom
- Deafness, Cognition and Language Research Centre, University College London, United Kingdom
- Hannah Pimperton
- Institute of Cognitive Neuroscience, University College London, United Kingdom
- Fiona Kyle
- Deafness, Cognition and Language Research Centre, University College London, United Kingdom
- Margaret Harris
- Faculty of Health and Life Sciences, Oxford Brookes University, United Kingdom
- Indie Beedie
- Deafness, Cognition and Language Research Centre, University College London, United Kingdom
- Amelia Ralph-Lewis
- Deafness, Cognition and Language Research Centre, University College London, United Kingdom
- Charles Hulme
- Department of Education, University of Oxford, United Kingdom
11
Swallow KM, Wang Q. Culture influences how people divide continuous sensory experience into events. Cognition 2020; 205:104450. [PMID: 32927384] [DOI: 10.1016/j.cognition.2020.104450]
Abstract
Everyday experience is divided into meaningful events as a part of human perception. Current accounts of this process, known as event segmentation, focus on how characteristics of the experience (e.g., situation changes) influence segmentation. However, characteristics of the viewers themselves have been largely neglected. We test whether one such viewer characteristic, their cultural background, impacts online event segmentation. Culture could impact event segmentation (1) by emphasizing different aspects of experiences as being important for comprehension, memory, and communication, and (2) by providing different exemplars of how everyday activities are performed, which objects are likely to be used, and how scenes are laid out. Indian and US viewers (N = 152) identified events in everyday activities (e.g., making coffee) recorded in Indian and US settings. Consistent with their cultural preference for analytical processing, US viewers segmented the activities into more events than did Indian viewers. Furthermore, event boundaries identified by US viewers were more strongly associated with visual changes, whereas boundaries identified by Indian viewers were more strongly associated with goal changes. There was no evidence that familiarity with an activity impacted segmentation. Thus, culture impacts event perception by altering the types of information people prioritize when dividing experience into meaningful events.
Affiliation(s)
- Khena M Swallow
- Department of Psychology, Cornell University, Ithaca, NY, USA
- Qi Wang
- Department of Human Development, Cornell University, Ithaca, NY, USA
12
Tsuji S, Jincho N, Mazuka R, Cristia A. Communicative cues in the absence of a human interaction partner enhance 12-month-old infants' word learning. J Exp Child Psychol 2020; 191:104740. [DOI: 10.1016/j.jecp.2019.104740]
13
Altarelli I, Dehaene-Lambertz G, Bavelier D. Individual differences in the acquisition of non-linguistic audio-visual associations in 5-year-olds. Dev Sci 2019; 23:e12913. [PMID: 31608547] [DOI: 10.1111/desc.12913]
Abstract
Audio-visual associative learning - at least when linguistic stimuli are employed - is known to rely on core linguistic skills such as phonological awareness. Here we ask whether this would also be the case in a task that does not manipulate linguistic information. Another question of interest is whether executive skills, often found to support learning, may play a larger role in a non-linguistic audio-visual associative task compared to a linguistic one. We present a new task that measures learning when having to associate non-linguistic auditory signals with novel visual shapes. Importantly, our novel task shares with linguistic processes such as reading acquisition the need to associate sounds with arbitrary shapes. Yet, rather than phonemes or syllables, it uses novel environmental sounds - therefore limiting direct reliance on linguistic abilities. Five-year-old French-speaking children (N = 76, 39 girls) were assessed individually in our novel audio-visual associative task, as well as in a number of other cognitive tasks evaluating linguistic abilities and executive functions. We found phonological awareness and language comprehension to be related to scores in the audio-visual associative task, while no correlation with executive functions was observed. These results underscore a key relation between foundational language competencies and audio-visual associative learning, even in the absence of linguistic input in the associative task.
Affiliation(s)
- Irene Altarelli
- Cognitive Neuroimaging Unit U992, INSERM, CEA DRF/Institut Joliot, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France
- Faculty of Psychology and Education Sciences, University of Geneva, Geneva, Switzerland
- CNRS UMR 8240, Laboratory for the Psychology of Child Development and Education (LaPsyDE), University Paris Descartes, Université de Paris, Paris, France
- Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit U992, INSERM, CEA DRF/Institut Joliot, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France
- Daphne Bavelier
- Faculty of Psychology and Education Sciences, University of Geneva, Geneva, Switzerland
14
A step forward: Bayesian hierarchical modelling as a tool in assessment of individual discrimination performance. Infant Behav Dev 2019; 57:101345. [PMID: 31563856] [DOI: 10.1016/j.infbeh.2019.101345]
Abstract
Individual assessment of infants' speech discrimination is of great value for studies of language development that seek to relate early and later skills, as well as for clinical work. The present study explored the applicability of the hybrid visual fixation paradigm (Houston et al., 2007) and the associated statistical analysis approach to assess individual discrimination of a native vowel contrast, /aː/ - /eː/, in Dutch 6- to 10-month-old infants. Houston et al. found that 80% (8/10) of the 9-month-old infants successfully discriminated the contrast between the pseudowords boodup and seepug. Using the same approach, we found that 12% (14/117) of the infants in our sample discriminated the highly salient /aː/ - /eː/ contrast. This percentage was reduced to 3% (3/117) when we corrected for multiple testing. Bayesian hierarchical modeling indicated that 50% of the infants showed evidence of discrimination. Advantages of Bayesian hierarchical modeling are that 1) there is no need for a correction for multiple testing and 2) better estimates are obtained at the individual level. Thus, individual speech discrimination can be more accurately assessed using state-of-the-art statistical approaches.
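The Bayesian hierarchical approach advocated here can be sketched as a partial-pooling model: each infant's novelty effect is drawn from a group-level distribution, so individual estimates are shrunk toward the group mean and no multiple-testing correction is needed. The sketch below uses PyMC with simulated data; it is an illustration of the general idea, not the authors' model specification.

```python
import numpy as np
import pymc as pm

# Simulated placeholder data: 20 infants x 12 trials, log looking times,
# with novel trials slightly longer on average.
n_infants, n_trials = 20, 12
infant_idx = np.repeat(np.arange(n_infants), n_trials)
is_novel = np.tile(np.array([0, 1] * (n_trials // 2)), n_infants)
log_lt = np.random.default_rng(1).normal(2.0 + 0.2 * is_novel, 0.5)

with pm.Model():
    mu_effect = pm.Normal("mu_effect", 0.0, 1.0)       # group novelty effect
    sigma_effect = pm.HalfNormal("sigma_effect", 1.0)  # between-infant spread
    effect = pm.Normal("effect", mu_effect, sigma_effect, shape=n_infants)
    baseline = pm.Normal("baseline", 2.0, 1.0, shape=n_infants)
    sigma = pm.HalfNormal("sigma", 1.0)
    mu = baseline[infant_idx] + effect[infant_idx] * is_novel
    pm.Normal("obs", mu, sigma, observed=log_lt)
    trace = pm.sample(1000, tune=1000)

# Per-infant posteriors for "effect" indicate which infants discriminate,
# with shrinkage toward the group mean and no multiple-testing correction.
```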
15
Imafuku M, Kanakogi Y, Butler D, Myowa M. Demystifying infant vocal imitation: The roles of mouth looking and speaker's gaze. Dev Sci 2019; 22:e12825. [PMID: 30980494] [DOI: 10.1111/desc.12825]
Abstract
Vocal imitation plays a fundamental role in human language acquisition from infancy. Little is known, however, about how infants imitate others' sounds. We focused on three factors: (a) whether infants receive information from upright faces, (b) the infant's observation of the speaker's mouth and (c) the speaker directing their gaze towards the infant. We recorded the eye movements of 6-month-olds who watched videos of a speaker producing vowel sounds. We found that infants' tendency to vocally imitate such videos increased as a function of (a) seeing upright rather than inverted faces, (b) their increased looking towards the speaker's mouth and (c) the speaker directing their gaze towards, rather than away from, infants. These latter findings are consistent with theories of motor resonance and natural pedagogy, respectively. New light has been shed on the cues and underlying mechanisms linking infant speech perception and production.
Affiliation(s)
- Masahiro Imafuku
- Graduate School of Education, Kyoto University, Kyoto, Japan
- Faculty of Education, Musashino University, Tokyo, Japan
- David Butler
- Graduate School of Education, Kyoto University, Kyoto, Japan
- The Institute for Social Neuroscience Psychology, Heidelberg, Victoria, Australia
- Masako Myowa
- Graduate School of Education, Kyoto University, Kyoto, Japan
16
Kelsey CM, Krol KM, Kret ME, Grossmann T. Infants' brain responses to pupillary changes in others are affected by race. Sci Rep 2019; 9:4317. [PMID: 30867473] [PMCID: PMC6416351] [DOI: 10.1038/s41598-019-40661-z]
Abstract
Sensitive responding to eye cues plays a key role during human social interactions. Observed changes in pupillary size provide a range of socially-relevant information including cues regarding a person's emotional and arousal states. Recently, infants have been found to mimic observed pupillary changes in others, instantiating a foundational mechanism for eye-based social communication. Among adults, perception of pupillary changes is affected by race. Here, we examined whether and how race impacts the neural processing of others' pupillary changes in early ontogeny. We measured 9-month-old infants' brain responses to dilating and constricting pupils in the context of viewing own-race and other-race eyes using functional near-infrared spectroscopy (fNIRS). Our results show that only when responding to own-race eyes, infants' brains distinguished between changes in pupillary size. Specifically, infants showed enhanced responses in the right superior temporal cortex when observing own-race pupil dilation. Moreover, when processing other-race pupillary changes, infants recruited the dorsolateral prefrontal cortex, a brain region linked to cognitive control functions. These findings suggest that, early in development, the fundamental process of responding to pupillary changes is impacted by race and interracial interactions may afford greater cognitive control or effort. This critically informs our understanding of the early origins of responding to pupillary signals in others and further highlights the impact of race on the processing of social signals.
Affiliation(s)
- Caroline M Kelsey
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Kathleen M Krol
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Mariska E Kret
- Institute of Psychology, Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
- Leiden University, Leiden Institute for Brain and Cognition (LIBC), Leiden, The Netherlands
- Tobias Grossmann
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
17
Imafuku M, Kawai M, Niwa F, Shinya Y, Myowa M. Audiovisual speech perception and language acquisition in preterm infants: A longitudinal study. Early Hum Dev 2019; 128:93-100. [PMID: 30541680] [DOI: 10.1016/j.earlhumdev.2018.11.001]
Abstract
BACKGROUND: Preterm infants have a higher risk of language delay throughout childhood. The ability to integrate audiovisual speech information is associated with language acquisition in term infants; however, the relation is still unclear in preterm infants. AIM AND METHODS: This study longitudinally investigated visual preference for audiovisually congruent and incongruent speech during a preferential looking task, using eye-tracking, in preterm and term infants at 6, 12, and 18 months of corrected age. The infants' receptive and expressive vocabulary at 12 and 18 months was obtained by parent report, using the Japanese MacArthur Communicative Development Inventory. RESULTS: We found that preterm infants did not clearly show a visual preference for the congruent audiovisual display at any age, whereas term infants looked at the congruent audiovisual display longer than the incongruent audiovisual display at 6 and 18 months. Preterm infants' receptive and expressive vocabulary scores were lower than those of term infants at 12 and 18 months. Furthermore, the proportion of looking time toward the congruent audiovisual display at 6 months was positively correlated with receptive vocabulary scores at 12 and 18 months for both groups. CONCLUSIONS: These findings suggest that better audiovisual speech perception is one factor underlying better language acquisition in preterm as well as term infants. Early identification of behaviors associated with later language in preterm infants may contribute to planning interventions for developmental problems.
Affiliation(s)
- Masahiro Imafuku
- Graduate School of Education, Kyoto University, Kyoto, Japan
- Faculty of Education, Musashino University, Tokyo, Japan
- Masahiko Kawai
- Department of Pediatrics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Fusako Niwa
- Department of Pediatrics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Yuta Shinya
- Graduate School of Education, Kyoto University, Kyoto, Japan
- Graduate School of Education, The University of Tokyo, Tokyo, Japan
- Masako Myowa
- Graduate School of Education, Kyoto University, Kyoto, Japan
18
Bahrick LE, Soska KC, Todd JT. Assessing individual differences in the speed and accuracy of intersensory processing in young children: The intersensory processing efficiency protocol. Dev Psychol 2018; 54:2226-2239. [PMID: 30346188] [PMCID: PMC6261800] [DOI: 10.1037/dev0000575]
Abstract
Detecting intersensory redundancy guides cognitive, social, and language development. Yet, researchers lack the fine-grained, individual difference measures needed for studying how early intersensory skills lead to later outcomes. The intersensory processing efficiency protocol (IPEP) addresses this need. Across a number of brief trials, participants must find a sound-synchronized visual target event (social, nonsocial) amid five visual distractor events, simulating the "noisiness" of natural environments. Sixty-four 3- to 5-year-old children were tested using remote eye-tracking. Children showed intersensory processing by attending to the sound-synchronous event more frequently and longer than in a silent visual control, and more frequently than expected by chance. The IPEP provides a fine-grained, nonverbal method for characterizing individual differences in intersensory processing appropriate for infants and children.
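The chance-level comparison described above follows directly from the design: with one sound-synchronized target among five distractors, chance-level looking to the target is 1/6. A generic sketch with simulated per-child proportions (not the study's data or analysis code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical per-child proportions of looking time on the synchronous
# target event (one target among five distractors).
prop_target = rng.beta(2, 6, size=64)

chance = 1 / 6  # six concurrent events, one of which is the target
t, p = stats.ttest_1samp(prop_target, popmean=chance)
print(f"t = {t:.2f}, p = {p:.4f} vs. chance = {chance:.3f}")
```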
Affiliation(s)
- Kasey C Soska
- Department of Psychology, Florida International University
19
Bahrick LE, Todd JT, Soska KC. The Multisensory Attention Assessment Protocol (MAAP): Characterizing individual differences in multisensory attention skills in infants and children and relations with language and cognition. Dev Psychol 2018; 54:2207-2225. [PMID: 30359058] [PMCID: PMC6263835] [DOI: 10.1037/dev0000594]
Abstract
Multisensory attention skills provide a crucial foundation for early cognitive, social, and language development, yet there are no fine-grained, individual difference measures of these skills appropriate for preverbal children. The Multisensory Attention Assessment Protocol (MAAP) fills this need. In a single video-based protocol requiring no language skills, the MAAP assesses individual differences in three fundamental building blocks of attention to multisensory events (the duration of attention maintenance, the accuracy of intersensory audiovisual matching, and the speed of shifting) for both social and nonsocial events, in the context of high and low competing visual stimulation. In Experiment 1, 2- to 5-year-old children (N = 36) received the MAAP and assessments of language and cognitive functioning. In Experiment 2 the procedure was streamlined and presented to 12-month-olds (N = 48). Both infants and children showed high levels of attention maintenance to social and nonsocial events, impaired attention maintenance and speed of shifting when competing stimulation was high, and significant intersensory matching. Children showed longer maintenance, faster shifting, and less impairment from competing stimulation than infants. In 2- to 5-year-old children, duration and accuracy were intercorrelated, showed increases with age, and predicted cognitive and language functioning. The MAAP opens the door to assessing developmental pathways between early attention patterns to audiovisual events and language, cognitive, and social development.
20
Young children protest against the incorrect use of novel words: Toward a normative pragmatic account on language acquisition. J Exp Child Psychol 2018; 180:113-122. [PMID: 30384967] [DOI: 10.1016/j.jecp.2018.09.012]
Abstract
The current study examined whether young children conceive of language use as a normative practice. To this end, 3- and 5-year-old children learned how others used a novel word in either a direct-instruction or an overhearing context. Thereafter, they were presented with a protagonist who used the novel word to refer to either the same or another object. Children of both age groups selectively protested when the protagonist used the word to refer to another object, and older children selectively affirmed when the protagonist used the word to refer to the same object. Overall, the study is in line with theoretical notions that early language acquisition could be conceived of as the acquisition of a normative social practice.
21
Grossmann T, Missana M, Krol KM. The neurodevelopmental precursors of altruistic behavior in infancy. PLoS Biol 2018; 16:e2005281. [PMID: 30252842] [PMCID: PMC6155440] [DOI: 10.1371/journal.pbio.2005281]
Abstract
Altruistic behavior is considered a key feature of the human cooperative makeup, with deep ontogenetic roots. The tendency to engage in altruistic behavior varies between individuals and has been linked to differences in responding to fearful faces. The current study tests the hypothesis that this link exists from early in human ontogeny. Using eye tracking, we examined whether attentional responses to fear in others at 7 months of age predict altruistic behavior at 14 months of age. Our analysis revealed that altruistic behavior in toddlerhood was predicted by infants' attention to fearful faces but not happy or angry faces. Specifically, infants who showed heightened initial attention (i.e., a prolonged first look) followed by greater disengagement (i.e., reduced attentional bias over 15 seconds) from fearful faces at 7 months displayed greater prosocial behavior at 14 months of age. Our data further show that infants' attentional bias to fearful faces and their altruistic behavior were predicted by brain responses in the dorsolateral prefrontal cortex (dlPFC), measured through functional near-infrared spectroscopy (fNIRS). This suggests that, from early in ontogeny, variability in altruistic helping behavior is linked to our responsiveness to seeing others in distress and to brain processes implicated in attentional control. These findings critically advance our understanding of the emergence of altruism in humans by identifying responsiveness to fear in others as an early precursor contributing to variability in prosocial behavior.
Affiliation(s)
- Tobias Grossmann
- Department of Psychology, University of Virginia, Charlottesville, Virginia, United States of America
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Manuela Missana
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Institute of Educational Sciences, University of Leipzig, Leipzig, Germany
- Kathleen M. Krol
- Department of Psychology, University of Virginia, Charlottesville, Virginia, United States of America
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
22
Crossmodal association of auditory and visual material properties in infants. Sci Rep 2018; 8:9301. [PMID: 29915205] [PMCID: PMC6006328] [DOI: 10.1038/s41598-018-27153-2]
Abstract
The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrate for the first time a mapping between auditory and visual material properties ("Metal" and "Wood") in the right temporal region in preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquire the audio-visual mapping for the "Metal" material later than for the "Wood" material, consistent with infants forming the visual property of the "Metal" material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.
23
Bergmann C, Cristia A. Environmental influences on infants' native vowel discrimination: The case of talker number in daily life. Infancy 2018. [DOI: 10.1111/infa.12232]
Affiliation(s)
- Christina Bergmann
- LSCP, Département d'Etudes Cognitives, ENS, EHESS, CNRS, PSL Research University
- Language Development Department, Max Planck Institute for Psycholinguistics
- Alejandrina Cristia
- LSCP, Département d'Etudes Cognitives, ENS, EHESS, CNRS, PSL Research University
24
Havy M, Zesiger P. Learning spoken words via the ears and eyes: Evidence from 30-month-old children. Front Psychol 2017; 8:2122. [PMID: 29276493] [PMCID: PMC5727082] [DOI: 10.3389/fpsyg.2017.02122]
Abstract
From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.
Affiliation(s)
- Mélanie Havy
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
25
Hakuno Y, Omori T, Yamamoto JI, Minagawa Y. Social interaction facilitates word learning in preverbal infants: Word–object mapping and word segmentation. Infant Behav Dev 2017; 48:65-77. [DOI: 10.1016/j.infbeh.2017.05.012]
26
Ferguson B, Lew-Williams C. Communicative signals support abstract rule learning by 7-month-old infants. Sci Rep 2016; 6:25434. [PMID: 27150270] [PMCID: PMC4858667] [DOI: 10.1038/srep25434]
Abstract
The mechanisms underlying the discovery of abstract rules like those found in natural language may be evolutionarily tuned to speech, according to previous research. When infants hear speech sounds, they can learn rules that govern their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do so. Here we show that infants' rule learning is not tied to speech per se, but is instead enhanced more broadly by communicative signals. In two experiments, infants succeeded in learning and generalizing rules from tones that were introduced as if they could be used to communicate. In two control experiments, infants failed to learn the very same rules when familiarized to tones outside of a communicative exchange. These results reveal that infants' attention to social agents and communication catalyzes a fundamental achievement of human learning.
Affiliation(s)
- Brock Ferguson
- Department of Psychology, Northwestern University, 2029 Sheridan Rd., Evanston, IL 60208, USA
- Casey Lew-Williams
- Department of Psychology, Princeton University, Princeton, NJ 08540, USA
27
Ter Schure SMM, Junge CMM, Boersma PPG. Semantics guide infants' vowel learning: Computational and experimental evidence. Infant Behav Dev 2016; 43:44-57. [PMID: 27130954] [DOI: 10.1016/j.infbeh.2016.01.002]
Abstract
In their first year, infants' perceptual abilities zoom in on only those speech sound contrasts that are relevant for their language. Infants' lexicons do not yet contain sufficient minimal pairs to explain this phonetic categorization process. Therefore, researchers suggested a bottom-up learning mechanism: infants create categories aligned with the frequency distributions of sounds in their input. Recent evidence shows that this bottom-up mechanism may be complemented by the semantic context in which speech sounds occur, such as simultaneously present objects. To test this hypothesis, we investigated whether discrimination of a non-native vowel contrast improves when sounds from the contrast were paired consistently or randomly with two distinct visually presented objects, while the distribution of speech tokens suggested a single broad category. This was assessed in two ways: computationally, namely in a neural network simulation, and experimentally, namely in a group of 8-month-old infants. The neural network, trained with a large set of sound-meaning pairs, revealed that two categories emerge only if sounds are consistently paired with objects. A group of 49 real 8-month-old infants did not immediately show sensitivity to the pairing condition; a later test at 18 months with some of the same infants, however, showed that this sensitivity at 8 months interacted with their vocabulary size at 18 months. This interaction can be explained by the idea that infants with larger future vocabularies are more positively influenced by consistent training (and/or more negatively influenced by inconsistent training) than infants with smaller future vocabularies. This suggests that consistent pairing with distinct visual objects can help infants to discriminate speech sounds even when the auditory information does not signal a distinction. Together our results give computational as well as experimental support for the idea that semantic context plays a role in disambiguating phonetic auditory input.
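The computational result summarized above (one category emerging from the sound distribution alone, two when sounds are consistently paired with distinct objects) can be illustrated without the authors' neural network, for example with a Gaussian-mixture BIC comparison. The sketch below uses made-up data and is only an illustration of the idea:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two vowel variants whose acoustic values overlap into one broad mode.
f1 = np.concatenate([rng.normal(-0.5, 1.0, 200), rng.normal(0.5, 1.0, 200)])
obj = np.repeat([0.0, 1.0], 200)  # consistent pairing with two objects

audio_only = f1.reshape(-1, 1)
audio_visual = np.column_stack([f1, obj])

for name, X in [("audio only", audio_only), ("audio + object", audio_visual)]:
    bic = {k: GaussianMixture(k, random_state=0).fit(X).bic(X) for k in (1, 2)}
    print(f"{name}: BIC-preferred number of categories = {min(bic, key=bic.get)}")
```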
Affiliation(s)
- S M M Ter Schure
- Amsterdam Center for Language and Communication, University of Amsterdam, the Netherlands
- C M M Junge
- Department of Social and Behavioral Sciences, Utrecht University, the Netherlands
- P P G Boersma
- Amsterdam Center for Language and Communication, University of Amsterdam, the Netherlands
28
The role of left inferior frontal cortex during audiovisual speech perception in infants. Neuroimage 2016; 133:14-20. [PMID: 26946090] [DOI: 10.1016/j.neuroimage.2016.02.061]
Abstract
In the first year of life, infants' speech perception attunes to their native language. While the behavioral changes associated with native language attunement are fairly well mapped, the underlying mechanisms and neural processes are still only poorly understood. Using fNIRS and eye tracking, the current study investigated 6-month-old infants' processing of audiovisual speech that contained matching or mismatching auditory and visual speech cues. Our results revealed that infants' speech-sensitive brain responses in inferior frontal brain regions were lateralized to the left hemisphere. Critically, our results further revealed that speech-sensitive left inferior frontal regions showed enhanced responses to matching when compared to mismatching audiovisual speech, and that infants with a preference to look at the speaker's mouth showed an enhanced left inferior frontal response to speech compared to infants with a preference to look at the speaker's eyes. These results suggest that left inferior frontal regions play a crucial role in associating information from different modalities during native language attunement, fostering the formation of multimodal phonological categories.
29
Streri A, Coulon M, Marie J, Yeung HH. Developmental change in infants' detection of visual faces that match auditory vowels. Infancy 2015. [DOI: 10.1111/infa.12104]
Affiliation(s)
- Arlette Streri
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Marion Coulon
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Julien Marie
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- H. Henny Yeung
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Centre National de la Recherche Scientifique (CNRS)