1
Dopierała AAW, Pérez DL, Mercure E, Pluta A, Malinowska-Korczak A, Evans S, Wolak T, Tomalski P. The Development of Cortical Responses to the Integration of Audiovisual Speech in Infancy. Brain Topogr 2023. [PMID: 37171657] [PMCID: PMC10176292] [DOI: 10.1007/s10548-023-00959-8]
Abstract
In adults, the integration of audiovisual speech elicits specific higher (super-additive) or lower (sub-additive) cortical responses when compared to the responses to unisensory stimuli. Although there is evidence that the fronto-temporal network is active during perception of audiovisual speech in infancy, the development of fronto-temporal responses to audiovisual integration remains unknown. In the current study, 5-month-olds and 10-month-olds watched bimodal (audiovisual) and alternating unimodal (auditory + visual) syllables. In this context, we use alternating unimodal to denote alternating auditory and visual syllables that are perceived as separate syllables by adults. Using fNIRS, we measured responses over large cortical areas, including the inferior frontal and superior temporal regions. We identified channels whose responses differed between the bimodal and alternating unimodal conditions and used multivariate pattern analysis (MVPA) to decode patterns of cortical responses to bimodal (audiovisual) and alternating unimodal (auditory + visual) speech. Results showed that in both age groups integration elicited cortical responses consistent with both super- and sub-additivity in the fronto-temporal cortex. The univariate analyses revealed that between 5 and 10 months the spatial distribution of these responses becomes increasingly focal. MVPA correctly classified responses at 5 months, with key input from channels located over the inferior frontal and superior temporal regions of the right hemisphere. However, MVPA classification was not successful at 10 months, suggesting a potential cortical re-organisation of audiovisual speech perception at this age. These results show the complex and non-gradual development of cortical responses to the integration of congruent audiovisual speech in infancy.
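A common operational criterion for super- vs. sub-additivity (a generic sketch, not necessarily the exact statistical pipeline used in this study) compares the bimodal response against the sum of the two unisensory responses. The invented per-channel amplitudes below illustrate the idea:

```python
import numpy as np

def additivity_index(av, a, v):
    """Per-channel difference between the bimodal (AV) response and
    the sum of the unisensory responses: positive values indicate a
    super-additive response, negative values a sub-additive one."""
    av, a, v = (np.asarray(x, dtype=float) for x in (av, a, v))
    return av - (a + v)

# Invented per-channel response amplitudes (arbitrary units).
av = [1.8, 0.9, 1.2]
a = [0.7, 0.6, 0.5]
v = [0.6, 0.5, 0.4]

idx = additivity_index(av, a, v)
labels = ["super-additive" if d > 0 else "sub-additive" for d in idx]
```

In practice the comparison is made statistically (e.g., contrasting AV against A + V within a general linear model), not by a raw sign check as here.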
Affiliation(s)
- Aleksandra A W Dopierała
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Agnieszka Pluta
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
- University of Westminster, London, UK
- Samuel Evans
- King's College London, London, UK
- University of Westminster, London, UK
- Tomasz Wolak
- Institute of Physiology and Pathology of Hearing, Bioimaging Research Center, World Hearing Centre, Warsaw, Poland
- Przemysław Tomalski
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
2
Agrawal T, Schachner A. Hearing water temperature: characterizing the development of nuanced perception of sound sources. Dev Sci 2022; 26:e13321. [PMID: 36068928] [DOI: 10.1111/desc.13321]
Abstract
Without conscious thought, listeners link events in the world to sounds they hear. We study one surprising example: adults can judge the temperature of water simply from hearing it being poured. We test the development of the ability to hear water temperature, with the goal of informing developmental theories regarding the origins and cognitive bases of nuanced sound source judgments. We first confirmed that adults accurately distinguished the sounds of hot and cold water (pre-registered Exps. 1, 2; total N = 384), even though many were unaware or uncertain of this ability. By contrast, children showed protracted development of this skill over the course of middle childhood (Exps. 2, 3; total N = 178). In spite of accurately identifying other sounds and hot/cold images, older children (7-11 years) but not younger children (3-6 years) reliably distinguished the sounds of hot and cold water. Accuracy increased with age; 11-year-olds' performance was similar to adults'. Adults also showed individual differences in accuracy that were predicted by their amount of prior relevant experience (Exp. 1). Experience may similarly play a role in children's performance; differences in auditory sensitivity and multimodal integration may also contribute to young children's failures. The ability to hear water temperature develops slowly over childhood, such that nuanced auditory information that is easily and quickly accessible to adults is not available to guide young children's behavior. In summary: adults can make nuanced judgments from sound, including accurately judging the temperature of water from the sound of it being poured; children showed protracted development of this skill, such that 7- to 11-year-olds reliably succeeded while 3- to 6-year-olds performed at chance; developmental changes may be due to experience (adults with greater relevant experience showed higher accuracy) and to the development of multimodal integration and auditory sensitivity; and young children may not detect subtle auditory information that adults easily perceive.
Affiliation(s)
- Adena Schachner
- Department of Psychology, University of California, San Diego, USA
3
Quinones JF, Pavan T, Liu X, Thiel CM, Heep A, Hildebrandt A. Fiber tracing and microstructural characterization among audiovisual integration brain regions in neonates compared with young adults. Neuroimage 2022; 254:119141. [PMID: 35342006] [DOI: 10.1016/j.neuroimage.2022.119141]
Abstract
Audiovisual integration (AVI) has been associated with advantages in cognitive processing and behavior, as well as with various socio-cognitive disorders. While some studies have identified brain regions instantiating this ability shortly after birth, little is known about the structural pathways connecting them. The goal of the present study was to reconstruct fiber tracts linking AVI regions in the newborn in-vivo brain and assess their adult-likeness by comparing them with analogous fiber tracts of young adults. We performed probabilistic tractography and compared connective probabilities between a sample of term-born neonates (N = 311; the Developing Human Connectome Project, dHCP, http://www.developingconnectome.org) and young adults (N = 311; the Human Connectome Project, https://www.humanconnectome.org/) by means of a classification algorithm. Furthermore, we computed Dice coefficients to assess between-group spatial similarity of the reconstructed fibers and used diffusion metrics to characterize neonates' AVI brain network in terms of microstructural properties, interhemispheric differences, and associations with perinatal covariates and biological sex. Overall, our results indicate that the AVI fiber bundles were successfully reconstructed in the vast majority of neonates, similarly to adults. Connective probability distributional similarities and spatial overlaps of AVI fibers between the two groups differed across the reconstructed fibers. There was a rank-order correspondence of the fibers' connective strengths across the groups. Additionally, the study revealed patterns of diffusion metrics in line with early white matter developmental trajectories and a developmental advantage for females. Altogether, these findings deliver evidence of meaningful structural connections among AVI regions in the newborn in-vivo brain.
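The Dice coefficient used here for between-group spatial similarity has a simple closed form, 2|A ∩ B| / (|A| + |B|) over binarized tract masks. A minimal illustration on toy masks (not dHCP data):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity of two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are trivially identical
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Toy 'tract' masks on a 4x4 voxel grid: each covers 4 voxels, 2 overlap.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
# Dice = 2*2 / (4 + 4) = 0.5
```

The coefficient ranges from 0 (no overlap) to 1 (identical masks), which is why it is a convenient summary of the spatial agreement between neonatal and adult tract reconstructions.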
4
Geangu E, Roberti E, Turati C. Do infants represent human actions cross-modally? An ERP visual-auditory priming study. Biol Psychol 2021; 160:108047. [PMID: 33596461] [DOI: 10.1016/j.biopsycho.2021.108047]
Abstract
Recent findings indicate that 7-month-old infants perceive and represent the sounds inherent to moving human bodies. However, it is not known whether infants integrate auditory and visual information in representations of specific human actions. To address this issue, we used ERPs to investigate infants' neural sensitivity to the correspondence between sounds and images of human actions. In a cross-modal priming paradigm, 7-month-olds were presented with the sounds generated by two types of human body movement, walking and handclapping, after watching the kinematics of those actions in either a congruent or an incongruent manner. ERPs recorded from frontal, central and parietal electrodes in response to action sounds indicate that 7-month-old infants perceptually link the visual and auditory cues of human actions. However, at this age these percepts do not seem to be integrated into cognitive multimodal representations of human actions.
5
Joly-Mascheroni R, Abad-Hernando S, Forster B, Calvo-Merino B. Embodiment and Multisensory Perception of Synchronicity: Biological Features Modulate Visual and Tactile Multisensory Interaction in Simultaneity Judgements. Multisens Res 2021; 34:1-18. [PMID: 33535162] [DOI: 10.1163/22134808-bja10020]
Abstract
The concept of embodiment has been used in multiple scenarios, but in cognitive neuroscience it normally refers to the comprehension of the role of one's own body in the cognition of everyday situations and the processes involved in that perception. Multisensory research is gradually embracing the concept of embodiment, but the focus has mostly been concentrated upon audiovisual integration. In two experiments, we evaluated how the likelihood of a perceived stimulus being embodied modulates visuotactile interaction in a simultaneity judgement task. Experiment 1 compared the perception of two visual stimuli with and without biological attributes (hands and geometrical shapes) moving towards each other, while tactile stimuli were delivered to the palm of the participants' hand. Participants judged whether the meeting point of the two periodically moving visual stimuli was synchronous with the tactile stimulation on their own hands. Results showed that in the hand condition, the point of subjective simultaneity (PSS) was significantly further from real synchrony (60 ms after the stimulus onset asynchrony, SOA) than in the geometrical-shape condition (45 ms after the SOA). In Experiment 2, we further explored the impact of biological attributes by comparing performance on two visual biological stimuli (hands and ears) that also vary in their motor and visuotactile properties. Results showed that the PSS was equally distant from real synchrony in both the hands and ears conditions. Overall, the findings suggest that embodied visual biological stimuli may modulate visual and tactile multisensory interaction in simultaneity judgements.
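For readers unfamiliar with the PSS, one crude way to estimate it from simultaneity-judgement data is the centroid of the tested SOAs weighted by the proportion of "simultaneous" responses; published studies typically fit a Gaussian or similar model instead. A toy sketch with invented data:

```python
import numpy as np

def estimate_pss(soas, p_simultaneous):
    """Crude PSS estimate: the centroid of the tested SOAs weighted
    by the proportion of 'simultaneous' judgements at each SOA."""
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_simultaneous, dtype=float)
    return float(np.dot(soas, p) / p.sum())

# Invented data: SOA (ms) of the tactile pulse relative to the visual
# meeting point, and the proportion of 'simultaneous' responses.
soas = [-120, -60, 0, 60, 120, 180]
p_sim = [0.05, 0.20, 0.60, 0.80, 0.60, 0.20]

pss = estimate_pss(soas, p_sim)  # positive: perceived simultaneity
                                 # is shifted away from physical 0 ms
```

A PSS well away from 0 ms, as in the hand condition of Experiment 1, means physical coincidence is not what observers perceive as simultaneous.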
Affiliation(s)
- Ramiro Joly-Mascheroni
- Cognitive Neuroscience Research Unit, Department of Psychology, City University of London, Northampton Square, EC1V 0HB, London, UK
- Sonia Abad-Hernando
- Cognitive Neuroscience Research Unit, Department of Psychology, City University of London, Northampton Square, EC1V 0HB, London, UK
- Bettina Forster
- Cognitive Neuroscience Research Unit, Department of Psychology, City University of London, Northampton Square, EC1V 0HB, London, UK
- Beatriz Calvo-Merino
- Cognitive Neuroscience Research Unit, Department of Psychology, City University of London, Northampton Square, EC1V 0HB, London, UK
6
Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020; 140:107396. [PMID: 32087206] [DOI: 10.1016/j.neuropsychologia.2020.107396]
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the temporal binding window (TBW). The width of the TBW, a proxy for audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating the audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to a changing, multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, the audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and a rebound in size in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly malleable through perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
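One simple way to quantify TBW width (a simplification of the psychometric model fitting used in the literature) is the span of SOAs over which the proportion of "synchronous" judgements stays above a criterion, with linear interpolation at the flanks. A sketch with invented data:

```python
import numpy as np

def tbw_width(soas, p_sync, criterion=0.75):
    """Width of the temporal binding window: the span of SOAs over
    which the proportion of 'synchronous' judgements is at or above
    the criterion, interpolating linearly on each flank."""
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_sync, dtype=float)
    above = np.where(p >= criterion)[0]
    if above.size == 0:
        return 0.0
    lo, hi = above[0], above[-1]
    left = soas[lo]
    if lo > 0:  # interpolate the rising crossing on the left flank
        left = np.interp(criterion, [p[lo - 1], p[lo]],
                         [soas[lo - 1], soas[lo]])
    right = soas[hi]
    if hi < len(soas) - 1:  # falling crossing on the right flank
        right = np.interp(criterion, [p[hi + 1], p[hi]],
                          [soas[hi + 1], soas[hi]])
    return float(right - left)

# Invented data: audio-leading (negative) to visual-leading (positive)
# SOAs in ms, and the proportion of 'synchronous' responses.
soas = [-300, -200, -100, 0, 100, 200, 300]
p_sync = [0.10, 0.40, 0.80, 0.95, 0.90, 0.50, 0.15]

width = tbw_width(soas, p_sync)  # window width in ms
```

On these numbers the criterion crossings fall at -112.5 ms and +137.5 ms, giving a 250 ms window; an "enlarged TBW", as described for the clinical groups above, simply corresponds to a larger value of this width.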
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
7
Curtindale LM, Bahrick LE, Lickliter R, Colombo J. Effects of multimodal synchrony on infant attention and heart rate during events with social and nonsocial stimuli. J Exp Child Psychol 2019; 178:283-294. [PMID: 30445204] [PMCID: PMC6980371] [DOI: 10.1016/j.jecp.2018.10.006]
Abstract
Attention is a state of readiness or alertness, associated with behavioral and psychophysiological responses, that facilitates learning and memory. Multisensory and dynamic events have been shown to elicit more attention and produce greater sustained attention in infants than auditory or visual events alone. Such redundant and often temporally synchronous information guides selectivity and facilitates perception, learning, and memory of properties of events specified by redundancy. In addition, events involving faces or other social stimuli provide an extraordinary amount of redundant information that attracts and sustains attention. In the current study, 4- and 8-month-old infants were shown 2-min multimodal videos featuring social or nonsocial stimuli to determine the relative roles of synchrony and stimulus category in inducing attention. Behavioral measures included average looking time and peak look duration, and convergent measurement of heart rate (HR) allowed for the calculation of HR-defined phases of attention: orienting (OR), sustained attention (SA), and attention termination (AT). The synchronous condition produced an earlier onset of SA (less time in OR) and a deeper state of SA than the asynchronous condition. Social stimuli attracted and held attention (longer peak looks and lower HR than nonsocial stimuli). The effects of synchrony and the social nature of the stimuli were additive, suggesting that their influences on attention are independent. These findings are the first to demonstrate different HR-defined phases of attention as a function of intersensory redundancy, suggesting greater salience and deeper processing of naturalistic synchronous audiovisual events compared with asynchronous ones.
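The OR/SA/AT sequence can be caricatured as a small state machine over a heart-rate trace relative to a prestimulus baseline. The threshold below is invented and far cruder than the beat-by-beat criteria used in this literature; the sketch only illustrates the logic of the phase definitions:

```python
def label_hr_phases(hr, baseline, drop=5.0):
    """Toy labelling of HR-defined attention phases (bpm samples).
    Orienting (OR): before heart rate has dropped `drop` bpm below
    the prestimulus baseline. Sustained attention (SA): from that
    first sustained deceleration onward. Attention termination (AT):
    after heart rate has returned to baseline."""
    labels = []
    phase = "OR"
    for sample in hr:
        if phase == "OR" and sample <= baseline - drop:
            phase = "SA"  # deceleration reached: sustained attention
        elif phase == "SA" and sample >= baseline:
            phase = "AT"  # HR back at baseline: attention terminates
        labels.append(phase)
    return labels

baseline = 140.0  # invented prestimulus heart rate (bpm)
hr = [139, 137, 134, 133, 134, 137, 141, 142]  # invented trace
phases = label_hr_phases(hr, baseline)
```

The deceleration-based definition is why the paper can report a "deeper state of SA": a lower HR during the SA stretch indicates deeper engagement.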
Affiliation(s)
- Lori M Curtindale
- Department of Psychology, East Carolina University, Greenville, NC 27858, USA
- Lorraine E Bahrick
- Department of Psychology, Florida International University, Miami, FL 33199, USA
- Robert Lickliter
- Department of Psychology, Florida International University, Miami, FL 33199, USA
- John Colombo
- Department of Psychology, University of Kansas, Lawrence, KS 66045, USA
8
Bache C, Springer A, Noack H, Stadler W, Kopp F, Lindenberger U, Werkle-Bergner M. 10-Month-Old Infants Are Sensitive to the Time Course of Perceived Actions: Eye-Tracking and EEG Evidence. Front Psychol 2017; 8:1170. [PMID: 28769831] [PMCID: PMC5509954] [DOI: 10.3389/fpsyg.2017.01170]
Abstract
Research has shown that infants are able to track a moving target efficiently, even when it is transiently occluded from sight. This basic ability allows prediction of when and where events happen in everyday life. Yet it is unclear whether, and how, infants internally represent the time course of ongoing movements to derive predictions. In this study, 10-month-old crawlers observed a video of a same-aged crawling baby that was transiently occluded and reappeared in either a temporally continuous or a non-continuous manner (i.e., delayed by 500 ms vs. forwarded by 500 ms relative to the real-time movement). Eye movements and rhythmic neural brain activity (EEG) were measured simultaneously. Eye movement analyses showed that infants were sensitive to slight temporal shifts in movement continuation after occlusion. Furthermore, brain activity associated with sensorimotor processing differed between observation of continuous and non-continuous movements. Early sensitivity to an action's timing may hence be explained within the internal real-time simulation account of action observation. Overall, the results support the hypothesis that 10-month-old infants are well prepared for internal representation of the time course of observed movements that are within their current motor repertoire.
Affiliation(s)
- Cathleen Bache
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Anne Springer
- Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Clinical Psychology and Psychiatry, University of Basel, Basel, Switzerland
- Hannes Noack
- Institute for Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Waltraud Stadler
- Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Human Movement Science, Technische Universität München, Munich, Germany
- Franziska Kopp
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- European University Institute, Fiesole, Italy
- Markus Werkle-Bergner
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
9
Hannon EE, Schachner A, Nave-Blodgett JE. Babies know bad dancing when they see it: Older but not younger infants discriminate between synchronous and asynchronous audiovisual musical displays. J Exp Child Psychol 2017; 159:159-174. [PMID: 28288412] [DOI: 10.1016/j.jecp.2017.01.006]
Abstract
Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays, such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of life.
Affiliation(s)
- Erin E Hannon
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Adena Schachner
- Department of Psychology, University of California, San Diego, La Jolla, CA 92093, USA
10
Abstract
The authors provide an alternative to the traditional view that verbs are harder to learn than nouns by reviewing three lines of behavioral and neurophysiological evidence in word-mapping development across cultures. First, preverbal infants tune into word-action and word-object pairings using domain-general mechanisms. Second, while post-verbal infants from noun-friendly language environments experience verb-action mapping difficulty, infants from verb-friendly language environments do not. Third, children use language-specific conventions to learn all types of words, although still strongly influenced by their language environment. Additionally, the authors suggest neurophysiological research to advance these lines of evidence beyond traditional views of word learning.
Affiliation(s)
- Lakshmi Gogate
- Communication Sciences and Disorders, University of Missouri-Columbia, Columbia, Missouri
- George Hollich
- Psychological Sciences, Purdue University, West Lafayette, Indiana
11
Abstract
In this article, we describe behavioral and neurophysiological evidence for infants' multimodal face-voice perception. We argue that the behavioral development of face-voice perception, like multimodal perception more broadly, is consistent with the intersensory redundancy hypothesis (IRH). Furthermore, we highlight that several recently observed features of the neural responses in infants converge with the behavioral predictions of the IRH. Finally, we discuss the potential benefits of combining brain and behavioral measures to study multisensory processing, as well as some applications of this work for atypical development.
Affiliation(s)
- Daniel C Hyde
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Ross Flom
- Department of Psychology, Brigham Young University, Provo, Utah
- Chris L Porter
- School of Family Life, Brigham Young University, Provo, Utah
12
Gerson SA, Schiavio A, Timmers R, Hunnius S. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions. PLoS One 2015; 10:e0130960. [PMID: 26111226] [PMCID: PMC4482535] [DOI: 10.1371/journal.pone.0130960]
Abstract
In the current study, we examined the role of active experience in sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentations of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights into the embodied roots of (early) music perception and cognition.
Affiliation(s)
- Sarah A. Gerson
- University of St Andrews, School of Psychology & Neuroscience, St Andrews, United Kingdom
- Donders Institute for Brain, Cognition, and Behaviour, Center for Cognition, Radboud University, Nijmegen, The Netherlands
- Andrea Schiavio
- Music Mind Machine in Sheffield, Department of Music, The University of Sheffield, Sheffield, United Kingdom
- Renee Timmers
- Music Mind Machine in Sheffield, Department of Music, The University of Sheffield, Sheffield, United Kingdom
- Sabine Hunnius
- Donders Institute for Brain, Cognition, and Behaviour, Center for Cognition, Radboud University, Nijmegen, The Netherlands
13
Baart M, Bortfeld H, Vroomen J. Phonetic matching of auditory and visual speech develops during childhood: evidence from sine-wave speech. J Exp Child Psychol 2014; 129:157-164. [PMID: 25258018] [DOI: 10.1016/j.jecp.2014.08.002]
Abstract
The correspondence between auditory speech and lip-read information can be detected based on a combination of temporal and phonetic cross-modal cues. Here, we determined the point in developmental time at which children start to effectively use phonetic information to match a speech sound with one of two articulating faces. We presented 4- to 11-year-olds (N = 77) with three-syllable sine-wave speech replicas of two pseudo-words that were perceived as non-speech and asked them to match the sounds with the corresponding lip-read video. At first, children had no phonetic knowledge of the sounds, and matching was thus based on the temporal cues that are fully retained in sine-wave speech. Next, we trained all children to perceive the phonetic identity of the sine-wave speech and repeated the audiovisual (AV) matching task. Only at around 6.5 years of age did the benefit of having phonetic knowledge of the stimuli become apparent, indicating that AV matching based on phonetic cues presumably develops more slowly than AV matching based on temporal cues.
Affiliation(s)
- Martijn Baart
- BCBL, Basque Center on Cognition, Brain, and Language, 20009 Donostia (San Sebastián), Spain
- Heather Bortfeld
- Department of Psychology, University of Connecticut, Storrs, CT 06269, USA; Haskins Laboratories, New Haven, CT 06511, USA
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, 5000 LE Tilburg, The Netherlands