1
Kaganovich N, Ragheb R, Christ S, Schumaker J. Audiovisual speech perception deficits in unaffected siblings of children with developmental language disorder. Brain Lang 2025;263:105547. PMID: 39954391; PMCID: PMC12147596; DOI: 10.1016/j.bandl.2025.105547.
Abstract
Siblings of children with developmental language disorder (DLD) often have weaker language skills compared to peers with typical development (TD). However, whether their language-relevant audiovisual skills are also atypical is unknown. Study 1 examined whether siblings use information about a talker's mouth shape during phonemic processing as children with TD do. Study 2 examined siblings' ability to match auditory words with observed word articulations. Only children with TD showed a significant MMN to audiovisual phonemic violations, suggesting that, just like in children with DLD, lip shape does not modulate phonemic processing in siblings. Children with DLD and siblings were also less accurate than children with TD at detecting audiovisual word mismatches. The N400 amplitude in children with TD was significantly larger than in children with DLD and marginally larger than in siblings. Phonemic and lexical representations in siblings lack audiovisual details, which may contribute to poor language development.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States; Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2038, United States.
- Rhiana Ragheb
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States
- Sharon Christ
- Department of Statistics, Purdue University, 250 N. University Street, West Lafayette, IN 47907-2066, United States; Department of Human Development and Family Studies, Purdue University, 1202 West State St, West Lafayette, IN 47907-2055, United States
- Jennifer Schumaker
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States
2
Cirelli LK, Talukder LS, Kragness HE. Infant attention to rhythmic audiovisual synchrony is modulated by stimulus properties. Front Psychol 2024;15:1393295. PMID: 39027053; PMCID: PMC11256966; DOI: 10.3389/fpsyg.2024.1393295.
Abstract
Musical interactions are a common and multimodal part of an infant's daily experiences. Infants hear their parents sing while watching their lips move and see their older siblings dance along to music playing over the radio. Here, we explore whether 8- to 12-month-old infants associate musical rhythms they hear with synchronous visual displays by tracking their dynamic visual attention to matched and mismatched displays. Visual attention was measured using eye-tracking while infants attended to a screen displaying two videos of a finger tapping at different speeds. These videos were presented side by side while infants listened to an auditory rhythm (high or low pitch) synchronized with one of the two videos. Infants attended more to the low-pitch trials than to the high-pitch trials but did not show a within-trial preference for the synchronous hand over the asynchronous hand. Exploratory evidence, however, suggests that tempo, pitch, and rhythmic complexity interactively engage infants' visual attention to a tapping hand, especially when that hand is aligned with the auditory stimulus. For example, when the rhythm was complex and the auditory stimulus was low in pitch, infants attended more to the fast hand when it was aligned with the auditory stream than when it was misaligned. These results suggest that audiovisual integration in rhythmic, non-speech contexts is influenced by stimulus properties.
Affiliation(s)
- Laura K. Cirelli
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Labeeb S. Talukder
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Haley E. Kragness
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Psychology Department, Bucknell University, Lewisburg, PA, United States
3
Werker JF. Phonetic perceptual reorganization across the first year of life: Looking back. Infant Behav Dev 2024;75:101935. PMID: 38569416; DOI: 10.1016/j.infbeh.2024.101935.
Abstract
This paper provides a selective overview of some of the research that has followed from the publication of Werker and Tees (1984a), "Cross-language speech perception: Evidence for perceptual reorganization during the first year of life." Specifically, I briefly present the original finding, our interpretation of its meaning, and some key replications and extensions. I then review some of the work that has followed, including work with different kinds of populations, different kinds of speech sound contrasts, as well as attunement (perceptual reorganization) to additional properties of language beyond phonetic contrasts. Included is the body of work that queries whether perceptual attunement is a critical period phenomenon. Potential learning mechanisms for how experience functions to guide phonetic perceptual development are also presented, as is work on the relation between speech perception and word learning.
Affiliation(s)
- Janet F Werker
- Department of Psychology, The University of British Columbia, Canada.
4
Dal Ben R, Prequero IT, Souza DDH, Hay JF. Speech segmentation and cross-situational word learning in parallel. Open Mind (Camb) 2023;7:510-533. PMID: 37637304; PMCID: PMC10449405; DOI: 10.1162/opmi_a_00095.
Abstract
Language learners track conditional probabilities to find words in continuous speech and to map words and objects across ambiguous contexts. It remains unclear, however, whether learners can leverage the structure of the linguistic input to do both tasks at the same time. To explore this question, we combined speech segmentation and cross-situational word learning into a single task. In Experiment 1, when adults (N = 60) simultaneously segmented continuous speech and mapped the newly segmented words to objects, they demonstrated better performance than when either task was performed alone. However, when the speech stream had conflicting statistics, participants were able to correctly map words to objects but were at chance level on speech segmentation. In Experiment 2, we used a more sensitive speech segmentation measure and found that adults (N = 35), exposed to the same conflicting speech stream, correctly identified non-words as such but were still unable to discriminate between words and part-words. Again, mapping was above chance. Our study suggests that learners can track multiple sources of statistical information to find and map words to objects in noisy environments. It also prompts questions about how to effectively measure the knowledge arising from these learning experiences.
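To make the conditional-probability tracking described in this abstract concrete, the sketch below (purely illustrative, not code or stimuli from the cited study) computes transitional probabilities between adjacent syllables in a toy stream and places word boundaries where the probability dips; the made-up words, syllable inventory, and threshold are all hypothetical.

```python
# Illustrative sketch of statistical speech segmentation via transitional
# probabilities (TPs); the toy words and threshold are hypothetical.
from collections import Counter

def transitional_probabilities(syllables):
    """Return P(next syllable | current syllable) for each adjacent pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Insert a word boundary wherever the TP between neighbours falls below threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:   # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy continuous stream built from three made-up words: pabiku, tibudo, golatu.
stream = "pa bi ku ti bu do go la tu ti bu do pa bi ku go la tu pa bi ku ti bu do go la tu".split()
print(segment(stream))
# Within-word TPs are 1.0 and between-word TPs are at most ~0.67 here,
# so the boundaries recover the three words.
```

This covers only the segmentation half of the task the study combines; the cross-situational mapping half would additionally track word-object co-occurrence counts across ambiguous scenes.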
Affiliation(s)
- Rodrigo Dal Ben
- Universidade Federal de São Carlos, São Carlos, São Paulo, Brazil
5
Choi D, Yeung HH, Werker JF. Sensorimotor foundations of speech perception in infancy. Trends Cogn Sci 2023:S1364-6613(23)00124-9. PMID: 37302917; DOI: 10.1016/j.tics.2023.05.007.
Abstract
The perceptual system for speech is highly organized from early infancy. This organization bootstraps young human learners' ability to acquire their native speech and language from speech input. Here, we review behavioral and neuroimaging evidence that perceptual systems beyond the auditory modality are also specialized for speech in infancy, and that motor and sensorimotor systems can influence speech perception even in infants too young to produce speech-like vocalizations. These investigations complement existing literature on infant vocal development and on the interplay between speech perception and production systems in adults. We conclude that a multimodal speech and language network is present before speech-like vocalizations emerge.
Affiliation(s)
- Dawoon Choi
- Department of Psychology, Yale University, New Haven, CT, USA
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Janet F Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada.
6
Zamuner TS, Rabideau T, McDonald M, Yeung HH. Developmental change in children's speech processing of auditory and visual cues: An eyetracking study. J Child Lang 2023;50:27-51. PMID: 36503546; DOI: 10.1017/s0305000921000684.
Abstract
This study investigates how children aged two to eight years (N = 129) and adults (N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between the apparent success of visual speech processing by young children in visual-looking tasks and the apparent difficulty of speech processing shown by older children on explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only) or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier for /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood.
Affiliation(s)
- Margarethe McDonald
- Department of Linguistics, University of Ottawa, Canada
- School of Psychology, University of Ottawa, Canada
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Canada
- Integrative Neuroscience and Cognition Centre, UMR 8002, CNRS and University of Paris, France
7
Belteki Z, van den Boomen C, Junge C. Face-to-face contact during infancy: How the development of gaze to faces feeds into infants' vocabulary outcomes. Front Psychol 2022;13:997186. PMID: 36389540; PMCID: PMC9650530; DOI: 10.3389/fpsyg.2022.997186.
Abstract
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potentially strong social cue for facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support their subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces selectively, in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but that its relevance may depend on the developmental level of the infant and the type of task with which they are presented. Gaze to faces proves relevant to vocabulary: gaze to the eyes could signal the communicative nature of the situation or help identify the labeled object, while gaze to the mouth could improve word processing, all of which are key cues for highlighting word-to-world pairings. We also identify gaps in the literature regarding how infants' gaze to faces (versus objects) or to different types of faces relates to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
8
Roth KC, Clayton KRH, Reynolds GD. Infant selective attention to native and non-native audiovisual speech. Sci Rep 2022;12:15781. PMID: 36138107; PMCID: PMC9500058; DOI: 10.1038/s41598-022-19704-5.
Abstract
The current study utilized eye-tracking to investigate the effects of intersensory redundancy and language on infant visual attention and detection of a change in prosody in audiovisual speech. Twelve-month-old monolingual English-learning infants viewed either synchronous (redundant) or asynchronous (non-redundant) presentations of a woman speaking in native or non-native speech. Halfway through each trial, the speaker changed prosody from infant-directed speech (IDS) to adult-directed speech (ADS) or vice versa. Infants focused more on the mouth of the speaker on IDS trials compared to ADS trials regardless of language or intersensory redundancy. Additionally, infants demonstrated greater detection of prosody changes from IDS to ADS in native speech. Planned comparisons indicated that infants detected prosody changes across a broader range of conditions during redundant stimulus presentations. These findings shed light on the influence of language and prosody on infant attention and highlight the complexity of audiovisual speech processing in infancy.
Affiliation(s)
- Kelly C Roth
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, 37996, USA
- Data Scientist at 84.51°, Cincinnati, OH, 45202, USA
- Kenna R H Clayton
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, 37996, USA
- Greg D Reynolds
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, 37996, USA.
9
Lozano I, López Pérez D, Laudańska Z, Malinowska-Korczak A, Szmytke M, Radkowska A, Tomalski P. Changes in selective attention to articulating mouth across infancy: Sex differences and associations with language outcomes. Infancy 2022;27:1132-1153. DOI: 10.1111/infa.12496.
Affiliation(s)
- Itziar Lozano
- Department of Cognitive Psychology and Neurocognitive Science, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Universidad Autónoma de Madrid, Faculty of Psychology, Madrid, Spain
- David López Pérez
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Zuzanna Laudańska
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Anna Malinowska-Korczak
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Magdalena Szmytke
- Neurocognitive Development Lab, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Alicja Radkowska
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Neurocognitive Development Lab, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Przemysław Tomalski
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
10
Mercure E, Bright P, Quiroz I, Filippi R. Effect of infant bilingualism on audiovisual integration in a McGurk task. J Exp Child Psychol 2022;217:105351. DOI: 10.1016/j.jecp.2021.105351.
11
Erdener D, Evren Erdener Ş. Speechreading as a secondary diagnostic tool in bipolar disorder. Med Hypotheses 2022. DOI: 10.1016/j.mehy.2021.110744.
12
Kaganovich N, Christ S. Event-related potentials evidence for long-term audiovisual representations of phonemes in adults. Eur J Neurosci 2021;54:7860-7875. PMID: 34750895; PMCID: PMC8815308; DOI: 10.1111/ejn.15519.
Abstract
The presence of long-term auditory representations for phonemes has been well-established. However, since speech perception is typically audiovisual, we hypothesized that long-term phoneme representations may also contain information on speakers' mouth shape during articulation. We used an audiovisual oddball paradigm in which, on each trial, participants saw a face and heard one of two vowels. One vowel occurred frequently (standard), while another occurred rarely (deviant). In one condition (neutral), the face had a closed, non-articulating mouth. In the other condition (audiovisual violation), the mouth shape matched the frequent vowel. Although in both conditions stimuli were audiovisual, we hypothesized that identical auditory changes would be perceived differently by participants. Namely, in the neutral condition, deviants violated only the audiovisual pattern specific to each block. By contrast, in the audiovisual violation condition, deviants additionally violated long-term representations for how a speaker's mouth looks during articulation. We compared the amplitude of mismatch negativity (MMN) and P3 components elicited by deviants in the two conditions. The MMN extended posteriorly over temporal and occipital sites even though deviants contained no visual changes, suggesting that deviants were perceived as interruptions in audiovisual, rather than auditory only, sequences. As predicted, deviants elicited larger MMN and P3 in the audiovisual violation compared to the neutral condition. The results suggest that long-term representations of phonemes are indeed audiovisual.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Department of Psychological Sciences, Purdue University, West Lafayette, Indiana, USA
- Sharon Christ
- Department of Human Development and Family Studies, Purdue University, West Lafayette, Indiana, USA
- Department of Statistics, Purdue University, West Lafayette, Indiana, USA
13
Havy M, Zesiger PE. Bridging ears and eyes when learning spoken words: On the effects of bilingual experience at 30 months. Dev Sci 2020;24:e13002. PMID: 32506622; DOI: 10.1111/desc.13002.
Abstract
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross-modal word-learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning ('same modality' condition: auditory test after auditory learning, visual test after visual learning) or in the other modality ('cross-modality' condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross-modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross-modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross-modal representation of visually learned words.
Affiliation(s)
- Mélanie Havy
- Faculty of Psychology and Educational Sciences, Geneva University, Geneva, Switzerland
- Pascal E Zesiger
- Faculty of Psychology and Educational Sciences, Geneva University, Geneva, Switzerland
14
Sensorimotor influences on speech perception in pre-babbling infants: Replication and extension of Bruderer et al. (2015). Psychon Bull Rev 2019;26:1388-1399. PMID: 31037603; DOI: 10.3758/s13423-019-01601-0.
Abstract
The relationship between speech perception and production is central to understanding language processing, yet remains under debate, particularly in early development. Recent research suggests that in infants aged 6 months, when the native phonological system is still being established, sensorimotor information from the articulators influences speech perception: The placement of a teething toy restricting tongue-tip movements interfered with infants' discrimination of a non-native contrast, /Da/-/da/, that involves tongue-tip movement. This effect was selective: A different teething toy that prevented lip closure but not tongue-tip movement did not disrupt discrimination. We conducted two sets of studies to replicate and extend these findings. Experiments 1 and 2 replicated the study by Bruderer et al. (Proceedings of the National Academy of Sciences of the United States of America, 112 (44), 13531-13536, 2015), but with synthesized auditory stimuli. Infants discriminated the non-native contrast (dental /da/ - retroflex /Da/) (Experiment 1), but showed no evidence of discrimination when the tongue-tip movement was prevented with a teething toy (Experiment 2). Experiments 3 and 4 extended this work to a native phonetic contrast (bilabial /ba/ - dental /da/). Infants discriminated the distinction with no teething toy present (Experiment 3), but when they were given a teething toy that interfered only with lip closure, a movement involved in the production of /ba/, discrimination was disrupted (Experiment 4). Importantly, this was the same teething toy that did not interfere with discrimination of /da/-/Da/ in Bruderer et al. (2015). These findings reveal specificity in the relation between sensorimotor and perceptual processes in pre-babbling infants, and show generalizability to a second phonetic contrast.
15
de la Cruz-Pavía I, Gervain J, Vatikiotis-Bateson E, Werker JF. Finding phrases: On the role of co-verbal facial information in learning word order in infancy. PLoS One 2019;14:e0224786. PMID: 31710615; PMCID: PMC6844464; DOI: 10.1371/journal.pone.0224786.
Abstract
The input contains perceptually available cues that might allow young infants to discover abstract properties of the target language. For instance, word frequency and prosodic prominence correlate systematically with basic word order in natural languages. Prelexical infants are sensitive to these frequency-based and prosodic cues and use them to parse new input into phrases that follow the order characteristic of their native languages. Importantly, young infants readily integrate auditory and visual facial information while processing language. Here, we ask whether co-verbal visual information provided by talking faces also helps prelexical infants learn the word order of their native language, in addition to word frequency and prosodic prominence. We created two structurally ambiguous artificial languages containing head nods produced by an animated avatar, aligned or misaligned with the frequency-based and prosodic information. For 4 minutes, two groups of 4- and 8-month-old infants were familiarized with the artificial language containing aligned auditory and visual cues, while two further groups were exposed to the misaligned language. Using a modified Headturn Preference Procedure, we tested infants' preference for test items exhibiting the word order of the native language, French, vs. the opposite word order. At 4 months, infants had no preference, suggesting that 4-month-olds were not able to integrate the three available cues, or had not yet built a representation of word order. By contrast, 8-month-olds showed no preference when auditory and visual cues were aligned and a preference for the native word order when visual cues were misaligned. These results imply that infants at this age start to integrate the co-verbal visual and auditory cues.
Affiliation(s)
- Irene de la Cruz-Pavía
- Integrative Neuroscience and Cognition Center (INCC–UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), Paris, France
- Integrative Neuroscience and Cognition Center (INCC–UMR 8002), CNRS, Paris, France
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Judit Gervain
- Integrative Neuroscience and Cognition Center (INCC–UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), Paris, France
- Integrative Neuroscience and Cognition Center (INCC–UMR 8002), CNRS, Paris, France
- Eric Vatikiotis-Bateson
- Department of Linguistics, University of British Columbia, Vancouver, British Columbia, Canada
- Janet F. Werker
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
16
Children's suffix effects for verbal working memory reflect phonological coding and perceptual grouping. J Exp Child Psychol 2019;183:276-294. DOI: 10.1016/j.jecp.2019.03.003.
17
May L, Baron AS, Werker JF. Who can speak that language? Eleven-month-old infants have language-dependent expectations regarding speaker ethnicity. Dev Psychobiol 2019;61:859-873. DOI: 10.1002/dev.21851.
Affiliation(s)
- Lillian May
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Andrew S. Baron
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Janet F. Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
18
Mercure E, Kushnerenko E, Goldberg L, Bowden-Howl H, Coulson K, Johnson MH, MacSweeney M. Language experience influences audiovisual speech integration in unimodal and bimodal bilingual infants. Dev Sci 2019;22:e12701. PMID: 30014580; PMCID: PMC6393757; DOI: 10.1111/desc.12701.
Abstract
Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4- to 8-month-old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye-tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger (4 to 6.5 months), monolinguals showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.
Affiliation(s)
- Harriet Bowden-Howl
- UCL Institute of Cognitive Neuroscience, London, UK
- School of Psychology, University of Plymouth, Plymouth, UK
- Kimberley Coulson
- UCL Institute of Cognitive Neuroscience, London, UK
- Department of Psychology and Sports Sciences, University of Hertfordshire, Hatfield, Hertfordshire, UK
- Mark H Johnson
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Department of Psychology, University of Cambridge, Cambridge, UK
19
Altvater-Mackensen N, Grossmann T. Modality-independent recruitment of inferior frontal cortex during speech processing in human infants. Dev Cogn Neurosci 2018;34:130-138. PMID: 30391756; PMCID: PMC6969291; DOI: 10.1016/j.dcn.2018.10.002.
Abstract
Despite increasing interest in the development of audiovisual speech perception in infancy, the underlying mechanisms and neural processes are still only poorly understood. In addition to regions in temporal cortex associated with speech processing and multimodal integration, such as superior temporal sulcus, left inferior frontal cortex (IFC) has been suggested to be critically involved in mapping information from different modalities during speech perception. To further illuminate the role of IFC during infant language learning and speech perception, the current study examined the processing of auditory, visual and audiovisual speech in 6-month-old infants using functional near-infrared spectroscopy (fNIRS). Our results revealed that infants recruit speech-sensitive regions in frontal cortex including IFC regardless of whether they processed unimodal or multimodal speech. We argue that IFC may play an important role in associating multimodal speech information during the early steps of language learning.
Affiliation(s)
- Nicole Altvater-Mackensen
- Department of Psychology, Johannes-Gutenberg-University Mainz, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Tobias Grossmann
- Department of Psychology, University of Virginia, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
20
Echeverría-Palacio CM, Uscátegui-Daccarett A, Talero-Gutiérrez C. Integración auditiva, visual y propioceptiva como sustrato del desarrollo del lenguaje [Auditory, visual, and proprioceptive integration as a substrate of language development]. Rev Fac Med 2018. DOI: 10.15446/revfacmed.v66n3.60490.
Abstract
Introduction. Language development is a complex process regarded as an evolutionary marker of the human being; it can be understood through the contribution of the sensory systems and of the events that occur during critical periods of development. Objective. To review how auditory, visual, and proprioceptive information is integrated and how this integration is reflected in language development, highlighting the role of social interaction as a context that favors this process. Materials and methods. The MeSH terms "Language Development", "Visual Perception", "Hearing", and "Proprioception" were searched in the MEDLINE and Embase databases, limiting the main search to articles written in English, Spanish, and Portuguese. Results. The starting point is auditory information, which, during the first year of life, allows discrimination of the elements of the environment that correspond to language, followed by a peak in its acquisition and subsequently a stage of maximum linguistic discrimination. Visual information provides the mapping of language onto images, the substrate for naming and word comprehension, as well as the interpretation and imitation of the emotional component of gesture. Proprioceptive information provides feedback on the motor execution patterns used in language production. Conclusion. Studying language development from the perspective of sensory integration offers new perspectives for assessing and intervening in its deviations.
21
Havy M, Zesiger P. Learning spoken words via the ears and eyes: Evidence from 30-month-old children. Front Psychol 2017;8:2122. PMID: 29276493; PMCID: PMC5727082; DOI: 10.3389/fpsyg.2017.02122.
Abstract
From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.
Affiliation(s)
- Mélanie Havy
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland