1
Meijer A, Benard MR, Woonink A, Başkent D, Dirks E. The Auditory Environment at Early Intervention Groups for Young Children With Hearing Loss: Signal to Noise Ratio, Background Noise, and Reverberation. Ear Hear 2025; 46:827-837. PMID: 39789707; PMCID: PMC11984553; DOI: 10.1097/aud.0000000000001627.
Abstract
OBJECTIVES One important aspect of facilitating language access for children with hearing loss (HL) is the auditory environment. An optimal auditory environment is characterized by high signal to noise ratios (SNRs), low background noise levels, and low reverberation times. In this study, the authors describe the auditory environment of early intervention groups specifically equipped for young children with HL. DESIGN Seven early intervention groups for children with HL were included in the study. A total of 26 young children (22 to 46 months) visiting those groups participated. Language ENvironment Analysis (LENA) recorders were used to record all sounds around a child during one group visit. The recordings were analyzed to estimate SNR levels and background noise levels during the intervention groups. The unoccupied noise levels and reverberation times were measured in the unoccupied room either directly before or after the group visit. RESULTS The average SNR encountered by the children in the intervention groups was +13 dB SNR. The detected speech of the attending professionals met the +15 dB SNR recommended by the American Speech-Language-Hearing Association approximately 42% of the time. The unoccupied noise levels were between 29 and 39 dBA, complying with acoustic norms for classroom environments (≤35 dBA, per ANSI/ASA S12.60-2010 Part 1) for six out of seven groups. Reverberation time was between 0.3 and 0.6 sec for all groups, which complies with the acoustic norms for classroom environments for children without HL (0.6 or 0.7 sec, depending on room size), while only one group complied with the stricter norm for children with HL (0.3 sec). CONCLUSIONS The current findings characterize the auditory environment of a setting that is specifically equipped and designed for groups of children with HL. Maintaining favorable SNRs seems to be the largest challenge within the constraints of an environment where young children gather, play, and learn. The results underscore the importance of staying attentive to keeping spoken language accessible for children with HL in a group setting.
Affiliation(s)
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands
- Evelien Dirks
- Dutch Foundation of the Deaf and Hard of Hearing Child (NSDSK), Amsterdam, The Netherlands
- Department Tranzo, Tilburg University, The Netherlands
2
Benítez-Barrera CR, Denicola-Prechtl K, Castro S, Maguire MJ. A lot of noise about nothing? Speech-to-noise ratios rather than noise predict language outcomes in preschoolers. J Exp Child Psychol 2025; 252:106173. PMID: 39823719; DOI: 10.1016/j.jecp.2024.106173.
Abstract
It has been proposed that a childhood in a noisy household might lead to poor language skills and slow development of language areas of the brain. Notably, a direct link between noisy households and language development has not been confirmed. Households might have high levels of noise for a range of reasons, including situational (near a large road intersection or airport), family (large families), and cultural (differences in beliefs surrounding noise in the home, including media use). We argue that, within safe levels, noise itself is not problematic for language development if language is made accessible to children. To test this hypothesis, we used Language ENvironment Analysis (LENA) devices to record 3- to 5-year-old children's home environments. All children were living in Spanish-dominant households. Language skills were assessed in Spanish and English. In addition to overall noise levels in the home, we calculated speech-to-noise ratios as an index of access to speech in real-world conditions. There was no relationship between noise in the home and language outcomes. Instead, speech-to-noise ratio explained a significant proportion of variability in language outcomes. The results indicate that access to language, enhanced for example by speaking loudly or moving closer to the child, rather than noise per se, plays a significant role in children's language development outcomes.
Affiliation(s)
- Carlos R Benítez-Barrera
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI 53705, USA; Waisman Center, University of Wisconsin-Madison, Madison, WI 53705, USA
- Kathleen Denicola-Prechtl
- Callier Center for Communication Disorders, University of Texas at Dallas, Richardson, TX 75080, USA
- Stephanie Castro
- Callier Center for Communication Disorders, University of Texas at Dallas, Richardson, TX 75080, USA
- Mandy J Maguire
- Callier Center for Communication Disorders, University of Texas at Dallas, Richardson, TX 75080, USA; Center for Children and Families, University of Texas at Dallas, Richardson, TX 75080, USA
3
Saksida A, Živanović S, Battelino S, Orzan E. Let's See If You Can Hear: The Effect of Stimulus Type and Intensity to Pupil Diameter Response in Infants and Adults. Ear Hear 2025 (published online ahead of print). PMID: 40113591; DOI: 10.1097/aud.0000000000001651.
Abstract
OBJECTIVES Pupil dilation can serve as a measure of auditory attention. It has been proposed as an objective measure for adjusting hearing aid configurations, and as a measure of hearing threshold in the pediatric population. Here we explore (1) whether the pupillary dilation response (PDR) to audible sounds can be reliably measured in normally hearing infants within their average attention span, and in normally hearing adults, (2) how accurate within-participant models are in classifying PDR based on the stimulus type at various intensity levels, (3) whether the amount of analyzed data affects the model reliability, and (4) whether we can observe systematic differences in the PDR between speech and nonspeech sounds, and between the discrimination and detection paradigms. DESIGN In experiment 1, we measured the PDR to target warble tones at 500 to 4000 Hz compared with a standard tone (250 Hz) using an oddball discrimination test. A group of normally hearing infants was tested in experiment 1a (n = 36, mean age [ME] = 21 months), and a group of young adults in experiment 1b (n = 12, ME = 29 years). The test was divided into five intensity blocks (30 to 70 dB SPL). In experiment 2a (n = 11, ME = 24 years), the task from experiment 1 was transformed into a detection task by removing the standard warble tone, and in experiment 2b (n = 12, ME = 29 years), participants listened to linguistic (Ling-6) sounds instead of tones. RESULTS In all experiments, the increased PDR was significantly associated with target sound stimuli at the group level. Although we found no overall effect of intensity on the response amplitude, the results were most clearly visible at the highest tested intensity level (70 dB SPL). The nonlinear classification models, run for each participant separately, yielded above-chance classification accuracy (sensitivity, specificity, and positive predictive value above 0.5) in 76% of infants and in 75% of adults. Accuracy further improved when only the first six trials at each intensity level were analyzed. However, accuracy was similar when pupil data were randomly attributed to the target or standard categories, indicating over-sensitivity of the proposed algorithms to regularities in the PDR at the individual level. No differences in classification accuracy were found between infants and adults at the group level, nor between the discrimination and detection paradigms (experiment 2a versus 1b), whereas the results in experiment 2b (speech stimuli) outperformed those in experiment 1b (tone stimuli). CONCLUSIONS The study confirms that the PDR is elicited in both infants and adults across different stimulus types and task paradigms and may thus serve as an indicator of auditory attention. However, for estimating the hearing (or comfortable listening) threshold at the individual level, the most efficient and time-effective protocol, with the most appropriate type and number of stimuli and a reliable signal to noise ratio, is yet to be defined. Future research should explore the application of pupillometry in diverse populations to validate its effectiveness as a supplementary or confirmatory measure within standard audiological evaluation procedures.
Affiliation(s)
- Amanda Saksida
- Pediatric Audiology and Otolaryngology Unit, Institute for Maternal and Child Health, IRCCS "Burlo Garofolo", Trieste, Italy
- Centre for Discourse Studies, Educational Research Institute, Ljubljana, Slovenia
- Sašo Živanović
- Faculty of Arts, University of Ljubljana, Ljubljana, Slovenia
- Saba Battelino
- Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
- Eva Orzan
- Pediatric Audiology and Otolaryngology Unit, Institute for Maternal and Child Health, IRCCS "Burlo Garofolo", Trieste, Italy
4
Carlie J, Sahlén B, Andersson K, Johansson R, Whitling S, Jonas Brännström K. Culturally and linguistically diverse children's retention of spoken narratives encoded in quiet and in babble noise. J Exp Child Psychol 2025; 249:106088. PMID: 39316884; DOI: 10.1016/j.jecp.2024.106088.
Abstract
Multi-talker noise impedes children's speech processing and may affect children listening to their second language more than children listening to their first language. Evidence suggests that multi-talker noise may also impede children's memory retention and learning. A total of 80 culturally and linguistically diverse children aged 7 to 9 years listened to narratives in two listening conditions: quiet and multi-talker noise (signal-to-noise ratio +6 dB). Repeated recall (immediate and delayed) was measured across a 1-week retention interval. Retention was calculated as the difference in recall accuracy per question between immediate and delayed recall. Working memory capacity was assessed, and the children's degree of school language (Swedish) exposure was quantified. Immediate narrative recall was lower for the narrative encoded in noise than in quiet. During delayed recall, narrative recall was similar for both listening conditions. Children with higher degrees of school language exposure and higher working memory capacity had better narrative recall overall, but these factors were not associated with an effect of listening condition or retention. Multi-talker babble noise does not impair culturally and linguistically diverse primary school children's retention of spoken narratives as measured by multiple-choice questions. Although a quiet listening condition allows for superior encoding compared with a noisy one, details are likely lost during memory consolidation and re-consolidation.
Affiliation(s)
- Johanna Carlie
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- Birgitta Sahlén
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- Ketty Andersson
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- Roger Johansson
- Department of Psychology, Lund University, 221 00 Lund, Sweden
- Susanna Whitling
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- K Jonas Brännström
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
5
Suarez-Rivera C, Fletcher KK, Tamis-LeMonda CS. Infants' home auditory environment: Background sounds shape language interactions. Dev Psychol 2024; 60:2274-2289. PMID: 39325385; PMCID: PMC11965805; DOI: 10.1037/dev0001762.
Abstract
Background sounds at home, namely those from television, communication devices, music, appliances, transportation, and construction, can support or impede infant language interactions and learning. Yet real-time connections at home between background sound and infant-caregiver language interactions remain unexamined. We quantified background sounds in the home environment from 1- to 2-hr video recordings of infant-mother everyday activities (infants aged 8-26 months, 36 female) in two samples: European-American, English-speaking, middle-socioeconomic status (SES) families (N = 36) and Latine, Spanish-speaking, low-SES families (N = 40). From the videos, we identified and coded five types of background sound: television/screens, communication devices, music, appliances, and transportation/construction. Exposure to background sounds varied enormously among homes and was stable across a week, with television/screens and music being the dominant types of background sound. Infants' vocalizations and mothers' speech to infants were reduced in the presence of background sound (although effect sizes were small), highlighting real-time processes that affect everyday language exchanges. Over the course of a day, infants in homes with high amounts of background sound may hear and produce less language than infants in homes with fewer background sounds, highlighting potential cascading influences from environmental features to everyday interactions to language learning.
6
Yang S, Saïd M, Peyre H, Ramus F, Taine M, Law EC, Dufourg MN, Heude B, Charles MA, Bernard JY. Associations of screen use with cognitive development in early childhood: the ELFE birth cohort. J Child Psychol Psychiatry 2024; 65:680-693. PMID: 37644361; DOI: 10.1111/jcpp.13887.
Abstract
BACKGROUND The associations of screen use with children's cognition are not well evidenced, and recent, large, longitudinal studies are needed. We aimed to assess the associations between screen use and cognitive development in the French nationwide birth cohort. METHODS Time and context of screen use were reported by parents at ages 2, 3.5, and 5.5 years. Vocabulary, non-verbal reasoning, and general cognitive development were assessed with the MacArthur-Bates Communicative Development Inventory (MB) at age 2, the Picture Similarities subtest from the British Ability Scales (PS) at age 3.5, and the Child Development Inventory (CDI) at ages 3.5 and 5.5. Outcome variables were age-adjusted and standardized (mean = 100, SD = 15). Multiple imputations were performed among children (N = 13,763) with at least one screen use report and at least one cognitive measure. Cross-sectional and longitudinal associations between screen use and cognitive development were assessed by linear regression models adjusted for sociodemographic and birth factors related to the family and children, and for children's lifestyle factors competing with screen use. Baseline cognitive scores were further considered in longitudinal analysis. RESULTS Having the TV on during family meals at age 2, but not screen time, was associated with lower MB scores at age 2 (β [95% CI] = -1.67 [-2.21, -1.13]) and lower CDI scores at age 3.5 (-0.82 [-1.31, -0.33]). In cross-sectional analysis, screen time was negatively associated with CDI scores at ages 3.5 (-0.67 [-0.94, -0.40]) and 5.5 (-0.47 [-0.77, -0.16]) and, in contrast, positively associated with PS scores (0.39 [0.07, 0.71]) at age 3.5. Screen time at age 3.5 years was not associated with CDI scores at age 5.5 years. CONCLUSIONS Our study found weak associations of screen use with cognition after controlling for sociodemographic and children's birth factors and lifestyle confounders, and suggests that the context of screen use, not solely screen time, matters in children's cognitive development.
Affiliation(s)
- Shuai Yang
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Centre de Recherche en Épidémiologie et StatistiqueS (CRESS), Paris, France
- Mélèa Saïd
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Centre de Recherche en Épidémiologie et StatistiqueS (CRESS), Paris, France
- Hugo Peyre
- Laboratoire de Sciences Cognitives et Psycholinguistique (ENS, EHESS, CNRS), Ecole Normale Supérieure, PSL University, Paris, France
- Université Paris-Saclay, UVSQ, Inserm, CESP, Team DevPsy, Villejuif, France
- Centre de Ressources Autisme Languedoc-Roussillon et Centre d'Excellence sur l'Autisme et les Troubles Neuro-développementaux, CHU Montpellier, Montpellier cedex 05, France
- Franck Ramus
- Laboratoire de Sciences Cognitives et Psycholinguistique (ENS, EHESS, CNRS), Ecole Normale Supérieure, PSL University, Paris, France
- Marion Taine
- EPI-PHARE (French National Agency for Medicines and Health Products Safety, ANSM; and French National Health Insurance, CNAM), Saint-Denis, France
- Evelyn C Law
- Singapore Institute for Clinical Sciences (SICS), Agency for Science, Technology and Research (A*STAR), Singapore City, Singapore
- Department of Paediatrics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore City, Singapore
- Department of Paediatrics, Khoo Teck Puat-National University Children's Medical Institute, National University Health System, Singapore City, Singapore
- Barbara Heude
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Centre de Recherche en Épidémiologie et StatistiqueS (CRESS), Paris, France
- Marie-Aline Charles
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Centre de Recherche en Épidémiologie et StatistiqueS (CRESS), Paris, France
- Unité mixte Inserm-Ined-EFS ELFE, Ined, Aubervilliers, France
- Jonathan Y Bernard
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Centre de Recherche en Épidémiologie et StatistiqueS (CRESS), Paris, France
- Singapore Institute for Clinical Sciences (SICS), Agency for Science, Technology and Research (A*STAR), Singapore City, Singapore
7
Hua Z, Hu J, Zeng H, Li J, Cao Y, Gan Y. Auditory language comprehension among children and adolescents with autism spectrum disorder: An ALE meta-analysis of fMRI studies. Autism Res 2024; 17:482-496. PMID: 38031655; DOI: 10.1002/aur.3055.
Abstract
Difficulties in auditory language comprehension are common among children and adolescents with autism spectrum disorder. However, findings regarding the underlying neural mechanisms remain mixed, and few studies have systematically explored the overall patterns of these findings. Therefore, this study aims to systematically review and meta-analyze the functional magnetic resonance imaging evidence on neural activation patterns during auditory language comprehension tasks among children and adolescents with autism. Using activation likelihood estimation, we conducted a series of meta-analyses to investigate neural activation patterns during auditory language comprehension tasks compared with baseline conditions in the autism and non-autism groups separately, and then compared activation patterns between the two groups. Eight studies were included in the within-group analyses, and seven were included in the between-group analysis. The within-group analyses revealed that the bilateral superior temporal gyrus was activated during auditory language comprehension tasks in both groups, whereas the left superior frontal gyrus and dorsal medial prefrontal cortex were activated only in the non-autism group. Furthermore, the between-group analysis showed that children and adolescents with autism, compared with those without autism, showed reduced activation in the right superior temporal gyrus, left middle temporal gyrus, and insula, whereas the autism group did not show increased activation in any region relative to the non-autism group. Overall, these findings contribute to our understanding of the potential neural mechanisms underlying difficulties in auditory language comprehension in children and adolescents with autism and provide practical implications for early screening and language-related interventions.
Affiliation(s)
- Zihui Hua
- School of Psychological and Cognitive Sciences & Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Jun Hu
- School of Psychological and Cognitive Sciences & Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Huanke Zeng
- School of Psychological and Cognitive Sciences & Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Jiahui Li
- School of Psychological and Cognitive Sciences & Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Yibo Cao
- School of Psychological and Cognitive Sciences & Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Yiqun Gan
- School of Psychological and Cognitive Sciences & Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
8
Gordon KR, Grieco-Calub TM. Children build their vocabularies in noisy environments: The necessity of a cross-disciplinary approach to understand word learning. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1671. PMID: 38043926; PMCID: PMC10939936; DOI: 10.1002/wcs.1671.
Abstract
Research within the language sciences has informed our understanding of how children build vocabulary knowledge, especially during early childhood and the early school years. However, to date, our understanding of word learning in children is based primarily on research in quiet laboratory settings. The everyday environments that children inhabit, such as schools, homes, and daycares, are typically noisy. To better understand vocabulary development, we need to understand the effects of background noise on word learning. To gain this understanding, a cross-disciplinary approach between researchers in the language and hearing sciences, in partnership with parents, educators, and clinicians, is ideal. Through this approach we can identify characteristics of effective vocabulary instruction that take into account the background noise present in children's learning environments. Furthermore, we can identify characteristics of children who are likely to struggle with learning words in noisy environments. For example, differences in vocabulary knowledge, verbal working memory abilities, and attention skills will likely influence children's ability to learn words in the presence of background noise. These children require effective interventions to support their vocabulary development, which in turn should support their ability to process and learn language in noisy environments. Overall, this cross-disciplinary approach will inform theories of language development and inform educational and intervention practices designed to support children's vocabulary development. This article is categorized under: Psychology > Language; Psychology > Learning; Psychology > Theory and Methods.
9
Çetinçelik M, Rowland CF, Snijders TM. Ten-month-old infants' neural tracking of naturalistic speech is not facilitated by the speaker's eye gaze. Dev Cogn Neurosci 2023; 64:101297. PMID: 37778275; PMCID: PMC10543766; DOI: 10.1016/j.dcn.2023.101297.
Abstract
Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker's eye gaze on ten-month-old infants' neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants' speech-brain coherence at stress (1-1.75 Hz) and syllable (2.5-3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants' brains tracked the speech rhythm at both the stress and syllable rates, and that infants' neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between the direct and averted gaze conditions, and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by the speaker's gaze.
Affiliation(s)
- Melis Çetinçelik
- Department of Experimental Psychology, Utrecht University, Utrecht, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Caroline F Rowland
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; Cognitive Neuropsychology Department, Tilburg University, Tilburg, the Netherlands
10
Colombani A, Saksida A, Pavani F, Orzan E. Symbolic and deictic gestures as a tool to promote parent-child communication in the context of hearing loss: A systematic review. Int J Pediatr Otorhinolaryngol 2023; 165:111421. PMID: 36669271; DOI: 10.1016/j.ijporl.2022.111421.
Abstract
BACKGROUND Language and communication outcomes in children with congenital sensorineural hearing loss (cSNHL) are highly variable, and some of this variance can be attributed to the quantity and quality of language input. In this paper, we build on the evidence that human language is inherently multimodal and that positive scaffolding of children's linguistic, cognitive, and social-relational development can be supported by Parent Centered Early Interventions (PCEI) to suggest that the use of gestures in these interventions could be a beneficial, yet scarcely explored, approach. AIMS AND METHODS This systematic review examined the literature on PCEI focused on gestures (symbolic and deictic) used to enhance the caregiver-child relationship and infants' language development, in both typically and atypically developing populations. The review was conducted following the PRISMA guidelines for systematic reviews and meta-analyses. Of 246 identified studies, 8 met the PICO inclusion criteria and were eligible for inclusion. Two reviewers screened papers before completing data extraction and risk of bias assessment using the Cochrane RoB 2 tool. RESULTS The included studies measured the effect of implementing symbolic or deictic gestures in daily communication on the relational aspects of mother/parent-child interaction or on language skills in infants. The studies indicate that gesture-oriented PCEI may benefit deprived populations such as atypically developing children, children from low-income families, and children who, for individual reasons, lag behind their peers in communication. CONCLUSIONS Although gesture-oriented PCEI appear to be beneficial in early intervention for atypically developing populations, this approach has so far been scarcely explored directly in the context of hearing loss. Yet, because symbolic gestures are a natural part of early vocabulary acquisition that emerges spontaneously regardless of hearing status, this approach could represent a promising line of intervention for infants with cSNHL, especially those with a poorer head start.
Affiliation(s)
- Arianna Colombani
- Institute for Maternal and Child Health, IRCCS "Burlo Garofolo", Trieste, Italy
- Amanda Saksida
- Institute for Maternal and Child Health, IRCCS "Burlo Garofolo", Trieste, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Centro Interateneo di Ricerca Cognizione, Linguaggio e Sordità (CIRCLeS), University of Trento, Trento, Italy
- Eva Orzan
- Institute for Maternal and Child Health, IRCCS "Burlo Garofolo", Trieste, Italy
11
Lester N, Theakston A, Twomey KE. The role of the museum in promoting language word learning for young children. Infant Child Dev 2023. DOI: 10.1002/icd.2400.
Affiliation(s)
- Nicola Lester
- Division of Psychology, Communication and Human Neuroscience, Development and Hearing, University of Manchester, Manchester, UK
- Anna Theakston
- Division of Psychology, Communication and Human Neuroscience, Development and Hearing, University of Manchester, Manchester, UK
- Katherine E. Twomey
- Division of Psychology, Communication and Human Neuroscience, Development and Hearing, University of Manchester, Manchester, UK
12
Nicastri M, Giallini I, Inguscio BMS, Turchetta R, Guerzoni L, Cuda D, Portanova G, Ruoppolo G, Dincer D'Alessandro H, Mancini P. The influence of auditory selective attention on linguistic outcomes in deaf and hard of hearing children with cochlear implants. Eur Arch Otorhinolaryngol 2023; 280:115-124. [PMID: 35831674 DOI: 10.1007/s00405-022-07463-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Accepted: 05/23/2022] [Indexed: 01/07/2023]
Abstract
PURPOSE Auditory selective attention (ASA) is crucial for focusing on significant auditory stimuli without being distracted by irrelevant auditory signals, and it plays an important role in language development. The present study aimed to investigate the unique contribution of ASA to the linguistic levels achieved by a group of cochlear-implanted (CI) children. METHODS Thirty-four CI children with a median age of 10.05 years were tested using both the "Batteria per la Valutazione dell'Attenzione Uditiva e della Memoria di Lavoro Fonologica nell'età evolutiva-VAUM-ELF" to assess their ASA skills and two Italian standardized tests to measure lexical and morphosyntactic skills. A regression analysis, including demographic and audiological variables, was conducted to assess the unique contribution of ASA to language skills. RESULTS The percentages of CI children with adequate ASA performance ranged from 29.4% to 50%. Bilateral CI children performed better than their monolateral peers. ASA skills contributed significantly to linguistic skills, alone accounting for 25% of the observed variance. CONCLUSIONS The present findings are clinically relevant as they highlight the importance of assessing ASA skills as early as possible, given their important role in language development. Using simple clinical tools, ASA skills could be studied at early developmental stages. This may provide information beyond the outcomes of traditional auditory tests and may allow us to implement specific training programs that could positively contribute to the development of the neural mechanisms of ASA and, consequently, induce improvements in language skills.
Affiliation(s)
- Maria Nicastri: Department of Sense Organs, Sapienza University, Rome, Italy
- Ilaria Giallini: Department of Sense Organs, Sapienza University, Rome, Italy
- Letizia Guerzoni: Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Domenico Cuda: Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Giovanni Ruoppolo: I.R.C.C.S. San Raffaele Pisana, Via Nomentana, 401, 00162, Rome, Italy
13
Borovsky A. Drivers of Lexical Processing and Implications for Early Learning. ANNUAL REVIEW OF DEVELOPMENTAL PSYCHOLOGY 2022; 4:21-40. [PMID: 38846449 PMCID: PMC11156262 DOI: 10.1146/annurev-devpsych-120920-042902] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2024]
Abstract
Understanding words in unfolding speech requires the coordination of many skills to support successful and rapid comprehension of word meanings. This multifaceted ability emerges before our first birthday, matures over a protracted period of development, varies widely between individuals, forecasts future learning outcomes, and is influenced by immediate context, prior knowledge, and lifetime experience. This article highlights drivers of early lexical processing abilities while exploring questions regarding how learners begin to acquire, represent, and activate meaning in language. The review additionally explores how lexical processing and representation are connected, and reflects on how network science approaches can support richly detailed insights into this connection in young learners. Finally, future research avenues are considered that focus on how language processing and other cognitive skills are connected.
Affiliation(s)
- Arielle Borovsky: Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
14
Understanding why infant-directed speech supports learning: A dynamic attention perspective. DEVELOPMENTAL REVIEW 2022. [DOI: 10.1016/j.dr.2022.101047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
15
Ruba AL, Pollak SD, Saffran JR. Acquiring Complex Communicative Systems: Statistical Learning of Language and Emotion. Top Cogn Sci 2022; 14:432-450. [PMID: 35398974 PMCID: PMC9465951 DOI: 10.1111/tops.12612] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2011] [Revised: 03/16/2022] [Accepted: 03/17/2022] [Indexed: 11/30/2022]
Abstract
During the early postnatal years, most infants rapidly learn to understand two naturally evolved communication systems: language and emotion. While these two domains include different types of content knowledge, it is possible that similar learning processes subserve their acquisition. In this review, we compare the learnable statistical regularities in language and emotion input. We then consider how domain-general learning abilities may underlie the acquisition of language and emotion, and how this process may be constrained in each domain. This comparative developmental approach can advance our understanding of how humans learn to communicate with others.
Affiliation(s)
- Ashley L. Ruba: Department of Psychology, University of Wisconsin–Madison
- Seth D. Pollak: Department of Psychology, University of Wisconsin–Madison
16
Kominsky JF, Lucca K, Thomas AJ, Frank MC, Hamlin JK. Simplicity and validity in infant research. COGNITIVE DEVELOPMENT 2022. [DOI: 10.1016/j.cogdev.2022.101213] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
17
Venker CE, Johnson JR. Electronic Toys Decrease the Quantity and Lexical Diversity of Spoken Language Produced by Children With Autism Spectrum Disorder and Age-Matched Children With Typical Development. Front Psychol 2022; 13:929589. [PMID: 35846691 PMCID: PMC9286016 DOI: 10.3389/fpsyg.2022.929589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Accepted: 06/15/2022] [Indexed: 11/13/2022] Open
Abstract
Many young children with autism spectrum disorder (ASD) have language delays. Play-based interactions present a rich, naturalistic context for supporting language and communication development, but electronic toys may compromise the quality of play interactions. This study examined how electronic toys impact the quantity and lexical diversity of spoken language produced by children with ASD and age-matched children with typical development (TD), compared to traditional toys without electronic features. Twenty-eight parent-child dyads (14 per group) played with both electronic and traditional toy sets in a counter-balanced order. We transcribed child speech during both play sessions and derived the number of utterances and number of different word (NDW) roots per minute that children produced. Children with ASD and children with TD talked significantly less and produced significantly fewer unique words during electronic toy play than traditional toy play. In this way, children appear to take a “backseat” to electronic toys, decreasing their communicative contributions to play-based social interactions with their parents. These findings highlight the importance of understanding how toy type can affect parent-child play interactions and the subsequent learning opportunities that may be created. Play-based interventions for children with ASD may be most effective when they incorporate traditional toys, rather than electronic toys.
18
Saksida A, Ghiselli S, Picinali L, Pintonello S, Battelino S, Orzan E. Attention to Speech and Music in Young Children with Bilateral Cochlear Implants: A Pupillometry Study. J Clin Med 2022; 11:1745. [PMID: 35330071 PMCID: PMC8956090 DOI: 10.3390/jcm11061745] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 03/05/2022] [Accepted: 03/16/2022] [Indexed: 12/10/2022] Open
Abstract
Early bilateral cochlear implants (CIs) may enhance attention to speech and reduce cognitive load in noisy environments. However, it is sometimes difficult to measure speech perception and listening effort, especially in very young children. Behavioral measures cannot always be obtained in young/uncooperative children, whereas objective measures are either difficult to assess or do not reliably correlate with behavioral measures. Recent studies have thus explored pupillometry as a possible objective measure. Here, pupillometry is introduced to assess attention to speech and music in noise in very young children with bilateral CIs (N = 14, age: 17-47 months) and in an age-matched group of normally hearing (NH) children (N = 14, age: 22-48 months). The results show that the response to speech was affected by the presence of background noise only in children with CIs, but not in NH children. Conversely, the presence of background noise altered the pupil response to music only in NH children. We conclude that whereas speech and music may receive comparable attention in comparable listening conditions, background noise affects attention to speech and speech processing more in young children with CIs than in NH children. Potential applications of the results in rehabilitation procedures are discussed.
Affiliation(s)
- Amanda Saksida: Institute for Maternal and Child Health—IRCCS “Burlo Garofolo”, 34137 Trieste, Italy
- Sara Ghiselli: Ospedale Guglielmo da Saliceto, 29121 Piacenza, Italy
- Lorenzo Picinali: Dyson School of Design Engineering, Imperial College London, London SW7 2DB, UK
- Sara Pintonello: Institute for Maternal and Child Health—IRCCS “Burlo Garofolo”, 34137 Trieste, Italy
- Saba Battelino: Faculty of Medicine, University of Ljubljana, University Medical Centre Ljubljana, SI-1000 Ljubljana, Slovenia
- Eva Orzan: Institute for Maternal and Child Health—IRCCS “Burlo Garofolo”, 34137 Trieste, Italy
19
Home Auditory Environments of Children With Cochlear Implants and Children With Normal Hearing. Ear Hear 2022; 43:592-604. [PMID: 34582393 PMCID: PMC8881328 DOI: 10.1097/aud.0000000000001124] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Early home auditory environment plays an important role in children's spoken language development and overall well-being. This study explored differences in the home auditory environment experienced by children with cochlear implants (CIs) relative to children with normal hearing (NH). DESIGN Measures of the child's home auditory environment, including adult word count (AWC), conversational turns (CTs), child vocalizations (CVs), television and media (TVN), overlapping sound (OLN), and noise (NON), were gathered using the Language Environment Analysis System. The study included 16 children with CIs (M = 22.06 mo) and 25 children with NH (M = 18.71 mo). Families contributed 1 to 3 daylong recordings quarterly over the course of approximately 1 year. Additional parent and infant characteristics including maternal education, amount of residual hearing, and age at activation were also collected. RESULTS The results showed that whereas CTs and CVs increased with child age for children with NH, they did not change as a function of age for children with CIs; NON was significantly higher for the NH group. No significant group differences were found for the measures of AWC, TVN, or OLN. Moreover, measures of CTs, CVs, TVN, and NON from children with CIs were associated with demographic and child factors, including maternal education, age at CI activation, and amount of residual hearing. CONCLUSIONS These findings suggest that there are similarities and differences in the home auditory environment experienced by children with CIs and children with NH. These findings have implications for early intervention programs to promote spoken language development for children with CIs.
20
Martinot P, Bernard JY, Peyre H, De Agostini M, Forhan A, Charles MA, Plancoulaine S, Heude B. Exposure to screens and children's language development in the EDEN mother-child cohort. Sci Rep 2021; 11:11863. [PMID: 34103551 PMCID: PMC8187440 DOI: 10.1038/s41598-021-90867-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Accepted: 05/10/2021] [Indexed: 11/09/2022] Open
Abstract
Studies in children have reported associations of screen time and background TV exposure with language skills as measured by their parents. However, few large, longitudinal studies have examined language skills assessed by trained psychologists, an approach less prone to social desirability bias. We assessed screen time and exposure to TV during family meals at ages 2, 3 and 5–6 years in 1562 children from the French EDEN cohort. Language skills were evaluated by parents at 2 years (Communicative Development Inventory, CDI) and by trained psychologists at 3 (NEPSY and ELOLA batteries) and 5–6 years (verbal IQ). Cross-sectional and longitudinal associations were assessed by linear regression adjusted for important confounders. Overall, daily screen time was not associated with language scores, except cross-sectionally at age 2 years, where higher CDI scores were observed for intermediate screen time. Exposure to TV during family meals was consistently associated with lower language scores: TV always on (vs never) at age 2 years was associated with lower verbal IQ (−3.2 points [95% CI: −6.0, −0.3]), independent of daily screen time and baseline language score. In conclusion, public health policies should better account for the context of screen watching, not only its amount.
Affiliation(s)
- Pauline Martinot: Centre for Research in Epidemiology and StatisticS (CRESS), Université de Paris, Inserm, INRAE, 16 av Paul Vaillant-Couturier, 75004, Paris, France
- Jonathan Y Bernard: Centre for Research in Epidemiology and StatisticS (CRESS), Université de Paris, Inserm, INRAE, 16 av Paul Vaillant-Couturier, 75004, Paris, France; Singapore Institute for Clinical Sciences (SICS), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore
- Hugo Peyre: Laboratoire de Sciences Cognitives et Psycholinguistique (ENS, EHESS, CNRS), Ecole Normale Supérieure, PSL Research University, Paris, France; Neurodiderot, Inserm UMR 1141, Paris Diderot University, Paris, France; Department of Child and Adolescent Psychiatry, Robert Debré Hospital, APHP, Paris, France
- Maria De Agostini: Centre for Research in Epidemiology and StatisticS (CRESS), Université de Paris, Inserm, INRAE, 16 av Paul Vaillant-Couturier, 75004, Paris, France
- Anne Forhan: Centre for Research in Epidemiology and StatisticS (CRESS), Université de Paris, Inserm, INRAE, 16 av Paul Vaillant-Couturier, 75004, Paris, France
- Marie-Aline Charles: Centre for Research in Epidemiology and StatisticS (CRESS), Université de Paris, Inserm, INRAE, 16 av Paul Vaillant-Couturier, 75004, Paris, France
- Sabine Plancoulaine: Centre for Research in Epidemiology and StatisticS (CRESS), Université de Paris, Inserm, INRAE, 16 av Paul Vaillant-Couturier, 75004, Paris, France
- Barbara Heude: Centre for Research in Epidemiology and StatisticS (CRESS), Université de Paris, Inserm, INRAE, 16 av Paul Vaillant-Couturier, 75004, Paris, France
21
A New Proposal for Phoneme Acquisition: Computing Speaker-Specific Distribution. Brain Sci 2021; 11:brainsci11020177. [PMID: 33535398 PMCID: PMC7911506 DOI: 10.3390/brainsci11020177] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Revised: 01/21/2021] [Accepted: 01/26/2021] [Indexed: 11/17/2022] Open
Abstract
Speech is an acoustically variable signal, and one of the sources of this variation is the presence of multiple speakers. Empirical evidence has suggested that adult listeners possess remarkably sensitive (and systematic) abilities to process speech signals despite speaker variability. These abilities include not only sensitivity to speaker-specific variation, but also the capacity to combine speaker variation with other sources of information for further processing. Recently, many studies have also shown that young children seem to possess a similar capacity. This suggests continuity in the processing of speaker-dependent speech variability, and that this ability could also be important for infants learning their native language. In the present paper, we review evidence on speaker variability and speech processing in adults and in young children, with an emphasis on how they make use of speaker-specific information in word-learning situations. Finally, we build on these findings to make a novel proposal for the use of speaker-specific information processing in phoneme learning in infancy.
22
Busch T, Vermeulen A, Langereis M, Vanpoucke F, van Wieringen A. Cochlear Implant Data Logs Predict Children’s Receptive Vocabulary. Ear Hear 2020; 41:733-746. [DOI: 10.1097/aud.0000000000000818] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
23
Holland AA, Clem MA, Lampson E, Stavinoha PL. Auditory attention late effects in pediatric acute lymphoblastic leukemia. Child Neuropsychol 2020; 26:865-880. [PMID: 32475222 DOI: 10.1080/09297049.2020.1772738] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
This study sought to characterize auditory attention functioning among pediatric acute lymphoblastic leukemia (ALL) survivors treated on a chemotherapy-only protocol, given previous literature suggesting a late impact on sustained visual attention. We hypothesized that deficits in auditory attention would parallel the weaknesses in aspects of visual attention previously reported in this population. Survivors (n = 107, 53 females, M = 12.80 years) completed the Conners Continuous Auditory Test of Attention (CATA). Parents completed the Behavior Assessment System for Children, Second Edition, and reported educational performance and services via a structured questionnaire. Results indicated that several CATA indices associated with sustained auditory attention were significantly worse than normative data, though group means were average. Reflecting individual variability in performance, 50% of the sample performed more than one standard deviation below the mean on at least one CATA variable. Parent report of attention did not differ from normative means for the sample. Parent-report data indicated that 60% of the sample utilized academic support services, with a large proportion of survivors having utilized special education services. Poorer sustained auditory attention was associated with poorer academic outcomes. Greater methotrexate exposure and younger age at diagnosis were risk factors for inattentiveness. No gender differences were identified on direct assessment of auditory attention or parent report of attention, though male gender was associated with poorer educational performance. Findings suggest that auditory attention is an at-risk cognitive domain following treatment for pediatric ALL and that an association exists between auditory attention and school performance in this population.
Affiliation(s)
- Alice Ann Holland: Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Psychiatry, Children's Medical Center Dallas, Dallas, TX, USA
- Matthew A Clem: Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Erin Lampson: Department of Pediatrics, University of Texas Southwestern Medical Center, USA
- Peter L Stavinoha: Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Psychiatry, Children's Medical Center Dallas, Dallas, TX, USA
24
Avivi-Reich M, Roberts MY, Grieco-Calub TM. Quantifying the Effects of Background Speech Babble on Preschool Children's Novel Word Learning in a Multisession Paradigm: A Preliminary Study. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:345-356. [PMID: 31851858 DOI: 10.1044/2019_jslhr-h-19-0083] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose This study tested the effects of background speech babble on novel word learning in preschool children with a multisession paradigm. Method Eight 3-year-old children were exposed to a total of 8 novel word-object pairs across 2 story books presented digitally. Each story contained 4 novel consonant-vowel-consonant nonwords. Children were exposed to both stories, one in quiet and one in the presence of 4-talker babble presented at 0-dB signal-to-noise ratio. After each story, children's learning was tested with a referent selection task and a verbal recall (naming) task. Children were exposed to and tested on the novel word-object pairs on 5 separate days within a 2-week span. Results A significant main effect of session was found for both referent selection and verbal recall. There was also a significant main effect of exposure condition on referent selection performance, with more referents correctly selected for word-object pairs that were presented in quiet compared to pairs presented in speech babble. Finally, children's verbal recall of novel words was statistically better than baseline performance (i.e., 0%) on Sessions 3-5 for words exposed in quiet, but only on Session 5 for words exposed in speech babble. Conclusions These findings suggest that background speech babble at 0-dB signal-to-noise ratio disrupts novel word learning in preschool-age children. As a result, children may need more time and more exposures of a novel word before they can recognize or verbally recall it.
Affiliation(s)
- Meital Avivi-Reich: The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL; Department of Communication Arts, Sciences, and Disorders, Brooklyn College, City University of New York, NY
- Megan Y Roberts: The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Tina M Grieco-Calub: The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
25
Abstract
When referring to objects, adults package words, sentences, and gestures in ways that shape children's learning. Here, to understand how continuity of reference shapes word learning, an adult taught new words to 4-year-old children (N = 120) using either clusters of references to the same object or no sequential references to each object. In three experiments, the adult used a combination of labels and other object references, which provided informative discourse (e.g., This is small and green), neutral discourse (e.g., This is really great), or no verbal discourse. Switching verbal references from one object to another interfered with learning relative to providing clustered references to a particular object, revealing that discontinuity in discourse hinders children's encoding of new words.
26
Godwin KE, Erickson LC, Newman RS. Insights From Crossing Research Silos on Visual and Auditory Attention. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2019; 28:47-52. [PMID: 31217671 DOI: 10.1177/0963721418807725] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Many learning tasks that children encounter necessitate the ability to direct and sustain attention to key aspects of the environment while simultaneously tuning out irrelevant features. This is challenging for at least two reasons: (a) the ability to regulate and sustain attention follows a protracted developmental time course, and (b) children spend much of their time in environments not optimized for learning, as homes and schools are often chaotic, cluttered, and noisy. Research on these issues is often siloed; that is, researchers tend to examine the relationship among attention, distraction, and learning in only the auditory or the visual domain, but not both together. We provide examples in which auditory and visual aspects of learning each have strong implications for the other. Research examining how visual and auditory information can each be distracting would benefit from cross-fertilization. Integrating across research silos informs our understanding of attention and learning, yielding more efficacious guidance for caregivers, educators, developers, and policymakers.
Affiliation(s)
- Lucy C Erickson: Department of Hearing and Speech Sciences, University of Maryland
27
28
Abstract
Children use the presence of familiar objects with known names to identify the correct referents of novel words. In natural environments, objects vary widely in salience. The presence of familiar objects may sometimes hinder rather than help word learning. To test this hypothesis, 3-year-olds (N = 36) were shown novel objects paired with familiar objects that varied in their visual salience. When the novel objects were labeled, children were slower and less accurate at fixating them in the presence of highly salient familiar objects than in the presence of less salient familiar objects. They were also less successful in retaining these word-referent pairings. While familiar objects may facilitate novel word learning in ambiguous situations, the properties of familiar objects matter.
29
Bernier DE, Soderstrom M. Was that my name? Infants' listening in conversational multi-talker backgrounds. JOURNAL OF CHILD LANGUAGE 2018; 45:1439-1449. [PMID: 30012230 DOI: 10.1017/s0305000918000247] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This study tested infants' ability to segregate target speech from a background of ecologically valid multi-talker speech at a 10 dB SNR. Using the Headturn Preference Procedure, 72 English-learning 5-, 9-, and 12-month-old monolinguals were tested on their ability to detect and perceive their own name. At all three ages infants were able to detect the presence of the target speech, but only at 9 months did they show sensitivity to the phonetic details that distinguished their own name from other names. These results extend previous findings on infants' speech perception in noise to more naturalistic forms of background speech.
Affiliation(s)
- Dana E Bernier: Department of Psychology, University of Waterloo, 200 University Avenue West (PAS 3020), Waterloo, Ontario N2L 3G1
- Melanie Soderstrom: Department of Psychology, University of Manitoba, P404 Duff Roblin Bldg, 190 Dysart Rd, Winnipeg, Manitoba R3T 2N2
30
Erickson LC, Newman RS. Influences of background noise on infants and children. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2017; 26:451-457. [PMID: 29375201 DOI: 10.1177/0963721417709087] [Citation(s) in RCA: 69] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The goal of this review is to provide a high-level, selected overview of the consequences of background noise on health, perception, cognition, and learning during early development, with a specific focus on how noise may impair speech comprehension and language learning (e.g., via masking). Although much of the existing literature has focused on adults, research shows that infants and young children are relatively disadvantaged at listening in noise. Consequently, a major goal is to consider how background noise may affect young children, who must learn and develop language in noisy environments despite being simultaneously less equipped to do so.
31
Grieco-Calub TM, Simeon KM, Snyder HE, Lew-Williams C. Word segmentation from noise-band vocoded speech. LANGUAGE, COGNITION AND NEUROSCIENCE 2017; 32:1344-1356. [PMID: 29977950 PMCID: PMC6028043 DOI: 10.1080/23273798.2017.1354129] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Accepted: 07/02/2017] [Indexed: 06/01/2023]
Abstract
Spectral degradation reduces access to the acoustics of spoken language and compromises how learners break into its structure. We hypothesised that spectral degradation disrupts word segmentation, but that listeners can exploit other cues to restore detection of words. Normal-hearing adults were familiarised to artificial speech that was unprocessed or spectrally degraded by noise-band vocoding into 16 or 8 spectral channels. The monotonic speech stream was pause-free (Experiment 1), interspersed with isolated words (Experiment 2), or slowed by 33% (Experiment 3). Participants were tested on segmentation of familiar vs. novel syllable sequences and on recognition of individual syllables. As expected, vocoding hindered both word segmentation and syllable recognition. The addition of isolated words, but not slowed speech, improved segmentation. We conclude that syllable recognition is necessary but not sufficient for successful word segmentation, and that isolated words can facilitate listeners' access to the structure of acoustically degraded speech.
Affiliation(s)
- Tina M. Grieco-Calub: The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Katherine M. Simeon: The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Hillary E. Snyder: The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA