1
Tolkacheva V, Brownsett SLE, McMahon KL, de Zubicaray GI. Perceiving and misperceiving speech: lexical and sublexical processing in the superior temporal lobes. Cereb Cortex 2024; 34:bhae087. PMID: 38494418; PMCID: PMC10944697; DOI: 10.1093/cercor/bhae087.
Abstract
Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. "The little girl was excited to lose her first tooth" vs. "Tha fittle girmn wam expited du roos har derst cooth"). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.
Affiliation(s)
- Valeriya Tolkacheva
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
- Sonia L E Brownsett
- Queensland Aphasia Research Centre, School of Health and Rehabilitation Sciences, University of Queensland, Surgical Treatment and Rehabilitation Services, Herston, Queensland, 4006, Australia
- Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, Health Sciences Building 1, 1 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Katie L McMahon
- Herston Imaging Research Facility, Royal Brisbane & Women’s Hospital, Building 71/918, Herston, Queensland, 4006, Australia
- Queensland University of Technology, School of Clinical Sciences and Centre for Biomedical Technologies, 60 Musk Avenue, Kelvin Grove, Queensland, 4059, Australia
- Greig I de Zubicaray
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
2
Petley L, Blankenship C, Hunter LL, Stewart HJ, Lin L, Moore DR. Amplitude Modulation Perception and Cortical Evoked Potentials in Children With Listening Difficulties and Their Typically Developing Peers. J Speech Lang Hear Res 2024; 67:633-656. PMID: 38241680; PMCID: PMC11000788; DOI: 10.1044/2023_jslhr-23-00317.
Abstract
PURPOSE Amplitude modulations (AMs) are important for speech intelligibility, and deficits in speech intelligibility are a leading source of impairment in childhood listening difficulties (LiD). The present study aimed to explore the relationships between AM perception and speech-in-noise (SiN) comprehension in children and to determine whether deficits in AM processing contribute to childhood LiD. Evoked responses were used to parse the neural origins of AM processing. METHOD Forty-one children with LiD and 44 typically developing children, ages 8-16 years, participated in the study. Behavioral AM depth thresholds were measured at 4 and 40 Hz. SiN tasks included the Listening in Spatialized Noise-Sentences Test (LiSN-S) and a coordinate response measure (CRM)-based task. Evoked responses were obtained during an AM change detection task using alternations between 4 and 40 Hz, including the N1 of the acoustic change complex, auditory steady-state response (ASSR), P300, and a late positive response (late potential [LP]). Maturational effects were explored via age correlations. RESULTS Age correlated with 4-Hz AM thresholds, CRM separated talker scores, and N1 amplitude. Age-normed LiSN-S scores obtained without spatial or talker cues correlated with age-corrected 4-Hz AM thresholds and area under the LP curve. CRM separated talker scores correlated with AM thresholds and area under the LP curve. Most behavioral measures of AM perception correlated with the signal-to-noise ratio and phase coherence of the 40-Hz ASSR. AM change response time also correlated with area under the LP curve. Children with LiD exhibited deficits with respect to 4-Hz thresholds, AM change accuracy, and area under the LP curve. CONCLUSIONS The observed relationships between AM perception and SiN performance extend the evidence that modulation perception is important for understanding SiN in childhood. In line with this finding, children with LiD demonstrated poorer performance on some measures of AM perception, but their evoked responses implicated a primarily cognitive deficit. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25009103.
Affiliation(s)
- Lauren Petley
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Patient Services Research, Cincinnati Children's Hospital Medical Center, OH
- Department of Psychology, Clarkson University, Potsdam, NY
- Chelsea Blankenship
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Patient Services Research, Cincinnati Children's Hospital Medical Center, OH
- Lisa L. Hunter
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Patient Services Research, Cincinnati Children's Hospital Medical Center, OH
- Department of Otolaryngology, College of Medicine, University of Cincinnati, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
- Li Lin
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Patient Services Research, Cincinnati Children's Hospital Medical Center, OH
- David R. Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Patient Services Research, Cincinnati Children's Hospital Medical Center, OH
- Department of Otolaryngology, College of Medicine, University of Cincinnati, OH
- Manchester Centre for Audiology and Deafness, The University of Manchester, United Kingdom
3
Petley L, Blankenship C, Hunter LL, Stewart HJ, Lin L, Moore DR. Amplitude modulation perception and cortical evoked potentials in children with listening difficulties and their typically-developing peers. medRxiv [Preprint] 2023:2023.10.26.23297523. PMID: 37961469; PMCID: PMC10635202; DOI: 10.1101/2023.10.26.23297523.
Abstract
Purpose Amplitude modulations (AM) are important for speech intelligibility, and deficits in speech intelligibility are a leading source of impairment in childhood listening difficulties (LiD). The present study aimed to explore the relationships between AM perception and speech-in-noise (SiN) comprehension in children and to determine whether deficits in AM processing contribute to childhood LiD. Evoked responses were used to parse the neural origin of AM processing. Method Forty-one children with LiD and forty-four typically-developing children, ages 8-16 years, participated in the study. Behavioral AM depth thresholds were measured at 4 and 40 Hz. SiN tasks included the LiSN-S and a Coordinate Response Measure (CRM)-based task. Evoked responses were obtained during an AM Change detection task using alternations between 4 and 40 Hz, including the N1 of the acoustic change complex, auditory steady-state response (ASSR), P300, and a late positive response (LP). Maturational effects were explored via age correlations. Results Age correlated with 4 Hz AM thresholds, CRM Separated Talker scores, and N1 amplitude. Age-normed LiSN-S scores obtained without spatial or talker cues correlated with age-corrected 4 Hz AM thresholds and area under the LP curve. CRM Separated Talker scores correlated with AM thresholds and area under the LP curve. Most behavioral measures of AM perception correlated with the SNR and phase coherence of the 40 Hz ASSR. AM Change RT also correlated with area under the LP curve. Children with LiD exhibited deficits with respect to 4 Hz thresholds, AM Change accuracy, and area under the LP curve. Conclusions The observed relationships between AM perception and SiN performance extend the evidence that modulation perception is important for understanding SiN in childhood. In line with this finding, children with LiD demonstrated poorer performance on some measures of AM perception, but their evoked responses implicated a primarily cognitive deficit.
4
Van Hirtum T, Somers B, Verschueren E, Dieudonné B, Francart T. Delta-band neural envelope tracking predicts speech intelligibility in noise in preschoolers. Hear Res 2023; 434:108785. PMID: 37172414; DOI: 10.1016/j.heares.2023.108785.
Abstract
Behavioral tests are currently the gold standard for measuring speech intelligibility. However, these tests can be difficult to administer in young children due to factors such as motivation, linguistic knowledge, and cognitive skills. It has been shown that measures of neural envelope tracking can be used to predict speech intelligibility and overcome these issues. However, their potential as an objective measure of speech intelligibility in noise remains to be investigated in preschool children. Here, we evaluated neural envelope tracking as a function of signal-to-noise ratio (SNR) in 14 five-year-old children. We examined EEG responses to natural, continuous speech presented at different SNRs ranging from -8 dB (very difficult) to 8 dB (very easy). As expected, delta-band (0.5-4 Hz) tracking increased with increasing stimulus SNR. However, this increase was not strictly monotonic, as neural tracking reached a plateau between 0 and 4 dB SNR, mirroring the behavioral speech intelligibility outcomes. These findings indicate that neural tracking in the delta band remains stable as long as the acoustical degradation of the speech signal does not produce significant changes in speech intelligibility. Theta-band tracking (4-8 Hz), on the other hand, was drastically reduced and more easily affected by noise in children, making it less reliable as a measure of speech intelligibility. By contrast, neural envelope tracking in the delta band was directly associated with behavioral measures of speech intelligibility. This suggests that delta-band neural envelope tracking is a valuable tool for evaluating speech-in-noise intelligibility in preschoolers, highlighting its potential as an objective measure for difficult-to-test populations.
Affiliation(s)
- Tilde Van Hirtum
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Ben Somers
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Eline Verschueren
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Benjamin Dieudonné
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Tom Francart
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
5
Ji H, Yu X, Xiao Z, Zhu H, Liu P, Lin H, Chen R, Hong Q. Features of Cognitive Ability and Central Auditory Processing of Preschool Children With Minimal and Mild Hearing Loss. J Speech Lang Hear Res 2023; 66:1867-1888. PMID: 37116308; DOI: 10.1044/2023_jslhr-22-00395.
Abstract
OBJECTIVE This study aimed to investigate the current status of cognitive development and central auditory processing development in preschool children with minimal and mild hearing loss (MMHL) in Nanjing, China. METHOD We recruited 34 children with MMHL and 45 children with normal hearing (NH). They completed a series of tests, including cognitive tests (i.e., Wechsler Preschool and Primary Scale of Intelligence and Continuous Performance Test), behavioral auditory tests (speech-in-noise [SIN] test and frequency pattern test), and objective electrophysiological audiometry (speech-evoked auditory brainstem response and cortical auditory evoked potential). In addition, teacher evaluations, demographic information, and questionnaires completed by parents were collected. RESULTS Regarding cognitive ability, statistical differences in the verbal comprehension index, full-scale intelligence quotient, and abnormal rate of attention test scores were found between the MMHL and NH groups. The children with MMHL performed more poorly on the SIN test than the children with NH. Regarding the auditory electrophysiology of the two groups, the latency and amplitude of some waves of the speech-evoked auditory brainstem response and cortical auditory evoked potential differed statistically between the groups. We also explored the relationship between key indicators of auditory processing and key indicators of cognitive development. CONCLUSIONS Children with MMHL are already at increased developmental risk as early as preschool. They are more likely than children with NH to have problems with attention and verbal comprehension, and this disadvantage is not compensated for by increasing age during the preschool years. The results suggest a possible relationship between the risk of cognitive deficit and divergent auditory processing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.22670473.
Affiliation(s)
- Hui Ji
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
- Xinyue Yu
- School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Zhenglu Xiao
- School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Huiqin Zhu
- School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Panting Liu
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
- Huanxi Lin
- School of Nursing, Nanjing Medical University, Jiangsu, China
- Renjie Chen
- The Second Affiliated Hospital of Nanjing Medical University, Jiangsu, China
- Qin Hong
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
6
Leist L, Breuer C, Yadav M, Fremerey S, Fels J, Raake A, Lachmann T, Schlittmeier SJ, Klatte M. Differential Effects of Task-Irrelevant Monaural and Binaural Classroom Scenarios on Children's and Adults' Speech Perception, Listening Comprehension, and Visual-Verbal Short-Term Memory. Int J Environ Res Public Health 2022; 19:15998. PMID: 36498071; PMCID: PMC9738007; DOI: 10.3390/ijerph192315998.
Abstract
Most studies investigating the effects of environmental noise on children's cognitive performance examine the impact of monaural noise (i.e., same signal to both ears), oversimplifying multiple aspects of binaural hearing (i.e., adequately reproducing interaural differences and spatial information). In the current study, the effects of a realistic classroom-noise scenario presented either monaurally or binaurally on tasks requiring processing of auditory and visually presented information were analyzed in children and adults. In Experiment 1, across age groups, word identification was more impaired by monaural than by binaural classroom noise, whereas listening comprehension (acting out oral instructions) was equally impaired in both noise conditions. In both tasks, children were more affected than adults. Disturbance ratings were unrelated to the actual performance decrements. Experiment 2 revealed detrimental effects of classroom noise on short-term memory (serial recall of words presented pictorially), which did not differ with age or presentation mode (monaural vs. binaural). The present results add to the evidence for detrimental effects of noise on speech perception and cognitive performance, and their interactions with age, using a realistic classroom-noise scenario. Binaural simulations of real-world auditory environments can improve the external validity of studies on the impact of noise on children's and adults' learning.
Affiliation(s)
- Larissa Leist
- Cognitive and Developmental Psychology Unit, Center for Cognitive Science, Department of Cognitive Psychology, University of Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
- Carolin Breuer
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Manuj Yadav
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Stephan Fremerey
- Audiovisual Technology Group, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Janina Fels
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Alexander Raake
- Audiovisual Technology Group, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Thomas Lachmann
- Cognitive and Developmental Psychology Unit, Center for Cognitive Science, Department of Cognitive Psychology, University of Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
- Centro de Investigación Nebrija en Cognición, Facultad de Lenguas y Educacion, Universidad Nebrija, 28015 Madrid, Spain
- Sabine J. Schlittmeier
- Teaching and Research Area of Work and Engineering Psychology, RWTH Aachen University, 52066 Aachen, Germany
- Maria Klatte
- Cognitive and Developmental Psychology Unit, Center for Cognitive Science, Department of Cognitive Psychology, University of Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
7
Buss E, Felder J, Miller MK, Leibold LJ, Calandruccio L. Can Closed-Set Word Recognition Differentially Assess Vowel and Consonant Perception for School-Age Children With and Without Hearing Loss? J Speech Lang Hear Res 2022; 65:3934-3950. PMID: 36194777; PMCID: PMC9927623; DOI: 10.1044/2022_jslhr-20-00749.
Abstract
PURPOSE Vowels and consonants play different roles in language acquisition and speech recognition, yet standard clinical tests do not assess vowel and consonant perception separately. As a result, opportunities for targeted intervention may be lost. This study evaluated closed-set word recognition tests designed to rely predominantly on either vowel or consonant perception and compared results with sentence recognition scores. METHOD Participants were children (5-17 years of age) and adults (18-38 years of age) with normal hearing and children with sensorineural hearing loss (7-17 years of age). Speech reception thresholds (SRTs) were measured in speech-shaped noise. Children with hearing loss were tested with their hearing aids. Word recognition was evaluated using a three-alternative forced-choice procedure, with a picture-pointing response; monosyllabic target words varied with respect to either consonant or vowel content. Sentence recognition was evaluated for low- and high-probability sentences. In a subset of conditions, stimuli were low-pass filtered to simulate a steeply sloping hearing loss in participants with normal hearing. RESULTS Children's SRTs improved with increasing age for words and sentences. Low-pass filtering had a larger effect for consonant-variable words than vowel-variable words for both children and adults with normal hearing, consistent with the greater high-frequency content of consonants. Children with hearing loss tested with hearing aids tended to perform more poorly than age-matched children with normal hearing, particularly for sentence recognition, but consonant- and vowel-variable word recognition did not appear to be differentially affected by the amount of high- and low-frequency hearing loss. CONCLUSIONS Closed-set recognition of consonant- and vowel-variable words appeared to differentially evaluate vowel and consonant perception but did not vary by configuration of hearing loss in this group of pediatric hearing aid users. Word scores obtained in this manner do not fully characterize the auditory abilities necessary for open-set sentence recognition, but they do provide a general estimate.
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Margaret K. Miller
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Lori J. Leibold
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
8
Schwarz J, Li KK, Sim JH, Zhang Y, Buchanan-Worster E, Post B, Gibson JL, McDougall K. Semantic Cues Modulate Children’s and Adults’ Processing of Audio-Visual Face Mask Speech. Front Psychol 2022; 13:879156. PMID: 35928422; PMCID: PMC9343587; DOI: 10.3389/fpsyg.2022.879156.
Abstract
During the COVID-19 pandemic, questions have been raised about the impact of face masks on communication in classroom settings. However, it is unclear to what extent visual obstruction of the speaker’s mouth or changes to the acoustic signal lead to speech processing difficulties, and whether these effects can be mitigated by semantic predictability, i.e., the availability of contextual information. The present study investigated the acoustic and visual effects of face masks on speech intelligibility and processing speed under varying semantic predictability. Twenty-six children (aged 8-12) and twenty-six adults performed an internet-based cued shadowing task, in which they had to repeat aloud the last word of sentences presented in audio-visual format. The results showed that children and adults made more mistakes and responded more slowly when listening to face mask speech compared to speech produced without a face mask. Adults were only significantly affected by face mask speech when both the acoustic and the visual signal were degraded. While acoustic mask effects were similar for children, removal of visual speech cues through the face mask affected children to a lesser degree. However, high semantic predictability reduced audio-visual mask effects, leading to full compensation of the acoustically degraded mask speech in the adult group. Even though children did not fully compensate for face mask speech with high semantic predictability, overall, they still profited from semantic cues in all conditions. Therefore, in classroom settings, strategies that increase contextual information such as building on students’ prior knowledge, using keywords, and providing visual aids, are likely to help overcome any adverse face mask effects.
Affiliation(s)
- Julia Schwarz
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Correspondence: Julia Schwarz
- Katrina Kechun Li
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Correspondence: Katrina Kechun Li
- Jasper Hong Sim
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Yixin Zhang
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Elizabeth Buchanan-Worster
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Brechtje Post
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Kirsty McDougall
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
9
van Wieringen A, Wouters J. Lilliput: speech perception in speech-weighted noise and in quiet in young children. Int J Audiol 2022:1-9. PMID: 35732012; DOI: 10.1080/14992027.2022.2086491.
Abstract
OBJECTIVE The aim of this study was to develop an open-set word recognition task in speech-weighted noise and in quiet for young children and to examine age effects for open versus closed response formats. DESIGN Dutch monosyllabic words were presented in quiet and in stationary speech-weighted noise to 4- and 5-year-old children as well as to young adults in an open-set response format. Additionally, performance in open and closed contexts was assessed, as well as in a picture-pointing paradigm. STUDY SAMPLE More than 200 children and 50 adults with normal hearing participated in the various validation phases. RESULTS Average fitted speech reception thresholds (50%) yielded an age effect between 4- and 5-year-olds (and adults), both in speech-weighted noise and in quiet. The closed-set format yielded lower (better) SNRs than the open-set format, and children benefitted to the same extent as adults from phonetically similar words in speech-weighted noise. Additionally, the four-alternative forced-choice picture-pointing paradigm can be used to assess word recognition in quiet from 3 years of age. CONCLUSIONS The same materials reveal performance differences between 4 and 5 years of age (and adults), both in quiet and in speech-weighted noise, using an open-set response format. This relatively small yet significant difference in SRT over a gap of only 1 year shows a developmental change for word recognition in speech-weighted noise and in quiet in the first decade of life. The study is part of the protocol registered on ClinicalTrials.gov (ID = NCT04063748).
Affiliation(s)
- Astrid van Wieringen
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium
- Jan Wouters
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium
10
Schiller IS, Remacle A, Durieux N, Morsomme D. Effects of Noise and a Speaker's Impaired Voice Quality on Spoken Language Processing in School-Aged Children: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2022; 65:169-199. PMID: 34902257; DOI: 10.1044/2021_jslhr-21-00183.
Abstract
PURPOSE Background noise and voice problems among teachers can degrade listening conditions in classrooms. The aim of this literature review is to understand how these acoustic degradations affect spoken language processing in 6- to 18-year-old children. METHOD In a narrative report and meta-analysis, we systematically review studies that examined the effects of noise and/or impaired voice on children's response accuracy and response time (RT) in listening tasks. We propose the Speech Processing under Acoustic DEgradations (SPADE) framework to classify relevant findings according to three processing dimensions-speech perception, listening comprehension, and auditory working memory-and highlight potential moderators. RESULTS Thirty-one studies are included in this systematic review. Our meta-analysis shows that noise can impede children's accuracy in listening tasks across all processing dimensions (Cohen's d between -0.67 and -2.65, depending on signal-to-noise ratio) and that impaired voice lowers children's accuracy in listening comprehension tasks (d = -0.35). A handful of studies assessed RT, but results are inconclusive. The impact of noise and impaired voice can be moderated by listener, task, environmental, and exposure factors. The interaction between noise and impaired voice remains underinvestigated. CONCLUSIONS Overall, this review suggests that children have more trouble perceiving speech, processing verbal messages, and recalling verbal information when listening to speech in noise or to a speaker with dysphonia. Impoverished speech input could impede pupils' motivation and academic performance at school. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.17139377.
Affiliation(s)
- Isabel S Schiller
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
- Teaching and Research Area Work and Engineering Psychology, Institute of Psychology, RWTH Aachen University, Germany
- Angélique Remacle
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
- Center For Research in Cognition and Neurosciences, Faculty of Psychological Science and Education, Université Libre de Bruxelles, Belgium
- Nancy Durieux
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
- Dominique Morsomme
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
|
11
|
Wolmarans J, De Sousa KC, Frisby C, Mahomed-Asmail F, Smits C, Moore DR, Swanepoel DW. Speech Recognition in Noise Using Binaural Diotic and Antiphasic Digits-in-Noise in Children: Maturation and Self-Test Validity. J Am Acad Audiol 2021; 32:315-323. [PMID: 34375996 DOI: 10.1055/s-0041-1727274] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
BACKGROUND Digits-in-noise (DIN) tests have become popular for hearing screening over the past 15 years. Several recent studies have highlighted the potential utility of DIN as a hearing test for school-aged children. However, age may influence test performance in children due to maturation. In addition, a new antiphasic stimulus paradigm has been introduced, allowing binaural intelligibility level difference (BILD) to be measured by using a combination of conventional diotic and antiphasic DIN. PURPOSE This study determined age-specific normative data for diotic and antiphasic DIN, and a derived measure, BILD, in children. A secondary aim was to evaluate the validity of DIN as a smartphone self-test in a subgroup of young children. RESEARCH DESIGN A cross-sectional, quantitative design was used. Participants with confirmed normal audiometric hearing were tested with a diotic and antiphasic DIN. During the test, arrangements of three spoken digits were presented in noise via headphones at varying signal-to-noise ratio (SNR). Researchers entered each three-digit spoken sequence repeated by the participant on a smartphone keypad. STUDY SAMPLE Overall, 621 (428 male and 193 female) normal-hearing children (bilateral pure tone threshold of ≤ 20 dB hearing level at 1, 2, and 4 kHz) aged 6 to 13 years were recruited. A subgroup of 7-year-olds (n = 30), complying with the same selection criteria, was selected to determine the validity of self-testing. DATA COLLECTION AND ANALYSIS DIN testing was completed via headphones coupled to a smartphone. Diotic and antiphasic DIN speech recognition thresholds (SRTs) were analyzed and compared for each age group. BILD was calculated through subtraction of antiphasic from diotic SRTs. Multiple linear regressions were run to determine the effect of age on SRT and BILD. In addition, piecewise linear regressions were fit across different age groups.
Wilcoxon signed-rank tests were used to determine differences between self- and facilitated tests. RESULTS Age was a significant predictor of both diotic and antiphasic DIN SRTs (p < 0.05). SRTs improved by 0.15 dB and 0.35 dB SNR per year for diotic and antiphasic SRTs, respectively. However, age effects were only significant up to 10 and 12 years for antiphasic and diotic SRTs, respectively. Age significantly (p < 0.001) predicted BILD, which increased by 0.18 dB per year. A small SRT advantage for facilitated over self-testing was seen but was not significant (p > 0.05). CONCLUSIONS Increasing age was significantly associated with improved SRT and BILD using diotic and antiphasic DINs. DIN could be used as a smartphone self-test in young children from 7 years of age, with appropriate quality control measures to avoid potential false positives.
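The BILD computation described in the abstract (antiphasic SRT subtracted from diotic SRT) is a one-line calculation. A minimal sketch follows; the SRT values are hypothetical, not data from the study:

```python
def bild(diotic_srt_db, antiphasic_srt_db):
    """Binaural intelligibility level difference (dB): antiphasic SRT
    subtracted from diotic SRT. Antiphasic SRTs are typically lower
    (better), so BILD comes out positive."""
    return diotic_srt_db - antiphasic_srt_db

# Hypothetical SRTs (dB SNR) for one listener:
print(bild(-9.5, -16.0))  # → 6.5
```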
Affiliation(s)
- Jenique Wolmarans: Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, Gauteng, South Africa
- Karina C De Sousa: Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, Gauteng, South Africa
- Caitlin Frisby: Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, Gauteng, South Africa
- Faheema Mahomed-Asmail: Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, Gauteng, South Africa
- Cas Smits: Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan, Amsterdam, The Netherlands
- David R Moore: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center and University of Cincinnati, Cincinnati, Ohio; Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
- De Wet Swanepoel: Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, Gauteng, South Africa; Ear Science Institute Australia, Subiaco, Western Australia, Australia
|
12
|
Behavioural performance and self-report measures in children with unilateral hearing loss due to congenital aural atresia. Auris Nasus Larynx 2021; 48:65-74. [DOI: 10.1016/j.anl.2020.07.008] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 06/28/2020] [Accepted: 07/13/2020] [Indexed: 11/20/2022]
|
13
|
Chandni J, Vipin Ghosh PG, Chetak KB, Aishwarya L. Maturation of speech perception in noise abilities during adolescence. Int J Pediatr Otorhinolaryngol 2020; 139:110459. [PMID: 33099190 DOI: 10.1016/j.ijporl.2020.110459] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 10/13/2020] [Accepted: 10/13/2020] [Indexed: 11/16/2022]
Abstract
BACKGROUND Speech perception encompasses the perception of spectro-temporal cues. These cues include the temporal envelope, temporal fine structure, and spectral shape of the signal. Extraction of these cues is essential for speech perception and, most importantly, for perceiving speech in the presence of noise (SPIN). Speech perception in noise scores improve with age in children and are crucial in their routine communication, including classroom learning. Though it is established that speech perception in noise improves with age in children, the age at which SPIN scores become adult-like and the differences in the maturation pattern between the ears remain unclear. The present study aimed to assess and understand the maturation pattern of speech perception in noise abilities during adolescence. METHOD The study included 146 participants who were divided into six cross-sectional age groups. Participants were children aged 10-15 years and adults aged 18-19 years. SPIN was assessed in the right and left ears for each of these subgroups, and the scores were compared across subgroups for both ears. RESULTS Results demonstrated that SPIN scores in the right ear were mature by the age of 10 years and were comparable with right ear SPIN scores in adults. Pairwise comparison using Bonferroni corrections for multiple comparisons of left ear SPIN scores revealed that SPIN scores in the left ear become adult-like between 13 and 14 years of age. DISCUSSION Findings of the current study can be attributed to morphological changes and differences in the developmental changes across different regions of the cortex.
Affiliation(s)
- K B Chetak: Assistant Professor, Department of Pediatrics, JSS Medical College, Mysuru, India
- Lakshmi Aishwarya: Audiologist at Amplifon (India) Private Limited, Canada Corner, Nashik, India
|
14
|
McCreery RW, Miller MK, Buss E, Leibold LJ. Cognitive and Linguistic Contributions to Masked Speech Recognition in Children. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:3525-3538. [PMID: 32881629 PMCID: PMC8060059 DOI: 10.1044/2020_jslhr-20-00030] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Revised: 06/08/2020] [Accepted: 06/28/2020] [Indexed: 05/31/2023]
Abstract
Purpose The goal of this study was to examine the effects of cognitive and linguistic skills on masked speech recognition for children with normal hearing in three different masking conditions: (a) speech-shaped noise (SSN), (b) amplitude-modulated SSN (AMSSN), and (c) two-talker speech (TTS). We hypothesized that children with better working memory and language skills would have better masked speech recognition than peers with poorer skills in these areas. Selective attention was predicted to affect performance in the TTS masker due to increased cognitive demands from informational masking. Method A group of 60 children in two age groups (5- to 6-year-olds and 9- to 10-year-olds) with normal hearing completed sentence recognition in SSN, AMSSN, and TTS masker conditions. Speech recognition thresholds for 50% correct were measured. Children also completed standardized measures of language, memory, and executive function. Results Children's speech recognition was poorer in the TTS relative to the SSN and AMSSN maskers. Older children had lower speech recognition thresholds than younger children for all masker conditions. Greater language abilities were associated with better sentence recognition for the younger children in all masker conditions, but there was no effect of language for older children. Better working memory and selective attention skills were associated with better masked sentence recognition for both age groups, but only in the TTS masker condition. Conclusions The decreasing influence of vocabulary on masked speech recognition for older children supports the idea that this relationship depends on an interaction between the language level of the stimuli and the listener's vocabulary. Increased cognitive demands associated with perceptually isolating the target talker and two competing masker talkers with a TTS masker may result in the recruitment of working memory and selective attention skills, effects that were not observed in SSN or AMSSN maskers. 
Future research should evaluate these effects across a broader range of stimuli or with children who have hearing loss.
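The 50%-correct speech recognition thresholds used in studies like this one are commonly estimated with an adaptive track. The sketch below shows a generic 1-down/1-up staircase, which converges near the 50% point; it is an illustrative stand-in, not the study's actual protocol, and the deterministic "listener", step size, and trial count are all assumptions:

```python
def run_staircase(respond, start_snr=0.0, step=2.0, n_trials=20):
    """1-down/1-up adaptive track: lower the SNR after a correct response,
    raise it after an error. The track oscillates around the 50%-correct
    point, so averaging the final levels estimates the SRT."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond(snr) else step
    return track

# Deterministic stand-in for a listener whose 50% point sits at -6 dB SNR:
track = run_staircase(lambda snr: snr >= -6.0)
srt_estimate = sum(track[-8:]) / 8  # averages the oscillation around threshold
print(srt_estimate)  # → -7.0
```

With this deterministic responder the track settles into a -6/-8 dB oscillation, so the averaged estimate lands at their midpoint; with a real (probabilistic) listener the same averaging smooths trial-to-trial variability.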
Affiliation(s)
- Ryan W. McCreery: Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Margaret K. Miller: Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Lori J. Leibold: Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
|
15
|
Papesh MA, Stefl AA, Gallun FJ, Billings CJ. Effects of Signal Type and Noise Background on Auditory Evoked Potential N1, P2, and P3 Measurements in Blast-Exposed Veterans. Ear Hear 2020; 42:106-121. [PMID: 32520849 DOI: 10.1097/aud.0000000000000906] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Veterans who have been exposed to high-intensity blast waves frequently report persistent auditory difficulties such as problems with speech-in-noise (SIN) understanding, even when hearing sensitivity remains normal. However, these subjective reports have proven challenging to corroborate objectively. Here, we sought to determine whether use of complex stimuli and challenging signal contrasts in auditory evoked potential (AEP) paradigms rather than traditional use of simple stimuli and easy signal contrasts improved the ability of these measures to (1) distinguish between blast-exposed Veterans with auditory complaints and neurologically normal control participants, and (2) predict behavioral measures of SIN perception. DESIGN A total of 33 adults (aged 19-56 years) took part in this study, including 17 Veterans exposed to high-intensity blast waves within the past 10 years and 16 neurologically normal control participants matched for age and hearing status with the Veteran participants. All participants completed the following test measures: (1) a questionnaire probing perceived hearing abilities; (2) behavioral measures of SIN understanding including the BKB-SIN, the AzBio presented at 0 and +5 dB signal-to-noise ratios (SNRs), and a word-level consonant-vowel-consonant test presented at +5 dB SNR; and (3) electrophysiological tasks involving oddball paradigms in response to simple tones (500 Hz standard, 1000 Hz deviant) and complex speech syllables (/ba/ standard, /da/ deviant) presented in quiet and in four-talker speech babble at an SNR of +5 dB. RESULTS Blast-exposed Veterans reported significantly greater auditory difficulties compared to control participants. Behavioral performance on tests of SIN perception was generally, but not significantly, poorer in the blast-exposed group.
Latencies of P3 responses to tone signals were significantly longer among blast-exposed participants compared to control participants regardless of background condition, though responses to speech signals were similar across groups. For cortical AEPs, no significant interactions were found between group membership and either stimulus type or background. P3 amplitudes measured in response to signals in background babble accounted for 30.9% of the variance in subjective auditory reports. Behavioral SIN performance was best predicted by a combination of N1 and P2 responses to signals in quiet which accounted for 69.6% and 57.4% of the variance on the AzBio at 0 dB SNR and the BKB-SIN, respectively. CONCLUSIONS Although blast-exposed participants reported far more auditory difficulties compared to controls, use of complex stimuli and challenging signal contrasts in cortical and cognitive AEP measures failed to reveal larger group differences than responses to simple stimuli and easy signal contrasts. Despite this, only P3 responses to signals presented in background babble were predictive of subjective auditory complaints. In contrast, cortical N1 and P2 responses were predictive of behavioral SIN performance but not subjective auditory complaints, and use of challenging background babble generally did not improve performance predictions. These results suggest that challenging stimulus protocols are more likely to tap into perceived auditory deficits, but may not be beneficial for predicting performance on clinical measures of SIN understanding. Finally, these results should be interpreted with caution since blast-exposed participants did not perform significantly poorer on tests of SIN perception.
Affiliation(s)
- Melissa A Papesh: National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA; Department of Otolaryngology Head and Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
- Alyssa A Stefl: National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA
- Frederick J Gallun: National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA; Department of Otolaryngology Head and Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA; Department of Neurology, Oregon Health & Science University, Portland, Oregon, USA
- Curtis J Billings: National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA; Department of Otolaryngology Head and Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
|
16
|
Children With Normal Hearing Are Efficient Users of Fundamental Frequency and Vocal Tract Length Cues for Voice Discrimination. Ear Hear 2020; 41:182-193. [DOI: 10.1097/aud.0000000000000743] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
17
|
Torkildsen JVK, Hitchins A, Myhrum M, Wie OB. Speech-in-Noise Perception in Children With Cochlear Implants, Hearing Aids, Developmental Language Disorder and Typical Development: The Effects of Linguistic and Cognitive Abilities. Front Psychol 2019; 10:2530. [PMID: 31803095 PMCID: PMC6877734 DOI: 10.3389/fpsyg.2019.02530] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 10/25/2019] [Indexed: 12/03/2022] Open
Abstract
Children with hearing loss, and those with language disorders, can have excellent speech recognition in quiet, but still experience unique challenges when listening to speech in noisy environments. However, little is known about how speech-in-noise (SiN) perception relates to individual differences in cognitive and linguistic abilities in these children. The present study used the Norwegian version of the Hearing in Noise Test (HINT) to investigate SiN perception in 175 children aged 5.5–12.9 years, including children with cochlear implants (CI, n = 64), hearing aids (HA, n = 37), developmental language disorder (DLD, n = 16) and typical development (TD, n = 58). Further, the study examined whether general language ability, verbal memory span, non-verbal IQ and speech perception of monosyllables and sentences in quiet were predictors of performance on the HINT. To allow comparisons across ages, scores derived from age-based norms were used for the HINT and the tests of language and cognition. There were significant differences in SiN perception between all the groups except between the HA and DLD groups, with the CI group requiring the highest signal-to-noise ratios (i.e., poorest performance) and the TD group requiring the lowest signal-to-noise ratios. For the full sample, language ability explained significant variance in HINT performance beyond speech perception in quiet. Follow-up analyses for the separate groups revealed that language ability was a significant predictor of HINT performance for children with CI, HA, and DLD, but not for children with TD. Memory span and IQ did not predict variance in SiN perception when language ability and speech perception in quiet were taken into account. The finding of a robust relation between SiN perception and general language skills in all three clinical groups calls for further investigation into the mechanisms that underlie this association.
Affiliation(s)
- Janne von Koss Torkildsen: Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Oslo, Norway
- Abigail Hitchins: Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Oslo, Norway; Auditory Verbal UK, Oxon, United Kingdom
- Marte Myhrum: Division of Head, Neck and Reconstructive Surgery, Department of Otorhinolaryngology and Head and Neck Surgery, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
- Ona Bø Wie: Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Oslo, Norway; Division of Head, Neck and Reconstructive Surgery, Department of Otorhinolaryngology and Head and Neck Surgery, Oslo University Hospital, Oslo, Norway
|
18
|
Reducing Simulated Channel Interaction Reveals Differences in Phoneme Identification Between Children and Adults With Normal Hearing. Ear Hear 2019; 40:295-311. [PMID: 29927780 DOI: 10.1097/aud.0000000000000615] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Channel interaction, the stimulation of overlapping populations of auditory neurons by distinct cochlear implant (CI) channels, likely limits the speech perception performance of CI users. This study examined the role of vocoder-simulated channel interaction in the ability of children with normal hearing (cNH) and adults with normal hearing (aNH) to recognize spectrally degraded speech. The primary aim was to determine the interaction between number of processing channels and degree of simulated channel interaction on phoneme identification performance as a function of age for cNH and to relate those findings to aNH and to CI users. DESIGN Medial vowel and consonant identification of cNH (age 8-17 years) and young aNH were assessed under six (for children) or nine (for adults) different conditions of spectral degradation. Stimuli were processed using a noise-band vocoder with 8, 12, and 15 channels and synthesis filter slopes of 15 (aNH only), 30, and 60 dB/octave (all NH subjects). Steeper filter slopes (larger numbers) simulated less electrical current spread and, therefore, less channel interaction. Spectrally degraded performance of the NH listeners was also compared with the unprocessed phoneme identification of school-aged children and adults with CIs. RESULTS Spectrally degraded phoneme identification improved as a function of age for cNH. For vowel recognition, cNH exhibited an interaction between the number of processing channels and vocoder filter slope, whereas aNH did not. Specifically, for cNH, increasing the number of processing channels only improved vowel identification in the steepest filter slope condition. Additionally, cNH were more sensitive to changes in filter slope. As the filter slopes increased, cNH continued to receive vowel identification benefit beyond where aNH performance plateaued or reached ceiling. 
For all NH participants, consonant identification improved with increasing filter slopes but was unaffected by the number of processing channels. Although cNH made more phoneme identification errors overall, their phoneme error patterns were similar to aNH. Furthermore, consonant identification of adults with CI was comparable to aNH listening to simulations with shallow filter slopes (15 dB/octave). Vowel identification of earlier-implanted pediatric ears was better than that of later-implanted ears and more comparable to cNH listening in conditions with steep filter slopes (60 dB/octave). CONCLUSIONS Recognition of spectrally degraded phonemes improved when simulated channel interaction was reduced, particularly for children. cNH showed an interaction between number of processing channels and filter slope for vowel identification. The differences observed between cNH and aNH suggest that identification of spectrally degraded phonemes continues to improve through adolescence and that children may benefit from reduced channel interaction beyond where adult performance has plateaued. Comparison to CI users suggests that early implantation may facilitate development of better phoneme discrimination.
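A noise-band vocoder like the one described above first splits speech into contiguous analysis bands. The sketch below shows one small piece of that pipeline, computing logarithmically spaced band edges; the channel count and corner frequencies are illustrative assumptions, not the study's parameters:

```python
def log_spaced_bands(n_channels, f_low=200.0, f_high=7000.0):
    """(low, high) cutoff pairs for a vocoder analysis filter bank,
    spaced logarithmically between f_low and f_high in Hz."""
    ratio = (f_high / f_low) ** (1.0 / n_channels)
    edges = [f_low * ratio**i for i in range(n_channels + 1)]
    return list(zip(edges[:-1], edges[1:]))

bands = log_spaced_bands(8)
# In the study, the synthesis filter slope (15/30/60 dB per octave) applied
# around each band controls the overlap between neighboring channels,
# i.e., the degree of simulated channel interaction: steeper slopes mean
# less spectral overlap and therefore less interaction.
```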
|
19
|
Speech Recognition Abilities in Normal-Hearing Children 4 to 12 Years of Age in Stationary and Interrupted Noise. Ear Hear 2019; 39:1091-1103. [PMID: 29554035 PMCID: PMC7664447 DOI: 10.1097/aud.0000000000000569] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Objectives: The main purpose of this study was to examine developmental effects on speech recognition in noise abilities for normal-hearing children in several listening conditions relevant for daily life. Our aim was to study the auditory component in these listening abilities by using a test that was designed to minimize the dependency on nonauditory factors, the digits-in-noise (DIN) test. Secondary aims were to examine the feasibility of the DIN test for children, and to establish age-dependent normative data for diotic and dichotic listening conditions in both stationary and interrupted noise. Design: In experiment 1, a newly designed pediatric DIN (pDIN) test was compared with the standard DIN test. Major differences from the DIN test are that the pDIN test uses 79% correct instead of 50% correct as a target point, single digits (except 0) instead of triplets, and animations in the test procedure. In this experiment, 43 normal-hearing subjects between 4 and 12 years of age and 10 adult subjects participated. The authors measured the monaural speech reception threshold for both the DIN test and the pDIN test using headphones. Experiment 2 used the standard DIN test to measure speech reception thresholds in noise in 112 normal-hearing children between 4 and 12 years of age and 33 adults. The DIN test was applied using headphones in stationary and interrupted noise, and in diotic and dichotic conditions, also to study binaural unmasking and the benefit of listening in the gaps. Results: Most children could reliably complete both the pDIN and the DIN test, and measurement errors for the pDIN test were comparable between children and adults. There was no significant difference between pDIN and DIN test scores. Speech recognition scores increase with age for all conditions tested, and performance is adult-like by 10 to 12 years of age in stationary noise but not interrupted noise.
The youngest, 4-year-old children have speech reception thresholds 3 to 7 dB less favorable than adults, depending on test conditions. The authors found significant age effects on binaural unmasking and fluctuating masker benefit, even after correction for the lower baseline speech reception threshold of adults in stationary noise. Conclusions: Speech recognition in noise abilities develop well into adolescence, and young children need a more favorable signal-to-noise ratio than adults for all listening conditions. Speech recognition abilities in children in stationary and interrupted noise can accurately and reliably be tested using the DIN test. A pediatric version of the test was shown to be unnecessary. Normative data were established for the DIN test in stationary and fluctuating maskers, and in diotic and dichotic conditions. The DIN test can thus be used to test speech recognition abilities for normal-hearing children from the age of 4 years and older.
|
20
|
DiNino M, Arenberg JG. Age-Related Performance on Vowel Identification and the Spectral-temporally Modulated Ripple Test in Children With Normal Hearing and With Cochlear Implants. Trends Hear 2019; 22:2331216518770959. [PMID: 29708065 PMCID: PMC5949928 DOI: 10.1177/2331216518770959] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Children’s performance on psychoacoustic tasks improves with age, but inadequate auditory input may delay this maturation. Cochlear implant (CI) users receive a degraded auditory signal with reduced frequency resolution compared with normal, acoustic hearing; thus, immature auditory abilities may contribute to the variation among pediatric CI users’ speech recognition scores. This study investigated relationships between age-related variables, spectral resolution, and vowel identification scores in prelingually deafened, early-implanted children with CIs compared with normal hearing (NH) children. All participants performed vowel identification and the Spectral-temporally Modulated Ripple Test (SMRT). Vowel stimuli for NH children were vocoded to simulate the reduced spectral resolution of CI hearing. Age positively predicted NH children’s vocoded vowel identification scores, but time with the CI was a stronger predictor of vowel recognition and SMRT performance of children with CIs. For both groups, SMRT thresholds were related to vowel identification performance, analogous to previous findings in adults. Sequential information analysis of vowel feature perception indicated greater transmission of duration-related information compared with formant features in both groups of children. In addition, the amount of F2 information transmitted predicted SMRT thresholds in children with NH and with CIs. Comparisons between the two CIs of bilaterally implanted children revealed disparate task performance levels and information transmission values within the same child. These findings indicate that adequate auditory experience contributes to auditory perceptual abilities of pediatric CI users. Further, factors related to individual CIs may be more relevant to psychoacoustic task performance than are the overall capabilities of the child.
Affiliation(s)
- Mishaela DiNino: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Julie G Arenberg: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
|
21
|
Cortical Tracking of Speech-in-Noise Develops from Childhood to Adulthood. J Neurosci 2019; 39:2938-2950. [PMID: 30745419 DOI: 10.1523/jneurosci.1732-18.2019] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2018] [Revised: 01/08/2019] [Accepted: 01/12/2019] [Indexed: 11/21/2022] Open
Abstract
In multitalker backgrounds, the auditory cortex of adult humans tracks the attended speech stream rather than the global auditory scene. Still, it is unknown whether such preferential tracking also occurs in children, whose speech-in-noise (SiN) abilities are typically lower compared with adults. We used magnetoencephalography (MEG) to investigate the frequency-specific cortical tracking of different elements of a cocktail party auditory scene in 20 children (age range, 6-9 years; 8 females) and 20 adults (age range, 21-40 years; 10 females). During MEG recordings, subjects attended to four different 5 min stories, mixed with different levels of multitalker background at four signal-to-noise ratios (SNRs; noiseless, +5, 0, and -5 dB). Coherence analysis quantified the coupling between the time courses of the MEG activity and the attended speech stream, multitalker background, or global auditory scene, respectively. In adults, statistically significant coherence was observed between MEG signals originating from the auditory system and the attended stream at <1, 1-4, and 4-8 Hz in all SNR conditions. Children displayed similar coupling at <1 and 1-4 Hz, but increasing noise impaired the coupling more strongly than in adults. Also, children displayed drastically lower coherence at 4-8 Hz in all SNR conditions. These results suggest that children's difficulty understanding speech in noisy conditions is related to immature selective cortical tracking of the attended speech streams. Our results also provide unprecedented evidence for an acquired cortical tracking of speech at syllable rate and argue for a progressive development of SiN abilities in humans. SIGNIFICANCE STATEMENT Behaviorally, children are less proficient than adults at understanding speech-in-noise. Here, neuromagnetic signals were recorded while healthy adults and typically developing 6- to 9-year-old children attended to a speech stream embedded in a multitalker background noise with varying intensity.
Results demonstrate that auditory cortices of both children and adults selectively track the attended speaker's voice rather than the global acoustic input at phrasal and word rates. However, increments of noise compromised the tracking significantly more in children than in adults. Unexpectedly, children displayed limited tracking of both the attended voice and the global acoustic input at the 4-8 Hz syllable rhythm. Thus, both speech-in-noise abilities and cortical tracking of speech syllable repetition rate seem to mature later in adolescence.
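The coherence analysis described above quantifies coupling between cortical activity and a speech stream per frequency bin. A minimal sketch of epoch-averaged magnitude-squared coherence follows, with synthetic signals standing in for the MEG recording and the attended stream; all parameters (sampling rate, duration, epoch count, the shared 4 Hz component) are illustrative assumptions:

```python
import numpy as np

def msc(x_epochs, y_epochs):
    """Epoch-averaged magnitude-squared coherence per frequency bin:
    |<X conj(Y)>|^2 / (<|X|^2> <|Y|^2>), inputs shaped (n_epochs, n_samples)."""
    X = np.fft.rfft(x_epochs, axis=1)
    Y = np.fft.rfft(y_epochs, axis=1)
    sxy = (X * np.conj(Y)).mean(axis=0)
    sxx = (np.abs(X) ** 2).mean(axis=0)
    syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(sxy) ** 2 / (sxx * syy)

# Toy scene: both the "MEG" and the "speech" signal share a 4 Hz component in noise.
rng = np.random.default_rng(0)
fs, dur, n_epochs = 100, 2.0, 50
t = np.arange(int(fs * dur)) / fs
shared = np.sin(2 * np.pi * 4 * t)
x = shared + rng.standard_normal((n_epochs, t.size))
y = shared + rng.standard_normal((n_epochs, t.size))
coh = msc(x, y)  # frequency resolution is 1/dur = 0.5 Hz, so 4 Hz is bin 8
```

Averaging the cross-spectrum over epochs is what makes coherence selective: the shared 4 Hz component survives the averaging while independent noise cancels, leaving coherence near 1/n_epochs at unrelated frequencies.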
|
22
|
Gustafson SJ, Billings CJ, Hornsby BWY, Key AP. Effect of competing noise on cortical auditory evoked potentials elicited by speech sounds in 7- to 25-year-old listeners. Hear Res 2019; 373:103-112. [PMID: 30660965 DOI: 10.1016/j.heares.2019.01.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Revised: 01/03/2019] [Accepted: 01/07/2019] [Indexed: 11/27/2022]
Abstract
Child listeners have particular difficulty with speech perception when competing speech noise is present; this challenge is often attributed to their immature top-down processing abilities. The purpose of this study was to determine if the effects of competing speech noise on speech-sound processing vary with age. Cortical auditory evoked potentials (CAEPs) were measured during an active speech-syllable discrimination task in 58 normal-hearing participants (age 7-25 years). Speech syllables were presented in quiet and embedded in competing speech noise (4-talker babble, +15 dB signal-to-noise ratio; SNR). While noise was expected to similarly reduce amplitude and delay latencies of N1 and P2 peaks in all listeners, it was hypothesized that effects of noise on the P3b peak would be inversely related to age due to the maturation of top-down processing abilities throughout childhood. Consistent with previous work, results showed that a +15 dB SNR reduces amplitudes and delays latencies of CAEPs for listeners of all ages, affecting speech-sound processing, delaying stimulus evaluation, and causing a reduction in behavioral speech-sound discrimination. Contrary to expectations, findings suggest that competing speech noise at a +15 dB SNR may have similar effects on various stages of speech-sound processing for listeners of all ages. Future research directions should examine how more difficult listening conditions (poorer SNRs) might affect results across ages.
Affiliation(s)
- Samantha J Gustafson: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, USA
- Curtis J Billings: Department of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, Portland, OR, USA; National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, USA
- Benjamin W Y Hornsby: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, USA
- Alexandra P Key: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, USA; Vanderbilt Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
23
De Sousa KC, Swanepoel DW, Moore DR, Smits C. A Smartphone National Hearing Test: Performance and Characteristics of Users. Am J Audiol 2018; 27:448-454. [PMID: 30452748] [DOI: 10.1044/2018_aja-imia3-18-0016]
Abstract
PURPOSE The smartphone digits-in-noise hearing test, called hearZA, was made available as a self-test in South Africa in March 2016. This study determined the characteristics and test performance of the listeners who took the test. METHOD A retrospective analysis of 24,072 persons who completed a test between March 2016 and August 2017 was conducted. User characteristics, including age, English-speaking competence, and self-reported hearing difficulty, were analyzed. Regression analyses were conducted to determine predictors of the speech reception threshold. RESULTS The overall referral rate of the hearZA test was 22.4%, and 37% of those referred reported a known hearing difficulty. Age distributions showed that 33.2% of listeners were aged 30 years and younger, 40.5% were between 31 and 50 years, and 26.4% were older than 50 years. Age, self-reported English-speaking competence, and self-reported hearing difficulty were significant predictors of the speech reception threshold. CONCLUSIONS The high test uptake, particularly among younger users, and the high overall referral rate indicate that the hearZA app addresses a public health need. The test also reaches target audiences, including those with self-reported hearing difficulty and those with normal hearing who should monitor their hearing ability.
Affiliation(s)
- Karina C. De Sousa: Department of Speech-Language Pathology and Audiology, University of Pretoria, Gauteng, South Africa
- De Wet Swanepoel: Department of Speech-Language Pathology and Audiology, University of Pretoria, Gauteng, South Africa; Ear Sciences Centre, School of Surgery, The University of Western Australia, Nedlands; Ear Science Institute Australia, Subiaco
- David R. Moore: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH; Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Cas Smits: Department of Otolaryngology/Head & Neck Surgery, Section Ear & Hearing, and Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
24
Ching TYC, Zhang VW, Flynn C, Burns L, Button L, Hou S, McGhie K, Van Buynder P. Factors influencing speech perception in noise for 5-year-old children using hearing aids or cochlear implants. Int J Audiol 2018; 57:S70-S80. [PMID: 28687057] [PMCID: PMC5756692] [DOI: 10.1080/14992027.2017.1346307]
Abstract
OBJECTIVE We investigated the factors influencing speech perception in babble for 5-year-old children with hearing loss who were using hearing aids (HAs) or cochlear implants (CIs). DESIGN Speech reception thresholds (SRTs) for 50% correct identification were measured in two conditions: speech collocated with babble, and speech with spatially separated babble. The difference in SRTs between the two conditions gives a measure of binaural unmasking, commonly known as spatial release from masking (SRM). Multiple linear regression analyses were conducted to examine the influence of a range of demographic factors on outcomes. STUDY SAMPLE Participants were 252 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. RESULTS Children using HAs or CIs required a better signal-to-noise ratio to achieve the same level of performance as their normal-hearing peers but demonstrated SRM of a similar magnitude. For children using HAs, speech perception was significantly influenced by cognitive and language abilities. For children using CIs, age at CI activation and language ability were significant predictors of speech perception outcomes. CONCLUSIONS Speech perception in children with hearing loss can be enhanced by improving their language abilities. Early age at cochlear implantation was also associated with better outcomes.
Affiliation(s)
- Teresa YC Ching: National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Vicky W Zhang: National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Christopher Flynn: National Acoustic Laboratories, Sydney, Australia; Australian Hearing, Australia
- Lauren Burns: National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia; Australian Hearing, Australia
- Laura Button: National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Sanna Hou: National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Karen McGhie: National Acoustic Laboratories, Sydney, Australia; Australian Hearing, Australia
- Patricia Van Buynder: National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
25
Sheikh Rashid M, Dreschler WA, de Laat JAPM. Evaluation of an internet-based speech-in-noise screening test for school-age children. Int J Audiol 2017; 56:967-975. [PMID: 28936876] [DOI: 10.1080/14992027.2017.1378932]
Abstract
OBJECTIVE To evaluate a Dutch online speech-in-noise screening test (in Dutch: "Kinderhoortest") in normal-hearing school-age children. Sub-aims were to study test-retest reliability and the effects of presentation type and age on test results. DESIGN An observational cross-sectional study at school. Speech reception thresholds (SRTs) were obtained through the online test in a training condition and two test conditions: on a desktop computer and on a smartphone. The order of the test conditions was counterbalanced. STUDY SAMPLE Ninety-four children participated (5-12 years), of whom 75 were normal-hearing (≤25 dB HL at 0.5 kHz, ≤20 dB HL at 1-4 kHz). RESULTS There was a significant effect of test order for the two test conditions (first or second test), but not of presentation type (desktop computer or smartphone) (repeated measures analyses, F(1,75) = 12.48, p < 0.001; F(1,75) = 0.01, p = 0.982). SRT significantly improved with each year of age (first test: 0.25 dB SNR, 95% CI: -0.43 to -0.08, p = 0.004; second test: 0.29 dB SNR, 95% CI: -0.46 to -0.11, p = 0.002). CONCLUSIONS The online test shows potential for routine hearing screening of school-age children and can be presented on either a desktop computer or a smartphone. The test should be evaluated further to establish its sensitivity and specificity for hearing loss in children.
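Online speech-in-noise self-tests of this kind typically estimate the SRT with a simple adaptive staircase. A hypothetical sketch assuming a 1-up/1-down rule with 2 dB steps and an invented logistic listener model (the actual Kinderhoortest procedure may differ):

```python
# Hypothetical sketch (not the published test's exact procedure): a 1-up/1-down
# adaptive track lowers the SNR by 2 dB after each correct response and raises
# it by 2 dB after each error, converging on the 50%-correct SRT.
import numpy as np

rng = np.random.default_rng(1)

def p_correct(snr_db, srt_true=-10.0, slope=1.0):
    """Logistic psychometric function: 50% correct at srt_true (parameters invented)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_true)))

def run_track(n_trials=60, start_snr=0.0, step=2.0, srt_true=-10.0):
    """Simulate one adaptive track and estimate the SRT from the late trials."""
    snr, history = start_snr, []
    for _ in range(n_trials):
        history.append(snr)
        correct = rng.random() < p_correct(snr, srt_true)
        snr += -step if correct else step
    return float(np.mean(history[-20:]))  # mean SNR over the last 20 trials

# Average many tracks; the estimate settles near the simulated -10 dB threshold.
srt_est = float(np.mean([run_track() for _ in range(100)]))
print(round(srt_est, 1))
```

The symmetric up/down rule targets the 50% point of the psychometric function, which is why the estimate clusters around the simulated threshold regardless of the starting SNR.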
Affiliation(s)
- Marya Sheikh Rashid: Clinical and Experimental Audiology, Amsterdam Public Health Research Institute, Academic Medical Center (AMC), Amsterdam, The Netherlands
- Wouter A Dreschler: Clinical and Experimental Audiology, Amsterdam Public Health Research Institute, Academic Medical Center (AMC), Amsterdam, The Netherlands
- Jan A P M de Laat: Department of Audiology, Leiden University Medical Center, Leiden, The Netherlands
26
Key AP, Gustafson SJ, Rentmeester L, Hornsby BWY, Bess FH. Speech-Processing Fatigue in Children: Auditory Event-Related Potential and Behavioral Measures. J Speech Lang Hear Res 2017; 60:2090-2104. [PMID: 28595261] [PMCID: PMC5831094] [DOI: 10.1044/2016_jslhr-h-16-0052]
Abstract
PURPOSE Fatigue related to speech processing is an understudied area that may have significant negative effects, especially in children who spend the majority of their school days listening to classroom instruction. METHOD This study examined the feasibility of using auditory P300 responses and behavioral indices (lapses of attention and self-report) to measure fatigue resulting from sustained listening demands in 27 children (M = 9.28 years). RESULTS Consistent with predictions, increased lapses of attention, longer reaction times, reduced P300 amplitudes to infrequent target stimuli, and self-report of greater fatigue were observed after the completion of a series of demanding listening tasks compared with the baseline values. The event-related potential responses correlated with the behavioral measures of performance. CONCLUSION These findings suggest that neural and behavioral responses indexing attention and processing resources show promise as effective markers of fatigue in children.
Affiliation(s)
- Alexandra P. Key: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN; Vanderbilt Kennedy Center for Research on Human Development, Nashville, TN
- Samantha J. Gustafson: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN
- Lindsey Rentmeester: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN
- Benjamin W. Y. Hornsby: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN
- Fred H. Bess: Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN
27
Rashid MS, Leensen MCJ, Dreschler WA. Application of the online hearing screening test "Earcheck": Speech intelligibility in noise in teenagers and young adults. Noise Health 2017; 18:312-318. [PMID: 27991462] [PMCID: PMC5227011] [DOI: 10.4103/1463-1741.195807]
Abstract
Objective: The objective was to describe speech intelligibility in noise test results among Dutch teenagers and young adults aged 12–24 years, using a national online speech reception threshold (SRT) test, the Earcheck. A secondary objective was to assess the effect of age and gender on speech intelligibility in noise. Design: Cross-sectional SRT data were collected over a 5-year period (2010–2014) from participants of Earcheck. Regression analyses were performed with SRT as the dependent variable and age and gender as explanatory variables. To cross-validate the model, data from 12- to 24-year-olds from the same test distributed by a hearing aid dispenser (Hoorscan) were used. Results: In total, 96,803 valid test results were analyzed. The mean SRT score was −18.3 dB signal-to-noise ratio (SNR) (standard deviation (SD) = 3.7). Twenty-five percent of the scores were rated as insufficient or poor. SRT performance significantly improved with increasing age for teenagers aged 12–18 years, by 0.49 dB SNR per age-year. A smaller age effect (0.09 dB SNR per age-year) was found for young adults aged 19–24 years. Small differences between male and female users were found. Conclusion: Earcheck generated large quantities of national SRT data. The data imply that a substantial number of users of Earcheck may have some difficulty understanding speech in noise. Furthermore, the results of this study showed an effect of gender and age on SRT performance, suggesting an ongoing maturation of speech-in-noise performance into late adolescence. This argues for age-dependent reference values, but more research is required for this purpose.
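The reported age effect (about 0.49 dB SNR improvement per age-year in teenagers) is the slope of a regression of SRT on age. An illustrative ordinary-least-squares sketch on synthetic data with that slope built in (sample size, intercept, and noise level are invented):

```python
# Illustrative sketch (synthetic data, invented intercept and noise): ordinary
# least squares of the form used to model SRT as a function of age. SRT
# *improvement* with age means the fitted slope is negative (-0.49 dB/year).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
age = rng.uniform(12, 18, n)                      # teenage range, years
srt = -12.0 - 0.49 * age + rng.normal(0, 3.7, n)  # true slope: -0.49 dB SNR per year

X = np.column_stack([np.ones(n), age])            # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, srt, rcond=None)
print(beta)  # recovers roughly [-12.0, -0.49]
```

A full model of the study's kind would add a gender indicator as a second column of the design matrix; the mechanics are identical.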
Affiliation(s)
- Marya Sheikh Rashid: Clinical and Experimental Audiology, ENT Department, Academic Medical Center (AMC) Amsterdam, The Netherlands
- Monique C J Leensen: Clinical and Experimental Audiology, ENT Department, Academic Medical Center (AMC) Amsterdam, The Netherlands
- Wouter A Dreschler: Clinical and Experimental Audiology, ENT Department, Academic Medical Center (AMC) Amsterdam, The Netherlands
28
McCreery RW, Spratford M, Kirby B, Brennan M. Individual differences in language and working memory affect children's speech recognition in noise. Int J Audiol 2017; 56:306-315. [PMID: 27981855] [PMCID: PMC5634965] [DOI: 10.1080/14992027.2016.1266703]
Abstract
OBJECTIVE We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. DESIGN As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. STUDY SAMPLE Ninety-six children with normal hearing, who were between 5 and 12 years of age. RESULTS Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. CONCLUSIONS Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.
Affiliation(s)
- Ryan W. McCreery: Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Meredith Spratford: Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Benjamin Kirby: Department of Communication Sciences and Disorders, Illinois State University, Normal, IL, USA
- Marc Brennan: Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
29
Söderlund GBW, Jobs EN. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear When Exposed to 65 dB of Auditory Noise. Front Psychol 2016; 7:34. [PMID: 26858679] [PMCID: PMC4731512] [DOI: 10.3389/fpsyg.2016.00034]
Abstract
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between typically attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise (65 dB) conditions. Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children and that the difference in speech recognition threshold disappeared when exposed to noise at a suprathreshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure.
Affiliation(s)
- Göran B W Söderlund: Department of Teacher Education and Sports, Sogn og Fjordane University College, Sogndal, Norway
30
Lexical and age effects on word recognition in noise in normal-hearing children. Int J Pediatr Otorhinolaryngol 2015; 79:2023-7. [PMID: 26545791] [DOI: 10.1016/j.ijporl.2015.08.034]
Abstract
OBJECTIVES The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare word-recognition performance in noise to that in quiet listening conditions. METHODS Participants were 213 NH children (aged between 3 and 6 years). Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (i.e., disyllabic easy (DE), disyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech spectrum-shaped noise (SSN) with a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects, with syllable length and difficulty level as the main factors, on word recognition in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. RESULTS Word-recognition performance in noise was significantly poorer than that in quiet, and the individual variations in performance in noise were much greater than those in quiet. Word recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with disyllabic words than with monosyllabic words; "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. Word-recognition performance also increased with age in each lexical category for the NH children tested in noise. CONCLUSIONS Both age and lexical characteristics of words had significant influences on Mandarin Chinese word recognition in noise. The lexical effects were more obvious under noise listening conditions than in quiet. Word-recognition performance in noise increased with age in NH children aged 3-6 years and had not reached a plateau by 6 years of age.
31
Persson Waye K, Magnusson L, Fredriksson S, Croy I. A screening approach for classroom acoustics using web-based listening tests and subjective ratings. PLoS One 2015; 10:e0116572. [PMID: 25615692] [PMCID: PMC4304827] [DOI: 10.1371/journal.pone.0116572]
Abstract
BACKGROUND Perception of speech is crucial in school, where speech is the main mode of communication. The aim of the study was to evaluate whether a web-based approach including listening tests and questionnaires could be used as a screening tool for poor classroom acoustics. The prime focus was the relation between pupils' comprehension of speech, the classroom acoustics, and their description of the acoustic qualities of the classroom. METHODOLOGY/PRINCIPAL FINDINGS In total, 1106 pupils aged 13-19, from 59 classes and 38 schools in Sweden, participated in a listening study using Hagerman's sentences administered via the Internet. Four listening conditions were applied: high and low background noise level, and positions close to and far away from the loudspeaker. The pupils described the acoustic quality of the classroom, and teachers provided information on the physical features of the classroom using questionnaires. CONCLUSIONS/SIGNIFICANCE In 69% of the classes, at least three pupils described the sound environment as adverse, and in 88% of the classes one or more pupils reported often having difficulties concentrating due to noise. The pupils' comprehension of speech was strongly influenced by the background noise level (p<0.001) and distance to the loudspeakers (p<0.001). Of the physical classroom features, presence of suspended acoustic panels (p<0.05) and length of the classroom (p<0.01) predicted speech comprehension. Of the pupils' descriptions of acoustic qualities, "clattery" significantly (p<0.05) predicted speech comprehension. "Clattery" was furthermore associated with difficulties understanding each other, while the description "noisy" was associated with concentration difficulties. The majority of classrooms do not seem to have an optimal sound environment. The pupils' descriptions of acoustic qualities and listening tests can be one way of predicting sound conditions in the classroom.
Affiliation(s)
- Kerstin Persson Waye: Occupational and Environmental Medicine, The Sahlgrenska Academy at the University of Gothenburg, Gothenburg, Sweden
- Lennart Magnusson: Department for Clinical Neuroscience and Rehabilitation, Section for Audiology, University of Gothenburg, Gothenburg, Sweden
- Sofie Fredriksson: Occupational and Environmental Medicine, The Sahlgrenska Academy at the University of Gothenburg, Gothenburg, Sweden
- Ilona Croy: Occupational and Environmental Medicine, The Sahlgrenska Academy at the University of Gothenburg, Gothenburg, Sweden
32
Wang S, Liu S, Kong Y, Liu H, Feng J, Li S, Yang Y. Psychometric properties of the Standard-Chinese lexical neighborhood test. Acta Otolaryngol 2014; 134:66-72. [PMID: 24256040] [DOI: 10.3109/00016489.2013.840923]
Abstract
CONCLUSION The psychometric characteristics of the Standard-Chinese lexical neighborhood test (LNT) confirmed the lexical effects of the four word categories. The established normative baseline can be used in evaluating the word-recognition performance of hearing-impaired listeners. OBJECTIVES The purpose of the present study was to examine the psychometric characteristics and evaluate the reliability of the Standard-Chinese LNT in children and adults. METHODS Twenty-six normal-hearing adults and 13 normal-hearing children were recruited. Word recognition was tested with the Standard-Chinese LNT materials, which consisted of four types of word list: monosyllabic easy words, monosyllabic hard words, disyllabic easy words, and disyllabic hard words. RESULTS The thresholds at 50% correct performance for the easy word lists and disyllabic word lists were lower than those for the hard word lists and monosyllabic word lists, respectively (all p < 0.001). The slopes for disyllabic words were steeper than those for monosyllabic words (p < 0.05). In addition, the recognition thresholds of the four categories for children were higher than those for adults (all p < 0.05). The critical difference was on average 26.6% for adults and 30.0% for children.
Affiliation(s)
- Suju Wang: Department of Otolaryngology - Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University
33
Klatte M, Bergström K, Lachmann T. Does noise affect learning? A short review on noise effects on cognitive performance in children. Front Psychol 2013; 4:578. [PMID: 24009598] [PMCID: PMC3757288] [DOI: 10.3389/fpsyg.2013.00578]
Abstract
The present paper provides an overview of research concerning both acute and chronic effects of exposure to noise on children's cognitive performance. Experimental studies addressing the impact of acute exposure showed negative effects on speech perception and listening comprehension. These effects are more pronounced in children as compared to adults. Children with language or attention disorders and second-language learners are still more impaired than age-matched controls. Noise-induced disruption was also found for non-auditory tasks, i.e., serial recall of visually presented lists and reading. The impact of chronic exposure to noise was examined in quasi-experimental studies. Indoor noise and reverberation in classroom settings were found to be associated with poorer performance of the children in verbal tasks. Regarding chronic exposure to aircraft noise, studies consistently found that high exposure is associated with lower reading performance. Even though the reported effects are usually small in magnitude, and confounding variables were not always sufficiently controlled, policy makers responsible for noise abatement should be aware of the potential impact of environmental noise on children's development.
Affiliation(s)
- Maria Klatte: Center for Cognitive Science, Cognitive and Developmental Psychology Laboratory, University of Kaiserslautern, Kaiserslautern, Germany
34
Kühnle S, Ludwig A, Meuret S, Küttner C, Witte C, Scholbach J, Fuchs M, Rübsamen R. Development of Auditory Localization Accuracy and Auditory Spatial Discrimination in Children and Adolescents. 2013; 18:48-62. [DOI: 10.1159/000342904]
35
Ross LA, Molholm S, Blanco D, Gomez-Ramirez M, Saint-Amour D, Foxe JJ. The development of multisensory speech perception continues into the late childhood years. Eur J Neurosci 2011; 33:2329-37. [PMID: 21615556] [DOI: 10.1111/j.1460-9568.2011.07685.x]
Abstract
Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted.
Collapse
Affiliation(s)
- Lars A Ross
- The Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine, Bronx, New York 10461, USA.
| | | | | | | | | | | |
Collapse
|
36
|
Ross LA, Molholm S, Blanco D, Gomez-Ramirez M, Saint-Amour D, Foxe JJ. The development of multisensory speech perception continues into the late childhood years. Eur J Neurosci 2011; 33:2329-37. [PMID: 21615556 DOI: 10.1111/j.1460-9568.2011.07685.x] [Citation(s) in RCA: 92] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted.
Affiliation(s)
- Lars A Ross
- The Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine, Bronx, New York 10461, USA.
|
37
|
Bishop DVM, Anderson M, Reid C, Fox AM. Auditory development between 7 and 11 years: an event-related potential (ERP) study. PLoS One 2011; 6:e18993. [PMID: 21573058 PMCID: PMC3090390 DOI: 10.1371/journal.pone.0018993] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2010] [Accepted: 03/25/2011] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND There is considerable uncertainty about the time-course of central auditory maturation. On some indices, children appear to have adult-like competence by school age, whereas for other measures development follows a protracted course. METHODOLOGY We studied auditory development using auditory event-related potentials (ERPs) elicited by tones in 105 children on two occasions two years apart. Just over half of the children were 7 years initially and 9 years at follow-up, whereas the remainder were 9 years initially and 11 years at follow-up. We used conventional analysis of peaks in the auditory ERP, independent component analysis, and time-frequency analysis. PRINCIPAL FINDINGS We demonstrated maturational changes in the auditory ERP between 7 and 11 years, both using conventional peak measurements, and time-frequency analysis. The developmental trajectory was different for temporal vs. fronto-central electrode sites. Temporal electrode sites showed strong lateralisation of responses and no increase of low-frequency phase-resetting with age, whereas responses recorded from fronto-central electrode sites were not lateralised and showed progressive change with age. Fronto-central vs. temporal electrode sites also mapped onto independent components with differently oriented dipole sources in auditory cortex. A global measure of waveform shape proved to be the most effective method for distinguishing age bands. CONCLUSIONS/SIGNIFICANCE The results supported the idea that different cortical regions mature at different rates. The ICC measure is proposed as the best measure of 'auditory ERP age'.
Affiliation(s)
- Dorothy V. M. Bishop
- School of Psychology, University of Western Australia, Perth, Australia
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Mike Anderson
- School of Psychology, University of Western Australia, Perth, Australia
- Neurocognitive Development Unit, University of Western Australia, Perth, Australia
- Corinne Reid
- School of Psychology, Murdoch University, Perth, Australia
- Neurocognitive Development Unit, University of Western Australia, Perth, Australia
- Allison M. Fox
- School of Psychology, University of Western Australia, Perth, Australia
- Neurocognitive Development Unit, University of Western Australia, Perth, Australia
|
38
|
Mahajan Y, McArthur G. The effect of a movie soundtrack on auditory event-related potentials in children, adolescents, and adults. Clin Neurophysiol 2010; 122:934-41. [PMID: 20869913 DOI: 10.1016/j.clinph.2010.08.014] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2010] [Revised: 08/13/2010] [Accepted: 08/30/2010] [Indexed: 10/19/2022]
Abstract
OBJECTIVE To determine if an audible movie soundtrack has a degrading effect on the auditory P1, N1, P2, N2, or mismatch negativity (MMN) event-related potentials (ERPs) in children, adolescents, or adults. METHODS The auditory ERPs of 36 children, 32 young adolescents, 19 older adolescents, and 10 adults were measured while they watched a movie in two conditions: with an audible soundtrack and with a silent soundtrack. RESULTS In children and adolescents, the audible movie soundtrack had a significant impact on amplitude, latency or split-half reliability of the N1, P2, N2, and MMN ERPs. The audible soundtrack had minimal impact on the auditory ERPs of adults. CONCLUSIONS These findings challenge previous claims that an audible soundtrack does not degrade the auditory ERPs of children. Further, the reliability of the MMN is poorer than P1, N1, P2, and N2 peaks in both sound-off and sound-on conditions. SIGNIFICANCE Researchers should be cautious about using an audible movie soundtrack when measuring auditory ERPs in younger listeners.
Affiliation(s)
- Yatin Mahajan
- Macquarie Centre for Cognitive Science, Macquarie University, Sydney, NSW 2109, Australia.
|
39
|
Barutchu A, Danaher J, Crewther SG, Innes-Brown H, Shivdasani MN, Paolini AG. Audiovisual integration in noise by children and adults. J Exp Child Psychol 2010; 105:38-50. [DOI: 10.1016/j.jecp.2009.08.005] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2008] [Revised: 08/31/2009] [Accepted: 08/31/2009] [Indexed: 11/28/2022]
|
40
|
Spaulding TJ, Plante E, Vance R. Sustained selective attention skills of preschool children with specific language impairment: evidence for separate attentional capacities. J Speech Lang Hear Res 2008; 51:16-34. [PMID: 18230853 DOI: 10.1044/1092-4388(2008/002)] [Citation(s) in RCA: 105] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
PURPOSE The present study was designed to investigate the performance of preschool children with specific language impairment (SLI) and their typically developing (TD) peers on sustained selective attention tasks. METHOD This study included 23 children diagnosed with SLI and 23 TD children matched for age, gender, and maternal education level. The children's sustained selective attention skills were assessed with different types of stimuli (visual, nonverbal-auditory, linguistic) under 2 attentional load conditions (high, low) using computerized tasks. A mixed design was used to compare children across groups and performance across tasks. RESULTS The SLI participants exhibited poorer performance than their peers on the sustained selective attention tasks presented in the auditory modality (linguistic and nonverbal-auditory) under the high attentional load conditions. Performance was comparable with their peers under the low attentional load conditions. The SLI group exhibited similar performance to their peers on the visual tasks regardless of attentional load. CONCLUSION These results support the notion of attention difficulties in preschool children with SLI and suggest separate attentional capacities for different stimulus modalities.
Affiliation(s)
- Tammie J Spaulding
- Department of Speech, Language, and Hearing Sciences, P.O. Box 210071, University of Arizona, Tucson, AZ 85721-0071, USA.
|