1. Lewis DE. Speech Understanding in Complex Environments by School-Age Children with Mild Bilateral or Unilateral Hearing Loss. Semin Hear 2023;44:S36-S48. PMID: 36970648; PMCID: PMC10033204; DOI: 10.1055/s-0043-1764134.
Abstract
Numerous studies have shown that children with mild bilateral (MBHL) or unilateral hearing loss (UHL) experience speech perception difficulties in poor acoustics. Much of the research in this area has been conducted via laboratory studies using speech-recognition tasks with a single talker and presentation via earphones and/or from a loudspeaker located directly in front of the listener. Real-world speech understanding is more complex, however, and these children may need to exert greater effort than their peers with normal hearing to understand speech, potentially impacting progress in a number of developmental areas. This article discusses issues and research relative to speech understanding in complex environments for children with MBHL or UHL and implications for real-world listening and understanding.
Affiliation(s)
- Dawna E. Lewis
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska
2. Buss E, Felder J, Miller MK, Leibold LJ, Calandruccio L. Can Closed-Set Word Recognition Differentially Assess Vowel and Consonant Perception for School-Age Children With and Without Hearing Loss? J Speech Lang Hear Res 2022;65:3934-3950. PMID: 36194777; PMCID: PMC9927623; DOI: 10.1044/2022_jslhr-20-00749.
Abstract
PURPOSE Vowels and consonants play different roles in language acquisition and speech recognition, yet standard clinical tests do not assess vowel and consonant perception separately. As a result, opportunities for targeted intervention may be lost. This study evaluated closed-set word recognition tests designed to rely predominantly on either vowel or consonant perception and compared results with sentence recognition scores. METHOD Participants were children (5-17 years of age) and adults (18-38 years of age) with normal hearing and children with sensorineural hearing loss (7-17 years of age). Speech reception thresholds (SRTs) were measured in speech-shaped noise. Children with hearing loss were tested with their hearing aids. Word recognition was evaluated using a three-alternative forced-choice procedure, with a picture-pointing response; monosyllabic target words varied with respect to either consonant or vowel content. Sentence recognition was evaluated for low- and high-probability sentences. In a subset of conditions, stimuli were low-pass filtered to simulate a steeply sloping hearing loss in participants with normal hearing. RESULTS Children's SRTs improved with increasing age for words and sentences. Low-pass filtering had a larger effect for consonant-variable words than vowel-variable words for both children and adults with normal hearing, consistent with the greater high-frequency content of consonants. Children with hearing loss tested with hearing aids tended to perform more poorly than age-matched children with normal hearing, particularly for sentence recognition, but consonant- and vowel-variable word recognition did not appear to be differentially affected by the amount of high- and low-frequency hearing loss. CONCLUSIONS Closed-set recognition of consonant- and vowel-variable words appeared to differentially evaluate vowel and consonant perception but did not vary by configuration of hearing loss in this group of pediatric hearing aid users. 
Word scores obtained in this manner do not fully characterize the auditory abilities necessary for open-set sentence recognition, but they do provide a general estimate.
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Margaret K. Miller
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Lori J. Leibold
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
3. Schwarz J, Li KK, Sim JH, Zhang Y, Buchanan-Worster E, Post B, Gibson JL, McDougall K. Semantic Cues Modulate Children’s and Adults’ Processing of Audio-Visual Face Mask Speech. Front Psychol 2022;13:879156. PMID: 35928422; PMCID: PMC9343587; DOI: 10.3389/fpsyg.2022.879156.
Abstract
During the COVID-19 pandemic, questions have been raised about the impact of face masks on communication in classroom settings. However, it is unclear to what extent visual obstruction of the speaker’s mouth or changes to the acoustic signal lead to speech processing difficulties, and whether these effects can be mitigated by semantic predictability, i.e., the availability of contextual information. The present study investigated the acoustic and visual effects of face masks on speech intelligibility and processing speed under varying semantic predictability. Twenty-six children (aged 8-12) and twenty-six adults performed an internet-based cued shadowing task, in which they had to repeat aloud the last word of sentences presented in audio-visual format. The results showed that children and adults made more mistakes and responded more slowly when listening to face mask speech compared to speech produced without a face mask. Adults were only significantly affected by face mask speech when both the acoustic and the visual signal were degraded. While acoustic mask effects were similar for children, removal of visual speech cues through the face mask affected children to a lesser degree. However, high semantic predictability reduced audio-visual mask effects, leading to full compensation of the acoustically degraded mask speech in the adult group. Even though children did not fully compensate for face mask speech with high semantic predictability, overall, they still profited from semantic cues in all conditions. Therefore, in classroom settings, strategies that increase contextual information such as building on students’ prior knowledge, using keywords, and providing visual aids, are likely to help overcome any adverse face mask effects.
Affiliation(s)
- Julia Schwarz
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Katrina Kechun Li
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Jasper Hong Sim
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Yixin Zhang
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Elizabeth Buchanan-Worster
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Brechtje Post
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Kirsty McDougall
- Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
4. Buisson Savin J, Reynard P, Bailly-Masson E, Joseph C, Joly CA, Boiteux C, Thai-Van H. Adult Normative Data for the Adaptation of the Hearing in Noise Test in European French (HINT-5 Min). Healthcare (Basel) 2022;10:1306. PMID: 35885831; PMCID: PMC9315974; DOI: 10.3390/healthcare10071306.
Abstract
Decreased speech-in-noise (SpIN) understanding is an early marker not only of presbycusis but also of auditory processing disorder. Previous research has shown a strong relationship between hearing disorders and cognitive limitations. It is therefore crucial to allow SpIN testing in subjects who cannot sustain prolonged diagnostic procedures. The objectives of this study were to develop a rapid and reproducible version of the Hearing in Noise Test (HINT-5 min), and to determine its adult normative values in free-field and monaural or binaural headphone conditions. Following an adaptive signal-to-noise ratio (SNR) protocol, the test used a fixed noise level, while the signal level varied to reach the 50% speech reception threshold (SRT50). The speech material consisted of five lists of 20 sentences each, all recorded in European French. The whole semi-automated procedure lasted 5 min and was administered to 83 subjects aged 19 to 49 years with no reported listening difficulties. Fifty-two subjects were retested between 7 and 8 days later. For the binaural free-field condition, the mean SRT50 was −1.0 dB SNR with a standard deviation of 1.3 dB SNR. There was no significant difference between the results obtained at test and retest, nor was there any effect of listening condition, sex, or age on SRT50. The results indicate that the procedure is robust and not affected by any learning phenomenon. The HINT-5 min was found to be both a fast and reliable marker of the ability to understand speech in background noise.
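The adaptive SNR protocol this abstract describes (noise level fixed, signal level varied until the listener converges on the 50% speech reception threshold) is, in essence, an adaptive staircase. A minimal sketch of such a track follows, assuming a simple 1-up/1-down rule with a fixed step size; the function name, step size, trial count, and reversal-averaging convention are illustrative assumptions, not details of the HINT-5 min itself:

```python
def srt50_track(trial_correct, start_snr=0.0, step=2.0, n_trials=20):
    """Estimate the 50% speech reception threshold (SRT50) with a
    1-up/1-down adaptive track: the noise level stays fixed while the
    SNR moves down after a correct response and up after a miss.

    trial_correct: callable taking an SNR (dB) and returning True if
    the listener repeated that sentence correctly.
    """
    snr = start_snr
    reversals = []          # SNRs at which the response pattern flipped
    last_correct = None
    for _ in range(n_trials):
        correct = trial_correct(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)
        last_correct = correct
        snr += -step if correct else step  # harder after a hit, easier after a miss
    # One common convention: average the SNRs at the reversal points.
    return sum(reversals) / len(reversals) if reversals else snr
```

With a deterministic simulated listener whose true threshold is -1 dB SNR, the track oscillates around that value and the reversal average lands close to it.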
Affiliation(s)
- Johanna Buisson Savin
- Institut de l’Audition, Institut Pasteur, INSERM U1120, 75012 Paris, France
- Amplifon France, 94110 Arcueil, France
- Pierre Reynard
- Institut de l’Audition, Institut Pasteur, INSERM U1120, 75012 Paris, France
- Service d’Audiologie et d’Explorations Otoneurologiques, Hospices Civils de Lyon, Hôpital Edouard Herriot, 69003 Lyon, France
- Faculty of Medicine, University Claude Bernard Lyon 1, 69100 Villeurbanne, France
- Célia Joseph
- Amplifon France, 94110 Arcueil, France
- Charles-Alexandre Joly
- Institut de l’Audition, Institut Pasteur, INSERM U1120, 75012 Paris, France
- Service d’Audiologie et d’Explorations Otoneurologiques, Hospices Civils de Lyon, Hôpital Edouard Herriot, 69003 Lyon, France
- Faculty of Medicine, University Claude Bernard Lyon 1, 69100 Villeurbanne, France
- Hung Thai-Van
- Institut de l’Audition, Institut Pasteur, INSERM U1120, 75012 Paris, France
- Service d’Audiologie et d’Explorations Otoneurologiques, Hospices Civils de Lyon, Hôpital Edouard Herriot, 69003 Lyon, France
- Faculty of Medicine, University Claude Bernard Lyon 1, 69100 Villeurbanne, France
5. Meemann K, Smiljanić R. Intelligibility of Noise-Adapted and Clear Speech in Energetic and Informational Maskers for Native and Nonnative Listeners. J Speech Lang Hear Res 2022;65:1263-1281. PMID: 35235410; DOI: 10.1044/2021_jslhr-21-00175.
Abstract
PURPOSE This study explored clear speech (CS) and noise-adapted speech (NAS) intelligibility benefits for native and nonnative English listeners. It also examined how the two speaking style adaptations interact with maskers that vary from purely energetic to largely informational at different signal-to-noise ratios (SNRs). METHOD Materials consisted of 40 sentences produced by 10 young adult talkers in a conversational and a clear speaking style under two conditions: (a) in quiet and (b) in response to speech-shaped noise (SSN) played over headphones (NAS). Young adult native (Experiment 1) and nonnative (Experiment 2) English listeners heard target sentences presented in two-talker (2T) babble, six-talker (6T) babble, or SSN and at an "easier" and a "harder" SNR. RESULTS When talkers produced CS and NAS, word recognition accuracy was significantly improved for both listener groups. The largest intelligibility benefit was obtained for the CS produced in response to noise (CS+NAS). Overall accuracy was highest in 2T babble. Accuracy was higher in SSN than in 6T babble for nonnative listeners at both levels of listening difficulty but only at a more difficult SNR for native listeners. Listeners benefited from CS and NAS most in the presence of SSN and least in 2T babble. When SNRs were the same for the two listener groups, native listeners outperformed nonnative listeners in almost all listening conditions, but nonnative listeners benefited more from CS and NAS in 6T babble than native listeners did. CONCLUSIONS Combined speaking style enhancements, CS+NAS, provided the largest intelligibility increases for native and nonnative listeners in all listening conditions. The results add to the body of evidence supporting speech-oriented, behavioral therapy techniques for maximizing speech intelligibility in everyday listening situations.
Affiliation(s)
- Kirsten Meemann
- Department of Linguistics, The University of Texas at Austin
- Rajka Smiljanić
- Department of Linguistics, The University of Texas at Austin
6. Multiple Cases of Auditory Neuropathy Illuminate the Importance of Subcortical Neural Synchrony for Speech-in-Noise Recognition and the Frequency-Following Response. Ear Hear 2021;43:605-619. PMID: 34619687; DOI: 10.1097/aud.0000000000001122.
Abstract
OBJECTIVES The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to elucidate fully the importance of subcortical neural synchrony for the FFR and SIN recognition. DESIGN Case series. Three children with auditory neuropathy (two males with neuropathy attributed to hyperbilirubinemia, one female with a rare missense mutation in the OPA1 gene) were compared to age-matched controls with normal hearing (52 for electrophysiology and 48 for speech recognition testing). Tests included standard audiological evaluations, FFRs, and sentence recognition in noise. The three children with neuropathy had a range of clinical presentations, including moderate sensorineural hearing loss, use of a cochlear implant, and a rapid progressive hearing loss. RESULTS Children with neuropathy generally had good speech recognition in quiet but substantial difficulties in noise. These SIN difficulties were somewhat mitigated by a clear speaking style and presenting words in a high semantic context. In the children with neuropathy, FFRs were absent from all tested stimuli. In contrast, age-matched controls had reliable FFRs. CONCLUSION Subcortical synchrony is subject to multiple forms of disruption but results in a consistent phenotype of an absent FFR and substantial difficulties recognizing SIN. These results support the hypothesis that subcortical synchrony is necessary for the FFR. Thus, in healthy listeners, the FFR may reflect subcortical neural processes important for SIN recognition.
7. Simeon KM, Grieco-Calub TM. The Impact of Hearing Experience on Children's Use of Phonological and Semantic Information During Lexical Access. J Speech Lang Hear Res 2021;64:2825-2844. PMID: 34106737; PMCID: PMC8632499; DOI: 10.1044/2021_jslhr-20-00547.
Abstract
Purpose The purpose of this study was to examine the extent to which phonological competition and semantic priming influence lexical access in school-aged children with cochlear implants (CIs) and children with normal acoustic hearing. Method Participants included children who were 5-10 years of age with either normal hearing (n = 41) or bilateral severe to profound sensorineural hearing loss who used CIs (n = 13). All participants completed a two-alternative forced-choice task while eye gaze to visual images was recorded and quantified during a word recognition task. In this task, the target image was juxtaposed with a competitor image that was either a phonological onset competitor (i.e., shared the same initial consonant-vowel-consonant syllable as the target) or an unrelated distractor. Half of the trials were preceded by an image prime that was semantically related to the target image. Results Children with CIs showed evidence of phonological competition during real-time processing of speech. This effect, however, was smaller and occurred later in the time course of speech processing than what was observed in children with normal hearing. The presence of a semantically related visual prime reduced the effects of phonological competition in both groups of children, but to a greater degree in children with CIs. Conclusions Children with CIs were able to process single words similarly to their counterparts with normal hearing. However, children with CIs appeared to rely more on surrounding semantic information than their normal-hearing counterparts.
Affiliation(s)
- Katherine M. Simeon
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Tina M. Grieco-Calub
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Hugh Knowles Hearing Center, Northwestern University, Evanston, IL
8. Masked Sentence Recognition in Children, Young Adults, and Older Adults: Age-Dependent Effects of Semantic Context and Masker Type. Ear Hear 2020;40:1117-1126. PMID: 30601213; DOI: 10.1097/aud.0000000000000692.
Abstract
OBJECTIVES Masked speech recognition in normal-hearing listeners depends in part on masker type and semantic context of the target. Children and older adults are more susceptible to masking than young adults, particularly when the masker is speech. Semantic context has been shown to facilitate noise-masked sentence recognition in all age groups, but it is not known whether age affects a listener's ability to use context with a speech masker. The purpose of the present study was to evaluate the effect of masker type and semantic context of the target as a function of listener age. DESIGN Listeners were children (5 to 16 years), young adults (19 to 30 years), and older adults (67 to 81 years), all with normal or near-normal hearing. Maskers were either speech-shaped noise or two-talker speech, and targets were either semantically correct (high context) sentences or semantically anomalous (low context) sentences. RESULTS As predicted, speech reception thresholds were lower for young adults than either children or older adults. Age effects were larger for the two-talker masker than the speech-shaped noise masker, and the effect of masker type was larger in children than older adults. Performance tended to be better for targets with high than low semantic context, but this benefit depended on age group and masker type. In contrast to adults, children benefitted less from context in the two-talker speech masker than the speech-shaped noise masker. Context effects were small compared with differences across age and masker type. CONCLUSIONS Different effects of masker type and target context are observed at different points across the lifespan. While the two-talker masker is particularly challenging for children and older adults, the speech masker may limit the use of semantic context in children but not adults.
9. Thompson EC, Krizman J, White-Schwoch T, Nicol T, Estabrook R, Kraus N. Neurophysiological, linguistic, and cognitive predictors of children's ability to perceive speech in noise. Dev Cogn Neurosci 2019;39:100672. PMID: 31430627; PMCID: PMC6886664; DOI: 10.1016/j.dcn.2019.100672.
Abstract
Hearing in noisy environments is a complicated task that engages attention, memory, linguistic knowledge, and precise auditory-neurophysiological processing of sound. Accumulating evidence in school-aged children and adults suggests these mechanisms vary with the task’s demands. For instance, co-located speech and noise demands a large cognitive load and recruits working memory, while spatially separating speech and noise diminishes this load and draws on alternative skills. Past research has focused on one or two mechanisms underlying speech-in-noise perception in isolation; few studies have considered multiple factors in tandem, or how they interact during critical developmental years. This project sought to test complementary hypotheses involving neurophysiological, cognitive, and linguistic processes supporting speech-in-noise perception in young children under different masking conditions (co-located, spatially separated). Structural equation modeling was used to identify latent constructs and examine their contributions as predictors. Results reveal cognitive and language skills operate as a single factor supporting speech-in-noise perception under different masking conditions. While neural coding of the F0 supports perception in both co-located and spatially separated conditions, neural timing predicts perception of spatially separated listening exclusively. Together, these results suggest co-located and spatially separated speech-in-noise perception draw on similar cognitive/linguistic skills, but distinct neural factors, in early childhood.
Affiliation(s)
- Elaine C Thompson
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Trent Nicol
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Ryne Estabrook
- Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA; Institute for Neuroscience, Northwestern University, Evanston, IL, USA; Department of Neurobiology, Northwestern University, Evanston, IL, USA; Department of Otolaryngology, Northwestern University, Chicago, IL, USA
10. Bent T, Holt RF, Miller K, Libersky E. Sentence Context Facilitation for Children's and Adults' Recognition of Native- and Nonnative-Accented Speech. J Speech Lang Hear Res 2019;62:423-433. PMID: 30950691; DOI: 10.1044/2018_jslhr-h-18-0273.
Abstract
Purpose Supportive semantic and syntactic information can increase children's and adults' word recognition accuracy in adverse listening conditions. However, there are inconsistent findings regarding how a talker's accent or dialect modulates these context effects. Here, we compare children's and adults' abilities to capitalize on sentence context to overcome misleading acoustic-phonetic cues in nonnative-accented speech. Method Monolingual American English-speaking 5- to 7-year-old children (n = 90) and 18- to 35-year-old adults (n = 30) were presented with full sentences or the excised final word from each of the sentences and repeated what they heard. Participants were randomly assigned to 1 of 2 conditions: native-accented (Midland American English) or nonnative-accented (Spanish- and Japanese-accented English) speech. Participants also completed the NIH Toolbox Picture Vocabulary Test. Results Children and adults benefited from sentence context for both native- and nonnative-accent talkers, but the benefit was greater for nonnative than native talkers. Furthermore, adults showed a greater context benefit than children for nonnative talkers, but the 2 age groups showed a similar benefit for native talkers. Children's age and vocabulary scores both correlated with context benefit. Conclusions The cognitive-linguistic development that occurs between the early school-age years and adulthood may increase listeners' abilities to capitalize on top-down cues for lexical identification with nonnative-accented speech. These results have implications for the perception of speech with source degradation, including speech sound disorders, hearing loss, or signal processing that does not faithfully represent the original signal.
Affiliation(s)
- Tessa Bent
- Department of Speech and Hearing Sciences, Indiana University, Bloomington
- Rachael Frush Holt
- Department of Speech and Hearing Science, The Ohio State University, Columbus
- Katherine Miller
- Department of Speech and Hearing Science, The Ohio State University, Columbus
- Emma Libersky
- Department of Speech and Hearing Science, The Ohio State University, Columbus
11. Creel SC. Protracted perceptual learning of auditory pattern structure in spoken language. Psychol Learn Motiv 2019. DOI: 10.1016/bs.plm.2019.07.003.
12. Simeon KM, Bicknell K, Grieco-Calub TM. Belief Shift or Only Facilitation: How Semantic Expectancy Affects Processing of Speech Degraded by Background Noise. Front Psychol 2018;9:116. PMID: 29472883; PMCID: PMC5809983; DOI: 10.3389/fpsyg.2018.00116.
Abstract
Individuals use semantic expectancy - applying conceptual and linguistic knowledge to speech input - to improve the accuracy and speed of language comprehension. This study tested how adults use semantic expectancy in quiet and in the presence of speech-shaped broadband noise at -7 and -12 dB signal-to-noise ratio. Twenty-four adults (22.1 ± 3.6 years, mean ±SD) were tested on a four-alternative-forced-choice task whereby they listened to sentences and were instructed to select an image matching the sentence-final word. The semantic expectancy of the sentences was unrelated to (neutral), congruent with, or conflicting with the acoustic target. Congruent expectancy improved accuracy and conflicting expectancy decreased accuracy relative to neutral, consistent with a theory where expectancy shifts beliefs toward likely words and away from unlikely words. Additionally, there were no significant interactions of expectancy and noise level when analyzed in log-odds, supporting the predictions of ideal observer models of speech perception.
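The "no interaction in log-odds" finding is what an ideal-observer account predicts: if semantic expectancy shifts beliefs by a constant amount on the log-odds scale, the expectancy benefit is identical at every noise level once accuracy is transformed to log-odds. A minimal numeric illustration of that prediction follows; the accuracies and shift size are hypothetical, not the study's data:

```python
import math

def logit(p):
    """Accuracy expressed in log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical neutral-expectancy accuracies at two noise levels.
neutral = {"quiet": 0.95, "-7 dB SNR": 0.80}

# Ideal-observer assumption: congruent context adds a constant
# log-odds shift, independent of noise level.
SHIFT = 1.0
congruent = {cond: inv_logit(logit(p) + SHIFT) for cond, p in neutral.items()}

# The expectancy benefit, measured in log-odds, is the same in both
# noise conditions, i.e., no expectancy-by-noise interaction.
benefits = [logit(congruent[c]) - logit(neutral[c]) for c in neutral]
assert all(abs(b - SHIFT) < 1e-9 for b in benefits)
```

Note that the same constant shift yields unequal benefits in raw percent correct, which is why the interaction test is run on the log-odds scale.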
Affiliation(s)
- Katherine M. Simeon
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Klinton Bicknell
- Department of Linguistics, Northwestern University, Evanston, IL, United States
- Tina M. Grieco-Calub
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Hugh Knowles Hearing Center, Northwestern University, Evanston, IL, United States
13. McDonald M, Gross M, Buac M, Batko M, Kaushanskaya M. Processing and Comprehension of Accented Speech by Monolingual and Bilingual Children. Lang Learn Dev 2017;14:113-129. PMID: 30774569; PMCID: PMC6377242; DOI: 10.1080/15475441.2017.1404467.
Abstract
This study tested the effect of Spanish-accented speech on sentence comprehension in children with different degrees of Spanish experience. The hypothesis was that earlier acquisition of Spanish would be associated with enhanced comprehension of Spanish-accented speech. Three groups of 5- to 6-year-old children were tested: monolingual English-speaking children, simultaneous Spanish-English bilingual children, and early English-Spanish bilingual children. The children completed a semantic judgment task in English on semantically meaningful and nonsensical sentences produced by a native English speaker and a native Spanish speaker characterized by a strong Spanish accent. All children were slower to respond to foreign-accented speech, independent of language background. Monolingual and early bilingual children showed reduced comprehension accuracy of accented speech, but only for nonsensical sentences. Simultaneous bilingual children performed similarly to the other groups for meaningful contexts but were not as strongly affected by accent for nonsensical contexts. Together, the findings suggest that children's language background has only a minor influence on the processing of accented speech.
14. Smiljanic R, Gilbert RC. Acoustics of Clear and Noise-Adapted Speech in Children, Young, and Older Adults. J Speech Lang Hear Res 2017;60:3081-3096. PMID: 29075775; DOI: 10.1044/2017_jslhr-s-16-0130.
Abstract
PURPOSE This study investigated acoustic-phonetic modifications produced in noise-adapted speech (NAS) and clear speech (CS) by children, young adults, and older adults. METHOD Ten children (11-13 years of age), 10 young adults (18-29 years of age), and 10 older adults (60-84 years of age) read sentences in conversational and clear speaking style in quiet and in noise. A number of acoustic measurements were obtained. RESULTS NAS and CS were characterized by a decrease in speaking rate and an increase in 1-3 kHz energy, sound pressure level (SPL), vowel space area (VSA), and harmonics-to-noise ratio. NAS increased fundamental frequency (F0) mean and decreased jitter and shimmer. CS increased frequency and duration of pauses. Older adults produced the slowest speaking rate, longest pauses, and smallest increase in F0 mean, 1-3 kHz energy, and SPL when speaking clearly. They produced the smallest increases in VSA in NAS and CS. Children slowed down less, increased the VSA least, increased harmonics-to-noise ratio, and decreased jitter and shimmer most in CS. Children increased mean F0 and F1 most in noise. CONCLUSIONS Findings have implications for a model of speech production in healthy speakers as well as the potential to aid in clinical decision making for individuals with speech disorders, particularly dysarthria.
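Several of the acoustic measures above have simple closed-form definitions; vowel space area, for instance, is commonly computed as the area of the polygon spanned by the mean (F1, F2) values of the corner vowels. A minimal sketch (a generic illustration with invented formant values, not the authors' exact procedure):

```python
def polygon_area(points):
    """Shoelace formula: area of a polygon given (x, y) vertices in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical mean (F1, F2) values in Hz for the corner vowels /i/, /ae/,
# /a/, /u/, listed so they trace the quadrilateral without crossing itself.
corner_vowels = [(300, 2300), (750, 1750), (700, 1100), (350, 900)]
vsa_hz2 = polygon_area(corner_vowels)  # vowel space area in Hz^2
```

A larger value indicates more peripheral, more dispersed corner vowels, which is why VSA expansion is taken as one acoustic signature of clear speech.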
|
15
|
Smiljanic R, Gilbert RC. Intelligibility of Noise-Adapted and Clear Speech in Child, Young Adult, and Older Adult Talkers. J Speech Lang Hear Res 2017; 60:3069-3080. [PMID: 29075748 DOI: 10.1044/2017_jslhr-s-16-0165]
Abstract
PURPOSE This study examined intelligibility of conversational and clear speech sentences produced in quiet and in noise by children, young adults, and older adults. Relative talker intelligibility was assessed across speaking styles. METHOD Sixty-one young adult participants listened to sentences mixed with speech-shaped noise at -5 dB signal-to-noise ratio. The analyses examined percent correct scores across conversational, clear, and noise-adapted conditions and the three talker groups. Correlation analyses examined whether talker intelligibility is consistent across speaking style adaptations. RESULTS Noise-adapted and clear speech significantly enhanced intelligibility for young adult listeners. The intelligibility improvement varied across the three talker groups. Notably, intelligibility benefit was smallest for children's speaking style modifications. Listeners also perceived speech produced in noise by older adults to be less intelligible compared to the younger talkers. Talker intelligibility was correlated strongly between conversational and clear speech in quiet, but not for conversational speech produced in quiet and in noise. CONCLUSIONS Results provide evidence that intelligibility variation related to age and communicative barrier has the potential to aid clinical decision making for individuals with speech disorders, particularly dysarthria.
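Presenting sentences "mixed with speech-shaped noise at -5 dB signal-to-noise ratio" amounts to scaling the masker relative to the RMS level of the speech before summing. A generic sketch of SNR mixing (not the authors' code; plain lists of samples for brevity):

```python
import math

def rms(samples):
    """Root-mean-square level of a sequence of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 20*log10(rms(speech)/rms(scaled noise)) equals
    `snr_db`, then add it to the speech sample by sample.
    Assumes both sequences have the same length."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]
```

At -5 dB SNR the masker RMS ends up about 1.8 times that of the speech, which is why intelligibility differences between talkers and speaking styles become measurable rather than sitting at ceiling.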
|
16
|
Holt RF, Bent T. Children's Use of Semantic Context in Perception of Foreign-Accented Speech. J Speech Lang Hear Res 2017; 60:223-230. [PMID: 28056139 DOI: 10.1044/2016_jslhr-h-16-0014]
Abstract
PURPOSE The purpose of this study is to evaluate children's use of semantic context to facilitate foreign-accented word recognition in noise. METHOD Monolingual American English speaking 5- to 7-year-olds (n = 168) repeated either Mandarin- or American English-accented sentences in babble, half of which contained final words that were highly predictable from context. The same final words were presented in the low- and high-predictability sentences. RESULTS Word recognition scores were better in the high- than low-predictability contexts. Scores improved with age and were higher for the native than the Mandarin accent. The oldest children saw the greatest benefit from context; however, context benefit was similar regardless of speaker accent. CONCLUSION Despite significant acoustic-phonetic deviations from native norms, young children capitalize on contextual cues when presented with foreign-accented speech. Implications for spoken word recognition in children with speech, language, and hearing differences are discussed.
Affiliation(s)
- Rachael Frush Holt, Department of Speech and Hearing Science, The Ohio State University, Columbus
- Tessa Bent, Department of Speech and Hearing Sciences, Indiana University, Bloomington
|
17
|
Buss E, Leibold LJ, Hall JW. Effect of response context and masker type on word recognition in school-age children and adults. J Acoust Soc Am 2016; 140:968. [PMID: 27586729 PMCID: PMC5392093 DOI: 10.1121/1.4960587]
Abstract
In adults, masked speech recognition improves with the provision of a closed set of response alternatives. The present study evaluated whether school-age children (5-13 years) benefit to the same extent as adults from a forced-choice context, and whether this effect depends on masker type. Experiment 1 compared masked speech reception thresholds for disyllabic words in either an open-set or a four-alternative forced-choice (4AFC) task. Maskers were speech-shaped noise or two-talker speech. Experiment 2 compared masked speech reception thresholds for monosyllabic words in two 4AFC tasks, one in which the target and foils were phonetically similar and one in which they were dissimilar. Maskers were speech-shaped noise, amplitude-modulated noise, or two-talker speech. For both experiments, it was predicted that children would not benefit from the information provided by the 4AFC context to the same degree as adults, particularly when the masker was complex (two-talker) or when audible speech cues were temporally sparse (modulated-noise). Results indicate that young children do benefit from a 4AFC context to the same extent as adults in speech-shaped noise and amplitude-modulated noise, but the benefit of context increases with listener age for the two-talker speech masker.
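Masked speech reception thresholds like those in both experiments are typically estimated with an adaptive staircase that converges on a fixed percent-correct point. A generic one-down, one-up sketch (an illustration of the technique, not the procedure used in this study; `respond` is a hypothetical listener callback returning True for a correct trial):

```python
def staircase_srt(respond, start_snr_db=0.0, step_db=2.0, n_reversals=6):
    """One-down, one-up adaptive track converging on 50% correct:
    lower the SNR after a correct response, raise it after an error,
    and estimate threshold as the mean SNR at the reversal points."""
    snr = start_snr_db
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        direction = -1 if respond(snr) else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)  # track changed direction: a reversal
        last_direction = direction
        snr += direction * step_db
    return sum(reversals) / len(reversals)
```

With a deterministic listener who is correct whenever the SNR is at or above some true threshold, the track simply oscillates around that point and the reversal mean recovers it.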
Affiliation(s)
- Emily Buss, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Lori J Leibold, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Joseph W Hall, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
|
18
|
Creel SC, Rojo DP, Paullada AN. Effects of contextual support on preschoolers' accented speech comprehension. J Exp Child Psychol 2016; 146:156-80. [PMID: 26950507 DOI: 10.1016/j.jecp.2016.01.018]
Abstract
Young children often hear speech in unfamiliar accents, but relatively little research characterizes their comprehension capacity. The current study tested preschoolers' comprehension of familiar-accented versus unfamiliar-accented speech with varying levels of contextual support from sentence frames (full sentences vs. isolated words) and from visual context (four salient pictured alternatives vs. the absence of salient visual referents). The familiar accent advantage was more robust when visual context was absent, suggesting that previous findings of good accent comprehension in infants and young children may result from ceiling effects in easier tasks (e.g., picture fixation, picture selection) relative to the more difficult tasks often used with older children and adults. In contrast to prior work on mispronunciations, where most errors were novel object responses, children in the current study did not select novel object referents above chance levels. This suggests that some property of accented speech may dissuade children from inferring that an unrecognized familiar-but-accented word has a novel referent. Finally, children showed detectable accent processing difficulty despite presumed incidental community exposure. Results suggest that preschoolers' accented speech comprehension is still developing, consistent with theories of protracted development of speech processing.
Affiliation(s)
- Sarah C Creel, University of California, San Diego, La Jolla, CA 92093, USA
- Dolly P Rojo, University of Texas at Austin, Austin, TX 78712, USA
|
19
|
Effect of minimal/mild hearing loss on children's speech understanding in a simulated classroom. Ear Hear 2015; 36:136-44. [PMID: 25170780 DOI: 10.1097/aud.0000000000000092]
Abstract
OBJECTIVES While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. DESIGN Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic head-tracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each, presented audio-only by a single talker either from the loudspeaker at 0 degrees azimuth or randomly from the five loudspeakers. RESULTS Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL.
CONCLUSIONS The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.
|
20
|
Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss. Ear Hear 2015; 35:519-32. [PMID: 24699702 DOI: 10.1097/aud.0000000000000040]
Abstract
OBJECTIVES The authors have demonstrated that the limited bandwidth associated with conventional hearing aid amplification prevents useful high-frequency speech information from being transmitted. The purpose of this study was to examine the efficacy of two popular frequency-lowering algorithms and one novel algorithm (spectral envelope decimation) in adults with mild to moderate sensorineural hearing loss and in normal-hearing controls. DESIGN Participants listened monaurally through headphones to recordings of nine fricatives and affricates spoken by three women in a vowel-consonant context. Stimuli were mixed with speech-shaped noise at 10 dB SNR and recorded through a Widex Inteo IN-9 and a Phonak Naída UP V behind-the-ear (BTE) hearing aid. Frequency transposition (FT) is used in the Inteo, and nonlinear frequency compression (NFC) is used in the Naída. Both devices were programmed to lower frequencies above 4 kHz, but neither device could lower frequencies above 6 to 7 kHz. Each device was tested under four conditions: frequency lowering deactivated (FT-off and NFC-off), frequency lowering activated (FT and NFC), wideband (WB), and a fourth condition unique to each hearing aid. The WB condition was constructed by mixing recordings from the first condition with high-pass filtered versions of the source stimuli. For the Inteo, the fourth condition consisted of recordings made with the same settings as the first, but with the noise-reduction feature activated (FT-off). For the Naída, the fourth condition was the same as the first condition except that source stimuli were preprocessed by a novel frequency compression algorithm, spectral envelope decimation (SED), designed in MATLAB, which allowed for a more complete lowering of the 4 to 10 kHz input band. A follow-up experiment with NFC used Phonak's Naída SP V BTE, which could also lower a greater range of input frequencies. 
RESULTS For normal-hearing and hearing-impaired listeners, performance with FT was significantly worse compared with that in the other conditions. Consistent with previous findings, performance for the hearing-impaired listeners in the WB condition was significantly better than in the FT-off condition. In addition, performance in the SED and WB conditions were both significantly better than in the NFC-off condition and the NFC condition with 6 kHz input bandwidth. There were no significant differences between SED and WB, indicating that improvements in fricative identification obtained by increasing bandwidth can also be obtained using this form of frequency compression. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for the phonemes /s/ and /z/. In the follow-up experiment, performance in the NFC condition with 10 kHz input bandwidth was significantly better than NFC-off, replicating the results obtained with SED. Furthermore, listeners who performed poorly with NFC-off tended to show the most improvement with NFC. CONCLUSIONS Improvements in the identification of stimuli chosen to be sensitive to the effects of frequency lowering have been demonstrated using two forms of frequency compression (NFC and SED) in individuals with mild to moderate high-frequency sensorineural hearing loss. However, negative results caution against using FT for this population. Results also indicate that the advantage of an extended bandwidth as reported here and elsewhere applies to the input bandwidth for frequency compression (NFC/SED) when the start frequency is ≥4 kHz.
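The frequency-lowering schemes compared above all remap input frequencies above a start frequency into a lower output band. One common way to formulate nonlinear frequency compression is to divide the log-frequency distance above the cutoff by a compression ratio; the sketch below is an idealized illustration of that mapping, not the Phonak or MATLAB (SED) implementation tested in the study:

```python
def nfc_map(f_hz, start_hz=4000.0, ratio=2.0):
    """Idealized nonlinear frequency compression: identity below `start_hz`;
    above it, log-frequency distance from the cutoff is divided by `ratio`."""
    if f_hz <= start_hz:
        return f_hz
    return start_hz * (f_hz / start_hz) ** (1.0 / ratio)

# With a 4 kHz start frequency and a 2:1 ratio, a 10 kHz component lands
# near 6.3 kHz, while everything at or below 4 kHz is left untouched.
lowered = nfc_map(10000.0)
```

The map makes the bandwidth argument concrete: fricative energy near 8-10 kHz is only helpful if the device's input bandwidth actually reaches it before compression, which is the distinction the 6 kHz versus 10 kHz input-bandwidth conditions test.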
|
21
|
Lewis DE, Manninen CM, Valente DL, Smith NA. Children's understanding of instructions presented in noise and reverberation. Am J Audiol 2014; 23:326-36. [PMID: 25036922 DOI: 10.1044/2014_aja-14-0020]
Abstract
PURPOSE This study examined children's ability to follow audio-visual instructions presented in noise and reverberation. METHOD Children (8-12 years of age) with normal hearing followed instructions in noise or noise plus reverberation. Performance was compared for a single talker (ST), multiple talkers speaking one at a time (MT), and multiple talkers with competing comments from other talkers (MTC). Working memory was assessed using measures of digit span. RESULTS Performance was better for children in noise than for those in noise plus reverberation. In noise, performance for ST was better than for either MT or MTC, and performance for MT was better than for MTC. In noise plus reverberation, performance for ST and MT was better than for MTC, but there were no differences between ST and MT. Digit span did not account for significant variance in the task. CONCLUSIONS Overall, children performed better in noise than in noise plus reverberation. However, differing patterns across conditions for the 2 environments suggested that the addition of reverberation may have affected performance in a way that was not apparent in noise alone. Continued research is needed to examine the differing effects of noise and reverberation on children's speech understanding.
Affiliation(s)
- Crystal M. Manninen, Boys Town National Research Hospital, Omaha, NE, and University of Nebraska–Lincoln
|
22
|
Crukley J, Scollie SD. The Effects of Digital Signal Processing Features on Children's Speech Recognition and Loudness Perception. Am J Audiol 2014; 23:99-115. [DOI: 10.1044/1059-0889(2013/13-0024)]
Abstract
Purpose
The purpose of this study was to determine the effects of hearing instruments fitted to Desired Sensation Level version 5 (DSL v5) prescriptive targets and equipped with directional microphones and digital noise reduction (DNR) on children's sentence recognition in noise and loudness perception in a classroom environment.
Method
Ten children (ages 8–17 years) with stable, congenital sensorineural hearing losses participated in the study. Participants were fitted bilaterally with behind-the-ear hearing instruments set to DSL v5 prescriptive targets. Sentence recognition in noise was evaluated using the Bamford–Kowal–Bench Speech in Noise Test (Niquette et al., 2003). Loudness perception was evaluated using a modified version of the Contour Test of Loudness Perception (Cox, Alexander, Taylor, & Gray, 1997).
Results
Children's sentence recognition in noise was significantly better when using directional microphones alone or in combination with DNR than when using omnidirectional microphones alone or in combination with DNR. Children's loudness ratings for sounds above 72 dB SPL were lowest when children were fitted with the DSL v5 Noise prescription combined with directional microphones. DNR use showed no effect on loudness ratings.
Conclusion
Use of the DSL v5 Noise prescription with a directional microphone improved sentence recognition in noise performance and reduced loudness perception ratings for loud sounds relative to a typical clinical reference fitting with the DSL v5 Quiet prescription with no digital signal processing features enabled. Potential clinical strategies are discussed.
Affiliation(s)
- Jeffery Crukley, The Brain & Mind Institute, The University of Western Ontario, London, Ontario, Canada
- Susan D. Scollie, National Centre for Audiology, The University of Western Ontario, London, Ontario, Canada
|
23
|
Smiljanic R, Sladen D. Acoustic and semantic enhancements for children with cochlear implants. J Speech Lang Hear Res 2013; 56:1085-1096. [PMID: 23785186 DOI: 10.1044/1092-4388(2012/12-0097)]
Abstract
PURPOSE In this study, the authors examined how signal clarity interacts with the use of sentence context information in determining speech-in-noise recognition for children with cochlear implants and children with normal hearing. METHOD One hundred and twenty sentences in which the final word varied in predictability (high vs. low semantic context) were produced in conversational and clear speech. Nine children with cochlear implants and 9 children with normal hearing completed the sentence-in-noise listening tests and a standardized language measure. RESULTS Word recognition in noise improved significantly for both groups of children for high-predictability sentences in clear speech. Children with normal hearing benefited more from each source of information compared with children with cochlear implants. There was a significant correlation between more developed language skills and the ability to use contextual enhancements. The smaller context gain in clear speech for children with cochlear implants is in accord with the effortfulness hypothesis (McCoy et al., 2005) and points to the cumulative effects of noise throughout the processing system. CONCLUSION Modifications of the speech signal and the context of the utterances through changes in the talker output hold substantial promise as a communication enhancement technique for both children with cochlear implants and children with normal hearing.
|
24
|
Freyman RL, Griffin AM, Macmillan NA. Priming of lowpass-filtered speech affects response bias, not sensitivity, in a bandwidth discrimination task. J Acoust Soc Am 2013; 134:1183-92. [PMID: 23927117 PMCID: PMC3745481 DOI: 10.1121/1.4807824]
Abstract
Priming is demonstrated when prior information about the content of a distorted, filtered, or masked auditory message improves its clarity. The current experiment attempted to quantify aspects of priming by determining its effects on performance and bias in a lowpass-filter-cutoff frequency discrimination task. Nonsense sentences recorded by a female talker were sharply lowpass filtered at a nominal cutoff frequency (F) of 0.5 or 0.75 kHz or at a higher cutoff frequency (F + ΔF). The listeners' task was to determine which interval of a two-interval-forced-choice trial contained the nonsense sentence filtered with F + ΔF. On priming trials, the interval 1 sentence was displayed on a computer screen prior to the auditory portion of the trial. The prime markedly affected bias, increasing the number of correct and incorrect interval 1 responses but did not affect overall discrimination performance substantially. These findings were supported through a second experiment that required listeners to make confidence judgments. The paradigm has the potential to help quantify the limits of speech perception when uncertainty about the auditory message is removed.
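Separating discrimination performance from response bias, as this experiment does, follows standard equal-variance Gaussian signal detection theory: z-transformed hit and false-alarm rates yield sensitivity (d′) and criterion (c). A generic sketch with invented response proportions (not the authors' analysis):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c from a
    hit rate and a false-alarm rate (both strictly between 0 and 1)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Invented proportions: a prime that pulls responses toward "interval 1"
# raises hits and false alarms together, shifting c while leaving d' alone.
d_base, c_base = dprime_and_criterion(0.69, 0.31)           # no prime
d_primed, c_primed = dprime_and_criterion(0.84, 0.50)       # primed
```

In this toy case d′ barely moves while c shifts markedly negative, which is the signature pattern the abstract describes: the prime changes what listeners report, not how well they discriminate.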
Affiliation(s)
- Richard L Freyman, Department of Communication Disorders, University of Massachusetts, 358 North Pleasant Street, Amherst, Massachusetts 01003, USA
|
25
|
Krishnan S, Leech R, Aydelott J, Dick F. School-age children's environmental object identification in natural auditory scenes: Effects of masking and contextual congruence. Hear Res 2013; 300:46-55. [DOI: 10.1016/j.heares.2013.03.003]
|
26
|
Prodi N, Visentin C, Feletti A. On the perception of speech in primary school classrooms: ranking of noise interference and of age influence. J Acoust Soc Am 2013; 133:255-68. [PMID: 23297900 DOI: 10.1121/1.4770259]
Abstract
It is well documented that the interference of noise in the classroom puts younger pupils at a disadvantage in speech perception tasks. Nevertheless, the dependence of this phenomenon on the type of noise, and the way it is realized for each class by a specific combination of intelligibility and effort, have not been fully investigated. Building on a previous laboratory study of "listening efficiency," a metric that combines accuracy and latency measures, this work tackles these problems to better understand the basic mechanisms governing the speech perception performance of pupils in noisy classrooms. Listening tests were conducted in real classrooms with a large number of students, and tests in quiet were also administered. The statistical analysis is based on stochastic ordering and clarifies the behavior of the classes and the different impacts of the noises on performance. It is found that joint babble and activity noise has the worst effect on performance, whereas tapping and external traffic noises are less disruptive.
Affiliation(s)
- Nicola Prodi, Dipartimento di Ingegneria, Università di Ferrara, via Saragat 1, 44122 Ferrara, Italy
|
27
|
Abstract
OBJECTIVES The purpose of this study was to test the hypothesis that a carrier phrase can improve word recognition performance for both children and adults by providing an auditory grouping cue. It was hypothesized that the carrier phrase would benefit listeners under conditions in which they have difficulty in perceptually separating the target word from the competing background. To test this hypothesis, word recognition was examined for maskers that were believed to vary in their ability to create perceptual masking. In addition to determining the conditions under which a carrier-phrase benefit is obtained, age-related differences in both susceptibility to masking and carrier-phrase benefit were examined. DESIGN Two experiments were conducted to characterize developmental effects in the ability to benefit from a carrier phrase (i.e., "say the word") before the target word. Using an open-set task, word recognition performance was measured for three listener age groups: 5- to 7-year-old children, 8- to 10-year-old children, and adults (18-30 years). For all experiments, target words were presented in each of two carrier-phrase conditions: (1) carrier-present and (2) carrier-absent. Across experiments, word recognition performance was assessed in the presence of multi-talker babble (Experiment 1), two-talker speech (Experiment 2), or speech-shaped noise (Experiment 2). RESULTS Children's word recognition performance was generally poorer than that of adults for all three masker conditions. Differences between the two age groups of children were seen for both speech-shaped noise and multi-talker babble, with 5- to 7-year-olds performing more poorly than 8- to 10-year-olds. However, 5- to 7-year-olds and 8- to 10-year-olds performed similarly for the two-talker masker. 
Despite developmental effects in susceptibility to masking, both groups of children and adults showed a carrier-phrase benefit in multi-talker babble (Experiment 1) and in the two-talker masker (Experiment 2). The magnitude of the carrier-phrase benefit was similar for a given masker type across age groups, but the carrier-phrase benefit was greater in the presence of the two-talker masker than in multi-talker babble. Specifically, the children's average carrier-phrase benefit was 7.1% for multi-talker and 16.8% for the two-talker masker condition. No carrier-phrase benefit was observed for any age group in the presence of speech-shaped noise. CONCLUSIONS Effects of auditory masking on word recognition performance were greater for children than for adults. The time course of development for susceptibility to masking seems to be more prolonged for a two-talker speech masker than for multi-talker babble or speech-shaped noise. Unique to the present study, this work suggests that a carrier phrase can provide an effective auditory grouping cue for both children and adults under conditions expected to produce substantial perceptual masking.
Affiliation(s)
- Angela Yarnell Bonino, Department of Allied Health Sciences, CB 7190, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
|
28
|
Crukley J, Scollie SD. Children’s Speech Recognition and Loudness Perception With the Desired Sensation Level v5 Quiet and Noise Prescriptions. Am J Audiol 2012; 21:149-62. [DOI: 10.1044/1059-0889(2012/12-0002)]
Abstract
Purpose
To determine whether Desired Sensation Level (DSL) v5 Noise is a viable hearing instrument prescriptive algorithm for children, in comparison with DSL v5 Quiet. In particular, the authors compared children’s performance on measures of consonant recognition in quiet, sentence recognition in noise, and loudness perception when fitted with DSL v5 Quiet and Noise.
Method
Eleven children (ages 8 to 17 years) with stable, congenital sensorineural hearing losses participated in the study. Participants were fitted bilaterally to DSL v5 prescriptions with behind-the-ear hearing instruments. The order of prescription was counterbalanced across participants. Repeated measures analysis of variance was used to compare performance between prescriptions.
Results
Use of the Noise prescription resulted in a significant decrease in consonant recognition in quiet with low-level input, but no difference with average-level input. There was no significant difference in sentence-in-noise recognition between the two prescriptions. Loudness ratings for input levels above 72 dB SPL were significantly lower with the Noise prescription.
Conclusions
Average-level consonant recognition in quiet was preserved and aversive loudness was alleviated by the Noise prescription relative to the Quiet prescription, which suggests that the DSL v5 Noise prescription may be an effective approach to managing the nonquiet listening needs of children with hearing loss.
|
29
|
Abstract
OBJECTIVES The primary goal of this study was to investigate how speech perception is altered by the provision of a preview or "prime" of a sample of speech just before it is presented in masking. A same-different test paradigm was developed which enabled the effect of priming to be measured with energetic maskers in addition to those that most likely produced both energetic and informational masking. Using this paradigm, the benefit of priming in overcoming energetic and informational masking was compared. DESIGN Twenty-four normal-hearing subjects listened to nonsense sentences presented in a background of competing speech (two-talker babble) or one of two types of speech-shaped noise. Both target and masker were presented via loudspeaker directly in front of the listeners. In the baseline condition, the listeners were then shown a sentence on a computer screen that either matched the auditory target sentence exactly or contained a replacement for one of the three target key words. Their task was to judge whether the printed sentence matched the auditory target and respond via computer keyboard. In the first experimental condition, the printed sentence preceded rather than followed the auditory presentation (the priming condition). In the second experimental condition, the perception of spatial separation was created between target and masker by presenting the masker from two loudspeakers (front and 60° to the right) and imposing a 4-msec delay in the masker coming from the front loudspeaker. This resulted in the target being heard from the front while, because of the precedence effect, the masker was heard well to the right (the spatial condition). In a third experimental condition, spatial separation and priming were combined. A total of five signal-to-noise ratios were tested for each masker. RESULTS The competing speech masker produced more masking than noise, consistent with previous findings. 
For the competing speech masker, the signal-to-noise ratio for 80% correct performance was approximately 6.7 dB lower when the listeners read the sentences first (the priming condition) than in the baseline condition. This priming effect was similar to the improvement obtained when the target and masker were separated spatially. Significant priming effects were also observed with speech-shaped noise maskers, and when there was perceived spatial separation between target and masker, conditions in which informational masking was believed to have been minimal. There seemed to be an additive effect of spatial separation and priming in the two-talker babble condition. CONCLUSIONS (1) Priming was effective in improving speech perception in all conditions, including those consisting of primarily energetic masking. (2) It is not clear how much benefit from priming could be attributed to release from informational masking. (3) Performance on the same-different task was linearly related to performance on an open-set speech recognition task using the same target and masker.
|
30
|
Valente DL, Plevinsky HM, Franco JM, Heinrichs-Graham EC, Lewis DE. Experimental investigation of the effects of the acoustical conditions in a simulated classroom on speech recognition and learning in children. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 131:232-46. [PMID: 22280587 PMCID: PMC3283898 DOI: 10.1121/1.3662059] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2010] [Revised: 10/17/2011] [Accepted: 10/18/2011] [Indexed: 05/22/2023]
Abstract
The potential effects of acoustical environment on speech understanding are especially important as children enter school, where students' ability to hear and understand complex verbal information is critical to learning. However, this ability is compromised by widely varied and unfavorable classroom acoustics. The extent to which unfavorable classroom acoustics affect children's performance on longer learning tasks is largely unknown, as most research has focused on testing children using words, syllables, or sentences as stimuli. In the current study, a simulated classroom environment was used to measure comprehension performance on two classroom learning activities: a discussion and a lecture. Comprehension performance was measured for groups of elementary-aged students in one of four environments with varied reverberation times and background noise levels. The reverberation time was either 0.6 or 1.5 s, and the signal-to-noise ratio was either +10 or +7 dB. Performance was compared with that of adult subjects as well as with sentence recognition in the same conditions. Significant differences were seen in comprehension scores as a function of age and condition; both increasing background noise and reverberation degraded performance on the comprehension tasks, whereas measures of sentence recognition showed minimal differences.
Affiliation(s)
- Daniel L Valente
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA.
|
31
|
Pittman A. Age-related benefits of digital noise reduction for short-term word learning in children with hearing loss. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2011; 54:1448-1463. [PMID: 21646423 DOI: 10.1044/1092-4388(2011/10-0341)] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
PURPOSE To determine the rate of word learning for children with hearing loss (HL) in quiet and in noise compared to normal-hearing (NH) peers. The effects of digital noise reduction (DNR) were examined for children with HL. METHOD Forty-one children with NH and 26 children with HL were grouped by age (8-9 years and 11-12 years). The children learned novel words associated with novel objects through a process of trial and error. Functions relating performance across trials were calculated for each child in each listening condition and were compared. RESULTS Significant effects were observed for age (older > younger) in the children with NH and listening condition (quiet > noise) in the children with HL. Significant effects of hearing status were also observed across groups (NH > HL), indicating that the children with HL required more trials to learn the new words. However, word learning improved significantly in noise with the use of DNR for the older but not for the younger children with HL. Hearing aid history and signal-to-noise ratio did not contribute to performance. CONCLUSION Word learning was significantly reduced in younger children, in noise, and in the presence of hearing loss. Age-related benefits of DNR were apparent for children over 10 years of age.
|
32
|
Lagacé J, Jutras B, Giguère C, Gagné JP. Speech perception in noise: exploring the effect of linguistic context in children with and without auditory processing disorder. Int J Audiol 2011; 50:385-95. [PMID: 21599614 DOI: 10.3109/14992027.2011.553204] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE The objective of this study was to investigate whether the speech perception problems in noise of children with auditory processing disorder (APD) stem from an auditory or a higher-order dysfunction. DESIGN A repeated-measures design comparing the sentence key word recognition scores of children with APD and a control group was used. Four sentence lists from the Test de phrases dans le bruit (TPB) were presented with a babble masker at four different signal-to-noise ratios. The TPB is a Canadian French adaptation of the Speech Perception in Noise test. STUDY SAMPLE Ten participants with APD between 9 and 12 years of age took part in this study, as well as ten age- and gender-matched children with no sign of APD. RESULTS Group analyses revealed that children with APD had poorer overall sentence key word recognition scores than the control group. Analysis of the difference scores between the high- and low-predictability sentences indicated that the benefit derived from linguistic context was similar between the groups. However, individual patterns of results revealed different profiles within the APD group. CONCLUSION Further study using a larger sample is warranted to deepen our understanding of the nature of APD and to identify characteristic profiles, enabling better tailoring of therapeutic programs.
Affiliation(s)
- Josée Lagacé
- École d'orthophonie et d'audiologie, University of Montreal, Canada.
|
33
|
Relationship between speech perception in noise and phonological awareness skills for children with normal hearing. Ear Hear 2011; 31:761-8. [PMID: 20562623 DOI: 10.1097/aud.0b013e3181e5d188] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Speech perception difficulties experienced by children in adverse listening environments have been well documented. It has been suggested that phonological awareness may be related to children's ability to understand speech in noise. The goal of this study was to provide data that will allow a clearer characterization of this potential relation in typically developing children. Doing so may result in a better understanding of how children learn to listen in noise as well as providing information to identify children who are at risk for difficulties listening in noise. DESIGN Thirty-six children (5 to 7 yrs) with normal hearing participated in the study. Three phonological awareness tasks (syllable counting, initial consonant same, and phoneme deletion), representing a range of skills, were administered. For perception-in-noise tasks, nonsense syllables, monosyllabic words, and meaningful sentences with three key words were presented (50 dB SPL) at three signal-to-noise ratios (0, +5, and +10 dB). RESULTS Among the speech-in-noise tasks, there was a significant effect of signal-to-noise ratio, with children performing less well at the 0-dB signal-to-noise ratio for all stimuli. A significant age effect occurred only for word recognition, with 7-yr-olds scoring significantly higher than 5-yr-olds. For all three phonological awareness tasks, an age effect existed, with 7-yr-olds again performing significantly better than 5-yr-olds. However, when examining the relation between speech recognition in noise and phonological awareness skills, no single variable accounted for a significant part of the variance in performance on nonsense syllables, words, or sentences. However, there was an association between vocabulary knowledge and speech perception in noise.
CONCLUSIONS Although phonological awareness skills are strongly related to reading and some children with reading difficulties also demonstrate poor speech perception in noise, results of this study question a relation between phonological awareness skills and speech perception in moderate levels of noise for typically developing children with normal hearing from 5 to 7 yrs of age. Further research in this area is needed to examine possible relations among the many factors that affect both speech perception in noise and the development of phonological awareness.
|
34
|
Gustafson SJ, Pittman AL. Sentence perception in listening conditions having similar speech intelligibility indices. Int J Audiol 2010; 50:34-40. [DOI: 10.3109/14992027.2010.521198] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
35
|
Spatial Speech Perception Benefits in Young Children With Normal Hearing and Cochlear Implants. Ear Hear 2010; 31:702-13. [DOI: 10.1097/aud.0b013e3181e40dfe] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
36
|
Wightman FL, Kistler DJ, O'Bryan A. Individual differences and age effects in a dichotic informational masking paradigm. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 128:270-9. [PMID: 20649222 PMCID: PMC2921429 DOI: 10.1121/1.3436536] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2009] [Revised: 04/21/2010] [Accepted: 04/30/2010] [Indexed: 05/22/2023]
Abstract
Sixty normally-hearing listeners, ages 5 to 61 years, participated in a monaural speech understanding task designed to assess the impact of a single-talker speech masker presented to the opposite ear. The speech targets were masked by ipsilateral speech-spectrum noise. Masker level was fixed and target level was varied to estimate psychometric functions. The target/masker ratio that led to 51% correct performance in this task was taken as the baseline threshold. The impact of a modulated speech-spectrum noise, a male talker, or a female talker presented at a fixed level to the contralateral ear was quantified by the change in the baseline threshold and was assumed to reflect informational masking. The modulated-noise masker produced no informational masking across the entire age range. Speech maskers produced as much as 20 dB of informational masking for children aged 5-8 years and only 4 dB for adults. In contrast with previous studies using ipsilateral speech maskers, the male and female contralateral speech maskers produced comparable informational masking. Analyses of the developmental rate of change for informational masking and of the patterns of individual differences suggest that the informational masking produced by contralateral and ipsilateral maskers may be mediated by different mechanisms or processes.
Affiliation(s)
- Frederic L Wightman
- Department of Psychological and Brain Sciences, University of Louisville, 2301 S Third Street, Louisville, Kentucky 40292, USA.
|
37
|
Prodi N, Visentin C, Farnetani A. Intelligibility, listening difficulty and listening efficiency in auralized classrooms. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 128:172-181. [PMID: 20649212 DOI: 10.1121/1.3436563] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
To achieve effective speech communication in rooms, it is advisable not only to reach full intelligibility of words but also to minimize the effort the listener expends in recognizing the speech material. This twofold requirement is not easily captured by current room-acoustic indicators, which rely mainly either on subjective ratings by means of word recognition scores or on listeners' reported impressions of listening difficulty. In this work, the problem is tackled by introducing the concept of "listening efficiency," defined as a combination of the accuracy of intelligibility and the effort spent on achieving it. This indicator is developed here, and an application is presented in the field of classroom acoustics. Listening tests with pupils and adults were performed, and the subsequent statistical analyses indicated several interesting findings. In particular, listening efficiency is able to clearly discriminate between equal intelligibility scores obtained under different acoustical conditions, permitting room acoustics to be tailored for specific groups, such as children.
Affiliation(s)
- Nicola Prodi
- Dipartimento di Ingegneria, Universita degli Studi di Ferrara, via Saragat 1, 44100 Ferrara, Italy.
|
38
|
Lagacé J, Jutras B, Gagné JP. Auditory processing disorder and speech perception problems in noise: finding the underlying origin. Am J Audiol 2010; 19:17-25. [PMID: 20308289 DOI: 10.1044/1059-0889(2010/09-0022)] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
PURPOSE A hallmark listening problem of individuals presenting with auditory processing disorder (APD) is their poor recognition of speech in noise. The perceptual deficit underlying these listening difficulties in unfavorable listening conditions is unknown. The objective of this article was to demonstrate theoretically how to determine whether the speech recognition problems are related to an auditory dysfunction, a language-based dysfunction, or a combination of both. METHOD Tests such as the Speech Perception in Noise (SPIN) test allow the exploration of the auditory and language-based functions involved in speech perception in noise, which is not possible with most other speech-in-noise tests. Psychometric functions illustrating results from hypothetical groups of individuals with APD on the SPIN test are presented. This approach makes it possible to postulate about the origin of the speech perception problems in noise. CONCLUSION APD is a complex and heterogeneous disorder for which the underlying deficit is currently unclear. Because of their design, SPIN-like tests can potentially be used to identify the nature of the deficits underlying problems with speech perception in noise for this population. A better understanding of the difficulties with speech perception in noise experienced by many listeners with APD should lead to more efficient intervention programs.
Affiliation(s)
- Josée Lagacé
- Université de Montréal and Centre de recherche du Centre Hospitalier Universitaire Sainte-Justine, Montreal, Quebec, Canada
- Benoît Jutras
- Université de Montréal and Centre de recherche du Centre Hospitalier Universitaire Sainte-Justine, Montreal, Quebec, Canada
- Jean-Pierre Gagné
- Université de Montréal and Centre de recherche de l’Institut Universitaire de Gériatrie de Montréal
|
39
|
Choi S, Lotto A, Lewis D, Hoover B, Stelmachowicz P. Attentional modulation of word recognition by children in a dual-task paradigm. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2008; 51:1042-1054. [PMID: 18658070 PMCID: PMC2585316 DOI: 10.1044/1092-4388(2008/076)] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
PURPOSE This study investigated an account of limited short-term memory capacity for children's speech perception in noise using a dual-task paradigm. METHOD Sixty-four normal-hearing children (7-14 years of age) participated in this study. Dual tasks were repeating monosyllabic words presented in noise at 8 dB signal-to-noise ratio and rehearsing sets of 3 or 5 digits for subsequent serial recall. Half of the children were told to allocate their primary attention to word repetition and the other half to remembering digits. Dual-task performance was compared to single-task performance. Limitations in short-term memory demands required for the primary task were measured by dual-task decrements in nonprimary tasks. RESULTS Results revealed that (a) regardless of task priority, no dual-task decrements were found for word recognition, but significant dual-task decrements were found for digit recall; (b) most children did not show the ability to allocate attention preferentially to primary tasks; and (c) younger children (7- to 10-year-olds) demonstrated improved word recognition in the dual-task conditions relative to their single-task performance. CONCLUSIONS Seven- to 8-year-old children showed the greatest improvement in word recognition at the expense of the greatest decrement in digit recall during dual tasks. Several possibilities for improved word recognition in the dual-task conditions are discussed.
Affiliation(s)
- Sangsook Choi
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA.
|
40
|
The effect of amplitude modulation on intelligibility of time-varying sinusoidal speech in children and adults. ACTA ACUST UNITED AC 2008; 69:1140-51. [PMID: 18038952 DOI: 10.3758/bf03193951] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Although researchers are currently studying auditory object formation in adults, little is known about the development of this phenomenon in children. Amplitude modulation has been suggested as one of the characteristics of the speech signal that allows auditory grouping. In this experiment, we evaluated children (4 to 13 years of age) and adults to examine whether children's ability to use amplitude modulation (AM) in perception of time-varying sinusoidal (TVS) sentences is different from that of adults, and whether there are developmental changes. We evaluated performance on recognition of TVS sentences (unmodulated, amplitude-comodulated at 25, 50, 100, and 200 Hz, and amplitude-modulated using conflicting frequencies). Overall, the youngest children performed more poorly than did older children and adults. However, difference scores, defined as the percentage of phonemes correct in a given modulation condition minus the percentage correct for the unmodulated condition, showed no significant effects of age. Unlike the findings of previous studies (Carrell & Opie, 1992), these results support the ability of modulation with conflicting frequencies to improve intelligibility. The present study provides evidence that children and adults receive the same benefits (or decrements) from amplitude modulation.
|
41
|
Bradlow AR, Alexander JA. Semantic and phonetic enhancements for speech-in-noise recognition by native and non-native listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2007; 121:2339-49. [PMID: 17471746 DOI: 10.1121/1.2642103] [Citation(s) in RCA: 149] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Previous research has shown that speech recognition differences between native and proficient non-native listeners emerge under suboptimal conditions. Current evidence has suggested that the key deficit that underlies this disproportionate effect of unfavorable listening conditions for non-native listeners is their less effective use of compensatory information at higher levels of processing to recover from information loss at the phoneme identification level. The present study investigated whether this non-native disadvantage could be overcome if enhancements at various levels of processing were presented in combination. Native and non-native listeners were presented with English sentences in which the final word varied in predictability and which were produced in either plain or clear speech. Results showed that, relative to the low-predictability-plain-speech baseline condition, non-native listener final word recognition improved only when both semantic and acoustic enhancements were available (high-predictability-clear-speech). In contrast, the native listeners benefited from each source of enhancement separately and in combination. These results suggest that native and non-native listeners apply similar strategies for speech-in-noise perception: The crucial difference is in the signal clarity required for contextual information to be effective, rather than in an inability of non-native listeners to take advantage of this contextual information per se.
Affiliation(s)
- Ann R Bradlow
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA.
|
42
|
Scollie S, Seewald R, Cornelisse L, Moodie S, Bagatto M, Laurnagaray D, Beaulac S, Pumford J. The Desired Sensation Level multistage input/output algorithm. Trends Amplif 2006; 9:159-97. [PMID: 16424945 PMCID: PMC4111494 DOI: 10.1177/108471380500900403] [Citation(s) in RCA: 229] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The Desired Sensation Level (DSL) Method was revised to support hearing instrument fitting for infants, young children, and adults who use modern hearing instrument technologies, including multichannel compression, expansion, and multimemory capability. The aims of this revision are to maintain aspects of the previous versions of the DSL Method that have been supported by research, while extending the method to account for adult-child differences in preference and listening requirements. The goals of this version (5.0) include avoiding loudness discomfort, selecting a frequency response that meets audibility requirements, choosing compression characteristics that appropriately match technology to the user's needs, and accommodating the overall prescription to meet individual needs for use in various listening environments. This review summarizes the status of research on the use of the DSL Method with pediatric and adult populations and presents a series of revisions that have been made during the generation of DSL v5.0. This article concludes with case examples that illustrate key differences between the DSL v4.1 and DSL v5.0 prescriptions.
Affiliation(s)
- Susan Scollie
- National Centre for Audiology, Faculty of Health Sciences, University of Western Ontario, London, Ontario, Canada N6G 1H1.
|