1
Green GD, Jacewicz E, Santosa H, Arzbecker LJ, Fox RA. Evaluating Speaker-Listener Cognitive Effort in Speech Communication Through Brain-to-Brain Synchrony: A Pilot Functional Near-Infrared Spectroscopy Investigation. J Speech Lang Hear Res 2024;67:1339-1359. PMID: 38535722. DOI: 10.1044/2024_jslhr-23-00476.
Abstract
PURPOSE: We explore a new approach to the study of cognitive effort involved in listening to speech by measuring the brain activity of a listener in relation to the brain activity of a speaker. We hypothesize that the strength of this brain-to-brain synchrony (coupling) reflects the magnitude of cognitive effort involved in verbal communication, comprising both listening effort and speaking effort. We investigate whether interbrain synchrony is greater in native-to-native versus native-to-nonnative communication using functional near-infrared spectroscopy (fNIRS).
METHOD: Two speakers participated: a native speaker of American English and a native speaker of Korean who spoke English as a second language. Each speaker was fitted with the fNIRS cap and told short stories. The native English speaker provided the English narratives, and the Korean speaker provided both the nonnative (accented) English and the Korean narratives. In separate sessions, fNIRS data were obtained from seven English monolingual participants aged 20-24 years who listened to each speaker's stories. After listening to each story in native and nonnative English, they retold the content, and their transcripts and audio recordings were analyzed for comprehension and discourse fluency, measured as the number of hesitations and the articulation rate. No story retellings were obtained for narratives in Korean (an incomprehensible language for the English listeners). Utilizing an fNIRS technique termed sequential scanning, we quantified the brain-to-brain synchronization in each speaker-listener dyad.
RESULTS: For native-to-native dyads, multiple brain regions associated with various linguistic and executive functions were activated. Coupling was weaker for native-to-nonnative dyads, and only the brain regions associated with higher-order cognitive processes and functions were synchronized. All listeners understood the content of all stories, but they hesitated significantly more when retelling stories told in accented English. The nonnative speaker hesitated significantly more often than the native speaker and had a significantly slower articulation rate. There was no brain-to-brain coupling during listening to Korean, indicating a break in communication when listeners failed to comprehend the speaker.
CONCLUSIONS: We found that effortful speech processing decreased interbrain synchrony and delayed comprehension processes. The obtained brain-based and behavioral patterns are consistent with our proposal that cognitive effort in verbal communication pertains to both the listener and the speaker and that brain-to-brain synchrony can be an indicator of differences in their cumulative communicative effort.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25452142
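The sequential-scanning analysis itself is specified in the article and its supplemental material. Purely as an illustration of what a brain-to-brain coupling measure computes, the Python sketch below takes two oxygenated-hemoglobin (HbO) time series, one per interlocutor, and returns their peak lagged Pearson correlation; the sampling rate, lag window, and synthetic data are assumptions for the example, not the authors' pipeline.

    import numpy as np

    def max_lagged_correlation(speaker_hbo, listener_hbo, fs=10.0, max_lag_s=5.0):
        # Peak absolute Pearson correlation between two equally long, z-scored
        # HbO series over lags up to +/- max_lag_s seconds (generic sketch,
        # not the study's sequential-scanning method).
        s = (speaker_hbo - speaker_hbo.mean()) / speaker_hbo.std()
        t = (listener_hbo - listener_hbo.mean()) / listener_hbo.std()
        n, max_lag, best = len(s), int(max_lag_s * fs), 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                r = np.corrcoef(s[lag:], t[:n - lag])[0, 1]
            else:
                r = np.corrcoef(s[:n + lag], t[-lag:])[0, 1]
            best = max(best, abs(r))
        return best

    # Synthetic demo: a shared slow hemodynamic component yields high coupling.
    rng = np.random.default_rng(0)
    shared = np.sin(2 * np.pi * 0.05 * np.arange(3000) / 10.0)
    speaker = shared + 0.5 * rng.standard_normal(3000)
    listener = np.roll(shared, 20) + 0.5 * rng.standard_normal(3000)  # ~2 s lag
    print(max_lagged_correlation(speaker, listener))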
Affiliation(s)
- Geoff D Green, Department of Speech and Hearing Science, The Ohio State University, Columbus
- Ewa Jacewicz, Department of Speech and Hearing Science, The Ohio State University, Columbus
- Lian J Arzbecker, Department of Speech and Hearing Science, The Ohio State University, Columbus
- Robert A Fox, Department of Speech and Hearing Science, The Ohio State University, Columbus
2
Bottalico P, Murgia S, Puglisi GE, Astolfi A, Ishikawa K. Intelligibility of dysphonic speech in auralized classrooms. J Acoust Soc Am 2021;150:2912. PMID: 34717474. DOI: 10.1121/10.0006741.
Abstract
Voice disorders can reduce the speech intelligibility of affected speakers. This study evaluated the effects of noise, voice disorders, and room acoustics on vowel intelligibility, listening easiness, and the listener's reaction time. Three adult females with dysphonia and three adult females with normal voice quality recorded a series of nine vowels of American English in /h/-V-/d/ format (e.g., "had"). The recordings were convolved with two oral-binaural impulse responses acquired from measurements in two classrooms with reverberation times of 0.4 and 3.1 s, respectively. The stimuli were presented in a forced-choice format to 29 college students. Intelligibility and listening easiness were significantly higher in quiet than in noisy conditions, when the speakers had normal voice quality rather than a dysphonic voice, and in environments with low reverberation rather than high reverberation. The listener's response time was significantly longer for speech presented in noisy conditions than in quiet conditions and when the voice was dysphonic rather than of healthy quality.
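The auralization step described above amounts to convolving each dry recording with the measured impulse response for each ear. A minimal Python sketch of that operation follows; the file names and the soundfile dependency are placeholder assumptions, not materials from the study.

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    speech, fs = sf.read("had_dry.wav")          # dry mono /hVd/ recording
    brir, fs_ir = sf.read("classroom_brir.wav")  # 2-channel oral-binaural IR
    assert fs == fs_ir, "recording and impulse response must share a sample rate"

    # Convolve the dry speech with each ear's impulse response, then normalize
    # to avoid clipping when the result is written back to disk.
    binaural = np.stack([fftconvolve(speech, brir[:, ch]) for ch in (0, 1)], axis=1)
    binaural /= np.max(np.abs(binaural))
    sf.write("had_auralized.wav", binaural, fs)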
Affiliation(s)
- Pasquale Bottalico, Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois, USA
- Silvia Murgia, Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois, USA
- Keiko Ishikawa, Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois, USA
3
Nogueira W, El Boghdady N, Langner F, Gaudrain E, Başkent D. Effect of Channel Interaction on Vocal Cue Perception in Cochlear Implant Users. Trends Hear 2021;25:23312165211030166. PMID: 34461780. PMCID: PMC8411629. DOI: 10.1177/23312165211030166.
Abstract
Speech intelligibility in multitalker settings is challenging for most cochlear implant (CI) users. One possible reason for this limitation is the suboptimal representation of vocal cues in implant processing, such as the fundamental frequency (F0) and the vocal tract length (VTL). Previous studies suggested that while F0 perception depends on spectrotemporal cues, VTL perception relies largely on spectral cues. To investigate how spectral smearing in CIs affects vocal cue perception in speech-on-speech (SoS) settings, adjacent electrodes were simultaneously stimulated using current steering in 12 Advanced Bionics users to simulate channel interaction. In current steering, two adjacent electrodes are stimulated at the same time, forming one channel of parallel stimulation. Three such stimulation patterns were used: Sequential (one current-steering channel), Paired (two channels), and Triplet (three channels). F0 and VTL just-noticeable differences (JNDs; Task 1), SoS intelligibility (Task 2), and SoS comprehension (Task 3) were measured for each stimulation strategy. In Tasks 2 and 3, four maskers were used: the same female talker, a male voice obtained by manipulating both F0 and VTL (F0+VTL) of the original female speaker, a voice where only F0 was manipulated, and a voice where only VTL was manipulated. JNDs were measured relative to the original voice for the F0, VTL, and F0+VTL manipulations. When spectral smearing was increased from Sequential to Triplet, a significant deterioration in performance was observed for Tasks 1 and 2, with no differences between Sequential and Paired stimulation. Data from Task 3 were inconclusive. These results imply that CI users may tolerate certain amounts of channel interaction without a significant reduction in performance on tasks relying on voice perception. This points to possibilities for using parallel stimulation in CIs to reduce power consumption.
Affiliation(s)
- Waldo Nogueira
- Department of Otolaryngology, Medical University
Hannover and Cluster of Excellence Hearing4all, Hanover, Germany
| | - Nawal El Boghdady
- Department of Otorhinolaryngology, University Medical
Center Groningen, University of Groningen, Groningen,
Netherlands
- Research School of Behavioral and Cognitive
Neurosciences, University of
Groningen, University of Groningen, Groningen,
Netherlands
| | - Florian Langner
- Department of Otolaryngology, Medical University
Hannover and Cluster of Excellence Hearing4all, Hanover, Germany
| | - Etienne Gaudrain
- Department of Otorhinolaryngology, University Medical
Center Groningen, University of Groningen, Groningen,
Netherlands
- Research School of Behavioral and Cognitive
Neurosciences, University of
Groningen, University of Groningen, Groningen,
Netherlands
- Lyon Neuroscience Research Center, CNRS UMR 5292,
INSERM U1028, University Lyon 1, Lyon, France
| | - Deniz Başkent
- Department of Otorhinolaryngology, University Medical
Center Groningen, University of Groningen, Groningen,
Netherlands
- Research School of Behavioral and Cognitive
Neurosciences, University of
Groningen, University of Groningen, Groningen,
Netherlands
| |
4
El Boghdady N, Gaudrain E, Başkent D. Does good perception of vocal characteristics relate to better speech-on-speech intelligibility for cochlear implant users? J Acoust Soc Am 2019;145:417. PMID: 30710943. DOI: 10.1121/1.5087693.
Abstract
Differences in voice pitch (F0) and vocal tract length (VTL) improve the intelligibility of speech masked by a background talker (speech-on-speech; SoS) for normal-hearing (NH) listeners. Cochlear implant (CI) users, who are less sensitive to these two voice cues than NH listeners, experience difficulties in SoS perception. Three research questions were addressed: (1) whether increasing the F0 and VTL difference (ΔF0; ΔVTL) between two competing talkers benefits CI users in SoS intelligibility and comprehension, (2) whether this benefit is related to their F0 and VTL sensitivity, and (3) whether their overall SoS intelligibility and comprehension are related to their F0 and VTL sensitivity. Results showed that (1) CI users did not benefit in SoS perception from increasing ΔF0 and ΔVTL; increasing ΔVTL had a slightly detrimental effect on SoS intelligibility and comprehension; (2) the effect of increasing ΔF0 on SoS intelligibility was correlated with F0 sensitivity, while the effect of increasing ΔVTL on SoS comprehension was correlated with VTL sensitivity; and (3) sensitivity to both F0 and VTL, and not only one of them, was correlated with overall SoS performance, elucidating important aspects of voice perception that should be optimized through future coding strategies.
Affiliation(s)
- Nawal El Boghdady, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
5
Visentin C, Prodi N. A Matrixed Speech-in-Noise Test to Discriminate Favorable Listening Conditions by Means of Intelligibility and Response Time Results. J Speech Lang Hear Res 2018;61:1497-1516. PMID: 29845187. DOI: 10.1044/2018_jslhr-h-17-0418.
Abstract
PURPOSE: The primary aim of this study was to develop and examine the potential of a new speech-in-noise test for discriminating the favorable listening conditions targeted in the acoustical design of communication spaces. The test is based on the recognition and recall of disyllabic word sequences. A secondary aim was to compare the test with current speech-in-noise tests, assessing its benefits and limitations.
METHOD: Young adults (19-40 years old), self-reporting normal hearing, were presented with the newly developed Words Sequence Test (WST; 16 participants, Experiment 1) and with a consonant confusion test and a sentence recognition test (Experiment 2, 36 participants randomly assigned to the 2 tests). Participants performing the WST were presented with word sequences of different lengths (from 2 up to 6 words). Two listening conditions were selected: (a) no noise and no reverberation, and (b) reverberant, steady-state noise (Speech Transmission Index: 0.47). The tests were presented in a closed-set format; data on the number of words correctly recognized (speech intelligibility, IS) and the response times (RTs; onset RT and single words' RTs) were collected.
RESULTS: A sequence of 4 disyllabic words ensured both a full recognition score in quiet conditions and a significant decrease in IS when noise and reverberation degraded the speech signal. RTs increased with the worsening of the listening conditions and with the number of words in the sequence. The greatest onset RT variation was found with a sequence of 4 words. In the comparison with current speech-in-noise tests, the WST maximized both the IS difference between the selected listening conditions and the RT increase.
CONCLUSIONS: Overall, the results suggest that the new speech-in-noise test discriminates well between conditions with near-ceiling accuracy. Compared with current speech-in-noise tests, the WST with a 4-word sequence allows a finer mapping of the acoustical design target conditions of public spaces through accuracy and onset RT data.
Affiliation(s)
- Nicola Prodi, Department of Engineering, University of Ferrara, Italy
6
Başkent D, Clarke J, Pals C, Benard MR, Bhargava P, Saija J, Sarampalis A, Wagner A, Gaudrain E. Cognitive Compensation of Speech Perception With Hearing Impairment, Cochlear Implants, and Aging. Trends Hear 2016. PMCID: PMC5056620. DOI: 10.1177/2331216516670279.
Abstract
External degradations in incoming speech reduce understanding, and hearing impairment further compounds the problem. While cognitive mechanisms alleviate some of the difficulties, their effectiveness may change with age. In our research, reviewed here, we investigated cognitive compensation with hearing impairment, cochlear implants, and aging, via (a) phonemic restoration as a measure of top-down filling of missing speech, (b) listening effort and response times as a measure of increased cognitive processing, and (c) the visual world paradigm and eye gaze as a measure of the use of context and its time course. Our results indicate that between speech degradations and their cognitive compensation there is a fine balance that seems to vary greatly across individuals. Hearing impairment or inadequate hearing device settings may limit compensation benefits. Cochlear implants seem to allow the effective use of sentential context, but likely at the cost of delayed processing. Linguistic and lexical knowledge, which play an important role in compensation, may be successfully employed in advanced age, as some compensatory mechanisms seem to be preserved. These findings indicate that cognitive compensation in hearing impairment can be highly complicated: not always absent, but also not easily predicted by speech intelligibility tests alone.
Affiliation(s)
- Deniz Başkent, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Graduate School of Medical Sciences, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Jeanne Clarke, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Graduate School of Medical Sciences, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Carina Pals, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Graduate School of Medical Sciences, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Michel R. Benard, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Pento Speech and Hearing Center Zwolle, Zwolle, Netherlands
- Pranesh Bhargava, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Graduate School of Medical Sciences, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Jefta Saija, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Graduate School of Medical Sciences, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Anastasios Sarampalis, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands; Department of Psychology, University of Groningen, Netherlands
- Anita Wagner, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Graduate School of Medical Sciences, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Etienne Gaudrain, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Graduate School of Medical Sciences, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands; Auditory Cognition and Psychoacoustics, CNRS, Lyon Neuroscience Research Center, Lyon, France
7
Abstract
This study compares two response-time measures of listening effort that can be combined with a clinical speech test for a more comprehensive evaluation of total listening experience: verbal response times to auditory stimuli (RTaud) and response times to a visual task (RTvis) in a dual-task paradigm. The listening task was presented in five masker conditions: no noise, and two types of noise at two fixed intelligibility levels. Both RTaud and RTvis showed effects of noise. However, only RTaud showed an effect of intelligibility. Because of its simplicity of implementation, RTaud may be a useful effort measure for clinical applications.
Affiliation(s)
- Carina Pals, Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
- Hedderik van Rijn, Department of Psychology, University of Groningen, Groningen, The Netherlands
- Deniz Başkent, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
8
Abstract
Anecdotal reports of fatigue after sustained speech-processing demands are common among adults with hearing loss; however, systematic research examining hearing loss-related fatigue is limited, particularly with regard to fatigue among children with hearing loss (CHL). Many audiologists, educators, and parents have long suspected that CHL experience stress and fatigue as a result of the difficult listening demands they encounter throughout the day at school. Recent research in this area supports these intuitive suggestions. In this article, the authors provide a framework for understanding the construct of fatigue and its relation to hearing loss, particularly in children. Although empirical evidence is limited, preliminary data from recent studies suggest that some CHL experience significant fatigue, and such fatigue has the potential to compromise a child's performance in the classroom. In this commentary, the authors discuss several aspects of fatigue, including its importance, definitions, prevalence, consequences, and potential linkage to increased listening effort in persons with hearing loss. The authors also provide a brief synopsis of subjective and objective methods to quantify listening effort and fatigue. Finally, the authors suggest a common-sense approach to the identification of fatigue in CHL and briefly comment on the use of amplification as a management strategy for reducing hearing-related fatigue.
9
Drgas S, Blaszak MA. Perception of speech in reverberant conditions using AM-FM cochlear implant simulation. Hear Res 2010;269:162-168. PMID: 20603206. DOI: 10.1016/j.heares.2010.06.016.
Abstract
This study assessed the effects of speech misidentification and cognitive processing errors in normal-hearing adults listening to degraded auditory input signals simulating cochlear implants under reverberation conditions. Three variables were controlled: the number of vocoder channels (six and twelve), the instantaneous frequency change rate (none, 50, 400 Hz), and the enclosure (different reverberation conditions). The analyses were made on the basis of: (a) nonsense word recognition scores for eight young normal-hearing listeners, (b) "ease of listening" based on the time of response, and (c) a subjective measure of difficulty. The maximum speech intelligibility score in the cochlear implant simulation was 70% for non-reverberant conditions with a 12-channel vocoder and changes of instantaneous frequency limited to 400 Hz. In the presence of reflections, word misidentification was about 10-20 percentage points higher. There was little difference between the 50 and 400 Hz frequency-modulation cutoffs for the 12-channel vocoder; in the case of six channels, however, this difference was more pronounced. The results of the experiment suggest that the information other than F0 that is carried by FM can be sufficient to improve speech intelligibility in real-world conditions.
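As a sketch of the kind of processing the study describes, the Python code below implements a generic AM-FM channel vocoder: each band is resynthesized from its Hilbert envelope (AM) and a low-pass-filtered version of its instantaneous frequency (FM), with the FM cutoff playing the role of the 50/400 Hz change-rate limit. Band edges, filter orders, and the frequency range are illustrative assumptions, not the authors' exact implementation.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def am_fm_vocoder(x, fs, n_channels=12, fm_cutoff=400.0, f_lo=100.0, f_hi=8000.0):
        f_hi = min(f_hi, 0.45 * fs)                        # keep edges below Nyquist
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        out = np.zeros(len(x), dtype=float)
        b_fm, a_fm = butter(2, fm_cutoff / (fs / 2), btype="lowpass")
        for lo, hi in zip(edges[:-1], edges[1:]):
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
            band = filtfilt(b, a, x)
            analytic = hilbert(band)
            env = np.abs(analytic)                         # AM component
            # Instantaneous frequency from the unwrapped analytic phase.
            inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
            inst_freq = np.concatenate([inst_freq[:1], inst_freq])
            inst_freq = filtfilt(b_fm, a_fm, inst_freq)    # limit FM change rate
            phase = 2 * np.pi * np.cumsum(inst_freq) / fs  # resynthesis phase
            out += env * np.cos(phase)
        return out

For the "none" FM condition, replacing inst_freq with the fixed band center frequency reduces this to a conventional envelope-only (AM) vocoder.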
Affiliation(s)
- Szymon Drgas, Institute of Acoustics, Adam Mickiewicz University, Umultowska 85, Poznan, Poland
10
Prodi N, Visentin C, Farnetani A. Intelligibility, listening difficulty and listening efficiency in auralized classrooms. J Acoust Soc Am 2010;128:172-181. PMID: 20649212. DOI: 10.1121/1.3436563.
Abstract
For effective speech communication in rooms it is advisable not only to reach full intelligibility of words but also to minimize the effort the listener spends on recognizing the speech material. This twofold requirement is not easily described by current room acoustic indicators, which rely mainly either on word recognition scores or on listeners' reported impressions of listening difficulty. In this work, the problem is tackled by introducing the concept of "listening efficiency," defined as a combination of the accuracy of intelligibility and the effort spent on achieving it. This indicator is developed here, and an application of intelligibility and listening efficiency is presented in the field of classroom acoustics. Listening tests with pupils and adults were performed, and the subsequent statistical analyses yielded several interesting findings. In particular, listening efficiency is able to clearly discriminate between equal intelligibility scores obtained under different acoustical conditions, permitting room acoustics to be tailored for specific groups, such as children.
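The article develops its own definition of listening efficiency. For orientation only, a standard way of folding accuracy and speed into one number in the psychophysics literature is the inverse efficiency score, where slower responses at equal accuracy yield a worse (higher) score; this is an analogy, not necessarily the formula used in the paper:

    \mathrm{IES} = \frac{\overline{RT}_{\mathrm{correct}}}{p_{\mathrm{correct}}}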
Affiliation(s)
- Nicola Prodi, Dipartimento di Ingegneria, Università degli Studi di Ferrara, via Saragat 1, 44100 Ferrara, Italy
11
Abstract
PURPOSE: To study the perceptual consequences of changes in the parameters of vocoded speech under various reverberation conditions.
METHOD: The 3 controlled variables were the number of vocoder bands, the instantaneous frequency change rate, and the reverberation conditions. The effects were quantified in terms of (a) nonsense word recognition scores for young normal-hearing listeners, (b) ease of listening based on the time of response (response delay), and (c) a subjective measure of difficulty (10-point scale).
RESULTS: The fine structure of a signal was shown to be a relevant cue for speech perception under reverberation conditions. The results obtained for different numbers of bands, frequency-modulation cutoff frequencies, and reverberation conditions showed that all these parameters are important for speech perception in reverberation.
CONCLUSIONS: Only slow variations in the instantaneous frequency (<50 Hz) seem to play a critical role in speech intelligibility in anechoic conditions. In reverberant enclosures, however, fast fluctuations of instantaneous frequency are also significant.
12
Abstract
OBJECTIVE: To investigate systematically the effects of sensorineural hearing loss on the cortical event-related potentials (ERPs) N1, MMN, N2, and P3 and their associated behavioral measures (d' sensitivity and reaction time) to the speech sounds /ba/ and /da/ presented at 65 and 80 dB ppe SPL.
DESIGN: Cortical ERPs were recorded to /ba/ and /da/ speech stimuli presented at 65 and 80 dB ppe SPL from 20 normal-hearing adults and 20 adults with hearing impairment. The degree of sensorineural impairment at 1000 to 2000 Hz ranged from mild losses (defined as 25 to 49 dB HL) to severe/profound losses (75 to 120 dB HL). The speech stimuli were presented in an oddball paradigm, and the cortical ERPs were recorded in both active and passive listening conditions for each stimulus intensity.
RESULTS: Both ERP amplitudes and behavioral discrimination (d') scores were lower for listeners with sensorineural hearing loss than for those with normal hearing. However, these differences in response strength were evident only for those listeners whose average hearing loss at 1000 to 2000 Hz exceeded 60 dB HL for the lower intensity stimuli and 75 dB HL for the higher intensity stimuli. In contrast, prolongations in the ERP and behavioral latencies, relative to responses from normal-hearing subjects, began with even mild (25 to 49 dB HL) threshold elevations. The amplitude and latency response changes that occurred with sensorineural hearing loss were significantly greater for the later ERP peaks (N2/P3) and behavioral discrimination measures (d' and RT) than for the earlier (N1, MMN) responses.
CONCLUSIONS: The results indicate that latency measures are more sensitive indicators of the early effects of decreased audibility than are response strength (amplitude, d', or percent correct) measures. Sensorineural hearing loss has a greater impact on higher level or "nonsensory" cortical processing than on lower level or "sensory" cortical processing. Possible physiologic mechanisms within the cortex that may be responsible for these response changes are presented. Lastly, the possible clinical significance of these ERP and behavioral findings is discussed.
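For reference, the behavioral sensitivity index d' used above is the standard signal-detection measure, obtained from the hit rate H and the false-alarm rate F through the inverse standard-normal cumulative distribution function:

    d' = \Phi^{-1}(H) - \Phi^{-1}(F)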
Affiliation(s)
- Peggy A Oates, Department of Communication Services and Disorders, Towson University, Maryland 21252-0001, USA
13
Abstract
OBJECTIVE: The purpose of this study was to assess the list equivalency and time-order effects of word recognition scores and response time measures obtained using a digital recording of the Modified Rhyme Test (MRT) with a response time monitoring task (Mackersie, Neuman, & Levitt, 1999).
DESIGN: Response times and percent correct measures were obtained from listeners with normal hearing using the MRT materials presented at a signal-to-noise ratio of +3 dB. Listeners were tested using a word-monitoring task in which six alternatives were presented in series and listeners pushed a button when they heard the target word (as displayed on the computer monitor). Listeners were tested in two sessions. During each session, each of the six MRT lists was administered once. Time-order effects were examined both between and within test sessions.
RESULTS: All lists were equivalent for both speech recognition accuracy and response time except List 1, which showed slightly higher percent correct scores than the other lists. Varied patterns of systematic change over time were observed in 75% of the listeners for the response time measures and in 33% of the listeners for the percent correct measures.
CONCLUSIONS: Lists 2 through 6 of this version of the MRT are equivalent, with List 1 producing slightly higher word recognition scores. The systematic changes over time in the response time data of the majority of listeners suggest the need for careful implementation of the test to avoid time-order effects.
Affiliation(s)
- C Mackersie, Center for Research in Speech and Hearing Sciences, Graduate School, City University of New York, New York, USA
14
Abstract
OBJECTIVES: The primary purpose of this study was to investigate the possibility of improving the sensitivity of speech recognition testing by incorporating response time measures as a metric. Two different techniques for obtaining response time were compared: a word-monitoring task and a closed-set identification task.
DESIGN: Recordings of the Modified Rhyme Test were used to test 12 listeners with normal hearing. Data were collected using a word-monitoring task and a closed-set identification task. Response times and percent correct scores were obtained for each task using signal-to-noise ratios (SNRs) of -3, 0, +3, +6, +9, and +12 dB.
RESULTS: Both response time and percent correct measures were sensitive to changes in SNR, but greater sensitivity was found with the percent correct measures. Individual subject data showed that combining response time measures with percent correct scores improved test sensitivity for the monitoring task, but not for the closed-set identification task.
CONCLUSIONS: The best test sensitivity was obtained by combining percent correct and response time measures for the monitoring task. Such an approach may hold promise for future clinical applications.
Affiliation(s)
- C Mackersie, Center for Research in Speech and Hearing Sciences, Graduate School, City University of New York, New York, USA
15
Abstract
OBJECTIVE: To systematically investigate in normal-hearing listeners the effects of decreased audibility produced by broadband noise masking on the cortical event-related potentials (ERPs) N1, N2, and P3 to the speech sounds /ba/ and /da/.
DESIGN: Ten normal-hearing adult listeners actively (button-press response) discriminated the speech sounds /ba/ and /da/ presented in quiet (no masking) or with broadband masking noise (BBN), using an ERP oddball paradigm. The BBN was presented at 50, 60, and 70 dB SPL when speech sounds were presented at 65 dB ppe SPL and at 60, 70, and 80 dB SPL when speech sounds were presented at 80 dB ppe SPL.
RESULTS: On average, the 50, 60, 70, and 80 dB SPL BBN maskers produced behavioral threshold elevations of 18, 25, 35, and 48 dB (average for 250 to 4000 Hz), respectively. The BBN maskers produced significant decreases (relative to the quiet condition) in ERP amplitudes and behavioral discriminability. These decreases did not occur, however, until the noise masker intensity (in dB SPL) was equal to or greater than the speech stimulus intensity (in dB ppe SPL), that is, until speech-to-noise ratios (SNRs) were ≤ 0 dB. N1 remained present even after N2, P3, and behavioral discriminability were absent. In contrast to amplitudes, ERP and behavioral latencies showed significant effects already at higher (better) SNRs. Significant latency increases occurred when the noise maskers were within 10 to 20 dB of the stimuli (i.e., SNR ≤ 20 dB). The effects of masking were greater for responses to /da/ than to /ba/. Latency increases occurred with less masking for N1 than for P3 or behavioral reaction time, with N2 falling in between.
CONCLUSIONS: These results indicate that decreased audibility as a result of masking affects the various ERP peaks in a differential manner and that latencies are more sensitive indicators of these masking effects than are amplitudes.
Affiliation(s)
- K A Whiting, Auditory Evoked Potential Research Laboratory, Albert Einstein College of Medicine, Bronx, New York, USA
16
Abstract
The benefits of management of hearing disability, in particular by provision of a hearing aid, are traditionally assessed by the percentage improvement in performance on a speech identification task. To provide precise and stable results, such procedures require more time than is available in most clinical settings. In any stressed performance, e.g., an impaired individual trying to listen in noise, there is a trading relationship between accuracy and effort (the cost at which accuracy is achieved). If the control of performance naturally spends effort to stabilize high performance, then the benefit from amplification may essentially consist of, and be measurable as, a reduction in effort rather than an improvement in accuracy. Certainly, complaints of hearing disability emphasize fatigue from careful listening. Hence a hearing aid may not only enable hearing-impaired persons to hear more of speech but may enable them to hear it more easily, reflecting a second dimension of disability and benefit. Ease of listening was investigated using auditory response times to speech stimuli at two levels of structure: single words and sentences. The speech material was presented to 44 experienced hearing aid users (mild to moderate sensorineural hearing impairment), both unaided and aided, at presentation levels of 60, 70, and 80 dB SPL and signal-to-noise ratios of quiet and +5 dB. Response times were taken to the tokens within each list that were correctly identified. Benefit is defined as the decrease in response time from the unaided to the aided condition. (ABSTRACT TRUNCATED AT 250 WORDS)
Affiliation(s)
- S Gatehouse, MRC Institute of Hearing Research, Glasgow Royal Infirmary, Scotland
17
18
19
20
Abstract
This study investigated the effects of an increase in the level of acoustic stress (signal-to-noise ratio) on the retrieval of message sets of 2, 3, or 4 unrelated words presented successively. The results indicated that noise degradation did indeed affect the efficiency with which subjects retrieve sequences of successively presented items. Retention of the initial item of a message set caused a marked decrement in the retention and retrieval of subsequent items, and this effect increased as a function of the number of words presented. The effects were attributed to proactive inhibition, recency, and limited channel capacity.