1
Nora A, Rinkinen O, Renvall H, Service E, Arkkila E, Smolander S, Laasonen M, Salmelin R. Impaired Cortical Tracking of Speech in Children with Developmental Language Disorder. J Neurosci 2024; 44:e2048232024. PMID: 38589232; PMCID: PMC11140678; DOI: 10.1523/jneurosci.2048-23.2024
Abstract
In developmental language disorder (DLD), learning to comprehend and express oneself with spoken language is impaired, but the reason for this remains unknown. Using millisecond-scale magnetoencephalography recordings combined with machine learning models, we investigated whether the possible neural basis of this disruption lies in poor cortical tracking of speech. The stimuli were common spoken Finnish words (e.g., dog, car, hammer) and sounds with corresponding meanings (e.g., dog bark, car engine, hammering). In both children with DLD (10 boys and 7 girls) and typically developing (TD) control children (14 boys and 3 girls), aged 10-15 years, the cortical activation to spoken words was best modeled as time-locked to the unfolding speech input at ∼100 ms latency between sound and cortical activation. Amplitude envelope (amplitude changes) and spectrogram (detailed time-varying spectral content) of the spoken words, but not other sounds, were very successfully decoded based on time-locked brain responses in bilateral temporal areas; based on the cortical responses, the models could tell at ∼75-85% accuracy which of the two sounds had been presented to the participant. However, the cortical representation of the amplitude envelope information was poorer in children with DLD compared with TD children at longer latencies (at ∼200-300 ms lag). We interpret this effect as reflecting poorer retention of acoustic-phonetic information in short-term memory. This impaired tracking could potentially affect the processing and learning of words as well as continuous speech. The present results offer an explanation for the problems in language comprehension and acquisition in DLD.
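The two-alternative decoding analysis described in this abstract can be illustrated with a toy stimulus-reconstruction model. This is a minimal sketch under assumed conditions, not the authors' pipeline: the "sensor" data are simulated to linearly track a delayed stimulus envelope, a time-lagged ridge decoder is trained to reconstruct the envelope, and the model decides which of two sounds was presented by comparing reconstruction correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged(X, lags):
    """Delay-embed each channel: stack time-shifted copies as extra columns."""
    n, ch = X.shape
    out = np.zeros((n, ch * len(lags)))
    for i, L in enumerate(lags):
        shifted = np.roll(X, L, axis=0)
        shifted[:L] = 0.0
        out[:, i * ch:(i + 1) * ch] = shifted
    return out

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Toy data: two stimulus envelopes, and multichannel "brain" responses that
# track the corresponding envelope at a ~10-sample lag, plus noise.
n, ch, lag = 400, 8, 10
t = np.linspace(0, 20, n)
env_a, env_b = np.abs(np.sin(t)), np.abs(np.cos(0.7 * t))
mix = rng.standard_normal(ch)
resp_a = np.roll(env_a, lag)[:, None] * mix + 0.2 * rng.standard_normal((n, ch))
resp_b = np.roll(env_b, lag)[:, None] * mix + 0.2 * rng.standard_normal((n, ch))

lags = range(5, 16)  # candidate response lags, in samples
X = np.vstack([lagged(resp_a, lags), lagged(resp_b, lags)])
y = np.concatenate([env_a, env_b])
W = ridge_fit(X, y)  # decoder: lagged responses -> stimulus envelope

def classify(resp):
    """2AFC decision: which candidate envelope does the reconstruction match?"""
    rec = lagged(resp, lags) @ W
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"

print(classify(resp_a), classify(resp_b))
```

In the real study the classification accuracy (~75-85%) was computed over held-out trials; here the same data are reused only to keep the sketch short.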
Affiliation(s)
- Anni Nora
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Oona Rinkinen
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Hanna Renvall
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, Helsinki FI-00029, Finland
- Elisabet Service
- Department of Linguistics and Languages, Centre for Advanced Research in Experimental and Applied Linguistics (ARiEAL), McMaster University, Hamilton, Ontario L8S 4L8, Canada
- Department of Psychology and Logopedics, University of Helsinki, Helsinki FI-00014, Finland
- Eva Arkkila
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Sini Smolander
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Research Unit of Logopedics, University of Oulu, Oulu FI-90014, Finland
- Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Marja Laasonen
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
2
Hu H, Ewert SD, Kollmeier B, Vickers D. Rate dependent neural responses of interaural-time-difference cues in fine-structure and envelope. PeerJ 2024; 12:e17104. PMID: 38680894; PMCID: PMC11055513; DOI: 10.7717/peerj.17104
Abstract
Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet, most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal-hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the difference in a sound's arrival time at the two ears. ITD sensitivity is thought to rely heavily on the effective use of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through the envelope or through interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer than that of NH individuals, and it degrades further at higher CI stimulation rates, especially above 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD (ITDFS) or envelope ITD (ITDENV).
Multiple EEG responses were analyzed, including the subcortical auditory steady-state responses (ASSRs) and the cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller or absent compared with those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although the peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, but these were smaller than those evoked by SAM tones with lower carrier frequencies. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order being 40 > 160 > 80 > 320 Hz ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
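Steady-state response amplitudes like the 40-Hz ASSRs above are typically quantified in the frequency domain. The following is a minimal, hypothetical sketch (not the authors' analysis code) of how an ASSR amplitude at the modulation rate, and its signal-to-noise ratio against neighbouring bins, could be estimated from a synthetic EEG epoch via an FFT.

```python
import numpy as np

fs, dur, mod = 1000, 4.0, 40.0  # sampling rate (Hz), duration (s), AM rate (Hz)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic EEG: a 40-Hz steady-state response buried in broadband noise.
eeg = 0.5 * np.sin(2 * np.pi * mod * t) + rng.standard_normal(t.size)

# Single-sided amplitude spectrum (scaling recovers the sinusoid's amplitude).
spec = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

bin_40 = np.argmin(np.abs(freqs - mod))   # FFT bin at the modulation rate
assr_amp = spec[bin_40]

# Noise floor: mean amplitude of neighbouring bins, excluding the response bin.
neighbours = np.r_[bin_40 - 10:bin_40 - 2, bin_40 + 3:bin_40 + 11]
snr_db = 20 * np.log10(assr_amp / spec[neighbours].mean())
print(round(assr_amp, 2), round(snr_db, 1))
```

With a 4-s epoch the frequency resolution is 0.25 Hz, so 40 Hz falls exactly on a bin; real analyses average many epochs before this step.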
Affiliation(s)
- Hongmei Hu
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neuroscience, Cambridge University, Cambridge, United Kingdom
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Stephan D. Ewert
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Deborah Vickers
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neuroscience, Cambridge University, Cambridge, United Kingdom
3
Colas T, Farrugia N, Hendrickx E, Paquier M. Sound externalization in dynamic binaural listening: A comparative behavioral and EEG study. Hear Res 2023; 440:108912. PMID: 37952369; DOI: 10.1016/j.heares.2023.108912
Abstract
Binaural reproduction aims at recreating a realistic sound scene at the ears of the listener using headphones. Unfortunately, externalization of frontal and rear sources is often poor (virtual sources are perceived inside, rather than outside, the head). Nevertheless, previous studies have shown that large head-tracked movements can substantially improve externalization and that this improvement persists once the subject stops moving his/her head. The present study investigates the relation between externalization and event-related potentials (ERPs) by performing behavioral and EEG measurements under the same experimental conditions. Different degrees of externalization were achieved by preceding measurements with 1) head-tracked movements, 2) untracked head movements, and 3) no head movement. Results showed that performing a head movement, whether head tracking was active or not, increased the amplitude of ERP components after 100 ms, which suggests that preceding head movements alter subsequent auditory processing. Moreover, untracked head movements produced a stronger N1 component, which might be a marker of a consistency break with regard to the real world. While externalization scores were higher after head-tracked movements in the behavioral experiment, no marker of externalization could be found in the EEG results.
Affiliation(s)
- Tom Colas
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
- Nicolas Farrugia
- IMT Atlantique, CNRS Lab-STICC UMR 6285, 655 avenue du Technopole, 29280 Plouzane, France
- Etienne Hendrickx
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
- Mathieu Paquier
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
4
Sanju HK, Jain T, Kumar P. Acoustic Change Complex as a Neurophysiological Tool to Assess Auditory Discrimination Skill: A Review. Int Arch Otorhinolaryngol 2023; 27:e362-e369. PMID: 37125361; PMCID: PMC10147461; DOI: 10.1055/s-0042-1743202
Abstract
Introduction The acoustic change complex (ACC) is an event-related potential evoked in response to subtle change(s) within an ongoing stimulus. Given the growing number of investigations of the ACC, there is a need to review the methodologies, findings, clinical utilities, and conclusions of the studies published to date.
Objective The present review article focuses on the literature on the utility of the ACC as a tool to assess auditory discrimination skills in different populations.
Data Synthesis Several databases, including Medline, PubMed, Google, and Google Scholar, were searched for ACC-related references. A total of 102 research papers were initially obtained using descriptors such as acoustic change complex, clinical utility of ACC, ACC in children, ACC in cochlear implant users, and ACC in hearing loss. The titles, authors, and years of publication were examined, and duplicates were eliminated. In total, 31 research papers on the ACC were retained; their findings are reviewed and reported in the present article.
Conclusion The present review showed the utility of ACC as an objective tool to support various subjective tests in audiology.
Affiliation(s)
- Himanshu Kumar Sanju
- Sri Jagdamba Charitable Eye Hospital and Cochlear Implant Center, Sri Ganganagar, Rajasthan, India
- Tushar Jain
- Sri Jagdamba Charitable Eye Hospital and Cochlear Implant Center, Sri Ganganagar, Rajasthan, India
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka, India
5
Ching TYC, Zhang VW, Ibrahim R, Bardy F, Rance G, Van Dun B, Sharma M, Chisari D, Dillon H. Acoustic change complex for assessing speech discrimination in normal-hearing and hearing-impaired infants. Clin Neurophysiol 2023; 149:121-132. PMID: 36963143; DOI: 10.1016/j.clinph.2023.02.172
Abstract
OBJECTIVE This study examined (1) the utility of a clinical system to record acoustic change complex (ACC, an event-related potential recorded by electroencephalography) for assessing speech discrimination in infants, and (2) the relationship between ACC and functional performance in real life. METHODS Participants included 115 infants (43 normal-hearing, 72 hearing-impaired), aged 3-12 months. ACCs were recorded using [szs], [uiu], and a spectral rippled noise high-pass filtered at 2 kHz as stimuli. Assessments were conducted at age 3-6 months and at 7-12 months. Functional performance was evaluated using a parent-report questionnaire, and correlations with ACC were examined. RESULTS The rates of onset and ACC responses of normal-hearing infants were not significantly different from those of aided infants with mild or moderate hearing loss but were significantly higher than those with severe loss. On average, response rates measured at 3-6 months were not significantly different from those at 7-12 months. Higher rates of ACC responses were significantly associated with better functional performance. CONCLUSIONS ACCs demonstrated auditory capacity for discrimination in infants by 3-6 months. This capacity was positively related to real-life functional performance. SIGNIFICANCE ACCs can be used to evaluate the effectiveness of amplification and monitor development in aided hearing-impaired infants.
Affiliation(s)
- Teresa Y C Ching
- National Acoustic Laboratories, Australia; Macquarie School of Education, Macquarie University, Australia; NextSense Institute, Australia; School of Health and Rehabilitation Sciences, University of Queensland, Australia
- Vicky W Zhang
- National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia
- Ronny Ibrahim
- National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia
- Fabrice Bardy
- National Acoustic Laboratories, Australia; School of Psychology, University of Auckland, New Zealand
- Gary Rance
- Department of Audiology and Speech Pathology, The University of Melbourne, Australia
- Mridula Sharma
- Department of Linguistics, Macquarie University, Australia
- Donella Chisari
- Department of Audiology and Speech Pathology, The University of Melbourne, Australia
- Harvey Dillon
- National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia; Department of Hearing, University of Manchester, United Kingdom
6
Guérit F, Harland AJ, Richardson ML, Gransier R, Middlebrooks JC, Wouters J, Carlyon RP. Electrophysiological and Psychophysical Measures of Temporal Pitch Sensitivity in Normal-hearing Listeners. J Assoc Res Otolaryngol 2023; 24:47-65. PMID: 36471208; PMCID: PMC9971391; DOI: 10.1007/s10162-022-00879-7
Abstract
To obtain combined behavioural and electrophysiological measures of pitch perception, we presented harmonic complexes, bandpass filtered to contain only high-numbered harmonics, to normal-hearing listeners. These stimuli resemble bandlimited pulse trains and convey pitch using a purely temporal code. A core set of conditions consisted of six stimuli with baseline pulse rates of 94, 188 and 280 pps, filtered into a HIGH (3365-4755 Hz) or VHIGH (7800-10,800 Hz) region, alternating with a 36% higher pulse rate. Brainstem and cortical processing were measured using the frequency-following response (FFR) and auditory change complex (ACC), respectively. Behavioural rate-change difference limens (DLs) were measured by requiring participants to discriminate a stimulus that changed rate twice (up-down or down-up) during its 750-ms presentation from a constant-rate pulse train. FFRs revealed robust brainstem phase locking whose amplitude decreased with increasing rate. Moderate-sized but reliable ACCs were obtained in response to changes in purely temporal pitch and, like the psychophysical DLs, did not depend consistently on the direction of rate change or on the pulse rate for baseline rates between 94 and 280 pps. ACCs were larger and DLs lower for stimuli in the HIGH than in the VHIGH region. We argue that the ACC may be a useful surrogate for behavioural measures of rate discrimination, both for normal-hearing listeners and for cochlear-implant users. We also showed that rate DLs increased markedly when the baseline rate was reduced to 48 pps, and compared the behavioural and electrophysiological findings to recent cat data obtained with similar stimuli and methods.
Affiliation(s)
- François Guérit
- Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, England
- Andrew J Harland
- Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, England
- Matthew L Richardson
- Department of Otolaryngology, University of California at Irvine, Irvine, CA, USA
- John C Middlebrooks
- Department of Otolaryngology, University of California at Irvine, Irvine, CA, USA
- Department of Neurobiology and Behavior, University of California at Irvine, Irvine, CA, USA
- Department of Cognitive Sciences, University of California at Irvine, Irvine, CA, USA
- Department of Biomedical Engineering, University of California at Irvine, Irvine, CA, USA
- Jan Wouters
- Department of Neurosciences, ExpORL, Leuven, Belgium
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, England
7
Saraç Kaya E, Türkyılmaz MD, Yaralı M. The evaluation of cochlear implant users' acoustic change detection ability. Hearing, Balance and Communication 2022. DOI: 10.1080/21695717.2022.2142390
Affiliation(s)
- Eylem Saraç Kaya
- Department of Audiology, Faculty of Health Sciences, Lokman Hekim University, Ankara, Turkey
- Meral Didem Türkyılmaz
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Mehmet Yaralı
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
8
The Acoustic Change Complex Compared to Hearing Performance in Unilaterally and Bilaterally Deaf Cochlear Implant Users. Ear Hear 2022; 43:1783-1799. PMID: 35696186; PMCID: PMC9592183; DOI: 10.1097/aud.0000000000001248
Abstract
OBJECTIVES Clinical measures evaluating hearing performance in cochlear implant (CI) users depend on attention and linguistic skills, which limits the evaluation of auditory perception in some patients. The acoustic change complex (ACC), a cortical auditory evoked potential to a sound change, might yield useful objective measures to assess hearing performance and could provide insight in cortical auditory processing. The aim of this study is to examine the ACC in response to frequency changes as an objective measure for hearing performance in CI users. DESIGN Thirteen bilaterally deaf and six single-sided deaf subjects were included, all having used a unilateral CI for at least 1 year. Speech perception was tested with a consonant-vowel-consonant test (+10 dB signal-to-noise ratio) and a digits-in-noise test. Frequency discrimination thresholds were measured at two reference frequencies, using a 3-interval, 2-alternative forced-choice, adaptive staircase procedure. The two reference frequencies were selected using each participant's frequency allocation table and were centered in the frequency band of an electrode that included 500 or 2000 Hz, corresponding to the apical electrode or the middle electrode, respectively. The ACC was evoked with pure tones of the same two reference frequencies with varying frequency increases: within the frequency band of the middle or the apical electrode (+0.25 electrode step), and steps to the center frequency of the first (+1), second (+2), and third (+3) adjacent electrodes. RESULTS Reproducible ACCs were recorded in 17 out of 19 subjects. Most successful recordings were obtained with the largest frequency change (+3 electrode step). Larger frequency changes resulted in shorter N1 latencies and larger N1-P2 amplitudes. 
In both unilaterally and bilaterally deaf subjects, the N1 latency and N1-P2 amplitude of the CI ears correlated to speech perception as well as frequency discrimination, that is, short latencies and large amplitudes were indicative of better speech perception and better frequency discrimination. No significant differences in ACC latencies or amplitudes were found between the CI ears of the unilaterally and bilaterally deaf subjects, but the CI ears of the unilaterally deaf subjects showed substantially longer latencies and smaller amplitudes than their contralateral normal-hearing ears. CONCLUSIONS The ACC latency and amplitude evoked by tone frequency changes correlate well to frequency discrimination and speech perception capabilities of CI users. For patients unable to reliably perform behavioral tasks, the ACC could be of added value in assessing hearing performance.
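The adaptive staircase used for the frequency-discrimination thresholds above can be sketched in a few lines. This is an illustrative simulation, not the study's code: the "listener" is a hypothetical psychometric function (its shape and the 20-Hz true difference limen are assumptions), and a 2-down/1-up rule, which converges near 70.7% correct, tracks the just-discriminable frequency step.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_listener(delta, true_dl=20.0):
    """Hypothetical subject: P(correct) grows from chance (0.5) with step size."""
    p_correct = 0.5 + 0.5 * (1.0 - np.exp(-((delta / true_dl) ** 2)))
    return rng.random() < p_correct

def staircase_2down1up(start=80.0, factor=1.5, n_reversals=8):
    """2-down/1-up adaptive staircase with geometric step changes."""
    delta, run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            run += 1
            if run == 2:               # two correct in a row -> make it harder
                run = 0
                if direction == +1:    # trend was upward -> count a reversal
                    reversals.append(delta)
                direction = -1
                delta /= factor
        else:                          # one error -> make it easier
            run = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta *= factor
    # Threshold estimate: geometric mean of the last six reversal points.
    return float(np.exp(np.mean(np.log(reversals[-6:]))))

threshold = staircase_2down1up()
print(round(threshold, 1))
```

A 3-interval, 2-alternative task as in the study only changes how each trial is presented to the listener; the tracking rule itself is the same.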
9
Schneider BA, Rabaglia C, Avivi-Reich M, Krieger D, Arnott SR, Alain C. Age-Related Differences in Early Cortical Representations of Target Speech Masked by Either Steady-State Noise or Competing Speech. Front Psychol 2022; 13:935475. PMID: 35992450; PMCID: PMC9389464; DOI: 10.3389/fpsyg.2022.935475
Abstract
Word-in-noise identification is facilitated by acoustic differences between target and competing sounds and by temporal separation between the onset of the masker and that of the target. Younger and older adults are able to take advantage of an onset delay when the masker is dissimilar (Noise) to the target word, but only younger adults are able to do so when the masker is similar (Babble). We examined the neural underpinning of this age difference using cortical evoked responses to words masked by either Babble or Noise when the masker preceded the target word by 100 or 600 ms in younger and older adults, after adjusting the signal-to-noise ratios (SNRs) to equate behavioural performance across age groups and conditions. For the 100 ms onset delay, the word in noise elicited an acoustic change complex (ACC) response that was comparable in younger and older adults. For the 600 ms onset delay, the ACC was modulated by both masker type and age: in older adults, the ACC to a word in babble was not affected by the increase in onset delay, whereas younger adults showed a benefit from longer delays. Hence, the age difference in sensitivity to temporal delay is indexed by early activity in the auditory cortex. These results are consistent with the hypothesis that an increase in onset delay improves stream segregation in both noise and babble for younger adults, but only in noise for older adults, and that this change in stream segregation is evident in early cortical processes.
Affiliation(s)
- Bruce A. Schneider
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Cristina Rabaglia
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Meital Avivi-Reich
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Department of Communication Arts, Sciences, and Disorders, Brooklyn College, City University of New York, Brooklyn, NY, United States
- Dena Krieger
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Department of Psychology, St. George Campus, University of Toronto, Toronto, ON, Canada
10
Calcus A, Undurraga JA, Vickers D. Simultaneous subcortical and cortical electrophysiological recordings of spectro-temporal processing in humans. Front Neurol 2022; 13:928158. PMID: 35989907; PMCID: PMC9381701; DOI: 10.3389/fneur.2022.928158
Abstract
Objective assessment of auditory discrimination has often used the Auditory Change Complex (ACC), a cortically generated potential elicited by a change occurring within an ongoing, long-duration auditory stimulus. In cochlear implant users, the electrically evoked ACC has been used to measure electrode discrimination by changing the stimulating electrode during stimulus presentation. In addition to this cortical component, subcortical measures provide further information about early auditory processing in both normal-hearing listeners and cochlear implant users. In particular, the frequency-following response (FFR) is thought to reflect auditory encoding at the level of the brainstem. Interestingly, recent research suggests that it is possible to measure both subcortical and cortical physiological activity simultaneously. The aim of this research was twofold: first, to understand the scope for simultaneously recording both the FFR (subcortical) and ACC (cortical) responses in normal-hearing adults; second, to determine the best recording parameters for optimizing the simultaneous capture of both responses, with clinical applications in mind. Electrophysiological responses were recorded in 10 normally-hearing adults while they listened to 16-second-long pure tone sequences. The carrier frequency of these sequences was either steady or alternated periodically throughout the sequence, generating an ACC response to each alternation (the alternating ACC paradigm). In the "alternating" sequences, both the alternation rate and the carrier frequency varied parametrically. We investigated three alternation rates (1, 2.5, and 6.5 Hz) and seven frequency pairs covering the low-, mid-, and high-frequency range, including narrow and wide frequency separations. Our results indicate that both the slowest (1 Hz) and medium (2.5 Hz) alternation rates led to significant FFR and ACC responses in most frequency ranges tested.
Low carrier frequencies led to larger FFR amplitudes, larger P1 amplitudes, and larger N1-P2 amplitude differences at slow alternation rates. No significant relationship was found between subcortical and cortical response amplitudes, in line with different generators and processing levels across the auditory pathway. Overall, the alternating ACC paradigm can be used to measure subcortical and cortical responses as indicators of early neural encoding (FFR) and sound discrimination (ACC) along the auditory pathway, and these are best obtained at slow alternation rates (1 Hz) in the low-frequency range (300-1200 Hz).
Affiliation(s)
- Axelle Calcus
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom; Laboratoire des Systèmes Perceptifs, Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL University, CNRS, Paris, France; Center for Research in Cognitive Neuroscience, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Jaime A. Undurraga
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia; Interacoustics Research Unit, Technical University of Denmark, Lyngby, Denmark
- Deborah Vickers
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom; SOUND Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, Herchel Smith Building for Brain and Mind Sciences, Cambridge, United Kingdom
11
Fan ZT, Zhao ZH, Sharma M, Valderrama JT, Fu QJ, Liu JX, Fu X, Li H, Zhao XL, Guo XY, Fu LY, Wang NY, Zhang J. Acoustic Change Complex Evoked by Horizontal Sound Location Change in Young Adults With Normal Hearing. Front Neurosci 2022; 16:908989. PMID: 35733932; PMCID: PMC9207405; DOI: 10.3389/fnins.2022.908989
Abstract
The acoustic change complex (ACC) is a cortical auditory evoked potential induced by a change within a continuous sound stimulus. This study aimed to explore: (1) whether a change of horizontal sound location can elicit the ACC; (2) the relationship between the size of the location change and the amplitude or latency of the ACC; and (3) the relationship between the behavioral measure of localization, the minimum audible angle (MAA), and the ACC. A total of 36 normal-hearing adults participated in this study. A 180° horizontal arc-shaped bracket with a 1.2 m radius was set up in a sound field, with participants seated at the center. MAA was measured in a two-alternative forced-choice setting. Objective electroencephalography recordings of the ACC were conducted with the location changed at four pairs of positions: ±45°, ±15°, ±5°, and ±2°. The test stimulus was a 125–6,000 Hz broadband noise of 1 s duration at 60 ± 2 dB SPL, with a 2 s interstimulus interval. The N1′–P2′ amplitudes, N1′ latencies, and P2′ latencies at the four positions were evaluated, and the influence of electrode site and direction of sound location change on the ACC waveform was analyzed with analysis of variance. Results suggested that: (1) the ACC can be elicited successfully by changing the horizontal sound location, and the elicitation rate increased with the size of the location change; (2) the N1′–P2′ amplitude increased and the N1′ and P2′ latencies decreased as the change in sound location increased, with statistically significant effects of test angle on N1′–P2′ amplitude [F(1.91,238.1) = 97.172, p < 0.001], N1′ latency [F(1.78,221.90) = 96.96, p < 0.001], and P2′ latency [F(1.87,233.11) = 79.97, p < 0.001]; (3) the direction of sound location change had no significant effect on any of the ACC peak amplitudes or latencies; and (4) the sound location discrimination threshold from the ACC test (97.0% elicitation rate at ±5°) was higher than the MAA threshold (2.08 ± 0.5°).
Although ACC thresholds were higher than behavioral thresholds on the MAA task, the results show that the ACC can be used as an objective method to evaluate sound localization ability. The article discusses the implications of this research for clinical practice and for the evaluation of localization skills, especially in children.
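Peak measures such as the N1′–P2′ amplitudes and latencies reported above are typically read off the averaged post-change waveform. A minimal sketch in Python, assuming illustrative latency windows and sampling rate (not the analysis parameters used in the study):

```python
import numpy as np

def n1p2_amplitude(evoked, fs, n1_win=(0.08, 0.15), p2_win=(0.15, 0.30)):
    """Peak-to-peak N1'-P2' amplitude from a 1-D averaged waveform.

    `evoked` is the post-change portion of the epoch with time zero at the
    acoustic change; N1' is taken as the most negative point in n1_win (s),
    P2' as the most positive point in p2_win (s).
    """
    t = np.arange(len(evoked)) / fs
    n1 = evoked[(t >= n1_win[0]) & (t < n1_win[1])].min()
    p2 = evoked[(t >= p2_win[0]) & (t < p2_win[1])].max()
    return p2 - n1

# Toy averaged waveform: a 2 uV negativity near 100 ms and a 3 uV
# positivity near 200 ms, so the peak-to-peak amplitude is ~5 uV.
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
wave = (-2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.01 ** 2))
        + 3.0 * np.exp(-((t - 0.20) ** 2) / (2 * 0.02 ** 2)))
amp = n1p2_amplitude(wave, fs)  # ~5.0 uV
```

In practice the window bounds are chosen per study from grand-average waveforms rather than fixed in advance.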
Affiliation(s)
- Zhi-Tong Fan: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Zi-Hui Zhao: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Mridula Sharma: Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- Joaquin T. Valderrama: Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia; National Acoustic Laboratories, Sydney, NSW, Australia
- Qian-Jie Fu: Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Jia-Xing Liu: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xin Fu: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Huan Li: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xue-Lei Zhao: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xin-Yu Guo: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Luo-Yi Fu: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Ning-Yu Wang: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Juan Zhang (*Correspondence): Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
|
12
|
Vonck BM, van Heteren JA, Lammers MJ, de Jel DV, Schaake WA, van Zanten GA, Stokroos RJ, Versnel H. Cortical potentials evoked by tone frequency changes can predict speech perception in noise. Hear Res 2022; 420:108508. [DOI: 10.1016/j.heares.2022.108508]
|
13
|
Dou H, Dai Y, Qiu Y, Lei Y. Attachment voices promote safety learning in humans: A critical role for P2. Psychophysiology 2022; 59:e13997. [PMID: 35244973] [DOI: 10.1111/psyp.13997]
Abstract
Humans have evolved to seek the proximity of attachment figures during times of threat in order to obtain a sense of safety. In this context, we examined whether the voice of an intimate partner (an "attachment voice") could reduce fear learning of conditioned stimuli (CS+) and enhance learning of safety signals (CS-). Although the ability to learn safety signals is vital for human survival, few studies have explored how attachment voices affect safety learning. To test our hypothesis, we recruited thirty-five young couples and performed a classic Pavlovian conditioning experiment, recording behavioral and electroencephalographic (EEG) data. Compared with a stranger's voice, a partner's voice reduced expectancy of the unconditioned stimulus (a shock) during fear conditioning, as well as the magnitude of the P2 event-related potential within the EEG responses, provided the voice served as a safety signal. Additionally, behavioral and EEG responses to the CS+ and CS- differed more when participants heard their partner's voice than when they heard the stranger's voice. Thus, attachment voices, even as pure vowel sounds without any semantic information, enhanced acquisition of conditioned safety (CS-). These findings may inform new techniques for improving clinical treatments of fear- and anxiety-related disorders and psychological interventions against the mental health effects of public health emergencies.
Affiliation(s)
- Haoran Dou: Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China; Faculty of Education and Psychology, University of Jyväskylä, Jyväskylä, Finland; College of Psychology, Shenzhen University, Shenzhen, China
- Yuqian Dai: College of Psychology, Shenzhen University, Shenzhen, China
- Yiwen Qiu: College of Psychology, Shenzhen University, Shenzhen, China
- Yi Lei: Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
|
14
|
Lunardelo PP, Hebihara Fukuda MT, Zuanetti PA, Pontes-Fernandes ÂC, Ferretti MI, Zanchetta S. Cortical auditory evoked potentials with different acoustic stimuli: Evidence of differences and similarities in coding in auditory processing disorders. Int J Pediatr Otorhinolaryngol 2021; 151:110944. [PMID: 34773882] [DOI: 10.1016/j.ijporl.2021.110944]
Abstract
OBJECTIVES Cortical auditory evoked potentials allow the processing of acoustic signals to be studied at the cortical level, an important step in the diagnostic evaluation process and in monitoring the therapeutic process associated with auditory processing disorders (APD). The differences and similarities in acoustic coding between different types of stimuli in the context of APD remain unknown to date. METHODS A total of 37 children aged between 7 and 11 years, with and without APD (identified based on verbal and non-verbal tests), all with an intelligence quotient appropriate for their chronological age, were assessed. The P1 and N1 components were studied using verbal and non-verbal stimuli. RESULTS The comparison between stimuli in each group revealed that the control group had higher latency and amplitude values for speech stimuli, except for P1 amplitude, whereas the APD group differed in the amplitudes of P1 and N1, with higher values for speech sounds. The differences between the groups varied with the type of stimulus: amplitude differed for the verbal stimulus and latency for the non-verbal stimulus. CONCLUSION The P1 and N1 components revealed that children with APD performed the coding underlying the detection and identification of acoustic signals, whether verbal or non-verbal, according to a different pattern than the children in the control group.
Affiliation(s)
- Pamela Papile Lunardelo: Department of Psychology, School of Philosophy, Sciences and Letters - Ribeirão Preto, University of São Paulo, Brazil
- Marisa Tomoe Hebihara Fukuda: Department of Psychology, School of Philosophy, Sciences and Letters - Ribeirão Preto, University of São Paulo, Brazil; Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil
- Patricia Aparecida Zuanetti: Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil
- Ângela Cristina Pontes-Fernandes: Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil; University Paulista - UNIP, Ribeirão Preto, Brazil
- Sthella Zanchetta: Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil; Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
|
15
|
McGuire K, Firestone GM, Zhang N, Zhang F. The Acoustic Change Complex in Response to Frequency Changes and Its Correlation to Cochlear Implant Speech Outcomes. Front Hum Neurosci 2021; 15:757254. [PMID: 34744668] [PMCID: PMC8566680] [DOI: 10.3389/fnhum.2021.757254]
Abstract
One of the biggest challenges facing cochlear implant (CI) users is the highly variable hearing outcome of implantation across patients. Since speech perception requires the detection of various dynamic changes in acoustic features (e.g., frequency, intensity, timing) in speech sounds, it is critical to examine the ability of CI users to detect within-stimulus acoustic changes. The primary objective of this study was to examine the auditory event-related potential (ERP) evoked by within-stimulus frequency changes (F-changes), one type of acoustic change complex (ACC), in adult CI users, and its correlation with speech outcomes. Twenty-one adult CI users (29 individual CI ears) were tested with psychoacoustic frequency change detection tasks; speech tests including Consonant-Nucleus-Consonant (CNC) word recognition, Arizona Biomedical Sentence Recognition in quiet and noise (AzBio-Q and AzBio-N), and the Digit-in-Noise (DIN) test; and electroencephalographic (EEG) recordings. The stimuli for the psychoacoustic tests and EEG recordings were pure tones at three base frequencies (0.25, 1, and 4 kHz) containing an F-change at the midpoint of the tone. Results showed that the frequency change detection threshold (FCDT), ACC N1' latency, and P2' latency did not differ across frequencies (p > 0.05). The ACC N1'-P2' amplitude was significantly larger for 0.25 kHz than for the other base frequencies (p < 0.05). The mean N1' latency across the three base frequencies was negatively correlated with CNC word recognition (r = -0.40, p < 0.05) and CNC phoneme scores (r = -0.40, p < 0.05), and positively correlated with mean FCDT (r = 0.46, p < 0.05). The P2' latency was positively correlated with DIN (r = 0.47, p < 0.05) and mean FCDT (r = 0.47, p < 0.05). There was no statistically significant correlation between N1'-P2' amplitude and speech outcomes (all ps > 0.05).
The results indicate that variability in CI speech outcomes assessed with the CNC, AzBio-Q, and DIN tests can be partially explained (approximately 16-21% of the variance) by variability in the cortical sensory encoding of F-changes reflected by the ACC.
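The "approximately 16-21%" figure follows from squaring the reported Pearson correlations (|r| of about 0.40-0.46), since r² is the proportion of shared variance. A short sketch; the synthetic arrays below are hypothetical illustrations, not study data:

```python
import numpy as np

# Squaring the reported correlations reproduces the variance-explained range.
r_reported = [-0.40, 0.46]                 # e.g., N1' latency vs. CNC words, vs. FCDT
variance_explained = [r ** 2 for r in r_reported]   # ~0.16 and ~0.21

# Toy illustration of computing r on synthetic (hypothetical) data:
rng = np.random.default_rng(0)
latency = rng.normal(120, 15, 29)                    # hypothetical N1' latencies (ms), 29 ears
score = 80 - 0.4 * latency + rng.normal(0, 10, 29)   # hypothetical word scores (%)
r = np.corrcoef(latency, score)[0, 1]                # Pearson correlation
```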
Affiliation(s)
- Kelli McGuire: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Gabrielle M. Firestone: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang: Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
|
16
|
Informational Masking Effects of Similarity and Uncertainty on Early and Late Stages of Auditory Cortical Processing. Ear Hear 2021; 42:1006-1023. [PMID: 33416259] [DOI: 10.1097/aud.0000000000000997]
Abstract
PURPOSE Understanding speech against a background of other people talking is a difficult listening situation for hearing-impaired individuals, and even for those with normal hearing. Speech-on-speech masking is known to increase perceptual difficulty beyond nonspeech background noise because of informational masking over and above the effects of energetic masking. While informational masking research has identified target-masker similarity and uncertainty as factors that reduce behavioral performance in speech background noise, critical gaps in knowledge remain, including the underlying neural-perceptual processes. By systematically manipulating aspects of acoustic similarity and uncertainty in the same auditory paradigm, the current study examined the time course of these informational masking effects and objectively quantified them at both early and late stages of auditory processing using auditory evoked potentials (AEPs). METHOD Thirty participants were included in a cross-sectional repeated-measures design. Target-masker similarity was manipulated by varying the linguistic/phonetic similarity (i.e., language) of the talkers in the background. Specifically, four levels representing hypothesized increasing levels of informational masking were implemented: (1) no masker (quiet); (2) Mandarin; (3) Dutch; and (4) English. Stimulus uncertainty was manipulated by task complexity, specifically the presentation of the target-to-target interval (TTI) in the auditory evoked paradigm. Participants had to discriminate between English word stimuli (/bæt/ and /pæt/) presented in an oddball paradigm under each masker condition, pressing buttons in response to the target or standard stimulus. Responses were recorded simultaneously for P1-N1-P2 (standard waveform) and P3 (target waveform). This design allowed simultaneous recording of multiple AEP peaks, as well as accuracy, reaction time, and d' behavioral discrimination from the button-press responses.
RESULTS Several trends in AEP components were consistent with effects of increasing linguistic/phonetic similarity and stimulus uncertainty. All babble maskers significantly affected outcomes compared with quiet. The native-language English masker had the largest effect on outcomes in the AEP paradigm, including reduced P3 amplitude and area, as well as decreased accuracy and d' behavioral discrimination for target-word responses. AEP outcomes for the Mandarin and Dutch maskers, however, did not differ significantly on any measured component. Latency outcomes for both N1 and P3 also supported an effect of stimulus uncertainty, consistent with increased processing time related to greater task complexity. An unanticipated result was the absence of an interaction between linguistic/phonetic similarity and stimulus uncertainty. CONCLUSIONS Effects of both similarity and uncertainty were evident at the level of the P3 more than at the earlier N1 level of auditory cortical processing, suggesting that higher-level active auditory processing may be more sensitive to informational masking. The lack of a significant interaction between similarity and uncertainty at either level of processing suggests that these informational masking factors operated independently. Speech babble maskers across languages altered AEP component measures, behavioral detection, and reaction time; specifically, the largest effects occurred when the babble was in the native/same language as the target, while the effects of the foreign-language maskers did not differ. These objective results provide a foundation for further investigation of how the linguistic content of target and masker and task difficulty contribute to difficulty understanding speech in noise.
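The d' discrimination index used in this paradigm is the difference between the z-transformed hit and false-alarm rates. A minimal sketch with illustrative rates (not the study's values):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g., 85% hits to /bæt/ targets and 10% false alarms to /pæt/ standards:
dp = d_prime(0.85, 0.10)  # ~2.32
```

Rates of exactly 0 or 1 are undefined under this transform and are usually corrected (e.g., by 1/(2N)) before computing d'.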
|
17
|
Dyball A, Xu Rattanasone N, Ibrahim R, Sharma M. Alpha synchronisation of acoustic responses in active listening is indicative of native language listening experience. Int J Audiol 2021; 61:490-499. [PMID: 34237224] [DOI: 10.1080/14992027.2021.1941326]
Abstract
OBJECTIVE To examine the effect of language experience on auditory evoked and oscillatory brain responses to lexical tone in passive (ACC) and active (P300) listening conditions. DESIGN Language experience was evaluated using two groups, Mandarin- vs. English-listeners (with vs. without lexical tone experience). Two Mandarin lexical tones with pitch movement (T2 rising; T3 dipping), produced on the syllable /ba/, were used as stimuli. For passive listening, each tone was presented in a block. For active listening, each tone served as the standard (80%) or deviant (20%) across two blocks. Presentation order was counterbalanced across participants in both tasks. STUDY SAMPLE 10 adult Mandarin-listeners and 13 Australian-English-listeners contributed data. RESULTS Both global field power (GFP) and time-frequency analysis (TFA) failed to detect group differences in the passive listening condition for the ACC response. In contrast, the active listening condition revealed significant group differences for T2: GFP showed a trend toward significance, with larger GFP (less consistent responses) in English- than in Mandarin-listeners, and TFA showed significantly higher alpha synchronisation (more focussed attention) in Mandarin- compared with English-listeners. CONCLUSIONS Acoustic responses to speech are influenced by language experience, but only during active listening, suggesting that focussed attention is linked to higher-level language processes.
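Global field power, one of the two measures compared above, is conventionally the spatial standard deviation of the average-referenced scalp potentials at each time point. A minimal sketch with toy array shapes:

```python
import numpy as np

def global_field_power(eeg):
    """GFP over time: the standard deviation across electrodes at each
    sample of an (n_electrodes, n_samples) average-referenced array."""
    return eeg.std(axis=0)

# A flat topography (identical value at every electrode) has zero GFP;
# random data has positive GFP at every sample.
flat = np.ones((32, 100))              # 32 electrodes, 100 samples
rng = np.random.default_rng(1)
noisy = rng.normal(size=(32, 100))
gfp_flat = global_field_power(flat)    # all zeros
gfp_noisy = global_field_power(noisy)  # all positive
```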
Affiliation(s)
- Alyssa Dyball: Department of Linguistics, Macquarie University, Sydney, Australia
- Nan Xu Rattanasone: Department of Linguistics, Macquarie University, Sydney, Australia; Centre for Language Sciences, Macquarie University, Sydney, Australia; Multilingualism Research Centre, Macquarie University, Sydney, Australia
- Ronny Ibrahim: Department of Linguistics, Macquarie University, Sydney, Australia
- Mridula Sharma: Department of Linguistics, Macquarie University, Sydney, Australia
|
18
|
Kösemihal E, Akdas F. The Effect of Nonlinear Frequency Compression on Acoustic Change Complex Responses in High-Frequency Dead-Regioned Hearing Loss. J Am Acad Audiol 2021; 32:164-170. [PMID: 34030193] [DOI: 10.1055/s-0041-1722948]
Abstract
PURPOSE This study examined whether stimuli containing high-frequency information processed with nonlinear frequency compression can be distinguished at the cortical level using the acoustic change complex (ACC), and compared the results with the ACC responses of individuals with normal hearing. RESEARCH DESIGN This is a case-control study. STUDY SAMPLE Thirty adults (21 males and nine females) with normal hearing, aged between 16 and 63 years (mean: 36.7 ± 12.9 years), and 20 adults (16 males and four females) with hearing loss, aged between 16 and 70 years (mean: 49.0 ± 19.8 years), were included. DATA COLLECTION AND ANALYSIS A 1,000 ms stimulus containing 500 and 4,000 Hz tonal segments was used for ACC recording. The start frequency (SF) and compression ratio (CR) parameters of the hearing aids were programmed according to the default settings in the device software (SFd, CRd), an optimal setting (SFo, CRo), and an extra-compression setting (SFe, CRe), and the ACC was recorded for each condition. Evaluation was based on the latencies of the P1-N1-P2 onset complex and the ACC complex. The independent-samples t-test was used to test the significance of differences between groups. RESULTS The ACC was observed in all individuals. There was a significant difference in wave latencies between the normal-hearing and hearing-impaired groups: all mean wave latencies of the hearing-impaired individuals were longer than those of the normal-hearing individuals. There were statistically significant differences between the SFd-SFo, SFd-SFe, and SFo-SFe parameters, but no difference among CRd, CRo, and CRe in terms of compression ratio. CONCLUSION To discriminate high-frequency information at the cortical level, one should not rely on the default SF and CR settings of the hearing aid; the optimal bandwidth must be adjusted without applying insufficient compression or over-compression.
The ACC can be used alongside real-ear measurement for hearing aid fitting.
Affiliation(s)
- Ebru Kösemihal: Department of Audiology, Near East University, Nicosia, Cyprus
- Ferda Akdas: Department of Audiology, Marmara University School of Medicine, Istanbul, Turkey
|
19
|
Cortical potentials evoked by tone frequency changes compared to frequency discrimination and speech perception: Thresholds in normal-hearing and hearing-impaired subjects. Hear Res 2020; 401:108154. [PMID: 33387905] [DOI: 10.1016/j.heares.2020.108154]
Abstract
Frequency discrimination ability varies within the normal-hearing population, partially explained by factors such as musical training and age, and it deteriorates with hearing loss. Although essential for several auditory tasks, frequency discrimination is not routinely measured in clinical settings. This study investigates cortical auditory evoked potentials in response to frequency changes, known as acoustic change complexes (ACCs), and explores their value as a clinically applicable objective measurement of frequency discrimination. In 12 normal-hearing and 13 age-matched hearing-impaired subjects, ACC thresholds were recorded at four base frequencies (0.5, 1, 2, and 4 kHz) and compared to psychophysically assessed frequency discrimination thresholds. ACC thresholds showed a moderate to strong correlation with psychophysical frequency discrimination thresholds. In addition, ACC thresholds increased with hearing loss, and higher ACC thresholds were associated with poorer speech perception in noise. The ACC threshold in response to a frequency change therefore holds promise as an objective clinical measurement in hearing impairment, indicative of frequency discrimination ability and related to speech perception. However, recordings as conducted in the current study are relatively time-consuming, so clinical application would currently be most relevant in cases where behavioral testing is unreliable.
|
20
|
Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Brain oscillations evoked by sound motion. Brain Res 2020; 1752:147232. [PMID: 33385379] [DOI: 10.1016/j.brainres.2020.147232]
Abstract
The present study investigates the event-related oscillations underlying the motion-onset response (MOR) evoked by sounds moving at different velocities. EEG was recorded for stationary sounds and for three patterns of sound motion produced by changes in interaural time differences. We explored the effect of motion velocity on the MOR potential, and also on the event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) calculated from the time-frequency decomposition of the EEG signals. The phase coherence of slow oscillations increased with motion velocity, similarly to the magnitudes of the cN1 and cP2 components of the MOR response. The delta-to-alpha inter-trial spectral power remained at the same level up to, but not including, the highest velocity, suggesting that gradual spatial changes within the sound did not induce non-coherent activity. Conversely, abrupt sound displacement induced theta-alpha oscillations with low phase consistency. The findings suggest that the MOR potential may be generated mainly by phase resetting of slow oscillations, and that the degree of phase coherence may be considered a neurophysiological indicator of sound motion processing.
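Inter-trial phase coherence, the study's index of phase consistency across trials, is the magnitude of the across-trial mean of unit phase vectors taken from a time-frequency decomposition. A minimal sketch with toy signals (in practice the narrow-band analytic signals would come from a wavelet or Hilbert transform):

```python
import numpy as np

def inter_trial_coherence(analytic):
    """ITC over time for an (n_trials, n_samples) array of narrow-band
    analytic signals: |mean over trials of exp(i*phase)|.
    1 = perfect phase locking; values near 0 = random phase."""
    phase = np.angle(analytic)
    return np.abs(np.exp(1j * phase).mean(axis=0))

# Perfectly phase-locked toy trials (identical 5 Hz phase) give ITC = 1.
t = np.linspace(0, 1, 500)
locked = np.tile(np.exp(1j * 2 * np.pi * 5 * t), (20, 1))
itc_locked = inter_trial_coherence(locked)
```

With random phase across trials, ITC shrinks toward 0 as the trial count grows, which is why phase resetting (re-alignment of ongoing oscillations to the stimulus) shows up as elevated ITC.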
Affiliation(s)
- Lidia B Shestopalova: Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
- Varvara V Semenova: Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
- Nikolay I Nikitin: Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
|
21
|
Megha, Maruthy S. Effect of Hearing Aid Acclimatization on Speech-in-Noise Perception and Its Relationship With Changes in Auditory Long Latency Responses. Am J Audiol 2020; 29:774-784. [PMID: 32970453] [DOI: 10.1044/2020_aja-19-00124]
Abstract
Objective The study tracked speech-in-noise perception and auditory long latency responses (ALLRs) over a period of hearing aid use in naïve hearing aid users. The primary aim was to investigate the relationship between change in speech-in-noise perception and change in ALLRs. Method Thirty adults with mild-to-moderate sensorineural hearing loss (clinical group) and 17 adults with normal hearing (control group), aged 23-60 years, participated in the study. Syllable identification in noise (SIN) and ALLRs in noise were measured three times (three sessions) over 2 months of hearing aid use. Results There was a significant increase in SIN scores and a decrease in ALLR latencies in the later sessions compared with the baseline session in the clinical group, whereas the changes across the three sessions in the control group were not statistically significant. The magnitude of change in ALLRs in the clinical group did not significantly correlate with the change in their SIN scores. Conclusions The study provides evidence of improvements in speech perception in noise and in the processing time of auditory cortical areas with hearing aid acclimatization. However, improvement in ALLRs does not assure improvement in speech perception in noise.
Affiliation(s)
- Megha: Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka, India
- Sandeep Maruthy: Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka, India
|
22
|
Cortical processing of location and frequency changes of sounds in normal hearing listeners. Hear Res 2020; 400:108110. [PMID: 33220506] [DOI: 10.1016/j.heares.2020.108110]
Abstract
Sounds we hear in daily life contain changes in acoustic features (e.g., frequency, intensity, and duration, or "what" information) and/or changes in location ("where" information). The purpose of this study was to examine the cortical auditory evoked potentials (CAEPs) to a change within a stimulus, the acoustic change complex (ACC), in the frequency (F) and location (L) of the sound in normal-hearing listeners. Fifteen right-handed young normal-hearing listeners participated in the electroencephalographic (EEG) recordings. The acoustic stimuli were pure tones (base frequency 250 Hz) of 1 s, with a perceivable change either in location (L, 180°), frequency (F, 5% and 50%), or both location and frequency (L+F) in the middle of the tone. Additionally, a 250 Hz tone of 1 s without any change was used as a reference. The participants were asked to listen passively to the stimuli and not to move their heads during testing. Compared to the reference tone, which elicited only the onset CAEP, the tones containing changes (L, F, or L+F) elicited both the onset CAEP and the ACC. Waveform analysis of ACCs from the vertex electrode (Cz) showed that larger sound changes evoked larger peak amplitudes [e.g., (L+50%F)-change > L-change; (L+50%F)-change > 5%F-change] and shorter peak latencies [(L+5%F)-change < 5%F-change; 50%F-change < 5%F-change; (L+50%F)-change < 5%F-change]. The current density patterns for the ACC N1' peak displayed some differences between the L-change and the F-change, supporting different cortical processing for the "where" and "what" information of the sound. Regardless of the nature of the sound change, larger changes evoked stronger activation than smaller changes [e.g., L-change > 5%F-change; (L+5%F)-change > 5%F-change; 50%F-change > 5%F-change] in frontal lobe regions including the cingulate gyrus, medial frontal gyrus (MFG), superior frontal gyrus (SFG), the limbic lobe cingulate gyrus, and the parietal lobe postcentral gyrus.
The results suggest that sound change detection involves a memory-based acoustic comparison (the neural encoding of the sound change vs. the stored neural encoding of the pre-change stimulus) and an involuntary attention switch.
|
23
|
Coding of consonant-vowel transition in children with central auditory processing disorder: an electrophysiological study. Eur Arch Otorhinolaryngol 2020; 278:3673-3681. [PMID: 33052460] [DOI: 10.1007/s00405-020-06425-6]
Abstract
INTRODUCTION The acoustic change complex (ACC) is an important tool for investigating the encoding of the acoustic properties of speech signals in various populations. However, only a limited number of studies have explored the usefulness of the ACC for studying the neural encoding of consonant-vowel (CV) transitions in children with central auditory processing disorder (CAPD). The present study therefore aims to investigate the utility of the ACC as an objective tool to study the neural representation of CV transitions in children with CAPD. METHODS Twenty children diagnosed with CAPD and 20 normal counterparts, aged 8-14 years, participated. The ACC was acquired using the naturally produced CV syllable /sa/ with a duration of 380 ms. RESULTS The latencies of N1' and P2' were prolonged in children with CAPD compared with their normal counterparts, whereas the amplitudes of N1' and P2' did not differ significantly. Scalp topography showed significantly different activation patterns for children with and without CAPD. CONCLUSION The prolonged ACC latencies indicate poor encoding of the CV transition in children with CAPD. The difference in scalp topography might reflect the involvement of additional brain areas in the neural discrimination task in children with CAPD.
|
24
|
Kumar P, Singh NK, Sanju HK, Kaverappa GM. Feasibility of objective assessment of difference limen for intensity using acoustic change complex in children with central auditory processing disorder. Int J Pediatr Otorhinolaryngol 2020; 137:110189. [PMID: 32682166] [DOI: 10.1016/j.ijporl.2020.110189]
Abstract
INTRODUCTION The acoustic change complex (ACC) reflects the brain's ability to discriminate between acoustic features in an ongoing stimulus. This property of the ACC has generated interest in its usefulness as an objective tool for evaluating difference limens for various stimulus parameters. The present study therefore aimed to investigate the utility of the ACC as an objective measure of the difference limen for intensity (DLI) in normal-hearing children with and without (C)APD. METHODS Fifteen children with (C)APD and 15 normal-hearing children in whom (C)APD was ruled out (comparison group), aged 8-12 years, underwent ACC recording for six intensity increments (+1, +3, +4, +5, +10, and +20 dB) relative to a 1000 Hz standard stimulus. RESULTS Both the behavioral DLI (DLIb) and the DLI found using the ACC (DLIo) were significantly larger in children with (C)APD than in the comparison group (p < 0.05). Further, there was a significantly strong positive correlation between DLIb and DLIo (p < 0.001). CONCLUSION The outcome of the study provides evidence for the clinical use of the ACC as an objective tool for examining the DLI in children with (C)APD.
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- Niraj Kumar Singh
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- Himanshu Kumar Sanju
- Department of ENT and Audiology, Shri Jagdamba Charitable Eye Hospital, Sri Ganganagar, Rajasthan, India.
25
Kumar P, Sanju HK, Hussain RO, Kaverappa Ganapathy M, Singh NK. Utility of Acoustic Change Complex as an Objective Tool to Evaluate Difference Limen for Intensity in Cochlear Hearing Loss and Auditory Neuropathy Spectrum Disorder. Am J Audiol 2020; 29:375-383. [PMID: 32628503 DOI: 10.1044/2020_aja-19-00084] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose This study aimed to investigate the usefulness of the acoustic change complex (ACC) as an objective measure of the difference limen for intensity (DLI) in auditory neuropathy spectrum disorder (ANSD) and cochlear hearing loss (CHL). Method The study used a multiple static group comparison research design. Twenty normal-hearing individuals (NH), 19 individuals with ANSD, and 23 individuals with CHL underwent DLI measurement using behavioral (psychoacoustic) techniques and the ACC. To elicit the ACC, a 500-ms, 1,000-Hz pure tone was presented at 80 dB SPL, along with six variants of this stimulus with intensity increments of 1, 3, 4, 5, 10, and 20 dB starting 250 ms after stimulus onset. Results The lowest intensity change that produced a replicable and clearly identifiable ACC was referred to as the objective DLI. Compared with the NH and CHL groups, both the behavioral and the objective DLI were significantly larger (poorer) in the ANSD group (p < .05). A significantly strong positive correlation existed between the DLI obtained using behavioral and objective measures (p < .05). Conclusions The ACC could be a useful objective tool to measure the DLI in clinical populations, provided the individuals fulfill the prerequisite of the presence of auditory long-latency responses. Supplemental Material https://doi.org/10.23641/asha.12560132.
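The "lowest increment with a replicable ACC" rule used here to define the objective DLI can be sketched in a few lines. This is an illustration only: the amplitude values and the 0.5 µV identifiability criterion below are hypothetical, not data or criteria from the study.

```python
def objective_dli(increments_db, acc_amplitudes_uv, criterion_uv=0.5):
    """Smallest intensity increment (dB) whose ACC amplitude reaches a
    replicability criterion; None if no increment yields a clear ACC."""
    for inc, amp in sorted(zip(increments_db, acc_amplitudes_uv)):
        if amp >= criterion_uv:
            return inc
    return None

# Hypothetical N1'-P2' amplitudes (µV) for the increments used in the study
increments = [1, 3, 4, 5, 10, 20]
amps_nh = [0.2, 0.6, 0.9, 1.1, 1.8, 2.4]    # clear ACC from +3 dB onward
amps_ansd = [0.1, 0.3, 0.4, 0.7, 1.5, 2.1]  # clear ACC only from +5 dB onward

dli_nh = objective_dli(increments, amps_nh)      # -> 3
dli_ansd = objective_dli(increments, amps_ansd)  # -> 5
```

On these made-up amplitudes the procedure returns a smaller (better) objective DLI for the normal-hearing series than for the poorer series, mirroring the kind of group difference the study reports.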
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
- Himanshu Kumar Sanju
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
- Reesha Oovattil Hussain
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
- Niraj Kumar Singh
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
26
Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words. eNeuro 2020; 7:ENEURO.0475-19.2020. [PMID: 32513662 PMCID: PMC7470935 DOI: 10.1523/eneuro.0475-19.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2019] [Revised: 05/15/2020] [Accepted: 06/01/2020] [Indexed: 11/21/2022] Open
Abstract
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known how the highly dynamic and variable perceptual signal is mapped to existing linguistic and semantic representations. In this novel approach, we used the natural acoustic variability of sounds and mapped them to magnetoencephalography (MEG) data using physiologically inspired machine-learning models. We aimed to determine how well the models, differing in their representation of temporal information, serve to decode and reconstruct spoken words from MEG recordings in 16 healthy volunteers. We discovered that dynamic time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of the acoustic-phonetic features of speech. In contrast, time-locking was not highlighted in cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, including human-made sounds with temporal modulation content similar to speech. The amplitude envelope of the spoken words was particularly well reconstructed based on cortical evoked responses. Our results indicate that speech is encoded cortically with especially high temporal fidelity. This speech tracking by evoked responses may partly reflect the same underlying neural mechanism as the frequently reported entrainment of cortical oscillations to the amplitude envelope of speech. Furthermore, the phoneme content was reflected in cortical evoked responses simultaneously with the spectrotemporal features, pointing to an instantaneous transformation of the unfolding acoustic features into linguistic representations during speech processing.
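The reconstruction approach described above (mapping time-locked cortical responses back to the stimulus amplitude envelope) is commonly implemented as time-lagged ridge regression. The following is a minimal self-contained sketch on synthetic data, not the authors' model or MEG pipeline; the 3-sample "neural delay" and the toy envelope are invented for illustration.

```python
import numpy as np

def lagged_design(X, n_lags):
    """Design matrix holding current and future sensor samples:
    block k, column c at time t equals X[t + k, c] (zero-padded at the end)."""
    n_times, n_ch = X.shape
    cols = []
    for lag in range(n_lags):
        shifted = np.zeros_like(X)
        shifted[:n_times - lag] = X[lag:]
        cols.append(shifted)
    return np.hstack(cols)

def ridge_decoder(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Synthetic data: one "sensor" carries the envelope delayed by 3 samples,
# the other is noise -- a stand-in for time-locked cortical responses.
rng = np.random.default_rng(0)
env = np.abs(rng.standard_normal(500)).cumsum() % 5.0
meg = np.zeros((500, 2))
meg[3:, 0] = env[:-3]                       # envelope, lagged by 3 samples
meg[:, 1] = 0.1 * rng.standard_normal(500)  # pure noise channel

X = lagged_design(meg, n_lags=5)            # lags 0..4 cover the 3-sample delay
w = ridge_decoder(X, env, alpha=0.1)
recon = X @ w
r = np.corrcoef(recon[:-10], env[:-10])[0, 1]  # reconstruction accuracy
```

Because one lagged copy of the synthetic sensor matches the envelope exactly, the decoder recovers it almost perfectly; with real MEG data the same machinery yields the partial reconstruction accuracies the abstract describes.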
27
Uhrig S, Perkis A, Behne DM. Effects of speech transmission quality on sensory processing indicated by the cortical auditory evoked potential. J Neural Eng 2020; 17:046021. [PMID: 32422617 DOI: 10.1088/1741-2552/ab93e1] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
OBJECTIVE Degradations of transmitted speech have been shown to affect perceptual and cognitive processing in human listeners, as indicated by the P3 component of the event-related brain potential (ERP). However, research suggests that previously observed P3 modulations might actually be traced back to earlier neural modulations in the time range of the P1-N1-P2 complex of the cortical auditory evoked potential (CAEP). This study investigates whether auditory sensory processing, as reflected by the P1-N1-P2 complex, is already systematically altered by speech quality degradations. APPROACH Electrophysiological data from two studies were analyzed to examine effects of speech transmission quality (high-quality, noisy, bandpass-filtered) for spoken words on amplitude and latency parameters of individual P1, N1 and P2 components. MAIN RESULTS In the resultant ERP waveforms, an initial P1-N1-P2 manifested at stimulus onset, while a second N1-P2 occurred within the ongoing stimulus. Bandpass-filtered versus high-quality word stimuli evoked a faster and larger initial N1 as well as a reduced initial P2, hence exhibiting effects as early as the sensory stage of auditory information processing. SIGNIFICANCE The results corroborate the existence of systematic quality-related modulations in the initial N1-P2, which may potentially have carried over into P3 modulations demonstrated by previous studies. In future psychophysiological speech quality assessments, rigorous control procedures are needed to ensure the validity of P3-based indication of speech transmission quality. An alternative CAEP-based assessment approach is discussed, which promises to be more efficient and less constrained than the established approach based on P3.
Affiliation(s)
- Stefan Uhrig
- Quality and Usability Lab, Technische Universität Berlin, D-10587 Berlin, Germany; Department of Electronic Systems, Norwegian University of Science and Technology, 7491 Trondheim, Norway
28
Liang C, Wenstrup LH, Samy RN, Xiang J, Zhang F. The Effect of Side of Implantation on the Cortical Processing of Frequency Changes in Adult Cochlear Implant Users. Front Neurosci 2020; 14:368. [PMID: 32410947 PMCID: PMC7201306 DOI: 10.3389/fnins.2020.00368] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2019] [Accepted: 03/25/2020] [Indexed: 12/03/2022] Open
Abstract
Cochlear implants (CIs) are widely used in children and adults to restore hearing function. However, CI outcomes vary widely, and the factors affecting them are not well understood. It is well known that the right and left hemispheres play different roles in auditory perception in normal-hearing adult listeners, but it is unknown how the side of implantation may affect CI outcomes. In this study, the effect of implantation side on how the brain processes frequency changes within a sound was examined in 12 right-handed adult CI users. CI outcomes were assessed with the behaviorally measured frequency change detection threshold (FCDT), which has been reported to significantly affect CI speech performance. Brain activation and its sources were also examined using the acoustic change complex (ACC, a type of cortical potential evoked by acoustic changes within a stimulus), analyzed with waveform measures and standardized low-resolution brain electromagnetic tomography (sLORETA). CI users showed activation in the temporal lobe and in non-temporal areas such as the frontal lobe. Right-ear CIs activated the contralateral hemisphere more efficiently than left-ear CIs. For right-ear CIs, increased activation in the contralateral temporal lobe together with decreased activation in the contralateral frontal lobe was correlated with good frequency change detection performance (lower FCDTs). No such trend was found for left-ear CIs. These results suggest that the side of implantation may significantly affect neuroplasticity patterns in adults.
Affiliation(s)
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States; Child Psychiatry and Rehabilitation, Affiliated Shenzhen Maternity & Child Healthcare Hospital, Southern Medical University, Shenzhen, China
- Lisa H Wenstrup
- Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, OH, United States
- Ravi N Samy
- Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, OH, United States
- Jing Xiang
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
29
Age-Related Changes in Temporal Resolution Revisited: Electrophysiological and Behavioral Findings From Cochlear Implant Users. Ear Hear 2020; 40:1328-1344. [PMID: 31033701 DOI: 10.1097/aud.0000000000000732] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
OBJECTIVES The mechanisms underlying age-related changes in speech perception are still unclear; they are most likely multifactorial and can often be difficult to parse out from the effects of hearing loss. Age-related changes in temporal resolution (i.e., the ability to track rapid changes in sounds) have long been associated with the speech perception declines exhibited by many older individuals. The goals of this study were as follows: (1) to assess age-related changes in temporal resolution in cochlear implant (CI) users, and (2) to examine the impact of changes in temporal resolution and cognition on the perception of speech in noise. In this population, it is possible to bypass the cochlea and stimulate the auditory nerve directly in a noninvasive way. Additionally, CI technology allows for manipulation of the temporal properties of a signal without changing its spectrum. DESIGN Twenty postlingually deafened Nucleus CI users took part in this study. They were divided into groups of younger (18 to 40 years) and older (68 to 82 years) participants. A cross-sectional study design was used. The speech processor was bypassed and a mid-array electrode was used for stimulation. We compared peripheral and central physiologic measures of temporal resolution with perceptual measures obtained using similar stimuli. Peripherally, temporal resolution was assessed with measures of the rate of recovery of the electrically evoked compound action potential (ECAP), evoked using a single pulse and a pulse train as maskers. The acoustic change complex (ACC) to gaps in pulse trains was used to assess temporal resolution more centrally. Psychophysical gap detection thresholds were also obtained. Cognitive assessment included two tests of processing speed (Symbol Search and Coding) and one test of working memory (Digit Span Test). Speech perception was tested in the presence of background noise (QuickSIN test). A correlational design was used to explore the relationship between temporal resolution, cognition, and speech perception. RESULTS The only metric that showed significant age effects in temporal processing was the ECAP recovery function recorded using pulse train maskers. Younger participants were found to have faster rates of neural recovery following presentation of pulse trains than older participants. Age was not found to have a significant effect on speech perception. When results from both groups were combined, digit span was the only measure significantly correlated with speech perception performance. CONCLUSIONS In this sample of CI users, few effects of advancing age on temporal resolution were evident. While this finding would be consistent with a general lack of aging effects on temporal resolution, it is also possible that aging effects are influenced by processing peripheral to the auditory nerve, which is bypassed by the CI. However, it is known that cross-fiber neural synchrony is improved with electrical (as opposed to acoustic) stimulation. This change in neural synchrony may, in turn, make temporal cues more robust/perceptible to all CI users. Future studies involving larger sample sizes should be conducted to confirm these findings. Results of this study also add to the growing body of literature that suggests that working memory is important for the perception of degraded speech.
30
Miller SE, Zhang Y. Neural Coding of Syllable-Final Fricatives with and without Hearing Aid Amplification. J Am Acad Audiol 2020; 31:566-577. [PMID: 32340057 DOI: 10.1055/s-0040-1709448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
BACKGROUND Cortical auditory event-related potentials are a potentially useful clinical tool to objectively assess speech outcomes with rehabilitative devices. Whether hearing aids reliably encode the spectrotemporal characteristics of fricative stimuli in different phonological contexts and whether these differences result in distinct neural responses with and without hearing aid amplification remain unclear. PURPOSE To determine whether the neural coding of the voiceless fricatives /s/ and /ʃ/ in the syllable-final context reliably differed without hearing aid amplification and whether hearing aid amplification altered neural coding of the fricative contrast. RESEARCH DESIGN A repeated-measures, within-subject design was used to compare the neural coding of a fricative contrast with and without hearing aid amplification. STUDY SAMPLE Ten adult listeners with normal hearing participated in the study. DATA COLLECTION AND ANALYSIS Cortical auditory event-related potentials were elicited to an /ɑs/-/ɑʃ/ vowel-fricative contrast in unaided and aided listening conditions. Neural responses to the speech contrast were recorded at 64-electrode sites. Peak latencies and amplitudes of the cortical response waveforms to the fricatives were analyzed using repeated-measures analysis of variance. RESULTS The P2' component of the acoustic change complex significantly differed for the syllable-final fricative contrast both with and without hearing aid amplification. Hearing aid amplification differentially altered the neural coding of the contrast across frontal, temporal, and parietal electrode regions. CONCLUSIONS Hearing aid amplification altered the neural coding of syllable-final fricatives. However, the contrast remained acoustically distinct in the aided and unaided conditions, and cortical responses to the fricative significantly differed with and without the hearing aid.
Affiliation(s)
- Sharon E Miller
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas
- Yang Zhang
- Department of Speech-Language Hearing Science, University of Minnesota, Minneapolis, Minnesota; Center for Neurobehavioral Development, University of Minnesota, Minneapolis, Minnesota; Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota
31
Yaralı M. Varying effect of noise on sound onset and acoustic change evoked auditory cortical N1 responses evoked by a vowel-vowel stimulus. Int J Psychophysiol 2020; 152:36-43. [PMID: 32302643 DOI: 10.1016/j.ijpsycho.2020.04.010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 04/09/2020] [Accepted: 04/10/2020] [Indexed: 11/24/2022]
Abstract
INTRODUCTION According to previous studies, noise causes prolonged latencies and decreased amplitudes in acoustic-change-evoked cortical responses. For a consonant-vowel stimulus in particular, speech-shaped noise affects the onset-evoked response more than the acoustic-change-evoked response. Reasoning that this may be related to the spectral characteristics of the stimuli and the noise, the current study presented a vowel-vowel stimulus (/ui/) in white noise during cortical response recordings. The hypothesis was that the effect of noise would be greater on the acoustic change N1 than on the onset N1, owing to masking of the formant transitions. METHODS Onset and acoustic-change-evoked auditory cortical N1-P2 responses were obtained from 21 young adults with normal hearing while presenting 1000 ms /ui/ stimuli in quiet and in white noise at +10 dB and 0 dB signal-to-noise ratio (SNR). RESULTS In the quiet and +10 dB SNR conditions, N1-P2 responses to both onset and change were present. In the +10 dB SNR condition, acoustic change N1-P2 peak-to-peak amplitudes were reduced and N1 latencies prolonged compared to the quiet condition, whereas onset N1 latencies and N1-P2 peak-to-peak amplitudes did not change significantly. In the 0 dB SNR condition, change responses were not observed, but onset N1-P2 peak-to-peak amplitudes were significantly lower and onset N1 latencies significantly longer than in the quiet and +10 dB SNR conditions. Onset and change responses were also compared with each other in each condition: in quiet, their N1 latencies and N1-P2 peak-to-peak amplitudes did not differ significantly, whereas at +10 dB SNR, acoustic change N1 latencies were longer and N1-P2 amplitudes lower than those for onset. 
DISCUSSION/CONCLUSIONS The effect of noise was greater on the acoustic-change-evoked N1 response than on the onset N1. This may be related to the spectral characteristics of the noise and stimuli, to differences in the acoustic features of sound onsets and acoustic changes, or to differences in the mechanisms for detecting acoustic changes and sound onsets. To investigate the reasons for this more pronounced effect of noise on acoustic changes, future work with different vowel-vowel transitions in different noise types is suggested.
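The +10 dB and 0 dB SNR conditions above presuppose mixing the stimulus with white noise at a calibrated level. Below is a minimal sketch of how such a mixture is typically constructed; the two-component tone merely stands in for the /ui/ stimulus and is not the recording used in the study.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that the mixture has the requested signal-to-noise
    ratio in dB, then return signal + scaled noise."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_sig / (10 ** (snr_db / 10))
    scaled = noise * np.sqrt(target_p_noise / p_noise)
    return signal + scaled

rng = np.random.default_rng(1)
fs = 16000
t = np.arange(fs) / fs
# Crude vowel-like stand-in: a low "F1-like" and a higher "F2-like" component
vowel_like = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2300 * t)
white = rng.standard_normal(fs)

mix_10 = mix_at_snr(vowel_like, white, 10.0)  # +10 dB SNR condition
mix_0 = mix_at_snr(vowel_like, white, 0.0)    # 0 dB SNR condition
```

Power-based scaling like this keeps the stimulus level fixed across conditions and varies only the noise, which is the usual convention when comparing quiet, +10 dB, and 0 dB SNR recordings.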
Affiliation(s)
- Mehmet Yaralı
- Department of Audiology, Hacettepe University, Ankara, Turkey.
32
Lee JY, Kang BC, Park JW, Park HJ. Changes in Cortical Auditory Evoked Potentials by Ipsilateral, Contralateral and Binaural Speech Stimulation in Normal-Hearing Adults. Clin Exp Otorhinolaryngol 2019; 13:133-140. [PMID: 31640335 PMCID: PMC7248601 DOI: 10.21053/ceo.2019.00801] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2019] [Accepted: 09/02/2019] [Indexed: 11/22/2022] Open
Abstract
Objectives Cortical auditory evoked potentials (CAEPs) have been used to examine auditory cortical development or changes in patients with hearing loss. However, no studies have analyzed how CAEP responses differ with the side of sound stimulation. We characterized changes in normal CAEP responses by stimulation side in normal-hearing adults. Methods CAEPs from the right auditory cortex were recorded in 16 adults following unilateral (ipsilateral and contralateral) and bilateral sound stimulation using three speech sounds (/m/, /g/, and /t/). Amplitudes and latencies of the CAEP peaks in the three conditions were compared. Results Contralateral stimulation elicited larger P2-N1 amplitudes (sum of P2 and N1 amplitudes) than ipsilateral stimulation regardless of the stimulation sound, mostly due to larger P2 amplitudes, but elicited P2-N1 amplitudes comparable to bilateral stimulation. Although the P2-N1 amplitudes obtained with the three speech sounds were comparable following contralateral stimulation, the /m/ sound elicited the largest P2-N1 amplitude in the ipsilateral condition due to the largest N1 amplitude, whereas /t/ elicited a larger P2-N1 amplitude than /g/ in the bilateral condition due to a larger P2 amplitude. Conclusion Spectrally different speech sounds and input sides are encoded differently at the cortical level in normal-hearing adults. Standardized speech stimuli, as well as specific input sides, are needed to examine normal development or rehabilitation-related changes of the auditory cortex in the future.
Affiliation(s)
- Jee Yeon Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Byung Chul Kang
- Department of Otorhinolaryngology-Head and Neck Surgery, Ulsan University Hospital, University of Ulsan College of Medicine, Ulsan, Korea
- Jun Woo Park
- Department of Otorhinolaryngology-Head and Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Hong Ju Park
- Department of Otorhinolaryngology-Head and Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
33
Vander Werff KR, Rieger B. Impaired auditory processing and neural representation of speech in noise among symptomatic post-concussion adults. Brain Inj 2019; 33:1320-1331. [PMID: 31317775 PMCID: PMC6731965 DOI: 10.1080/02699052.2019.1641624] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Accepted: 07/05/2019] [Indexed: 10/26/2022]
Abstract
Background: The purpose of the study was to examine auditory event-related potential (AERP) evidence of changes in earlier and later stages of auditory processing in individuals with long-term post-concussion problems compared to healthy controls, with a secondary aim of comparing AERPs by functional auditory behavioral outcomes. Methods: P1-N1-P2 complex and P300 components were recorded to speech in quiet and background-noise conditions in individuals with ongoing post-concussion symptoms following mTBI and in healthy controls. AERPs were also examined between sub-groups with normal or impaired auditory processing on behavioral tests. Results: Group differences were present for later stages of auditory processing (P300). Earlier components did not significantly differ by group overall but were more affected by noise in the mTBI group. P2 amplitude in noise differed between mTBI sub-groups with normal or impaired auditory processing. Conclusion: AERPs revealed differences between healthy controls and those with chronic post-concussion symptoms following mTBI at a later stage of auditory processing (P300). Neural processing at the earlier stage (P1-N1-P2) was more affected by noise in the mTBI group. Preliminary evidence suggested that changes in AERPs at earlier stages of processing may be limited to the proportion of individuals with functional evidence of central auditory dysfunction.
Affiliation(s)
- Kathy R. Vander Werff
- Department of Communication Sciences and Disorders, Syracuse University, Syracuse, NY
- Brian Rieger
- Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY
34
Vonck BMD, Lammers MJW, van der Waals M, van Zanten GA, Versnel H. Cortical Auditory Evoked Potentials in Response to Frequency Changes with Varied Magnitude, Rate, and Direction. J Assoc Res Otolaryngol 2019; 20:489-498. [PMID: 31168759 PMCID: PMC6797694 DOI: 10.1007/s10162-019-00726-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2018] [Accepted: 05/20/2019] [Indexed: 11/13/2022] Open
Abstract
Recent literature on cortical auditory evoked potentials has focused on correlations with hearing performance, with the aim of developing an objective clinical tool. However, cortical responses depend on the type of stimulus and the choice of stimulus parameters. This study investigates cortical auditory evoked potentials to sound changes, so-called acoustic change complexes (ACCs), and the effects of varying three stimulus parameters. In twelve normal-hearing subjects, ACC waveforms were evoked by presenting frequency changes with varying magnitude, rate, and direction. The N1 amplitude and latency were strongly affected by magnitude, which is known from the literature. Importantly, both of these N1 variables were also significantly affected by both the rate and the direction of the frequency change. Larger and earlier N1 peaks were evoked by increasing the magnitude and rate of the frequency change, and with a downward rather than upward direction of the change. The P2 amplitude increased with magnitude and depended, to a lesser extent, on the rate of the frequency change, while direction had no effect on this peak. The N1–P2 interval was not affected by any of the stimulus parameters. In conclusion, the ACC is most strongly affected by the magnitude, and also substantially by the rate and direction, of the change. These stimulus dependencies should be considered when choosing stimuli for the ACC as an objective clinical measure of hearing performance.
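A frequency-change stimulus of the kind used here, parameterized by magnitude, rate, and direction, can be synthesized by integrating the instantaneous frequency so that phase stays continuous at the change point. This is a minimal sketch with illustrative parameter values, not the study's actual stimuli.

```python
import numpy as np

def freq_change_tone(f_start, magnitude_oct, rate_oct_per_s, direction,
                     fs=44100, pre_s=0.5, post_s=0.5):
    """Pure tone that glides from f_start by `magnitude_oct` octaves at
    `rate_oct_per_s`, upward (+1) or downward (-1), with continuous phase.
    Returns (waveform, instantaneous_frequency)."""
    f_end = f_start * 2 ** (direction * magnitude_oct)
    glide_s = magnitude_oct / rate_oct_per_s
    t_pre = np.arange(int(pre_s * fs)) / fs
    t_gl = np.arange(int(glide_s * fs)) / fs
    t_post = np.arange(int(post_s * fs)) / fs
    # Instantaneous frequency: constant, log-linear glide, constant
    f_inst = np.concatenate([
        np.full(t_pre.size, float(f_start)),
        f_start * 2 ** (direction * rate_oct_per_s * t_gl),
        np.full(t_post.size, f_end),
    ])
    # Integrate frequency to get phase, keeping the waveform click-free
    phase = 2 * np.pi * np.cumsum(f_inst) / fs
    return np.sin(phase), f_inst

# Illustrative: half-octave change from 1 kHz at 10 octaves/s, both directions
tone_up, f_up = freq_change_tone(1000, 0.5, 10.0, +1)
tone_down, f_down = freq_change_tone(1000, 0.5, 10.0, -1)
```

Varying `magnitude_oct`, `rate_oct_per_s`, and `direction` independently reproduces the three stimulus dimensions whose effects on the ACC the study compares.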
Affiliation(s)
- Bernard M D Vonck
- Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands
- Marc J W Lammers
- Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands; BC Rotary Hearing and Balance Centre at St. Paul's Hospital, University of British Columbia, Vancouver, British Columbia, Canada
- Marjolijn van der Waals
- Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands
- Gijsbert A van Zanten
- Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands
- Huib Versnel
- Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands
35
Lunardelo PP, Simões HDO, Zanchetta S. Differences and similarities in the long-latency auditory evoked potential recording of P1-N1 for different sound stimuli. REVISTA CEFAC 2019. [DOI: 10.1590/1982-0216/201921218618] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
ABSTRACT Purpose: this study aimed to illustrate the similarities and differences in the recording of the P1 and N1 components for verbal and non-verbal stimuli in an adult sample, for reference purposes. Methods: twenty-one healthy adult individuals of both sexes were recruited. The long-latency auditory evoked potential was recorded simultaneously from both ears during bilateral stimulation, using non-verbal stimuli and the syllable /da/. Results: for both non-verbal and speech stimuli, N1 was identified in 100.0% of the participants, whereas P1 was observed in 85.7% and 95.2% of individuals for non-verbal and speech stimuli, respectively. Significant differences were observed for the P1 and N1 amplitudes between the ears (p < 0.05); the P1 component was larger in the left ear than in the right, whereas the N1 component was larger in the right ear. Regarding the stimuli, the amplitude and latency values of N1 were higher for speech, whereas for P1 a difference was found only in latency. Conclusion: the N1 component was the most frequently detected. Differences in latency and amplitude between stimuli occurred only for N1, which can be explained by its role in speech discrimination.
36

37
Noda K, Kitahara T, Doi K. Sound Change Integration Error: An Explanatory Model of Tinnitus. Front Neurosci 2018; 12:831. [PMID: 30538615 PMCID: PMC6277469 DOI: 10.3389/fnins.2018.00831] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2018] [Accepted: 10/24/2018] [Indexed: 11/23/2022] Open
Abstract
A growing body of research is focused on identifying and understanding the neurophysiological mechanisms that underlie tinnitus. Unfortunately, however, most current models cannot adequately explain the majority of tinnitus features. For instance, although tinnitus generally appears within minutes after entering a silent environment, most models postulate that tinnitus emerges over a much longer timescale (days). Similarly, there is a limited understanding of how the severity of tinnitus can differ in patients with a similar degree of hearing loss. To address this critical knowledge gap, we have formulated a novel explanatory model of tinnitus, the perception-update (PU) model, which rests on a theory of information processing and can explain several key characteristics of tinnitus onset. The PU model posits that the brain continuously updates the information received from the inner ear by comparing it with the information received immediately before. That is, the auditory system processes the relative change in sensory input, as opposed to the absolute value of the auditory input. This is analogous to the functioning of a data compression technology used for music and images called differential pulse code modulation (differential PCM). The PU model proposes that the inner ear transmits sound change to the auditory cortex via an auditory N1 response, an event-related potential component that serves as a prime signaler of auditory input change. In cases of hearing impairment, the PU model posits that the auditory system finds itself in a state of uncertainty where perception has to be predicted based on previous stimulation parameters, which can lead to the emergence of tinnitus.
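The differential-PCM analogy is easy to make concrete. As a rough illustrative sketch (not taken from the paper), the encoder below transmits only the change between successive samples, and the decoder reconstructs the absolute signal by accumulating those changes, mirroring the PU model's claim that the auditory system processes relative rather than absolute input:

```python
# Illustrative sketch of differential PCM, the compression scheme the
# PU model uses as an analogy. Only the change between successive
# samples is transmitted; the receiver reconstructs by accumulation.

def dpcm_encode(samples):
    """Emit the difference between each sample and the previous one."""
    diffs, prev = [], 0
    for s in samples:
        diffs.append(s - prev)  # transmit relative change, not absolute value
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Reconstruct absolute values by running accumulation of the changes."""
    samples, acc = [], 0
    for d in diffs:
        acc += d
        samples.append(acc)
    return samples

signal = [3, 5, 4, 4, 7]
encoded = dpcm_encode(signal)   # [3, 2, -1, 0, 3]
assert dpcm_decode(encoded) == signal
```

In the PU model's terms, a steady input encodes to a stream of zeros: nothing changes, so nothing is signaled, which is the intuition behind the N1 as a change detector.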
Affiliation(s)
- Tadashi Kitahara
- Department of Otorhinolaryngology, Head and Neck Surgery, Nara Medical University, Kashihara, Japan
- Katsumi Doi
- Department of Otolaryngology, Faculty of Medicine, Kindai University, Osakasayama, Japan
|
38
|
Abstract
The problems concerning the registration of late-latency auditory responses to electric stimulation in patients with cochlear implants are considered. The renewed interest in this class of evoked potentials is due to unexplained differences in the results of cochlear implantation in patients with similar audiological data, etiology, age, and history of deafness, as well as to cochlear implant surgery in children in the first years of life and the extended possibilities for speech processor programming. The advantages of this method include the possibility of objectively evaluating the brain's ability to detect and discriminate between different stimulus characteristics, such as loudness differences, temporal changes, or speech tokens. The method is of great clinical significance for electrophysiological monitoring of brain plasticity and for documenting the clinical effectiveness of different rehabilitation methods. Based on our own experimental and clinical results and the literature, we consider the application of different electrically evoked late-latency potentials for monitoring the maturation dynamics of the auditory pathway during electric stimulation, as well as for estimating the effectiveness of cochlear implantation. It is concluded that a longer duration of deafness and a later age at implantation result in immature waveform morphology and delayed peak latencies, and that patients with shorter latencies and higher amplitudes have better speech perception. The use of different classes of electrically evoked auditory cortex responses could provide objective control of the effectiveness of rehabilitative measures in children following cochlear implantation.
Affiliation(s)
- G A Tavartkiladze
- Russian Research Centre for Audiology and Hearing Rehabilitation, Russian Medico-Biological Agency, Moscow, Russia, 117513; Russian Medical Academy of Continuous Professional Education, Ministry of Health of the Russian Federation, Moscow, Russia, 123395
|
39
|
Gansonre C, Højlund A, Leminen A, Bailey C, Shtyrov Y. Task-free auditory EEG paradigm for probing multiple levels of speech processing in the brain. Psychophysiology 2018; 55:e13216. [PMID: 30101984 DOI: 10.1111/psyp.13216] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2017] [Revised: 05/09/2018] [Accepted: 05/09/2018] [Indexed: 11/26/2022]
Abstract
While previous studies on language processing highlighted several ERP components in relation to specific stages of sound and speech processing, no study has yet combined them to obtain a comprehensive picture of language abilities in a single session. Here, we propose a novel task-free paradigm aimed at assessing multiple levels of speech processing by combining various speech and nonspeech sounds in an adaptation of a multifeature passive oddball design. We recorded EEG in healthy adult participants, who were presented with these sounds in the absence of sound-directed attention while being engaged in a primary visual task. This produced a range of responses indexing various levels of sound processing and language comprehension: (a) P1-N1 complex, indexing obligatory auditory processing; (b) P3-like dynamics associated with involuntary attention allocation for unusual sounds; (c) enhanced responses for native speech (as opposed to nonnative phonemes) from ∼50 ms from phoneme onset, indicating phonological processing; (d) amplitude advantage for familiar real words as opposed to meaningless pseudowords, indexing automatic lexical access; (e) topographic distribution differences in the cortical activation of action verbs versus concrete nouns, likely linked with the processing of lexical semantics. These multiple indices of speech-sound processing were acquired in a single attention-free setup that does not require any task or subject cooperation; subject to future research, the present protocol may potentially be developed into a useful tool for assessing the status of auditory and linguistic functions in uncooperative or unresponsive participants, including a range of clinical or developmental populations.
Affiliation(s)
- Christelle Gansonre
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Andreas Højlund
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Alina Leminen
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Christopher Bailey
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Yury Shtyrov
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Laboratory of Behavioural Neurodynamics, St. Petersburg State University, St. Petersburg, Russia
|
40
|
Billings CJ, Grush LD, Maamor N. Acoustic change complex in background noise: phoneme level and timing effects. Physiol Rep 2017; 5(20):e13464. [PMID: 29051305 PMCID: PMC5661231 DOI: 10.14814/phy2.13464] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2017] [Revised: 08/24/2017] [Accepted: 08/29/2017] [Indexed: 11/24/2022] Open
Abstract
The effects of background noise on speech-evoked cortical auditory evoked potentials (CAEPs) can provide insight into the physiology of the auditory system. The purpose of this study was to determine background noise effects on neural coding of different phonemes within a syllable. CAEPs were recorded from 15 young normal-hearing adults in response to the speech signals /s/, /ɑ/, and /sɑ/. Signals were presented at varying signal-to-noise ratios (SNRs). The effects of SNR and context (in isolation or within a syllable) were analyzed for both phonemes. For all three stimuli, latencies generally decreased and amplitudes generally increased as SNR improved, and context effects were not present; however, the amplitude of the /ɑ/ response was the exception, showing no SNR effect and a significant context effect. Differential coding of /s/ and /ɑ/ likely results from level and timing differences. Neural refractoriness may account for the lack of a robust SNR effect on amplitude in the syllable context. The stable amplitude across SNRs in response to the vowel in /sɑ/ suggests the combined effects of (1) acoustic characteristics of the syllable and noise at poor SNRs and (2) refractory effects resulting from phoneme timing at good SNRs. Results provide insights into the coding of multiple-onset speech syllables in varying levels of background noise and, together with behavioral measures, may help to improve our understanding of speech-perception-in-noise difficulties.
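For readers unfamiliar with how stimuli are presented "at varying signal-to-noise ratios", the usual approach is to scale the noise so that the mixture reaches a target SNR in decibels. A minimal illustrative sketch (function name and parameters are ours, not from the study):

```python
import math

def mix_at_snr(signal, noise, snr_db):
    """Scale noise so the signal/noise mixture has the requested SNR in dB."""
    p_sig = sum(s * s for s in signal) / len(signal)    # mean signal power
    p_noise = sum(n * n for n in noise) / len(noise)    # mean noise power
    # Gain that brings noise power to p_sig / 10**(snr_db/10).
    gain = math.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return [s + gain * n for s, n in zip(signal, noise)]

# At 0 dB SNR, signal and noise end up with equal power in the mixture.
mixed = mix_at_snr([2.0, 2.0, 2.0, 2.0], [1.0, -1.0, 1.0, -1.0], 0.0)
```

Improving the SNR in such a paradigm simply means raising `snr_db`, which shrinks the noise gain while leaving the speech token untouched.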
Affiliation(s)
- Curtis J Billings
- National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, Oregon; Department of Otolaryngology, Oregon Health & Science University, Portland, Oregon
- Leslie D Grush
- National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, Oregon
- Nashrah Maamor
- National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, Oregon; Audiology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, The National University of Malaysia, Kuala Lumpur, Malaysia
|
41
|
Uhler KM, Hunter SK, Tierney E, Gilley PM. The relationship between mismatch response and the acoustic change complex in normal hearing infants. Clin Neurophysiol 2018; 129:1148-1160. [PMID: 29635099 DOI: 10.1016/j.clinph.2018.02.132] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2016] [Revised: 02/19/2018] [Accepted: 02/24/2018] [Indexed: 11/27/2022]
Abstract
OBJECTIVE To examine the utility of the mismatch response (MMR) and acoustic change complex (ACC) for assessing speech discrimination in infants. METHODS Continuous EEG was recorded during sleep from 48 normal-hearing infants (24 male, 20 female) aged 1.77 to 4.57 months in response to two auditory discrimination tasks. The ACC was recorded in response to a three-vowel sequence (/i/-/a/-/i/). The MMR was recorded in response to a standard vowel, /a/ (probability 85%), and a deviant vowel, /i/ (probability 15%). A priori comparisons included age, sex, and sleep state, and were conducted separately for each of three bandpass filter settings (1-18, 1-30, and 1-40 Hz). RESULTS A priori tests revealed no differences in the MMR or ACC for age, sex, or sleep state at any of the three filter settings. ACC and MMR responses were prominently observed in all 44 sleeping infants included in the analysis (data from four infants were excluded). For the ACC, significant differences were observed at the onset and offset of the stimuli; however, neither group nor individual differences were observed in response to the changes in speech stimuli. The MMR revealed two prominent peaks, occurring at the stimulus onset and at the stimulus offset. Permutation t-tests revealed significant differences between the standard and deviant stimuli for both the onset and offset MMR peaks (p < 0.01). The 1-18 Hz filter setting revealed significant differences for all participants in the MMR paradigm. CONCLUSION Both ACC and MMR responses were observed to auditory stimulation, suggesting that infants perceive and process speech information even during sleep. Significant differences between the standard and deviant responses were observed in the MMR, but not the ACC, paradigm. These findings suggest that the MMR is sensitive to auditory/speech discrimination processing. SIGNIFICANCE This study showed that the MMR can be used to identify discrimination in normal-hearing infants, suggesting that it has potential for use in infants with hearing loss to validate hearing aid fittings.
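The three bandpass settings (1-18, 1-30, and 1-40 Hz) control how much higher-frequency activity survives into the averaged response. Purely as an illustration of this kind of filtering, and not the study's actual pipeline (which would typically use standard EEG analysis software), a crude band-pass can be assembled from first-order low-pass stages:

```python
import math

def one_pole_lowpass(x, fc, fs):
    """First-order IIR low-pass with cutoff fc (Hz) at sampling rate fs."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for s in x:
        state = (1 - a) * s + a * state
        y.append(state)
    return y

def bandpass(x, lo, hi, fs):
    """Crude band-pass: low-pass at hi, then remove content below lo by
    subtracting a low-pass of the result (i.e., a high-pass stage)."""
    lp = one_pole_lowpass(x, hi, fs)
    return [v - w for v, w in zip(lp, one_pole_lowpass(lp, lo, fs))]
```

With a 1 Hz lower edge, a constant (DC) offset in the EEG is driven toward zero, while slow evoked components within the 1-18 Hz band are passed through; widening the upper edge to 30 or 40 Hz retains progressively faster activity.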
Affiliation(s)
- Kristin M Uhler
- University of Colorado Denver, Departments of Physical Medicine and Rehabilitation, Otolaryngology, and Psychiatry, Children's Hospital Colorado, Aurora, CO, USA
- Sharon K Hunter
- University of Colorado Denver, Departments of Psychiatry and Pediatrics, Aurora, CO, USA
- Elyse Tierney
- University of Colorado Denver, Departments of Psychiatry and Pediatrics, Aurora, CO, USA
- Phillip M Gilley
- University of Colorado, Boulder, Institute of Cognitive Science, Neurodynamics Laboratory, Boulder, CO, USA
|
42
|
Tan CT, Martin BA, Svirsky MA. A potential neurophysiological correlate of electric-acoustic pitch matching in adult cochlear implant users: Pilot data. Cochlear Implants Int 2018; 19:198-209. [PMID: 29508662 DOI: 10.1080/14670100.2018.1442126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
The overall goal of this study was to identify an objective physiological correlate of electric-acoustic pitch matching in unilaterally implanted cochlear implant (CI) participants with residual hearing in the non-implanted ear. Electrical and acoustic stimuli were presented in a continuously alternating fashion across ears. The acoustic stimulus and the electrical stimulus were either matched or mismatched in pitch. Auditory evoked potentials were obtained from nine CI users. Results indicated that N1 latency was stimulus-dependent, decreasing when the acoustic frequency of the tone presented to the non-implanted ear was increased. More importantly, there was an additional decrease in N1 latency in the pitch-matched condition. These results indicate the potential utility of N1 latency as an index of pitch matching in CI users.
Affiliation(s)
- Chin-Tuan Tan
- Department of Electrical and Computer Engineering, School of Behavioral and Brain Science (Callier Center for Communication Disorders), University of Texas at Dallas, Richardson, TX, USA; Program in Speech-Language-Hearing Sciences and Program in Audiology, Graduate Center, City University of New York, New York, NY, USA
- Brett A Martin
- Program in Speech-Language-Hearing Sciences and Program in Audiology, Graduate Center, City University of New York, New York, NY, USA
- Mario A Svirsky
- Department of Otolaryngology, New York University, New York, NY, USA
|
43
|
Kang S, Woo J, Park H, Brown CJ, Hong SH, Moon IJ. Objective Test of Cochlear Dead Region: Electrophysiologic Approach using Acoustic Change Complex. Sci Rep 2018; 8:3645. [PMID: 29483598 PMCID: PMC5832147 DOI: 10.1038/s41598-018-21754-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2017] [Accepted: 02/09/2018] [Indexed: 11/09/2022] Open
Abstract
The goal of this study was to develop an objective, neurophysiologic method of identifying the presence of a cochlear dead region (CDR) by combining acoustic change complex (ACC) responses with the threshold-equalizing noise (TEN) test. The first experiment aimed to confirm whether the ACC could be evoked with TEN stimuli and to optimize the test conditions; the second aimed to determine whether the TEN-ACC test is capable of detecting CDR(s). ACC responses were successfully recorded from all study participants. Behaviorally and electrophysiologically obtained masked thresholds (TEN threshold and TEN-ACC threshold) were similar in normal-hearing (NH) listeners, falling below 10 and 12 dB SNR, respectively. Hearing-impaired (HI) listeners were divided into non-CDR and CDR groups based on the behavioral TEN test. For the non-CDR group, TEN-ACC thresholds were below 12 dB SNR, similar to NH listeners. For the CDR group, however, TEN-ACC thresholds were significantly higher (≥12 dB SNR) than those in the NH and non-CDR groups, indicating that CDR(s) can be objectively detected using the ACC. These results demonstrate that it is possible to detect the presence of a CDR using an electrophysiologic method.
Affiliation(s)
- Soojin Kang
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea; School of Electrical Engineering, Biomedical Engineering, University of Ulsan, Ulsan, Korea
- Jihwan Woo
- School of Electrical Engineering, Biomedical Engineering, University of Ulsan, Ulsan, Korea
- Heesung Park
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Carolyn J Brown
- Departments of Speech Pathology and Audiology, University of Iowa, Iowa City, Iowa, USA
- Sung Hwa Hong
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon, Korea
- Il Joon Moon
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
|
44
|
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding. J Neurosci 2017; 38:1835-1849. [PMID: 29263241 DOI: 10.1523/jneurosci.1566-17.2017] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 11/17/2017] [Accepted: 12/08/2017] [Indexed: 11/21/2022] Open
Abstract
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex.SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). 
Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator).
|
45
|
Wagner M, Shafer VL, Haxhari E, Kiprovski K, Behrmann K, Griffiths T. Stability of the Cortical Sensory Waveforms, the P1-N1-P2 Complex and T-Complex, of Auditory Evoked Potentials. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:2105-2115. [PMID: 28679003 PMCID: PMC5831095 DOI: 10.1044/2017_jslhr-h-16-0056] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2016] [Revised: 07/18/2016] [Accepted: 02/21/2017] [Indexed: 06/07/2023]
Abstract
Purpose Atypical cortical sensory waveforms reflecting impaired encoding of auditory stimuli may result from inconsistency in cortical response to the acoustic feature changes within spoken words. Thus, the present study assessed intrasubject stability of the P1-N1-P2 complex and T-complex to multiple productions of spoken nonwords in 48 adults to provide benchmarks for future studies probing auditory processing deficits. Method Response trials were split (split epoch averages) for each of 4 word types for each subject and compared for similarity in waveform morphology. Waveform morphology association was assessed between 50 and 600 ms, the time frame reflecting spectro-temporal feature processing for the stimuli used in the study. Results Using approximately 70 trials in each split epoch, the P1-N1-P2 complex was found to be highly stable, with high positive associations found for all subjects for at least 3 word types. The T-complex was more variable, with high positive associations found for all subjects to at least 1 word type. Conclusions The P1-N1-P2 split epochs at group and individual levels and the T-complex at group level can be used to assess consistency of neural response in individuals with auditory processing deficits. The T-complex relative to the P1-N1-P2 complex in individuals can provide information pertaining to phonological processing.
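The split-epoch procedure described above amounts to averaging two halves of the trials separately and correlating the two mean waveforms. A minimal sketch, assuming an odd/even split and Pearson correlation (the study's exact splitting and association metric may differ):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length waveforms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_stability(trials):
    """Average odd- vs. even-numbered trials and correlate the two means.

    `trials` is a list of single-trial waveforms (lists of samples).
    """
    odd = [t for i, t in enumerate(trials) if i % 2]
    even = [t for i, t in enumerate(trials) if not i % 2]
    mean = lambda group: [sum(col) / len(group) for col in zip(*group)]
    return pearson_r(mean(odd), mean(even))
```

A highly stable component (like the P1-N1-P2 here) yields a correlation near 1 between the two half-averages; a more variable component (like the T-complex) yields lower values.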
|
46
|
Liang C, Earl B, Thompson I, Whitaker K, Cahn S, Xiang J, Fu QJ, Zhang F. Musicians Are Better than Non-musicians in Frequency Change Detection: Behavioral and Electrophysiological Evidence. Front Neurosci 2016; 10:464. [PMID: 27826221 PMCID: PMC5078501 DOI: 10.3389/fnins.2016.00464] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2016] [Accepted: 09/27/2016] [Indexed: 11/13/2022] Open
Abstract
Objective: The objectives of this study were: (1) to determine whether musicians have a better ability to detect frequency changes under quiet and noisy conditions; (2) to use the acoustic change complex (ACC), a type of electroencephalographic (EEG) response, to understand the neural substrates of the musician vs. non-musician difference in frequency change detection abilities. Methods: Twenty-four young normal-hearing listeners (12 musicians and 12 non-musicians) participated. All participants underwent psychoacoustic frequency detection tests with three types of stimuli: tones (base frequency at 160 Hz) containing frequency changes (Stim 1), tones containing frequency changes masked by low-level noise (Stim 2), and tones containing frequency changes masked by high-level noise (Stim 3). The EEG data were recorded using tones (base frequencies at 160 and 1200 Hz, respectively) containing different magnitudes of frequency change (0, 5, and 50%, respectively). The late-latency auditory evoked potential elicited by the onset of the tones (onset LAEP, or N1-P2 complex) and that elicited by the frequency change contained in the tone (the acoustic change complex, ACC, or N1′-P2′ complex) were analyzed. Results: Musicians significantly outperformed non-musicians in all stimulus conditions. The ACC and onset LAEP showed both similarities and differences. Increasing the magnitude of the frequency change resulted in increased ACC amplitudes. ACC measures differed significantly between musicians (larger P2′ amplitude) and non-musicians for the base frequency of 160 Hz but not 1200 Hz. Although the peak amplitude of the onset LAEP appeared larger and its latency shorter in musicians than in non-musicians, the difference did not reach statistical significance. The amplitude of the onset LAEP is significantly correlated with that of the ACC for the base frequency of 160 Hz.
Conclusion: The present study demonstrated that musicians do perform better than non-musicians in detecting frequency changes in quiet and noisy conditions. The ACC and onset LAEP may involve different but overlapping neural mechanisms. Significance: This is the first study using the ACC to examine music-training effects. The ACC measures provide an objective tool for documenting musical training effects on frequency detection.
Affiliation(s)
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Brian Earl
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Ivy Thompson
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Kayla Whitaker
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Steven Cahn
- Department of Composition, Musicology, and Theory, College-Conservatory of Music, University of Cincinnati, Cincinnati, OH, USA
- Jing Xiang
- Department of Pediatrics and Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California, Los Angeles, Los Angeles, CA, USA
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
|
47
|
Abstract
OBJECTIVES Nonlinear frequency compression is a signal processing technique used to increase the audibility of high-frequency speech sounds for hearing aid users with sloping, high-frequency hearing loss. However, excessive compression ratios may reduce spectral contrast between sounds and negatively impact speech perception. This is of particular concern in infants and young children who may not be able to provide feedback about frequency compression settings. This study explores the use of an objective cortical auditory evoked potential that is sensitive to changes in spectral contrast, the acoustic change complex (ACC), in the verification of frequency compression parameters. DESIGN ACC responses were recorded from adult listeners to a spectral ripple contrast stimulus that was processed using a range of frequency compression ratios (1:1, 1.5:1, 2:1, 3:1, and 4:1). Vowel identification, consonant identification, speech recognition in noise (QuickSIN), and behavioral ripple discrimination thresholds were also measured under identical frequency compression conditions. In Experiment 1, these tasks were completed in 10 adults with normal hearing. In Experiment 2, these same tasks were repeated in 10 adults with sloping, high-frequency hearing loss. RESULTS Repeated measures analysis of variance was completed for each task and each group with frequency compression ratio as the within-subjects factor. Increasing the compression ratio did not affect vowel identification for the normal hearing group but did cause a significant decrease in vowel identification for the hearing-impaired listeners. Increases in compression ratio were associated with significant decrements in ACC amplitudes, consonant identification scores, ripple discrimination thresholds, and speech perception in noise scores for both groups of listeners. CONCLUSIONS The ACC response, like speech and nonspeech perceptual measures, is sensitive to frequency compression ratio. 
Additional study is needed to establish optimal stimulus and recording parameters for the clinical application of this measure in the verification of hearing aid frequency compression settings.
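Nonlinear frequency compression is commonly formulated as a log-domain remapping of input frequencies above a cutoff: with a 2:1 ratio, each component's distance above the cutoff (in octaves) is halved. The sketch below assumes that common formulation, with a hypothetical 2 kHz cutoff; the study's actual hearing-aid algorithm and parameters are not specified here:

```python
def compress_frequency(f, cutoff=2000.0, ratio=2.0):
    """Map an input frequency f (Hz) to its compressed output frequency.

    Frequencies at or below the cutoff are unchanged; above it, the
    log-frequency distance from the cutoff is divided by `ratio`.
    """
    if f <= cutoff:
        return f
    return cutoff * (f / cutoff) ** (1.0 / ratio)

# With ratio 2:1, a component two octaves above the 2 kHz cutoff (8 kHz)
# lands one octave above it (4 kHz).
```

This makes concrete why excessive ratios reduce spectral contrast, as the abstract notes: at 4:1, inputs at 4 kHz and 8 kHz map to roughly 2.4 kHz and 2.8 kHz, so formerly well-separated high-frequency cues crowd into a narrow output band.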
|
48
|
Elicitation of the Acoustic Change Complex to Long-Duration Speech Stimuli in Four-Month-Old Infants. Int J Otolaryngol 2015; 2015:562030. [PMID: 26798343 PMCID: PMC4700181 DOI: 10.1155/2015/562030] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2015] [Accepted: 11/26/2015] [Indexed: 12/21/2022] Open
Abstract
The acoustic change complex (ACC) is an auditory-evoked potential elicited to changes within an ongoing stimulus that indicates discrimination at the level of the auditory cortex. Only a few studies to date have attempted to record ACCs in young infants. The purpose of the present study was to investigate the elicitation of ACCs to long-duration speech stimuli in English-learning 4-month-old infants. ACCs were elicited to consonant contrasts made up of two concatenated speech tokens. The stimuli included native dental-dental /dada/ and dental-labial /daba/ contrasts and a nonnative Hindi dental-retroflex /daDa/ contrast. Each consonant-vowel speech token was 410 ms in duration. Slow cortical responses were recorded to the onset of the stimulus and to the acoustic change from /da/ to either /ba/ or /Da/ within the stimulus with significantly prolonged latencies compared with adults. ACCs were reliably elicited for all stimulus conditions with more robust morphology compared with our previous findings using stimuli that were shorter in duration. The P1 amplitudes elicited to the acoustic change in /daba/ and /daDa/ were significantly larger compared to /dada/ supporting that the brain discriminated between the speech tokens. These findings provide further evidence for the use of ACCs as an index of discrimination ability.
|
49
|
Kim JR. Acoustic Change Complex: Clinical Implications. J Audiol Otol 2015; 19:120-4. [PMID: 26771009 PMCID: PMC4704548 DOI: 10.7874/jao.2015.19.3.120] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2015] [Revised: 11/16/2015] [Accepted: 11/18/2015] [Indexed: 11/22/2022] Open
Abstract
The acoustic change complex (ACC) is a cortical auditory evoked potential elicited in response to a change in an ongoing sound. The characteristics and potential clinical implications of the ACC are reviewed in this article. The P1-N1-P2 recorded from the auditory cortex following presentation of an acoustic stimulus is believed to reflect the neural encoding of a sound signal, but this provides no information regarding sound discrimination. However, the neural processing underlying behavioral discrimination capacity can be measured by modifying the traditional methodology for recording the P1-N1-P2. When obtained in response to an acoustic change within an ongoing sound, the resulting waveform is referred to as the ACC. When elicited, the ACC indicates that the brain has detected changes within a sound and the patient has the neural capacity to discriminate the sounds. In fact, results of several studies have shown that the ACC amplitude increases with increasing magnitude of acoustic changes in intensity, spectrum, and gap duration. In addition, the ACC can be reliably recorded with good test-retest reliability not only from listeners with normal hearing but also from individuals with hearing loss, hearing aids, and cochlear implants. The ACC can be obtained even in the absence of attention, and requires relatively few stimulus presentations to record a response with a good signal-to-noise ratio. Most importantly, the ACC shows reasonable agreement with behavioral measures. Therefore, these findings suggest that the ACC might represent a promising tool for the objective clinical evaluation of auditory discrimination and/or speech perception capacity.
Affiliation(s)
- Jae-Ryong Kim
- Department of Otolaryngology-Head and Neck Surgery, Busan Paik Hospital, Inje University College of Medicine, Busan, Korea
|
50
|
Representation of spectro-temporal features of spoken words within the P1-N1-P2 and T-complex of the auditory evoked potentials (AEP). Neurosci Lett 2015; 614:119-26. [PMID: 26700876 DOI: 10.1016/j.neulet.2015.12.020] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2015] [Revised: 10/22/2015] [Accepted: 12/10/2015] [Indexed: 01/10/2023]
Abstract
The purpose of the study was to determine whether P1-N1-P2 and T-complex morphology reflects spectro-temporal features within spoken words that approximate the natural variation of a speaker, and whether waveform morphology is reliable at the group and individual levels, as required for probing auditory deficits. The P1-N1-P2 and T-complex to the syllables /pət/ and /sət/, each within 70 natural word productions, were examined. EEG was recorded while participants heard nonsense word pairs and performed a syllable identification task on the second word in each pair. Single-trial auditory evoked potentials (AEPs) to the first words were analyzed. Results showed that the P1-N1-P2 and T-complex reflect spectral and temporal feature processing, and identified preliminary benchmarks for single-trial response variability in individual subjects for sensory processing between 50 and 600 ms. The P1-N1-P2 and T-complex, at least at the group level, may serve as phenotypic signatures to identify deficits in spectro-temporal feature recognition and to determine the area of deficit: the superior temporal plane or the lateral superior temporal gyrus.
|