1. Delcenserie A, Genesee F, Champoux F. Exposure to sign language prior and after cochlear implantation increases language and cognitive skills in deaf children. Dev Sci 2024:e13481. PMID: 38327110. DOI: 10.1111/desc.13481.
Abstract
Recent evidence suggests that deaf children with cochlear implants (CIs) exposed to nonnative sign language from hearing parents can attain age-appropriate vocabularies in both sign and spoken language. It remains to be explored whether deaf children with CIs who are exposed to early nonnative sign language, but only up to implantation, also benefit from this input, and whether these benefits extend to memory abilities, which are strongly linked to language development. The present study examined the impact of deaf children's early short-term exposure to nonnative sign input on their spoken language and phonological memory abilities. Deaf children who had been exposed to nonnative sign input before and after cochlear implantation were compared to deaf children who never had any exposure to sign input, as well as to children with typical hearing. The children were between 5;1 and 7;1 years of age at the time of testing and were matched on age, sex, and socioeconomic status. The results suggest that even short-term exposure to nonnative sign input has positive effects on general language and phonological memory abilities as well as on nonverbal working memory, with total length of exposure to sign input being the best predictor of deaf children's performance on these measures. The present data suggest that even early short-term access to nonnative visual language input benefits the language and phonological memory abilities of deaf children with CIs, and that parents should therefore not be discouraged from learning sign language and exposing their child to it.
Research highlights:
- This is the first study to examine the effects of early short-term exposure to nonnative sign input on the spoken language and memory abilities of French-speaking children with cochlear implants.
- Early short-term nonnative exposure to sign input can have positive consequences for the language and phonological memory abilities of deaf children with CIs.
- Extended exposure to sign input has some additional and important benefits, allowing children to perform on par with children with typical hearing.
Affiliation(s)
- A Delcenserie
- Université de Montréal, Québec, Canada
- School of Speech-Language Pathology and Audiology, Université de Montréal, Québec, Canada
- F Champoux
- School of Speech-Language Pathology and Audiology, Université de Montréal, Québec, Canada
2. The Acoustic Change Complex Compared to Hearing Performance in Unilaterally and Bilaterally Deaf Cochlear Implant Users. Ear Hear 2022; 43:1783-1799. PMID: 35696186. PMCID: PMC9592183. DOI: 10.1097/aud.0000000000001248.
Abstract
Objectives: Clinical measures evaluating hearing performance in cochlear implant (CI) users depend on attention and linguistic skills, which limits the evaluation of auditory perception in some patients. The acoustic change complex (ACC), a cortical auditory evoked potential to a sound change, might yield useful objective measures to assess hearing performance and could provide insight into cortical auditory processing. The aim of this study was to examine the ACC in response to frequency changes as an objective measure of hearing performance in CI users.
Design: Thirteen bilaterally deaf and six single-sided deaf subjects were included, all having used a unilateral CI for at least 1 year. Speech perception was tested with a consonant-vowel-consonant test (+10 dB signal-to-noise ratio) and a digits-in-noise test. Frequency discrimination thresholds were measured at two reference frequencies, using a 3-interval, 2-alternative forced-choice, adaptive staircase procedure. The two reference frequencies were selected using each participant's frequency allocation table and were centered in the frequency band of an electrode that included 500 or 2000 Hz, corresponding to the apical electrode or the middle electrode, respectively. The ACC was evoked with pure tones of the same two reference frequencies with varying frequency increases: within the frequency band of the middle or the apical electrode (+0.25 electrode step), and steps to the center frequency of the first (+1), second (+2), and third (+3) adjacent electrodes.
Results: Reproducible ACCs were recorded in 17 out of 19 subjects. The most successful recordings were obtained with the largest frequency change (+3 electrode step). Larger frequency changes resulted in shorter N1 latencies and larger N1-P2 amplitudes. In both unilaterally and bilaterally deaf subjects, the N1 latency and N1-P2 amplitude of the CI ears correlated with speech perception as well as frequency discrimination; that is, short latencies and large amplitudes were indicative of better speech perception and better frequency discrimination. No significant differences in ACC latencies or amplitudes were found between the CI ears of the unilaterally and bilaterally deaf subjects, but the CI ears of the unilaterally deaf subjects showed substantially longer latencies and smaller amplitudes than their contralateral normal-hearing ears.
Conclusions: The ACC latency and amplitude evoked by tone frequency changes correlate well with the frequency discrimination and speech perception capabilities of CI users. For patients unable to reliably perform behavioral tasks, the ACC could be of added value in assessing hearing performance.
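The 2-alternative forced-choice adaptive staircase used for the frequency discrimination measurements above can be sketched as follows. This is a minimal illustration only: the 2-down/1-up rule, the step-halving schedule, and the simulated listener are assumptions for demonstration, not the authors' exact protocol.

```python
import random

def two_down_one_up(start=10.0, step=2.0, min_step=0.25,
                    n_reversals=8, respond=None):
    """Minimal 2-down/1-up adaptive staircase (tracks ~70.7% correct).

    `respond(level)` returns True for a correct trial. The step size is
    halved at each reversal down to `min_step`; the threshold estimate
    is the mean level over the final reversals.
    """
    level, correct_in_row, direction = start, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:          # two correct in a row -> harder
                correct_in_row = 0
                if direction == "up":        # turning point going down
                    reversals.append(level)
                    step = max(min_step, step / 2)
                direction = "down"
                level = max(min_step, level - step)
        else:                                # one wrong -> easier
            correct_in_row = 0
            if direction == "down":          # turning point going up
                reversals.append(level)
                step = max(min_step, step / 2)
            direction = "up"
            level += step
    return sum(reversals[-4:]) / 4

# Hypothetical listener whose true threshold is a 4% frequency change;
# chance performance in 2AFC is 50%, approaching 100% for large changes.
def simulated_listener(level, true_threshold=4.0):
    p_correct = 0.5 + 0.5 / (1 + (true_threshold / max(level, 1e-9)) ** 4)
    return random.random() < p_correct

random.seed(1)
estimate = two_down_one_up(respond=simulated_listener)
```

With a real participant, `respond` would present the 3-interval trial at the given frequency-change size and record the button press; here the simulated listener simply lets the sketch run end to end.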
3. McGuire K, Firestone GM, Zhang N, Zhang F. The Acoustic Change Complex in Response to Frequency Changes and Its Correlation to Cochlear Implant Speech Outcomes. Front Hum Neurosci 2021; 15:757254. PMID: 34744668. PMCID: PMC8566680. DOI: 10.3389/fnhum.2021.757254.
Abstract
One of the biggest challenges facing cochlear implant (CI) users is the highly variable hearing outcome of implantation across patients. Since speech perception requires the detection of various dynamic changes in acoustic features (e.g., frequency, intensity, timing) of speech sounds, it is critical to examine CI users' ability to detect within-stimulus acoustic changes. The primary objective of this study was to examine the auditory event-related potential (ERP) evoked by within-stimulus frequency changes (F-changes), one type of acoustic change complex (ACC), in adult CI users, and its correlation with speech outcomes. Twenty-one adult CI users (29 individual CI ears) were tested with psychoacoustic frequency change detection tasks; speech tests including Consonant-Nucleus-Consonant (CNC) word recognition, Arizona Biomedical Sentence Recognition in quiet and noise (AzBio-Q and AzBio-N), and Digits-in-Noise (DIN); and electroencephalographic (EEG) recordings. The stimuli for the psychoacoustic tests and EEG recordings were pure tones at three base frequencies (0.25, 1, and 4 kHz) that contained an F-change at the midpoint of the tone. Results showed that the frequency change detection threshold (FCDT), ACC N1' latency, and P2' latency did not differ across base frequencies (p > 0.05). The ACC N1'-P2' amplitude was significantly larger for 0.25 kHz than for the other base frequencies (p < 0.05). The mean N1' latency across the three base frequencies was negatively correlated with CNC word recognition (r = -0.40, p < 0.05) and CNC phoneme score (r = -0.40, p < 0.05), and positively correlated with mean FCDT (r = 0.46, p < 0.05). The P2' latency was positively correlated with DIN (r = 0.47, p < 0.05) and mean FCDT (r = 0.47, p < 0.05). There was no statistically significant correlation between N1'-P2' amplitude and speech outcomes (all ps > 0.05). These results indicate that variability in CI speech outcomes assessed with the CNC, AzBio-Q, and DIN tests can be partially explained (approximately 16-21%) by variability in the cortical sensory encoding of F-changes reflected by the ACC.
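The "approximately 16-21%" variance figure follows from squaring the reported correlation coefficients (the coefficient of determination, r², gives the proportion of shared variance). A quick check against the r values reported above:

```python
# r**2, as a percentage, for the significant correlations reported above.
correlations = {
    "N1' latency vs. CNC word recognition": -0.40,
    "N1' latency vs. mean FCDT": 0.46,
    "P2' latency vs. DIN": 0.47,
}
variance_explained = {name: round(100 * r ** 2, 1)
                      for name, r in correlations.items()}
# |r| = 0.40 -> 16.0%; |r| = 0.46 -> 21.2%; |r| = 0.47 -> 22.1%
```

The sign of r drops out when squaring, so the negative latency correlations explain the same share of variance as positive ones of equal magnitude.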
Affiliation(s)
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Gabrielle M. Firestone
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang
- Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
4. Fletcher MD. Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners? Front Neurosci 2021; 15:723877. PMID: 34531717. PMCID: PMC8439542. DOI: 10.3389/fnins.2021.723877.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. 
Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
5. Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. PMID: 34177440. PMCID: PMC8219940. DOI: 10.3389/fnins.2021.581414.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users. There is also evidence that it could enhance music perception. We review the evidence of EHS enhancement of CI listening and discuss key areas where further research is required. These include understanding the neural basis of EHS enhancement, understanding the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, either because of biomedical or healthcare-access issues. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
- Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
6. Moïn-Darbari K, Lafontaine L, Maheu M, Bacon BA, Champoux F. Vestibular status: A missing factor in our understanding of brain reorganization in deaf individuals. Cortex 2021; 138:311-317. PMID: 33784514. DOI: 10.1016/j.cortex.2021.02.012.
Abstract
The brain of deaf people is definitely not just deaf, and we have to reconsider what we know about the impact of hearing loss on brain development in light of comorbid vestibular impairments.
Affiliation(s)
- K Moïn-Darbari
- École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
- L Lafontaine
- École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
- M Maheu
- École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada
- B A Bacon
- Department of Psychology, Carleton University, Ottawa, Ontario, Canada
- F Champoux
- École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
7. Firestone GM, McGuire K, Liang C, Zhang N, Blankenship CM, Xiang J, Zhang F. A Preliminary Study of the Effects of Attentive Music Listening on Cochlear Implant Users' Speech Perception, Quality of Life, and Behavioral and Objective Measures of Frequency Change Detection. Front Hum Neurosci 2020; 14:110. PMID: 32296318. PMCID: PMC7136537. DOI: 10.3389/fnhum.2020.00110.
Abstract
Introduction: Most cochlear implant (CI) users have difficulty in listening tasks that rely strongly on perception of frequency changes (e.g., speech perception in noise, musical melody perception). Some previous studies using behavioral or subjective assessments have shown that short-term music training can benefit CI users' perception of music and speech. Electroencephalographic (EEG) recordings may reveal the neural basis for music training benefits in CI users.
Objective: To examine the effects of short-term music training on CI hearing outcomes using a comprehensive test battery of subjective evaluation, behavioral tests, and EEG measures.
Design: Twelve adult CI users were recruited for a home-based music training program that focused on attentive listening to music genres and materials with an emphasis on melody. The participants used a music streaming program (Pandora) downloaded onto personal electronic devices for training, listening attentively through a direct audio cable or through Bluetooth streaming. The training schedule was 40 min/session/day, 5 days/week, for either 4 or 8 weeks. The pre-training and post-training tests included: hearing thresholds; the Speech, Spatial and Qualities of Hearing Scale (SSQ12) questionnaire; psychoacoustic tests of frequency change detection threshold (FCDT); speech recognition tests (CNC words, AzBio sentences, and QuickSIN); and EEG responses to tones containing different magnitudes of frequency changes.
Results: All participants except one finished the 4- or 8-week training, a dropout rate of 8.33%. Eleven participants performed all tests, except for two who did not participate in the EEG tests. Results showed a significant improvement in FCDTs as well as in performance on CNC and QuickSIN after training (p < 0.05), but no significant improvement in SSQ scores (p > 0.05). The EEG tests showed larger post-training cortical auditory evoked potentials (CAEPs) in seven of the nine participants, suggesting better cortical processing of both stimulus onset and within-stimulus frequency changes.
Conclusion: These preliminary data suggest that extensive, focused music listening can improve frequency perception and speech perception in CI users. Further studies with a larger sample size and control groups are warranted to determine the efficacy of short-term music training in CI users.
Affiliation(s)
- Gabrielle M Firestone
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang
- Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Chelsea M Blankenship
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Jing Xiang
- Department of Pediatrics and Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
8. Jeddi Z, Lotfi Y, Moossavi A, Bakhshi E, Hashemi SB. Correlation between Auditory Spectral Resolution and Speech Perception in Children with Cochlear Implants. Iran J Med Sci 2019; 44:382-389. PMID: 31582862. PMCID: PMC6754529. DOI: 10.30476/ijms.2019.44967.
Abstract
Background: Variability in speech performance is a major concern for children with cochlear implants (CIs). Spectral resolution is an important acoustic component in speech perception. Considerable variability and limitations of spectral resolution in children with CIs may lead to individual differences in speech performance. The aim of this study was to assess the correlation between auditory spectral resolution and speech perception in pediatric CI users.
Methods: This cross-sectional study was conducted in Shiraz, Iran, in 2017. The frequency discrimination threshold (FDT) and the spectral-temporal modulated ripple discrimination threshold (SMRT) were measured for 75 pre-lingual hearing-impaired children with CIs (age=8-12 y). Word recognition and sentence perception tests were completed to assess speech perception. The Pearson correlation analysis and multiple linear regression analysis were used to determine the correlation between the variables and to determine the predictive variables of speech perception, respectively.
Results: There was a significant correlation between the SMRT and word recognition (r=0.573, P<0.001). The FDT was also significantly correlated with word recognition (r=0.487, P<0.001). Sentence perception correlated significantly with both the SMRT and the FDT. Chronological age and age at implantation each correlated significantly with the SMRT but not with the FDT.
Conclusion: Auditory spectral resolution correlated well with speech perception among our children with CIs. Spectral resolution ability accounted for approximately 40% of the variance in speech perception among the children with CIs.
Affiliation(s)
- Zahra Jeddi
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Younes Lotfi
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Abdollah Moossavi
- Department of Otolaryngology and Head and Neck Surgery, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Enayatollah Bakhshi
- Department of Biostatistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Seyed Basir Hashemi
- Department of Otolaryngology, Khalili Hospital, Shiraz University of Medical Sciences, Shiraz, Iran
9. Zhang F, Underwood G, McGuire K, Liang C, Moore DR, Fu QJ. Frequency change detection and speech perception in cochlear implant users. Hear Res 2019; 379:12-20. PMID: 31035223. DOI: 10.1016/j.heares.2019.04.007.
Abstract
Dynamic frequency changes in sound provide critical cues for speech perception. Most previous studies examining frequency discrimination in cochlear implant (CI) users have employed behavioral tasks in which target and reference tones (differing in frequency) are presented statically in separate time intervals, and participants identify the target frequency by comparing stimuli across those intervals. However, perceiving dynamic frequency changes in speech requires detection of within-interval frequency change. This study explored the relationship between detection of within-interval frequency changes and the speech perception performance of CI users. Frequency change detection thresholds (FCDTs) were measured in 20 adult CI users using a 3-alternative forced-choice (3AFC) procedure. Stimuli were 1-s pure tones (base frequencies of 0.25, 1, and 4 kHz) with frequency changes occurring 0.5 s after tone onset. Speech tests were 1) Consonant-Nucleus-Consonant (CNC) monosyllabic word recognition, 2) Arizona Biomedical Sentence Recognition (AzBio) in quiet, 3) AzBio in noise (AzBio-N, +10 dB signal-to-noise ratio), and 4) Digits-in-Noise (DIN). Participants' subjective satisfaction with the CI was also obtained. Correlations between FCDTs and speech perception were all statistically significant. Satisfaction with CI use was not related to FCDTs after controlling for major demographic factors. DIN speech reception thresholds were significantly correlated with AzBio-N scores. The current findings suggest that the ability to detect within-interval frequency changes may play an important role in the speech perception performance of CI users. FCDT and DIN can serve as simple and rapid tests that require little or no linguistic background for the prediction of CI speech outcomes.
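A stimulus of the kind described above (a 1-s pure tone whose frequency steps up at the midpoint) can be generated as follows. The sampling rate and the 4% change magnitude are illustrative assumptions, not values taken from the paper; the key detail is keeping the phase continuous at the switch, since a phase jump would add an audible click that could cue the listener.

```python
import numpy as np

def tone_with_frequency_change(base_hz, change_pct, fs=44100,
                               dur=1.0, change_at=0.5):
    """Pure tone whose frequency steps up by `change_pct` percent at
    `change_at` seconds, with phase kept continuous at the switch."""
    t = np.arange(int(dur * fs)) / fs
    f2 = base_hz * (1 + change_pct / 100)
    # Integrate the instantaneous frequency to get a continuous phase.
    inst_freq = np.where(t < change_at, base_hz, f2)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)

# e.g. a 1-kHz tone with a hypothetical 4% upward change at 0.5 s:
stim = tone_with_frequency_change(1000, 4.0)
```

In a 3AFC trial, two intervals would contain steady tones and one would contain a stimulus like `stim`; the listener picks the interval that "changed".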
Affiliation(s)
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Gabrielle Underwood
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA; Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- David R Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, University of Cincinnati, Ohio, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California, Los Angeles, Los Angeles, CA, USA
10. Prévost F, Lehmann A.
Abstract
Cochlear implants restore hearing in deaf individuals, but speech perception remains challenging. Poor discrimination of spectral components is thought to account for limitations of speech recognition in cochlear implant users. We investigated how combined variations of spectral components along two orthogonal dimensions can maximize neural discrimination between two vowels, as measured by mismatch negativity. Adult cochlear implant users and matched normal-hearing listeners underwent electroencephalographic event-related potentials recordings in an optimum-1 oddball paradigm. A standard /a/ vowel was delivered in an acoustic free field along with stimuli having a deviant fundamental frequency (+3 and +6 semitones), a deviant first formant making it a /i/ vowel or combined deviant fundamental frequency and first formant (+3 and +6 semitones /i/ vowels). Speech recognition was assessed with a word repetition task. An analysis of variance between both amplitude and latency of mismatch negativity elicited by each deviant vowel was performed. The strength of correlations between these parameters of mismatch negativity and speech recognition as well as participants' age was assessed. Amplitude of mismatch negativity was weaker in cochlear implant users but was maximized by variations of vowels' first formant. Latency of mismatch negativity was later in cochlear implant users and was particularly extended by variations of the fundamental frequency. Speech recognition correlated with parameters of mismatch negativity elicited by the specific variation of the first formant. This nonlinear effect of acoustic parameters on neural discrimination of vowels has implications for implant processor programming and aural rehabilitation.
Affiliation(s)
- François Prévost
- Department of Speech Pathology and Audiology, McGill University Health Centre, Montreal, Quebec, Canada
- International Laboratory for Brain, Music & Sound Research, Montreal, Quebec, Canada
- Alexandre Lehmann
- International Laboratory for Brain, Music & Sound Research, Montreal, Quebec, Canada
- Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
- Centre for Research on Brain, Language & Music, Montreal, Quebec, Canada
11. Liang C, Houston LM, Samy RN, Abedelrehim LMI, Zhang F. Cortical Processing of Frequency Changes Reflected by the Acoustic Change Complex in Adult Cochlear Implant Users. Audiol Neurootol 2018; 23:152-164. PMID: 30300882. DOI: 10.1159/000492170.
Abstract
The purpose of this study was to examine neural substrates of frequency change detection in cochlear implant (CI) recipients using the acoustic change complex (ACC), a type of cortical auditory evoked potential elicited by acoustic changes in an ongoing stimulus. A psychoacoustic test and electroencephalographic recording were administered in 12 postlingually deafened adult CI users. The stimuli were pure tones containing different magnitudes of upward frequency changes. Results showed that the frequency change detection threshold (FCDT) was 3.79% in the CI users, with a large variability. The ACC N1' latency was significantly correlated with the FCDT and the clinically collected speech perception score. The results suggested that the ACC evoked by frequency changes can serve as a useful objective tool in assessing frequency change detection capability and predicting speech perception performance in CI users.
Affiliation(s)
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA
- Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Lisa M Houston
- Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, USA
- Ravi N Samy
- Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, USA
- Lamiaa Mohamed Ibrahim Abedelrehim
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA
- Audiology Department, Sohag Faculty of Medicine, Sohag University, Sohag, Egypt
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA
12. Zaltz Y, Goldsworthy RL, Kishon-Rabin L, Eisenberg LS. Voice Discrimination by Adults with Cochlear Implants: the Benefits of Early Implantation for Vocal-Tract Length Perception. J Assoc Res Otolaryngol 2018; 19:193-209. PMID: 29313147. PMCID: PMC5878152. DOI: 10.1007/s10162-017-0653-5.
Abstract
Cochlear implant (CI) users find it extremely difficult to discriminate between talkers, which may partially explain why they struggle to understand speech in a multi-talker environment. Recent studies, based on findings with postlingually deafened CI users, suggest that these difficulties may stem from their limited use of vocal-tract length (VTL) cues due to the degraded spectral resolution transmitted by the CI device. The aim of the present study was to assess the ability of adult CI users who had no prior acoustic experience, i.e., prelingually deafened adults, to discriminate between resynthesized "talkers" based on either fundamental frequency (F0) cues, VTL cues, or both. Performance was compared to individuals with normal hearing (NH), listening either to degraded stimuli, using a noise-excited channel vocoder, or non-degraded stimuli. Results show that (a) age of implantation was associated with VTL but not F0 cues in discriminating between talkers, with improved discrimination for those subjects who were implanted at earlier age; (b) there was a positive relationship for the CI users between VTL discrimination and speech recognition score in quiet and in noise, but not with frequency discrimination or cognitive abilities; (c) early-implanted CI users showed similar voice discrimination ability as the NH adults who listened to vocoded stimuli. These data support the notion that voice discrimination is limited by the speech processing of the CI device. However, they also suggest that early implantation may facilitate sensory-driven tonotopicity and/or improve higher-order auditory functions, enabling better perception of VTL spectral cues for voice discrimination.
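A noise-excited channel vocoder of the kind used above to degrade stimuli for the normal-hearing listeners can be sketched as follows. The channel count, band edges, filter orders, and envelope cutoff here are illustrative assumptions, not the study's parameters; the point is the technique: split the signal into bands, keep only each band's slow amplitude envelope, and use it to modulate band-limited noise, discarding within-band spectral detail much as a CI does.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(x, fs, n_channels=8, lo=100.0, hi=7000.0, env_cut=50.0):
    """Minimal noise-excited channel vocoder: log-spaced analysis bands,
    envelope extraction by rectification + low-pass filtering, and
    band-limited noise carriers modulated by those envelopes."""
    edges = np.geomspace(lo, hi, n_channels + 1)      # log-spaced band edges
    env_sos = butter(2, env_cut / (fs / 2), btype="low", output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1 / (fs / 2), f2 / (fs / 2)],
                          btype="band", output="sos")
        band = sosfiltfilt(band_sos, x)
        # Slow amplitude envelope of this band (rectify, then low-pass).
        env = np.maximum(sosfiltfilt(env_sos, np.abs(band)), 0.0)
        # Noise carrier limited to the same band, modulated by the envelope.
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out

# e.g. vocode a 1-s harmonic complex sampled at 16 kHz:
fs = 16000
t = np.arange(fs) / fs
signal_in = sum(np.sin(2 * np.pi * f * t) for f in (200, 400, 600))
vocoded = noise_vocoder(signal_in, fs)
```

Because only the per-band envelopes survive, fine spectral cues such as vocal-tract length resonances are smeared across bands, which is why vocoded listening in normal-hearing adults approximates the CI users' voice discrimination limits described above.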
Affiliation(s)
- Yael Zaltz
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel-Aviv, Israel
- USC Tina and Rick Caruso Department of Otolaryngology-Head & Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Raymond L Goldsworthy
- USC Tina and Rick Caruso Department of Otolaryngology-Head & Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Liat Kishon-Rabin
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel-Aviv, Israel
- Laurie S Eisenberg
- USC Tina and Rick Caruso Department of Otolaryngology-Head & Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA