1.
Deroche MLD, Wolfe J, Neumann S, Manning J, Towler W, Alemi R, Bien AG, Koirala N, Hanna L, Henry L, Gracco VL. Auditory evoked response to an oddball paradigm in children wearing cochlear implants. Clin Neurophysiol 2023; 149:133-145. PMID: 36965466. DOI: 10.1016/j.clinph.2023.02.179.
Abstract
OBJECTIVE Although children with cochlear implants (CIs) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs and age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS Auditory evoked responses differentiated children with CIs based on their good or poor language and literacy skills. SIGNIFICANCE This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
Affiliation(s)
- Mickael L D Deroche: Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Jace Wolfe: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Sara Neumann: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- William Towler: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Razieh Alemi: Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Alexander G Bien: University of Oklahoma College of Medicine, Otolaryngology, 800 Stanton L Young Blvd., Oklahoma City, OK 73117, USA
- Nabin Koirala: Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Lindsay Hanna: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Lauren Henry: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
2.
Image-Guided Cochlear Implant Programming: A Systematic Review and Meta-analysis. Otol Neurotol 2022; 43:e924-e935. PMID: 35973035. DOI: 10.1097/mao.0000000000003653.
Abstract
OBJECTIVE To review studies evaluating clinically implemented image-guided cochlear implant programming (IGCIP) and to determine its effect on cochlear implant (CI) performance. DATA SOURCES PubMed, EMBASE, and Google Scholar were searched for English-language publications from inception to August 1, 2021. STUDY SELECTION Included studies prospectively compared intraindividual CI performance between an image-guided experimental map and a patient's preferred traditional map. Non-English studies, cadaveric studies, and studies where imaging did not directly inform programming were excluded. DATA EXTRACTION Seven studies were identified for review, and five reported comparable components of audiological testing and follow-up times appropriate for meta-analysis. Demographic, speech, spectral modulation, pitch accuracy, and quality-of-life survey data were collected. Aggregate data were used when individual data were unavailable. DATA SYNTHESIS Audiological test outcomes were evaluated as standardized mean change (95% confidence interval) using random-effects meta-analysis with raw score standardization. Improvements in speech and quality-of-life measures using the IGCIP map demonstrated nominal effect sizes: consonant-nucleus-consonant words, 0.15 (-0.12 to 0.42); AzBio quiet, 0.09 (-0.05 to 0.22); AzBio +10 dB signal-to-noise ratio, 0.14 (-0.01 to 0.30); Bamford-Kowal-Bench sentences in noise, -0.11 (-0.35 to 0.12); Abbreviated Profile of Hearing Aid Benefit, -0.14 (-0.28 to 0.00); and Speech, Spatial and Qualities of Hearing Scale, 0.13 (-0.02 to 0.28). Nevertheless, 79% of patients allowed to keep their IGCIP map opted for continued use after the investigational period. CONCLUSION IGCIP has the potential to precisely guide CI programming. Nominal effect sizes for objective outcome measures fail to fully reflect subjective benefits, given their discordance with the percentage of patients who preferred to keep their IGCIP map.
3.
Application of Signals with Rippled Spectra as a Training Approach for Speech Intelligibility Improvements in Cochlear Implant Users. J Pers Med 2022; 12:1426. PMID: 36143210. PMCID: PMC9503413. DOI: 10.3390/jpm12091426.
Abstract
In cochlear implant (CI) users, discrimination of sound signals with rippled spectra correlates with speech discrimination. We suggest that rippled-spectrum signals could form the basis of training for CI users to improve speech intelligibility. Fifteen CI users participated in the study: ten used the training software (the experimental group) and five did not (the control group). The software was based on discrimination of phase reversals in rippled spectra. The experimental group was also tested for speech discrimination using polysyllabic, phonetically balanced speech material. All CI users in the experimental group improved in discrimination of the rippled spectrum; there was no significant improvement in the control group. The speech discrimination test showed that the percentage of recognized words increased after training in nine of the ten CI users in the experimental group. For five of these CI users, word-recognition data were also available from an earlier session (at least eight months before training); the increase in the percentage of recognized words was greater after training than over the pre-training period. These results suggest that sound signals with rippled spectra could be used not only for assessing rehabilitation outcomes after CI but also for training CI users to discriminate sounds with complex spectra.
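For context, the "phase reversal of rippled spectra" task this study trains can be illustrated with a short synthesis sketch. This is a hypothetical minimal implementation, not the authors' software: the band edges, ripple density, and duration are illustrative, and the ripple is imposed sinusoidally on a log-frequency axis as in typical rippled-noise tests.

```python
import numpy as np

def rippled_noise(fs=44100, dur=0.5, f_lo=1000, f_hi=4000,
                  ripples_per_octave=2.0, reversed_phase=False, seed=0):
    """Band-limited noise whose amplitude spectrum carries sinusoidal
    ripples on a log-frequency axis. Phase reversal swaps spectral
    peaks and troughs, which is what the listener must detect."""
    n = int(fs * dur)
    rng = np.random.default_rng(seed)
    # Start from white noise and shape its spectrum in the frequency domain.
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    # Ripple phase in cycles along log2(frequency).
    phase = ripples_per_octave * np.log2(f[band] / f_lo)
    if reversed_phase:
        phase += 0.5  # half-cycle shift flips peaks and troughs
    envelope = 0.5 * (1 + np.cos(2 * np.pi * phase))  # full ripple depth
    shaped = np.zeros_like(spec)
    shaped[band] = spec[band] * envelope
    x = np.fft.irfft(shaped, n)
    return x / np.max(np.abs(x))  # normalize peak level

standard = rippled_noise()
reversed_ = rippled_noise(reversed_phase=True)
```

A discrimination trial would then present `standard` and `reversed_` (same noise seed, opposite ripple phase) and ask which interval differed; ripple density is raised until the listener can no longer tell them apart.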
4.
Jahn KN, Arenberg JG, Horn DL. Spectral Resolution Development in Children With Normal Hearing and With Cochlear Implants: A Review of Behavioral Studies. J Speech Lang Hear Res 2022; 65:1646-1658. PMID: 35201848. PMCID: PMC9499384. DOI: 10.1044/2021_jslhr-21-00307.
Abstract
PURPOSE This review article provides a theoretical overview of the development of spectral resolution in children with normal hearing (cNH) and in those who use cochlear implants (CIs), with an emphasis on methodological considerations. The aim was to identify key directions for future research on spectral resolution development in children with CIs. METHOD A comprehensive literature review was conducted to summarize and synthesize previously published behavioral research on spectral resolution development in normal and impaired auditory systems. CONCLUSIONS In cNH, performance on spectral resolution tasks continues to improve through the teenage years and is likely driven by gradual maturation of across-channel intensity resolution. A small but growing body of evidence from children with CIs suggests a more complex relationship between spectral resolution development, patient demographics, and the quality of the CI electrode-neuron interface. Future research should aim to distinguish between the effects of patient-specific variables and the underlying physiology on spectral resolution abilities in children of all ages who are hard of hearing and use auditory prostheses.
Affiliation(s)
- Kelly N. Jahn: Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson; Callier Center for Communication Disorders, The University of Texas at Dallas
- Julie G. Arenberg: Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA; Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston
- David L. Horn: Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle; Division of Otolaryngology, Seattle Children's Hospital, WA
5.
Winn MB, O'Brien G. Distortion of Spectral Ripples Through Cochlear Implants Has Major Implications for Interpreting Performance Scores. Ear Hear 2021; 43:764-772. PMID: 34966157. PMCID: PMC9010354. DOI: 10.1097/aud.0000000000001162.
Abstract
The spectral ripple discrimination task is a psychophysical measure that has been found to correlate with speech recognition in listeners with cochlear implants (CIs). However, at ripple densities above a critical value (around 2 ripples per octave [RPO], though device-specific), the sparse spectral sampling of CI processors distorts the stimulus, producing aliasing and unintended changes in modulation depth. As a result, spectral ripple thresholds above this limit are not ordered monotonically along the RPO dimension and cannot be interpreted as better or worse spectral resolution relative to one another, which undermines correlation measurements. These distortions are not remediated by changing stimulus phase, indicating that the problem cannot be solved by spectrotemporally modulated stimuli. Speech generally has very low-density spectral modulations, raising questions about the mechanism behind correlations between high ripple thresholds and speech recognition. Existing data showing correlations between ripple discrimination and speech recognition include many observations above the aliasing limit. Such scores should be treated with caution, and experimenters would benefit from prospectively considering the limitations of the spectral ripple test.
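The aliasing argument can be made concrete with a toy calculation. The sketch below assumes a hypothetical 16-channel filterbank with channels evenly spaced in octaves (none of these numbers come from the paper); it shows that once ripple density exceeds the channel grid's spectral "Nyquist" limit, a dense ripple produces exactly the same channel outputs as a sparser one, so thresholds above the limit cannot be ordered by resolution.

```python
import numpy as np

# Hypothetical 16-channel filterbank spanning 6 octaves,
# channels evenly spaced on a log-frequency (octave) axis.
n_channels = 16
octaves = 6.0
spacing = octaves / n_channels            # 0.375 octaves between channels
x = np.arange(n_channels) * spacing       # channel positions in octaves

def channel_samples(density):
    """Ripple envelope seen by each channel for a spectral ripple of
    the given density (ripples per octave)."""
    return np.cos(2 * np.pi * density * x)

# The grid can only represent densities up to a spectral "Nyquist"
# limit of 1 / (2 * spacing) ripples per octave (~1.33 RPO here).
nyquist = 1 / (2 * spacing)

# Above that limit a dense ripple aliases onto a sparser one:
# densities d and (1/spacing - d) yield identical channel outputs.
d = 1.0
alias = 1 / spacing - d                   # ~1.667 RPO
assert np.allclose(channel_samples(d), channel_samples(alias))
```

The identity follows from cos(2*pi*(1/spacing - d)*n*spacing) = cos(2*pi*n - 2*pi*d*n*spacing) = cos(2*pi*d*n*spacing); real CI processors differ in detail, but the sampling argument is the same.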
Affiliation(s)
- Matthew B Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minnesota, USA; School of Information, University of Michigan, Ann Arbor, Michigan, USA
6.
Davidson LS, Geers AE, Uchanski RM. Spectral Modulation Detection Performance and Speech Perception in Pediatric Cochlear Implant Recipients. Am J Audiol 2021; 30:1076-1087. PMID: 34670098. DOI: 10.1044/2021_aja-21-00076.
Abstract
PURPOSE The aims of this study were, for pediatric cochlear implant (CI) recipients, (a) to determine the effect of age on spectral modulation detection (SMD) ability and compare that age effect to the one seen in typically hearing (TH) peers; (b) to identify demographic, cognitive, and audiological factors associated with SMD ability; and (c) to determine the unique contribution of SMD ability to segmental and suprasegmental speech perception. METHOD A total of 104 pediatric CI recipients and 38 TH peers (ages 6-11 years) completed a test of SMD. CI recipients also completed tests of segmental perception (e.g., word recognition in noise, vowels and consonants in quiet), suprasegmental perception (e.g., talker discrimination, stress discrimination, emotion identification), nonverbal intelligence, and working memory. Regression analyses were used to examine the effects of group and age on percent-correct SMD scores. For the CI group, the effects of demographic, audiological, and cognitive variables on SMD performance, and the effects of SMD on speech perception, were examined. RESULTS The TH group performed significantly better than the CI group on SMD, and both groups showed better performance with increasing age. Significant predictors of SMD performance for the CI group were age and nonverbal intelligence. SMD performance predicted significant variance in both segmental and suprasegmental perception; the variance predicted was nearly twice as large for suprasegmental as for segmental perception. CONCLUSIONS Children in the CI group, on average, scored lower than their TH peers. The slopes of improvement in SMD with age did not differ between the groups. The significant effect of nonverbal intelligence on SMD performance in CI recipients indicates that difficulties inherent in the task affect outcomes. SMD ability predicted speech perception scores, with a more prominent role in suprasegmental than in segmental speech perception. SMD ability may provide a useful nonlinguistic tool for predicting speech perception benefit, with cautious interpretation based on age and cognitive function.
Affiliation(s)
- Lisa S. Davidson: Department of Otolaryngology, Washington University School of Medicine in St. Louis, MO
- Ann E. Geers: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson
- Rosalie M. Uchanski: Department of Otolaryngology, Washington University School of Medicine in St. Louis, MO
7.
McGuire K, Firestone GM, Zhang N, Zhang F. The Acoustic Change Complex in Response to Frequency Changes and Its Correlation to Cochlear Implant Speech Outcomes. Front Hum Neurosci 2021; 15:757254. PMID: 34744668. PMCID: PMC8566680. DOI: 10.3389/fnhum.2021.757254.
Abstract
One of the biggest challenges facing cochlear implant (CI) users is the highly variable hearing outcome of implantation across patients. Since speech perception requires the detection of various dynamic changes in acoustic features (e.g., frequency, intensity, timing) of speech sounds, it is critical to examine the ability of CI users to detect within-stimulus acoustic changes. The primary objective of this study was to examine the auditory event-related potential (ERP) evoked by within-stimulus frequency changes (F-changes), one type of acoustic change complex (ACC), in adult CI users, and its correlation with speech outcomes. Twenty-one adult CI users (29 individual CI ears) were tested with psychoacoustic frequency change detection tasks; speech tests including Consonant-Nucleus-Consonant (CNC) word recognition, Arizona Biomedical Sentence Recognition in quiet and noise (AzBio-Q and AzBio-N), and the Digits-in-Noise (DIN) test; and electroencephalographic (EEG) recordings. The stimuli for the psychoacoustic tests and EEG recordings were pure tones at three base frequencies (0.25, 1, and 4 kHz) containing an F-change at the midpoint of the tone. Results showed that the frequency change detection threshold (FCDT), ACC N1' latency, and P2' latency did not differ across frequencies (p > 0.05). The ACC N1'-P2' amplitude was significantly larger for 0.25 kHz than for the other base frequencies (p < 0.05). The mean N1' latency across the three base frequencies was negatively correlated with CNC word recognition (r = -0.40, p < 0.05) and CNC phoneme recognition (r = -0.40, p < 0.05), and positively correlated with mean FCDT (r = 0.46, p < 0.05). The P2' latency was positively correlated with DIN (r = 0.47, p < 0.05) and mean FCDT (r = 0.47, p < 0.05). There was no statistically significant correlation between N1'-P2' amplitude and speech outcomes (all ps > 0.05). These results indicate that variability in CI speech outcomes assessed with the CNC, AzBio-Q, and DIN tests can be partially explained (approximately 16-21%) by variability in the cortical sensory encoding of F-changes, as reflected by the ACC.
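A stimulus of the kind described (a pure tone with a frequency change at its midpoint) can be sketched as follows. This is a hypothetical reconstruction, not the authors' stimulus code; the base frequency and change size are illustrative, and phase is kept continuous across the change so that the F-change itself, rather than a transient click, drives the evoked response.

```python
import numpy as np

def f_change_tone(base_hz=1000.0, pct_change=10.0, fs=44100, dur=1.0):
    """Pure tone whose frequency steps up by pct_change percent at the
    stimulus midpoint, with phase kept continuous across the change."""
    t = np.arange(int(fs * dur)) / fs
    f2 = base_hz * (1 + pct_change / 100)
    # Instantaneous frequency: base before the midpoint, f2 after.
    inst_f = np.where(t < dur / 2, base_hz, f2)
    # Integrate instantaneous frequency to obtain a continuous phase.
    phase = 2 * np.pi * np.cumsum(inst_f) / fs
    return np.sin(phase)

tone = f_change_tone()
```

In an ACC paradigm, the EEG epoch is time-locked to the midpoint F-change, and N1'/P2' are measured relative to that change rather than to stimulus onset.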
Affiliation(s)
- Kelli McGuire: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Gabrielle M. Firestone: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang: Division of Biostatistics and Epidemiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
8.
Brennan MA, McCreery RW. Audibility and Spectral-Ripple Discrimination Thresholds as Predictors of Word Recognition with Nonlinear Frequency Compression. J Am Acad Audiol 2021; 32:596-605. PMID: 35176803. PMCID: PMC9112840. DOI: 10.1055/s-0041-1732333.
Abstract
BACKGROUND Nonlinear frequency compression (NFC) lowers high-frequency sounds to a lower frequency and is used to improve high-frequency audibility. However, the efficacy of NFC varies widely: while some individuals benefit from NFC, many do not. Spectral resolution is one factor that might explain individual benefit from NFC. Because individuals with better spectral resolution understand more speech than those with poorer spectral resolution, it was hypothesized that individual benefit from NFC could be predicted from the change in spectral resolution measured with NFC relative to a condition without NFC. PURPOSE This study aimed to determine the impact of NFC on access to spectral information and whether these changes predict individual benefit from NFC for adults with sensorineural hearing loss (SNHL). RESEARCH DESIGN This was a quasi-experimental cohort study. Participants used a pair of hearing aids set to the Desired Sensation Level algorithm (DSL m[i/o]). STUDY SAMPLE Participants were 19 adults with SNHL, recruited from the Boys Town National Research Hospital Participant Registry. DATA COLLECTION AND ANALYSIS Participants were seated in a sound-attenuating booth, and percent-correct word recognition and spectral-ripple discrimination thresholds were measured in two conditions, with and without NFC. Because audibility is known to influence spectral-ripple thresholds and benefit from NFC, audibility was quantified using the aided Speech Intelligibility Index (SII). Linear mixed models were generated to predict word recognition from the aided SII and spectral-ripple discrimination thresholds. RESULTS While NFC did not influence percent-correct word recognition, participants with higher (better) aided SII and spectral-ripple discrimination thresholds understood more words than those with either a lower aided SII or a lower spectral-ripple discrimination threshold. Benefit from NFC was not predictable from a participant's aided SII or spectral-ripple discrimination threshold. CONCLUSION We have extended previous work on the effect of audibility on benefit from NFC to include a measure of spectral resolution, the spectral-ripple discrimination threshold. Clinically, these results suggest that patients with better audibility and spectral resolution will understand speech better than those with poorer audibility or spectral resolution; however, the results are inconsistent with the notion that individual benefit from NFC is predictable from aided audibility or spectral resolution.
9.
Nittrouer S, Lowenstein JH, Sinex DG. The contribution of spectral processing to the acquisition of phonological sensitivity by adolescent cochlear implant users and normal-hearing controls. J Acoust Soc Am 2021; 150:2116. PMID: 34598601. PMCID: PMC8463097. DOI: 10.1121/10.0006416.
Abstract
This study tested the hypotheses that (1) adolescents with cochlear implants (CIs) experience impaired spectral processing abilities, and (2) those impaired spectral processing abilities constrain acquisition of skills based on sensitivity to phonological structure but not those based on lexical or syntactic (lexicosyntactic) knowledge. To test these hypotheses, spectral modulation detection (SMD) thresholds were measured for 14-year-olds with normal hearing (NH) or CIs. Three measures each of phonological and lexicosyntactic skills were obtained and used to generate latent scores of each kind of skill. Relationships between SMD thresholds and both latent scores were assessed. Mean SMD threshold was poorer for adolescents with CIs than for adolescents with NH. Both latent lexicosyntactic and phonological scores were poorer for the adolescents with CIs, but the latent phonological score was disproportionately so. SMD thresholds were significantly associated with phonological but not lexicosyntactic skill for both groups. The only audiologic factor that also correlated with phonological latent scores for adolescents with CIs was the aided threshold, but it did not explain the observed relationship between SMD thresholds and phonological latent scores. Continued research is required to find ways of enhancing spectral processing for children with CIs to support their acquisition of phonological sensitivity.
Affiliation(s)
- Susan Nittrouer: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Joanna H Lowenstein: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Donal G Sinex: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
10.
Berg KA, Noble JH, Dawant BM, Dwyer RT, Labadie RF, Gifford RH. Speech recognition as a function of the number of channels for an array with large inter-electrode distances. J Acoust Soc Am 2021; 149:2752. PMID: 33940865. PMCID: PMC8062138. DOI: 10.1121/10.0004244.
Abstract
This study investigated the number of channels available to cochlear implant (CI) recipients for maximum speech understanding and sound quality, using lateral wall electrode arrays, which result in large electrode-to-modiolus distances and feature the greatest inter-electrode distances (2.1-2.4 mm), the longest active lengths (23.1-26.4 mm), and the fewest electrodes commercially available. Participants included ten post-lingually deafened adult CI recipients with MED-EL electrode arrays (FLEX28 and STANDARD) located entirely within the scala tympani. Electrode placement and scalar location were determined using computed tomography. The number of channels was varied from 4 to 12 with equal spatial distribution across the array, using a continuous interleaved sampling-based strategy. Speech recognition, sound quality ratings, and a closed-set vowel recognition task were measured acutely for each electrode condition. At the group level, participants did not demonstrate statistically significant differences beyond eight channels on almost all measures. However, several listeners showed considerable improvements from 8 to 12 channels on speech and sound quality measures. These results suggest that the channel interaction caused by the greater electrode-to-modiolus distances of straight electrode arrays may be partially compensated for by large inter-electrode spacing.
Affiliation(s)
- Katelyn A Berg: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- Jack H Noble: Department of Electrical Engineering and Computer Science, Vanderbilt University, 2201 West End Avenue, Nashville, Tennessee 37235, USA
- Benoit M Dawant: Department of Electrical Engineering and Computer Science, Vanderbilt University, 2201 West End Avenue, Nashville, Tennessee 37235, USA
- Robert T Dwyer: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- Robert F Labadie: Department of Otolaryngology, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- René H Gifford: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
11.
Goykhburg MV, Nechaev DI, Bakhshinyan VV, Tavartkiladze GA. [Evaluation of rehabilitation results in cochlear implant users using psychoacoustic methods]. Vestn Otorinolaringol 2021; 86:10-16. PMID: 34964322. DOI: 10.17116/otorino20218606110.
Abstract
BACKGROUND The number of patients with bilateral sensorineural deafness treated with cochlear implantation (CI) is increasing in the Russian Federation, making methods for assessing the auditory rehabilitation of these patients increasingly relevant. OBJECTIVE To investigate the correlation between speech intelligibility in quiet and the frequency resolving power (FRP) of hearing, using a ripple-spectrum phase-reversal test (RSPRT), in CI users. MATERIAL AND METHODS The study included 30 CI users (three with bilateral CIs) aged 13 to 63 years, with 1 to 16 years of CI experience. Nineteen patients used CI systems manufactured by Cochlear Ltd. (Australia) and 11 used systems manufactured by Advanced Bionics (Switzerland). All subjects underwent pure-tone audiometry (PTA) and speech audiometry in quiet using polysyllabic speech material, performed with a two-channel clinical audiometer AC-40 (Interacoustics A/S, Denmark); phonetic material recorded on a PC was presented through an SP90 loudspeaker (Interacoustics A/S, Denmark). For FRP estimation, the RSPRT was administered in a free sound field, installed on the same PC and presented through the SP90 loudspeakers. RESULTS According to PTA in the free sound field, sound perception thresholds in all subjects corresponded to mild sensorineural hearing loss; thresholds from 500 Hz to 4 kHz were within 25-30 dB nHL. Speech intelligibility in quiet in the free sound field ranged from 5% to 100%. On the RSPRT, the average result was 1.94 RPO at 1 kHz, 2.3 RPO at 2 kHz, and 2.2 RPO at 4 kHz. A significant correlation between speech intelligibility in quiet and frequency resolution of hearing was obtained at 1 and 4 kHz. The highest correlation coefficient was found at 1 kHz (r=0.57, p=0.0005); at 4 kHz it was lower (r=0.46, p=0.009), and at 2 kHz it was at the boundary of significance (r=0.34, p=0.051). CONCLUSIONS There is a correlation between speech intelligibility in quiet and the FRP of hearing, which makes it possible to recommend the RSPRT for assessing the auditory rehabilitation of patients after CI.
Affiliation(s)
- M V Goykhburg: Russian Scientific and Clinical Center for Audiology and Hearing Prosthetics of the Federal Medical and Biological Agency, Moscow, Russia
- D I Nechaev: Severtsov Institute of Ecology and Evolution of the Russian Academy of Sciences, Moscow, Russia
- V V Bakhshinyan: Russian Scientific and Clinical Center for Audiology and Hearing Prosthetics of the Federal Medical and Biological Agency, Moscow, Russia; Russian Medical Academy for Continuous Professional Education, Moscow, Russia
- G A Tavartkiladze: Russian Scientific and Clinical Center for Audiology and Hearing Prosthetics of the Federal Medical and Biological Agency, Moscow, Russia; Russian Medical Academy for Continuous Professional Education, Moscow, Russia
12.
Zhou N, Dixon S, Zhu Z, Dong L, Weiner M. Spectrotemporal Modulation Sensitivity in Cochlear-Implant and Normal-Hearing Listeners: Is the Performance Driven by Temporal or Spectral Modulation Sensitivity? Trends Hear 2020; 24:2331216520948385. PMID: 32895024. PMCID: PMC7482033. DOI: 10.1177/2331216520948385.
Abstract
This study examined the contribution of temporal and spectral modulation sensitivity to discrimination of stimuli modulated in both the time and frequency domains. The spectrotemporally modulated stimuli contained spectral ripples that shifted systematically across frequency over time at a repetition rate of 5 Hz. As the ripple density increased in the stimulus, modulation depth of the 5 Hz amplitude modulation (AM) reduced. Spectrotemporal modulation discrimination was compared with subjects’ ability to discriminate static spectral ripples and the ability to detect slow AM. The general pattern from both the cochlear implant (CI) and normal hearing groups showed that spectrotemporal modulation thresholds were correlated more strongly with AM detection than with static ripple discrimination. CI subjects’ spectrotemporal modulation thresholds were also highly correlated with speech recognition in noise, when partialing out static ripple discrimination, but the correlation was not significant when partialing out AM detection. The results indicated that temporal information was more heavily weighted in spectrotemporal modulation discrimination, and for CI subjects, it was AM sensitivity that drove the correlation between spectrotemporal modulation thresholds and speech recognition. The results suggest that for the rates tested here, temporal information processing may limit performance more than spectral information processing in both CI users and normal hearing listeners.
Affiliation(s)
- Ning Zhou
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Susannah Dixon
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Zhen Zhu
- Department of Engineering, East Carolina University, Greenville, North Carolina, United States
- Lixue Dong
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Marti Weiner
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
13
Kessler DM, Ananthakrishnan S, Smith SB, D'Onofrio K, Gifford RH. Frequency Following Response and Speech Recognition Benefit for Combining a Cochlear Implant and Contralateral Hearing Aid. Trends Hear 2020; 24:2331216520902001. [PMID: 32003296] [PMCID: PMC7257083] [DOI: 10.1177/2331216520902001]
Abstract
Multiple studies have shown significant speech recognition benefit when acoustic hearing is combined with a cochlear implant (CI) for a bimodal hearing configuration. However, this benefit varies greatly between individuals. There are few clinical measures correlated with bimodal benefit and those correlations are driven by extreme values prohibiting data-driven, clinical counseling. This study evaluated the relationship between neural representation of fundamental frequency (F0) and temporal fine structure via the frequency following response (FFR) in the nonimplanted ear as well as spectral and temporal resolution of the nonimplanted ear and bimodal benefit for speech recognition in quiet and noise. Participants included 14 unilateral CI users who wore a hearing aid (HA) in the nonimplanted ear. Testing included speech recognition in quiet and in noise with the HA-alone, CI-alone, and in the bimodal condition (i.e., CI + HA), measures of spectral and temporal resolution in the nonimplanted ear, and FFR recording for a 170-ms /da/ stimulus in the nonimplanted ear. Even after controlling for four-frequency pure-tone average, there was a significant correlation (r = .83) between FFR F0 amplitude in the nonimplanted ear and bimodal benefit. Other measures of auditory function of the nonimplanted ear were not significantly correlated with bimodal benefit. The FFR holds potential as an objective tool that may allow data-driven counseling regarding expected benefit from the nonimplanted ear. It is possible that this information may eventually be used for clinical decision-making, particularly in difficult-to-test populations such as young children, regarding effectiveness of bimodal hearing versus bilateral CI candidacy.
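An FFR F0 amplitude of the kind reported above is commonly computed as the spectral magnitude at the stimulus fundamental. A minimal sketch on a synthetic 100-Hz response follows; the sampling rate, duration, and fundamental are illustrative choices, not the study's /da/ analysis:

```python
import numpy as np

fs = 16000                       # sampling rate (Hz), illustrative
dur = 0.17                       # response window (s)
f0 = 100.0                       # assumed stimulus fundamental (Hz)
t = np.arange(int(dur * fs)) / fs

# Synthetic FFR: phase-locked energy at F0 plus background noise
rng = np.random.default_rng(0)
ffr = 0.5 * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)

# One-sided amplitude spectrum; the magnitude at the bin nearest F0
# serves as the "F0 amplitude" metric
spec = 2 * np.abs(np.fft.rfft(ffr)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0_amp = spec[np.argmin(np.abs(freqs - f0))]
```

In practice the spectrum would be computed over the steady-state portion of the averaged response, and the resulting F0 amplitude compared against bimodal benefit across participants.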
Affiliation(s)
- David M Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Spencer B Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, TX, USA
- Kristen D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
14
Assessing the Quality of Low-Frequency Acoustic Hearing: Implications for Combined Electroacoustic Stimulation With Cochlear Implants. Ear Hear 2020; 42:475-486. [PMID: 32976249] [DOI: 10.1097/aud.0000000000000949]
Abstract
OBJECTIVES There are many potential advantages to combined electric and acoustic stimulation (EAS) with a cochlear implant (CI), including benefits for hearing in noise, localization, frequency selectivity, and music enjoyment. However, performance on these outcome measures is variable, and the residual acoustic hearing may not be beneficial for all patients. As such, we propose a measure of spectral resolution that might be more predictive of the usefulness of the residual hearing than the audiogram alone. In the following experiments, we measured performance on spectral resolution and speech perception tasks in individuals with normal hearing (NH) using low-pass filters to simulate steeply sloping audiograms of typical EAS candidates and compared it with performance on these tasks for individuals with sensorineural hearing loss with similar audiometric configurations. Because listeners with NH had similar levels of audibility and bandwidth to listeners with hearing loss, differences between the groups could be attributed to distortions due to hearing loss. DESIGN Listeners with NH (n = 12) and those with hearing loss (n = 23) with steeply sloping audiograms participated in this study. The group with hearing loss consisted of 7 EAS users, 14 hearing aid users, and 3 who did not use amplification in the test ear. Spectral resolution was measured with the spectral-temporal modulated ripple test (SMRT), and speech perception was measured with AzBio sentences in quiet and noise. Listeners with NH listened to stimuli through low-pass filters and at two levels (40 and 60 dBA) to simulate low and high audibility. Listeners with hearing loss listened to SMRT stimuli unaided at their most comfortable listening level and speech stimuli at 60 dBA. RESULTS Results suggest that performance with SMRT is significantly worse for listeners with hearing loss than for listeners with NH and is not related to audibility. 
Performance on the speech perception task declined with decreasing frequency information for both listeners with NH and hearing loss. Significant correlations were observed between speech perception, SMRT scores, and mid-frequency audiometric thresholds for listeners with hearing loss. CONCLUSIONS NH simulations describe a "best case scenario" for hearing loss where audibility is the only deficit. For listeners with hearing loss, the likely broadening of auditory filters, loss of cochlear nonlinearities, and possible cochlear dead regions may have contributed to distorted spectral resolution and thus deviations from the NH simulations. Measures of spectral resolution may capture an aspect of hearing loss not evident from the audiogram and be a useful tool for assessing the contributions of residual hearing post-cochlear implantation.
15
Precompensating for spread of excitation in a cochlear implant coding strategy. Hear Res 2020; 395:107977. [PMID: 32653106] [DOI: 10.1016/j.heares.2020.107977]
Abstract
Cochlear implant users' limited ability to understand speech in noisy environments has been linked to the poor spatial resolution and the high degree of spectral smearing associated with the spread of neural excitation. A sound coding algorithm that aims to improve the spectro-temporal representation of the sound signal at the implanted ear by precompensating the electrical stimulation for the spread of excitation is presented in this study. The spread precompensation algorithm was integrated into the standard clinical advanced combination encoder (ACE) strategy and the resulting strategy was called SPACE. SPACE was evaluated acutely with a group of six implant users and was compared to their daily used ACE strategy in terms of preference rating and speech recognition in four-talker babble and stationary speech-shaped noise. While no significant differences in preference rating were observed, speech recognition in four-talker babble was improved by SPACE processing. Analysis of the group results revealed a significant improvement in mean speech reception threshold (SRT) over the ACE strategy of 1.4 dB in four-talker babble, whereas the difference of 0.9 dB in stationary noise did not reach statistical significance. Assessment of individual differences showed that four out of six listeners obtained significant SRT improvements with SPACE and that no subject scored significantly worse compared to ACE. The results suggest that the proposed sound coding strategy has the potential to improve speech perception for cochlear implant users in challenging listening situations.
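The precompensation idea can be illustrated with a toy linear model: if current spread mixes channel levels through a known spread matrix, one can solve for input levels whose spread approximates the intended excitation pattern. This is a generic sketch under an assumed exponential spread; it is not the actual SPACE algorithm, whose model and constraints are not detailed in the abstract:

```python
import numpy as np

n_ch = 22                                    # electrode count, typical for this device family
idx = np.arange(n_ch)
dist = np.abs(np.subtract.outer(idx, idx))   # inter-electrode distance in electrode steps

# Toy spread matrix: excitation decays exponentially with distance
# (the decay constant of 2 electrode steps is an arbitrary assumption)
spread = np.exp(-dist / 2.0)

# Intended excitation: a single focused peak at electrode 10
target = np.zeros(n_ch)
target[10] = 1.0

# Precompensate by inverting the spread model, then clip,
# since electrical currents cannot be negative
pre = np.linalg.solve(spread, target)
pre = np.clip(pre, 0.0, None)
achieved = spread @ pre                      # excitation the clipped levels actually produce
```

The unclipped solution contains negative side lobes at neighboring electrodes, so clipping makes the inversion approximate; a real strategy must additionally respect per-channel dynamic ranges and loudness constraints.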
16
Jorgensen EJ, McCreery RW, Kirby BJ, Brennan M. Effect of level on spectral-ripple detection threshold for listeners with normal hearing and hearing loss. J Acoust Soc Am 2020; 148:908. [PMID: 32873021] [PMCID: PMC7443170] [DOI: 10.1121/10.0001706]
Abstract
This study investigated the effect of presentation level on spectral-ripple detection for listeners with and without sensorineural hearing loss (SNHL). Participants were 25 listeners with normal hearing and 25 listeners with SNHL. Spectral-ripple detection thresholds (SRDTs) were estimated at three spectral densities (0.5, 2, and 4 ripples per octave, RPO) and three to four sensation levels (SLs) (10, 20, 40, and, when possible, 60 dB SL). Each participant was also tested at 90 dB sound pressure level (SPL). Results indicate that level affected SRDTs. However, the effect of level depended on ripple density and hearing status. For all listeners and all RPO conditions, SRDTs improved from 10 to 40 dB SL. In the 2- and 4-RPO conditions, SRDTs became poorer from the 40 dB SL to the 90 dB SPL condition. The results suggest that audibility likely controls spectral-ripple detection at low SLs for all ripple densities, whereas spectral resolution likely controls spectral-ripple detection at high SLs and ripple densities. For optimal ripple detection across all listeners, clinicians and researchers should use a sensation level of 40 dB. To avoid absolute-level confounds, a presentation level of 80 dB SPL can also be used.
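A spectral-ripple stimulus of the type used in such detection tasks can be synthesized as a sum of many pure-tone carriers whose levels follow a sinusoid on a log-frequency axis. The parameters below (band edges, carrier count, ripple depth) are illustrative choices, not this study's exact stimuli:

```python
import numpy as np

def spectral_ripple(rpo, phase=0.0, dur=0.5, fs=44100,
                    f_lo=100.0, f_hi=8000.0, n_carriers=200,
                    depth_db=20.0, seed=0):
    """Sum-of-sinusoids ripple with `rpo` ripples per octave across the band."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_carriers)        # log-spaced carriers
    octaves = np.log2(freqs / f_lo)                     # position on the log-f axis
    level_db = (depth_db / 2) * np.sin(2 * np.pi * rpo * octaves + phase)
    amps = 10.0 ** (level_db / 20)                      # dB envelope -> linear amplitude
    sig = np.zeros_like(t)
    for f, a in zip(freqs, amps):                       # random carrier phases
        sig += a * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return sig / np.max(np.abs(sig))                    # peak-normalize

rippled = spectral_ripple(rpo=2.0)            # 2 ripples per octave
flat = spectral_ripple(rpo=2.0, depth_db=0)   # reference with no spectral modulation
```

In a detection task the listener discriminates `rippled` from `flat`; the threshold is the smallest depth (or, in density-based variants, the highest ripple density) that remains detectable.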
Affiliation(s)
- Erik J Jorgensen
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa 52242, USA
- Ryan W McCreery
- Boys Town National Research Hospital, Omaha, Nebraska 68124, USA
- Benjamin J Kirby
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas 76203, USA
- Marc Brennan
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, Nebraska 68588, USA
17
Auditory performance of post-lingually deafened adult cochlear implant recipients using electrode deactivation based on postoperative cone beam CT images. Eur Arch Otorhinolaryngol 2020; 278:977-986. [PMID: 32588169] [DOI: 10.1007/s00405-020-06156-8]
Abstract
PURPOSE The use of image processing techniques to estimate the position of intra-cochlear electrodes has enabled the creation of personalized maps to meet the individual stimulation needs of cochlear implant (CI) recipients. The aim of this study was to evaluate a novel technique of electrode deactivation based on postoperative cone beam computed tomography (CBCT) images in post-lingually deafened adult CI recipients. METHODS Based on postoperative CBCT images, the positioning of the electrodes was estimated in relation to the modiolus in 14 ears of 13 post-lingually deafened adult CI recipients. Electrodes that were sub-optimally positioned or involved in kinking or tip fold-over were deactivated. Speech perception scores in silence and in noise were obtained from subjects using the standard map and again 4 weeks after reprogramming with the image-based electrode deactivation reprogramming technique (IBEDRT). The participants selected their preferred map after 4 weeks of IBEDRT use. RESULTS There were statistically significant improvements in the speech recognition tests in silence and noise when comparing IBEDRT performance to the standard map. All participants selected IBEDRT as their new preferred map. CONCLUSIONS IBEDRT is a promising technique for fitting CI recipients and minimizing the channel interaction caused by sub-optimally placed electrodes, thereby improving auditory performance. We propose a novel electrode deactivation technique based on postoperative CBCT imaging, with a limited number of deactivated electrodes and low-dose scanning, which could be applied in clinical routine.
18
Kessler DM, Wolfe J, Blanchard M, Gifford RH. Clinical Application of Spectral Modulation Detection: Speech Recognition Benefit for Combining a Cochlear Implant and Contralateral Hearing Aid. J Speech Lang Hear Res 2020; 63:1561-1571. [PMID: 32379527] [PMCID: PMC7842114] [DOI: 10.1044/2020_jslhr-19-00304]
Abstract
Purpose The purpose of this study was to investigate the relationship between speech recognition benefit derived from the addition of a hearing aid (HA) to the nonimplanted ear (i.e., bimodal benefit) and spectral modulation detection (SMD) performance in the nonimplanted ear in a large clinical sample. An additional purpose was to investigate the influence of low-frequency pure-tone average (PTA) of the nonimplanted ear and age at implantation on the variance in bimodal benefit. Method Participants included 311 unilateral cochlear implant (CI) users who wore an HA in the nonimplanted ear. Participants completed speech recognition testing in quiet and in noise with the CI-alone and in the bimodal condition (i.e., CI and contralateral HA) and SMD in the nonimplanted ear. Results SMD performance in the nonimplanted ear was significantly correlated with bimodal benefit in quiet and in noise. However, this relationship was much weaker than previous reports with smaller samples. SMD, low-frequency PTA of the nonimplanted ear from 125 to 750 Hz, and age at implantation together accounted for, at most, 19.1% of the variance in bimodal benefit. Conclusions Taken together, SMD, low-frequency PTA, and age at implantation account for more variance in bimodal benefit than any single variable alone. A large portion of variance (~80%) in bimodal benefit is not explained by these variables. Supplemental Material https://doi.org/10.23641/asha.12185493.
Affiliation(s)
- David M Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN
19
Liang C, Wenstrup LH, Samy RN, Xiang J, Zhang F. The Effect of Side of Implantation on the Cortical Processing of Frequency Changes in Adult Cochlear Implant Users. Front Neurosci 2020; 14:368. [PMID: 32410947] [PMCID: PMC7201306] [DOI: 10.3389/fnins.2020.00368]
Abstract
Cochlear implants (CI) are widely used in children and adults to restore hearing function. However, CI outcomes vary widely, and the factors involved are not well understood. It is well known that the right and left hemispheres play different roles in auditory perception in adult normal-hearing listeners, but it is unknown how the side of implantation may affect the outcomes of CIs. In this study, the effect of the implantation side on how the brain processes frequency changes within a sound was examined in 12 right-handed adult CI users. The outcomes of CIs were assessed with the behaviorally measured frequency change detection threshold (FCDT), which has been reported to significantly affect CI speech performance. Brain activation and the regions involved were also examined using the acoustic change complex (ACC, a type of cortical potential evoked by acoustic changes within a stimulus), on which waveform analysis and standardized low-resolution brain electromagnetic tomography (sLORETA) were performed. CI users showed activation in the temporal lobe and non-temporal areas, such as the frontal lobe. Right-ear CIs activated the contralateral hemisphere more efficiently than left-ear CIs. For right-ear CIs, increased activation in the contralateral temporal lobe together with decreased activation in the contralateral frontal lobe was correlated with good performance of frequency change detection (lower FCDTs). Such a trend was not found in left-ear CIs. These results suggest that the implantation side may significantly affect neuroplasticity patterns in adults.
Affiliation(s)
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Child Psychiatry and Rehabilitation, Affiliated Shenzhen Maternity & Child Healthcare Hospital, Southern Medical University, Shenzhen, China
- Lisa H Wenstrup
- Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, OH, United States
- Ravi N Samy
- Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, OH, United States
- Jing Xiang
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
20
Tejani VD, Brown CJ. Speech masking release in Hybrid cochlear implant users: Roles of spectral and temporal cues in electric-acoustic hearing. J Acoust Soc Am 2020; 147:3667. [PMID: 32486815] [PMCID: PMC7255813] [DOI: 10.1121/10.0001304]
Abstract
When compared with cochlear implant (CI) users utilizing electric-only (E-Only) stimulation, CI users utilizing electric-acoustic stimulation (EAS) in the implanted ear show improved speech recognition in modulated noise relative to steady-state noise (i.e., speech masking release). It has been hypothesized, but not shown, that masking release is attributed to spectral resolution and temporal fine structure (TFS) provided by acoustic hearing. To address this question, speech masking release, spectral ripple density discrimination thresholds, and fundamental frequency difference limens (f0DLs) were evaluated in the acoustic-only (A-Only), E-Only, and EAS listening modes in EAS CI users. The spectral ripple and f0DL tasks are thought to reflect access to spectral and TFS cues, which could impact speech masking release. Performance in all three measures was poorest when EAS CI users were tested using the E-Only listening mode, with significant improvements in A-Only and EAS listening modes. f0DLs, but not spectral ripple density discrimination thresholds, significantly correlated with speech masking release when assessed in the EAS listening mode. Additionally, speech masking release correlated with AzBio sentence recognition in noise. The correlation between speech masking release and f0DLs likely indicates that TFS cues provided by residual hearing were used to obtain speech masking release, which aided sentence recognition in noise.
Affiliation(s)
- Viral D Tejani
- Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, 21003 Pomerantz Family Pavilion, Iowa City, Iowa 52242-1078, USA
- Carolyn J Brown
- Communication Sciences and Disorders, Wendell Johnson Speech and Hearing Center-127B, University of Iowa, 250 Hawkins Drive, Iowa City, Iowa 52242, USA
21
Berg KA, Noble JH, Dawant BM, Dwyer RT, Labadie RF, Gifford RH. Speech recognition with cochlear implants as a function of the number of channels: Effects of electrode placement. J Acoust Soc Am 2020; 147:3646. [PMID: 32486813] [PMCID: PMC7255811] [DOI: 10.1121/10.0001316]
Abstract
This study investigated the effects of cochlear implant (CI) electrode array type and scalar location on the number of channels available to CI recipients for maximum speech understanding and sound quality. Eighteen post-lingually deafened adult CI recipients participated, including 11 recipients with straight electrode arrays entirely in scala tympani and 7 recipients with translocated precurved electrode arrays. Computerized tomography was used to determine electrode placement and scalar location. In each condition, the number of channels varied from 4 to 22 with equal spatial distribution across the array. Speech recognition (monosyllables, sentences in quiet and in noise), subjective speech sound quality, and closed-set auditory tasks (vowels, consonants, and spectral modulation detection) were measured acutely. Recipients with well-placed straight electrode arrays and translocated precurved electrode arrays performed similarly, demonstrating asymptotic speech recognition scores with 8-10 channels, consistent with the classic literature. This finding contrasts with recent work [Berg, Noble, Dawant, Dwyer, Labadie, and Gifford. (2019). J. Acoust. Soc. Am. 145, 1556-1564] that found precurved electrode arrays well-placed in scala tympani demonstrate continuous performance gains beyond 8-10 channels. Given these results, straight and translocated precurved electrode arrays are theorized to have less channel independence secondary to their placement farther away from neural targets.
Affiliation(s)
- Katelyn A Berg
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- Jack H Noble
- Department of Electrical Engineering & Computer Science, Vanderbilt University, 2201 West End Avenue, Nashville, Tennessee 37235, USA
- Benoit M Dawant
- Department of Electrical Engineering & Computer Science, Vanderbilt University, 2201 West End Avenue, Nashville, Tennessee 37235, USA
- Robert T Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- Robert F Labadie
- Department of Otolaryngology, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
22
Abstract
OBJECTIVES The Quick Spectral Modulation Detection (QSMD) test provides a quick and clinically implementable spectral resolution estimate for cochlear implant (CI) users. However, the original QSMD software (QSMD(MySound)) has technical and usability limitations that prevent widespread distribution and implementation. In this article, we introduce EasyQSMD, a new, freely available software package with the goal of both simplifying and standardizing spectral resolution measurements. DESIGN QSMD was measured for 20 CI users using both software packages. RESULTS No differences between the two software packages were detected, and based on the 95% confidence interval of the difference between tests, the difference is expected to be less than 2 percentage points. The average test duration was under 4 minutes. CONCLUSIONS EasyQSMD is considered functionally equivalent to QSMD(MySound), providing a clinically feasible and quick estimate of spectral resolution for CI users.
23
Forward masking patterns by low and high-rate stimulation in cochlear implant users: Differences in masking effectiveness and spread of neural excitation. Hear Res 2020; 389:107921. [PMID: 32097828] [DOI: 10.1016/j.heares.2020.107921]
Abstract
The goal of the present study was to compare forward masking patterns by stimulation of low and high rates in cochlear implant users. Postlingually deafened Cochlear Nucleus® device users participated in the study. In experiment 1, two maskers of different rates (250 and 1000 pulses per second) were set at levels that produced equal masking for a probe presented at the same electrode as the maskers. This aligned the two masking functions at the on-site probe location. Then their forward masking patterns for the far probes were compared. Results showed that the slope of the masked probe-threshold decay as a function of probe-masker separation was steeper for the high-rate than the low-rate masker. A linear model indicated that this difference in spread of neural excitation (SOE) was accounted for by two factors that were not correlated with each other. One factor was that the low-rate masker required a considerably higher current level to be equally effective in masking as the high-rate masker. The second factor was the effect of stimulation rate on loudness, i.e., integration of multiple pulses. This was consistent with our hypothesis that if an increase in stimulation rate does not result in an increased total neural response, then it is unlikely that the change in rate would change the spatial distribution of the neural activity. Interestingly, the difference in masking effectiveness of the maskers predicted subjects' speech recognition. Poorer performers were those who showed more comparable masking effects by maskers of different rates. The difference in masking effectiveness may indirectly measure the auditory neurons' excitability, which predicts speech recognition. In experiment 2, SOE of the high-rate and low-rate maskers was compared at a level that is clinically relevant, i.e., equal loudness. At equal loudness, high-rate stimulation not only produced an overall greater amount of forward masking, but also a shallower decay of masking with probe-masker separation (wider SOE), compared to low-rate stimulation. The difference in SOE was the opposite of the findings from experiment 1. Whether the maskers were calibrated for equal masking or equal loudness, the absolute current level was always higher for the low-rate masker, which suggests that the SOE patterns cannot be explained by current spread alone. The fact that high-rate stimulation produced greater masking and wider SOE at equal loudness may explain why using high stimulation rates has not produced consistent benefits for speech recognition, and why lowering the stimulation rate from the manufacturer's default sometimes results in improved speech recognition for some subjects.
24
D'Onofrio KL, Caldwell M, Limb C, Smith S, Kessler DM, Gifford RH. Musical Emotion Perception in Bimodal Patients: Relative Weighting of Musical Mode and Tempo Cues. Front Neurosci 2020; 14:114. [PMID: 32174809] [PMCID: PMC7054459] [DOI: 10.3389/fnins.2020.00114]
Abstract
Several cues are used to convey musical emotion, the two primary being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence compared to songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or "bimodal" hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency following response (FFR) for a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear via SMD, as well as neural representation of F0 amplitude via FFR - though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may be dependent upon spectral resolution of the non-implanted ear.
Affiliation(s)
- Kristen L D'Onofrio
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Spencer Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, United States
- David M Kessler
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- René H Gifford
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
25
Berg K, Noble J, Dawant B, Dwyer R, Labadie R, Richards V, Gifford R. Musical Sound Quality as a Function of the Number of Channels in Modern Cochlear Implant Recipients. Front Neurosci 2019; 13:999. [PMID: 31607846] [PMCID: PMC6769043] [DOI: 10.3389/fnins.2019.00999]
Abstract
Objectives This study examined musical sound quality (SQ) in adult cochlear implant (CI) recipients. The study goals were to determine: the number of channels needed for high levels of musical SQ overall and by musical genre; the impact of device and patient factors on musical SQ ratings; and the relationship between musical SQ, speech recognition, and speech SQ, to relate these findings to measures frequently used in clinical protocols. Methods Twenty-one post-lingually deafened adult CI recipients participated in this study. Electrode placement, including scalar location, average electrode-to-modiolus distance (M̄), and angular insertion depth, was determined by CT imaging using validated CI position analysis algorithms (e.g., Noble et al., 2013; Zhao et al., 2018, 2019). CI programs were created using 4–22 electrodes with equal spatial distribution of active electrodes across the array. Speech recognition, speech SQ, music perception via a frequency discrimination task, and musical SQ were acutely assessed for all electrode conditions. Musical SQ was assessed using pre-selected musical excerpts from a variety of musical genres. Results CI recipients demonstrated continuous improvement in qualitative judgments of musical SQ with up to 10 active electrodes. Participants with straight electrodes placed in scala tympani (ST) and pre-curved electrodes with higher M̄ variance reported higher levels of musical SQ; however, this relationship is believed to be driven by levels of musical experience as well as the potential for preoperative bias in device selection. Participants reported significant increases in musical SQ beyond four channels for all musical genres examined in the current study except for Hip Hop/Rap. After musical experience outliers were removed, there was no relationship between musical experience or frequency discrimination ability and musical SQ ratings. There was a weak but significant correlation between qualitative ratings for speech stimuli presented in quiet and in noise and musical SQ. Conclusion Modern CI recipients may need more channels for musical SQ than are required for asymptotic speech recognition or speech SQ. These findings may be used to provide clinical guidance for personalized expectation management of music appreciation depending on individual device and patient factors.
Collapse
Affiliation(s)
- Katelyn Berg
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
- Jack Noble
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, United States; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, United States
- Benoit Dawant
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, United States; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, United States
- Robert Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
- Robert Labadie
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, United States; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, United States
- Virginia Richards
- Department of Cognitive Science, University of California, Irvine, Irvine, CA, United States
- René Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, United States
26
Zhang F, Underwood G, McGuire K, Liang C, Moore DR, Fu QJ. Frequency change detection and speech perception in cochlear implant users. Hear Res 2019; 379:12-20. PMID: 31035223; DOI: 10.1016/j.heares.2019.04.007.
Abstract
Dynamic frequency changes in sound provide critical cues for speech perception. Most previous studies examining frequency discrimination in cochlear implant (CI) users have employed behavioral tasks in which target and reference tones (differing in frequency) are presented statically in separate time intervals. Participants are required to identify the target frequency by comparing stimuli across these time intervals. However, perceiving dynamic frequency changes in speech requires detection of within-interval frequency change. This study explored the relationship between detection of within-interval frequency changes and the speech perception performance of CI users. Frequency change detection thresholds (FCDTs) were measured in 20 adult CI users using a 3-alternative forced-choice (3AFC) procedure. Stimuli were 1-s pure tones (base frequencies at 0.25, 1, and 4 kHz) with frequency changes occurring 0.5 s after tone onset. Speech tests were 1) Consonant-Nucleus-Consonant (CNC) monosyllabic word recognition, 2) Arizona Biomedical Sentence Recognition (AzBio) in quiet, 3) AzBio in noise (AzBio-N, +10 dB signal-to-noise ratio, SNR), and 4) Digits-in-Noise (DIN). Participants' subjective satisfaction with the CI was also obtained. Results showed that correlations between FCDTs and speech perception were all statistically significant. The satisfaction level of CI use was not related to FCDTs after controlling for major demographic factors. DIN speech reception thresholds were significantly correlated with AzBio-N scores. The current findings suggest that the ability to detect within-interval frequency changes may play an important role in the speech perception performance of CI users. FCDT and DIN can serve as simple and rapid tests that require little or no linguistic background for the prediction of CI speech outcomes.
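Threshold measures like the FCDT are typically obtained with an adaptive track. A hedged sketch of a generic 2-down/1-up 3AFC staircase with a simulated listener (the step rule, starting value, and listener model are assumptions for illustration, not the authors' exact procedure):

```python
import random

def run_staircase(detects, start=10.0, factor=2.0, reversals_needed=8):
    """2-down/1-up adaptive track (converges near the 70.7%-correct point).
    `detects(delta)` simulates one 3AFC trial at frequency change `delta` (%)."""
    delta, streak, reversals, last_dir = start, 0, [], None
    while len(reversals) < reversals_needed:
        if detects(delta):
            streak += 1
            if streak == 2:                  # two correct in a row -> harder
                streak = 0
                if last_dir == 'up':
                    reversals.append(delta)  # direction flip: record reversal
                delta = max(delta / factor, 0.01)
                last_dir = 'down'
        else:                                # one incorrect -> easier
            streak = 0
            if last_dir == 'down':
                reversals.append(delta)
            delta *= factor
            last_dir = 'up'
    return sum(reversals[-6:]) / 6           # mean of last reversals = threshold

# Simulated listener: reliably detects changes above ~1%, otherwise
# guesses with 3AFC chance performance (1 in 3).
random.seed(1)
listener = lambda d: d > 1.0 or random.random() < 1 / 3
fcdt = run_staircase(listener)
```

With this listener model the track should converge near the simulated 1% detection boundary.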
Affiliation(s)
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Gabrielle Underwood
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA; Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- David R Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, University of Cincinnati, Ohio, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California, Los Angeles, Los Angeles, CA, USA
27
Holder JT, Reynolds SM, Sunderhaus LW, Gifford RH. Current Profile of Adults Presenting for Preoperative Cochlear Implant Evaluation. Trends Hear 2018; 22:2331216518755288. PMID: 29441835; PMCID: PMC6027468; DOI: 10.1177/2331216518755288.
Abstract
Considerable advancements in cochlear implant technology (e.g., electric acoustic stimulation) and assessment materials have yielded expanded criteria. Despite this, it is unclear whether individuals with better audiometric thresholds and speech understanding are being referred for cochlear implant workup and pursuing cochlear implantation. The purpose of this study was to characterize the mean auditory and demographic profile of adults presenting for preoperative cochlear implant workup. Data were collected prospectively for all adult preoperative workups at Vanderbilt from 2013 to 2015. Subjects included 287 adults (253 postlingually deafened) with a mean age of 62.3 years. Each individual was assessed using the minimum speech test battery, spectral modulation detection, subjective questionnaires, and cognitive screening. Mean consonant-nucleus-consonant word scores, AzBio sentence scores, and pure-tone averages for postlingually deafened adults were 10%, 13%, and 89 dB HL, respectively, for the ear to be implanted. Seventy-three individuals (25.4%) met labeled indications for Hybrid-L and 207 individuals (72.1%) had aidable hearing in the better hearing ear to be used in a bimodal hearing configuration. These results suggest that mean speech understanding evaluated at cochlear implant workup remains very low despite recent advancements. Greater awareness and insurance accessibility may be needed to make cochlear implant technology available to those who qualify for electric acoustic stimulation devices as well as individuals meeting conventional cochlear implant criteria.
Affiliation(s)
- Jourdan T Holder
- Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Susan M Reynolds
- Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Linsey W Sunderhaus
- Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford
- Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA; Advanced Bionics, Valencia, CA, USA; Cochlear Americas, Englewood, CO, USA; Frequency Therapeutics, Woburn, MA, USA
28
Croghan NBH, Smith ZM. Speech Understanding With Various Maskers in Cochlear-Implant and Simulated Cochlear-Implant Hearing: Effects of Spectral Resolution and Implications for Masking Release. Trends Hear 2018; 22:2331216518787276. PMID: 30022730; PMCID: PMC6053854; DOI: 10.1177/2331216518787276.
Abstract
The purpose of this study was to investigate the relationship between psychophysical spectral resolution and sentence reception in various types of interfering backgrounds for listeners with cochlear implants and normal-hearing subjects listening to vocoded speech. Spectral resolution was measured with a spectral modulation detection (SMD) task. For speech testing, maskers included stationary speech-shaped noise (SSN), four-talker babble, multitone noise, and a competing talker. To explore the possible trade-offs between spectral resolution and susceptibility to different types of maskers, the degree of simulated current spread was varied within the vocoder group, achieving a range of performance for SMD and speech tasks. Greater simulated current spread was detrimental to both spectral resolution and speech recognition, suggesting that interventions that decrease current spread may improve performance for both tasks. Better SMD sensitivity was significantly correlated with improved sentence reception. In addition, differences in sentence reception across the four maskers were significantly associated with SMD across the combined group of cochlear-implant and vocoder subjects. Masking release (MR) was quantified as the signal-to-noise ratio difference in speech reception threshold between the SSN and competing talker. Several individual cochlear-implant subjects demonstrated substantial MR, in contrast to previous studies, and the degree of MR increased with better SMD thresholds across subjects. The results of this study suggest that alternative masker types, particularly competing talkers, are more sensitive than stationary SSN to differences in spectral resolution in the cochlear-implant population.
Affiliation(s)
- Naomi B H Croghan
- Denver Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA; Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, CO, USA
- Zachary M Smith
- Denver Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA; Department of Physiology and Biophysics, School of Medicine, University of Colorado, Aurora, CO, USA
29
Gifford RH, Noble JH, Camarata SM, Sunderhaus LW, Dwyer RT, Dawant BM, Dietrich MS, Labadie RF. The Relationship Between Spectral Modulation Detection and Speech Recognition: Adult Versus Pediatric Cochlear Implant Recipients. Trends Hear 2018; 22:2331216518771176. PMID: 29716437; PMCID: PMC5949922; DOI: 10.1177/2331216518771176.
Abstract
Adult cochlear implant (CI) recipients demonstrate a reliable relationship between spectral modulation detection and speech understanding. Prior studies documenting this relationship have focused on postlingually deafened adult CI recipients, leaving an open question regarding the relationship between spectral resolution and speech understanding for adults and children with prelingual onset of deafness. Here, we report CI performance on measures of speech recognition and spectral modulation detection for 578 CI recipients, including 477 postlingual adults, 65 prelingual adults, and 36 prelingual pediatric CI users. The results demonstrated a significant correlation between spectral modulation detection and various measures of speech understanding for the 542 adult CI recipients. For the 36 pediatric CI recipients, however, there was no significant correlation between spectral modulation detection and speech understanding in quiet or in noise, nor was spectral modulation detection significantly correlated with listener age or age at implantation. These findings suggest that pediatric CI recipients might not depend upon spectral resolution for speech understanding in the same manner as adult CI recipients. It is possible that pediatric CI users are making use of different cues, such as those contained within the temporal envelope, to achieve high levels of speech understanding. Further work is warranted to examine the relationship between spectral and temporal resolution and speech recognition, and to describe the underlying mechanisms driving peripheral auditory processing in pediatric CI users.
Affiliation(s)
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
- Jack H Noble
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Stephen M Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Linsey W Sunderhaus
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert T Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Benoit M Dawant
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Mary S Dietrich
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert F Labadie
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
30
Berg KA, Noble JH, Dawant BM, Dwyer RT, Labadie RF, Gifford RH. Speech recognition as a function of the number of channels in perimodiolar electrode recipients. J Acoust Soc Am 2019; 145:1556. PMID: 31067952; PMCID: PMC6435372; DOI: 10.1121/1.5092350.
Abstract
This study investigated the number of channels needed for maximum speech understanding and sound quality in 30 adult cochlear implant (CI) recipients with perimodiolar electrode arrays verified via imaging to be completely within scala tympani (ST). Performance was assessed using a continuous interleaved sampling (CIS) strategy with 4, 8, 10, and 16 channels and n-of-m with 16 maxima. Listeners were administered auditory tasks of speech understanding [monosyllables, sentences (quiet and +5 dB signal-to-noise ratio, SNR), vowels, consonants], spectral modulation detection, as well as subjective estimates of sound quality. Results were as follows: (1) significant performance gains were observed for speech in quiet (monosyllables and sentences) with 16- as compared to 8-channel CIS, (2) 16 channels in a 16-of-m strategy yielded significantly higher outcomes than 16-channel CIS for sentences in noise (percent correct and subjective sound quality) and spectral modulation detection, (3) 16 channels in a 16-of-m strategy yielded significantly higher outcomes as compared to 8- and 10-channel CIS for monosyllables, sentences (quiet and noise), consonants, spectral modulation detection, and subjective sound quality, (4) 16 versus 8 maxima yielded significantly higher speech recognition for monosyllables and sentences in noise using an n-of-m strategy, and (5) the degree of benefit afforded by 16 versus 8 maxima was inversely correlated with mean electrode-to-modiolus distance. These data demonstrate greater channel independence with perimodiolar electrode arrays as compared to previous studies with straight electrodes and warrant further investigation of the minimum number of maxima and number of channels needed for maximum auditory outcomes.
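The n-of-m strategy compared above stimulates, in each analysis cycle, only the n channels with the largest envelope amplitudes among the m available channels (unlike CIS, which stimulates every active channel). A minimal sketch of the maxima-selection step (the random envelope values are illustrative, not clinical data):

```python
import numpy as np

def select_maxima(envelopes, n):
    """Return indices of the n channels with the largest envelope amplitude
    for each analysis frame. `envelopes` has shape (frames, channels)."""
    order = np.argsort(envelopes, axis=1)[:, ::-1]   # channels by descending amplitude
    return np.sort(order[:, :n], axis=1)             # keep the n maxima per frame

# One frame of 16 channel envelopes: an 8-of-16 strategy stimulates only
# the 8 largest; a 16-of-16 strategy would stimulate all of them.
rng = np.random.default_rng(0)
frame = rng.random((1, 16))
picked = select_maxima(frame, 8)
```

Every selected channel's envelope is, by construction, at least as large as every channel left unstimulated in that frame.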
Affiliation(s)
- Katelyn A Berg
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- Jack H Noble
- Department of Electrical Engineering & Computer Science, Vanderbilt University, 2201 West End Avenue, Nashville, Tennessee 37235, USA
- Benoit M Dawant
- Department of Electrical Engineering & Computer Science, Vanderbilt University, 2201 West End Avenue, Nashville, Tennessee 37235, USA
- Robert T Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- Robert F Labadie
- Department of Otolaryngology, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
31
Zhao Y, Chakravorti S, Labadie RF, Dawant BM, Noble JH. Automatic graph-based method for localization of cochlear implant electrode arrays in clinical CT with sub-voxel accuracy. Med Image Anal 2019; 52:1-12. PMID: 30468968; PMCID: PMC6543817; DOI: 10.1016/j.media.2018.11.005.
Abstract
Cochlear implants (CIs) are neural prosthetics that provide a sense of sound to people who experience severe to profound hearing loss. Recent studies have demonstrated a correlation between hearing outcomes and intra-cochlear locations of CI electrodes. Our group has been investigating this correlation and developing an image-guided cochlear implant programming (IGCIP) system to program CI devices to improve hearing outcomes. One crucial step that has not been automated in IGCIP is the localization of CI electrodes in clinical CTs. Existing methods for CI electrode localization do not generalize well on large-scale datasets of clinical CTs implanted with different brands of CI arrays. In this paper, we propose a novel method for localizing different brands of CI electrodes in clinical CTs. We first generate candidate electrode positions at sub-voxel resolution in a whole-head CT by thresholding an up-sampled feature image and voxel-thinning the result. Then, we use a graph-based path-finding algorithm to find a fixed-length path that consists of a subset of the candidates as the localization result. Validation on a large-scale dataset of clinical CTs shows that our proposed method outperforms the state-of-the-art CI electrode localization methods and achieves a mean error of 0.12 mm when compared to expert manual localization results. This represents a crucial step in translating IGCIP from the laboratory to large-scale clinical use.
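The localization step described above selects a fixed-length path through candidate electrode positions. As a hedged toy illustration of that idea (a greedy walk that prefers the candidate closest to the expected inter-electrode spacing, over hypothetical 3D coordinates; the paper's method is a global graph optimization, not this greedy sketch):

```python
import math

def localize_path(candidates, n_electrodes, spacing, start):
    """Toy fixed-length path finding: from `start`, repeatedly pick the unused
    candidate whose distance from the current point best matches `spacing`."""
    path, current = [start], start
    remaining = [c for c in candidates if c != start]
    for _ in range(n_electrodes - 1):
        nxt = min(remaining, key=lambda c: abs(math.dist(c, current) - spacing))
        path.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return path

# Candidates: six "true" electrode positions spaced 1.0 apart along a line,
# plus two spurious high-intensity points that should be rejected.
true_pts = [(float(i), 0.0, 0.0) for i in range(6)]
spurious = [(0.4, 3.0, 0.0), (2.7, -2.5, 0.0)]
path = localize_path(true_pts + spurious, 6, 1.0, true_pts[0])
```

The recovered path should traverse the six true positions in order and skip the spurious candidates, because their distances deviate from the expected spacing.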
Affiliation(s)
- Yiyuan Zhao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Srijata Chakravorti
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Robert F Labadie
- Department of Otolaryngology - Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Benoit M Dawant
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Jack H Noble
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
32
Holder JT, Levin LM, Gifford RH. Speech Recognition in Noise for Adults With Normal Hearing: Age-Normative Performance for AzBio, BKB-SIN, and QuickSIN. Otol Neurotol 2018; 39:e972-e978. PMID: 30247429; PMCID: PMC6242733; DOI: 10.1097/mao.0000000000002003.
Abstract
OBJECTIVE Characterize performance for adults aged 20 to 79 years with normal hearing on tasks of AzBio, Bamford-Kowal-Bench speech-in-noise (BKB-SIN), quick speech-in-noise (QuickSIN), and acoustic Quick Spectral Modulation Detection (QSMD) in the sound field. SETTING Cochlear implant (CI) program. PATIENTS Eighty-one adults with normal hearing and cognitive function were recruited evenly across four age groups (20-49, 50-59, 60-69, and 70-79 yr). INTERVENTIONS Subjects completed AzBio sentence recognition testing in quiet and at five signal-to-noise ratios (SNRs: +10, +5, 0, -5, -10 dB), as well as the BKB-SIN, QuickSIN, and QSMD tasks. MAIN OUTCOME MEASURES AzBio, BKB-SIN, QuickSIN, and acoustic QSMD scores were analyzed to characterize typical sound field performance in an older adult population with normal hearing. RESULTS AzBio sentence recognition performance approached ceiling for sentences presented at ≥ 0 dB SNR, with mean scores ranging from 3.5% at -10 dB SNR to 99% at +10 dB SNR. Mean QuickSIN SNR-50 was -0.02 dB. Mean BKB-SIN SNR-50 was -1.31 dB. Mean acoustic QSMD score was 88%. Performance for all measures decreased with age. CONCLUSION Adults with age-normative hearing achieve ceiling-level performance for AzBio sentence recognition at SNRs used for clinical cochlear implant and/or hearing aid testing. Thus, these tasks are not inherently contraindicated for older listeners. Older adults with normal hearing, however, demonstrated greater deficits for speech in noise compared to younger listeners, an effect most pronounced at negative SNRs. Lastly, BKB-SIN data obtained in the sound field replicated previous normative data for only the youngest age group, suggesting that new norms should be considered for older populations.
Affiliation(s)
- Jourdan T Holder
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
33
Souza P, Hoover E. The Physiologic and Psychophysical Consequences of Severe-to-Profound Hearing Loss. Semin Hear 2018; 39:349-363. PMID: 30443103; DOI: 10.1055/s-0038-1670698.
Abstract
Substantial loss of cochlear function is required to elevate pure-tone thresholds to the severe hearing loss range; yet, individuals with severe or profound hearing loss continue to rely on hearing for communication. Despite the impairment, sufficient information is encoded at the periphery to make acoustic hearing a viable option. However, the probability of significant cochlear and/or neural damage associated with the loss has consequences for sound perception and speech recognition. These consequences include degraded frequency selectivity, which can be assessed with tests including psychoacoustic tuning curves and broadband rippled stimuli. Because speech recognition depends on the ability to resolve frequency detail, a listener with severe hearing loss is likely to have impaired communication in both quiet and noisy environments. However, the extent of the impairment varies widely among individuals. A better understanding of the fundamental abilities of listeners with severe and profound hearing loss and the consequences of those abilities for communication can support directed treatment options in this population.
Affiliation(s)
- Pamela Souza
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois
- Eric Hoover
- Department of Hearing and Speech Sciences, University of Maryland, Baltimore, Maryland
34
The effect of presentation level on spectrotemporal modulation detection. Hear Res 2018; 371:11-18. PMID: 30439570; DOI: 10.1016/j.heares.2018.10.017.
Abstract
The understanding of speech in noise relies (at least partially) on spectrotemporal modulation sensitivity. This sensitivity can be measured by spectral ripple tests, which can be administered at different presentation levels. However, it is not known how presentation level affects spectrotemporal modulation thresholds. In this work, we present behavioral data for normal-hearing adults which show that at higher ripple densities (2 and 4 ripples/oct), increasing presentation level led to worse discrimination thresholds. Results of a computational model suggested that the higher thresholds could be explained by a worsening of the spectrotemporal representation in the auditory nerve due to broadening of cochlear filters and neural activity saturation. Our results demonstrate the importance of taking presentation level into account when administering spectrotemporal modulation detection tests.
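Spectral ripple stimuli of the kind discussed above can be approximated as a tone complex whose level varies sinusoidally along log frequency; discrimination tasks then contrast a standard against a phase-inverted ripple. A minimal sketch of such stimulus generation (all parameter values are illustrative assumptions, not those of the study):

```python
import numpy as np

def spectral_ripple(fs=44100, dur=0.5, f_lo=100.0, f_hi=8000.0,
                    ripples_per_oct=2.0, depth_db=10.0, n_tones=200, phase=0.0):
    """Tone complex whose component levels vary sinusoidally along log
    frequency: `ripples_per_oct` ripple cycles per octave, `depth_db` swing."""
    t = np.arange(int(fs * dur)) / fs
    freqs = f_lo * (f_hi / f_lo) ** np.linspace(0, 1, n_tones)  # log-spaced tones
    octs = np.log2(freqs / f_lo)                                # octaves above f_lo
    level_db = (depth_db / 2) * np.sin(2 * np.pi * ripples_per_oct * octs + phase)
    amps = 10 ** (level_db / 20)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)                 # randomize components
    sig = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                  + phases[:, None])).sum(axis=0)
    return sig / np.abs(sig).max()                              # normalize peak to 1

standard = spectral_ripple(phase=0.0)      # reference ripple phase
inverted = spectral_ripple(phase=np.pi)    # phase-inverted target to discriminate
```

Presentation level would then be set at playback; the abstract's point is that the same ripple density can yield different thresholds depending on that level.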
35
Liang C, Houston LM, Samy RN, Abedelrehim LMI, Zhang F. Cortical Processing of Frequency Changes Reflected by the Acoustic Change Complex in Adult Cochlear Implant Users. Audiol Neurootol 2018; 23:152-164. PMID: 30300882; DOI: 10.1159/000492170.
Abstract
The purpose of this study was to examine neural substrates of frequency change detection in cochlear implant (CI) recipients using the acoustic change complex (ACC), a type of cortical auditory evoked potential elicited by acoustic changes in an ongoing stimulus. A psychoacoustic test and electroencephalographic recording were administered in 12 postlingually deafened adult CI users. The stimuli were pure tones containing different magnitudes of upward frequency changes. Results showed that the frequency change detection threshold (FCDT) was 3.79% in the CI users, with a large variability. The ACC N1' latency was significantly correlated with the FCDT and the clinically collected speech perception score. The results suggested that the ACC evoked by frequency changes can serve as a useful objective tool in assessing frequency change detection capability and predicting speech perception performance in CI users.
Affiliation(s)
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA; Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Lisa M Houston
- Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, USA
- Ravi N Samy
- Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, USA
- Lamiaa Mohamed Ibrahim Abedelrehim
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA; Audiology Department, Sohag Faculty of Medicine, Sohag University, Sohag, Egypt
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA
36
Zhao Y, Dawant BM, Labadie RF, Noble JH. Automatic localization of closely spaced cochlear implant electrode arrays in clinical CTs. Med Phys 2018; 45:5030-5040. PMID: 30218461; DOI: 10.1002/mp.13185.
Abstract
PURPOSE Cochlear implants (CIs) are neural prosthetic devices that provide a sense of sound to people who experience profound hearing loss. Recent research has indicated that there is a significant correlation between hearing outcomes and the intracochlear locations of the electrodes. We have developed an image-guided cochlear implant programming (IGCIP) system based on this correlation to assist audiologists with programming CI devices. One crucial step in our IGCIP system is the localization of CI electrodes in postimplantation CTs. Existing methods for this step are either not fully automated or not robust. When the CI electrodes are closely spaced, it is more difficult to identify individual electrodes because there is no intensity contrast between them in a clinical CT. The goal of this work is to automatically segment the closely spaced CI electrode arrays in postimplantation clinical CTs. METHODS The proposed method involves firstly identifying a bounding box that contains the cochlea by using a reference CT. Then, the intensity image and the vesselness response of the VOI are used to segment the regions of interest (ROIs) that may contain the electrode arrays. For each ROI, we apply a voxel thinning method to generate the medial axis line. We exhaustively search through all the possible connections of medial axis lines. For each possible connection, we define CI array centerline candidates by selecting two points on the connected medial axis lines as the array endpoints. For each CI array centerline candidate, we use a cost function to evaluate its quality, and the one with the lowest cost is selected as the array centerline. Then, we fit an a priori known geometric model of the array to the centerline to localize the individual electrodes. The method was trained on 28 clinical CTs of CI recipients implanted with three models of closely spaced CI arrays. 
The localization results were compared with ground-truth localizations manually generated by an expert. RESULTS A validation study was conducted on 129 clinical CTs of CI recipients implanted with three models of closely spaced arrays. Ninety-eight percent of the localization results generated by the proposed method had maximum localization errors lower than one voxel diagonal of the CTs. The mean localization error was 0.13 mm, close to the rater's consistency error (0.11 mm). The method also outperformed the existing automatic electrode localization methods in our validation study. CONCLUSION Our validation study shows that our method can localize closely spaced CI arrays on clinical CTs with an accuracy close to what is achievable by an expert. This represents a crucial step toward automating IGCIP and translating it from the laboratory to the clinical workflow.
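The candidate-selection step described above (an exhaustive search over centerline candidates, scored by a cost function) can be sketched in Python. The cost terms here, a bending penalty plus deviation from a nominal array length, are illustrative stand-ins; the paper's actual cost function and the 20 mm nominal length used below are assumptions, not values from the abstract.

```python
import numpy as np

def centerline_cost(points):
    # Toy cost: penalize bending plus deviation from a nominal array length.
    # (Illustrative; the published cost function is not given in the abstract.)
    diffs = np.diff(points, axis=0)
    seg_len = np.linalg.norm(diffs, axis=1)
    length = seg_len.sum()
    unit = diffs / seg_len[:, None]
    # bending proxy: 1 - cos(angle) between consecutive segments
    bend = np.sum(1.0 - np.einsum("ij,ij->i", unit[:-1], unit[1:]))
    nominal_length = 20.0  # mm, hypothetical nominal array length
    return bend + abs(length - nominal_length)

def best_centerline(candidates):
    # Exhaustive search: score every candidate, keep the lowest-cost one.
    return min(candidates, key=centerline_cost)
```

A straight candidate of roughly nominal length scores lower than a zigzag one, so the exhaustive search prefers geometrically plausible centerlines.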
Affiliation(s)
- Yiyuan Zhao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Benoit M Dawant
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Robert F Labadie
- Department of Otolaryngology - Head and Neck Surgery, Vanderbilt University, Nashville, TN, 37235, USA
- Jack H Noble
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
37
The Effects of Acoustic Bandwidth on Simulated Bimodal Benefit in Children and Adults with Normal Hearing. Ear Hear 2018; 37:282-8. [PMID: 26901264 DOI: 10.1097/aud.0000000000000281] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern of adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. DESIGN Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. RESULTS The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz with most of the additional benefit gained with energy below 750 Hz. CONCLUSIONS Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. 
Should the current results generalize to children with CIs, they suggest that pediatric CI recipients may derive significant benefit from even minimal acoustic hearing (<250 Hz) in the nonimplanted ear, with increasing benefit at broader bandwidths. Knowledge of the effect of acoustic bandwidth on bimodal benefit in children may help direct clinical decisions regarding a second CI, continued bimodal hearing, and even optimizing acoustic amplification for the nonimplanted ear.
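The acoustic-ear stimuli in this design are low-pass filtered speech at the listed cutoffs. A minimal sketch, using an ideal FFT brick-wall filter rather than whatever filter the study actually used, and a synthetic two-tone signal standing in for speech:

```python
import numpy as np

def lowpass_fft(signal, fs, cutoff_hz):
    # Ideal (brick-wall) low-pass: zero all spectral components above the
    # cutoff. Real studies typically use steep recursive filters instead.
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 16000
t = np.arange(fs) / fs  # 1 s of signal
# Synthetic stand-in for speech: a 200 Hz "voicing" tone plus 2 kHz energy
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 2000 * t)
# The five cutoff frequencies tested in the study
filtered = {fc: lowpass_fft(x, fs, fc) for fc in (250, 500, 750, 1000, 1500)}
```

With the 250 Hz cutoff only the low-frequency component survives, which is exactly the "minimal acoustic hearing" condition the conclusions refer to.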
38
Nonlinguistic Outcome Measures in Adult Cochlear Implant Users Over the First Year of Implantation. Ear Hear 2018; 37:354-64. [PMID: 26656317 DOI: 10.1097/aud.0000000000000261] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Postlingually deaf cochlear implant users' speech perception improves over several months after implantation due to a learning process which involves integration of the new acoustic information presented by the device. Basic tests of hearing acuity might evaluate sensitivity to the new acoustic information and be less sensitive to learning effects. It was hypothesized that, unlike speech perception, basic spectral and temporal discrimination abilities will not change over the first year of implant use. If there were limited change over time and the test scores were correlated with clinical outcome, the tests might be useful for acute diagnostic assessments of hearing ability and also useful for testing speakers of any language, many of which do not have validated speech tests. DESIGN Ten newly implanted cochlear implant users were tested for speech understanding in quiet and in noise at 1 and 12 months postactivation. Spectral-ripple discrimination, temporal-modulation detection, and Schroeder-phase discrimination abilities were evaluated at 1, 3, 6, 9, and 12 months postactivation. RESULTS Speech understanding in quiet improved between 1 and 12 months postactivation (mean 8% improvement). Speech in noise performance showed no statistically significant improvement. Mean spectral-ripple discrimination thresholds and temporal-modulation detection thresholds for modulation frequencies of 100 Hz and above also showed no significant improvement. Spectral-ripple discrimination thresholds were significantly correlated with speech understanding. Low FM detection and Schroeder-phase discrimination abilities improved over the period. Individual learning trends varied, but the majority of listeners followed the same stable pattern as group data. 
CONCLUSIONS Spectral-ripple discrimination ability and temporal-modulation detection at 100-Hz modulation and above might serve as a useful diagnostic tool for early acute assessment of cochlear implant outcome for listeners speaking any native language.
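Spectral-ripple stimuli of the kind used in such discrimination tasks can be generated by imposing a sinusoidal log-magnitude envelope on noise along log-frequency. A sketch under assumed parameter conventions (ripple depth in dB, ripples per octave, arbitrary band edges); the study's exact stimulus construction is not given in this abstract:

```python
import numpy as np

def ripple_noise(fs, dur, ripples_per_octave, phase=0.0, f0=100.0, depth_db=20.0):
    # Noise whose log-magnitude spectrum varies sinusoidally along
    # log-frequency; the parameter conventions here are assumptions.
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    mag = np.zeros_like(freqs)
    band = (freqs >= f0) & (freqs <= 0.4 * fs)  # passband; upper edge arbitrary
    octaves = np.log2(freqs[band] / f0)
    ripple_db = 0.5 * depth_db * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    mag[band] = 10.0 ** (ripple_db / 20.0)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))  # random spectral phase
    return np.fft.irfft(mag * np.exp(1j * phases), n=n)
```

A discrimination trial would then contrast, e.g., `phase=0` and phase-inverted versions of the same ripple density.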
39
40
Choi JE, Hong SH, Won JH, Park HS, Cho YS, Chung WH, Cho YS, Moon IJ. Evaluation of Cochlear Implant Candidates using a Non-linguistic Spectrotemporal Modulation Detection Test. Sci Rep 2016; 6:35235. [PMID: 27731425 PMCID: PMC5059668 DOI: 10.1038/srep35235] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2016] [Accepted: 09/22/2016] [Indexed: 11/26/2022] Open
Abstract
Adults who score 50% correct or less on an open-set sentence recognition test under the best aided listening condition may be considered candidates for a cochlear implant (CI). However, ensuring 'the best aided listening condition' requires significant time and clinical resources. Because speech signals are composed of dynamic spectral and temporal modulations, psychoacoustic sensitivity to combinations of spectral and temporal modulation cues may be a strong predictor of aided speech recognition. In this study, we tested 27 adults with moderately severe to profound hearing loss to explore whether a non-linguistic, unaided spectrotemporal modulation (STM) detection test might be a viable surrogate measure for evaluating CI candidacy. Our results showed that STM detection thresholds were significantly correlated with aided sentence recognition scores for the 27 hearing-impaired listeners. Receiver operating characteristic (ROC) curve analysis demonstrated that CI candidacy evaluation by the unaided STM detection test and by the traditional best-aided sentence recognition test was fairly consistent. More specifically, our results demonstrated that an STM detection test using a low spectral and temporal modulation rate might provide an efficient process for CI candidacy evaluation.
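The ROC analysis mentioned above amounts to asking how well STM thresholds rank candidates against non-candidates. A minimal rank-based AUC, applied to hypothetical threshold and candidacy data (the study's actual values are not reproduced here):

```python
import numpy as np

def roc_auc(scores, labels):
    # Rank-based AUC: the probability that a randomly chosen positive case
    # outscores a randomly chosen negative one (ties count one-half).
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical data: higher (worse) STM detection thresholds for listeners
# whose best-aided sentence score was <= 50% correct (CI candidates).
stm_thresholds = [14.0, 12.5, 9.0, 6.5, 5.0]
is_candidate = [True, True, True, False, False]
auc = roc_auc(stm_thresholds, is_candidate)
```

An AUC near 1.0 would indicate that the unaided STM test alone separates candidates from non-candidates as consistently as the abstract suggests.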
Affiliation(s)
- Ji Eun Choi
- Department of Otorhinolaryngology - Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Sung Hwa Hong
- Department of Otorhinolaryngology - Head and Neck Surgery, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jong Ho Won
- Division of Ophthalmic and Ear, Nose and Throat Devices, Office of Device Evaluation, Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, Maryland 20993, USA
- Hee-Sung Park
- Department of Otorhinolaryngology - Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Young Sang Cho
- Department of Otorhinolaryngology - Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Won-Ho Chung
- Department of Otorhinolaryngology - Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Yang-Sun Cho
- Department of Otorhinolaryngology - Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Il Joon Moon
- Department of Otorhinolaryngology - Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
41
Abstract
HYPOTHESIS Image-guided cochlear implant (CI) programming can improve hearing outcomes for pediatric CI recipients. BACKGROUND CIs have been highly successful for children with severe-to-profound hearing loss, offering potential for mainstreamed education and auditory-oral communication. Despite this, a significant number of recipients still experience poor speech understanding, language delay, and, even among the best performers, restoration to normal auditory fidelity is rare. Although significant research efforts have been devoted to improving stimulation strategies, few developments have led to significant hearing improvement over the past two decades. Recently introduced techniques for image-guided CI programming (IGCIP) permit creating patient-customized CI programs by making it possible, for the first time, to estimate the position of implanted CI electrodes relative to the nerves they stimulate using CT images. This approach permits identifying electrodes with high levels of stimulation overlap and deactivating them from a patient's map. Previous studies have shown that IGCIP can significantly improve hearing outcomes for adults with CIs. METHODS The IGCIP technique was tested for 21 ears of 18 pediatric CI recipients. Participants had long-term experience with their CI (5 mo to 13 yr) and ranged in age from 5 to 17 years old. Speech understanding was assessed after approximately 4 weeks of experience with the IGCIP map. RESULTS Using a two-tailed Wilcoxon signed-rank test, statistically significant improvement (p < 0.05) was observed for word and sentence recognition in quiet and noise, as well as pediatric self-reported quality-of-life (QOL) measures. CONCLUSION Our results indicate that image guidance significantly improves hearing and QOL outcomes for pediatric CI recipients.
42
Results of Postoperative, CT-based, Electrode Deactivation on Hearing in Prelingually Deafened Adult Cochlear Implant Recipients. Otol Neurotol 2016; 37:137-45. [PMID: 26719955 DOI: 10.1097/mao.0000000000000926] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE To test the use of a novel, image-guided cochlear implant (CI) programming (IGCIP) technique on prelingually deafened, adult CI recipients. STUDY DESIGN Prospective unblinded study. SETTING Tertiary referral center. PATIENTS Twenty-six prelingually deafened adult CI recipients with 29 CIs (3 bilateral). INTERVENTION(S) Temporal-bone CT scans were used as input to a series of semiautomated computer algorithms that estimate the location of electrodes in reference to the modiolus. This information was used to selectively deactivate suboptimally located electrodes, i.e., those lying farther from the modiolus than a neighboring electrode stimulating the same site. Patients used the new IGCIP program exclusively for 3-5 weeks. MAIN OUTCOME MEASURE(S) Minimum Speech Test Battery (MSTB), quality of life (QOL), and spectral modulation detection (SMD). RESULTS On average, one-third of electrodes were deactivated. At the group level, no significant differences were noted for MSTB measures or for QOL estimates. Average SMD significantly improved after IGCIP reprogramming, which is consistent with improved spatial selectivity. Using 95% confidence interval data for CNC, AzBio, and BKB-SIN at the individual level, 76 to 90% of subjects demonstrated equivocal or significant improvement. Ultimately 21 of 29 (72.41%) elected to keep the IGCIP map because of perceived benefit often substantiated by improvement on either MSTB, QOL, and/or SMD. CONCLUSIONS Knowledge of the geometric relationship between CI electrodes and the modiolus appears to be useful in adjusting CI maps in prelingually deafened adults. Long-term improvements may be observed resulting from improved spatial selectivity and spectral resolution.
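The deactivation rule described in the intervention can be sketched as a simple distance comparison. This is a simplification: the published pipeline also estimates electrode positions and stimulation overlap from CT, which is omitted here, and the indices, distances, and overlap pairs below are hypothetical.

```python
def electrodes_to_deactivate(modiolar_distance, overlap_pairs):
    # For each pair of electrodes judged to stimulate overlapping neural
    # sites, switch off the one lying farther from the modiolus.
    off = set()
    for a, b in overlap_pairs:
        off.add(a if modiolar_distance[a] > modiolar_distance[b] else b)
    return sorted(off)

# Hypothetical example: electrode 1 sits farthest from the modiolus and
# overlaps with both neighbors, so it is the one deactivated.
deactivated = electrodes_to_deactivate({0: 0.3, 1: 0.5, 2: 0.2}, [(0, 1), (1, 2)])
```
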
43
Participant-generated Cochlear Implant Programs: Speech Recognition, Sound Quality, and Satisfaction. Otol Neurotol 2016; 37:e209-16. [PMID: 27228018 DOI: 10.1097/mao.0000000000001076] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE To determine whether patient-derived programming of one's cochlear implant (CI) stimulation levels may affect performance outcomes. BACKGROUND Increases in patient population, device complexity, outcome expectations, and clinician responsibility have demonstrated the necessity for improved clinical efficiency. METHODS Eighteen postlingually deafened adult CI recipients (mean = 53 years; range, 24-83 years) participated in a repeated-measures, within-participant study designed to compare their baseline listening program to an experimental program they created. RESULTS No significant group differences in aided sound-field thresholds, monosyllabic word recognition, speech understanding in quiet, speech understanding in noise, or spectral modulation detection (SMD) were observed (p > 0.05). Four ears (17%) improved with the experimental program for speech presented at 45 dB SPL and two ears (9%) performed worse. Six ears (27.3%) improved significantly with the self-fit program at +10 dB signal-to-noise ratio (SNR) and four ears (26.6%) improved in speech understanding at +5 dB SNR. No individual scored significantly worse when speech was presented in quiet at 60 dB SPL or in any of the noise conditions tested. All but one participant opted to keep at least one of the self-fitting programs at the completion of this study. Participants viewed the process of creating their program more favorably (t = 2.11, p = 0.012) and thought creating the program was easier than the traditional fitting methodology (t = 2.12, p = 0.003). Average time to create the self-fit program was 10 minutes, 10 seconds (mean = 9:22; range, 4:46-24:40). CONCLUSIONS Allowing experienced adult CI recipients to set their own stimulation levels without clinical guidance is not detrimental to success.
44
Davies-Venn E, Nelson P, Souza P. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 138:492-503. [PMID: 26233047 PMCID: PMC4514721 DOI: 10.1121/1.4922700] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
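The regression step, predicting speech scores from the SMD measure, can be illustrated with ordinary least squares on synthetic data. The slope, intercept, and noise level below are invented for illustration, not taken from the study:

```python
import numpy as np

# Synthetic illustration only: coefficients and noise level are invented.
rng = np.random.default_rng(1)
smd_2cpo = rng.uniform(5.0, 25.0, 30)                       # SMD thresholds at 2.0 cpo (dB)
speech = 95.0 - 1.8 * smd_2cpo + rng.normal(0.0, 3.0, 30)   # speech scores (%)

# Ordinary least squares: speech ~ intercept + slope * SMD
X = np.column_stack([np.ones_like(smd_2cpo), smd_2cpo])
beta, *_ = np.linalg.lstsq(X, speech, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((speech - pred) ** 2) / np.sum((speech - speech.mean()) ** 2)
```

A stepwise or commonality analysis would extend this by adding and comparing further predictors (SRD, auditory filter bandwidths) against the variance explained.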
Affiliation(s)
- Evelyn Davies-Venn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, 164 Pillsbury Drive Southeast, Minneapolis, Minnesota 55455, USA
- Peggy Nelson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, 164 Pillsbury Drive Southeast, Minneapolis, Minnesota 55455, USA
- Pamela Souza
- Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, 2240 Campus Drive, Evanston, Illinois 60208, USA
45
Nittrouer S, Kuess J, Lowenstein JH. Speech perception of sine-wave signals by children with cochlear implants. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 137:2811-2822. [PMID: 25994709 PMCID: PMC4441708 DOI: 10.1121/1.4919316] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2014] [Revised: 03/30/2015] [Accepted: 04/08/2015] [Indexed: 05/31/2023]
Abstract
Children need to discover linguistically meaningful structures in the acoustic speech signal. Being attentive to recurring, time-varying formant patterns helps in that process. However, that kind of acoustic structure may not be available to children with cochlear implants (CIs), thus hindering development. The major goal of this study was to examine whether children with CIs are as sensitive to time-varying formant structure as children with normal hearing (NH) by asking them to recognize sine-wave speech. The same materials were presented as speech in noise, as well, to evaluate whether any group differences might simply reflect general perceptual deficits on the part of children with CIs. Vocabulary knowledge, phonemic awareness, and "top-down" language effects were all also assessed. Finally, treatment factors were examined as possible predictors of outcomes. Results showed that children with CIs were as accurate as children with NH at recognizing sine-wave speech, but poorer at recognizing speech in noise. Phonemic awareness was related to that recognition. Top-down effects were similar across groups. Having had a period of bimodal stimulation near the time of receiving a first CI facilitated these effects. Results suggest that children with CIs have access to the important time-varying structure of vocal-tract formants.
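Sine-wave speech replaces each formant track with a single frequency-modulated sinusoid. A minimal synthesis sketch via phase integration, shown here with a constant-frequency track rather than real formant estimates:

```python
import numpy as np

def sine_wave_speech(formant_tracks, fs):
    # Replace each formant track (instantaneous frequency in Hz, one value
    # per sample) by a single sinusoid via phase integration, then sum.
    out = np.zeros(len(formant_tracks[0]))
    for track in formant_tracks:
        phase = 2.0 * np.pi * np.cumsum(track) / fs
        out += np.sin(phase)
    return out / len(formant_tracks)

fs = 8000
# A constant 440 Hz "formant" stands in for a real formant estimate here;
# actual stimuli would use three or four time-varying tracks.
y = sine_wave_speech([np.full(fs, 440.0)], fs)
```

With real, time-varying F1-F3 tracks this yields the characteristic whistling replica that preserves only the time-varying formant structure the study probes.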
Affiliation(s)
- Susan Nittrouer
- Department of Otolaryngology, The Ohio State University, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
- Jamie Kuess
- Department of Otolaryngology, The Ohio State University, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
- Joanna H Lowenstein
- Department of Otolaryngology, The Ohio State University, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
46
Noble JH, Gifford RH, Hedley-Williams AJ, Dawant BM, Labadie RF. Clinical evaluation of an image-guided cochlear implant programming strategy. Audiol Neurootol 2014; 19:400-11. [PMID: 25402603 DOI: 10.1159/000365273] [Citation(s) in RCA: 94] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2013] [Accepted: 06/16/2014] [Indexed: 11/19/2022] Open
Abstract
The cochlear implant (CI) has been labeled the most successful neural prosthesis. Despite this success, a significant number of CI recipients experience poor speech understanding, and, even among the best performers, restoration to normal auditory fidelity is rare. While significant research efforts have been devoted to improving stimulation strategies, few developments have led to significant hearing improvement over the past two decades. We have recently introduced image processing techniques that open a new direction for advancement in this field by making it possible, for the first time, to determine the position of implanted CI electrodes relative to the nerves they stimulate using computed tomography images. In this article, we present results of an image-guided, patient-customized approach to stimulation that utilizes the electrode position information our image processing techniques provide. This approach allows us to identify electrodes that cause overlapping stimulation patterns and to deactivate them from a patient's map. This individualized mapping strategy yields significant improvement in speech understanding in both quiet and noise as well as improved spectral resolution in the 68 adult CI recipients studied to date. Our results indicate that image guidance can improve hearing outcomes for many existing CI recipients without requiring additional surgery or the use of 'experimental' stimulation strategies, hardware or software.
Affiliation(s)
- Jack H Noble
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tenn., USA
47
Dorman MF, Cook S, Spahr A, Zhang T, Loiselle L, Schramm D, Whittingham J, Gifford R. Factors constraining the benefit to speech understanding of combining information from low-frequency hearing and a cochlear implant. Hear Res 2014; 322:107-11. [PMID: 25285624 DOI: 10.1016/j.heares.2014.09.010] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/16/2014] [Revised: 08/28/2014] [Accepted: 09/22/2014] [Indexed: 11/20/2022]
Abstract
Many studies have documented the benefits to speech understanding when cochlear implant (CI) patients can access low-frequency acoustic information from the ear opposite the implant. In this study we assessed the role of three factors in determining the magnitude of bimodal benefit: (i) the level of CI-only performance, (ii) the magnitude of the hearing loss in the ear with low-frequency acoustic hearing, and (iii) the type of test material. The patients had low-frequency PTAs (average of 125, 250 and 500 Hz) varying over a large range (<30 dB HL to >70 dB HL) in the ear contralateral to the implant. The patients were tested with (i) CNC words presented in quiet (n = 105), (ii) AzBio sentences presented in quiet (n = 102), (iii) AzBio sentences in noise at +10 dB signal-to-noise ratio (SNR) (n = 69), and (iv) AzBio sentences at +5 dB SNR (n = 64). We find maximum bimodal benefit when (i) CI scores are less than 60 percent correct, (ii) hearing loss is less than 60 dB HL in the low frequencies, and (iii) the test material is sentences presented against a noise background. When these criteria are met, some bimodal patients can gain 40-60 percentage points in performance relative to performance with a CI. This article is part of a Special Issue entitled "Lasker Award".
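The three benefit criteria reported in the results can be read as a simple screening predicate. This is an illustrative rule of thumb drawn from this abstract, not a clinical decision tool, and the condition names are hypothetical:

```python
def expects_max_bimodal_benefit(ci_only_pct, low_freq_pta_db, material):
    # Encodes the three reported criteria: CI-only score < 60% correct,
    # low-frequency PTA < 60 dB HL, and sentence material in noise.
    return (ci_only_pct < 60.0
            and low_freq_pta_db < 60.0
            and material == "sentences_in_noise")
```
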
Affiliation(s)
- Michael F Dorman
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- Sarah Cook
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- Anthony Spahr
- Advanced Bionics, 28515 Westinghouse Pl, Valencia, CA 91355, USA
- Ting Zhang
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- Louise Loiselle
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- David Schramm
- University of Ottawa Faculty of Medicine, 451 Smyth Rd., Ottawa, Ontario, Canada K1H 8M5
- JoAnne Whittingham
- University of Ottawa Faculty of Medicine, 451 Smyth Rd., Ottawa, Ontario, Canada K1H 8M5
- Rene Gifford
- Vanderbilt University, Department of Hearing and Speech Sciences, Nashville, TN 37232, USA