1. Sendesen İ, Sendesen E, Yücel E. Evaluation of musical emotion perception and language development in children with cochlear implants. Int J Pediatr Otorhinolaryngol 2023; 175:111753. PMID: 37839291. DOI: 10.1016/j.ijporl.2023.111753.
Abstract
OBJECTIVES While the primary purpose of cochlear implant (CI) fitting is to improve individuals' receptive and expressive skills, musical emotion perception (MEP) is generally ignored. This study assesses the MEP and language skills (LS) of children using CIs. METHODS Twenty-six CI users and 26 matched healthy controls between the ages of 6 and 9 were included in the study. The Test of Language Development (TOLD) was administered to evaluate the participants' LS, and the Montreal Emotion Identification Test (MEI) was administered to evaluate their MEP. RESULTS MEI test scores and all TOLD subtest scores were statistically significantly lower in the CI group. There was also a statistically significant, moderate correlation between the TOLD listening subtest and the MEI test. CONCLUSIONS MEP and language skills are poor in children with CIs. Although language skills are the primary target of CI rehabilitation, improving MEP should also be included in rehabilitation programs. The relationship between music and the TOLD listening subtest may provide evidence that listening skills can be improved by attending to MEP, which is frequently ignored in rehabilitation programs.
Affiliation(s)
- İrem Sendesen
- Department of Audiology, Gazi University, Ankara, Turkey; Ankara University, Faculty of Medicine, Otolaryngology Department, Audiology, Speech, Balance Disorders Diagnosis and Rehabilitation Unit, Ankara, Turkey.
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey.
- Esra Yücel
- Department of Audiology, Hacettepe University, Ankara, Turkey.
2. Buz E, Dwyer NC, Lai W, Watson DG, Gifford RH. Integration of fundamental frequency and voice-onset-time to voicing categorization: Listeners with normal hearing and bimodal hearing configurations. J Acoust Soc Am 2023; 153:1580. PMID: 37002096. PMCID: PMC9995168. DOI: 10.1121/10.0017429.
Abstract
This study investigates the integration of word-initial fundamental frequency (F0) and voice-onset-time (VOT) in stop voicing categorization for adult listeners with normal hearing (NH) and unilateral cochlear implant (CI) recipients using a bimodal hearing configuration [CI + contralateral hearing aid (HA)]. Categorization was assessed for ten adults with NH and ten adult bimodal listeners, using synthesized consonant stimuli interpolating between /ba/ and /pa/ exemplars in five-step VOT and F0 conditions. All participants demonstrated the expected categorization pattern, reporting /ba/ for shorter VOTs and /pa/ for longer VOTs, with NH listeners generally making more use of VOT as a voicing cue than CI listeners. When VOT was ambiguous between voiced and voiceless stops, NH listeners made more use of F0 as a cue to voicing than CI listeners, and CI listeners made greater use of initial F0 during voicing identification in their bimodal (CI + HA) condition than in the CI-alone condition. The results demonstrate the adjunctive benefit of acoustic hearing from the non-implanted ear for listening conditions involving spectrotemporally complex stimuli. This finding may lead to the development of a clinically feasible perceptual weighting task that could inform clinicians about bimodal efficacy and the risk-benefit profile associated with a bilateral CI recommendation.
Affiliation(s)
- Esteban Buz
- Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee 37203, USA
- Nichole C Dwyer
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- Wei Lai
- Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee 37203, USA
- Duane G Watson
- Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee 37203, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37203, USA
3. Harding EE, Gaudrain E, Hrycyk IJ, Harris RL, Tillmann B, Maat B, Free RH, Başkent D. Musical Emotion Categorization with Vocoders of Varying Temporal and Spectral Content. Trends Hear 2023; 27:23312165221141142. PMID: 36628512. PMCID: PMC9837297. DOI: 10.1177/23312165221141142.
Abstract
While previous research investigating music emotion perception of cochlear implant (CI) users observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), it remains unclear how other properties of the temporal content may contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music (often not well perceived by CI users) reportedly conveys emotional valence (positive, negative), it remains unclear how the quality of spectral content contributes to valence perception. Therefore, the current study used vocoders to vary the temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders were varied with two carriers (sinewave or noise; primarily modulating temporal information) and two filter orders (low or high; primarily modulating spectral information). Results indicated that emotion categorization was above chance in vocoded excerpts but poorer than in a non-vocoded control condition. Among vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect, while better spectral content (high filter order) improved it with a small effect. Arousal features were comparably transmitted in non-vocoded and vocoded conditions, indicating that lower temporal content successfully conveyed emotional arousal. Valence feature transmission declined steeply in vocoded conditions, revealing that valence perception was difficult for both lower and higher spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the CI signal may immediately benefit users' music emotion perception.
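The vocoder manipulation described above can be sketched as a standard channel vocoder: band-pass filter the signal, extract each channel's temporal envelope, and re-synthesize with a sinewave or noise carrier. This is a generic illustration, not the study's code; the channel count, frequency edges, and Butterworth design are assumptions, while the `carrier` and `order` arguments mirror the study's two manipulations.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(signal, fs, n_channels=8, order=4, carrier="sine"):
    """Channel-vocode `signal` (1-D array) sampled at `fs` Hz."""
    edges = np.geomspace(100, min(8000, fs / 2 - 1), n_channels + 1)
    out = np.zeros(len(signal))
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))              # temporal envelope of the band
        if carrier == "sine":
            fc = np.sqrt(lo * hi)                # channel centre frequency
            carr = np.sin(2 * np.pi * fc * np.arange(len(signal)) / fs)
        else:                                    # band-limited noise carrier
            carr = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += env * carr
    return out
```

A low filter order lets spectral energy leak between channels (poorer spectral content), while the sinewave carrier preserves envelope detail better than noise (better temporal content), matching the factors varied in the study.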
Affiliation(s)
- Eleanor E. Harding
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands; Prins Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université de Saint-Etienne, Lyon, France
- Imke J. Hrycyk
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Robert L. Harris
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Prins Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université de Saint-Etienne, Lyon, France
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands; Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Rolien H. Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands; Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
4. Frosolini A, Badin G, Sorrentino F, Brotto D, Pessot N, Fantin F, Ceschin F, Lovato A, Coppola N, Mancuso A, Vedovelli L, Marioni G, de Filippis C. Application of Patient Reported Outcome Measures in Cochlear Implant Patients: Implications for the Design of Specific Rehabilitation Programs. Sensors (Basel) 2022; 22:8770. PMID: 36433364. PMCID: PMC9698641. DOI: 10.3390/s22228770.
Abstract
INTRODUCTION Cochlear implants (CIs) have been developed to enable satisfying verbal communication, while music perception has remained in the background of both research and technological development, leaving CI users dissatisfied with the experience of listening to music. Indications for clinicians to test and train musical abilities are at a preliminary stage compared with well-established hearing and speech rehabilitation programs. The main aim of the present study was to test the utility of two different patient-reported outcome (PRO) measures in a group of CI users. A secondary objective was to identify items capable of guiding the indication and design of specific music rehabilitation programs for CI patients. MATERIALS AND METHODS A consecutive series of 73 CI patients referred to the Audiology Unit, University of Padova, was enrolled from November 2021 to May 2022 and evaluated with an audiological test battery and two PRO measures: Musica e Qualità della Vita (MUSQUAV) and the Italian version of the Nijmegen Cochlear Implant Questionnaire (NCIQ). RESULTS The reliability analysis showed good consistency between the different PRO measures (Cronbach's alpha = 0.873). After accounting for epidemiological and clinical variables, the PRO measures correlated with audiological outcomes in only one case (rho = -0.304; adj. p = 0.039), between NCIQ-T and the CI pure tone average. A willingness for musical rehabilitation was present in 63% of patients (rehab factor, mean value of 0.791 ± 0.675). CONCLUSIONS These findings support the application of MUSQUAV and NCIQ to improve the clinical and audiological evaluation of CI patients. Moreover, we proposed a derivative item, called the rehab factor, which could be used in clinical practice and future studies to clarify the indication for and priority of specific music rehabilitation programs.
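The internal-consistency figure reported above (Cronbach's alpha = 0.873) is a standard questionnaire statistic. For reference, a generic sketch of how it is computed from a respondents-by-items score matrix (not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Values approach 1 when items vary together across respondents; a value of 0.873 is conventionally read as good internal consistency.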
Affiliation(s)
- Andrea Frosolini
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Giulio Badin
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Flavia Sorrentino
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Department of Information Science, University of Milan, 20133 Milan, Italy
- Unit of Biostatistics, Epidemiology, and Public Health, Department of Cardiac, Thoracic, Vascular Sciences, and Public Health, University of Padova, 35100 Padova, Italy
- Otolaryngology Section, Department of Neuroscience DNS, University of Padova, 35100 Padova, Italy
- Davide Brotto
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Department of Information Science, University of Milan, 20133 Milan, Italy
- Unit of Biostatistics, Epidemiology, and Public Health, Department of Cardiac, Thoracic, Vascular Sciences, and Public Health, University of Padova, 35100 Padova, Italy
- Otolaryngology Section, Department of Neuroscience DNS, University of Padova, 35100 Padova, Italy
- Nicholas Pessot
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Francesco Fantin
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Federica Ceschin
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Andrea Lovato
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
- Nicola Coppola
- Department of Information Science, University of Milan, 20133 Milan, Italy
- Antonio Mancuso
- Department of Information Science, University of Milan, 20133 Milan, Italy
- Luca Vedovelli
- Unit of Biostatistics, Epidemiology, and Public Health, Department of Cardiac, Thoracic, Vascular Sciences, and Public Health, University of Padova, 35100 Padova, Italy
- Gino Marioni
- Otolaryngology Section, Department of Neuroscience DNS, University of Padova, 35100 Padova, Italy
- Cosimo de Filippis
- Audiology Unit, Department of Neuroscience DNS, University of Padova, 31100 Treviso, Italy
5. Smith S. Translational Applications of Machine Learning in Auditory Electrophysiology. Semin Hear 2022; 43:240-250. PMID: 36313047. PMCID: PMC9605807. DOI: 10.1055/s-0042-1756166.
Abstract
Machine learning (ML) is transforming nearly every aspect of modern life, including medicine and its subfields such as hearing science. This article presents a brief conceptual overview of selected ML approaches and describes how these techniques are being applied to outstanding problems in hearing science, with a particular focus on auditory evoked potentials (AEPs). Two vignettes are presented in which ML is used to analyze subcortical AEP data. The first demonstrates how ML can be used to determine whether auditory learning has influenced auditory neurophysiologic function. The second demonstrates how ML analysis of AEPs may help determine whether hearing devices are optimized for discriminating speech sounds.
Affiliation(s)
- Spencer Smith
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, Texas
6. Holder JT, Holcomb MA, Snapp H, Labadie RF, Vroegop J, Rocca C, Elgandy MS, Dunn C, Gifford RH. Guidelines for Best Practice in the Audiological Management of Adults Using Bimodal Hearing Configurations. Otol Neurotol Open 2022; 2:e011. PMID: 36274668. PMCID: PMC9581116. DOI: 10.1097/ono.0000000000000011.
Abstract
Clinics are treating a growing number of patients with greater amounts of residual hearing. These patients often benefit from a bimodal hearing configuration, in which acoustic input from a hearing aid on one ear is combined with electrical stimulation from a cochlear implant on the other ear. The current guidelines aim to review the literature and provide best-practice recommendations for the evaluation and treatment of individuals with bilateral sensorineural hearing loss who may benefit from bimodal hearing configurations. Specifically, the guidelines review: benefits of bimodal listening, preoperative and postoperative cochlear implant evaluation and programming, bimodal hearing aid fitting, contralateral routing of signal considerations, bimodal treatment for tinnitus, and aural rehabilitation recommendations.
Affiliation(s)
- Christine Rocca
- Guy’s and St. Thomas’ Hearing Implant Centre, London, United Kingdom
7. Cheng FY, Smith S. Objective Detection of the Speech Frequency Following Response (sFFR): A Comparison of Two Methods. Audiol Res 2022; 12:89-94. PMID: 35200259. PMCID: PMC8869319. DOI: 10.3390/audiolres12010010.
Abstract
Speech frequency following responses (sFFRs) are increasingly used in translational auditory research. Statistically based automated sFFR detection could aid response identification and provide a basis for stopping rules when recording responses in clinical and/or research applications. In this brief report, sFFRs were measured from 18 normal-hearing adult listeners in quiet and in speech-shaped noise. Two statistically based automated response detection methods, the F-test and Hotelling's T2 (HT2) test, were compared on detection accuracy and test time. Detection accuracy was similar across statistical tests and conditions, although HT2 test time was less variable. These findings suggest that automated sFFR detection is robust for responses recorded in quiet and in speech-shaped noise using either the F-test or the HT2 test. Future studies evaluating test performance with different stimuli and maskers are warranted to determine whether this interchangeability extends to those conditions.
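The Hotelling's T2 approach compared above asks whether, across recorded sweeps, the complex Fourier component at a stimulus-related frequency differs reliably from zero. A generic sketch of that test (not the authors' implementation; the epoch layout and target frequency are illustrative assumptions):

```python
import numpy as np

def hotelling_t2(epochs, fs, freq):
    """One-sample Hotelling's T2 on the (real, imag) spectral component
    at `freq` Hz across epochs, returned as the equivalent F statistic.
    epochs: (n_epochs, n_samples) array of single-sweep recordings."""
    n, m = epochs.shape
    coeffs = np.fft.rfft(epochs, axis=1)[:, int(round(freq * m / fs))]
    X = np.column_stack([coeffs.real, coeffs.imag])   # n x 2 observations
    mean = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                       # 2 x 2 covariance
    t2 = n * mean @ np.linalg.solve(S, mean)
    p = 2                                             # dimensions (re, im)
    return (n - p) / (p * (n - 1)) * t2               # ~ F(p, n - p) under H0
```

A response is "detected" when the returned statistic exceeds the critical value of an F(2, n-2) distribution; a stopping rule would halt averaging once detection is reached.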
8. Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022; 26:23312165221083091. PMID: 35435773. PMCID: PMC9019384. DOI: 10.1177/23312165221083091.
Abstract
The purpose of this project was to evaluate differences between groups and device configurations for emotional responses to non-speech sounds. Three groups of adults participated: 1) listeners with normal hearing with no history of device use, 2) hearing aid candidates with or without hearing aid experience, and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 in each group) rated valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or the hearing aid and cochlear implant simultaneously. Analysis revealed significant differences between groups in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than were ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' ratings of valence were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support the need for further investigation into hearing device optimization to improve emotional responses to non-speech sounds for adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous
- School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7
- Kristen L. D'Onofrio
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
9. D'Onofrio KL, Gifford RH. Bimodal Benefit for Music Perception: Effect of Acoustic Bandwidth. J Speech Lang Hear Res 2021; 64:1341-1353. PMID: 33784471. PMCID: PMC8608177. DOI: 10.1044/2020_jslhr-20-00390.
Abstract
Purpose The challenges associated with cochlear implant (CI)-mediated listening are well documented; however, they can be mitigated through the provision of aided acoustic hearing in the contralateral ear, a configuration termed bimodal hearing. This study extends previous literature by examining the effect of acoustic bandwidth in the non-CI ear on music perception. The primary aim was to determine the minimum and optimum acoustic bandwidth necessary to obtain bimodal benefit for music perception and speech perception. Method Participants included 12 adult bimodal listeners and 12 adult control listeners with normal hearing. Music perception was assessed via measures of timbre perception and subjective sound quality of real-world music samples. Speech perception was assessed via monosyllabic word recognition in quiet. Acoustic stimuli were presented to the non-CI ear in the following filter conditions: < 125, < 250, < 500, and < 750 Hz, and wideband (full bandwidth). Results Generally, performance for all stimuli improved with increasing acoustic bandwidth; however, the bandwidth that is minimally and optimally beneficial may depend on stimulus type. On average, music sound quality required wideband amplification, whereas speech recognition with a male talker in quiet required only a narrow acoustic bandwidth (< 250 Hz) for significant benefit. Still, average speech recognition continued to improve with increasing bandwidth. Conclusion Further research is warranted to examine the optimal acoustic bandwidth for additional stimulus types; however, these findings indicate that wideband amplification is most appropriate for speech and music perception in individuals with bimodal hearing.
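The acoustic-bandwidth manipulation above (low-pass filtering the non-CI ear's stimuli at successive cutoffs) can be sketched as follows. The 8th-order Butterworth design is an illustrative assumption, since the abstract does not specify the filter used.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandwidth_conditions(signal, fs, cutoffs=(125, 250, 500, 750)):
    """Return low-pass-filtered copies of `signal` at each cutoff (Hz),
    plus the wideband (unfiltered) original, keyed by condition name."""
    conditions = {"wideband": np.asarray(signal, dtype=float)}
    for fc in cutoffs:
        sos = butter(8, fc, btype="lowpass", fs=fs, output="sos")
        conditions[f"<{fc} Hz"] = sosfiltfilt(sos, conditions["wideband"])
    return conditions
```

Zero-phase filtering (`sosfiltfilt`) avoids introducing phase distortion between conditions, so only the audible bandwidth differs across stimuli.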
Affiliation(s)
- Kristen L D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
10. Bilateral Cochlear Implants or Bimodal Hearing for Children with Bilateral Sensorineural Hearing Loss. Curr Otorhinolaryngol Rep 2021; 8:385-394. PMID: 33815965. DOI: 10.1007/s40136-020-00314-6.
Abstract
Purpose of review This review describes speech perception and language outcomes for children using bimodal hearing (a cochlear implant (CI) plus a contralateral hearing aid) as compared to children with bilateral CIs, and contrasts these findings with the adult literature. There is a lack of clinical evidence driving recommendations for bimodal versus bilateral CI candidacy; as such, clinicians are often unsure about when to recommend a second CI for children with residual acoustic hearing. The goal of this review is therefore to identify scientific information that may influence clinical decision making for pediatric CI candidates with residual acoustic hearing. Recent findings Bilateral CIs are considered standard of care for children with bilateral severe-to-profound sensorineural hearing loss. For children with aidable acoustic hearing, even in just the low frequencies, an early period of bimodal stimulation has been associated with significantly better speech perception, vocabulary, and language development. Hearing aid (HA) audibility, however, is generally poorer than that offered by a CI, resulting in interaural asymmetry in speech perception, head shadow, and brainstem and cortical activity and development. There is thus a need to optimize "two-eared" hearing while maximizing a child's potential with respect to hearing, speech, and language, and while limiting asymmetrically driven auditory neuroplasticity. A recent large study of bimodal and bilateral CI users suggested that a period of bimodal stimulation was only beneficial for children with a better-ear pure tone average (PTA) ≤ 73 dB HL. This 73-dB-HL cutoff applied even to children who ultimately received bilateral CIs.
Summary Though we do not yet have definitive guidelines for determining bimodal versus bilateral CI candidacy, there is increasing evidence that (1) bilateral CIs yield superior outcomes for children with bilateral severe-to-profound hearing loss and (2) an early period of bimodal stimulation benefits speech perception and language development, but only for children with better-ear PTA ≤ 73 dB HL. For children with residual acoustic hearing, even in just the low-frequency range, rapid sequential bilateral cochlear implantation following a trial period with bimodal stimulation will yield the best outcomes for auditory, language, and academic development. There is also an increasing prevalence of cochlear implantation with acoustic hearing preservation, allowing combined electric and acoustic stimulation even after bilateral implantation.