1
Leterme G, Guigou C, Guenser G, Bigand E, Bozorg Grayeli A. Effect of Sound Coding Strategies on Music Perception with a Cochlear Implant. J Clin Med 2022; 11:4425. [PMID: 35956042] [PMCID: PMC9369156] [DOI: 10.3390/jcm11154425]
Abstract
Objective: The goal of this study was to evaluate music perception in cochlear implantees with two different sound processing strategies. Methods: Twenty-one patients with unilateral or bilateral cochlear implants (Oticon Medical®) were included. A music trial evaluated emotions (sad versus happy, based on tempo and/or minor versus major mode) with three tests of increasing difficulty, followed by a test evaluating the perception of musical dissonances (marked out of 10). A novel sound processing strategy reducing spectral distortions (CrystalisXDP, Oticon Medical) was compared to the standard strategy (main peak interleaved sampling). Each strategy was used for one week before the music trial. Results: The total music score was higher with CrystalisXDP than with the standard strategy. Nine patients (21%) categorized music above chance level (>5) on test 3, which was based only on mode, with either strategy. In this group, CrystalisXDP improved performance. For dissonance detection, 17 patients (40%) scored above chance level with either strategy. In this group, CrystalisXDP did not improve performance. Conclusions: CrystalisXDP, which enhances spectral cues, seemed to improve the categorization of happy versus sad music. Spectral cues could contribute to musical emotion in cochlear implantees and improve the quality of music perception.
Affiliation(s)
- Gaëlle Leterme
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Caroline Guigou
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Correspondence: Tel.: +33-615718531
- Geoffrey Guenser
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- Emmanuel Bigand
- LEAD Research Laboratory, CNRS UMR 5022, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
2
Heald SLM, Van Hedger SC, Nusbaum HC. Perceptual Plasticity for Auditory Object Recognition. Front Psychol 2017; 8:781. [PMID: 28588524] [PMCID: PMC5440584] [DOI: 10.3389/fpsyg.2017.00781]
Abstract
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered "noise" in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not currently incorporated into many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context.
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed.
3
Abstract
Combined use of a hearing aid (HA) and cochlear implant (CI) has been shown to improve CI users’ speech and music performance. However, different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefits. In this study, speech and music perception were measured in bimodal listeners for CI-only, HA-only, and CI + HA conditions, using the Sung Speech Corpus, a database of monosyllabic words produced at different fundamental frequencies. Sentence recognition was measured using sung speech in which pitch was held constant or varied across words, as well as for spoken speech. Melodic contour identification (MCI) was measured using sung speech in which the words were held constant or varied across notes. Results showed that sentence recognition was poorer with sung speech relative to spoken, with little difference between sung speech with a constant or variable pitch; mean performance was better with CI-only relative to HA-only, and best with CI + HA. MCI performance was better with constant words versus variable words; mean performance was better with HA-only than with CI-only and was best with CI + HA. Relative to CI-only, a strong bimodal benefit was observed for speech and music perception. Relative to the better ear, bimodal benefits remained strong for sentence recognition but were marginal for MCI. While variations in pitch and timbre may negatively affect CI users’ speech and music perception, bimodal listening may partially compensate for these deficits.
Affiliation(s)
- Joseph D Crew
- University of Southern California, Los Angeles, CA, USA
- Qian-Jie Fu
- University of California-Los Angeles, CA, USA
4
Jeong E, Ryu H. Melodic Contour Identification Reflects the Cognitive Threshold of Aging. Front Aging Neurosci 2016; 8:134. [PMID: 27378907] [PMCID: PMC4904015] [DOI: 10.3389/fnagi.2016.00134]
Abstract
Cognitive decline is a natural phenomenon of aging. Although there exists a consensus that sensitivity to acoustic features of music is associated with such decline, no solid evidence has yet shown that structural elements and contexts of music explain this loss of cognitive performance. This study examined the extent and the type of cognitive decline that is related to the contour identification task (CIT) using tones with different pitches (i.e., melodic contours). Both younger and older adult groups participated in the CIT given in three listening conditions (i.e., focused, selective, and alternating). Behavioral data (accuracy and response times) and hemodynamic reactions were measured using functional near-infrared spectroscopy (fNIRS). Our findings showed cognitive declines in the older adult group but with a subtle difference from the younger adult group. The accuracy of the melodic CITs given in the target-like distraction task (CIT2) was significantly lower than that in the environmental noise (CIT1) condition in the older adult group, indicating that CIT2 may be a benchmark test for age-specific cognitive decline. The fNIRS findings also agreed with this interpretation, revealing significant increases in oxygenated hemoglobin (oxyHb) concentration in the younger adult group (p < 0.05 for Δpre-on task; p < 0.01 for Δon-post task) but not in the older adult group (n.s. for Δpre-on task; n.s. for Δon-post task). We further concluded that the oxyHb difference was present in the brain regions near the right dorsolateral prefrontal cortex. Taken together, these findings suggest that CIT2 (i.e., the melodic contour task in the target-like distraction) is an optimized task that could indicate the degree and type of age-related cognitive decline.
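The oxyHb changes reported above are derived from fNIRS optical-density measurements via the modified Beer-Lambert law. A minimal sketch of that conversion follows; the extinction coefficients, source-detector distance, and differential pathlength factors are illustrative placeholders, not values from the study.

```python
import numpy as np

def delta_hb(delta_od, extinction, distance_cm, dpf):
    """Modified Beer-Lambert law: recover (dHbO, dHbR) from
    optical-density changes at two wavelengths.

    delta_od    : optical-density change at each wavelength, shape (2,)
    extinction  : extinction coefficients, rows = wavelengths,
                  columns = (HbO, HbR), shape (2, 2)
    distance_cm : source-detector separation
    dpf         : differential pathlength factor per wavelength
    """
    path = distance_cm * np.asarray(dpf, dtype=float)       # effective path lengths
    coeff = np.asarray(extinction, dtype=float) * path[:, None]
    # delta_od = coeff @ (dHbO, dHbR)  ->  solve the 2x2 linear system
    return np.linalg.solve(coeff, np.asarray(delta_od, dtype=float))
```

With measurements at two wavelengths on either side of the hemoglobin isosbestic point (~805 nm), the 2x2 system is well conditioned and the oxyHb/deoxyHb changes fall out directly.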
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
- Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
5
Grasmeder ML, Verschuur CA. Perception of the pitch and naturalness of popular music by cochlear implant users. Cochlear Implants Int 2015; 16 Suppl 3:S79-90. [DOI: 10.1179/1467010015z.000000000266]
6
van Besouw RM, Oliver BR, Hodkinson SM, Polfreman R, Grasmeder ML. Participatory design of a music aural rehabilitation programme. Cochlear Implants Int 2015; 16 Suppl 3:S39-50. [DOI: 10.1179/1467010015z.000000000264]
7
Melodic pitch perception and lexical tone perception in Mandarin-speaking cochlear implant users. Ear Hear 2015; 36:102-10. [PMID: 25099401] [DOI: 10.1097/aud.0000000000000086]
Abstract
OBJECTIVES To examine the relationship between lexical tone perception and melodic pitch perception in Mandarin-speaking cochlear implant (CI) users and to investigate the influence of previous acoustic hearing on CI users' speech and music perception. DESIGN Lexical tone perception and melodic contour identification (MCI) were measured in 21 prelingual and 11 postlingual young (aged 6-26 years) Mandarin-speaking CI users. Lexical tone recognition was measured for four tonal patterns: tone 1 (flat F0), tone 2 (rising F0), tone 3 (falling-rising F0), and tone 4 (falling F0). MCI was measured using nine five-note melodic patterns that contained changes in pitch contour, as well as different semitone spacing between notes. RESULTS Lexical tone recognition was generally good (overall mean = 81% correct), and there was no significant difference between subject groups. MCI performance was generally poor (mean = 23% correct). MCI performance was significantly better for postlingual (mean = 32% correct) than for prelingual CI participants (mean = 18% correct). After correcting for outliers, there was no significant correlation between lexical tone recognition and MCI performance for prelingual or postlingual CI participants. Age at deafness was significantly correlated with MCI performance only for postlingual participants. CI experience was significantly correlated with MCI performance for both prelingual and postlingual participants. Duration of deafness was significantly correlated with tone recognition only for prelingual participants. CONCLUSIONS Despite the prevalence of pitch cues in Mandarin, the present CI participants had great difficulty perceiving melodic pitch. The availability of amplitude and duration cues in lexical tones most likely compensated for the poor pitch perception observed with these CI listeners. Previous acoustic hearing experience seemed to benefit postlingual CI users' melodic pitch perception. Longer CI experience was associated with better MCI performance for both subject groups, suggesting that CI users' music perception may improve as they gain experience with their device.
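Melodic contour identification stimuli of the kind described above are typically built from a small fixed set of five-note pitch patterns whose note-to-note spacing is scaled in semitones. A minimal sketch follows; the contour names and base frequency are illustrative assumptions, not the study's exact stimuli.

```python
# Hypothetical set of nine five-note contours, as step indices
CONTOURS = {
    "rising":         [0, 1, 2, 3, 4],
    "falling":        [4, 3, 2, 1, 0],
    "flat":           [0, 0, 0, 0, 0],
    "rising-flat":    [0, 1, 2, 2, 2],
    "falling-flat":   [4, 3, 2, 2, 2],
    "rising-falling": [0, 1, 2, 1, 0],
    "falling-rising": [4, 3, 2, 3, 4],
    "flat-rising":    [0, 0, 0, 1, 2],
    "flat-falling":   [4, 4, 4, 3, 2],
}

def contour_frequencies(pattern, base_hz=220.0, semitone_spacing=1):
    """Map a contour's step indices to note frequencies in Hz.

    Each step index is scaled by `semitone_spacing` semitones relative
    to `base_hz`, using 12-tone equal temperament.
    """
    return [base_hz * 2 ** (step * semitone_spacing / 12) for step in pattern]
```

Widening `semitone_spacing` makes contours easier to discriminate, which is how such tasks probe the limits of a listener's pitch resolution.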
8
Crew JD, Galvin JJ III, Landsberger DM, Fu QJ. Contributions of electric and acoustic hearing to bimodal speech and music perception. PLoS One 2015; 10:e0120279. [PMID: 25790349] [PMCID: PMC4366155] [DOI: 10.1371/journal.pone.0120279]
Abstract
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.
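Adaptive SRT measurement of the kind described above is commonly implemented as a 1-up/1-down staircase on the signal-to-noise ratio, which converges near the 50%-correct point. A minimal sketch follows; the `trial` callback, step size, and reversal count are illustrative assumptions, not the study's exact procedure.

```python
def track_srt(trial, start_snr=10.0, step=2.0, reversals_needed=6):
    """1-up/1-down adaptive staircase converging on ~50% correct.

    `trial(snr)` runs one sentence-in-babble presentation at the given
    SNR (dB) and returns True if the listener repeated the sentence
    correctly (hypothetical callback supplied by the test harness).
    """
    snr = start_snr
    direction = None
    reversal_snrs = []
    while len(reversal_snrs) < reversals_needed:
        correct = trial(snr)
        # Make the task harder after a correct response, easier after an error
        new_dir = -1 if correct else +1
        if direction is not None and new_dir != direction:
            reversal_snrs.append(snr)          # track direction changes
        direction = new_dir
        snr += new_dir * step
    # SRT estimate: mean SNR across the recorded reversal points
    return sum(reversal_snrs) / len(reversal_snrs)
```

Real protocols usually shrink the step size after the first few reversals and discard those early reversals from the average; the fixed-step version above keeps the logic minimal.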
Affiliation(s)
- Joseph D. Crew
- Department of Biomedical Engineering, University of Southern California, Los Angeles, California, United States of America
- John J. Galvin III
- Department of Head and Neck Surgery, University of California-Los Angeles, Los Angeles, California, United States of America
- David M. Landsberger
- Department of Otolaryngology, New York University School of Medicine, New York, New York, United States of America
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California-Los Angeles, Los Angeles, California, United States of America
9
van de Velde DJ, Dritsakis G, Frijns JHM, van Heuven VJ, Schiller NO. The effect of spectral smearing on the identification of pure F0 intonation contours in vocoder simulations of cochlear implants. Cochlear Implants Int 2014; 16:77-87. [DOI: 10.1179/1754762814y.0000000086]
10
Limb CJ, Roy AT. Technological, biological, and acoustical constraints to music perception in cochlear implant users. Hear Res 2014; 308:13-26. [DOI: 10.1016/j.heares.2013.04.009]
11
Buyens W, van Dijk B, Moonen M, Wouters J. Music mixing preferences of cochlear implant recipients: A pilot study. Int J Audiol 2014; 53:294-301. [DOI: 10.3109/14992027.2013.873955]
12
Crew JD, Galvin JJ, Fu QJ. Channel interaction limits melodic pitch perception in simulated cochlear implants. J Acoust Soc Am 2012; 132:EL429-35. [PMID: 23145706] [PMCID: PMC3494451] [DOI: 10.1121/1.4758770]
Abstract
In cochlear implants (CIs), melodic pitch perception is limited by the spectral resolution, which in turn is limited by the number of spectral channels as well as interactions between adjacent channels. This study investigated the effect of channel interaction on melodic contour identification (MCI) in normal-hearing subjects listening to novel 16-channel sinewave vocoders that simulated channel interaction in CI signal processing. MCI performance worsened as the degree of channel interaction increased. Although greater numbers of spectral channels may be beneficial to melodic pitch perception, the present data suggest that it is also important to improve independence among spectral channels.
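The channel-interaction manipulation described above can be approximated by letting each channel's temporal envelope leak into its neighbors before the envelopes modulate the sinewave carriers. A rough sketch of such a 16-channel sinewave vocoder follows; the band edges, envelope smoothing, and mixing weights are illustrative choices, not the study's exact processing.

```python
import numpy as np

def sine_vocoder(signal, fs, n_channels=16, interaction=0.0):
    """Crude sinewave vocoder with adjustable channel interaction.

    Each channel's temporal envelope modulates a sinusoid at that
    channel's center frequency. `interaction` (0..1) leaks each
    envelope into its immediate neighbors to mimic channel overlap.
    """
    # Log-spaced band edges between 100 Hz and 8 kHz (illustrative choice)
    edges = np.logspace(np.log10(100), np.log10(8000), n_channels + 1)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    envelopes, centers = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        band_sig = np.fft.irfft(band, len(signal))
        # Envelope: rectification plus a crude 10 ms moving average
        env = np.convolve(np.abs(band_sig), np.ones(160) / 160, mode="same")
        envelopes.append(env)
        centers.append(np.sqrt(lo * hi))  # geometric center frequency

    envelopes = np.array(envelopes)
    # Channel interaction: add a fraction of each neighbor's envelope
    mixed = envelopes.copy()
    for i in range(n_channels):
        if i > 0:
            mixed[i] += interaction * envelopes[i - 1]
        if i < n_channels - 1:
            mixed[i] += interaction * envelopes[i + 1]

    t = np.arange(len(signal)) / fs
    carriers = np.sin(2 * np.pi * np.outer(centers, t))
    out = (mixed * carriers).sum(axis=0)
    return out / (np.max(np.abs(out)) + 1e-12)  # peak-normalize
```

Raising `interaction` from 0 toward 1 blurs pitch-bearing spectral detail across channels, which is the kind of degradation the study links to poorer melodic contour identification.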
Affiliation(s)
- Joseph D Crew
- Department of Biomedical Engineering, University of Southern California, Los Angeles, California 90089, USA.
13
Eskridge EN, Galvin JJ, Aronoff JM, Li T, Fu QJ. Speech perception with music maskers by cochlear implant users and normal-hearing listeners. J Speech Lang Hear Res 2012; 55:800-810. [PMID: 22223890] [PMCID: PMC5847337] [DOI: 10.1044/1092-4388(2011/11-0124)]
Abstract
PURPOSE The goal of this study was to investigate how the spectral and temporal properties of background music may interfere with cochlear implant (CI) users' and normal-hearing (NH) listeners' speech understanding. METHOD Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their clinical processors; NH subjects were tested while listening to unprocessed audio. Speech was presented with different music maskers (excerpts from musical pieces) and with steady, speech-shaped noise. To estimate the contributions of energetic and informational masking, SRTs were also measured in "music-shaped noise" and in music-shaped noise modulated by the music temporal envelopes. RESULTS NH performance was much better than CI performance. For both subject groups, SRTs were much lower with the music-related maskers than with speech-shaped noise. SRTs were strongly predicted by the amount of energetic masking in the music maskers. Unlike CI users, NH listeners obtained release from masking with envelope and fine structure cues in the modulated noise and music maskers. CONCLUSIONS Although speech understanding was greatly limited by energetic masking in both subject groups, CI performance worsened as more spectrotemporal complexity was added to the maskers, most likely due to poor spectral resolution.