1
Harding EE, Gaudrain E, Tillmann B, Maat B, Harris RL, Free RH, Başkent D. Vocal and musical emotion perception, voice cue discrimination, and quality of life in cochlear implant users with and without acoustic hearing. Q J Exp Psychol (Hove) 2025:17470218251316499. [PMID: 39834040] [DOI: 10.1177/17470218251316499]
Abstract
This study aims to provide a comprehensive picture of auditory emotion perception in cochlear implant (CI) users by (1) investigating emotion categorisation in both vocal (pseudo-speech) and musical domains and (2) examining how individual differences in residual acoustic hearing, sensitivity to voice cues (voice pitch, vocal tract length), and quality of life (QoL) might be associated with vocal emotion perception and, going a step further, with musical emotion perception. In 28 adult CI users, with or without self-reported acoustic hearing, sensitivity (d') scores for emotion categorisation varied widely across participants, in line with previous research. Within participants, however, the d' scores for vocal and musical emotion categorisation were significantly correlated, indicating both similar processing of auditory emotional cues across the pseudo-speech and music domains and robustness of the tests. Emotion d' scores were higher in implant users with residual acoustic hearing than in those without, but only for musical emotion perception. Voice pitch perception did not significantly correlate with emotion categorisation in either domain, whereas vocal tract length perception correlated significantly in both domains. For QoL, only the sub-domain of speech production ability, not the overall QoL score, correlated with vocal emotion categorisation, partially supporting previous findings. Taken together, the results indicate that auditory emotion perception is challenging for some CI users, possibly a consequence of how available the emotion-related cues are via electric hearing. Improving these cues, either via rehabilitation or training, may also help auditory emotion perception in CI users.
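The sensitivity index d' reported in this abstract is a standard signal-detection measure: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of its computation (the log-linear 0.5 correction shown here is a common convention to keep the z-transform finite, and is an illustrative assumption, not necessarily the authors' exact procedure):

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the inverse
    normal CDF finite when a raw rate would be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For a chance-level responder (equal hit and false-alarm rates) d' is 0; perfect categorisation with 20 trials per cell yields roughly d' ≈ 4 under this correction.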
Affiliation(s)
- Eleanor E Harding
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- The Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université Saint-Etienne, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université Saint-Etienne, Lyon, France
- Laboratory for Research on Learning and Development, LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- The Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, The Netherlands
- Robert L Harris
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Prins Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Rolien H Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- The Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- The Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
2
Taitelbaum-Swead R, Ben-David BM. The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users. Ear Hear 2024; 45:1585-1599. [PMID: 39004788] [DOI: 10.1097/aud.0000000000001550]
Abstract
OBJECTIVES Cochlear implants (CI) are remarkably effective, but have limitations regarding the transmission of the spectro-temporal fine structure of speech. This may impair processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found spoken-emotion-processing differences between CI users with postlingual deafness (postlingual CI) and normal-hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI users' intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to our previous study (Taitelbaum-Swead et al. 2022; postlingual CI). DESIGN Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception.
Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI). RESULTS When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions, in both CI user groups. This distortion appears to lead CI users to over-rely on the semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the name of Prof. Mordechai Himelfarb, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, Ontario, Canada
3
Valentin O, Lehmann A, Nguyen D, Paquette S. Integrating Emotion Perception in Rehabilitation Programs for Cochlear Implant Users: A Call for a More Comprehensive Approach. J Speech Lang Hear Res 2024; 67:1635-1642. [PMID: 38619441] [DOI: 10.1044/2024_jslhr-23-00660]
Abstract
PURPOSE Postoperative rehabilitation programs for cochlear implant (CI) recipients primarily emphasize enhancing speech perception. However, effective communication in everyday social interactions also requires consideration of diverse verbal social cues that facilitate language comprehension. Failure to discern emotional expressions may lead to maladjusted social behavior, underscoring the importance of integrating social cue perception into rehabilitation initiatives to enhance CI users' well-being. After conventional rehabilitation, CI users demonstrate varying levels of emotion perception abilities. This disparity notably impacts young CI users, whose emotion perception deficits can extend to social functioning, encompassing coping strategies and social competence, even when relying on nonauditory cues such as facial expressions. Given that emotion perception abilities generally decrease with age, acknowledging emotion perception impairments in aging CI users is also crucial, especially since a direct correlation between quality-of-life scores and vocal emotion recognition abilities has been observed in adult CI users. After briefly reviewing the scope of CI rehabilitation programs and summarizing the mounting evidence on CI users' emotion perception deficits and their impact, we present our recommendations for embedding emotion training in enriched and standardized evaluation/rehabilitation programs that can improve CI users' social integration and quality of life. CONCLUSIONS Evaluating all aspects of communication, including emotion perception, in CI rehabilitation programs is crucial because it ensures a comprehensive approach that enhances both speech comprehension and the emotional dimension of communication, potentially improving CI users' social interaction and overall well-being. The development of emotion perception training holds promise for CI users and for individuals grappling with other forms of hearing loss and sensory deficits. Ultimately, adopting such a comprehensive approach has the potential to significantly elevate the overall quality of life for a broad spectrum of patients.
Affiliation(s)
- Olivier Valentin
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Don Nguyen
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Psychology, Faculty of Arts and Science, Trent University, Peterborough, Ontario, Canada
4
Paquette S, Gouin S, Lehmann A. Improving emotion perception in cochlear implant users: insights from machine learning analysis of EEG signals. BMC Neurol 2024; 24:115. [PMID: 38589815] [PMCID: PMC11000345] [DOI: 10.1186/s12883-024-03616-0]
Abstract
BACKGROUND Although cochlear implants can restore auditory inputs to deafferented auditory cortices, the quality of the sound signal transmitted to the brain is severely degraded, limiting functional outcomes in terms of speech perception and emotion perception. The latter deficit negatively impacts cochlear implant users' social integration and quality of life; however, emotion perception is not currently part of rehabilitation. Developing rehabilitation programs incorporating emotional cognition requires a deeper understanding of cochlear implant users' residual emotion perception abilities. METHODS To identify the neural underpinnings of these residual abilities, we investigated whether machine learning techniques could be used to identify emotion-specific patterns of neural activity in cochlear implant users. Using existing electroencephalography data from 22 cochlear implant users, we employed a random forest classifier to establish if we could model and subsequently predict from participants' brain responses the auditory emotions (vocal and musical) presented to them. RESULTS Our findings suggest that consistent emotion-specific biomarkers exist in cochlear implant users, which could be used to develop effective rehabilitation programs incorporating emotion perception training. CONCLUSIONS This study highlights the potential of machine learning techniques to improve outcomes for cochlear implant users, particularly in terms of emotion perception.
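The classification approach described (a random forest predicting emotion labels from EEG-derived features) can be sketched with scikit-learn. Everything below is a placeholder stand-in: the feature shape, label count, and trial count are invented, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for per-trial EEG feature vectors (e.g., flattened
# channel x time-window amplitudes); shapes are hypothetical.
X = rng.normal(size=(120, 64))      # 120 trials, 64 features
y = rng.integers(0, 3, size=120)    # 3 emotion categories (hypothetical)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
mean_acc = scores.mean()  # ~chance here, since these features are pure noise
```

With real multi-participant data, one would typically cross-validate with participant-wise groups (e.g., scikit-learn's GroupKFold) so that trials from the same person never appear in both the training and test folds.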
Affiliation(s)
- Sebastien Paquette
- Psychology Department, Faculty of Arts and Science, Trent University, Peterborough, ON, Canada
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Samir Gouin
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada
- Alexandre Lehmann
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada
5
Paquette S, Deroche MLD, Goffi-Gomez MV, Hoshino ACH, Lehmann A. Predicting emotion perception abilities for cochlear implant users. Int J Audiol 2023; 62:946-954. [PMID: 36047767] [DOI: 10.1080/14992027.2022.2111611]
Abstract
OBJECTIVE In daily life, failure to perceive emotional expressions can result in maladjusted behaviour. For cochlear implant users, perceiving emotional cues in sounds remains challenging, and the factors explaining the variability in patients' sensitivity to emotions are currently poorly understood. Understanding how these factors relate to auditory proficiency is a major challenge of cochlear implant research and is critical in addressing patients' limitations. DESIGN To fill this gap, we evaluated different aspects of auditory perception in implant users (pitch discrimination, music processing, and speech intelligibility) and correlated them with performance in an emotion recognition task. STUDY SAMPLE Eighty-four adults (18-76 years old) participated in our investigation: 42 cochlear implant users and 42 controls. RESULTS Cochlear implant users performed worse than their controls on all tasks, and emotion perception abilities were correlated with age and with clinical outcome as measured by the speech intelligibility task. As previously observed, emotion perception abilities declined with age (here by about 2-3% per decade). Interestingly, even when emotional stimuli were musical, CI users' skills relied more on processes underlying speech intelligibility. CONCLUSIONS These results suggest that speech processing remains a clinical priority even when one is interested in affective skills.
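The correlation analysis described (emotion recognition scores against clinical speech intelligibility) reduces to a Pearson coefficient across participants. A minimal sketch; the per-participant scores below are invented for illustration only, not the study's data:

```python
import numpy as np

# Hypothetical per-participant scores (percent correct); not study data.
speech_intelligibility = np.array([55, 62, 70, 48, 81, 90, 66, 73])
emotion_recognition = np.array([40, 45, 60, 35, 70, 78, 50, 64])

# Pearson correlation between the two task scores across participants
r = np.corrcoef(speech_intelligibility, emotion_recognition)[0, 1]
```

In practice one would also report a p-value (e.g., via scipy.stats.pearsonr) and, given the age effect reported above, consider partialling out age.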
Affiliation(s)
- S Paquette
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- M L D Deroche
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- Laboratory for Hearing and Cognition, Psychology Department, Concordia University, Montreal, Canada
- M V Goffi-Gomez
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A C H Hoshino
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A Lehmann
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
6
Deroche MLD, Wolfe J, Neumann S, Manning J, Towler W, Alemi R, Bien AG, Koirala N, Hanna L, Henry L, Gracco VL. Auditory evoked response to an oddball paradigm in children wearing cochlear implants. Clin Neurophysiol 2023; 149:133-145. [PMID: 36965466] [DOI: 10.1016/j.clinph.2023.02.179]
Abstract
OBJECTIVE Although children with cochlear implants (CI) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs who had age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
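The difference-waveform logic behind the MMN metric can be illustrated with synthetic averaged evoked responses: subtract the standard response from the deviant response and look for a negative peak in a post-stimulus window. The waveforms, sampling rate, and analysis window below are invented for illustration, not the study's data or parameters:

```python
import numpy as np

fs = 500  # Hz, assumed sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)  # epoch from -100 to 500 ms

# Synthetic averaged evoked responses at an Fz-like channel (volts):
standard = 2e-6 * np.exp(-((t - 0.10) / 0.03) ** 2)              # N1-P2-like bump
deviant = standard - 1.5e-6 * np.exp(-((t - 0.17) / 0.04) ** 2)  # extra negativity

# MMN = deviant minus standard; peak sought in a 100-250 ms window
diff = deviant - standard
win = (t >= 0.10) & (t <= 0.25)
mmn_amp = diff[win].min()               # most negative amplitude in window
mmn_lat = t[win][diff[win].argmin()]    # latency of that peak (seconds)
```

Real pipelines (e.g., MNE-Python) average many artifact-rejected epochs per condition before this subtraction, but the arithmetic of the difference waveform is the same.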
Affiliation(s)
- Mickael L D Deroche
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Jace Wolfe
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Sara Neumann
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- William Towler
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Razieh Alemi
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Alexander G Bien
- University of Oklahoma College of Medicine, Otolaryngology, 800 Stanton L Young Blvd., Oklahoma City, OK 73117, USA
- Nabin Koirala
- Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Lauren Henry
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
7
Rothermich K, Dixon S, Weiner M, Capps M, Dong L, Paquette S, Zhou N. Perception of speaker sincerity in complex social interactions by cochlear implant users. PLoS One 2022; 17:e0269652. [PMID: 35675356] [PMCID: PMC9176755] [DOI: 10.1371/journal.pone.0269652]
Abstract
Understanding insincere language (sarcasm and teasing) is a fundamental part of communication and crucial for maintaining social relationships. This can be a challenging task for cochlear implant (CI) users, who receive degraded suprasegmental information important for perceiving a speaker's attitude. We measured the perception of speaker sincerity (literal positive, literal negative, sarcasm, and teasing) in 16 adults with CIs using an established video inventory. Participants were presented with audio-only and audio-visual social interactions between two people, with and without supporting verbal context. They were instructed to describe the content of the conversation and answer whether the speakers meant what they said. Results showed that subjects could not always identify speaker sincerity, even when the content of the conversation was perfectly understood. This deficit was greater for perceiving insincere relative to sincere utterances. Performance improved when additional visual cues or verbal context cues were provided. Subjects who were better at perceiving the content of the interactions in the audio-only condition benefited more from having additional visual cues for judging the speaker's sincerity, suggesting that the two modalities compete for cognitive resources. Perception of content also did not correlate with perception of speaker sincerity, suggesting that what was said versus how it was said were perceived using unrelated segmental versus suprasegmental cues. Our results further showed that subjects who had access to lower-order resolved harmonic information, provided by hearing aids in the contralateral ear, identified speaker sincerity better than those who used implants alone. These results suggest that measuring speech recognition alone in CI users does not fully describe the outcome. Our findings stress the importance of measuring social communication functions in people with CIs.
Affiliation(s)
- Kathrin Rothermich
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Susannah Dixon
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Marti Weiner
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Madison Capps
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Lixue Dong
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Ning Zhou
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
8
Lin Y, Wu C, Limb CJ, Lu H, Feng IJ, Peng S, Deroche MLD, Chatterjee M. Voice emotion recognition by Mandarin-speaking pediatric cochlear implant users in Taiwan. Laryngoscope Investig Otolaryngol 2022; 7:250-258. [PMID: 35155805] [PMCID: PMC8823186] [DOI: 10.1002/lio2.732]
Abstract
OBJECTIVES To explore the effects of obligatory lexical tone learning on speech emotion recognition, and cross-cultural differences between the United States and Taiwan in speech emotion understanding, in children with cochlear implants. METHODS This cohort study enrolled 60 Mandarin-speaking, school-aged children with cochlear implants (cCI) who underwent implantation before 5 years of age and 53 normal-hearing children (cNH) in Taiwan. Emotion recognition and sensitivity to fundamental frequency (F0) changes were examined in school-aged cNH and cCI (6-17 years old) at a tertiary referral center. RESULTS The mean emotion recognition score of the cNH group was significantly better than that of the cCI group. Vocal emotions produced by female speakers were more easily recognized than those produced by male speakers. There was a significant effect of age at test on voice emotion recognition performance. The average score of cCI with full-spectrum speech was close to the average score of cNH with eight-channel narrowband vocoder speech. The average performance of voice emotion recognition across speakers for cCI could be predicted by their sensitivity to changes in F0. CONCLUSIONS Better pitch discrimination ability comes with better voice emotion recognition for Mandarin-speaking cCI. Besides F0 cues, cCI are likely to adapt their voice emotion recognition by relying more on secondary cues such as intensity and duration. Although cross-cultural differences exist in the acoustic features of vocal emotion, both Mandarin-speaking cCI and their English-speaking cCI peers showed a positive effect of age at test on emotion recognition, suggesting a learning effect and brain plasticity. Further device/processor development to improve the presentation of pitch information, and more rehabilitative effort, are therefore needed to improve the transmission and perception of voice emotion in Mandarin. LEVEL OF EVIDENCE 3.
Affiliation(s)
- Yung-Song Lin
- Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- Department of Otolaryngology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Che-Ming Wu
- Department of Otorhinolaryngology, New Taipei Municipal TuCheng Hospital (built and operated by Chang Gung Medical Foundation), New Taipei City, Taiwan
- Department of Otorhinolaryngology, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- School of Medicine, Chang Gung University, Taoyuan, Taiwan
- Charles J. Limb
- School of Medicine, University of California San Francisco, San Francisco, California, USA
- Hui-Ping Lu
- Center of Speech and Hearing, Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- I. Jung Feng
- Institute of Precision Medicine, National Sun Yat-sen University, Kaohsiung, Taiwan
- Shu-Chen Peng
- Center for Devices and Radiological Health, United States Food and Drug Administration, Silver Spring, Maryland, USA
9
Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022; 26:23312165221083091. [PMID: 35435773] [PMCID: PMC9019384] [DOI: 10.1177/23312165221083091]
Abstract
The purpose of this project was to evaluate differences between groups and device configurations for emotional responses to non-speech sounds. Three groups of adults participated: 1) listeners with normal hearing and no history of device use, 2) hearing aid candidates with or without hearing aid experience, and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 in each group) rated valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or the hearing aid and cochlear implant simultaneously. Analysis revealed significant differences between groups in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than were ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' ratings of valence were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support the need for further investigation into hearing device optimization to improve emotional responses to non-speech sounds for adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous
- School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7
- Kristen L. D'Onofrio
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
10
Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users. Ear Hear 2021; 41:1372-1382. [PMID: 32149924 DOI: 10.1097/aud.0000000000000862] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectories known in this population prompted us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN Vocal emotion recognition was measured in both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7-19 years, with no cognitive or visual impairments, who communicated orally with English as the primary language, participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent-correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires on CI and hearing history. It was predicted that the reduced prosodic variation in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history might serve as predictors of performance on vocal emotion recognition. RESULTS Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese", may be a viable way to convey emotional content. Significant talker effects were also observed, with higher scores for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed that results depended on specific emotions: for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy sentences and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for the scared sentences. CONCLUSIONS In general, participants showed higher vocal emotion recognition in the CDS condition, which also had more variability in pitch and intensity, and thus more exaggerated prosody, than the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, and particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
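The abstract reports per-emotion sensitivity as d' scores. As a minimal sketch of the standard signal-detection computation of d' from hit and false-alarm rates (the authors' exact correction for extreme rates is not stated; the clipping used here is an assumption):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float,
            floor: float = 0.01, ceil: float = 0.99) -> float:
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate).

    Rates are clipped away from 0 and 1 so the inverse-normal
    transform stays finite (a common, but not universal, correction).
    """
    z = NormalDist().inv_cdf  # standard-normal quantile function
    clip = lambda p: min(max(p, floor), ceil)
    return z(clip(hit_rate)) - z(clip(fa_rate))

# A listener who detects "happy" on 80% of happy trials but labels
# 20% of other trials "happy" has d' of about 1.68:
print(d_prime(0.8, 0.2))
```

Chance performance (hit rate equal to false-alarm rate) yields d' = 0, and larger values indicate better discrimination of the target emotion.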
11
Rapid Assessment of Non-Verbal Auditory Perception in Normal-Hearing Participants and Cochlear Implant Users. J Clin Med 2021; 10:jcm10102093. [PMID: 34068067 PMCID: PMC8152499 DOI: 10.3390/jcm10102093] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 04/26/2021] [Accepted: 05/06/2021] [Indexed: 01/17/2023] Open
Abstract
In the case of hearing loss, cochlear implants (CIs) allow for the restoration of hearing. Despite the advantages of CIs for speech perception, CI users still complain about poor perception of their auditory environment. To assess non-verbal auditory perception in CI users, we developed five listening tests. These tests measure pitch change detection, pitch direction identification, pitch short-term memory, auditory stream segregation, and emotional prosody recognition, along with perceived intensity ratings. To test the potential benefit of visual cues for pitch processing, the three pitch tests included visual indications for performing the task on half of the trials. We tested 10 normal-hearing (NH) participants, with material presented as original and as vocoded sounds, and 10 post-lingually deaf CI users. With the vocoded sounds, the NH participants had reduced scores for the detection of small pitch differences, and reduced emotion recognition and streaming abilities, compared to the original sounds. Similarly, the CI users showed deficits for small differences in the pitch change detection task, reduced emotion recognition, and decreased streaming capacity. Overall, this assessment allows for the rapid detection of specific patterns of non-verbal auditory perception deficits. The current findings also open new perspectives on how to enhance pitch perception capacities using visual cues.
12
Cartocci G, Giorgi A, Inguscio BMS, Scorpecci A, Giannantonio S, De Lucia A, Garofalo S, Grassia R, Leone CA, Longo P, Freni F, Malerba P, Babiloni F. Higher Right Hemisphere Gamma Band Lateralization and Suggestion of a Sensitive Period for Vocal Auditory Emotional Stimuli Recognition in Unilateral Cochlear Implant Children: An EEG Study. Front Neurosci 2021; 15:608156. [PMID: 33767607 PMCID: PMC7985439 DOI: 10.3389/fnins.2021.608156] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2020] [Accepted: 02/01/2021] [Indexed: 12/21/2022] Open
Abstract
In deaf children, much emphasis has been placed on language; however, the decoding and production of emotional cues are of pivotal importance for communication. Concerning the neurophysiological correlates of emotional processing, gamma band activity is a useful tool for emotion classification and is related to the conscious elaboration of emotions. Starting from these considerations, the following questions were investigated: (i) whether emotional auditory stimulus processing differs between normal-hearing (NH) children and children using a cochlear implant (CI), given the non-physiological development of the auditory system in the latter group; (ii) whether the age at CI surgery influences emotion recognition capabilities; and (iii) in light of the right hemisphere hypothesis for emotional processing, whether the CI side influences the processing of emotional cues in unilateral CI (UCI) children. To address these questions, 9 UCI (9.47 ± 2.33 years old) and 10 NH (10.95 ± 2.11 years old) children were asked to recognize nonverbal vocalizations belonging to three emotional states: positive (achievement, amusement, contentment, relief), negative (anger, disgust, fear, sadness), and neutral (neutral, surprise). Results showed better performance in NH than in UCI children in recognizing emotional states. The UCI group showed an increased gamma activity lateralization index (LI) (relatively higher right hemisphere activity) in comparison to the NH group in response to emotional auditory cues. Moreover, LI gamma values were negatively correlated with the percentage of correct responses in emotion recognition. These observations could be explained by a deficit in UCI children in engaging the left hemisphere for more demanding emotional tasks, or alternatively by higher conscious elaboration in UCI than in NH children. Additionally, for the UCI group, there was no difference in gamma activity between the CI side and the contralateral side, but gamma activity was higher in the right than in the left hemisphere. Therefore, the CI side did not appear to influence the physiological hemispheric lateralization of emotional processing. Finally, a negative correlation was found between age at CI surgery and the percentage of correct responses in emotion recognition, suggesting a sensitive period for CI surgery for the best development of emotion recognition skills.
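The abstract does not give the formula behind its gamma lateralization index. A common convention, assumed here purely for illustration, is the normalized difference (R - L) / (R + L) over hemispheric gamma power, positive when the right hemisphere is relatively more active:

```python
def lateralization_index(right_power: float, left_power: float) -> float:
    """Normalized hemispheric asymmetry: (R - L) / (R + L).

    Returns a value in [-1, 1]; positive values indicate relatively
    greater right-hemisphere activity, negative values left-hemisphere
    dominance, and 0 perfectly balanced activity.
    """
    total = right_power + left_power
    if total == 0:
        raise ValueError("at least one hemisphere must show nonzero power")
    return (right_power - left_power) / total

# Right-hemisphere gamma power three times the left gives LI = 0.5:
print(lateralization_index(3.0, 1.0))
```

Under this convention, the reported negative correlation between LI gamma values and correct responses would mean that the more right-lateralized a child's gamma response, the worse the emotion recognition.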
Affiliation(s)
- Giulia Cartocci
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Andrea Giorgi
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Bianca M S Inguscio
- BrainSigns Srl, Rome, Italy; Cochlear Implant Unit, Department of Sensory Organs, Sapienza University of Rome, Rome, Italy
- Alessandro Scorpecci
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Sara Giannantonio
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Antonietta De Lucia
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Sabina Garofalo
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Rosa Grassia
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Carlo Antonio Leone
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Patrizia Longo
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Fabio Babiloni
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy; Department of Computer Science and Technology, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou, China
13
Paquette S, Rigoulot S, Grunewald K, Lehmann A. Temporal decoding of vocal and musical emotions: Same code, different timecourse? Brain Res 2020; 1741:146887. [PMID: 32422128 DOI: 10.1016/j.brainres.2020.146887] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 04/22/2020] [Accepted: 05/12/2020] [Indexed: 11/24/2022]
Abstract
From a baby's cry to a piece of music, we perceive emotions from our auditory environment every day. Many theories put forward the concept of common neural substrates for the perception of vocal and musical emotions. It has been proposed that, for us to perceive emotions, music recruits emotional circuits that evolved for the processing of biologically relevant vocalizations (e.g., screams, laughs). Although some studies have found similarities between voice and instrumental music in terms of acoustic cues and neural correlates, little is known about their processing timecourse. To further understand how vocal and instrumental emotional sounds are perceived, we used EEG to compare the neural processing timecourse of both stimulus types, expressed at varying degrees of complexity (vocal/musical affect bursts and emotion-embedded speech/music). Vocal stimuli in general, as well as musical/vocal bursts, were associated with a more concise sensory trace at initial stages of analysis (smaller N1), although vocal bursts had shorter latencies than musical ones. As for the P2, vocal affect bursts and emotion-embedded musical stimuli were associated with earlier P2s. These results support the idea that emotional vocal stimuli are differentiated early from other sources and provide insight into the common neurobiological underpinnings of auditory emotions.
Affiliation(s)
- S Paquette
- Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- S Rigoulot
- Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; Department of Psychology, Université du Québec à Trois-Rivières, Trois-Rivières, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- K Grunewald
- Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- A Lehmann
- Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada