1
Paquette S, Gouin S, Lehmann A. Improving emotion perception in cochlear implant users: insights from machine learning analysis of EEG signals. BMC Neurol 2024;24:115. PMID: 38589815; PMCID: PMC11000345; DOI: 10.1186/s12883-024-03616-0.
Abstract
BACKGROUND Although cochlear implants can restore auditory inputs to deafferented auditory cortices, the quality of the sound signal transmitted to the brain is severely degraded, limiting functional outcomes in terms of speech perception and emotion perception. The latter deficit negatively impacts cochlear implant users' social integration and quality of life; however, emotion perception is not currently part of rehabilitation. Developing rehabilitation programs incorporating emotional cognition requires a deeper understanding of cochlear implant users' residual emotion perception abilities. METHODS To identify the neural underpinnings of these residual abilities, we investigated whether machine learning techniques could be used to identify emotion-specific patterns of neural activity in cochlear implant users. Using existing electroencephalography data from 22 cochlear implant users, we employed a random forest classifier to establish if we could model and subsequently predict from participants' brain responses the auditory emotions (vocal and musical) presented to them. RESULTS Our findings suggest that consistent emotion-specific biomarkers exist in cochlear implant users, which could be used to develop effective rehabilitation programs incorporating emotion perception training. CONCLUSIONS This study highlights the potential of machine learning techniques to improve outcomes for cochlear implant users, particularly in terms of emotion perception.
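The modeling step described above (a random forest trained on EEG responses to predict the presented emotion) can be sketched as follows. This is a minimal illustration with simulated data: the feature layout, number of classes, and cross-validation scheme are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Illustrative stand-in for per-trial EEG features (e.g., ERP amplitudes
# averaged within time windows at each electrode); shapes are assumptions.
n_trials, n_features = 300, 64
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 3, size=n_trials)  # 3 emotion classes, e.g. happy/sad/neutral

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean CV accuracy: {scores.mean():.2f}")  # near chance (1/3) on random features
```

With real epoched EEG, `X` would be built from preprocessed trials; above-chance cross-validated accuracy is what supports the claim of consistent emotion-specific patterns.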
Affiliation(s)
- Sebastien Paquette
- Psychology Department, Faculty of Arts and Science, Trent University, Peterborough, ON, Canada.
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada.
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Samir Gouin
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada.
- Alexandre Lehmann
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada.
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada.
2
von Eiff CI, Kauk J, Schweinberger SR. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities. Behav Res Methods 2023. PMID: 37821750; DOI: 10.3758/s13428-023-02249-4.
Abstract
We describe JAVMEPS, an audiovisual (AV) database for emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistic induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords), (C2) and with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing individuals showed good classification performance (McorrAV = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above chance levels of .14 for auditory-only stimuli, with best rates for surprise (.31) and anger (.30). We anticipate JAVMEPS to become a useful open resource for researchers into auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also contribute to filling a gap in research regarding dynamic audiovisual integration of emotion perception via behavioral or neurophysiological recordings.
Affiliation(s)
- Celina I von Eiff
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743, Jena, Germany.
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743, Jena, Germany.
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany.
- Jena University Hospital, 07747, Jena, Germany.
- Julian Kauk
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743, Jena, Germany.
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743, Jena, Germany.
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743, Jena, Germany.
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany.
- Jena University Hospital, 07747, Jena, Germany.
3
Deroche MLD, Wolfe J, Neumann S, Manning J, Towler W, Alemi R, Bien AG, Koirala N, Hanna L, Henry L, Gracco VL. Auditory evoked response to an oddball paradigm in children wearing cochlear implants. Clin Neurophysiol 2023;149:133-145. PMID: 36965466; DOI: 10.1016/j.clinph.2023.02.179.
Abstract
OBJECTIVE Although children with cochlear implants (CI) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz and the late positive component (P3) around Pz of the difference waveform. RESULTS While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. But many children with CIs with age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
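The metrics above reduce to epoch averaging and difference waveforms: the MMN is read from the deviant-minus-standard trace. A minimal sketch with simulated single-channel data; the sampling rate, epoch length, and MMN search window are assumptions for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n_times = 500, 400  # assumed: 500 Hz sampling, 800 ms epochs

# Illustrative single-channel epochs (trials x samples) for an oddball paradigm.
standards = rng.normal(0, 1, size=(400, n_times))
deviants = rng.normal(0, 1, size=(80, n_times))

# Grand-average ERPs and the difference waveform from which MMN/P3 are read.
erp_std = standards.mean(axis=0)
erp_dev = deviants.mean(axis=0)
difference = erp_dev - erp_std

# MMN: most negative deflection in an assumed 100-250 ms window of the difference wave.
win = slice(int(0.100 * fs), int(0.250 * fs))
mmn_idx = win.start + difference[win].argmin()
print(f"MMN latency: {mmn_idx / fs * 1000:.0f} ms")
```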
Affiliation(s)
- Mickael L D Deroche
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada.
- Jace Wolfe
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA.
- Sara Neumann
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA.
- Jacy Manning
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA.
- William Towler
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA.
- Razieh Alemi
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada.
- Alexander G Bien
- University of Oklahoma College of Medicine, Otolaryngology, 800 Stanton L Young Blvd., Oklahoma City, OK 73117, USA.
- Nabin Koirala
- Haskins Laboratories, 300 George St., New Haven, CT 06511, USA.
- Lindsay Hanna
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA.
- Lauren Henry
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA.
4
Karimi-Boroujeni M, Dajani HR, Giguère C. Perception of Prosody in Hearing-Impaired Individuals and Users of Hearing Assistive Devices: An Overview of Recent Advances. J Speech Lang Hear Res 2023;66:775-789. PMID: 36652704; DOI: 10.1044/2022_jslhr-22-00125.
Abstract
PURPOSE Prosody perception is an essential component of speech communication and social interaction through which both linguistic and emotional information are conveyed. Considering the importance of the auditory system in processing prosody-related acoustic features, the aim of this review article is to review the effects of hearing impairment on prosody perception in children and adults. It also assesses the performance of hearing assistive devices in restoring prosodic perception. METHOD Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts toward determining the effects of hearing loss and interacting factors such as age and cognitive resources on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss. RESULTS The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging information points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody. CONCLUSIONS The existing literature is incomplete in several respects, including the lack of a consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions that future research could follow to provide a better understanding of prosody processing in those with hearing impairment, which may help health care professionals and designers of assistive technology to develop innovative diagnostic and rehabilitation tools. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21809772.
Affiliation(s)
- Hilmi R Dajani
- School of Electrical Engineering and Computer Science, University of Ottawa, Ontario, Canada.
- Christian Giguère
- School of Rehabilitation Sciences, University of Ottawa, Ontario, Canada.
5
Lin Y, Fan X, Chen Y, Zhang H, Chen F, Zhang H, Ding H, Zhang Y. Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words. Brain Sci 2022;12:1706. PMID: 36552167; PMCID: PMC9776349; DOI: 10.3390/brainsci12121706.
Abstract
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosodic and semantics was modulated by emotion and task, though such modulatory effects varied across different processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of delta, theta and alpha bands predicted the ERP components with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing with prosodic salience tied to stage-dependent emotion- and task-specific effects, which can reveal insights into understanding language and emotion processing from cross-linguistic/cultural and clinical perspectives.
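Inter-trial phase coherence, one of the synchronization measures reported above, is the magnitude of the trial-averaged unit phase vector (0 = random phase across trials, 1 = perfect phase locking). A minimal sketch with simulated single-channel analytic signals; the study's actual time-frequency pipeline is not reproduced here.

```python
import numpy as np

def itpc(analytic: np.ndarray) -> np.ndarray:
    """ITPC from complex (analytic) signals of shape (n_trials, n_times)."""
    unit_phases = np.exp(1j * np.angle(analytic))
    return np.abs(unit_phases.mean(axis=0))

rng = np.random.default_rng(1)
n_trials, n_times = 50, 200
t = np.linspace(0, 1, n_times)

# Phase-locked trials (identical 5 Hz phase) vs. trials with random phase.
locked = np.exp(1j * 2 * np.pi * 5 * t)[None, :].repeat(n_trials, axis=0)
random_ph = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_trials, n_times)))

print(itpc(locked).mean())     # exactly 1.0: perfect alignment
print(itpc(random_ph).mean())  # small, on the order of 1/sqrt(n_trials)
```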
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Xinran Fan
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Yueqi Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Hao Zhang
- School of Foreign Languages and Literature, Shandong University, Jinan 250100, China.
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha 410012, China.
- Hui Zhang
- School of International Education, Shandong University, Jinan 250100, China.
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
- Yang Zhang
- Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA.
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
6
Steinmetzger K, Meinhardt B, Praetorius M, Andermann M, Rupp A. A direct comparison of voice pitch processing in acoustic and electric hearing. Neuroimage Clin 2022;36:103188. PMID: 36113196; PMCID: PMC9483634; DOI: 10.1016/j.nicl.2022.103188.
Abstract
In single-sided deafness patients fitted with a cochlear implant (CI) in the affected ear and preserved normal hearing in the other ear, acoustic and electric hearing can be directly compared without the need for an external control group. Although poor pitch perception is a crucial limitation when listening through CIs, it remains unclear how exactly the cortical processing of pitch information differs between acoustic and electric hearing. Hence, we separately presented both ears of 20 of these patients with vowel sequences in which the pitch contours were either repetitive or variable, while simultaneously recording functional near-infrared spectroscopy (fNIRS) and EEG data. Overall, the results showed smaller and delayed auditory cortex activity in electric hearing, particularly for the P2 event-related potential component, which appears to reflect the processing of voice pitch information. Both the fNIRS data and EEG source reconstructions furthermore showed that vowel sequences with variable pitch contours evoked additional activity in posterior right auditory cortex in electric but not acoustic hearing. This surprising discrepancy demonstrates, firstly, that the acoustic detail transmitted by CIs is sufficient to distinguish between speech sounds that only vary regarding their pitch information. Secondly, the absence of a condition difference when stimulating the normal-hearing ears suggests a saturation of cortical activity levels following unilateral deafness. Taken together, these results provide strong evidence in favour of using CIs in this patient group.
Affiliation(s)
- Kurt Steinmetzger
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany. Corresponding author.
- Bastian Meinhardt
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
- Mark Praetorius
- Section of Otology and Neurootology, ENT Clinic, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
7
Fleming JT, Winn MB. Strategic perceptual weighting of acoustic cues for word stress in listeners with cochlear implants, acoustic hearing, or simulated bimodal hearing. J Acoust Soc Am 2022;152:1300. PMID: 36182279; PMCID: PMC9439712; DOI: 10.1121/10.0013890.
Abstract
Perception of word stress is an important aspect of recognizing speech, guiding the listener toward candidate words based on the perceived stress pattern. Cochlear implant (CI) signal processing is likely to disrupt some of the available cues for word stress, particularly vowel quality and pitch contour changes. In this study, we used a cue weighting paradigm to investigate differences in stress cue weighting patterns between participants listening with CIs and those with normal hearing (NH). We found that participants with CIs gave less weight to frequency-based pitch and vowel quality cues than NH listeners but compensated by upweighting vowel duration and intensity cues. Nonetheless, CI listeners' stress judgments were also significantly influenced by vowel quality and pitch, and they modulated their usage of these cues depending on the specific word pair in a manner similar to NH participants. In a series of separate online experiments with NH listeners, we simulated aspects of bimodal hearing by combining low-pass filtered speech with a vocoded signal. In these conditions, participants upweighted pitch and vowel quality cues relative to a fully vocoded control condition, suggesting that bimodal listening holds promise for restoring the stress cue weighting patterns exhibited by listeners with NH.
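Cue weighting paradigms of this kind are commonly quantified by regressing listeners' binary judgments on normalized cue values and reading the fitted coefficients as perceptual weights. A hedged sketch with simulated data; the cue names and the simulated "listener" weights are assumptions for illustration, not the paper's estimates or its exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Simulated trials: each cue varies independently along a normalized continuum.
n = 1000
cues = rng.uniform(-1, 1, size=(n, 4))  # columns: pitch, vowel quality, duration, intensity
true_w = np.array([0.5, 0.5, 2.0, 1.5])  # a "CI-like" listener: duration/intensity dominate

# Binary "stress on first syllable" judgments from a logistic decision rule.
p = 1 / (1 + np.exp(-cues @ true_w))
resp = rng.random(n) < p

model = LogisticRegression().fit(cues, resp)
weights = model.coef_[0] / np.abs(model.coef_[0]).sum()  # normalized cue weights
print(dict(zip(["pitch", "vowel", "duration", "intensity"], weights.round(2))))
```

The recovered weights should show the duration and intensity cues dominating pitch and vowel quality, mirroring the qualitative pattern the abstract describes for CI listeners.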
Affiliation(s)
- Justin T Fleming
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA.
- Matthew B Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA.
8
Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users. Ear Hear 2022;43:1178-1188. PMID: 34999594; PMCID: PMC9197138; DOI: 10.1097/aud.0000000000001181.
Abstract
Objectives: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. Design: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. Results: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. Conclusions: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
9
More Than Words: the Relative Roles of Prosody and Semantics in the Perception of Emotions in Spoken Language by Postlingual Cochlear Implant Users. Ear Hear 2022;43:1378-1389. PMID: 35030551; DOI: 10.1097/aud.0000000000001199.
Abstract
OBJECTIVES The processing of emotional speech calls for the perception and integration of semantic and prosodic cues. Although cochlear implants allow for significant auditory improvements, their transmission of spectro-temporal fine-structure information is limited and may not support the processing of voice pitch cues. The goal of the current study is to compare the performance of postlingual cochlear implant (CI) users and a matched control group on perception, selective attention, and integration of emotional semantics and prosody. DESIGN Fifteen CI users and 15 normal hearing (NH) peers (age range, 18-65 years) listened to spoken sentences composed of different combinations of four discrete emotions (anger, happiness, sadness, and neutrality) presented in prosodic and semantic channels (T-RES: Test for Rating Emotions in Speech). In three separate tasks, listeners were asked to attend to the sentence as a whole, thus integrating both speech channels (integration), or to focus on one channel only (rating of target emotion) and ignore the other (selective attention). Their task was to rate how much they agreed that the sentence conveyed each of the predefined emotions. In addition, all participants performed standard tests of speech perception. RESULTS When asked to focus on one channel, semantics or prosody, both groups rated emotions similarly with comparable levels of selective attention. When the task called for channel integration, group differences were found. CI users appeared to use semantic emotional information more than did their NH peers. CI users assigned higher ratings than did their NH peers to sentences that did not present the target emotion, indicating some degree of confusion. In addition, for CI users, individual differences in speech comprehension over the phone and identification of intonation were significantly related to emotional semantic and prosodic ratings, respectively.
CONCLUSIONS CI users and NH controls did not differ in perception of prosodic and semantic emotions and in auditory selective attention. However, when the task called for integration of prosody and semantics, CI users overused the semantic information (as compared with NH). We suggest that as CI users adopt diverse cue weighting strategies with device experience, their weighting of prosody and semantics differs from those used by NH. Finally, CI users may benefit from rehabilitation strategies that strengthen perception of prosodic information to better understand emotional speech.
10
Cartocci G, Giorgi A, Inguscio BMS, Scorpecci A, Giannantonio S, De Lucia A, Garofalo S, Grassia R, Leone CA, Longo P, Freni F, Malerba P, Babiloni F. Higher Right Hemisphere Gamma Band Lateralization and Suggestion of a Sensitive Period for Vocal Auditory Emotional Stimuli Recognition in Unilateral Cochlear Implant Children: An EEG Study. Front Neurosci 2021;15:608156. PMID: 33767607; PMCID: PMC7985439; DOI: 10.3389/fnins.2021.608156.
Abstract
In deaf children, huge emphasis has been placed on language; however, the decoding and production of emotional cues appear of pivotal importance for communication capabilities. Concerning the neurophysiological correlates of emotional processing, gamma band activity appears to be a useful tool for emotion classification and is related to the conscious elaboration of emotions. Starting from these considerations, the following questions were investigated: (i) whether emotional auditory stimuli processing differs between normal-hearing (NH) children and children using a cochlear implant (CI), given the non-physiological development of the auditory system in the latter group; (ii) whether the age at CI surgery influences emotion recognition capabilities; and (iii) in light of the right hemisphere hypothesis for emotional processing, whether the CI side influences the processing of emotional cues in unilateral CI (UCI) children. To answer these questions, 9 UCI (9.47 ± 2.33 years old) and 10 NH (10.95 ± 2.11 years old) children were asked to recognize nonverbal vocalizations belonging to three emotional states: positive (achievement, amusement, contentment, relief), negative (anger, disgust, fear, sadness), and neutral (neutral, surprise). Results showed better performance in NH than UCI children in recognizing emotional states. The UCI group showed an increased gamma activity lateralization index (LI) (relatively higher right hemisphere activity) in comparison to the NH group in response to emotional auditory cues. Moreover, LI gamma values were negatively correlated with the percentage of correct responses in emotion recognition. Such observations could be explained by a deficit in UCI children in engaging the left hemisphere for more demanding emotional tasks, or alternatively by a higher conscious elaboration in UCI than NH children.
Additionally, for the UCI group, there was no difference in gamma activity between the CI side and the contralateral side, but higher gamma activity was found in the right than in the left hemisphere. Therefore, the CI side did not appear to influence the physiological hemispheric lateralization of emotional processing. Finally, a negative correlation was found between age at CI surgery and the percentage of correct responses in emotion recognition, suggesting the occurrence of a sensitive period during which CI surgery best supports the development of emotion recognition skills.
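A hemispheric lateralization index over band power is commonly defined as (R − L)/(R + L); the abstract does not state the study's exact formula, so treat this as an illustrative convention, with all data below simulated.

```python
import numpy as np
from scipy.stats import pearsonr

def lateralization_index(right_power: np.ndarray, left_power: np.ndarray) -> np.ndarray:
    """(R - L) / (R + L): positive values mean right-lateralized activity."""
    return (right_power - left_power) / (right_power + left_power)

rng = np.random.default_rng(3)
n_children = 19

# Simulated gamma band power per hemisphere and behavioral accuracy.
right = rng.uniform(1.0, 2.0, n_children)
left = rng.uniform(1.0, 2.0, n_children)
li = lateralization_index(right, left)
accuracy = rng.uniform(40, 90, n_children)  # % correct emotion recognition

# The study's key test: does LI correlate with recognition accuracy?
r, p = pearsonr(li, accuracy)
print(f"LI range: [{li.min():.2f}, {li.max():.2f}], r = {r:.2f}")
```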
Affiliation(s)
- Giulia Cartocci
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy.
- Andrea Giorgi
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy.
- Bianca M S Inguscio
- BrainSigns Srl, Rome, Italy; Cochlear Implant Unit, Department of Sensory Organs, Sapienza University of Rome, Rome, Italy.
- Alessandro Scorpecci
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy.
- Sara Giannantonio
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy.
- Antonietta De Lucia
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy.
- Sabina Garofalo
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy.
- Rosa Grassia
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy.
- Carlo Antonio Leone
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy.
- Patrizia Longo
- Department of Otorhinolaryngology, University of Messina, Messina, Italy.
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy.
- Fabio Babiloni
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy; Department of Computer Science and Technology, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou, China.
11
Skuk VG, Kirchen L, Oberhoffner T, Guntinas-Lichius O, Dobel C, Schweinberger SR. Parameter-Specific Morphing Reveals Contributions of Timbre and Fundamental Frequency Cues to the Perception of Voice Gender and Age in Cochlear Implant Users. J Speech Lang Hear Res 2020;63:3155-3175. PMID: 32881631; DOI: 10.1044/2020_jslhr-20-00026.
Abstract
Purpose Using naturalistic synthesized speech, we determined the relative importance of acoustic cues in voice gender and age perception in cochlear implant (CI) users. Method We investigated 28 CI users' abilities to utilize fundamental frequency (F0) and timbre in perceiving voice gender (Experiment 1) and vocal age (Experiment 2). Parameter-specific voice morphing was used to selectively control acoustic cues (F0; time; timbre, i.e., formant frequencies, spectral-level information, and aperiodicity, as defined in TANDEM-STRAIGHT) in voice stimuli. Individual differences in CI users' performance were quantified via deviations from the mean performance of 19 normal-hearing (NH) listeners. Results CI users' gender perception seemed exclusively based on F0, whereas NH listeners efficiently used timbre. For age perception, timbre was more informative than F0 for both groups, with minor contributions of temporal cues. While a few CI users performed comparably to NH listeners overall, others were at chance. Separate analyses confirmed that even high-performing CI users classified gender almost exclusively based on F0. While high performers could discriminate age in male and female voices, low performers were close to chance overall but used F0 as a misleading cue to age (classifying female voices as young and male voices as old). Satisfaction with CI generally correlated with performance in age perception. Conclusions We confirmed that CI users' gender classification is mainly based on F0. However, high performers could make reasonable usage of timbre cues in age perception. Overall, parameter-specific morphing can serve to objectively assess individual profiles of CI users' abilities to perceive nonverbal social-communicative vocal signals.
Collapse
Affiliation(s)
- Verena G Skuk
- DFG Research Unit Person Perception, Friedrich Schiller University of Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Germany
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany
| | - Louisa Kirchen
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Germany
- Social-Pediatric Centre and Centre for Adults With Special Needs, Trier, Germany
| | - Tobias Oberhoffner
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany
- Department of Otorhinolaryngology, Head and Neck Surgery, "Otto Körner," University Medical Center Rostock, Germany
| | - Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany
| | - Christian Dobel
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany
| | - Stefan R Schweinberger
- DFG Research Unit Person Perception, Friedrich Schiller University of Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Germany
- Swiss Center for Affective Science, Geneva, Switzerland
| |
Collapse
|
12
|
Neurophysiological Differences in Emotional Processing by Cochlear Implant Users, Extending Beyond the Realm of Speech. Ear Hear 2020; 40:1197-1209. [PMID: 30762600 DOI: 10.1097/aud.0000000000000701] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
OBJECTIVE Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations. DESIGN Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music) that were for the most part correctly identified behaviorally. RESULTS Compared to controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged in music compared to voice, in both populations. The N1-P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this was also true in both populations. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal hearing but not in CI listeners. CONCLUSIONS The early portion of the ERP waveform reflected primarily the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing (by CI users) could be found in the later portion of the ERP and extended beyond the realm of speech.
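Comparing N1 and P2 amplitude and latency between groups, as in the study above, requires extracting each component's peak from the averaged ERP within a latency window. A minimal sketch with synthetic data; the windows, sampling rate, and waveform below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity):
    """Return (latency_s, amplitude) of the most extreme point of the
    given polarity (-1 for N1, +1 for P2) inside a latency window."""
    mask = (times >= t_min) & (times <= t_max)
    seg = erp[mask]
    i = int(np.argmax(seg * polarity))  # most negative or most positive point
    return float(times[mask][i]), float(seg[i])

# Synthetic ERP: a negative deflection near 100 ms and a positive one near 200 ms.
fs = 1000.0
times = np.arange(-0.1, 0.6, 1.0 / fs)
erp = (-2.0 * np.exp(-((times - 0.10) ** 2) / (2 * 0.010 ** 2))
       + 3.0 * np.exp(-((times - 0.20) ** 2) / (2 * 0.015 ** 2)))

n1_lat, n1_amp = peak_in_window(erp, times, 0.05, 0.15, polarity=-1)
p2_lat, p2_amp = peak_in_window(erp, times, 0.15, 0.30, polarity=+1)
```

Attenuation and prolongation, as reported for CI users, would then show up as smaller `n1_amp`/`p2_amp` magnitudes and larger `n1_lat`/`p2_lat` values relative to controls.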
Collapse
|
13
|
Xu D, Wang L, Chen F. An ERP Study on the Combined-stimulation Advantage in Vocoder Simulations. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:2442-2445. [PMID: 30440901 DOI: 10.1109/embc.2018.8512890] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Electric hearing is presently the only treatment solution for patients with severe-to-profound hearing loss. For those patients also preserving low-frequency residual hearing on the ipsilateral ear, combined electric-and-acoustic stimulation (EAS) could notably improve their speech understanding abilities relative to those aided with electric-only (E-only) hearing. Early behavioral studies have consistently shown the advantage of combined stimulation. The aim of this work was to objectively examine the advantage of combined stimulation over electric-only hearing using an oddball-paradigm-based event-related potential (ERP) experiment. The vowel stimulus was processed by vocoding processes simulating the E-only and EAS conditions, and the generated stimuli were presented to normal-hearing listeners in the ERP experiment. Experimental results showed that the mismatch negativity (MMN) response elicited in the combined-stimulation condition featured a smaller peak amplitude and a more delayed peak latency than that in the E-only condition. The MMN results in this work demonstrated that compared with the ERP response elicited in the E-only condition, the response in the combined-stimulation condition was much closer to that elicited by the full-spectrum stimulus, yielding neurophysiological evidence on the combined-stimulation advantage.
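The vocoded conditions in this experiment, like those in several studies below, were produced with channel vocoders. A generic noise-band vocoder can be sketched as follows; the channel count, filter orders, and envelope cutoff are illustrative defaults, not the settings used to simulate the E-only and EAS conditions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noiseband_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Minimal noise-band vocoder: split the signal into log-spaced bands,
    extract each band's temporal envelope, and use it to modulate
    band-limited noise carriers."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    # 50 Hz low-pass for envelope smoothing (illustrative cutoff)
    sos_env = butter(2, 50.0, btype="low", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos_band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos_band, x)
        env = sosfiltfilt(sos_env, np.abs(band))   # rectify + smooth
        carrier = sosfiltfilt(sos_band, rng.standard_normal(len(x)))
        out += np.clip(env, 0.0, None) * carrier
    return out
```

An EAS simulation would additionally pass the unprocessed low-frequency portion of the signal alongside the vocoded high-frequency bands; the E-only condition vocodes the full spectrum.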
Collapse
|
14
|
Ahmed DG, Paquette S, Zeitouni A, Lehmann A. Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation. Clin EEG Neurosci 2018; 49:143-151. [PMID: 28958161 DOI: 10.1177/1550059417733386] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI-simulator. We recorded 16 normal hearing participants' electroencephalographic activity while listening to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency with the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implants' limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
Collapse
Affiliation(s)
- Duha G Ahmed
- 1 International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada.,2 Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada.,3 Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
| | - Sebastian Paquette
- 1 International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada.,4 Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Anthony Zeitouni
- 2 Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
| | - Alexandre Lehmann
- 1 International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada.,2 Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
| |
Collapse
|
15
|
Picou EM, Singh G, Goy H, Russo F, Hickson L, Oxenham AJ, Buono GH, Ricketts TA, Launer S. Hearing, Emotion, Amplification, Research, and Training Workshop: Current Understanding of Hearing Loss and Emotion Perception and Priorities for Future Research. Trends Hear 2018; 22:2331216518803215. [PMID: 30270810 PMCID: PMC6168729 DOI: 10.1177/2331216518803215] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Revised: 08/18/2018] [Accepted: 09/03/2018] [Indexed: 12/19/2022] Open
Abstract
The question of how hearing loss and hearing rehabilitation affect patients' momentary emotional experiences is one that has received little attention but has considerable potential to affect patients' psychosocial function. This article is a product from the Hearing, Emotion, Amplification, Research, and Training workshop, which was convened to develop a consensus document describing research on emotion perception relevant for hearing research. This article outlines conceptual frameworks for the investigation of emotion in hearing research; available subjective, objective, neurophysiologic, and peripheral physiologic data acquisition research methods; the effects of age and hearing loss on emotion perception; potential rehabilitation strategies; priorities for future research; and implications for clinical audiologic rehabilitation. More broadly, this article aims to increase awareness about emotion perception research in audiology and to stimulate additional research on the topic.
Collapse
Affiliation(s)
- Erin M. Picou
- Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Gurjit Singh
- Phonak Canada, Mississauga, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, ON, Canada
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Huiwen Goy
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Frank Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Louise Hickson
- School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
| | | | | | | | | |
Collapse
|
16
|
Yusuf PA, Hubka P, Tillein J, Kral A. Induced cortical responses require developmental sensory experience. Brain 2017; 140:3153-3165. [PMID: 29155975 PMCID: PMC5841147 DOI: 10.1093/brain/awx286] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2017] [Accepted: 09/12/2017] [Indexed: 01/25/2023] Open
Abstract
Sensory areas of the cerebral cortex integrate the sensory inputs with the ongoing activity. We studied how complete absence of auditory experience affects this process in a higher mammal model of complete sensory deprivation, the congenitally deaf cat. Cortical responses were elicited by intracochlear electric stimulation using cochlear implants in adult hearing controls and deaf cats. Additionally, in hearing controls, acoustic stimuli were used to assess the effect of stimulus mode (electric versus acoustic) on the cortical responses. We evaluated time-frequency representations of local field potential recorded simultaneously in the primary auditory cortex and a higher-order area, the posterior auditory field, known to be differentially involved in cross-modal (visual) reorganization in deaf cats. The results showed the appearance of evoked (phase-locked) responses at early latencies (<100 ms post-stimulus) and more abundant induced (non-phase-locked) responses at later latencies (>150 ms post-stimulus). In deaf cats, substantially reduced induced responses were observed in overall power as well as duration in both investigated fields. Additionally, a reduction of ongoing alpha band activity was found in the posterior auditory field (but not in primary auditory cortex) of deaf cats. The present study demonstrates that induced activity requires developmental experience and suggests that higher-order areas involved in the cross-modal reorganization show more auditory deficits than primary areas.
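The distinction drawn above between evoked (phase-locked) and induced (non-phase-locked) responses can be operationalized by subtracting the across-trial average from each trial before measuring power. A minimal sketch with synthetic single-trial data; it uses broadband power rather than the time-frequency decomposition used in the study, and the signal parameters are illustrative.

```python
import numpy as np

def evoked_induced_power(trials):
    """Split single-trial power into phase-locked (evoked) and
    non-phase-locked (induced) parts.

    trials: array of shape (n_trials, n_samples).
    Evoked power is the power of the across-trial average; induced power
    is the mean power left after subtracting that average from each trial.
    """
    evoked = trials.mean(axis=0)
    residual = trials - evoked
    return float(np.mean(evoked ** 2)), float(np.mean(residual ** 2))

# A response with a fixed phase across trials shows up as evoked power;
# the same response with a random phase on every trial shows up as induced.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500, endpoint=False)
locked = np.array([np.sin(2 * np.pi * 10 * t) for _ in range(100)])
jittered = np.array([np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(100)])

ev_l, ind_l = evoked_induced_power(locked)     # mostly evoked
ev_j, ind_j = evoked_induced_power(jittered)   # mostly induced
```

In the deaf cats described above, it is this second, induced component that was substantially reduced, while early evoked responses were preserved.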
Collapse
Affiliation(s)
- Prasandhya Astagiri Yusuf
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany
| | - Peter Hubka
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany
| | - Jochen Tillein
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany.,ENT Clinics, J. W. Goethe University, Frankfurt am Main, Germany
| | - Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany.,School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
| |
Collapse
|
17
|
Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. APPLIED SCIENCES-BASEL 2017. [DOI: 10.3390/app7121239] [Citation(s) in RCA: 118] [Impact Index Per Article: 16.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
18
|
Fengler I, Nava E, Villwock AK, Büchner A, Lenarz T, Röder B. Multisensory emotion perception in congenitally, early, and late deaf CI users. PLoS One 2017; 12:e0185821. [PMID: 29023525 PMCID: PMC5638301 DOI: 10.1371/journal.pone.0185821] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2017] [Accepted: 09/20/2017] [Indexed: 11/20/2022] Open
Abstract
Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.
Collapse
Affiliation(s)
- Ineke Fengler
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| | - Elena Nava
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| | - Agnes K. Villwock
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| | - Andreas Büchner
- German Hearing Centre, Department of Otorhinolaryngology, Medical University of Hannover, Hannover, Germany
| | - Thomas Lenarz
- German Hearing Centre, Department of Otorhinolaryngology, Medical University of Hannover, Hannover, Germany
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| |
Collapse
|
19
|
The MMN as a viable and objective marker of auditory development in CI users. Hear Res 2017; 353:57-75. [DOI: 10.1016/j.heares.2017.07.007] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2016] [Revised: 06/16/2017] [Accepted: 07/18/2017] [Indexed: 12/31/2022]
|
20
|
Silva LAF, Couto MIV, Magliaro FCL, Tsuji RK, Bento RF, de Carvalho ACM, Matas CG. Cortical maturation in children with cochlear implants: Correlation between electrophysiological and behavioral measurement. PLoS One 2017; 12:e0171177. [PMID: 28151961 PMCID: PMC5289550 DOI: 10.1371/journal.pone.0171177] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2015] [Accepted: 01/17/2017] [Indexed: 11/18/2022] Open
Abstract
Central auditory pathway maturation in children depends on auditory sensory stimulation. The objective of the present study was to monitor the cortical maturation of children with cochlear implants using electrophysiological and auditory skills measurements. The study was longitudinal and consisted of 30 subjects, 15 (8 girls and 7 boys) of whom had a cochlear implant, with a mean age at activation time of 36.4 months (minimum, 17 months; maximum, 66 months), and 15 of whom were normal-hearing children who were matched based on gender and chronological age. The auditory and speech skills of the children with cochlear implants were evaluated using GASP, IT-MAIS and MUSS measures. Both groups underwent electrophysiological evaluation using long-latency auditory evoked potentials. Each child was evaluated at three and nine months after cochlear implant activation, with the same time interval adopted for the hearing children. The results showed improvements in auditory and speech skills as measured by IT-MAIS and MUSS. Similarly, the long-latency auditory evoked potential evaluation revealed a decrease in P1 component latency; however, the latency remained significantly longer than that of the hearing children, even after nine months of cochlear implant use. It was observed that a shorter P1 latency corresponded to more evident development of auditory skills. Regarding auditory behavior, it was observed that children who could master the auditory skill of discrimination showed better results in other evaluations, both behavioral and electrophysiological, than those who had mastered only the speech-detection skill. Therefore, cochlear implant auditory stimulation facilitated auditory pathway maturation, which decreased the latency of the P1 component and advanced the development of auditory and speech skills. The analysis of the long-latency auditory evoked potentials revealed that the P1 component was an important biomarker of auditory development during the rehabilitation process.
Collapse
Affiliation(s)
| | | | | | - Robinson Koji Tsuji
- Department of Otorhinolaryngology, Clinical Hospital, FMUSP, São Paulo (SP), Brazil
| | | | | | - Carla Gentile Matas
- Department of Physical, Speech and Occupational, FMUSP, São Paulo (SP), Brazil
| |
Collapse
|
21
|
Schierholz I, Finke M, Kral A, Büchner A, Rach S, Lenarz T, Dengler R, Sandmann P. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study. Hum Brain Mapp 2017; 38:2206-2225. [PMID: 28130910 DOI: 10.1002/hbm.23515] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Revised: 12/26/2016] [Accepted: 01/03/2017] [Indexed: 11/10/2022] Open
Abstract
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Collapse
Affiliation(s)
- Irina Schierholz
- Department of Neurology, Hannover Medical School, Hannover, Germany.,Cluster of Excellence "Hearing4all,", Hannover, Germany.,Department of Otolaryngology, Hannover Medical School, Hannover, Germany
| | - Mareike Finke
- Cluster of Excellence "Hearing4all,", Hannover, Germany.,Department of Otolaryngology, Hannover Medical School, Hannover, Germany
| | - Andrej Kral
- Cluster of Excellence "Hearing4all,", Hannover, Germany.,Department of Otolaryngology, Hannover Medical School, Hannover, Germany.,Institute of AudioNeuroTechnology and Department of Experimental Otology, Hannover Medical School, Hannover, Germany.,School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas
| | - Andreas Büchner
- Cluster of Excellence "Hearing4all,", Hannover, Germany.,Department of Otolaryngology, Hannover Medical School, Hannover, Germany
| | - Stefan Rach
- Department of Epidemiological Methods and Etiological Research, Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany
| | - Thomas Lenarz
- Cluster of Excellence "Hearing4all,", Hannover, Germany.,Department of Otolaryngology, Hannover Medical School, Hannover, Germany
| | - Reinhard Dengler
- Department of Neurology, Hannover Medical School, Hannover, Germany.,Cluster of Excellence "Hearing4all,", Hannover, Germany
| | - Pascale Sandmann
- Department of Neurology, Hannover Medical School, Hannover, Germany.,Cluster of Excellence "Hearing4all,", Hannover, Germany.,Department of Otorhinolaryngology, University Hospital Cologne, Cologne, Germany
| |
Collapse
|
22
|
Jiam NT, Caldwell M, Deroche ML, Chatterjee M, Limb CJ. Voice emotion perception and production in cochlear implant users. Hear Res 2017; 352:30-39. [PMID: 28088500 DOI: 10.1016/j.heares.2017.01.006] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2016] [Revised: 12/14/2016] [Accepted: 01/06/2017] [Indexed: 10/20/2022]
Abstract
Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant users are often forced to interface with highly degraded prosodic cues as a result of device constraints in extraction, processing, and transmission. As such, individuals with cochlear implants frequently demonstrate significant difficulty in recognizing voice emotions in comparison to their normal hearing counterparts. Cochlear implant-mediated perception and production of voice emotion is an important but relatively understudied area of research. However, a rich understanding of the voice emotion auditory processing offers opportunities to improve upon CI biomedical design and to develop training programs benefiting CI performance. In this review, we will address the issues, current literature, and future directions for improved voice emotion processing in cochlear implant users.
Collapse
Affiliation(s)
- N T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
| | - M Caldwell
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
| | - M L Deroche
- Centre for Research on Brain, Language and Music, McGill University Montreal, QC, Canada
| | - M Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
| | - C J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA.
| |
Collapse
|
23
|
El Boghdady N, Kegel A, Lai WK, Dillier N. A neural-based vocoder implementation for evaluating cochlear implant coding strategies. Hear Res 2016; 333:136-149. [PMID: 26775182 DOI: 10.1016/j.heares.2016.01.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2015] [Revised: 12/18/2015] [Accepted: 01/07/2016] [Indexed: 10/22/2022]
Abstract
Most simulations of cochlear implant (CI) coding strategies rely on standard vocoders that are based on purely signal processing techniques. However, these models neither account for various biophysical phenomena, such as neural stochasticity and refractoriness, nor for effects of electrical stimulation, such as spectral smearing as a function of stimulus intensity. In this paper, a neural model that accounts for stochastic firing, parasitic spread of excitation across neuron populations, and neuronal refractoriness, was developed and augmented as a preprocessing stage for a standard 22-channel noise-band vocoder. This model was used to subjectively and objectively assess consonant discrimination in commercial and experimental coding strategies. Stimuli consisting of consonant-vowel (CV) and vowel-consonant-vowel (VCV) tokens were processed by either the Advanced Combination Encoder (ACE) or the Excitability Controlled Coding (ECC) strategies, and later resynthesized to audio using the aforementioned vocoder model. Baseline performance was measured using unprocessed versions of the speech tokens. Behavioural responses were collected from seven normal hearing (NH) volunteers, while EEG data were recorded from five NH participants. Psychophysical results indicate that while there may be a difference in consonant perception between the two tested coding strategies, mismatch negativity (MMN) waveforms do not show any marked trends in CV or VCV contrast discrimination.
Collapse
Affiliation(s)
- Nawal El Boghdady
- Institute for Neuroinformatics (INI), Universität Zürich (UZH)/ ETH Zürich (ETHZ), Zürich, Switzerland.
| | - Andrea Kegel
- Laboratory of Experimental Audiology, ENT Department, Universitätsspital Zürich (USZ), Zürich, Switzerland
| | - Wai Kong Lai
- Laboratory of Experimental Audiology, ENT Department, Universitätsspital Zürich (USZ), Zürich, Switzerland
| | - Norbert Dillier
- Laboratory of Experimental Audiology, ENT Department, Universitätsspital Zürich (USZ), Zürich, Switzerland
| |
Collapse
|
24
|
Shirvani S, Jafari Z, Motasaddi Zarandi M, Jalaie S, Mohagheghi H, Tale MR. Emotional Perception of Music in Children With Bimodal Fitting and Unilateral Cochlear Implant. Ann Otol Rhinol Laryngol 2015; 125:470-7. [PMID: 26681623 DOI: 10.1177/0003489415619943] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
OBJECTIVE Biological, structural, and acoustical constraints faced by cochlear implant (CI) users can alter the perception of music. Bimodal fitting not only provides bilateral hearing but can also improve auditory skills. This study was conducted to assess the impact of this amplification style on the emotional perception of music among children with hearing loss (HL). METHODS Twenty-five children with congenital severe to profound HL and unilateral CIs, 20 children with bimodal fitting, and 30 children with normal hearing participated in this study. Their emotional perceptions of music were measured using a method where children indicated happy or sad feelings induced by music by pointing to pictures of faces showing these emotions. RESULTS Children with bimodal fitting obtained significantly higher mean scores than children with unilateral CIs for both happy and sad music items and in overall test scores (P < .001). Both groups with HL obtained significantly lower scores than children with normal hearing (P < .001). CONCLUSIONS Bimodal fitting results in a better emotional perception of music compared to unilateral CI. Given the influence of music in neurological and linguistic development and social interactions, it is important to evaluate the possible benefits of bimodal fitting prescriptions for individuals with unilateral CIs.
Collapse
Affiliation(s)
- Sareh Shirvani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences (TUMS), Tehran, Iran
| | - Zahra Jafari
- Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran.,Canadian Center for Behavioral Neuroscience (CCBN), Lethbridge University, Lethbridge, Alberta, Canada
| | - Masoud Motasaddi Zarandi
- Cochlear Implant Research Center, AmirAlam Hospital, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
| | - Shohre Jalaie
- Department of Physiotherapy, School of Rehabilitation, Tehran University of Medical Sciences (TUMS), Tehran, Iran
| | - Hamed Mohagheghi
- Department of Audiology, School of Rehabilitation Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
25
Lehmann A, Paquette S. Cross-domain processing of musical and vocal emotions in cochlear implant users. Front Neurosci 2015; 9:343. [PMID: 26441512 PMCID: PMC4585154 DOI: 10.3389/fnins.2015.00343] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Accepted: 09/10/2015] [Indexed: 01/08/2023] Open
Affiliation(s)
- Alexandre Lehmann
- Department of Otolaryngology Head and Neck Surgery, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research, Center for Research on Brain, Language and Music, Montreal, QC, Canada
- Department of Psychology, University of Montreal, Montreal, QC, Canada
- Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research, Center for Research on Brain, Language and Music, Montreal, QC, Canada
- Department of Psychology, University of Montreal, Montreal, QC, Canada
26
Petersen B, Weed E, Sandmann P, Brattico E, Hansen M, Sørensen SD, Vuust P. Brain responses to musical feature changes in adolescent cochlear implant users. Front Hum Neurosci 2015; 9:7. [PMID: 25705185 PMCID: PMC4319402 DOI: 10.3389/fnhum.2015.00007] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2014] [Accepted: 01/07/2015] [Indexed: 11/13/2022] Open
Abstract
Cochlear implants (CIs) are primarily designed to assist deaf individuals in perception of speech, although possibilities for music fruition have also been documented. Previous studies have indicated the existence of neural correlates of residual music skills in postlingually deaf adults and children. However, little is known about the behavioral and neural correlates of music perception in the new generation of prelingually deaf adolescents who grew up with CIs. With electroencephalography (EEG), we recorded the mismatch negativity (MMN) of the auditory event-related potential to changes in musical features in adolescent CI users and in normal-hearing (NH) age mates. EEG recordings and behavioral testing were carried out before (T1) and after (T2) a 2-week music training program for the CI users and in two sessions equally separated in time for NH controls. We found significant MMNs in adolescent CI users for deviations in timbre, intensity, and rhythm, indicating residual neural prerequisites for musical feature processing. By contrast, only one of the two pitch deviants elicited an MMN in CI users. This pitch discrimination deficit was supported by behavioral measures, in which CI users scored significantly below the NH level. Overall, MMN amplitudes were significantly smaller in CI users than in NH controls, suggesting poorer music discrimination ability. Despite compliance from the CI participants, we found no effect of the music training, likely resulting from the brevity of the program. This is the first study showing significant brain responses to musical feature changes in prelingually deaf adolescent CI users and their associations with behavioral measures, implying neural predispositions for at least some aspects of music processing. Future studies should test any beneficial effects of a longer-lasting music intervention in adolescent CI users.
Affiliation(s)
- Bjørn Petersen
- Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Aarhus, Denmark
- Royal Academy of Music, Aarhus, Denmark
- Ethan Weed
- Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Aarhus, Denmark
- Department of Aesthetics and Communication – Linguistics, Aarhus University, Aarhus, Denmark
- Pascale Sandmann
- Central Auditory Diagnostics Lab, Department of Neurology, Cluster of Excellence “Hearing4all”, Hannover Medical School, Hannover, Germany
- Elvira Brattico
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University, Aalto, Finland
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
- Mads Hansen
- Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Aarhus, Denmark
- Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark
- Stine Derdau Sørensen
- Department of Aesthetics and Communication – Linguistics, Aarhus University, Aarhus, Denmark
- Peter Vuust
- Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Aarhus, Denmark
- Royal Academy of Music, Aarhus, Denmark
27
Shirvani S, Jafari Z, Sheibanizadeh A, Motasaddi Zarandy M, Jalaie S. Emotional perception of music in children with unilateral cochlear implants. Iran J Otorhinolaryngol 2014; 26:225-33. [PMID: 25320700 PMCID: PMC4196446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/01/2014] [Accepted: 02/16/2014] [Indexed: 11/21/2022]
Abstract
INTRODUCTION Cochlear implantation (CI) improves language skills among children with hearing loss. However, children with CIs still fall short of fulfilling some other needs, including musical perception. This is often attributed to the biological, technological, and acoustic limitations of CIs. Emotions play a key role in the understanding and enjoyment of music. The present study aimed to investigate the emotional perception of music in children with bilaterally severe-to-profound hearing loss and unilateral CIs. MATERIALS AND METHODS Twenty-five children with congenital severe-to-profound hearing loss and unilateral CIs and 30 children with normal hearing participated in the study. The children's emotional perception of music, as defined by Peretz (1998), was measured. Children were instructed to indicate the happy or sad feelings induced in them by the music by pointing to pictures of faces showing these emotions. RESULTS Children with CIs obtained significantly lower scores than children with normal hearing, for both happy and sad items of music as well as in overall test scores (P<0.001). Furthermore, in both the CI group (P=0.49) and the control group (P<0.001), the happy items were recognized correctly more often than the sad items. CONCLUSION Hearing-impaired children with CIs had poorer emotional perception of music than their normally hearing peers. Given the importance of music in the development of language, cognitive, and social interaction skills, aural rehabilitation programs for children with CIs should focus particularly on music. Furthermore, it is essential to enhance the quality of musical perception by improving the quality of implant prostheses.
Affiliation(s)
- Sareh Shirvani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran.
- Zahra Jafari
- Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Rehabilitation Research Center, Iran University of Medical Sciences, Tehran, Iran.
- Abdolreza Sheibanizadeh
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran.
- Masoud Motasaddi Zarandy
- Otorhinolaryngology Research Center, AmirAlam Hospital, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran.
- Shohre Jalaie
- Department of Physiotherapy, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran.