1
Jahn KN, Wiegand-Shahani BM, Moturi V, Kashiwagura ST, Doak KR. Cochlear-implant simulated spectral degradation attenuates emotional responses to environmental sounds. Int J Audiol 2025;64:518-524. PMID: 39146030; PMCID: PMC11833750; DOI: 10.1080/14992027.2024.2385552.
Abstract
OBJECTIVE Cochlear implants (CIs) provide users with a spectrally degraded acoustic signal that could impact their auditory emotional experiences. This study evaluated the effects of CI-simulated spectral degradation on emotional valence and arousal elicited by environmental sounds. DESIGN Thirty emotionally evocative sounds were filtered through a noise-band vocoder. Participants rated the perceived valence and arousal elicited by each of the full-spectrum and vocoded stimuli. These ratings were compared across acoustic conditions (full-spectrum, vocoded) and as a function of stimulus type (unpleasant, neutral, pleasant). STUDY SAMPLE Twenty-five young adults (age 19 to 34 years) with normal hearing. RESULTS Emotional responses were less extreme for spectrally degraded (i.e., vocoded) sounds than for full-spectrum sounds. Specifically, spectrally degraded stimuli were perceived as more negative and less arousing than full-spectrum stimuli. CONCLUSION Because this study meticulously replicated CI spectral degradation while controlling for variables that are confounded within CI users, these findings indicate that CI spectral degradation can compress the range of sound-induced emotion independent of hearing loss and other idiosyncratic device- or person-level variables. Future work will characterize emotional reactions to sound in CI users via objective, psychoacoustic, and subjective measures.
Affiliation(s)
- Kelly N. Jahn
  - Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA
  - Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
- Braden M. Wiegand-Shahani
  - Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA
  - Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
- Vaishnavi Moturi
  - Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA
- Sean Takamoto Kashiwagura
  - Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA
  - Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
- Karlee R. Doak
  - Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA
  - Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
2
Rachman L, Babaoğlu G, Özkişi Yazgan B, Ertürk P, Gaudrain E, Nagels L, Launer S, Derleth P, Singh G, Uhlemayr F, Chatterjee M, Yücel E, Sennaroğlu G, Başkent D. Vocal Emotion Recognition in School-Age Children With Hearing Aids. Ear Hear 2025:00003446-990000000-00413. PMID: 40111426; DOI: 10.1097/aud.0000000000001645.
Abstract
OBJECTIVES In individuals with normal hearing, vocal emotion recognition continues to develop over many years during childhood. In children with hearing loss, vocal emotion recognition may be affected by combined effects from loss of audibility due to elevated thresholds, suprathreshold distortions from hearing loss, and the compensatory features of hearing aids. These effects could be acute, affecting the perceived signal quality, or accumulated over time, affecting emotion recognition development. This study investigates if, and to what degree, children with hearing aids have difficulties in perceiving vocal emotions beyond what would be expected from age-typical levels. DESIGN We used a vocal emotion recognition test with non-language-specific pseudospeech audio sentences expressed in three basic emotions: happy, sad, and angry, along with a child-friendly gamified test interface. The test group consisted of 55 school-age children (5.4 to 17.8 years) with bilateral hearing aids, all with sensorineural hearing loss and no further exclusion based on hearing loss degree or configuration. For characterization of complete developmental trajectories, the control group with normal audiometric thresholds consisted of 86 age-matched children (6.0 to 17.1 years) and 68 relatively young adults (19.1 to 35.0 years). RESULTS Vocal emotion recognition in the control group of normal-hearing children and adults improved with age and reached a plateau around age 20. Although vocal emotion recognition in children with hearing aids also improved with age, it seemed to lag behind that of the control group of children with normal hearing. A group comparison showed a significant difference from around age 8 years. Individual data indicated that a number of hearing-aided children, even with severe degrees of hearing loss, performed at age-expected levels, while some others scored lower than age-expected levels, even at chance levels. The recognition scores of hearing-aided children were not predicted by unaided or aided hearing thresholds, nor by previously measured voice cue discrimination sensitivity, for example, related to mean pitch or vocal tract length perception. CONCLUSIONS In line with previous literature, even in normal hearing, vocal emotion recognition develops over many years toward adulthood, likely due to interactions with linguistic and cognitive development. Given the long development period, any potential difficulties in vocal emotion recognition for children with hearing loss can only be identified with respect to what would be realistic based on their age. With such a comparison, we were able to show that, as a group, children with hearing aids also develop vocal emotion recognition, though seemingly at a slower pace. Individual data indicated that a number of the hearing-aided children showed age-expected vocal emotion recognition. Hence, even though hearing aids have been developed and optimized for speech perception, these data indicate that hearing aids can also support age-typical development of vocal emotion recognition. For the children whose recognition scores were lower than age-expected levels, there were no predictive hearing-related factors. This could potentially reflect inherent variation in the development of relevant cognitive mechanisms, but a role for cumulative effects of hearing loss is also a possibility. As follow-up research, we plan to investigate whether vocal emotion recognition improves over time for these children.
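The comparison of individual children against "age-expected levels" implies fitting a developmental trajectory to the control group and flagging scores that fall well below it. A minimal sketch of one such approach, assuming a saturating-exponential growth form and an arbitrary flagging margin (the abstract does not state the study's actual model):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_age_trajectory(ages, scores):
    """Fit a saturating growth curve, score = plateau - span * exp(-rate * age),
    to control-group recognition scores and return a predictor function."""
    def growth(age, plateau, span, rate):
        return plateau - span * np.exp(-rate * age)

    popt, _ = curve_fit(growth, ages, scores, p0=[90.0, 60.0, 0.1], maxfev=10000)
    return lambda age: growth(np.asarray(age, dtype=float), *popt)

def below_age_expected(predict, ages, scores, margin=15.0):
    """Flag scores more than `margin` points below the age-expected level
    from the control fit; the margin here is an arbitrary illustration."""
    return np.asarray(scores) < (predict(ages) - margin)
```

The plateau parameter captures the adult asymptote the abstract describes around age 20; a slower `rate` for the hearing-aided group would correspond to the lag the authors report.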
Affiliation(s)
- Laura Rachman
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen (UMCG), University of Groningen, Groningen, the Netherlands
  - Pento Speech and Hearing Centers, Apeldoorn, the Netherlands
  - Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, Groningen, the Netherlands
- Gizem Babaoğlu
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen (UMCG), University of Groningen, Groningen, the Netherlands
  - Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Başak Özkişi Yazgan
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen (UMCG), University of Groningen, Groningen, the Netherlands
  - Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Pinar Ertürk
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen (UMCG), University of Groningen, Groningen, the Netherlands
  - Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Etienne Gaudrain
  - CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Leanne Nagels
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen (UMCG), University of Groningen, Groningen, the Netherlands
  - Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, Groningen, the Netherlands
- Stefan Launer
  - Department of Audiology and Health Innovation, Research and Development, Sonova AG, Stäfa, Switzerland
- Peter Derleth
  - Department of Audiology and Health Innovation, Research and Development, Sonova AG, Stäfa, Switzerland
- Gurjit Singh
  - Department of Audiology and Health Innovation, Research and Development, Sonova AG, Stäfa, Switzerland
  - Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
  - Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frédérick Uhlemayr
  - Department of Audiology and Health Innovation, Research and Development, Sonova AG, Stäfa, Switzerland
- Monita Chatterjee
  - Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Esra Yücel
  - Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Gonca Sennaroğlu
  - Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Deniz Başkent
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen (UMCG), University of Groningen, Groningen, the Netherlands
  - Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, Groningen, the Netherlands
3
Marcrum SC, Rakita L, Picou EM. Effect of Sound Genre on Emotional Responses for Adults With and Without Hearing Loss. Ear Hear 2025;46:34-43. PMID: 39129128; DOI: 10.1097/aud.0000000000001561.
Abstract
OBJECTIVES Adults with permanent hearing loss exhibit a reduced range of valence ratings in response to nonspeech sounds; however, the degree to which sound genre might affect such ratings is unclear. The purpose of this study was to determine if ratings of valence covary with sound genre (e.g., social communication, technology, music), or only expected valence (pleasant, neutral, unpleasant). DESIGN As part of larger study protocols, participants rated valence and arousal in response to nonspeech sounds. For this study, data were reanalyzed by assigning sounds to unidimensional genres and evaluating relationships between hearing loss, age, and gender and ratings of valence. In total, results from 120 adults with normal hearing (M = 46.3 years, SD = 17.7, 33 males and 87 females) and 74 adults with hearing loss (M = 66.1 years, SD = 6.1, 46 males and 28 females) were included. RESULTS Principal component analysis confirmed valence ratings loaded onto eight unidimensional factors: positive and negative social communication, positive and negative technology, music, animal, activities, and human body noises. Regression analysis revealed listeners with hearing loss rated some genres as less extreme (less pleasant/less unpleasant) than peers with better hearing, with the relationship between hearing loss and valence ratings being similar across genres within an expected valence category. In terms of demographic factors, female gender was associated with less pleasant ratings of negative social communication, positive and negative technology, activities, and human body noises, while increasing age was related to a subtle rise in valence ratings across all genres. CONCLUSIONS Taken together, these results confirm and extend previous findings that hearing loss is related to a reduced range of valence ratings and suggest that this effect is mediated by expected sound valence, rather than sound genre.
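The factor structure described above (valence ratings loading onto eight unidimensional genre factors) can be sketched with a plain principal component decomposition via SVD. The shapes and the choice of eight components mirror the abstract's description; the centering and scaling choices here are illustrative, not the study's exact procedure.

```python
import numpy as np

def valence_factors(ratings, n_factors=8):
    """Decompose a (listeners x sounds) matrix of valence ratings into
    principal components. Rows of `loadings` show how strongly each sound
    loads on each factor; inspecting the high-loading sounds per factor is
    how genre-like groupings (music, animal sounds, ...) would emerge."""
    X = ratings - ratings.mean(axis=0)              # center each sound's ratings
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    var_explained = s ** 2 / np.sum(s ** 2)          # variance per component
    loadings = vt[:n_factors].T                      # (sounds x factors)
    return loadings, var_explained[:n_factors]
```

With real data, one would additionally rotate the components and check that each sound loads primarily on a single factor before treating the factors as unidimensional genres.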
Affiliation(s)
- Steven C Marcrum
  - Department of Otolaryngology, University Hospital Regensburg, Regensburg, Germany
- Lori Rakita
  - Meta Platforms, Inc., Menlo Park, California, USA
- Erin M Picou
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
4
Hood KE, Hurley LM. Listening to your partner: serotonin increases male responsiveness to female vocal signals in mice. Front Hum Neurosci 2024;17:1304653. PMID: 38328678; PMCID: PMC10847236; DOI: 10.3389/fnhum.2023.1304653.
Abstract
The context surrounding vocal communication can have a strong influence on how vocal signals are perceived. The serotonergic system is well-positioned for modulating the perception of communication signals according to context, because serotonergic neurons are responsive to social context, influence social behavior, and innervate auditory regions. Animals like lab mice can be excellent models for exploring how serotonin affects the primary neural systems involved in vocal perception, including within central auditory regions like the inferior colliculus (IC). Within the IC, serotonergic activity reflects not only the presence of a conspecific, but also the valence of a given social interaction. To assess whether serotonin can influence the perception of vocal signals in male mice, we manipulated serotonin systemically with an injection of its precursor 5-HTP, and locally in the IC with an infusion of fenfluramine, a serotonin reuptake blocker. Mice then participated in a behavioral assay in which males suppress their ultrasonic vocalizations (USVs) in response to the playback of female broadband vocalizations (BBVs), used in defensive aggression by females when interacting with males. Both 5-HTP and fenfluramine increased the suppression of USVs during BBV playback relative to controls. 5-HTP additionally decreased the baseline production of a specific type of USV and male investigation, but neither drug treatment strongly affected male digging or grooming. These findings show that serotonin modifies behavioral responses to vocal signals in mice, in part by acting in auditory brain regions, and suggest that mouse vocal behavior can serve as a useful model for exploring the mechanisms of context in human communication.
Affiliation(s)
- Kayleigh E. Hood
  - Hurley Lab, Department of Biology, Indiana University, Bloomington, IN, United States
  - Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, IN, United States
- Laura M. Hurley
  - Hurley Lab, Department of Biology, Indiana University, Bloomington, IN, United States
  - Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, IN, United States
5
Nowacki K, Łakomy K, Marczak W. Speech Impaired by Half Masks Used for the Respiratory Tract Protection. Int J Environ Res Public Health 2022;19:7012. PMID: 35742261; PMCID: PMC9222881; DOI: 10.3390/ijerph19127012.
Abstract
Filtering half masks belong to the group of personal protective equipment in the work environment. They protect the respiratory tract but may hinder breathing and suppress speech. The present work focuses on the attenuation of sound by the half masks known as "filtering facepieces" (FFPs), of various construction and filtration efficiency. Rather than study the perception of speech by humans, we used a generator of white noise and artificial speech to obtain objective characteristics of the attenuation. The generator speaker was either covered by an FFP or remained uncovered, while a class 1 sound level meter measured sound pressure levels in 1/3-octave bands with center frequencies from 100 Hz to 20 kHz at distances from 1 to 5 m from the speaker. All five FFPs suppressed acoustic waves in the octave bands with center frequencies of 1 kHz and higher, i.e., in the frequency range responsible for 80% of perceived speech intelligibility, particularly in the 2 kHz octave band. FFPs of higher filtration efficiency attenuated the sound more strongly. Moreover, the FFPs changed the voice timbre because the attenuation depended on the wave frequency. The two factors combined can impede speech intelligibility.
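The 1/3-octave band analysis described above can be approximated from a recorded signal by summing FFT power within each band. A real class 1 meter uses standardized filter banks rather than FFT binning, so this is only a sketch of the measurement; the band centers follow the base-2 series f_c = 1000 * 2^(n/3).

```python
import numpy as np

def third_octave_levels(x, fs):
    """Estimate 1/3-octave band levels (dB re full scale) of signal x.

    Center frequencies run from roughly 100 Hz to 20 kHz; each band spans
    fc * 2**(-1/6) to fc * 2**(1/6). FFT binning approximates what a
    standardized fractional-octave filter bank would measure.
    """
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centers = 1000.0 * 2.0 ** (np.arange(-10, 14) / 3.0)   # 24 bands
    levels = {}
    for fc in centers:
        lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)      # band edges
        band_power = power[(freqs >= lo) & (freqs < hi)].sum()
        levels[int(round(fc))] = 10.0 * np.log10(band_power + 1e-20)
    return levels
```

Comparing the per-band levels of the covered and uncovered conditions gives the attenuation spectrum; the frequency dependence of that spectrum is what changes the voice timbre.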
Affiliation(s)
- Krzysztof Nowacki
  - Department of Production Engineering, Faculty of Materials Engineering, Silesian University of Technology, Akademicka 2A Street, 44-100 Gliwice, Poland
- Karolina Łakomy
  - Department of Production Engineering, Faculty of Materials Engineering, Silesian University of Technology, Akademicka 2A Street, 44-100 Gliwice, Poland
- Wojciech Marczak
  - Faculty of Science and Technology, Jan Długosz University, Al. Armii Krajowej 13/15, 42-200 Częstochowa, Poland
6
Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022;26:23312165221083091. PMID: 35435773; PMCID: PMC9019384; DOI: 10.1177/23312165221083091.
Abstract
The purpose of this project was to evaluate differences between groups and device configurations for emotional responses to non-speech sounds. Three groups of adults participated: 1) listeners with normal hearing with no history of device use, 2) hearing aid candidates with or without hearing aid experience, and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 in each group) rated valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or the hearing aid and cochlear implant simultaneously. Analysis revealed significant differences between groups in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than were ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' ratings of valence were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support the need for further investigation into hearing device optimization to improve emotional responses to non-speech sounds for adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous
  - School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7
- Kristen L. D'Onofrio
  - Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
  - Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford
  - Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
  - Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou
  - Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
  - Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
7
Picou EM, Singh G, Russo FA. A Comparison between a remote testing and a laboratory test setting for evaluating emotional responses to non-speech sounds. Int J Audiol 2021;61:799-808. PMID: 34883031; DOI: 10.1080/14992027.2021.2007422.
Abstract
OBJECTIVE To evaluate remote testing as a tool for measuring emotional responses to non-speech sounds. DESIGN Participants self-reported their hearing status and rated valence and arousal in response to non-speech sounds on an Internet crowdsourcing platform. These ratings were compared to data obtained in a laboratory setting with participants who had confirmed normal or impaired hearing. STUDY SAMPLE Adults with normal and impaired hearing. RESULTS In both settings, participants with hearing loss rated pleasant sounds as less pleasant than did their peers with normal hearing. The difference in valence ratings between groups was generally smaller when measured in the remote setting than in the laboratory setting. This difference was the result of participants with normal hearing rating sounds as less extreme (less pleasant, less unpleasant) in the remote setting than did their peers in the laboratory setting, whereas no such difference was noted for participants with hearing loss. Ratings of arousal were similar from participants with normal and impaired hearing; the similarity persisted in both settings. CONCLUSIONS In both test settings, participants with hearing loss rated pleasant sounds as less pleasant than did their normal hearing counterparts. Future work is warranted to explain the ratings of participants with normal hearing.
Affiliation(s)
- Erin M Picou
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Gurjit Singh
  - Phonak Canada, Mississauga, Canada
  - Department of Psychology, Ryerson University, Toronto, Canada
  - Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- Frank A Russo
  - Department of Psychology, Ryerson University, Toronto, Canada
8
Picou EM, Rakita L, Buono GH, Moore TM. Effects of Increasing the Overall Level or Fitting Hearing Aids on Emotional Responses to Sounds. Trends Hear 2021;25:23312165211049938. PMID: 34866509; PMCID: PMC8825634; DOI: 10.1177/23312165211049938.
Abstract
Adults with hearing loss demonstrate a reduced range of emotional responses to nonspeech sounds compared to their peers with normal hearing. The purpose of this study was to evaluate two possible strategies for addressing the effects of hearing loss on emotional responses: (a) increasing overall level and (b) hearing aid use (with and without nonlinear frequency compression, NFC). Twenty-three adults (mean age = 65.5 years) with mild-to-severe sensorineural hearing loss and 17 adults (mean age = 56.2 years) with normal hearing participated. All adults provided ratings of valence and arousal without hearing aids in response to nonspeech sounds presented at a moderate and at a high level. Adults with hearing loss also provided ratings while using individually fitted study hearing aids with two settings (NFC-OFF or NFC-ON). Hearing loss and hearing aid use impacted ratings of valence but not arousal. Listeners with hearing loss rated pleasant sounds as less pleasant than their peers, confirming findings in the extant literature. For both groups, increasing the overall level resulted in lower ratings of valence. For listeners with hearing loss, the use of hearing aids (NFC-OFF) also resulted in lower ratings of valence, but to a lesser extent than increasing the overall level. Activating NFC resulted in ratings that were similar to ratings without hearing aids (with a moderate presentation level) but did not improve ratings to match those from the listeners with normal hearing. These findings suggest that current interventions do not ameliorate the effects of hearing loss on emotional responses to sound.
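Nonlinear frequency compression lowers high-frequency energy into the listener's audible range by remapping input frequencies above a cutoff. A common textbook form of the NFC input-output frequency function can be sketched as follows; the cutoff and compression ratio are illustrative assumptions, not the fitted settings used in the study.

```python
import numpy as np

def nfc_map(f_in, f_cut=2000.0, ratio=2.0):
    """NFC input-output frequency map: frequencies at or below f_cut pass
    unchanged; above f_cut they are compressed on a log-frequency scale by
    `ratio`, so f_out = f_cut * (f_in / f_cut) ** (1 / ratio)."""
    f_in = np.asarray(f_in, dtype=float)
    compressed = f_cut * (f_in / f_cut) ** (1.0 / ratio)
    return np.where(f_in <= f_cut, f_in, compressed)
```

For example, with a 2 kHz cutoff and a 2:1 ratio, an 8 kHz component is moved to 4 kHz. Because the remapping alters spectral relationships above the cutoff, it is plausible that it affects valence ratings differently than simple amplification, which is what the NFC-ON versus NFC-OFF comparison probes.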
Affiliation(s)
- Erin M Picou
  - Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
- Lori Rakita
  - Department of Otolaryngology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Gabrielle H Buono
  - Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA