1. Jahn KN, Wiegand-Shahani BM, Moturi V, Kashiwagura ST, Doak KR. Cochlear-implant simulated spectral degradation attenuates emotional responses to environmental sounds. Int J Audiol 2025; 64:518-524. PMID: 39146030; PMCID: PMC11833750; DOI: 10.1080/14992027.2024.2385552.
Abstract
OBJECTIVE Cochlear implants (CIs) provide users with a spectrally degraded acoustic signal that could affect their auditory emotional experiences. This study evaluated the effects of CI-simulated spectral degradation on the emotional valence and arousal elicited by environmental sounds. DESIGN Thirty emotionally evocative sounds were filtered through a noise-band vocoder. Participants rated the valence and arousal elicited by each full-spectrum and vocoded stimulus. Ratings were compared across acoustic conditions (full-spectrum, vocoded) and as a function of stimulus type (unpleasant, neutral, pleasant). STUDY SAMPLE Twenty-five young adults (ages 19 to 34 years) with normal hearing. RESULTS Emotional responses were less extreme for spectrally degraded (i.e., vocoded) sounds than for full-spectrum sounds: vocoded stimuli were perceived as more negative and less arousing than full-spectrum stimuli. CONCLUSION Because CI spectral degradation was replicated while controlling for variables that are confounded within CI users, these findings indicate that spectral degradation alone can compress the range of sound-induced emotion, independent of hearing loss and other idiosyncratic device- or person-level variables. Future work will characterize emotional reactions to sound in CI users via objective, psychoacoustic, and subjective measures.
Affiliation(s)
- Kelly N. Jahn: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
- Braden M. Wiegand-Shahani: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
- Vaishnavi Moturi: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA
- Sean Takamoto Kashiwagura: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
- Karlee R. Doak: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX 75080, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA

2. Lu Y, Wu Y, Zeng D, Chen C, Bian P, Xu B. Music perception and its correlation with auditory speech perception in pediatric Mandarin-speaking cochlear implant users. Acta Otolaryngol 2025; 145:51-58. PMID: 39668767; DOI: 10.1080/00016489.2024.2437553.
Abstract
BACKGROUND Cochlear implants (CIs) help patients with sensorineural hearing loss regain sound perception. The ability to perceive musical pitch may be crucial for recognizing and producing speech in Mandarin. AIMS/OBJECTIVES This study aimed to identify factors that influence music perception, and to examine correlations between music perception and auditory speech abilities, in prelingually deaf pediatric Mandarin-speaking CI users. MATERIAL AND METHODS Music perception of 24 pediatric CI users and 12 normal-hearing children was measured using the MuSIC test. Auditory speech perception of the 24 CI users was also measured and analyzed against their music perception results. RESULTS Pediatric CI users performed worse than normal-hearing children on pitch, rhythm, and melody discrimination tests (p < .05). Pitch and melody discrimination differed significantly between children implanted before versus after age five. Perception of consonants, tones, and speech in a noisy environment correlated significantly with perception of musical pitch and melody. CONCLUSION AND SIGNIFICANCE Prelingually deaf pediatric CI users implanted before the age of five perform better on music perception tests, and pediatric CI users with better music perception show better auditory speech perception of Mandarin.
Affiliation(s)
- Yunyi Lu: Department of Otorhinolaryngology, Lanzhou University Second Hospital, Lanzhou, China
- Yutong Wu: Department of Otorhinolaryngology, Lanzhou University Second Hospital, Lanzhou, China
- Dong Zeng: Department of Otorhinolaryngology, Lanzhou University Second Hospital, Lanzhou, China
- Chi Chen: Department of Otorhinolaryngology, Lanzhou University Second Hospital, Lanzhou, China
- Panpan Bian: Department of Otorhinolaryngology, Lanzhou University Second Hospital, Lanzhou, China
- Baicheng Xu: Department of Otorhinolaryngology, Lanzhou University Second Hospital, Lanzhou, China

3. Ashjaei S, Behroozmand R, Fozdar S, Farrar R, Arjmandi M. Vocal control and speech production in cochlear implant listeners: A review within auditory-motor processing framework. Hear Res 2024; 453:109132. PMID: 39447319; DOI: 10.1016/j.heares.2024.109132.
Abstract
A comprehensive literature review is conducted to summarize and discuss prior findings on how cochlear implants (CIs) affect users' abilities to produce and control vocal and articulatory movements within the auditory-motor integration framework of speech. Patterns of speech production pre- versus post-implantation, post-implantation adjustments, deviations from the typical ranges of speakers with normal hearing (NH), the effects of switching the CI on and off, and the impact of altered auditory feedback on vocal and articulatory speech control are discussed. Overall, findings indicate that CIs enhance vocal and articulatory control at both segmental and suprasegmental levels. While many CI users achieve speech quality comparable to NH individuals, some features still deviate in a subgroup of CI users even years post-implantation. More specifically, contracted vowel space, increased vocal jitter and shimmer, longer phoneme and utterance durations, shorter voice onset time, decreased contrast in fricative production, limited prosodic patterns, and reduced intelligibility have been reported in subgroups of CI users compared to NH individuals. Significant individual variation among CI users has been observed in both the pace of speech production adjustments and long-term speech outcomes. Few controlled studies have explored how implantation age and duration of CI use influence speech features, leaving substantial gaps in our understanding of the effects of spectral resolution, auditory rehabilitation, and individual auditory-motor processing abilities on vocal and articulatory speech outcomes in CI users. Future studies under the auditory-motor integration framework are warranted to determine how suboptimal CI auditory feedback impacts auditory-motor processing and precise vocal and articulatory control in CI users.
Affiliation(s)
- Samin Ashjaei: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Roozbeh Behroozmand: Speech Neuroscience Lab, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 2811 North Floyd Road, Richardson, TX 75080, USA
- Shaivee Fozdar: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Reed Farrar: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Meisam Arjmandi: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA; Institute for Mind and Brain, University of South Carolina, Barnwell Street, Columbia, SC 29208, USA

4. Yüksel M, Çiprut A. Reduced Channel Interaction Improves Timbre Recognition Under Vocoder Simulation of Cochlear Implant Processing. Otol Neurotol 2024; 45:e297-e306. PMID: 38437807; DOI: 10.1097/mao.0000000000004151.
Abstract
OBJECTIVE This study investigated the influence of the number of channels and of channel interaction on timbre perception in cochlear implant (CI) processing. Vocoder simulations of CI processing were used to examine how both factors affect timbre perception, an essential aspect of music and auditory performance. STUDY DESIGN, SETTING, AND PATIENTS Fourteen CI recipients with at least 1 year of CI device use and two groups of normal-hearing (NH) participants (N = 16 and N = 19) completed a timbre recognition (TR) task, with each NH group tested on a different aspect of the study. The first group was tested with varying numbers of channels (8, 12, 16, and 20) to determine the number that most closely reflected the TR performance of CI recipients. The second group was then assessed for channel interaction using the identified 20-channel condition under three conditions: low interaction (54 dB/octave), medium interaction (24 dB/octave), and high interaction (12 dB/octave). Repeated-measures analysis of variance and pairwise comparisons were conducted. RESULTS The number of channels had no statistically significant effect on TR in NH participants (p > 0.05), although the 20-channel condition most closely resembled the TR performance of CI recipients. In contrast, channel interaction had a significant effect on TR (p < 0.001); both the low-interaction (54 dB/octave) and high-interaction (12 dB/octave) conditions differed significantly from the actual CI recipients' performance. CONCLUSION Timbre perception, a complex ability reliant on highly detailed spectral resolution, was not significantly influenced by the number of channels, but channel interaction emerged as a significant factor. The differences observed across channel-interaction conditions suggest potential mechanisms, including reduced spectro-temporal resolution and degraded spectral cues. These findings highlight the importance of considering channel interaction and optimizing CI processing strategies to enhance music perception and overall auditory performance for CI recipients.
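Editorial note: the channel-interaction conditions above are expressed as carrier filter slopes in dB/octave. As a rule of thumb, an analog Butterworth filter of order n rolls off at roughly 6n dB per octave, so orders 9, 4, and 2 approximate the 54, 24, and 12 dB/octave conditions. The sketch below checks that mapping numerically (assuming SciPy; the study's actual filter implementation is not described in this abstract, so the cutoff and measurement octaves are illustrative):

```python
import numpy as np
from scipy.signal import butter, freqs

def rolloff_db_per_octave(order, fc=1000.0):
    """Measure an analog Butterworth low-pass roll-off between one and
    two octaves above its cutoff. Shallower slopes (lower order) let
    more energy leak into neighboring vocoder channels, which is how
    channel interaction is simulated."""
    b, a = butter(order, 2 * np.pi * fc, btype="low", analog=True)
    # Evaluate the frequency response at 2*fc and 4*fc (rad/s).
    _, h = freqs(b, a, worN=2 * np.pi * np.array([2 * fc, 4 * fc]))
    mag = 20 * np.log10(np.abs(h))
    return mag[0] - mag[1]  # extra attenuation accrued over one octave

for order, nominal in [(2, 12), (4, 24), (9, 54)]:
    # Measured slope lands close to the nominal 6*order dB/octave.
    print(order, round(rolloff_db_per_octave(order), 1), nominal)
```

Near the cutoff the measured slope is slightly below the asymptotic 6n dB/octave, which is why the comparison is made an octave or more above fc.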
Affiliation(s)
- Mustafa Yüksel: Department of Audiology, Ankara Medipol University Faculty of Health Sciences, Ankara, Turkey
- Ayça Çiprut: Department of Audiology, Marmara University Faculty of Medicine, Istanbul, Turkey

5. Cychosz M, Winn MB, Goupell MJ. How to vocode: Using channel vocoders for cochlear-implant research. J Acoust Soc Am 2024; 155:2407-2437. PMID: 38568143; PMCID: PMC10994674; DOI: 10.1121/10.0025274.
Abstract
The channel vocoder has become a useful tool for understanding the impact of specific forms of auditory degradation, particularly the spectral and temporal degradation that reflects cochlear-implant processing. Vocoders have many parameters that allow researchers to answer questions about cochlear-implant processing in ways that overcome some logistical complications of controlling for factors in individual cochlear-implant users. However, vocoder implementations vary so widely that the term "vocoder" alone is not specific enough to describe the signal processing used in these experiments, and misunderstanding vocoder parameters can result in experimental confounds or unexpected stimulus distortions. This paper highlights the signal-processing parameters that should be specified when describing vocoder construction. The paper also provides guidance on how to determine vocoder parameters within perception experiments, given the experimenter's goals and research questions, to avoid common signal-processing mistakes. Throughout, we assume that experimenters are interested in vocoders with the specific goal of better understanding cochlear implants.
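Editorial note: the paper's point about underspecified vocoder parameters can be made concrete with a minimal noise-band channel vocoder. In the sketch below (assuming NumPy/SciPy), the channel count, filter order, log-spaced channel edges, and Hilbert-envelope extraction are all illustrative choices — exactly the kinds of parameters the paper argues must be reported:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, order=2, lo=100.0, hi=7000.0):
    """Minimal noise-band channel vocoder: bandpass analysis filterbank,
    Hilbert-envelope extraction, envelope-modulated band-limited noise
    carriers, summation, and RMS matching to the input."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced channel edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)                    # analysis band
        envelope = np.abs(hilbert(band))               # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))  # band-limited noise
        out += envelope * carrier                      # modulate and sum
    # Match overall RMS to the input so level is not a confound.
    out *= np.sqrt(np.mean(signal**2) / (np.mean(out**2) + 1e-12))
    return out
```

Swapping the filter order changes the carrier slopes (simulated channel interaction), and swapping the noise carriers for sine tones yields a tone vocoder; each variant degrades the signal differently, which is why "eight-channel vocoder" alone underdescribes a stimulus.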
Affiliation(s)
- Margaret Cychosz: Department of Linguistics, University of California, Los Angeles, Los Angeles, California 90095, USA
- Matthew B. Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew J. Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, Maryland 20742, USA

6. Oxenham AJ. Questions and controversies surrounding the perception and neural coding of pitch. Front Neurosci 2023; 16:1074752. PMID: 36699531; PMCID: PMC9868815; DOI: 10.3389/fnins.2022.1074752.
Abstract
Pitch is a fundamental aspect of auditory perception that plays an important role in our ability to understand speech, appreciate music, and attend to one sound while ignoring others. The questions surrounding how pitch is represented in the auditory system, and how our percept relates to the underlying acoustic waveform, have been a topic of inquiry and debate for well over a century. New findings and technological innovations have challenged some long-standing assumptions and have raised new questions. This article reviews recent developments in the study of pitch coding and perception, focusing on how pitch information is extracted from peripheral representations based on frequency-to-place mapping (tonotopy), stimulus-driven auditory-nerve spike timing (phase locking), or a combination of both. Although a definitive resolution has proved elusive, the answers to these questions have potentially important implications for mitigating the effects of hearing loss via devices such as cochlear implants.
Affiliation(s)
- Andrew J. Oxenham: Center for Applied and Translational Sensory Science, University of Minnesota Twin Cities, Minneapolis, MN, United States; Department of Psychology, University of Minnesota Twin Cities, Minneapolis, MN, United States

7. Tamati TN, Sevich VA, Clausing EM, Moberly AC. Lexical Effects on the Perceived Clarity of Noise-Vocoded Speech in Younger and Older Listeners. Front Psychol 2022; 13:837644. PMID: 35432072; PMCID: PMC9010567; DOI: 10.3389/fpsyg.2022.837644.
Abstract
When listening to degraded speech, such as speech delivered by a cochlear implant (CI), listeners make use of top-down linguistic knowledge to facilitate speech recognition. Lexical knowledge supports speech recognition and enhances the perceived clarity of speech. Yet the extent to which lexical knowledge can effectively compensate for degraded input may depend on the degree of degradation and the listener's age. The current study investigated lexical effects in the compensation for speech degraded via noise-vocoding in younger and older listeners. In an online experiment, younger and older normal-hearing (NH) listeners rated the clarity of noise-vocoded sentences on a scale from 1 ("very unclear") to 7 ("completely clear"). Lexical information was provided by matching text primes and by the lexical content of the target utterance: half of the sentences were preceded by a matching text prime and half by a non-matching prime, and each sentence contained three key words of high or low lexical frequency and neighborhood density. Sentences were processed to simulate CI hearing using an eight-channel noise vocoder with varying filter slopes. Results showed that lexical information affected the perceived clarity of noise-vocoded speech: speech was perceived as clearer when preceded by a matching prime and when sentences included key words with high lexical frequency and low neighborhood density. However, the strength of the lexical effects depended on the level of degradation; matching text primes had a greater impact, but lexical content a smaller impact, for speech with poorer spectral resolution. Finally, lexical information appeared to benefit both younger and older listeners. These findings demonstrate that younger and older listeners alike can employ lexical knowledge in cognitive compensation during the processing of noise-vocoded speech, although lexical content may be less reliable when the signal is highly degraded. The clinical implication is that adult CI users, regardless of age, might use lexical knowledge to compensate for the degraded speech signal, but some may be hindered by a relatively poor signal.
Affiliation(s)
- Terrin N. Tamati: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Victoria A. Sevich: Department of Speech and Hearing Science, The Ohio State University, Columbus, OH, United States
- Emily M. Clausing: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Aaron C. Moberly: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States

8. Arjmandi M, Houston D, Wang Y, Dilley L. Estimating the reduced benefit of infant-directed speech in cochlear implant-related speech processing. Neurosci Res 2021; 171:49-61. PMID: 33484749; PMCID: PMC8289972; DOI: 10.1016/j.neures.2021.01.007.
Abstract
Caregivers modify their speech when talking to infants, a register known as infant-directed speech (IDS). In infants with normal hearing (NH), this speaking style facilitates language learning relative to adult-directed speech (ADS). While infants with NH and those with cochlear implants (CIs) prefer listening to IDS over ADS, it is not yet known how CI processing affects the acoustic distinctiveness between ADS and IDS, or the intelligibility of each register. This study analyzed the speech of seven female adult talkers to model the effects of simulated CI processing on (1) the acoustic distinctiveness between ADS and IDS, (2) estimates of the intelligibility of caregivers' speech in ADS and IDS, and (3) individual differences in caregivers' ADS-to-IDS modification and estimated speech intelligibility. Results suggest that CI processing substantially degrades both the acoustic distinctiveness between ADS and IDS and the intelligibility benefit derived from ADS-to-IDS modifications. Moreover, the variability across individual talkers in the acoustic implementation of ADS-to-IDS modification and in estimated speech intelligibility was significantly reduced by CI processing. The findings are discussed in the context of the link between IDS and language learning in infants with CIs.
Affiliation(s)
- Meisam Arjmandi: Department of Communicative Sciences and Disorders, Michigan State University, 1026 Red Cedar Road, East Lansing, MI 48824, USA
- Derek Houston: Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA
- Yuanyuan Wang: Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA
- Laura Dilley: Department of Communicative Sciences and Disorders, Michigan State University, 1026 Red Cedar Road, East Lansing, MI 48824, USA

9. Listening to speech with a guinea pig-to-human brain-to-brain interface. Sci Rep 2021; 11:12231. PMID: 34112826; PMCID: PMC8192924; DOI: 10.1038/s41598-021-90823-1.
Abstract
Nicolelis wrote in his 2003 review of brain-machine interfaces (BMIs) that the design of a successful BMI relies on general physiological principles describing how neuronal signals are encoded. Our study explored whether neural information can be exchanged between the brains of different species, much as information is exchanged between computers. We show for the first time that single words processed by the guinea pig auditory system are intelligible to humans who receive the processed information via a cochlear implant. We recorded the neural response patterns to single spoken words with multi-channel electrodes from the guinea pig inferior colliculus. The recordings served as a blueprint for trains of biphasic, charge-balanced electrical pulses, which a cochlear implant delivered to the cochlear-implant user's ear. Study participants completed a four-word forced-choice test and identified the correct word in 34.8% of trials. The participants' recognition consistency, defined as the ability to choose the same word twice, whether right or wrong, was 53.6%. The participants received no training and no feedback in any session. The results show that lexical information can be transmitted from an animal to a human auditory system. In the discussion, we consider how learning from animals might help develop novel coding strategies.
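Editorial note: the reported 34.8% correct can be compared against the 25% chance level of a four-alternative forced-choice test with a one-sided exact binomial test. The abstract does not report the trial count, so the count below is purely illustrative (assuming SciPy's `stats.binomtest`):

```python
from scipy.stats import binomtest

# Hypothetical trial count -- NOT reported in the abstract; shown only to
# illustrate how a 34.8% hit rate would be tested against 25% chance.
n_trials = 200
k_correct = round(0.348 * n_trials)  # 70 of the 200 hypothetical trials
result = binomtest(k_correct, n_trials, p=0.25, alternative="greater")
print(f"p = {result.pvalue:.4f}")
```

At this assumed n the rate is well above chance; with fewer trials the same 34.8% could fail to reach significance, which is why the missing trial count matters for interpreting the result.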

10. Agarwal A, Tan X, Xu Y, Richter CP. Channel Interaction During Infrared Light Stimulation in the Cochlea. Lasers Surg Med 2021; 53:986-997. PMID: 33476051; DOI: 10.1002/lsm.23360.
Abstract
BACKGROUND AND OBJECTIVES The number of perceptually independent channels available to encode acoustic information is limited in contemporary cochlear implants (CIs) because of current spread in the tissue. It has been suggested that neighboring electrodes in humans must be separated by more than 2 mm to eliminate significant overlap of the electric current fields and the resulting interaction between channels. It has also been argued that an increase in the number of independent channels could improve CI user performance in challenging listening environments, such as speech in noise, tonal languages, or music perception. Optical stimulation has been proposed as an alternative modality for neural stimulation because it is spatially selective. This study reports experiments designed to quantify the interaction between neighboring optical sources in the cochlea during stimulation with infrared radiation. STUDY DESIGN/MATERIALS AND METHODS In seven adult albino guinea pigs, a forward-masking method was used to quantify the interaction between two neighboring optical sources during stimulation. Two optical fibers were placed through cochleostomies into the scala tympani of the basal cochlear turn, with the radiation beams directed toward different neuron populations along the spiral ganglion. Optically evoked compound action potentials were recorded for different radiant energies and distances between the optical fibers. The outcome measure was the radiant energy of a masker pulse, delivered 3 milliseconds before a probe pulse, required to reduce the response evoked by the probe pulse by 3 dB. Results were compared across distances between the fibers along the cochlea. RESULTS The energy required to reduce the probe's response by 3 dB increased by 20.4 dB/mm, or 26.0 dB/octave. The inhibition was symmetrical for maskers placed basal to the probe (base-to-apex) and apical to the probe (apex-to-base). CONCLUSION The interaction between neighboring optical sources during infrared laser stimulation is less than the interaction between neighboring electrical contacts during electrical stimulation; previously published data for electrical stimulation report an average current spread of 2.8 dB/mm in human and cat cochleae. With the increased number of independent channels afforded by optical stimulation, it is anticipated that speech and music performance will improve.
Affiliation(s)
- Aditi Agarwal: Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois 60611
- Xiaodong Tan: Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois 60611
- Yingyue Xu: Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois 60611
- Claus-Peter Richter: Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois 60611; Department of Biomedical Engineering, Northwestern University, 2145 Sheridan Road, Tech E310, Evanston, Illinois 60208; Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208; The Hugh Knowles Center, Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208