1
Bissmeyer SRS, Goldsworthy RL. Combining Place and Rate of Stimulation Improves Frequency Discrimination in Cochlear Implant Users. Hear Res 2022; 424:108583. PMID: 35930901; PMCID: PMC10849775; DOI: 10.1016/j.heares.2022.108583.
Abstract
In the auditory system, frequency is represented as tonotopic and temporal response properties of the auditory nerve. While these response properties are inextricably linked in normal hearing, cochlear implants can separately excite tonotopic location and temporal synchrony using different electrodes and stimulation rates, respectively. This separation allows for the investigation of the contributions of tonotopic and temporal cues for frequency discrimination. The present study examines frequency discrimination in adult cochlear implant users as conveyed by electrode position and stimulation rate, separately and combined. The working hypothesis is that frequency discrimination is better provided by place and rate cues combined compared to either cue alone. This hypothesis was tested in two experiments. In the first experiment, frequency discrimination needed for melodic contour identification was measured for frequencies near 100, 200, and 400 Hz using frequency allocation modeled after clinical processors. In the second experiment, frequency discrimination for pitch ranking was measured for frequencies between 100 and 1600 Hz using an experimental frequency allocation designed to provide better access to place cues. The results of both experiments indicate that frequency discrimination is better with place and rate cues combined than with either cue alone. These results clarify how signal processing for cochlear implants could better encode frequency into place and rate of electrical stimulation. Further, the results provide insight into the contributions of place and rate cues for pitch.
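As an illustration of how a processor might encode a target frequency into both cues, the sketch below picks an electrode from a frequency allocation for the place cue and sets the pulse rate to the target frequency for the rate cue. This is not the authors' implementation; the electrode count and the log-spaced allocation are assumptions for illustration.

```python
import numpy as np

def place_and_rate(freq_hz, n_electrodes=16, f_lo=100.0, f_hi=8000.0):
    """Map a target frequency to (electrode index, pulse rate in Hz).

    Assumes a log-spaced frequency allocation across the array; the
    rate cue simply follows the target frequency.
    """
    edges = np.geomspace(f_lo, f_hi, n_electrodes + 1)
    electrode = int(np.searchsorted(edges, freq_hz, side="right") - 1)
    electrode = min(max(electrode, 0), n_electrodes - 1)  # clamp to array
    return electrode, freq_hz

print(place_and_rate(200.0))  # -> (2, 200.0)
```

Either cue could be delivered alone by holding the other fixed, which is the manipulation the two experiments exploit.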
Affiliation(s)
- Susan R S Bissmeyer
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States; Auditory Research Center, Health Research Association, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, 1640 Marengo Street Suite 326, Los Angeles, CA 90033, United States.
- Raymond L Goldsworthy
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States; Auditory Research Center, Health Research Association, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, 1640 Marengo Street Suite 326, Los Angeles, CA 90033, United States
2
Bissmeyer SRS, Ortiz JR, Gan H, Goldsworthy RL. Computer-based musical interval training program for Cochlear implant users and listeners with no known hearing loss. Front Neurosci 2022; 16:903924. PMID: 35968373; PMCID: PMC9363605; DOI: 10.3389/fnins.2022.903924.
Abstract
A musical interval is the difference in pitch between two sounds. The way that musical intervals are used in melodies relative to the tonal center of a key can strongly affect the emotion conveyed by the melody. The present study examines musical interval identification in people with no known hearing loss and in cochlear implant users. Pitch resolution varies widely among cochlear implant users, with average resolution an order of magnitude worse than in normal hearing. The present study considers the effect of training on musical interval identification and tests for correlations between low-level psychophysics and higher-level musical abilities. The overarching hypothesis is that cochlear implant users are limited in their ability to identify musical intervals both by low-level access to frequency cues for pitch and by higher-level mapping of the novel encoding of pitch that implants provide. Participants completed a 2-week online interval identification training program. The benchmark tests considered before and after interval identification training were pure tone detection thresholds, pure tone frequency discrimination, fundamental frequency discrimination, tonal and rhythm comparisons, and interval identification. The results indicate strong correlations between measures of pitch resolution and interval identification; however, only a small effect of training on interval identification was observed for the cochlear implant users. Discussion focuses on improving access to pitch cues for cochlear implant users and on improving auditory training for musical intervals.
Affiliation(s)
- Susan Rebekah Subrahmanyam Bissmeyer
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Jacqueline Rose Ortiz
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Helena Gan
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Raymond Lee Goldsworthy
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
3
Abstract
Cochlear implants have been the most successful neural prosthesis, with one million users globally. Researchers used the source-filter model and the speech vocoder to design the modern multi-channel implants, allowing implantees to achieve 70%-80% correct sentence recognition in quiet, on average. Researchers have also used the cochlear implant to probe basic mechanisms underlying loudness, pitch, and cortical plasticity. While advances in front-end processing have improved speech recognition in noise, speech recognition in quiet with a unilateral implant has plateaued since the early 1990s. This lack of progress calls for action on re-designing the cochlear stimulating interface and for collaboration with the broader neurotechnology community.
Affiliation(s)
- Fan-Gang Zeng
- Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology-Head and Neck Surgery and Center for Hearing Research, University of California, 110 Medical Sciences E, Irvine, California 92697, USA
4
Goldsworthy RL, Bissmeyer SRS, Camarena A. Advantages of Pulse Rate Compared to Modulation Frequency for Temporal Pitch Perception in Cochlear Implant Users. J Assoc Res Otolaryngol 2022; 23:137-150. PMID: 34981263; PMCID: PMC8782986; DOI: 10.1007/s10162-021-00828-w.
Abstract
Most cochlear implants encode the fundamental frequency of periodic sounds by amplitude modulation of constant-rate pulsatile stimulation. Pitch perception provided by such stimulation strategies is markedly poor. Two experiments are reported here that consider potential advantages of pulse rate compared to modulation frequency for providing stimulation timing cues for pitch. The first experiment examines beat frequency distortion that occurs when modulating constant-rate pulsatile stimulation. This distortion has been reported on previously, but the results presented here indicate that distortion occurs for higher stimulation rates than previously reported. The second experiment examines pitch resolution as provided by pulse rate compared to modulation frequency. The results indicate that pitch discrimination is better with pulse rate than with modulation frequency. The advantage was large for rates near what has been suggested as the upper limit of temporal pitch perception conveyed by cochlear implants. The results are relevant to sound processing design for cochlear implants particularly for algorithms that encode fundamental frequency into deep envelope modulations or into precisely timed pulsatile stimulation.
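The beat-frequency distortion examined in the first experiment can be illustrated with a minimal sketch (the parameter values are assumed, not taken from the study): sampling a sinusoidal envelope at a constant pulse rate creates distortion components at |rate - n * f_mod|, and the lowest of these acts as a spurious "beat" that can compete with the intended modulation-frequency pitch cue.

```python
import numpy as np

fs = 48_000          # simulation sampling rate (Hz), assumed
pulse_rate = 1000    # constant-rate carrier, pulses per second, assumed
f_mod = 120          # modulation frequency (Hz), assumed
dur = 1.0            # stimulus duration (s)

# Constant-rate pulse train amplitude-modulated by a raised sinusoid.
t = np.arange(int(fs * dur)) / fs
pulses = np.zeros_like(t)
pulses[(np.arange(0, dur, 1 / pulse_rate) * fs).astype(int)] = 1.0
envelope = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))
stimulus = pulses * envelope

# Sampling the envelope at the pulse rate aliases energy to
# |pulse_rate - n * f_mod|; the lowest such component is the beat.
beat = abs(pulse_rate - round(pulse_rate / f_mod) * f_mod)
print(f"lowest predicted beat component: {beat} Hz")  # -> 40 Hz
```

For the assumed values, a 40 Hz component falls well inside the temporal pitch range, which is the kind of interaction the first experiment quantifies at different stimulation rates.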
Affiliation(s)
- Raymond L Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA.
- Susan R S Bissmeyer
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
5
Gao X, Grayden D, McDonnell M. Unifying information theory and machine learning in a model of electrode discrimination in cochlear implants. PLoS One 2021; 16:e0257568. PMID: 34543336; PMCID: PMC8451994; DOI: 10.1371/journal.pone.0257568.
Abstract
Despite the development and success of cochlear implants over several decades, wide inter-subject variability in speech perception is reported. This suggests that cochlear implant user-dependent factors limit speech perception at the individual level. Clinical studies have demonstrated the importance of the number, placement, and insertion depths of electrodes for speech recognition abilities. However, these factors do not account for all inter-subject variability, and the extent to which they limit speech recognition has not been quantified. In this paper, an information theoretic method and a machine learning technique are unified in a model to investigate the extent to which key factors limit cochlear implant electrode discrimination. The framework uses a neural network classifier to predict which electrode is stimulated for a given simulated activation pattern of the auditory nerve; mutual information is then estimated between the actual and predicted stimulated electrodes. We also investigate how and to what extent the choices of parameters affect the performance of the model. The advantages of this framework include: i) electrode discrimination ability is quantified using information theory, ii) it provides a flexible framework that may be used to investigate the key factors that limit the performance of cochlear implant users, and iii) it provides insights for future modeling studies of other types of neural prostheses.
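The information-theoretic step of such a framework can be sketched as follows (an illustrative re-implementation, not the authors' code): mutual information between the stimulated and the predicted electrode is estimated from the classifier's confusion matrix, so perfect discrimination of N electrodes yields log2(N) bits and chance-level confusion yields zero.

```python
import numpy as np

def mutual_information_bits(confusion):
    """Mutual information (bits) between actual and predicted electrode,
    estimated from a classifier confusion matrix (rows: actual,
    columns: predicted, entries: trial counts)."""
    joint = confusion / confusion.sum()          # joint probability p(x, y)
    px = joint.sum(axis=1, keepdims=True)        # marginal over actual
    py = joint.sum(axis=0, keepdims=True)        # marginal over predicted
    nz = joint > 0                               # avoid log(0) terms
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Perfect discrimination of 4 electrodes -> log2(4) = 2 bits.
perfect = np.eye(4) * 25
print(mutual_information_bits(perfect))          # -> 2.0
```

A uniform confusion matrix (the classifier guessing at chance) gives 0 bits, which is the floor against which electrode discriminability is measured.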
Affiliation(s)
- Xiao Gao
- Department of Biomedical Engineering, University of Melbourne, Parkville, VIC, Australia
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- David Grayden
- Department of Biomedical Engineering, University of Melbourne, Parkville, VIC, Australia
- Mark McDonnell
- Computational Learning Systems Laboratory, School of Information Technology & Mathematical Sciences, University of South Australia, Mawson Lakes, SA, Australia
6
Tabas A, von Kriegstein K. Neural modelling of the encoding of fast frequency modulation. PLoS Comput Biol 2021; 17:e1008787. PMID: 33657098; PMCID: PMC7959405; DOI: 10.1371/journal.pcbi.1008787.
Abstract
Frequency modulation (FM) is a basic constituent of vocalisation in many animals as well as in humans. In human speech, short rising and falling FM-sweeps of around 50 ms duration, called formant transitions, characterise individual speech sounds. There are two representations of FM in the ascending auditory pathway: a spectral representation, holding the instantaneous frequency of the stimuli; and a sweep representation, consisting of neurons that respond selectively to FM direction. To date, computational models have used feedforward mechanisms to explain FM encoding. However, neuroanatomy shows that there are massive feedback projections in the auditory pathway. Here, we found that a classical FM-sweep perceptual effect, the sweep pitch shift, cannot be explained by standard feedforward processing models. We hypothesised that the sweep pitch shift is caused by a predictive feedback mechanism. To test this hypothesis, we developed a novel model of FM encoding incorporating a predictive interaction between the sweep and spectral representations. The model was designed to encode sweeps of the duration, modulation rate, and modulation shape of formant transitions. It fully accounted for experimental data that we acquired in a perceptual experiment with human participants, as well as for previously published experimental results. We also designed a new class of stimuli for a second perceptual experiment to further validate the model. Combined, our results indicate that predictive interaction between the frequency-encoding and direction-encoding neural representations plays an important role in the neural processing of FM. In the brain, this mechanism is likely to occur at early stages of the processing hierarchy.
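For context, a formant-transition-like stimulus of the kind modeled here can be synthesized by integrating an instantaneous-frequency trajectory; the sweep endpoints and sampling rate below are assumptions for illustration, not values from the study.

```python
import numpy as np

fs = 44_100                        # sampling rate (Hz), assumed
dur = 0.05                         # ~formant-transition duration (s)
f0, f1 = 1000.0, 2000.0            # rising-sweep endpoints (Hz), assumed

# Linear instantaneous-frequency trajectory, integrated to phase.
t = np.arange(int(fs * dur)) / fs
inst_freq = f0 + (f1 - f0) * t / dur
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
sweep = np.sin(phase)              # 50 ms rising FM-sweep
```

A falling sweep is obtained by swapping `f0` and `f1`; pairing matched rising and falling sweeps is the standard way to probe direction-selective responses.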
Affiliation(s)
- Alejandro Tabas
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Saxony, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Saxony, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
7
Thompson AC, Wise AK, Hart WL, Needham K, Fallon JB, Gunewardene N, Stoddart PR, Richardson RT. Hybrid optogenetic and electrical stimulation for greater spatial resolution and temporal fidelity of cochlear activation. J Neural Eng 2020; 17:056046. PMID: 33036009; DOI: 10.1088/1741-2552/abbff0.
Abstract
OBJECTIVE Compared to electrical stimulation, optogenetic stimulation has the potential to improve the spatial precision of neural activation in neuroprostheses, but it requires intense light and has relatively poor temporal kinetics. We tested the effect of hybrid stimulation, the combination of subthreshold optical and electrical stimuli, on spectral and temporal fidelity in the cochlea by recording multi-unit activity in the inferior colliculus of channelrhodopsin (H134R variant) transgenic mice. APPROACH Pulsed light or biphasic electrical pulses were delivered to cochlear spiral ganglion neurons of acutely deafened mice, either as individual stimuli or as hybrid stimuli in which the timing of the electrical pulse had a varied delay relative to the start of the optical pulse. Response thresholds, spread of activation, and entrainment data were obtained from multi-unit recordings in the auditory midbrain. MAIN RESULTS Facilitation occurred when subthreshold electrical stimuli were applied at the end of, or up to 3.75 ms after, subthreshold optical pulses. The spread of activation resulting from hybrid stimulation was significantly narrower than for electrical-only and optical-only stimulation (p < 0.01), measured at equivalent suprathreshold loudness levels relevant to cochlear implant users. Furthermore, temporal fidelity, measured as the maximum following rate to 300 ms pulse-train bursts at rates up to 240 Hz, was 2.4-fold greater for hybrid than for optical-only stimulation (p < 0.05). SIGNIFICANCE By significantly improving the spectral resolution of electrical-only and optical-only stimulation and the temporal fidelity of optical-only stimulation, hybrid stimulation has the potential to increase the number of perceptually independent stimulation channels in a cochlear implant.
8
Tama BA, Kim DH, Kim G, Kim SW, Lee S. Recent Advances in the Application of Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery. Clin Exp Otorhinolaryngol 2020; 13:326-339. PMID: 32631041; PMCID: PMC7669308; DOI: 10.21053/ceo.2020.00654.
Abstract
This study presents an up-to-date survey of the use of artificial intelligence (AI) in the field of otorhinolaryngology, considering opportunities, research challenges, and research directions. We searched PubMed, the Cochrane Central Register of Controlled Trials, Embase, and the Web of Science. We initially retrieved 458 articles. The exclusion of non-English publications and duplicates yielded a total of 90 remaining studies. These 90 studies were divided into those analyzing medical images, voice, medical devices, and clinical diagnoses and treatments. Most studies (42.2%, 38/90) used AI for image-based analysis, followed by clinical diagnoses and treatments (24 studies). Each of the remaining two subcategories included 14 studies. Machine learning and deep learning have been extensively applied in the field of otorhinolaryngology. However, the performance of AI models varies and research challenges remain.
Affiliation(s)
- Bayu Adhi Tama
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Do Hyun Kim
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Gyuwon Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Soo Whan Kim
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Seungchul Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, Korea
9
Abstract
This study presents a computational model to reproduce the biological dynamics of "listening to music." A biologically plausible model of periodicity pitch detection is proposed and simulated. Periodicity pitch is computed across a range of the auditory spectrum and detected from subsets of activated auditory nerve fibers (ANFs). These activate connected model octopus cells, which trigger model neurons that detect onsets and offsets; these in turn innervate model interval-tuned neurons at the appropriate interval times; finally, a set of common interval-detecting neurons indicates pitch. Octopus cells spike rhythmically with the pitch periodicity of the sound. Batteries of interval-tuned neurons measure the inter-spike intervals of the octopus cells in stopwatch-like fashion, coding interval durations as first spike latencies (FSLs). The FSL-triggered spikes coincide synchronously, through a monolayer spiking neural network, at the corresponding receiver pitch neurons.
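The final read-out idea, pitch from inter-spike interval statistics, can be sketched in a deliberately simplified form: a first-order ISI histogram with a mode pick, rather than the model's FSL coincidence network. The bin width and interval ceiling below are assumptions for illustration.

```python
import numpy as np

def pitch_from_spikes(spike_times, bin_width=1e-4, max_interval=0.02):
    """Estimate pitch (Hz) as the reciprocal of the most common
    inter-spike interval in a first-order ISI histogram."""
    isis = np.diff(np.sort(spike_times))
    isis = isis[isis < max_interval]          # ignore long gaps
    bins = np.arange(0, max_interval + bin_width, bin_width)
    counts, edges = np.histogram(isis, bins=bins)
    mode = edges[np.argmax(counts)] + bin_width / 2   # bin center
    return 1.0 / mode

# Perfectly periodic spiking at 200 Hz -> estimate close to 200 Hz.
spikes = np.arange(0, 0.5, 1 / 200)
print(pitch_from_spikes(spikes))
```

The model described above is considerably richer (it pools intervals across batteries of tuned neurons), but the mode of the interval distribution is the quantity its global maximum is assumed to encode.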
Affiliation(s)
- Frank Klefenz
- Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
- Tamas Harczos
- Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- audifon GmbH & Co. KG, Kölleda, Germany
10
Machine Learning and Cochlear Implantation-A Structured Review of Opportunities and Challenges. Otol Neurotol 2019; 41:e36-e45. PMID: 31644477; DOI: 10.1097/mao.0000000000002440.
Abstract
OBJECTIVE The use of machine learning technology to automate intellectual processes and boost clinical process efficiency in medicine has exploded in the past 5 years. Machine learning excels at automating pattern recognition and at adapting learned representations to new settings. Moreover, machine learning techniques have the advantage of incorporating complexity and are free from many of the limitations of traditional deterministic approaches. Cochlear implants (CI) are a unique fit for machine learning techniques given the need to optimize signal processing for complex environmental scenarios and for individual patients' CI MAPping. However, there are many other opportunities where machine learning may assist CI beyond signal processing. The objective of this review was to synthesize past applications of machine learning technologies for pediatric and adult CI and to describe novel opportunities for research and development. DATA SOURCES The PubMed/MEDLINE, EMBASE, Scopus, and ISI Web of Knowledge databases were mined using a directed search strategy to identify the nexus between the CI and artificial intelligence/machine learning literature. STUDY SELECTION Non-English language articles, articles without an available abstract or full-text, and nonrelevant articles were manually appraised and excluded. Included articles were evaluated for specific machine learning methodologies, content, and application success. DATA SYNTHESIS The database search identified 298 articles. Two hundred fifty-nine articles (86.9%) were excluded based on the available abstract/full-text, language, and relevance. The remaining 39 articles were included in the review analysis. There was a marked increase in year-over-year publications from 2013 to 2018. Applications of machine learning technologies involved speech/signal processing optimization (17 articles; 43.6%), automated evoked potential measurement (6; 15.4%), postoperative performance/efficacy prediction (5; 12.8%), surgical anatomy location prediction (3; 7.7%), and robotics, electrode placement performance, and biomaterials performance (2 each; 5.1%). CONCLUSION The relationship between CI and artificial intelligence is strengthening, with a recent increase in publications reporting successful applications. Considerable effort has been directed toward augmenting signal processing and automating postoperative MAPping using machine learning algorithms. Other promising applications include augmenting CI surgery mechanics and personalized medicine approaches for boosting CI patient performance. Future opportunities include addressing scalability and the research and clinical communities' acceptance of machine learning algorithms as effective techniques.
11
Abstract
Cochlear implants restore hearing in deaf individuals, but speech perception remains challenging. Poor discrimination of spectral components is thought to account for limitations of speech recognition in cochlear implant users. We investigated how combined variations of spectral components along two orthogonal dimensions can maximize neural discrimination between two vowels, as measured by mismatch negativity. Adult cochlear implant users and matched normal-hearing listeners underwent electroencephalographic recordings of event-related potentials in an optimum-1 oddball paradigm. A standard /a/ vowel was delivered in an acoustic free field along with stimuli having a deviant fundamental frequency (+3 and +6 semitones), a deviant first formant making it an /i/ vowel, or combined deviant fundamental frequency and first formant (+3 and +6 semitone /i/ vowels). Speech recognition was assessed with a word repetition task. An analysis of variance was performed on both the amplitude and latency of the mismatch negativity elicited by each deviant vowel, and the strength of correlations between these mismatch negativity parameters and speech recognition, as well as participants' age, was assessed. The amplitude of the mismatch negativity was weaker in cochlear implant users but was maximized by variations of the vowels' first formant. The latency of the mismatch negativity was later in cochlear implant users and was particularly extended by variations of the fundamental frequency. Speech recognition correlated with the parameters of the mismatch negativity elicited by the specific variation of the first formant. This nonlinear effect of acoustic parameters on neural discrimination of vowels has implications for implant processor programming and aural rehabilitation.
Affiliation(s)
- François Prévost
- Department of Speech Pathology and Audiology, McGill University Health Centre, Montreal, Quebec, Canada
- International Laboratory for Brain, Music & Sound Research, Montreal, Quebec, Canada
- Alexandre Lehmann
- International Laboratory for Brain, Music & Sound Research, Montreal, Quebec, Canada
- Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
- Centre for Research on Brain, Language & Music, Montreal, Quebec, Canada
12
Harczos T, Klefenz FM. Modeling Pitch Perception With an Active Auditory Model Extended by Octopus Cells. Front Neurosci 2018; 12:660. PMID: 30319340; PMCID: PMC6167605; DOI: 10.3389/fnins.2018.00660.
Abstract
Pitch is an essential category for musical sensations, and models of pitch perception remain vividly debated. Most rely on mathematical methods defined in the spectral or temporal domain. Our proposed pitch perception model is composed of an active auditory model extended by octopus cells. The active auditory model is the same as that used in Stimulation based on Auditory Modeling (SAM), a successful cochlear implant sound processing strategy; it is extended here by modeling the functional behavior of the octopus cells in the ventral cochlear nucleus and their connections to the auditory nerve fibers (ANFs). The neurophysiological parameterization of the extended model is fully described in the time domain. The model is based on latency-phase encoding and decoding, as octopus cells act as latency-phase rectifiers in their local receptive fields. Pitch is ubiquitously represented by cascaded firing sweeps of octopus cells. Based on the firing patterns of octopus cells, inter-spike interval histograms can be aggregated, in which the location of the global maximum is assumed to encode the pitch.
Affiliation(s)
- Tamas Harczos
- Fraunhofer Institute for Digital Media Technology, Ilmenau, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Goettingen, Germany
- Institut für Mikroelektronik- und Mechatronik-Systeme gGmbH, Ilmenau, Germany