1
Köhler MHA, Weisz N. Cochlear Theta Activity Oscillates in Phase Opposition during Interaural Attention. J Cogn Neurosci 2023; 35:588-602. [PMID: 36626349] [DOI: 10.1162/jocn_a_01959]
Abstract
It is widely established that sensory perception is a rhythmic rather than a continuous process. In the context of auditory perception, this effect has so far been established only at the cortical and behavioral levels. Yet, the unique architecture of the auditory sensory system allows its primary sensory cortex to modulate the processes of its sensory receptors at the cochlear level. Previously, we demonstrated the existence of a genuine cochlear theta (∼6-Hz) rhythm that is modulated in amplitude by intermodal selective attention. As that study's paradigm was not suited to assess attentional effects on the oscillatory phase of cochlear activity, the question of whether attention can also affect the temporal organization of the cochlea's ongoing activity remained open. The present study utilizes an interaural attention paradigm to investigate ongoing otoacoustic activity during a stimulus-free cue-target interval and an omission period of the auditory target in humans. We were able to replicate the existence of the cochlear theta rhythm. Importantly, we found significant phase opposition between the two ears and attention conditions in anticipatory as well as in cochlear oscillatory activity during target presentation. Yet, the amplitude was unaffected by interaural attention. These results are the first to demonstrate that intermodal and interaural attention deploy different aspects of excitation and inhibition at the first level of auditory processing: whereas intermodal attention modulates the level of cochlear activity, interaural attention modulates its timing.
Affiliation(s)
- Nathan Weisz
- University of Salzburg; Paracelsus Medical University, Salzburg, Austria
2
Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. [PMID: 36122686] [PMCID: PMC10017375] [DOI: 10.1016/j.neuroimage.2022.119627]
Abstract
Experimental evidence in animals demonstrates that cortical neurons innervate the subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized that brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found low-α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, TN, USA.
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA; Program in Neuroscience, Indiana University, 1101 E 10th St, Bloomington, IN 47405, USA.
3
Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. [PMID: 35221899] [PMCID: PMC8866963] [DOI: 10.3389/fnins.2022.799787]
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- *Correspondence: Benjamin D. Auerbach
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
4
Otoacoustic Emissions Evoked by the Time-Varying Harmonic Structure of Speech. eNeuro 2021; 8:ENEURO.0428-20.2021. [PMID: 33632811] [PMCID: PMC8046024] [DOI: 10.1523/eneuro.0428-20.2021]
Abstract
The human auditory system is exceptional at comprehending an individual speaker even in complex acoustic environments. Because the inner ear, or cochlea, possesses an active mechanism that can be controlled by subsequent neural processing centers through descending nerve fibers, it may already contribute to speech processing. The cochlear activity can be assessed by recording otoacoustic emissions (OAEs), but employing these emissions to assess speech processing in the cochlea is obstructed by the complexity of natural speech. Here, we develop a novel methodology to measure OAEs that are related to the time-varying harmonic structure of speech [speech-distortion-product OAEs (DPOAEs)]. We then employ the method to investigate the effect of selective attention on the speech-DPOAEs. We provide tentative evidence that the speech-DPOAEs are larger when the corresponding speech signal is attended than when it is ignored. Our development of speech-DPOAEs opens up a path to further investigations of the contribution of the cochlea to the processing of complex real-world signals.
5
Price CN, Bidelman GM. Attention reinforces human corticofugal system to aid speech perception in noise. Neuroimage 2021; 235:118014. [PMID: 33794356] [PMCID: PMC8274701] [DOI: 10.1016/j.neuroimage.2021.118014]
Abstract
Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
Affiliation(s)
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA.
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA.
6
Köhler MHA, Demarchi G, Weisz N. Cochlear activity in silent cue-target intervals shows a theta-rhythmic pattern and is correlated to attentional alpha and theta modulations. BMC Biol 2021; 19:48. [PMID: 33726746] [PMCID: PMC7968255] [DOI: 10.1186/s12915-021-00992-8]
Abstract
BACKGROUND A long-standing debate concerns where in the processing hierarchy of the central nervous system (CNS) selective attention takes effect. In the auditory system, cochlear processes can be influenced via direct and mediated (by the inferior colliculus) projections from the auditory cortex to the superior olivary complex (SOC). Studies illustrating attentional modulations of cochlear responses have so far been limited to sound-evoked responses. The aim of the present study is to investigate intermodal (audiovisual) selective attention in humans simultaneously at the cortical and cochlear level during a stimulus-free cue-target interval. RESULTS We found that cochlear activity in the silent cue-target intervals was modulated by a theta-rhythmic pattern (~6 Hz). While this pattern was present independently of attentional focus, cochlear theta activity was clearly enhanced when attending to the upcoming auditory input. On a cortical level, classical posterior alpha and beta power enhancements were found during auditory selective attention. Interestingly, participants with a stronger release of inhibition in auditory brain regions showed a stronger attentional modulation of cochlear theta activity. CONCLUSIONS These results hint at a putative theta-rhythmic sampling of auditory input at the cochlear level. Furthermore, they point to an interindividually variable engagement of efferent pathways in an attentional context that is linked to processes within and beyond auditory cortical regions.
Affiliation(s)
- Moritz Herbert Albrecht Köhler
- Centre for Cognitive Neuroscience, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria.
- Department of Psychology, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria.
- Gianpaolo Demarchi
- Centre for Cognitive Neuroscience, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria
- Department of Psychology, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria
- Nathan Weisz
- Centre for Cognitive Neuroscience, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria
- Department of Psychology, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria
7
Bell A, Jedrzejczak WW. Muscles in and around the ear as the source of "physiological noise" during auditory selective attention: A review and novel synthesis. Eur J Neurosci 2021; 53:2726-2739. [PMID: 33484588] [DOI: 10.1111/ejn.15122]
Abstract
The sensitivity of the auditory system is regulated via two major efferent pathways: the medial olivocochlear system, which connects to the outer hair cells, and the middle ear muscles (the tensor tympani and stapedius). The role of the former system in suppressing otoacoustic emissions has been extensively studied, but that of the complementary network has not. In studies of selective attention, decreases in otoacoustic emissions from contralateral stimulation have been ascribed to the medial olivocochlear system, but an acknowledged problem is that the results can be confounded by parallel muscle activity. Here, the potential role of the muscle system is examined through a wide but not exhaustive review of the selective attention literature, and the unifying hypothesis is made that the prominent "physiological noise" detected in such experiments, which is reduced during attention, is the sound produced by the muscles in proximity to the ear, including the middle ear muscles. All muscles produce low-frequency sound during contraction, but the implications for selective attention experiments, in which muscles near the ear are likely to be active, have not been adequately considered. This review and synthesis suggests that selective attention may reduce physiological noise in the ear canal by reducing the activity of muscles close to the ear. Indeed, such an experiment has already been done, but the significance of its findings has not been widely appreciated. Further experiments are needed in this area.
Affiliation(s)
- Andrew Bell
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, Australian National University, Canberra, ACT, Australia
8
Asilador A, Llano DA. Top-Down Inference in the Auditory System: Potential Roles for Corticofugal Projections. Front Neural Circuits 2021; 14:615259. [PMID: 33551756] [PMCID: PMC7862336] [DOI: 10.3389/fncir.2020.615259]
Abstract
It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the cortical sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models to explain top-down facilitation are focused on intracortical connections, and consequently, subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily been focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms to facilitate perception of complex sounds and that top-down modulation at early (i.e., subcortical) stages of processing complements modulation at later (i.e., cortical) stages of processing. Finally, we suggest experimental approaches for future studies on this topic.
Affiliation(s)
- Alexander Asilador
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Daniel A. Llano
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Molecular and Integrative Physiology, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
9
Jedrzejczak WW, Milner R, Ganc M, Pilka E, Skarzynski H. No Change in Medial Olivocochlear Efferent Activity during an Auditory or Visual Task: Dual Evidence from Otoacoustic Emissions and Event-Related Potentials. Brain Sci 2020; 10:E894. [PMID: 33238438] [PMCID: PMC7700184] [DOI: 10.3390/brainsci10110894]
Abstract
The medial olivocochlear (MOC) system is thought to be responsible for modulation of peripheral hearing through descending (efferent) pathways. This study investigated the connection between peripheral hearing function and conscious attention during two different modality tasks, auditory and visual. Peripheral hearing function was evaluated by analyzing the amount of suppression of otoacoustic emissions (OAEs) by contralateral acoustic stimulation (CAS), a well-known effect of the MOC. Simultaneously, attention was evaluated by event-related potentials (ERPs). Although the ERPs showed clear differences in processing of auditory and visual tasks, there were no differences in the levels of OAE suppression. We also analyzed OAEs for the highest magnitude resonant mode signal detected by the matching pursuit method, but again did not find a significant effect of task, and no difference in noise level or number of rejected trials. However, for auditory tasks, the amplitude of the P3 cognitive wave negatively correlated with the level of OAE suppression. We conclude that there seems to be no change in MOC function when performing different modality tasks, although the cortex still remains able to modulate some aspects of MOC activity.
Affiliation(s)
- W. Wiktor Jedrzejczak
- Institute of Physiology and Pathology of Hearing, ul. M. Mochnackiego 10, 02-042 Warsaw, Poland; (R.M.); (M.G.); (E.P.); (H.S.)
- World Hearing Center, ul. Mokra 17, 05-830 Nadarzyn, Poland
- Rafal Milner
- Institute of Physiology and Pathology of Hearing, ul. M. Mochnackiego 10, 02-042 Warsaw, Poland; (R.M.); (M.G.); (E.P.); (H.S.)
- World Hearing Center, ul. Mokra 17, 05-830 Nadarzyn, Poland
- Malgorzata Ganc
- Institute of Physiology and Pathology of Hearing, ul. M. Mochnackiego 10, 02-042 Warsaw, Poland; (R.M.); (M.G.); (E.P.); (H.S.)
- World Hearing Center, ul. Mokra 17, 05-830 Nadarzyn, Poland
- Edyta Pilka
- Institute of Physiology and Pathology of Hearing, ul. M. Mochnackiego 10, 02-042 Warsaw, Poland; (R.M.); (M.G.); (E.P.); (H.S.)
- World Hearing Center, ul. Mokra 17, 05-830 Nadarzyn, Poland
- Henryk Skarzynski
- Institute of Physiology and Pathology of Hearing, ul. M. Mochnackiego 10, 02-042 Warsaw, Poland; (R.M.); (M.G.); (E.P.); (H.S.)
- World Hearing Center, ul. Mokra 17, 05-830 Nadarzyn, Poland
10
Riecke L, Marianu IA, De Martino F. Effect of Auditory Predictability on the Human Peripheral Auditory System. Front Neurosci 2020; 14:362. [PMID: 32351361] [PMCID: PMC7174672] [DOI: 10.3389/fnins.2020.00362]
Abstract
Auditory perception is facilitated by prior knowledge about the statistics of the acoustic environment. Predictions about upcoming auditory stimuli are processed at various stages along the human auditory pathway, including the cortex and midbrain. Whether such auditory predictions are also processed at hierarchically lower stages, in the peripheral auditory system, is unclear. To address this question, we assessed outer hair cell (OHC) activity in response to isochronous tone sequences and varied the predictability and behavioral relevance of the individual tones (by manipulating tone-to-tone probabilities and the human participants' task, respectively). We found that predictability alters the amplitude of distortion-product otoacoustic emissions (DPOAEs, a measure of OHC activity) in a manner that depends on the behavioral relevance of the tones. Simultaneously recorded cortical responses showed a significant effect of both predictability and behavioral relevance of the tones, indicating that these experimental manipulations were effective in central auditory processing stages. Our results provide evidence for a top-down effect on the processing of auditory predictability in the human peripheral auditory system, in line with previous studies showing peripheral effects of auditory attention.
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Irina-Andreea Marianu
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States
11
Joseph J, Suman A, Jayasree GK, Prabhu P. Evaluation of Contralateral Suppression of Otoacoustic Emissions in Bharatanatyam Dancers and Non-Dancers. J Int Adv Otol 2020; 15:118-120. [PMID: 30541728] [DOI: 10.5152/iao.2018.5645]
Abstract
OBJECTIVES There is limited literature regarding the objective estimation of auditory attention in healthy individuals who regularly practice dance. This study attempted to evaluate the contralateral suppression of otoacoustic emissions (OAE) in Bharatanatyam dancers and non-dancers. MATERIALS AND METHODS The study included 40 adults (20 dancers and 20 non-dancers) with normal hearing. The differences in the contralateral suppression of distortion product OAE between the groups were compared. RESULTS The results of the present study revealed that there was an increased amount of suppression of OAE in dancers compared with non-dancers. This suggests that dance practice enhances sensory perception and improves auditory attention. The constant practice of dance could have led to plasticity of the efferent auditory system. CONCLUSION Thus, dance training may be used to strengthen efferent auditory system functioning. However, further studies with a larger sample size are essential for better generalization of the results.
Affiliation(s)
- Joel Joseph
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- Ankita Suman
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- G K Jayasree
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- Prashanth Prabhu
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
12
Ortega-Llebaria M, Olson DJ, Tuninetti A. Explaining Cross-Language Asymmetries in Prosodic Processing: The Cue-Driven Window Length Hypothesis. Lang Speech 2019; 62:701-736. [PMID: 30444184] [DOI: 10.1177/0023830918808823]
Abstract
Cross-language studies have shown that English speakers use suprasegmental cues to lexical stress less consistently than speakers of Spanish and other Germanic languages; accordingly, these studies have attributed this asymmetry to a possible trade-off between the use of vowel reduction and suprasegmental cues in lexical access. We put forward the hypothesis that this "cue trade-off" modulates intonation processing as well, so that English speakers make less use of suprasegmental cues than Spanish speakers when processing intonation in utterances, causing processing asymmetries between these two languages. In three cross-language experiments comparing English and Spanish speakers' prediction of hypo-articulated utterances in focal sentences and reported speech, we have provided evidence for our hypothesis and proposed a mechanism, the Cue-Driven Window Length model, which accounts for the observed cross-language processing asymmetries between English and Spanish at both lexical and utterance levels. Altogether, results from these experiments illustrated in detail how different types of low-level acoustic information (e.g., vowel reduction versus duration) interacted with higher-level expectations based on the speakers' knowledge of intonation, providing support for our hypothesis. These interactions are coherent with an active model of speech perception that entails real-time adjustment to feedback and to information from the context, challenging more traditional models that consider speech perception a passive, bottom-up pattern-matching process.
Affiliation(s)
- Alba Tuninetti
- Western Sydney University, Australia; ARC Centre of Excellence for the Dynamics of Language, Australia
13
Hartmann T, Weisz N. Auditory cortical generators of the Frequency Following Response are modulated by intermodal attention. Neuroimage 2019; 203:116185. [PMID: 31520743] [DOI: 10.1016/j.neuroimage.2019.116185]
Abstract
The existence of the efferent auditory system suggests that brainstem auditory regions could also be sensitive to top-down processes. In electrophysiology, the Frequency Following Response (FFR) to speech stimuli has been used extensively to study brainstem areas. Although the FFR seems a straightforward tool for addressing attentional modulations of brainstem regions, the existing results are inconsistent. Moreover, the notion that the FFR exclusively represents subcortical generators has been challenged. We aimed to gain a more differentiated perspective on how the generators of the FFR are modulated by attending to either the visual or the auditory input while neural activity was recorded using magnetoencephalography (MEG). In a first step, our results confirm the strong contribution of cortical regions to the FFR. Interestingly, of all regions exhibiting a measurable FFR response, only the right primary auditory cortex was significantly affected by intermodal attention. By showing a clear cortical contribution to the attentional FFR effect, our work significantly extends previous reports that focus on surface-level recordings only. It underlines the importance of making a greater effort to disentangle the different contributing sources of the FFR and serves as a clear caution against simplistically interpreting the FFR as a brainstem response.
Affiliation(s)
- Thomas Hartmann
- Centre for Cognitive Neuroscience and Department of Psychology, Paris-Lodron Universität Salzburg, Hellbrunnerstraße 34/II, 5020, Salzburg, Austria.
- Nathan Weisz
- Centre for Cognitive Neuroscience and Department of Psychology, Paris-Lodron Universität Salzburg, Hellbrunnerstraße 34/II, 5020, Salzburg, Austria.
14
Beim JA, Oxenham AJ, Wojtczak M. No effects of attention or visual perceptual load on cochlear function, as measured with stimulus-frequency otoacoustic emissions. J Acoust Soc Am 2019; 146:1475. [PMID: 31472524] [PMCID: PMC6715442] [DOI: 10.1121/1.5123391]
Abstract
The effects of selectively attending to a target stimulus in a background containing distractors can be observed in cortical representations of sound as an attenuation of the representation of distractor stimuli. The locus in the auditory system at which attentional modulations first arise is unknown, but anatomical evidence suggests that cortically driven modulation of neural activity could extend as peripherally as the cochlea itself. Previous studies of selective attention have used otoacoustic emissions to probe cochlear function under varying conditions of attention with mixed results. In the current study, two experiments combined visual and auditory tasks to maximize sustained attention, perceptual load, and cochlear dynamic range in an attempt to improve the likelihood of observing selective attention effects on cochlear responses. Across a total of 45 listeners in the two experiments, no systematic effects of attention or perceptual load were observed on stimulus-frequency otoacoustic emissions. The results revealed significant between-subject variability in the otoacoustic-emission measure of cochlear function that does not depend on listener performance in the behavioral tasks and is not related to movement-generated noise. The findings suggest that attentional modulation of auditory information in humans arises at stages of processing beyond the cochlea.
Affiliation(s)
- Jordan A Beim, Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham, Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Magdalena Wojtczak, Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA

15
Abstract
Cholinergic efferent neurons originating in the brainstem innervate the acoustico-lateralis organs (inner ear, lateral line) of vertebrates. These neurons release acetylcholine (ACh) to inhibit hair cells through activation of calcium-dependent potassium channels. In the mammalian cochlea, ACh shunts and suppresses outer hair cell (OHC) electromotility, reducing the essential amplification of basilar membrane motion. Consequently, medial olivocochlear neurons that inhibit OHCs reduce the sensitivity and frequency selectivity of afferent neurons driven by cochlear vibration of inner hair cells (IHCs). The cholinergic synapse on hair cells involves an unusual ionotropic ACh receptor and a near-membrane postsynaptic cistern. Lateral olivocochlear (LOC) neurons modulate type I afferents by still-to-be-defined synaptic mechanisms. Olivocochlear neurons can be activated by a reflex arc that includes the auditory nerve and projections from the cochlear nucleus. They are also subject to modulation by higher-order central auditory interneurons. Through its actions on cochlear hair cells, afferent neurons, and higher centers, the olivocochlear system protects against age-related and noise-induced hearing loss, improves signal coding in noise under certain conditions, modulates selective attention to sensory stimuli, and influences sound localization.
Affiliation(s)
- Paul Albert Fuchs, The Center for Hearing and Balance, Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205-2195
- Amanda M Lauer, The Center for Hearing and Balance, Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205-2195

16
Mattsson TS, Lind O, Follestad T, Grøndahl K, Wilson W, Nordgård S. Contralateral suppression of otoacoustic emissions in a clinical sample of children with auditory processing disorder. Int J Audiol 2019; 58:301-310. [DOI: 10.1080/14992027.2019.1570358]
Affiliation(s)
- Tone Stokkereit Mattsson, Department of Otorhinolaryngology, Head and Neck Surgery, Ålesund Hospital, Ålesund, Norway; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
- Ola Lind, Department of Otorhinolaryngology, Head and Neck Surgery, Haukeland University Hospital, Bergen, Norway
- Turid Follestad, Department of Public Health and General Practice, Norwegian University of Science and Technology, Trondheim, Norway
- Kjell Grøndahl, Department of Clinical Engineering, Haukeland University Hospital, Bergen, Norway
- Wayne Wilson, School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Ståle Nordgård, Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway; Department of Otorhinolaryngology, Head and Neck Surgery, St. Olavs University Hospital, Trondheim, Norway

17
Beim JA, Oxenham AJ, Wojtczak M. Examining replicability of an otoacoustic measure of cochlear function during selective attention. J Acoust Soc Am 2018; 144:2882. [PMID: 30522315] [PMCID: PMC6246073] [DOI: 10.1121/1.5079311]
Abstract
Attention to a target stimulus within a complex scene often results in enhanced cortical representations of the target relative to the background. It remains unclear where along the auditory pathways attentional effects can first be measured. Anatomy suggests that attentional modulation could occur through corticofugal connections extending as far as the cochlea itself. Earlier attempts to investigate the effects of attention on human cochlear processing have revealed small and inconsistent effects. In this study, stimulus-frequency otoacoustic emissions were recorded from a total of 30 human participants as they performed tasks that required sustained selective attention to auditory or visual stimuli. In the first sample of 15 participants, emission magnitudes were significantly weaker when participants attended to the visual stimuli than when they attended to the auditory stimuli, by an average of 5.4 dB. However, no such effect was found in the second sample of 15 participants. When the data were pooled across samples, the average attentional effect was significant, but small (2.48 dB), with 12 of 30 listeners showing a significant effect, based on bootstrap analysis of the individual data. The results highlight the need for considering sources of individual differences and using large sample sizes in future investigations.
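The per-listener bootstrap analysis described above (testing whether an individual's attend-minus-ignore difference in emission magnitude differs from zero) can be sketched roughly as follows. This is a minimal Python illustration, not the authors' actual analysis code; the function name, the percentile-CI approach, and all parameter values are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch

def bootstrap_attention_effect(diff_db, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap test of one listener's mean attend-minus-
    ignore OAE difference (in dB) against zero.

    diff_db : per-trial dB differences for a single listener.
    Returns (mean_effect, (ci_lo, ci_hi), significant).
    """
    diff_db = np.asarray(diff_db, dtype=float)
    # resample trials with replacement and collect the resampled means
    idx = rng.integers(0, diff_db.size, size=(n_boot, diff_db.size))
    boot_means = diff_db[idx].mean(axis=1)
    ci_lo, ci_hi = np.percentile(boot_means, [100 * alpha / 2,
                                              100 * (1 - alpha / 2)])
    significant = bool(not (ci_lo <= 0.0 <= ci_hi))  # CI excludes zero
    return float(diff_db.mean()), (float(ci_lo), float(ci_hi)), significant
```

Applied to each of the 30 listeners separately, a test of this general shape yields the kind of individual-level significance counts the abstract reports.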
Affiliation(s)
- Jordan A Beim, Department of Psychology, N218 Elliott Hall, 75 East River Parkway, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham, Department of Psychology, N218 Elliott Hall, 75 East River Parkway, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Magdalena Wojtczak, Department of Psychology, N218 Elliott Hall, 75 East River Parkway, University of Minnesota, Minneapolis, Minnesota 55455, USA

18
Francis NA, Zhao W, Guinan JJ Jr. Auditory Attention Reduced Ear-Canal Noise in Humans by Reducing Subject Motion, Not by Medial Olivocochlear Efferent Inhibition: Implications for Measuring Otoacoustic Emissions During a Behavioral Task. Front Syst Neurosci 2018; 12:42. [PMID: 30271329] [PMCID: PMC6146202] [DOI: 10.3389/fnsys.2018.00042]
Abstract
Otoacoustic emissions (OAEs) are often measured to non-invasively determine activation of medial olivocochlear (MOC) efferents in humans. Usually these experiments assume that ear-canal noise remains constant. However, changes in ear-canal noise have been reported in some behavioral experiments. We studied the variability of ear-canal noise in eight subjects who performed a two-interval-forced-choice (2IFC) sound-level-discrimination task on monaural tone pips in masker noise. Ear-canal noise was recorded directly from the unstimulated ear opposite the task ear. Recordings were also made with similar sounds presented, but no task done. In task trials, ear-canal noise was reduced at the time the subject did the discrimination, relative to the ear-canal noise level earlier in the trial. In two subjects, there was a decrease in ear-canal noise, primarily at 1-2 kHz, with a time course similar to that expected from inhibition by MOC activity elicited by the task-ear masker noise. These were the only subjects with spontaneous OAEs (SOAEs). We hypothesize that the SOAEs were inhibited by MOC activity elicited by the task-ear masker. Based on the standard rationale in OAE experiments that large bursts of ear-canal noise are artifacts due to subject movement, ear-canal noise bursts above a sound-level criterion were removed. As the criterion was lowered and more high- and moderate-level ear-canal noise bursts were removed, the reduction in ear-canal noise level at the time of the 2IFC discrimination decreased to almost zero, for the six subjects without SOAEs. This pattern is opposite that expected from MOC-induced inhibition (which is greater on lower-level sounds), but can be explained by the hypothesis that subjects move less and create fewer bursts of ear-canal noise when they concentrate on doing the task. In no-task trials for these six subjects, the ear-canal noise level was little changed throughout the trial. 
Our results show that measurements of MOC effects on OAEs must measure and account for changes in ear-canal noise, especially in behavioral experiments. The results also provide a novel way of showing the time course of the buildup of attention via the time course of the reduction in ear-canal noise.
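The level-criterion artifact rejection described above (discarding ear-canal noise bursts above a sound-level criterion, then re-estimating the remaining noise level) can be sketched as below. This is a hedged illustration only; the function name and the power-domain averaging convention are assumptions, not taken from the paper.

```python
import numpy as np

def noise_level_after_rejection(segment_rms_db, criterion_db):
    """Mean ear-canal noise level (dB) after discarding segments whose
    RMS exceeds a rejection criterion.

    segment_rms_db : per-segment RMS levels in dB.
    criterion_db   : segments above this level are treated as movement
                     artifacts and removed before averaging.
    """
    levels = np.asarray(segment_rms_db, dtype=float)
    kept = levels[levels <= criterion_db]
    if kept.size == 0:
        raise ValueError("criterion rejects every segment")
    # average on a power scale, then convert back to dB
    return float(10.0 * np.log10(np.mean(10.0 ** (kept / 10.0))))
```

Sweeping `criterion_db` downward and recomputing the noise level at the discrimination interval reproduces the logic of the analysis: if the task-related noise reduction shrinks toward zero as more bursts are rejected, the original reduction is attributable to movement artifacts rather than MOC inhibition.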
Affiliation(s)
- Nikolas A. Francis, Speech and Hearing Bioscience and Technology, Harvard-Massachusetts Institute of Technology (MIT) Division of Health Sciences and Technology, Cambridge, MA, United States; Eaton Peabody Laboratories, Department of Otolaryngology, Massachusetts Eye and Ear, Boston, MA, United States
- Wei Zhao, Eaton Peabody Laboratories, Department of Otolaryngology, Massachusetts Eye and Ear, Boston, MA, United States; Department of Otolaryngology, Harvard Medical School, Harvard University, Boston, MA, United States
- John J. Guinan Jr., Speech and Hearing Bioscience and Technology, Harvard-Massachusetts Institute of Technology (MIT) Division of Health Sciences and Technology, Cambridge, MA, United States; Eaton Peabody Laboratories, Department of Otolaryngology, Massachusetts Eye and Ear, Boston, MA, United States; Department of Otolaryngology, Harvard Medical School, Harvard University, Boston, MA, United States

19
Lopez-Poveda EA. Olivocochlear Efferents in Animals and Humans: From Anatomy to Clinical Relevance. Front Neurol 2018; 9:197. [PMID: 29632514] [PMCID: PMC5879449] [DOI: 10.3389/fneur.2018.00197]
Abstract
Olivocochlear efferents allow the central auditory system to adjust the functioning of the inner ear during active and passive listening. While many aspects of efferent anatomy, physiology and function are well established, others remain controversial. This article reviews the current knowledge on olivocochlear efferents, with emphasis on human medial efferents. The review covers (1) the anatomy and physiology of olivocochlear efferents in animals; (2) the methods used for investigating this auditory feedback system in humans, their limitations and best practices; (3) the characteristics of medial-olivocochlear efferents in humans, with a critical analysis of some discrepancies across human studies and between animal and human studies; (4) the possible roles of olivocochlear efferents in hearing, discussing the evidence in favor and against their role in facilitating the detection of signals in noise and in protecting the auditory system from excessive acoustic stimulation; and (5) the emerging association between abnormal olivocochlear efferent function and several health conditions. Finally, we summarize some open issues and introduce promising approaches for investigating the roles of efferents in human hearing using cochlear implants.
Affiliation(s)
- Enrique A Lopez-Poveda, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain

20
Lewis JD. Synchronized Spontaneous Otoacoustic Emissions Provide a Signal-to-Noise Ratio Advantage in Medial-Olivocochlear Reflex Assays. J Assoc Res Otolaryngol 2017; 19:53-65. [PMID: 29134475] [DOI: 10.1007/s10162-017-0645-5]
Abstract
Detection of medial olivocochlear-induced (MOC) changes to transient-evoked otoacoustic emissions (TEOAE) requires high signal-to-noise ratios (SNR). TEOAEs associated with synchronized spontaneous (SS) OAEs exhibit higher SNRs than TEOAEs in the absence of SSOAEs, potentially making the former well suited for MOC assays. Although SSOAEs may complicate interpretation of MOC-induced changes to TEOAE latency, recent work suggests SSOAEs are not a problem in non-latency-dependent MOC assays. The current work examined the potential benefit of SSOAEs in TEOAE-based assays of the MOC efferents. It was hypothesized that the higher SNR afforded by SSOAEs would permit detection of smaller changes to the TEOAE upon activation of the MOC reflex. TEOAEs were measured in 24 female subjects in the presence and absence of contralateral broadband noise. Frequency bands with and without SSOAEs were identified for each subject. The prevalence of TEOAEs and statistically significant MOC effects were highest in frequency bands that also contained SSOAEs. The median TEOAE SNR in frequency bands with SSOAEs was approximately 8 dB higher than the SNR in frequency bands lacking SSOAEs. After normalizing by TEOAE amplitude, MOC-induced changes to the TEOAE were similar between frequency bands with and without SSOAEs. Smaller MOC effects were detectable across a subset of the frequency bands with SSOAEs, presumably due to a higher TEOAE SNR. These findings demonstrate that SSOAEs are advantageous in assays of the MOC reflex.
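A common way to estimate a TEOAE signal-to-noise ratio in dB, of the kind compared across frequency bands above, is to split the repeated recordings into two interleaved trial buffers: the emission survives coherent averaging while the noise largely cancels in the buffer difference. The sketch below is a rough illustration under that assumption; the buffer convention and function name are illustrative, not the paper's specific method.

```python
import numpy as np

def teoae_snr_db(trials):
    """Estimate an emission SNR in dB from repeated recordings.

    trials : 2-D array (n_trials, n_samples) of per-trial waveforms.
    The signal estimate is the coherent average of two interleaved
    trial buffers; the noise estimate is half their difference, so
    anything not phase-locked across trials ends up in the noise term.
    """
    trials = np.asarray(trials, dtype=float)
    a = trials[0::2].mean(axis=0)      # buffer A: even-numbered trials
    b = trials[1::2].mean(axis=0)      # buffer B: odd-numbered trials
    signal = (a + b) / 2.0             # coherent part survives averaging
    noise = (a - b) / 2.0              # incoherent part remains here

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    return float(20.0 * np.log10(rms(signal) / rms(noise)))
```

Band-limiting `trials` before calling the function would yield per-band SNRs, which is how frequency bands with and without SSOAEs could be compared.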
Affiliation(s)
- James D Lewis, Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, 578 South Stadium Hall, Knoxville, TN 37996, USA

21
Heald SLM, Van Hedger SC, Nusbaum HC. Perceptual Plasticity for Auditory Object Recognition. Front Psychol 2017; 8:781. [PMID: 28588524] [PMCID: PMC5440584] [DOI: 10.3389/fpsyg.2017.00781]
Abstract
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as "noise" in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed.
22
Maruthy S, Kumar UA, Gnanateja GN. Functional Interplay Between the Putative Measures of Rostral and Caudal Efferent Regulation of Speech Perception in Noise. J Assoc Res Otolaryngol 2017; 18:635-648. [PMID: 28447225] [DOI: 10.1007/s10162-017-0623-y]
Abstract
Efferent modulation has been demonstrated to be very important for speech perception, especially in the presence of noise. We examined the functional relationship between two efferent systems, the rostral and caudal efferent pathways, and their individual influences on speech perception in noise. Earlier studies have shown that these two efferent mechanisms were correlated with speech perception in noise. However, previously, these mechanisms were studied in isolation, and their functional relationship with each other was not investigated. We used a correlational design to study the relationship, if any, between these two mechanisms in young and old normal-hearing individuals. We recorded context-dependent brainstem encoding as an index of rostral efferent function and contralateral suppression of otoacoustic emissions as an index of caudal efferent function in groups with good and poor speech perception in noise. These efferent mechanisms were analysed for their relationship with each other and with speech perception in noise. We found that the two efferent mechanisms did not show any functional relationship. Interestingly, both efferent mechanisms correlated with speech perception in noise and even emerged as significant predictors. Based on the data, we posit that the two efferent mechanisms function relatively independently but with a common goal of fine-tuning the afferent input and refining auditory perception in degraded listening conditions.
Affiliation(s)
- Sandeep Maruthy, Electrophysiology Laboratory, Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysore, Karnataka, IN-570006, India
- U Ajith Kumar, Electrophysiology Laboratory, Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysore, Karnataka, IN-570006, India
- G Nike Gnanateja, Electrophysiology Laboratory, Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysore, Karnataka, IN-570006, India

23
Solesio-Jofre E, López-Frutos JM, Cashdollar N, Aurtenetxe S, de Ramón I, Maestú F. The effects of aging on the working memory processes of multimodal information. Aging Neuropsychol Cogn 2016; 24:299-320. [DOI: 10.1080/13825585.2016.1207749]
Abstract
Normal aging is associated with deficits in working memory processes. However, the majority of research has focused on storage or inhibitory processes using unimodal paradigms, without addressing their relationships across different sensory modalities. Hence, we pursued two objectives: first, to examine the effects of aging on storage and inhibitory processes; second, to evaluate aging effects on multisensory integration of visual and auditory stimuli. To this end, young and older participants performed a multimodal task for visual and auditory pairs of stimuli with increasing memory load at encoding and interference during retention. Our results showed an age-related increase in vulnerability to interrupting and distracting interference, reflecting inhibitory deficits related to the off-line reactivation and on-line suppression of relevant and irrelevant information, respectively. Storage capacity was impaired with increasing task demands in both age groups. Additionally, older adults showed a deficit in multisensory integration, with poorer performance for new visual compared to new auditory information.
Affiliation(s)
- Elena Solesio-Jofre, Department of Basic Psychology, University Autónoma of Madrid, Madrid, Spain
- Nathan Cashdollar, Centro Interdipartimentale Mente/Cervello (CIMeC), Università degli Studi di Trento, Trento, Italy
- Sara Aurtenetxe, Laboratory for Cognitive and Computational Neuroscience, Centre for Biomedical Technology, Madrid University of Technology/Complutense University of Madrid, Madrid, Spain
- Ignacio de Ramón, Laboratory for Cognitive and Computational Neuroscience, Centre for Biomedical Technology, Madrid University of Technology/Complutense University of Madrid, Madrid, Spain
- Fernando Maestú, Laboratory for Cognitive and Computational Neuroscience, Centre for Biomedical Technology, Madrid University of Technology/Complutense University of Madrid, Madrid, Spain; Department of Basic Psychology II (Cognitive Processes), Complutense University of Madrid, Madrid, Spain

24
Evidence against attentional state modulating scalp-recorded auditory brainstem steady-state responses. Brain Res 2015; 1626:146-64. [PMID: 26187756] [DOI: 10.1016/j.brainres.2015.06.038]
Abstract
Auditory brainstem responses (ABRs) and their steady-state counterpart (subcortical steady-state responses, SSSRs) are generally thought to be insensitive to cognitive demands. However, a handful of studies report that SSSRs are modulated depending on the subject's focus of attention, either towards or away from an auditory stimulus. Here, we explored whether attentional focus affects the envelope-following response (EFR), which is a particular kind of SSSR, and if so, whether the effects are specific to which sound elements in a sound mixture a subject is attending (selective auditory attentional modulation), specific to attended sensory input (inter-modal attentional modulation), or insensitive to attentional focus. We compared the strength of EFR-stimulus phase locking in human listeners under various tasks: listening to a monaural stimulus, selectively attending to a particular ear during dichotic stimulus presentation, and attending to visual stimuli while ignoring dichotic auditory inputs. We observed no systematic changes in the EFR across experimental manipulations, even though cortical EEG revealed attention-related modulations of alpha activity during the task. We conclude that attentional effects, if any, on human subcortical representation of sounds cannot be observed robustly using EFRs. This article is part of a Special Issue entitled SI: Prediction and Attention.
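The strength of EFR-stimulus phase locking referred to above is commonly quantified as an inter-trial phase-locking value (PLV) at the stimulation frequency. The sketch below is a minimal illustration, assuming trials are aligned epochs sampled at a known rate; the function name and the FFT-bin phase extraction are illustrative rather than the study's exact pipeline.

```python
import numpy as np

def efr_plv(trials, fs, f0):
    """Inter-trial phase-locking value (PLV) at frequency f0.

    trials : 2-D array (n_trials, n_samples) of aligned epochs.
    fs     : sampling rate in Hz.
    f0     : frequency of interest (e.g., the envelope rate) in Hz.
    Each trial's phase is taken at the FFT bin nearest f0; the PLV is
    the magnitude of the mean unit phasor across trials
    (1 = perfect locking, values near 0 = random phase).
    """
    trials = np.asarray(trials, dtype=float)
    n = trials.shape[1]
    k = int(round(f0 * n / fs))               # FFT bin index for f0
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return float(np.abs(np.mean(np.exp(1j * phases))))
```

Comparing such a PLV between attention conditions is one way the "no systematic changes in the EFR" conclusion could be operationalized.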
25
Walsh KP, Pasanen EG, McFadden D. Changes in otoacoustic emissions during selective auditory and visual attention. J Acoust Soc Am 2015; 137:2737-57. [PMID: 25994703] [PMCID: PMC4441704] [DOI: 10.1121/1.4919350]
Abstract
Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing: the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.
Affiliation(s)
- Kyle P Walsh, Department of Psychology and Center for Perceptual Systems, University of Texas, 1 University Station A8000, Austin, Texas 78712-0187, USA
- Edward G Pasanen, Department of Psychology and Center for Perceptual Systems, University of Texas, 1 University Station A8000, Austin, Texas 78712-0187, USA
- Dennis McFadden, Department of Psychology and Center for Perceptual Systems, University of Texas, 1 University Station A8000, Austin, Texas 78712-0187, USA

26
Schröger E, Marzecová A, SanMiguel I. Attention and prediction in human audition: a lesson from cognitive psychophysiology. Eur J Neurosci 2015; 41:641-64. [PMID: 25728182] [PMCID: PMC4402002] [DOI: 10.1111/ejn.12816]
Abstract
Attention is a hypothetical mechanism in the service of perception that facilitates the processing of relevant information and inhibits the processing of irrelevant information. Prediction is a hypothetical mechanism in the service of perception that considers prior information when interpreting the sensorial input. Although both (attention and prediction) aid perception, they are rarely considered together. Auditory attention typically yields enhanced brain activity, whereas auditory prediction often results in attenuated brain responses. However, when strongly predicted sounds are omitted, brain responses to silence resemble those elicited by sounds. Studies jointly investigating attention and prediction revealed that these different mechanisms may interact, e.g. attention may magnify the processing differences between predicted and unpredicted sounds. Following the predictive coding theory, we suggest that prediction relates to predictions sent down from predictive models housed in higher levels of the processing hierarchy to lower levels and attention refers to gain modulation of the prediction error signal sent up to the higher level. As predictions encode contents and confidence in the sensory data, and as gain can be modulated by the intention of the listener and by the predictability of the input, various possibilities for interactions between attention and prediction can be unfolded. From this perspective, the traditional distinction between bottom-up/exogenous and top-down/endogenous driven attention can be revisited and the classic concepts of attentional gain and attentional trace can be integrated.
Affiliation(s)
- Erich Schröger, Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
- Anna Marzecová, Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
- Iria SanMiguel, Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany

27
Smith DW, Keil A. The biological role of the medial olivocochlear efferents in hearing: separating evolved function from exaptation. Front Syst Neurosci 2015; 9:12. [PMID: 25762901] [PMCID: PMC4340171] [DOI: 10.3389/fnsys.2015.00012]
Abstract
Cochlear outer hair cells (OHCs) are remarkable, mechanically active receptors that determine the exquisite sensitivity and frequency selectivity characteristic of the mammalian auditory system. While there are three to four times as many OHCs as inner hair cells, OHCs lack a significant afferent innervation and, instead, receive a rich efferent innervation from medial olivocochlear (MOC) efferent neurons. Activation of the MOC has been shown to exert a considerable suppressive effect on OHC activity. The precise function of these efferent tracts in auditory behavior, however, is a matter of considerable debate. The functions most frequently assigned to the MOC tracts are to protect the cochlea from traumatic damage associated with intense sound and to aid the detection of signals in noise. While considerable evidence shows that interruption of MOC activity exacerbates damage due to high-level sound exposure, the well-characterized MOC physiology and evolutionary studies do not support such a role. Instead, a MOC protective effect is well explained as a byproduct of the suppressive nature of MOC action on OHC mechanical behavior. A role in the enhancement of signals in noise backgrounds, on the other hand, is well supported by (1) an extensive physiological literature, (2) examination of naturally occurring environmental acoustic conditions, and (3) recent data from multiple laboratories showing that the MOC plays a significant role in auditory selective attention by suppressing the response to unattended or ignored stimuli. Based on the extant literature, this presentation will argue that, by combining the suppression of background noise through MOC-mediated rapid adaptation (RA) with the suppression of non-attended signals, in concert with the corticofugal pathways descending from the auditory cortex, the MOC system has one evolved function: to increase the signal-to-noise ratio, aiding the detection of target signals.

By contrast, the MOC system's role in reducing noise damage and the effects of aging in the cochlea may well represent an exaptation, or evolutionary "spandrel".
Affiliation(s)
- David W Smith
- Program in Behavioral and Cognitive Neuroscience, Department of Psychology, University of Florida, Gainesville, FL, USA; Center for Smell and Taste, University of Florida, Gainesville, FL, USA
- Andreas Keil
- Program in Behavioral and Cognitive Neuroscience, Department of Psychology, University of Florida, Gainesville, FL, USA; Center for the Study of Emotion and Attention, University of Florida, Gainesville, FL, USA
28
Mishra SK, Abdala C. Stability of the medial olivocochlear reflex as measured by distortion product otoacoustic emissions. J Speech Lang Hear Res 2015; 58:122-134. [PMID: 25320951 PMCID: PMC4712848 DOI: 10.1044/2014_jslhr-h-14-0013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2014] [Revised: 05/05/2014] [Accepted: 09/18/2014] [Indexed: 06/04/2023]
Abstract
PURPOSE The purpose of this study was to assess the repeatability of a fine-resolution, distortion product otoacoustic emission (DPOAE)-based assay of the medial olivocochlear (MOC) reflex in normal-hearing adults. METHOD Data were collected during 36 test sessions from 4 normal-hearing adults to assess short-term stability and 5 normal-hearing adults to assess long-term stability. DPOAE level and phase measurements were recorded with and without contralateral acoustic stimulation. MOC reflex indices were computed by (a) noting contralateral acoustic stimulation-induced changes in DPOAE level (both absolute and normalized) at fine-structure peaks, (b) recording the effect as a vector difference, and (c) separating DPOAE components and considering a component-specific metric. RESULTS Analyses indicated good repeatability of all indices of the MOC reflex in most frequency ranges. Short- and long-term repeatability were generally comparable. Indices normalized to a subject's own baseline fared best, showing strong short- and long-term stability across all frequency intervals. CONCLUSIONS These results suggest that fine-resolution DPOAE-based measures of the MOC reflex measured at strategic frequencies are stable, and that natural variance over day-to-day and week-to-week intervals is small enough to detect between-group differences and possibly to monitor intervention-related success. However, this is an empirical question that must be directly tested to confirm its utility.
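The general form of the MOC reflex indices described above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the function name and example pressures are hypothetical, and only two of the three index families (level shifts at a fine-structure peak and the vector difference of the complex DPOAE pressures) are shown.

```python
import numpy as np

def moc_indices(p_quiet, p_cas, p_ref=20e-6):
    """Toy MOC-reflex metrics from complex DPOAE ear-canal pressures (Pa),
    measured without (p_quiet) and with (p_cas) contralateral stimulation.
    Names and values are hypothetical, for illustration only."""
    # (a) DPOAE level change in dB at a fine-structure peak,
    # both absolute and normalized to the subject's own baseline
    level_quiet = 20 * np.log10(abs(p_quiet) / p_ref)
    level_cas = 20 * np.log10(abs(p_cas) / p_ref)
    delta_db = level_cas - level_quiet
    norm_shift = delta_db / level_quiet
    # (b) the MOC effect as a vector difference of the complex pressures,
    # which captures amplitude and phase changes together
    vec_diff_db = 20 * np.log10(abs(p_cas - p_quiet) / p_ref)
    return delta_db, norm_shift, vec_diff_db
```

A vector-difference metric can register an efferent effect even when the DPOAE magnitude is unchanged but its phase rotates, which a pure level metric would miss.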
29
Wojtczak M, Beim JA, Oxenham AJ. Exploring the role of feedback-based auditory reflexes in forward masking by Schroeder-phase complexes. J Assoc Res Otolaryngol 2014; 16:81-99. [PMID: 25338224 DOI: 10.1007/s10162-014-0495-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2014] [Accepted: 10/02/2014] [Indexed: 10/24/2022] Open
Abstract
Several studies have postulated that psychoacoustic measures of auditory perception are influenced by efferent-induced changes in cochlear responses, but these postulations have generally remained untested. This study measured the effect of stimulus phase curvature and temporal envelope modulation on the medial olivocochlear reflex (MOCR) and on the middle-ear muscle reflex (MEMR). The role of the MOCR was tested by measuring changes in the ear-canal pressure at 6 kHz in the presence and absence of a band-limited harmonic complex tone with various phase curvatures, centered either at (on-frequency) or well below (off-frequency) the 6-kHz probe frequency. The influence of possible MEMR effects was examined by measuring phase-gradient functions for the elicitor effects and by measuring changes in the ear-canal pressure with a continuous suppressor of the 6-kHz probe. Both on- and off-frequency complex tone elicitors produced significant changes in ear canal sound pressure. However, the pattern of results was not consistent with the earlier hypotheses postulating that efferent effects produce the psychoacoustic dependence of forward-masked thresholds on masker phase curvature. The results also reveal unexpectedly long time constants associated with some efferent effects, the source of which remains unknown.
Affiliation(s)
- Magdalena Wojtczak
- Department of Psychology, University of Minnesota, N218 Elliott Hall, 75 East River Rd., Minneapolis, MN, 55455, USA
30
Srinivasan S, Keil A, Stratis K, Osborne AF, Cerwonka C, Wong J, Rieger BL, Polcz V, Smith DW. Interaural attention modulates outer hair cell function. Eur J Neurosci 2014; 40:3785-92. [PMID: 25302959 DOI: 10.1111/ejn.12746] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2014] [Revised: 09/03/2014] [Accepted: 09/05/2014] [Indexed: 11/27/2022]
Abstract
Mounting evidence suggests that auditory attention tasks may modulate the sensitivity of the cochlea by way of the corticofugal and the medial olivocochlear (MOC) efferent pathways. Here, we studied the extent to which a separate efferent tract, the 'uncrossed' MOC, which functionally connects the two ears, mediates interaural selective attention. We compared distortion product otoacoustic emissions (DPOAEs) in one ear with binaurally presented primaries, using an intermodal target detection task in which participants were instructed to report the occurrence of brief target events (visual changes, tones). Three tasks were compared under identical physical stimulation: (i) report brief tones in the ear in which DPOAE responses were recorded; (ii) report brief tones presented to the contralateral, non-recorded ear; and (iii) report brief phase shifts of a visual grating at fixation. Effects of attention were observed as parallel shifts in overall DPOAE contour level, with DPOAEs relatively higher in overall level when subjects ignored the auditory stimuli and attended to the visual stimulus, compared with both of the auditory-attending conditions. Importantly, DPOAE levels were statistically lowest when attention was directed to the ipsilateral ear in which the DPOAE recordings were made. These data corroborate notions that top-down mechanisms, via the corticofugal and medial efferent pathways, mediate cochlear responses during intermodal attention. New findings show that attending to one ear can significantly alter the physiological response of the contralateral, unattended ear, probably through the uncrossed medial olivocochlear efferent fibers connecting the two ears.
Affiliation(s)
- Sridhar Srinivasan
- Program in Behavioral and Cognitive Neuroscience, Department of Psychology, University of Florida, Gainesville, FL, 32611, USA
31
Walsh KP, Pasanen EG, McFadden D. Selective attention reduces physiological noise in the external ear canals of humans. I: auditory attention. Hear Res 2014; 312:143-59. [PMID: 24732069 PMCID: PMC4036535 DOI: 10.1016/j.heares.2014.03.012] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/12/2013] [Revised: 02/13/2014] [Accepted: 03/28/2014] [Indexed: 11/20/2022]
Abstract
In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures initially were made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects also were observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity, and thus a smaller measure of noise from the ear.
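The physiological-noise metric described above, an RMS level computed over the silent period appended to each stimulus, can be roughly illustrated as below. The function name and the signal passed to it are hypothetical; only the 30-ms window duration mirrors the paper's design.

```python
import numpy as np

def silent_window_noise_db(ear_canal_pa, fs, window_start_s, window_dur_s=0.030):
    """Estimate physiological-noise magnitude from a silent window of
    ear-canal pressure (Pa): RMS level in dB SPL re 20 uPa.
    Illustrative sketch, not the study's processing pipeline."""
    i0 = int(round(window_start_s * fs))
    i1 = i0 + int(round(window_dur_s * fs))
    window = np.asarray(ear_canal_pa[i0:i1], dtype=float)
    rms = np.sqrt(np.mean(window ** 2))
    return 20 * np.log10(rms / 20e-6)
```

Because the window is stimulus-free, any band-limited analysis of it (e.g., around the 4.0-kHz probe) reflects the ear's own noise floor rather than the evoked response.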
Affiliation(s)
- Kyle P Walsh
- Department of Psychology and Center for Perceptual Systems, 1 University Station A8000, University of Texas, Austin, TX 78712-0187, USA.
- Edward G Pasanen
- Department of Psychology and Center for Perceptual Systems, 1 University Station A8000, University of Texas, Austin, TX 78712-0187, USA
- Dennis McFadden
- Department of Psychology and Center for Perceptual Systems, 1 University Station A8000, University of Texas, Austin, TX 78712-0187, USA
32
Heald SLM, Nusbaum HC. Speech perception as an active cognitive process. Front Syst Neurosci 2014; 8:35. [PMID: 24672438 PMCID: PMC3956139 DOI: 10.3389/fnsys.2014.00035] [Citation(s) in RCA: 98] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2013] [Accepted: 02/25/2014] [Indexed: 11/13/2022] Open
Abstract
One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognition. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, such as masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or therapy.
33
Sörqvist P, Rönnberg J. Individual differences in distractibility: An update and a model. Psych J 2014; 3:42-57. [PMID: 25632345 PMCID: PMC4285120 DOI: 10.1002/pchj.47] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2013] [Accepted: 11/18/2013] [Indexed: 11/08/2022]
Abstract
This paper reviews the current literature on individual differences in susceptibility to the effects of background sound on visual-verbal task performance. A large body of evidence suggests that individual differences in working memory capacity (WMC) underpin individual differences in susceptibility to auditory distraction in most tasks and contexts. Specifically, high WMC is associated with a more steadfast locus of attention (thus overruling the call for attention that background noise may evoke) and a more constrained auditory-sensory gating (i.e., less processing of the background sound). The relation between WMC and distractibility is a general framework that may also explain distractibility differences between populations that differ along variables that covary with WMC (such as age, developmental disorders, and personality traits). A neurocognitive task-engagement/distraction trade-off (TEDTOFF) model that summarizes current knowledge is outlined and directions for future research are proposed.
Affiliation(s)
- Patrik Sörqvist
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
34
Chintanpalli A, Heinz MG. The use of confusion patterns to evaluate the neural basis for concurrent vowel identification. J Acoust Soc Am 2013; 134:2988-3000. [PMID: 24116434 PMCID: PMC3799688 DOI: 10.1121/1.4820888] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/29/2012] [Revised: 05/31/2013] [Accepted: 08/26/2013] [Indexed: 06/02/2023]
Abstract
Normal-hearing listeners take advantage of differences in fundamental frequency (F0) to segregate competing talkers. Computational modeling using an F0-based segregation algorithm and auditory-nerve temporal responses captures the gradual improvement in concurrent-vowel identification with increasing F0 difference. This result has been taken to suggest that F0-based segregation is the basis for this improvement; however, evidence suggests that other factors may also contribute. The present study further tested models of concurrent-vowel identification by evaluating their ability to predict the specific confusions made by listeners. Measured human confusions consisted of at most one to three confusions per vowel pair, typically from an error in only one of the two vowels. An improvement due to F0 difference was correlated with spectral differences between vowels; however, simple models based on acoustic and cochlear spectral patterns predicted some confusions not made by human listeners. In contrast, a neural temporal model was better at predicting listener confusion patterns. However, the full F0-based segregation algorithm using these neural temporal analyses was inconsistent across F0 difference in capturing listener confusions, being worse for smaller differences. The inability of this commonly accepted model to fully account for listener confusions suggests that other factors besides F0 segregation are likely to contribute.
35
Perrot X, Collet L. Function and plasticity of the medial olivocochlear system in musicians: a review. Hear Res 2013; 308:27-40. [PMID: 23994434 DOI: 10.1016/j.heares.2013.08.010] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/23/2013] [Revised: 08/11/2013] [Accepted: 08/21/2013] [Indexed: 10/26/2022]
Abstract
The outer hair cells of the organ of Corti are the target of abundant efferent projections from the olivocochlear system. This peripheral efferent auditory subsystem is currently thought to be modulated by central activity via the corticofugal descending auditory system, and to modulate active cochlear micromechanics. Although the function of this efferent subsystem remains unclear, physiological, psychophysical, and modeling data suggest that it may be involved in ear protection against noise damage and auditory perception, especially in the presence of background noise. Moreover, there is mounting evidence that its activity is modulated by auditory and visual attention. A commonly used approach to measure olivocochlear activity noninvasively in humans relies on the suppression of otoacoustic emissions by contralateral noise. Previous studies have found substantial interindividual variability in this effect, and statistical differences have been observed between professional musicians and non-musicians, with stronger bilateral suppression effects in the former. In this paper, we review these studies and discuss various possible interpretations for these findings, including experience-dependent neuroplasticity. We ask whether differences in olivocochlear function between musicians and non-musicians reflect differences in peripheral auditory function or in more central factors, such as top-down attentional modulation.
Affiliation(s)
- Xavier Perrot
- Université de Lyon, Lyon F-69000, France; INSERM U1028, CNRS UMR5292, Université Lyon 1, Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, Lyon F-69000, France; Claude Bernard Lyon 1 University, Lyon F-69500, France; Hospices Civils de Lyon, Lyon Sud Teaching Hospital, Department of Audiology and Orofacial Explorations, Pierre-Bénite F-69310, France.
- Lionel Collet
- Université de Lyon, Lyon F-69000, France; INSERM U1028, CNRS UMR5292, Université Lyon 1, Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, Lyon F-69000, France; Claude Bernard Lyon 1 University, Lyon F-69500, France; Hospices Civils de Lyon, Lyon Sud Teaching Hospital, Department of Audiology and Orofacial Explorations, Pierre-Bénite F-69310, France.
36
Hairston WD, Letowski TR, McDowell K. Task-related suppression of the brainstem frequency following response. PLoS One 2013; 8:e55215. [PMID: 23441150 PMCID: PMC3575437 DOI: 10.1371/journal.pone.0055215] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2012] [Accepted: 12/20/2012] [Indexed: 11/25/2022] Open
Abstract
Recent evidence has shown top-down modulation of the brainstem frequency following response (FFR), generally in the form of signal enhancement from concurrent stimuli or from switching between attention-demanding task stimuli. However, it is also possible that the opposite is true: the addition of a task, rather than a resting, passive state, may suppress the FFR. Here we examined the influence of a subsequent task, and the relevance of the task modality, on signal clarity within the FFR. Participants performed visual and auditory discrimination tasks in the presence of an irrelevant background sound, as well as a baseline consisting of the same background stimuli in the absence of a task. FFR pitch strength and amplitude of the primary frequency response were assessed within non-task stimulus periods in order to examine influences due solely to general cognitive state, independent of stimulus-driven effects. Results show decreased signal clarity with the addition of a task, especially within the auditory modality. We additionally found consistent relationships between the extent of this suppressive effect and perceptual measures such as response time and proclivity towards one sensory modality. Together these results suggest that the current focus of attention can have a global, top-down effect on the quality of encoding early in the auditory pathway.
Affiliation(s)
- W David Hairston
- Human Research and Engineering Directorate, United States Army Research Laboratory, Aberdeen Proving Ground, Maryland, United States of America.
37
Guerreiro MJS, Murphy DR, Van Gerven PWM. Making sense of age-related distractibility: the critical role of sensory modality. Acta Psychol (Amst) 2013; 142:184-94. [PMID: 23337081 DOI: 10.1016/j.actpsy.2012.11.007] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2012] [Revised: 10/26/2012] [Accepted: 11/14/2012] [Indexed: 11/29/2022] Open
Abstract
Older adults are known to have reduced inhibitory control and therefore to be more distractible than young adults. Recently, we have proposed that sensory modality plays a crucial role in age-related distractibility. In this study, we examined age differences in vulnerability to unimodal and cross-modal visual and auditory distraction. A group of 24 younger (mean age=21.7 years) and 22 older adults (mean age=65.4 years) performed visual and auditory n-back tasks while ignoring visual and auditory distraction. Whereas reaction time data indicated that both young and older adults are particularly affected by unimodal distraction, accuracy data revealed that older adults, but not younger adults, are vulnerable to cross-modal visual distraction. These results support the notion that age-related distractibility is modality dependent.
Affiliation(s)
- Maria J S Guerreiro
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
38
Kauramäki J, Jääskeläinen IP, Hänninen JL, Auranen T, Nummenmaa A, Lampinen J, Sams M. Two-stage processing of sounds explains behavioral performance variations due to changes in stimulus contrast and selective attention: an MEG study. PLoS One 2012; 7:e46872. [PMID: 23071654 PMCID: PMC3469590 DOI: 10.1371/journal.pone.0046872] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2011] [Accepted: 09/10/2012] [Indexed: 11/18/2022] Open
Abstract
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally (p = 0.1) replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at ~100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300-400 ms time range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (~100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latency (~300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds.
Affiliation(s)
- Jaakko Kauramäki
- Department of Biomedical Engineering and Computational Science (BECS), Brain and Mind Laboratory, Aalto University School of Science, Espoo, Finland.
39
Srinivasan S, Keil A, Stratis K, Woodruff Carr KL, Smith DW. Effects of cross-modal selective attention on the sensory periphery: cochlear sensitivity is altered by selective attention. Neuroscience 2012; 223:325-32. [PMID: 22871520 DOI: 10.1016/j.neuroscience.2012.07.062] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2012] [Revised: 07/25/2012] [Accepted: 07/27/2012] [Indexed: 10/28/2022]
Abstract
There is increasing evidence that alterations in the focus of attention result in changes in neural responding at the most peripheral levels of the auditory system. To date, however, those studies have not ruled out differences in task demands or overall arousal in explaining differences in responding across intermodal attentional conditions. The present study sought to compare changes in the response of cochlear outer hair cells, employing distortion product otoacoustic emissions (DPOAEs), under different, balanced conditions of intermodal attention. DPOAEs were measured while the participants counted infrequent, brief exemplars of the DPOAE primary tones (auditory attending), and while counting visual targets, which were instances of Gabor gradient phase shifts (visual attending). Corroborating an earlier study from our laboratory, the results show that DPOAEs recorded in the auditory-ignoring condition were significantly higher in overall amplitude, compared with DPOAEs recorded while participants attended to the eliciting primaries; a finding in apparent contradiction with more central measures of intermodal attention. Also consistent with our previous findings, DPOAE rapid adaptation, believed to be mediated by the medial olivocochlear efferents (MOC), was unaffected by changes in intermodal attention. The present findings indicate that manipulations in the conditions of attention, through the corticofugal pathway, and its last relay to cochlear outer hair cells (OHCs), the MOC, alter cochlear sensitivity to sound. These data also suggest that the MOC influence on OHC sensitivity is composed of two independent processes, one of which is under attentional control.
Affiliation(s)
- S Srinivasan
- Program in Behavioral and Cognitive Neuroscience, Department of Psychology, University of Florida, Gainesville, FL, USA
40
Sörqvist P, Stenfelt S, Rönnberg J. Working memory capacity and visual-verbal cognitive load modulate auditory-sensory gating in the brainstem: toward a unified view of attention. J Cogn Neurosci 2012; 24:2147-54. [PMID: 22849400 DOI: 10.1162/jocn_a_00275] [Citation(s) in RCA: 96] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Two fundamental research questions have driven attention research in the past: One concerns whether selection of relevant information among competing, irrelevant, information takes place at an early or at a late processing stage; the other concerns whether the capacity of attention is limited by a central, domain-general pool of resources or by independent, modality-specific pools. In this article, we contribute to these debates by showing that the auditory-evoked brainstem response (an early stage of auditory processing) to task-irrelevant sound decreases as a function of central working memory load (manipulated with a visual-verbal version of the n-back task). Furthermore, individual differences in central/domain-general working memory capacity modulated the magnitude of the auditory-evoked brainstem response, but only in the high working memory load condition. The results support a unified view of attention whereby the capacity of a late/central mechanism (working memory) modulates early precortical sensory processing.
41
Althen H, Wittekindt A, Gaese B, Kössl M, Abel C. Effect of contralateral pure tone stimulation on distortion emissions suggests a frequency-specific functioning of the efferent cochlear control. J Neurophysiol 2012; 107:1962-9. [DOI: 10.1152/jn.00418.2011] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Contralateral acoustic stimulation (CAS) with white noise and pure tone stimuli was used to assess frequency specificity of efferent olivocochlear control of cochlear mechanics in the gerbil. Changes of the cochlear amplifier can be monitored by distortion product otoacoustic emissions (DPOAEs), which are a byproduct of the nonlinear amplification by the outer hair cells. We used the quadratic DPOAE f2-f1 as ipsilateral probe, as it is known to be sensitive to efferent olivocochlear activity. White noise CAS, used to evoke efferent activity, had maximal effects on the DPOAE level for f2-stimulus frequencies of 5–7 kHz. The dominant effect during CAS was a DPOAE level increase of up to 13.5 dB. The frequency specificity of the olivocochlear system was evaluated by presenting pure tones (0.5–38 kHz) as contralateral stimuli to evoke efferent activity. Maximal DPOAE level changes were triggered by CAS frequencies close to the frequency of the DPOAE elicitor tones (tested f2 range: 2.5–15 kHz). The effective CAS frequency range covered 1.4–2.4 octaves and was centered 0.42 octaves below the DPOAE elicitor tone f2. The frequency-specific effect of CAS with pure tones suggests a dedicated central control of mechanical adjustments for peripheral frequency processing.
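The octave-based distances reported above (an effective CAS range of 1.4-2.4 octaves, centered 0.42 octaves below the elicitor f2) follow from a simple base-2 logarithm. The helper below is an illustrative definition, not code from the paper:

```python
import math

def octave_distance(f_hz, f_ref_hz):
    """Signed distance in octaves of f_hz relative to f_ref_hz.
    Negative values lie below the reference frequency."""
    return math.log2(f_hz / f_ref_hz)

# e.g., a CAS tone 0.42 octaves below a hypothetical f2 of 10 kHz
# sits near 7.47 kHz:
f_cas = 10000 * 2 ** -0.42
```

Working in octaves rather than Hz makes the tuning of the efferent effect directly comparable across the tested f2 range (2.5-15 kHz).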
Affiliation(s)
- H. Althen
- Institute for Cell Biology and Neuroscience, Department of Biological Sciences, Goethe University, Frankfurt am Main, Germany
- A. Wittekindt
- Institute for Cell Biology and Neuroscience, Department of Biological Sciences, Goethe University, Frankfurt am Main, Germany
- B. Gaese
- Institute for Cell Biology and Neuroscience, Department of Biological Sciences, Goethe University, Frankfurt am Main, Germany
- M. Kössl
- Institute for Cell Biology and Neuroscience, Department of Biological Sciences, Goethe University, Frankfurt am Main, Germany
- C. Abel
- Institute of Medical Psychology, Goethe University, Frankfurt am Main, Germany
42
Smith DW, Aouad RK, Keil A. Cognitive task demands modulate the sensitivity of the human cochlea. Front Psychol 2012; 3:30. [PMID: 22347870 PMCID: PMC3277933 DOI: 10.3389/fpsyg.2012.00030] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2011] [Accepted: 01/24/2012] [Indexed: 11/25/2022] Open
Abstract
Recent studies lead to the conclusion that focused attention, through the activity of corticofugal and medial olivocochlear (MOC) efferent pathways, modulates activity at the most peripheral aspects of the auditory system within the cochlea. In two experiments, we investigated the effects of different intermodal attention manipulations on the response of outer hair cells (OHCs), and the control exerted by the MOC efferent system. The effect of the MOCs on OHC activity was characterized by measuring the amplitude and rapid adaptation time course of distortion product otoacoustic emissions (DPOAEs). In the first, DPOAE recordings were compared while participants were reading a book and counting the occurrence of the letter "a" (auditory-ignoring) and while counting either short- or long-duration eliciting tones (auditory-attending). In the second, DPOAEs were recorded while subjects watched muted movies with subtitles (auditory-ignoring/visual distraction) and were compared with DPOAEs recorded while subjects counted the same tones (auditory-attending) as in Experiment 1. In both Experiments 1 and 2, the absolute level of the averaged DPOAEs recorded during the auditory-ignoring condition was statistically higher than that recorded in the auditory-attending condition. Efferent-induced rapid adaptation was evident in all DPOAE contours, under all attention conditions, suggesting that two medial efferent processes act independently to determine rapid adaptation, which is unaffected by attention, and the overall DPOAE level, which is significantly affected by changes in the focus of attention.
Affiliation(s)
- David W. Smith
- Program in Behavioral and Cognitive Neuroscience, Department of Psychology, University of Florida, Gainesville, FL, USA
- Center for Smell and Taste, University of Florida, Gainesville, FL, USA
- Department of Otolaryngology-Head and Neck Surgery, University of Florida, Gainesville, FL, USA
- Rony K. Aouad
- Department of Surgery, Duke University Medical Center, Durham, NC, USA
- Andreas Keil
- Program in Behavioral and Cognitive Neuroscience, Department of Psychology, University of Florida, Gainesville, FL, USA
- NIMH Center for the Study of Emotion and Attention, University of Florida, Gainesville, FL, USA
43
Garinis AC, Glattke T, Cone BK. The MOC reflex during active listening to speech. J Speech Lang Hear Res 2011; 54:1464-76. [PMID: 21862678 DOI: 10.1044/1092-4388(2011/10-0223)] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
PURPOSE The purpose of this study was to test the hypothesis that active listening to speech would increase medial olivocochlear (MOC) efferent activity for the right vs. the left ear. METHOD Click-evoked otoacoustic emissions (CEOAEs) were evoked by 60-dB p.e. SPL clicks in 13 normally hearing adults in 4 test conditions for each ear: (a) in quiet; (b) with 60-dB SPL contralateral broadband noise; (c) with words embedded (at -3-dB signal-to-noise ratio [SNR]) in 60-dB SPL contralateral noise, during which listeners directed attention to the words; and (d) at the same SNR as in the 3rd condition, with the words played backwards. RESULTS There was greater suppression during active listening than during passive listening, apparent in the latency range of 6- to 18-ms poststimulus onset. Ear differences in CEOAE amplitude were observed in all conditions, with right-ear amplitudes larger than those for the left. The absolute difference between CEOAE amplitude in quiet and with contralateral noise, a metric of suppression, was equivalent for the right and left ears. When the amplitude differences were normalized, however, suppression was greater when noise was presented to the right ear and the effect was measured for a probe in the left ear. CONCLUSION The findings support the theory that cortical mechanisms involved in listening to speech affect cochlear function through the MOC efferent system.
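The abstract distinguishes an absolute suppression metric (quiet minus contralateral-noise CEOAE amplitude) from a normalized one. A minimal sketch of the two, with made-up amplitudes and an assumed definition of the normalization:

```python
def absolute_suppression_db(amp_quiet_db, amp_noise_db):
    """Absolute suppression: CEOAE amplitude in quiet minus amplitude
    with contralateral noise (both in dB)."""
    return amp_quiet_db - amp_noise_db

def normalized_suppression(amp_quiet_db, amp_noise_db):
    """Normalized suppression: the same difference expressed relative
    to the quiet-condition amplitude (assumed normalization)."""
    return (amp_quiet_db - amp_noise_db) / amp_quiet_db
```

With a larger right-ear baseline (say 12 dB vs. 9 dB) and an equal 2-dB reduction in noise, the absolute metric is identical across ears while the normalized metric differs, which is how equivalent absolute suppression can coexist with asymmetric normalized suppression.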
44
Garinis A, Werner L, Abdala C. The relationship between MOC reflex and masked threshold. Hear Res 2011; 282:128-37. [PMID: 21878379 DOI: 10.1016/j.heares.2011.08.007] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/03/2011] [Revised: 08/04/2011] [Accepted: 08/19/2011] [Indexed: 10/17/2022]
Abstract
Otoacoustic emission (OAE) amplitude can be reduced by acoustic stimulation. This effect is produced by the medial olivocochlear (MOC) reflex. Past studies have shown that the MOC reflex is related to listening in noise and attention. In the present study, the relationship between the strength of the contralateral MOC reflex and masked threshold was investigated in 19 adults. Detection thresholds were determined for a 1000-Hz, 300-ms tone presented simultaneously with one repetition of a 300-ms masker in an ongoing train of masker bursts. Three masking conditions were tested: 1) broadband noise, 2) a fixed-frequency 4-tone complex masker, and 3) a random-frequency 4-tone complex masker. Broadband noise was expected to produce energetic masking, and the tonal maskers were expected to produce informational masking in some listeners. DPOAEs were recorded at fine frequency intervals from 500 to 4000 Hz, with and without contralateral acoustic stimulation. MOC reflex strength was estimated as a reduction in the baseline level and a shift in the frequency of DPOAE fine-structure maxima near 1000 Hz. MOC reflex and psychophysical testing were completed in separate sessions. Individuals with poorer thresholds in broadband noise and in random-frequency maskers were found to have stronger MOC reflexes.
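The abstract's MOC reflex strength estimate, a baseline-level reduction plus a frequency shift of the DPOAE fine-structure maximum near 1000 Hz, can be sketched as follows. The windowing, function name, and synthetic data are assumptions for illustration, not the authors' method:

```python
import numpy as np

def moc_reflex_strength(freqs, dpoae_no_cas, dpoae_cas, f_center=1000.0, bw=200.0):
    """Estimate MOC reflex strength near f_center from DPOAE fine structure
    measured with and without contralateral acoustic stimulation (CAS):
    (1) reduction in the level of the local maximum, and
    (2) shift in the frequency at which that maximum occurs."""
    win = (freqs >= f_center - bw) & (freqs <= f_center + bw)
    f_win = freqs[win]
    i_base = np.argmax(dpoae_no_cas[win])
    i_cas = np.argmax(dpoae_cas[win])
    level_reduction = dpoae_no_cas[win][i_base] - dpoae_cas[win][i_cas]
    freq_shift = f_win[i_cas] - f_win[i_base]
    return level_reduction, freq_shift

# Synthetic illustration: CAS attenuates the ~1-kHz maximum by 2 dB and
# shifts it upward by 10 Hz.
freqs = np.linspace(800.0, 1200.0, 401)
no_cas = 10.0 * np.exp(-((freqs - 1000.0) / 50.0) ** 2)
with_cas = 8.0 * np.exp(-((freqs - 1010.0) / 50.0) ** 2)
drop, shift = moc_reflex_strength(freqs, no_cas, with_cas)
```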
Affiliation(s)
- Angela Garinis
- University of Washington, Department of Speech and Hearing Sciences, 1417 N.E. 42nd Street, Seattle, WA 98105-6246, USA.
45
Markevych V, Asbjørnsen AE, Lind O, Plante E, Cone B. Dichotic listening and otoacoustic emissions: Shared variance between cochlear function and dichotic listening performance in adults with normal hearing. Brain Cogn 2011; 76:332-9. [DOI: 10.1016/j.bandc.2011.02.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2010] [Revised: 02/04/2011] [Accepted: 02/05/2011] [Indexed: 10/18/2022]
46
Medwetsky L. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention. Lang Speech Hear Serv Sch 2011; 42:286-96. [DOI: 10.1044/0161-1461(2011/10-0036)] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose
This article outlines the author’s conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language processing disorders occur when a breakdown in any of these mechanisms impacts an individual’s ability to effectively process and use the information that is heard. The symptoms vary depending on the underlying deficit(s). The primary purpose of this article is to provide the reader with a basic understanding of these mechanisms, and, in turn, enable readers to (a) review the literature concerning processing disorders with discernment and (b) have a foundation for developing a test battery to derive composite profiles of individuals' processing abilities.
Method
A review of the literature, overview of the spoken language processing model, and suggested approach to diagnostic assessment are presented.
Conclusion
Spoken language processing can break down due to a myriad of underlying causes. Central auditory nervous system deficits can impact not only the initial processing of stimuli but possibly the development of effective language skills. On the other hand, deficits in various cognitive and language mechanisms can similarly impact the auditory processing of speech stimuli. Therefore, it is critical to understand how these mechanisms interact and contribute to the processing of speech stimuli.
47
Schmithorst VJ, Holland SK, Plante E. Diffusion tensor imaging reveals white matter microstructure correlations with auditory processing ability. Ear Hear 2011; 32:156-67. [PMID: 21063207 DOI: 10.1097/aud.0b013e3181f7a481] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE Correlation of white matter microstructure with various cognitive processing tasks and with overall intelligence has been previously demonstrated. We investigate the correlation of white matter microstructure with various higher-order auditory processing tasks, including interpretation of speech-in-noise, recognition of low-pass frequency filtered words, and interpretation of time-compressed sentences at two different values of compression. These tests are typically used to diagnose auditory processing disorder (APD) in children. Our hypothesis is that correlations will be seen in white matter tracts connecting the temporal, frontal, and parietal lobes, as well as in callosal pathways, since previous functional imaging studies have shown activation in temporal, frontal, and parietal regions during higher-order auditory processing tasks. In addition, we hypothesize that the regions displaying correlations will vary according to the task because each task uses a different set of skills. DESIGN Diffusion tensor imaging (DTI) data were acquired from a cohort of 17 normal-hearing children aged 9 to 11 yrs. Fractional anisotropy (FA), a measure of white matter fiber tract integrity and organization, was computed and correlated on a voxelwise basis with performance on the auditory processing tasks, controlling for age, sex, and full-scale IQ. RESULTS Divergent correlations of white matter FA depending on the particular auditory processing task were found. Positive correlations were found between FA and speech-in-noise performance in white matter adjoining prefrontal areas, and between FA and filtered-word recognition in the corpus callosum.
Regions exhibiting correlations with time-compressed sentences varied with the degree of compression: the greater compression (the most difficult condition) yielded correlations in white matter adjoining prefrontal areas (dorsal and ventral), whereas the smaller compression (less difficult) yielded correlations in white matter adjoining audiovisual association areas and the posterior cingulate. Only the time-compressed sentences with the lowest degree of compression resulted in positive correlations in the centrum semiovale; all the other tasks resulted in negative correlations. CONCLUSIONS Performance on higher-order auditory processing tasks was shown to depend on brain anatomical connectivity in normal-hearing children aged 9 to 11 yrs. Results support a previously hypothesized dual-stream (dorsal and ventral) model of auditory processing, with higher-order processing tasks relying less on the dorsal stream, related to articulatory networks, and more on the ventral stream, related to semantic comprehension. Results also show that the regions correlating with auditory processing vary according to the specific task, indicating that the neurological bases for the various tests used to diagnose APD in children may be partially independent.
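The voxelwise analysis described above, correlating FA with task performance while controlling for age, sex, and full-scale IQ, amounts to a partial correlation at each voxel. A minimal single-voxel sketch (illustrative, not the authors' pipeline):

```python
import numpy as np

def partial_correlation(fa, score, covariates):
    """Correlation between per-subject FA values and task scores after
    regressing an intercept plus covariates (e.g. age, sex, IQ) out of
    both variables; run per voxel in a whole-brain analysis."""
    X = np.column_stack([np.ones(len(fa)), covariates])
    resid_fa = fa - X @ np.linalg.lstsq(X, fa, rcond=None)[0]
    resid_score = score - X @ np.linalg.lstsq(X, score, rcond=None)[0]
    return np.corrcoef(resid_fa, resid_score)[0, 1]
```

Residualizing both variables against the same covariate design matrix removes any association that is attributable to the covariates, leaving only the FA–performance relationship of interest.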
Affiliation(s)
- Vincent J Schmithorst
- Department of Radiology, Pediatric Neuroimaging Research Consortium, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio 45229, USA.
48
Guinan JJ. Physiology of the Medial and Lateral Olivocochlear Systems. Auditory and Vestibular Efferents 2011. [DOI: 10.1007/978-1-4419-7070-1_3] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
49
May PJC, Tiitinen H. Mismatch negativity (MMN), the deviance-elicited auditory deflection, explained. Psychophysiology 2010; 47:66-122. [DOI: 10.1111/j.1469-8986.2009.00856.x] [Citation(s) in RCA: 374] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
50
Bidet-Caulet A, Mikyska C, Knight RT. Load effects in auditory selective attention: evidence for distinct facilitation and inhibition mechanisms. Neuroimage 2009; 50:277-84. [PMID: 20026231 DOI: 10.1016/j.neuroimage.2009.12.039] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2009] [Revised: 12/02/2009] [Accepted: 12/08/2009] [Indexed: 10/20/2022] Open
Abstract
It is unknown whether facilitation and inhibition of stimulus processing represent one or two mechanisms in auditory attention. We performed electrophysiological experiments in humans to address these two competing hypotheses. Participants performed an attention task under low or high memory load. Facilitation and inhibition were measured by recording electrophysiological responses to attended and ignored sounds and comparing them to responses to the same sounds when attention was considered to be equally distributed across all sounds. We observed two late, frontally distributed components: a negative one in response to attended sounds, and a positive one in response to ignored sounds. These two frontally distributed responses had distinct timing and scalp topographies and were differentially affected by memory load. Taken together, these results provide evidence that attention-mediated top-down control reflects the activity of distinct facilitation and inhibition mechanisms.
Affiliation(s)
- Aurélie Bidet-Caulet
- Helen Wills Neuroscience Institute, University of California, Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA.