1. Green HL, Shen G, Franzen RE, Mcnamee M, Berman JI, Mowad TG, Ku M, Bloy L, Liu S, Chen YH, Airey M, McBride E, Goldin S, Dipiero MA, Blaskey L, Kuschner ES, Kim M, Konka K, Roberts TPL, Edgar JC. Differential Maturation of Auditory Cortex Activity in Young Children with Autism and Typical Development. J Autism Dev Disord 2023;53:4076-4089. [PMID: 35960416; PMCID: PMC9372967; DOI: 10.1007/s10803-022-05696-8]
Abstract
Maturation of auditory cortex neural encoding processes was assessed in children with typical development (TD) and autism. Children 6-9 years old were enrolled at Time 1 (T1), with follow-up data obtained ~18 months later at Time 2 (T2) and ~36 months later at Time 3 (T3). Findings suggested an initial period of rapid auditory cortex maturation in autism that occurred earlier than in TD (prior to and surrounding the T1 exam), followed by a period of faster maturation in TD than in autism (T1-T3). As a result of these group maturation differences, post-stimulus group differences were observed at T1 but not T3. In contrast, stronger pre-stimulus activity in autism than in TD was found at all time points, indicating that this brain measure is stable across time.
Affiliation(s)
- Heather L Green
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Guannan Shen
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Rose E Franzen
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Marybeth Mcnamee
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Jeffrey I Berman
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Theresa G Mowad
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Matthew Ku
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Luke Bloy
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Song Liu
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Yu-Han Chen
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Megan Airey
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Emma McBride
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Sophia Goldin
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Marissa A Dipiero
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Lisa Blaskey
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Autism Research, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Emily S Kuschner
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Center for Autism Research, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Mina Kim
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Kimberly Konka
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Timothy P L Roberts
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- J Christopher Edgar
  - Lurie Family Foundations MEG Imaging Center, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
2. Berger JI, Gander PE, Kim S, Schwalje AT, Woo J, Na YM, Holmes A, Hong JM, Dunn CC, Hansen MR, Gantz BJ, McMurray B, Griffiths TD, Choi I. Neural Correlates of Individual Differences in Speech-in-Noise Performance in a Large Cohort of Cochlear Implant Users. Ear Hear 2023;44:1107-1120. [PMID: 37144890; PMCID: PMC10426791; DOI: 10.1097/aud.0000000000001357]
Abstract
OBJECTIVES: Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This variability cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al. 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users. DESIGN: We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California Consonant Test, a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance. RESULTS: In general, there was good agreement among the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California Consonant Test (conducted simultaneously with the electroencephalography recording) and the consonant-nucleus-consonant test (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds.
In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS: These data indicate a neurophysiological correlate of SiN performance, revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
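The regression approach described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the predictor names, units, simulated effect sizes, and data are all assumptions chosen only to show the shape of a multiple linear regression of a word-in-noise score on an ERP amplitude plus demographic and hearing factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 114  # cohort size reported in the abstract

# Simulated predictors (hypothetical units and distributions, for illustration only)
erp_amplitude = rng.normal(5.0, 1.5, n)     # N1-P2 amplitude at Cz
duration_of_use = rng.uniform(1, 20, n)     # years of device use
low_freq_threshold = rng.normal(70, 15, n)  # residual low-frequency threshold (dB HL)
age = rng.normal(65, 10, n)                 # years

# Simulated word-in-noise score, driven mainly by the ERP amplitude
score = 40 + 4.0 * erp_amplitude - 0.1 * low_freq_threshold + rng.normal(0, 5, n)

# Multiple linear regression via ordinary least squares (intercept in column 0)
X = np.column_stack([np.ones(n), erp_amplitude, duration_of_use, low_freq_threshold, age])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
r_squared = 1 - ((score - X @ beta) ** 2).sum() / ((score - score.mean()) ** 2).sum()
print("ERP coefficient:", round(beta[1], 2), " R^2:", round(r_squared, 2))
```

The fitted ERP coefficient recovers the simulated effect, mirroring the abstract's finding that the ERP amplitude carries predictive weight beyond the clinical covariates.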
Affiliation(s)
- Joel I. Berger
  - Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Phillip E. Gander
  - Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Subong Kim
  - Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Adam T. Schwalje
  - Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Jihwan Woo
  - Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Young-min Na
  - Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Ann Holmes
  - Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, USA
- Jean M. Hong
  - Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Camille C. Dunn
  - Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Marlan R. Hansen
  - Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bruce J. Gantz
  - Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bob McMurray
  - Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
  - Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
  - Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Timothy D. Griffiths
  - Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Inyong Choi
  - Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
  - Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
3. Ozmeral EJ, Menon KN. Selective auditory attention modulates cortical responses to sound location change for speech in quiet and in babble. PLoS One 2023;18:e0268932. [PMID: 36638116; PMCID: PMC9838839; DOI: 10.1371/journal.pone.0268932]
Abstract
Listeners use the spatial location, or change in spatial location, of coherent acoustic cues to aid in auditory object formation. Using stimulus-evoked onset responses recorded with electroencephalography (EEG) in normal-hearing listeners, we have previously shown measurable tuning to stimuli changing location in quiet, revealing a potential window into the cortical representations of auditory scene analysis. These earlier studies used non-fluctuating, spectrally narrow stimuli, so it was unknown whether those observations would translate to speech stimuli and whether responses would be preserved in the presence of background maskers. To examine the effects that selective auditory attention and interferers have on object formation, we measured cortical responses to speech changing location in the free field, with and without background babble (+6 dB SNR), during both passive and active conditions. Active conditions required listeners to respond to the onset of the speech stream when it occurred at a new location, explicitly indicating 'yes' or 'no' to whether the stimulus occurred at a block-specific location either 30 degrees to the left or right of midline. In the aggregate, results show similar evoked responses to speech stimuli changing location in quiet and in babble. However, the effect of the two background environments diverges somewhat when considering the magnitude and direction of the location change and where the subject was attending. In quiet, attention to the right hemifield evoked a stronger response than attention to the left hemifield when speech shifted rightward; no such difference was found in babble conditions. Therefore, consistent with the challenges associated with cocktail-party listening, directed spatial attention may be compromised in the presence of stimulus noise, likely leading to poorer use of spatial cues in auditory streaming.
Affiliation(s)
- Erol J Ozmeral
  - Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States of America
- Katherine N Menon
  - Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
4. The timecourse of multisensory speech processing in unilaterally stimulated cochlear implant users revealed by ERPs. Neuroimage Clin 2022;34:102982. [PMID: 35303598; PMCID: PMC8927996; DOI: 10.1016/j.nicl.2022.102982]
Abstract
Both normal-hearing (NH) and cochlear implant (CI) users show a clear benefit in multisensory speech processing. Group differences in ERP topographies and cortical source activation suggest distinct audiovisual speech processing in CI users when compared to NH listeners. Electrical neuroimaging, including topographic and ERP source analysis, provides a suitable tool to study the timecourse of multisensory speech processing in CI users.
A cochlear implant (CI) is an auditory prosthesis which can partially restore the auditory function in patients with severe to profound hearing loss. However, this bionic device provides only limited auditory information, and CI patients may compensate for this limitation by means of a stronger interaction between the auditory and visual system. To better understand the electrophysiological correlates of audiovisual speech perception, the present study used electroencephalography (EEG) and a redundant target paradigm. Postlingually deafened CI users and normal-hearing (NH) listeners were compared in auditory, visual and audiovisual speech conditions. The behavioural results revealed multisensory integration for both groups, as indicated by shortened response times for the audiovisual as compared to the two unisensory conditions. The analysis of the N1 and P2 event-related potentials (ERPs), including topographic and source analyses, confirmed a multisensory effect for both groups and showed a cortical auditory response which was modulated by the simultaneous processing of the visual stimulus. Nevertheless, the CI users in particular revealed a distinct pattern of N1 topography, pointing to a strong visual impact on auditory speech processing. Apart from these condition effects, the results revealed ERP differences between CI users and NH listeners, not only in N1/P2 ERP topographies, but also in the cortical source configuration. When compared to the NH listeners, the CI users showed an additional activation in the visual cortex at N1 latency, which was positively correlated with CI experience, and a delayed auditory-cortex activation with a reversed, rightward functional lateralisation. In sum, our behavioural and ERP findings demonstrate a clear audiovisual benefit for both groups, and a CI-specific alteration in cortical activation at N1 latency when auditory and visual input is combined. 
These cortical alterations may reflect a compensatory strategy to overcome the limited CI input, allowing CI users to improve their lip-reading skills and approximate the behavioural performance of NH listeners in audiovisual speech conditions. Our results are clinically relevant, as they highlight the importance of assessing CI outcomes not only in auditory-only but also in audiovisual speech conditions.
5. Ross JM, Ozdemir RA, Lian SJ, Fried PJ, Schmitt EM, Inouye SK, Pascual-Leone A, Shafi MM. A structured ICA-based process for removing auditory evoked potentials. Sci Rep 2022;12:1391. [PMID: 35082350; PMCID: PMC8791940; DOI: 10.1038/s41598-022-05397-3]
Abstract
Transcranial magnetic stimulation (TMS)-evoked potentials (TEPs), recorded using electroencephalography (EEG), reflect a combination of TMS-induced cortical activity and multi-sensory responses to TMS. The auditory evoked potential (AEP) is a high-amplitude sensory potential, evoked by the "click" sound produced by every TMS pulse, that can dominate the TEP and obscure observation of other neural components. The AEP is peripherally evoked and therefore should not be stimulation-site specific. We address the problem of disentangling the peripherally evoked AEP component of the TEP from components evoked by cortical stimulation and ask whether removal of the AEP enables more accurate isolation of the TEP. We hypothesized that isolating the AEP using Independent Components Analysis (ICA) would reveal features that are stimulation-site specific as well as unique individual features. To improve the effectiveness of ICA for removing the AEP from the TEP, and thus more clearly separate the transcranial-evoked and non-specific TMS-modulated potentials, we merged sham and active TMS datasets representing multiple stimulation conditions, removed the resulting AEP component, and evaluated performance across different sham protocols and clinical populations using reduction in Global and Local Mean Field Power (GMFP/LMFP) and cosine similarity analysis. We show that removing AEPs significantly reduced GMFP and LMFP in the post-stimulation TEP (14 to 400 ms), driven by time windows consistent with the N100 and P200 temporal characteristics of AEPs. Cosine similarity analysis supports the conclusion that removing AEPs reduces TEP similarity between subjects and between stimulation conditions. Similarity is reduced most in a mid-latency window consistent with the N100 time course, but nevertheless remains high in this window.
The residual TEP in this window has a time course and topography distinct from AEPs, which follow-up exploratory analyses suggest could reflect a modulation in the alpha band that is not stimulation-site specific but is unique to the individual subject. Using two datasets and two implementations of sham, we show evidence from cortical topography, TEP time course, GMFP/LMFP, and cosine similarity analyses that this procedure is effective and conservative in removing the AEP from the TEP, and may thus better isolate TMS-evoked activity. TEP remains at early, mid, and late latencies: the early response is site and subject specific, whereas the later response may be consistent with TMS-modulated alpha activity that is not site specific but is unique to the individual. The TEP remaining after removal of the AEP is unique and can provide insight into TMS-evoked potentials and other modulated oscillatory dynamics.
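The GMFP metric used above to quantify AEP removal has a standard definition: the spatial standard deviation across all channels at each time point. The following sketch computes it on simulated data; the 64-channel array, component shape, and timing are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

def gmfp(evoked):
    """Global Mean Field Power: the standard deviation across channels
    at each time point, for an array of shape (n_channels, n_times)."""
    return evoked.std(axis=0)

# Toy example: the same background activity with and without a broad
# evoked deflection peaking near 100 ms (an N100-like component)
rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.4, 256)                          # s, relative to the pulse
noise = rng.normal(0, 0.1, (64, times.size))                 # 64-channel background
component = np.exp(-((times - 0.1) ** 2) / (2 * 0.02 ** 2))  # bump at 100 ms
topography = rng.normal(0, 1, (64, 1))                       # fixed spatial pattern
with_aep = noise + topography * component
without_aep = noise

# Removing the component reduces GMFP most in the window where it peaks
reduction = gmfp(with_aep) - gmfp(without_aep)
print("peak GMFP reduction at t =", times[np.argmax(reduction)])
```

This mirrors the paper's logic: if AEP removal works, the GMFP reduction should concentrate in windows matching the N100/P200 time course of the AEP.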
Affiliation(s)
- Jessica M Ross
  - Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS-423, Boston, MA, USA
  - Department of Neurology, Harvard Medical School, Boston, MA, USA
- Recep A Ozdemir
  - Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS-423, Boston, MA, USA
  - Department of Neurology, Harvard Medical School, Boston, MA, USA
- Shu Jing Lian
  - Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS-423, Boston, MA, USA
- Peter J Fried
  - Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS-423, Boston, MA, USA
  - Department of Neurology, Harvard Medical School, Boston, MA, USA
- Eva M Schmitt
  - Hinda and Arthur Marcus Institute for Aging Research, and Deanna and Sidney Wolk Center for Memory Health, Hebrew SeniorLife, Boston, MA, USA
- Sharon K Inouye
  - Department of Medicine, Harvard Medical School, Boston, MA, USA
  - Hinda and Arthur Marcus Institute for Aging Research, and Deanna and Sidney Wolk Center for Memory Health, Hebrew SeniorLife, Boston, MA, USA
- Alvaro Pascual-Leone
  - Department of Neurology, Harvard Medical School, Boston, MA, USA
  - Hinda and Arthur Marcus Institute for Aging Research, and Deanna and Sidney Wolk Center for Memory Health, Hebrew SeniorLife, Boston, MA, USA
  - Guttmann Brain Health Institute, Institut Guttmann, Institut Universitari de Neurorehabilitació adscrit a la UAB, Badalona, Barcelona, Spain
- Mouhsin M Shafi
  - Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS-423, Boston, MA, USA
  - Department of Neurology, Harvard Medical School, Boston, MA, USA
6. MEG correlates of temporal regularity relevant to pitch perception in human auditory cortex. Neuroimage 2022;249:118879. [PMID: 34999204; PMCID: PMC8883111; DOI: 10.1016/j.neuroimage.2022.118879]
Abstract
We recorded neural responses in human participants to three types of pitch-evoking regular stimuli at rates below and above the lower limit of pitch using magnetoencephalography (MEG). These bandpass-filtered (1–4 kHz) stimuli were harmonic complex tones (HC), click trains (CT), and regular interval noise (RIN). Trials consisted of noise-regular-noise (NRN) or regular-noise-regular (RNR) segments in which the repetition rate (or fundamental frequency, F0) was either above (250 Hz) or below (20 Hz) the lower limit of pitch. Neural activation was estimated and compared at the sensor and source levels. The pitch-relevant regular stimuli (F0 = 250 Hz) were all associated with marked evoked responses around 140 ms after noise-to-regular transitions at both sensor and source levels. In particular, greater evoked responses to pitch-relevant than to pitch-irrelevant stimuli (F0 = 20 Hz) were localized along Heschl's sulcus around 140 ms. The regularity-onset responses for RIN were much weaker than for the other types of regular stimuli (HC, CT); this effect was localized over the planum temporale, planum polare, and lateral Heschl's gyrus. Importantly, the effect of pitch did not interact with stimulus type: we did not find evidence for different responses to the different types of regular stimuli within the spatiotemporal cluster of the pitch effect (~140 ms). The current data demonstrate cortical sensitivity to temporal regularity relevant to pitch that is consistently present across different pitch-relevant stimuli in Heschl's sulcus, between Heschl's gyrus and planum temporale, both of which have been identified as a "pitch center" in studies using different modalities.
7. Karawani H, Jenkins K, Anderson S. Neural Plasticity Induced by Hearing Aid Use. Front Aging Neurosci 2022;14:884917. [PMID: 35663566; PMCID: PMC9160992; DOI: 10.3389/fnagi.2022.884917]
Abstract
Age-related hearing loss is one of the most prevalent health conditions in older adults. Although hearing aid technology has advanced dramatically, a large percentage of older adults do not use hearing aids. This untreated hearing loss may accelerate declines in cognitive and neural function and dramatically affect quality of life. Our previous findings have shown that hearing aid use improves cortical and cognitive function and offsets subcortical physiological decline. The current study tested the time course of neural adaptation to hearing aids over 6 months and aimed to determine whether early measures of cortical processing predict the capacity for neural plasticity. Seventeen older adults (9 females; mean age = 75 years) with age-related hearing loss and no history of hearing aid use were fit with bilateral hearing aids and tested across six sessions. Increases in N1 amplitudes were observed as early as 2 weeks after the initial fitting, whereas changes in P2 amplitudes were not observed until 12 weeks of hearing aid use. The findings suggest that increased audibility through hearing aids may facilitate rapid increases in cortical detection, but that a longer period of exposure to amplified sound may be required to integrate features of the signal and form auditory object representations. The results also showed a relationship between neural responses in earlier sessions and the change observed after 6 months of hearing aid use. This study demonstrates rapid cortical adaptation to increased auditory input. Knowledge of the time course of neural adaptation may aid audiologists in counseling their patients, especially those who are struggling to adjust to amplification.
A future comparison with a control group that does not use hearing aids but undergoes the same testing sessions would validate these findings.
Affiliation(s)
- Hanin Karawani
  - Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Kimberly Jenkins
  - Walter Reed National Military Medical Center, Bethesda, MD, United States
- Samira Anderson
  - Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
8. Mapping the human auditory cortex using spectrotemporal receptive fields generated with magnetoencephalography. Neuroimage 2021;238:118222. [PMID: 34058330; DOI: 10.1016/j.neuroimage.2021.118222]
Abstract
We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, the method estimates, via reverse correlation, spectrotemporal receptive fields (STRFs) in response to a temporally dense pure-tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortical surface. We show that several neuronal populations can be distinguished by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables analysis of complex temporal dynamics of auditory processing, such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the data needed to generate tonotopic maps is significantly less for MEG than for other neuroimaging tools that acquire BOLD-like signals.
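The core of reverse-correlation STRF estimation is a response-weighted average of the stimulus spectrogram at each time lag. The toy sketch below illustrates the idea on synthetic data; the function name, array shapes, and the simulated "unit" are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def strf_reverse_correlation(spec, resp, n_lags):
    """Estimate a spectrotemporal receptive field by reverse correlation:
    the response-weighted average of the stimulus spectrogram at each lag.
    spec: (n_freq, n_times) stimulus spectrogram; resp: (n_times,) response."""
    n_freq, n_t = spec.shape
    strf = np.zeros((n_freq, n_lags))
    for lag in range(n_lags):
        # pair the stimulus at time t - lag with the response at time t
        strf[:, lag] = spec[:, : n_t - lag] @ resp[lag:]
    return strf / resp.sum()

# Toy example: a unit tuned to frequency bin 2 that responds one sample
# after each tone onset
spec = np.zeros((4, 200))
resp = np.zeros(200)
for t in [10, 60, 120, 170]:
    spec[2, t] = 1.0
    resp[t + 1] = 1.0

strf = strf_reverse_correlation(spec, resp, n_lags=5)
freq_bin, lag = np.unravel_index(np.argmax(strf), strf.shape)
print("best frequency bin:", freq_bin, " response latency (samples):", lag)
```

The best-frequency and latency read out of each STRF in this way are exactly the quantities one would map across the cortical surface to build a tonotopic gradient.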
9. Auditory Mapping With MEG: An Update on the Current State of Clinical Research and Practice With Considerations for Clinical Practice Guidelines. J Clin Neurophysiol 2020;37:574-584. [DOI: 10.1097/wnp.0000000000000518]
10. Sysoeva OV, Molholm S, Djukic A, Frey HP, Foxe JJ. Atypical processing of tones and phonemes in Rett Syndrome as biomarkers of disease progression. Transl Psychiatry 2020;10:188. [PMID: 32522978; PMCID: PMC7287060; DOI: 10.1038/s41398-020-00877-4]
Abstract
Due to the severe motor impairments and lack of expressive language abilities seen in most patients with Rett Syndrome (RTT), it has proven extremely difficult to obtain accurate measures of auditory processing capabilities in this population. Here, we examined early auditory cortical processing of pure tones and more complex phonemes in females with RTT by recording high-density auditory evoked potentials (AEPs), which allow for objective evaluation of the timing and severity of processing deficits along the auditory processing hierarchy. We compared the AEPs of 12 females with RTT to those of 21 typically developing (TD) peers aged 4-21 years, interrogating the first four major components of the AEP (P1: 60-90 ms; N1: 100-130 ms; P2: 135-165 ms; and N2: 245-275 ms). Atypicalities were evident in RTT at the initial stage of processing. Whereas the P1 showed increased amplitude to phonemic inputs relative to tones in TD participants, this modulation by stimulus complexity was absent in RTT. Interestingly, the subsequent N1 did not differ between groups, whereas the following P2 was markedly diminished in RTT, regardless of stimulus complexity. The N2 was similarly smaller in RTT and did not differ as a function of stimulus type. The P2 effect was remarkably robust in differentiating the groups, with near-perfect separation despite the wide age range of our samples. Given this robustness, along with the observation that P2 amplitude was significantly associated with RTT symptom severity, the P2 has the potential to serve as a monitoring, treatment-response, or even surrogate-endpoint biomarker. Compellingly, the reduction of the P2 in patients with RTT mimics findings in animal models of RTT, providing a translational bridge between pre-clinical and human research.
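Interrogating AEP components in fixed latency windows, as described above, amounts to averaging the evoked waveform within each window. The sketch below uses the component windows reported in the abstract; the function, sampling assumptions, and toy waveform are illustrative, not the authors' pipeline.

```python
import numpy as np

# AEP component windows reported in the abstract (ms)
WINDOWS = {"P1": (60, 90), "N1": (100, 130), "P2": (135, 165), "N2": (245, 275)}

def mean_amplitudes(erp, times_ms, windows=WINDOWS):
    """Mean amplitude of an averaged ERP waveform within each component window."""
    return {name: erp[(times_ms >= lo) & (times_ms <= hi)].mean()
            for name, (lo, hi) in windows.items()}

# Toy ERP sampled at 1 kHz with a positive deflection peaking at 150 ms,
# i.e. inside the P2 window
times_ms = np.arange(0, 400)
erp = np.exp(-((times_ms - 150) ** 2) / (2 * 10.0 ** 2))
amps = mean_amplitudes(erp, times_ms)
print("largest component:", max(amps, key=amps.get))
```

A group comparison like the one in the study would then test these per-subject window means (e.g., the P2 value) between RTT and TD samples.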
Affiliation(s)
- Olga V. Sysoeva
  - The Cognitive Neurophysiology Laboratory, Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
  - The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA
  - The Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Sophie Molholm
  - The Cognitive Neurophysiology Laboratory, Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
  - The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA
- Aleksandra Djukic
  - The Rett Syndrome Center, Department of Neurology, Montefiore Medical Center & Albert Einstein College of Medicine, Bronx, NY, USA
- Hans-Peter Frey
  - The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA
- John J. Foxe
  - The Cognitive Neurophysiology Laboratory, Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
  - The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA
Collapse
11
Green HL, Edgar JC, Matsuzaki J, Roberts TPL. Magnetoencephalography Research in Pediatric Autism Spectrum Disorder. Neuroimaging Clin N Am 2020; 30:193-203. [PMID: 32336406 PMCID: PMC7216756 DOI: 10.1016/j.nic.2020.01.001]
Abstract
Magnetoencephalography (MEG) research indicates differences in neural brain measures in children with autism spectrum disorder (ASD) compared to typically developing (TD) children. As reviewed here, resting-state MEG exams are of interest as well as MEG paradigms that assess neural function across domains (e.g., auditory, resting state). To date, MEG research has primarily focused on group-level differences. Research is needed to explore whether MEG measures can predict, at the individual level, ASD diagnosis, prognosis (future severity), and response to therapy.
Affiliation(s)
- Heather L Green
  Department of Radiology, Lurie Family Foundations MEG Imaging Center, The Children's Hospital of Philadelphia, 3401 Civic Center Boulevard, Philadelphia, PA 19104, USA
- J Christopher Edgar
  Department of Radiology, Lurie Family Foundations MEG Imaging Center, The Children's Hospital of Philadelphia, 3401 Civic Center Boulevard, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, Philadelphia, PA 19104, USA
- Junko Matsuzaki
  Department of Radiology, Lurie Family Foundations MEG Imaging Center, The Children's Hospital of Philadelphia, 3401 Civic Center Boulevard, Philadelphia, PA 19104, USA
- Timothy P L Roberts
  Department of Radiology, Lurie Family Foundations MEG Imaging Center, The Children's Hospital of Philadelphia, 3401 Civic Center Boulevard, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, Philadelphia, PA 19104, USA
12
Huang N, Elhilali M. Push-pull competition between bottom-up and top-down auditory attention to natural soundscapes. eLife 2020; 9:52984. [PMID: 32196457 PMCID: PMC7083598 DOI: 10.7554/elife.52984]
Abstract
In everyday social environments, demands on attentional resources dynamically shift to balance our attention to targets of interest while alerting us to important objects in our surrounds. The current study uses electroencephalography to explore how the push-pull interaction between top-down and bottom-up attention manifests itself in dynamic auditory scenes. Using natural soundscapes as distractors while subjects attend to a controlled rhythmic sound sequence, we find that salient events in background scenes significantly suppress phase-locking and gamma responses to the attended sequence, countering enhancement effects observed for attended targets. In line with a hypothesis of limited attentional resources, the modulation of neural activity by bottom-up attention is graded by degree of salience of ambient events. The study also provides insights into the interplay between endogenous and exogenous attention during natural soundscapes, with both forms of attention engaging a common fronto-parietal network at different time lags.
Affiliation(s)
- Nicholas Huang
  Laboratory for Computational Audio Perception, Department of Electrical Engineering, Johns Hopkins University, Baltimore, United States
- Mounya Elhilali
  Laboratory for Computational Audio Perception, Department of Electrical Engineering, Johns Hopkins University, Baltimore, United States
13
The First 250 ms of Auditory Processing: No Evidence of Early Processing Negativity in the Go/NoGo Task. Sci Rep 2020; 10:4041. [PMID: 32132630 PMCID: PMC7055275 DOI: 10.1038/s41598-020-61060-9]
Abstract
Past evidence of an early Processing Negativity in auditory Go/NoGo event-related potential (ERP) data suggests that young adults proactively process sensory information in two-choice tasks. This study aimed to clarify the occurrence of Go/NoGo Processing Negativity and investigate the ERP component series related to the first 250 ms of auditory processing in two Go/NoGo tasks differing in target probability. ERP data related to each task were acquired from 60 healthy young adults (M = 20.4, SD = 3.1 years). Temporal principal components analyses were used to decompose ERP data in each task. Statistical analyses compared component amplitudes between stimulus type (Go vs. NoGo) and probability (High vs. Low). Neuronal source localisation was also conducted for each component. Processing Negativity was not evident; however, P1, N1a, N1b, and N1c were identified in each task, with Go P2 and NoGo N2b. The absence of Processing Negativity in this study indicated that young adults do not proactively process targets to complete the Go/NoGo task and/or questioned Processing Negativity’s conceptualisation. Additional analyses revealed stimulus-specific processing as early as P1, and outlined a complex network of active neuronal sources underlying each component, providing useful insight into Go and NoGo information processing in young adults.
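The decomposition step described here, temporal principal components analysis of ERP data, can be sketched briefly. The code below is a minimal, generic temporal PCA (observations by time points, eigendecomposition of the time-by-time covariance) on synthetic data with one embedded "N1-like" component; the study's actual pipeline (component rotation and selection) is more involved, and all data and names here are illustrative.

```python
import numpy as np

def temporal_pca(erp, n_components):
    """Temporal PCA: rows = observations (trials/conditions), cols = time points.

    Returns (loadings, scores): temporal loadings (time x components) and
    per-observation component amplitudes (observations x components).
    """
    centered = erp - erp.mean(axis=0)          # remove the mean waveform
    cov = np.cov(centered, rowvar=False)       # time x time covariance
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_components]
    loadings = evecs[:, order]
    scores = centered @ loadings
    return loadings, scores

# Synthetic ERP set: 60 trials, 128 time points, one Gaussian component at 100 ms
# whose amplitude varies across trials, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.25, 128)
component = np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))
erp = rng.normal(0, 0.05, (60, 128)) + rng.normal(1, 0.2, (60, 1)) * component
loadings, scores = temporal_pca(erp, n_components=2)
```

The first temporal loading recovers the embedded waveform, which is the sense in which PCA factors correspond to ERP components such as P1 or N1b.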
14
Kim SG, Poeppel D, Overath T. Modulation change detection in human auditory cortex: Evidence for asymmetric, non-linear edge detection. Eur J Neurosci 2020; 52:2889-2904. [PMID: 32080939 DOI: 10.1111/ejn.14707]
Abstract
Changes in modulation rate are important cues for parsing acoustic signals, such as speech. We parametrically controlled modulation rate via the correlation coefficient (r) of amplitude spectra across fixed frequency channels between adjacent time frames: broadband modulation spectra are biased toward slow modulation rates with increasing r, and vice versa. By concatenating segments with different r, acoustic changes of various directions (e.g., changes from low to high correlation coefficients, that is, random-to-correlated or vice versa) and sizes (e.g., changes from low to high or from medium to high correlation coefficients) can be obtained. Participants listened to sound blocks and detected changes in correlation while MEG was recorded. Evoked responses to changes in correlation demonstrated (a) an asymmetric representation of change direction: random-to-correlated changes produced a prominent evoked field around 180 ms, while correlated-to-random changes evoked an earlier response with peaks at around 70 and 120 ms, whose topographies resemble those of the canonical P50m and N100m responses, respectively, and (b) a highly non-linear representation of correlation structure, whereby even small changes involving segments with a high correlation coefficient were much more salient than relatively large changes that did not involve segments with high correlation coefficients. Induced responses revealed phase tracking in the delta and theta frequency bands for the high correlation stimuli. The results confirm a high sensitivity for low modulation rates in human auditory cortex, both in terms of their representation and their segregation from other modulation rates.
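The stimulus parameter r can be illustrated directly. The sketch below generates a sequence of spectral frames whose amplitude vectors have a fixed correlation r between adjacent frames, using a simple AR(1) recursion; the published stimuli were constructed differently in detail, and the channel count, frame count, and r value here are arbitrary assumptions.

```python
import numpy as np

def correlated_frames(n_channels, n_frames, r, rng):
    """Frame sequence with correlation ~r between adjacent frames' spectra."""
    frames = np.empty((n_frames, n_channels))
    frames[0] = rng.standard_normal(n_channels)
    for k in range(1, n_frames):
        innovation = rng.standard_normal(n_channels)
        # AR(1): each frame is a mix of the previous frame and fresh noise,
        # scaled so the marginal variance stays constant.
        frames[k] = r * frames[k - 1] + np.sqrt(1 - r ** 2) * innovation
    return frames

rng = np.random.default_rng(1)
frames = correlated_frames(n_channels=64, n_frames=500, r=0.9, rng=rng)

# Empirical correlation across channels between adjacent frames,
# averaged over all adjacent pairs: should sit near the target r.
pairs = [np.corrcoef(frames[k], frames[k + 1])[0, 1] for k in range(len(frames) - 1)]
lag1 = float(np.mean(pairs))
```

High r yields slowly evolving spectra (slow modulation rates); r near zero yields frame-to-frame randomness, matching the abstract's description.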
Affiliation(s)
- Seung-Goo Kim
  Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- David Poeppel
  Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Tobias Overath
  Department of Psychology and Neuroscience, Duke University, Durham, NC, USA; Duke Institute for Brain Sciences, Duke University, Durham, NC, USA; Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
15
Silva DMR, Rothe-Neves R, Melges DB. Long-latency event-related responses to vowels: N1-P2 decomposition by two-step principal component analysis. Int J Psychophysiol 2019; 148:93-102. [PMID: 31863852 DOI: 10.1016/j.ijpsycho.2019.11.010]
Abstract
The N1-P2 complex of the auditory event-related potential (ERP) has been used to examine neural activity associated with speech sound perception. Since it is thought to reflect multiple generator processes, its functional significance is difficult to infer. In the present study, a temporospatial principal component analysis (PCA) was used to decompose the N1-P2 response into latent factors underlying covariance patterns in ERP data recorded during passive listening to pairs of successive vowels. In each trial, one of six sounds drawn from an /i/-/e/ vowel continuum was followed either by an identical sound, a different token of the same vowel category, or a token from the other category. Responses were examined as to how they were modulated by within- and across-category vowel differences and by adaptation (repetition suppression) effects. Five PCA factors were identified as corresponding to three well-known N1 subcomponents and two P2 subcomponents. Results added evidence that the N1 peak reflects both generators that are sensitive to spectral information and generators that are not. For later latency ranges, different patterns of sensitivity to vowel quality were found, including category-related effects. Particularly, a subcomponent identified as the Tb wave showed release from adaptation in response to an /i/ followed by an /e/ sound. A P2 subcomponent varied linearly with spectral shape along the vowel continuum, while the other was stronger the closer the vowel was to the category boundary, suggesting separate processing of continuous and category-related information. Thus, the PCA-based decomposition of the N1-P2 complex was functionally meaningful, revealing distinct underlying processes at work during speech sound perception.
Affiliation(s)
- Daniel M R Silva
  Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Rui Rothe-Neves
  Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Danilo B Melges
  Graduate Program in Electrical Engineering, Department of Electrical Engineering, Federal University of Minas Gerais
16
Sysoeva OV, Smirnov K, Stroganova TA. Sensory evoked potentials in patients with Rett syndrome through the lens of animal studies: Systematic review. Clin Neurophysiol 2019; 131:213-224. [PMID: 31812082 DOI: 10.1016/j.clinph.2019.11.003]
Abstract
OBJECTIVE To systematically review the abnormalities in event-related potentials (ERPs) recorded in Rett Syndrome (RTT) patients and animal models, in search of translational biomarkers of deficits tied to particular neurophysiological processes of known genetic origin (MECP2 mutations). METHODS PubMed, ISI Web of Knowledge, and bioRxiv were searched for relevant articles according to PRISMA standards. RESULTS ERP components are generally delayed across all sensory modalities in both RTT patients and animal models, while findings on ERP amplitude depend strongly on stimulus properties and presentation rate. Studies of RTT animal models identified abnormalities in excitatory and inhibitory transmission as critical mechanisms underlying the ERP changes, but showed that even similar ERP alterations in the auditory and visual domains can have a different neural basis. A range of novel approaches developed in animal studies now allows meaningful neurophysiological interpretation of ERP measures in RTT patients. CONCLUSIONS While there is clear evidence of sensory ERP abnormalities in RTT, advancing the field will require large-scale ERP studies with functionally relevant experimental paradigms. SIGNIFICANCE The review provides insights into the domain-specific neural basis of these ERP abnormalities and promotes clinical application of ERP measures as non-invasive functional biomarkers of RTT pathophysiology.
Affiliation(s)
- Olga V Sysoeva
  The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA; The Cognitive Neurophysiology Laboratory, Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA; The Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Kirill Smirnov
  Department of Neuroontogenesis, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Tatiana A Stroganova
  Center for Neurocognitive Research (MEG-Center), Moscow State University of Psychology and Education (MSUPE), Moscow, Russia; Autism Research Laboratory, Moscow State University of Psychology and Education (MSUPE), Moscow, Russia
17
Hajizadeh A, Matysiak A, May PJC, König R. Explaining event-related fields by a mechanistic model encapsulating the anatomical structure of auditory cortex. Biol Cybern 2019; 113:321-345. [PMID: 30820663 PMCID: PMC6510841 DOI: 10.1007/s00422-019-00795-9]
Abstract
Event-related fields of the magnetoencephalogram are triggered by sensory stimuli and appear as a series of waves extending hundreds of milliseconds after stimulus onset. They reflect the processing of the stimulus in cortex and have a highly subject-specific morphology. However, we still have an incomplete picture of how event-related fields are generated, what the various waves signify, and why they are so subject-specific. Here, we focus on this problem through the lens of a computational model which describes auditory cortex in terms of interconnected cortical columns as part of hierarchically placed fields of the core, belt, and parabelt areas. We develop an analytical approach arriving at solutions to the system dynamics in terms of normal modes: damped harmonic oscillators emerging out of the coupled excitation and inhibition in the system. Each normal mode is a global feature which depends on the anatomical structure of the entire auditory cortex. Further, normal modes are fundamental dynamical building blocks, in that the activity of each cortical column represents a combination of all normal modes. This approach allows us to replicate a typical auditory event-related response as a weighted sum of the single-column activities. Our work offers an alternative to the view that the event-related field arises out of spatially discrete, local generators. Rather, there is only a single generator process distributed over the entire network of the auditory cortex. We present predictions for testing to what degree subject-specificity is due to cross-subject variations in dynamical parameters rather than in the cortical surface morphology.
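The normal-mode idea in this abstract can be shown in miniature: linearised dynamics dx/dt = A·x of coupled excitatory and inhibitory populations decompose into eigenmodes of A, and complex eigenvalue pairs with negative real part are exactly damped harmonic oscillators. The two-population example below is a toy illustration of that mathematical point; the time constant and coupling strengths are arbitrary assumptions, not the model's fitted parameters, and the actual model couples many columns across core, belt, and parabelt fields.

```python
import numpy as np

# Minimal E-I pair: excitation drives inhibition, inhibition suppresses
# excitation, and both populations leak with time constant tau.
tau = 0.01                           # 10 ms time constant (illustrative)
w = 3.0                              # E->I and I->E coupling (illustrative)
A = (1.0 / tau) * np.array([[-1.0, -w],
                            [ w,  -1.0]])

# Eigenmodes of the linear system: for this A, a complex-conjugate pair
# lambda = (-1 +/- i*w) / tau, i.e., a damped harmonic oscillator.
evals, evecs = np.linalg.eig(A)
osc_freq_hz = np.abs(evals.imag).max() / (2 * np.pi)   # oscillation frequency
decay_time_s = -1.0 / evals.real.max()                  # decay time constant
```

Any column's activity is then a weighted sum of such modes, which is how a single distributed generator process can produce the multi-peaked event-related waveform.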
Affiliation(s)
- Aida Hajizadeh
  Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Artur Matysiak
  Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Patrick J. C. May
  Department of Psychology, Lancaster University, Lancaster, LA1 4YF, UK; Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Reinhard König
  Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
18
Uluç I, Schmidt TT, Wu YH, Blankenburg F. Content-specific codes of parametric auditory working memory in humans. Neuroimage 2018; 183:254-262. [PMID: 30107259 DOI: 10.1016/j.neuroimage.2018.08.024]
Abstract
Brain activity in frontal regions has been found to represent frequency information with a parametric code during working memory delay phases. The mental representation of frequencies has furthermore been shown to be modality independent in non-human primate electrophysiology and human EEG studies, suggesting frontal regions encoding quantitative information in a supramodal manner. A recent fMRI study using multivariate pattern analysis (MVPA) supports an overlapping multimodal network for the maintenance of visual and tactile frequency information over frontal and parietal brain regions. The present study extends the investigation of working memory representation of frequency information to the auditory domain. To this aim, we used MVPA on fMRI data recorded during an auditory frequency maintenance task. A support vector regression analysis revealed working memory information in auditory association areas and, consistent with earlier findings of parametric working memory, in a frontoparietal network. A direct comparison to an analogous dataset of vibrotactile parametric working memory revealed an overlap of information coding in prefrontal regions, particularly in the right inferior frontal gyrus. Therefore, our findings indicate that the prefrontal cortex represents frequency-specific working memory content irrespective of the modality as has been now also revealed for the auditory modality.
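The decoding logic here, predicting a continuous maintained frequency from delay-period voxel patterns with cross-validation, can be sketched compactly. The study used support vector regression; the dependency-free sketch below substitutes ordinary least squares as the regression step, and all data (trial counts, voxel patterns, the embedded parametric code) are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_voxels = 200, 40
freq = rng.uniform(100, 400, n_trials)          # maintained frequency per trial (Hz)
weights = rng.standard_normal(n_voxels)         # hidden parametric code
z = (freq - freq.mean()) / freq.std()
X = np.outer(z, weights) + rng.standard_normal((n_trials, n_voxels))  # voxel patterns

def cv_decode(X, y, n_folds=5):
    """Fold-wise linear regression; returns held-out predictions for every trial."""
    pred = np.empty_like(y)
    folds = np.array_split(np.arange(len(y)), n_folds)
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        pred[test] = np.column_stack([np.ones(len(test)), X[test]]) @ beta
    return pred

pred = cv_decode(X, freq)
decoding_r = np.corrcoef(pred, freq)[0, 1]      # decoding accuracy on held-out trials
```

Above-chance `decoding_r` in a region is the evidence that its delay-period patterns carry frequency-specific working memory content.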
Affiliation(s)
- Işıl Uluç
  Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Timo Torsten Schmidt
  Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Institute of Cognitive Science, University of Osnabrück, 49090 Osnabrück, Germany
- Yuan-Hao Wu
  Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Felix Blankenburg
  Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
19
Jaffe-Dax S, Kimel E, Ahissar M. Shorter cortical adaptation in dyslexia is broadly distributed in the superior temporal lobe and includes the primary auditory cortex. eLife 2018; 7:30018. [PMID: 29488880 PMCID: PMC5860871 DOI: 10.7554/elife.30018]
Abstract
Studies of the performance of individuals with dyslexia in perceptual tasks suggest that their implicit inference of sound statistics is impaired. Previously, using two-tone frequency discrimination, we found that the effect of previous trials' frequencies on the judgments of individuals with dyslexia decays faster than the effect on controls' judgments, and that the adaptation (decrease of neural response to repeated stimuli) of their ERP responses to tones is shorter (Jaffe-Dax et al., 2017). Here, we show the cortical distribution of these abnormal dynamics of adaptation using fast-acquisition fMRI. We find that faster decay of adaptation in dyslexia is widespread, although the most significant effects are found in the left superior temporal lobe, including the auditory cortex. This broad distribution suggests that the faster decay of implicit memory of individuals with dyslexia is a general characteristic of their cortical dynamics, which also affects sensory cortices.
Affiliation(s)
- Sagi Jaffe-Dax
  Department of Psychology, Princeton University, Princeton, United States
- Eva Kimel
  The Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
- Merav Ahissar
  The Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
20
de Boer J, Krumbholz K. Auditory Attention Causes Gain Enhancement and Frequency Sharpening at Successive Stages of Cortical Processing-Evidence from Human Electroencephalography. J Cogn Neurosci 2018; 30:785-798. [PMID: 29488851 DOI: 10.1162/jocn_a_01245]
Abstract
Previous findings have suggested that auditory attention causes not only enhancement of neural processing gain, but also sharpening of neural frequency tuning in human auditory cortex. The current study aimed to reexamine these findings. Specifically, we investigated whether attentional gain enhancement and frequency sharpening emerge at the same or different processing levels, and whether they represent independent or cooperative effects. To that end, we examined the pattern of attentional modulation effects on early, sensory-driven cortical auditory evoked potentials occurring at different latencies. Attention was manipulated using a dichotic listening task and was thus not selectively directed to specific frequency values. Possible attention-related changes in frequency tuning selectivity were measured with an adaptation paradigm. Our results show marked disparities in attention effects between the earlier N1 deflection and the subsequent P2 deflection, with the N1 showing a strong gain-enhancement effect, but no sharpening, and the P2 showing clear evidence of sharpening, but no independent gain effect. They suggest that gain enhancement and frequency sharpening represent successive stages of a cooperative attentional modulation mechanism that increases the representational bandwidth of attended versus unattended sounds.
21
Horváth J, Gaál ZA, Volosin M. Sound offset-related brain potentials show retained sensory processing, but increased cognitive control activity in older adults. Neurobiol Aging 2017; 57:232-246. [DOI: 10.1016/j.neurobiolaging.2017.05.026]
22
Volosin M, Gaál ZA, Horváth J. Age-related processing delay reveals cause of apparent sensory excitability following auditory stimulation. Sci Rep 2017; 7:10143. [PMID: 28860638 PMCID: PMC5579239 DOI: 10.1038/s41598-017-10696-1]
Abstract
When background auditory events lead to enhanced auditory event-related potentials (ERPs) for closely following sounds, this is generally interpreted as a transient increase in the responsiveness of the auditory system. We measured ERPs elicited by irrelevant probes (gaps in a continuous tone) at several time-points following rare auditory events (pitch glides) in younger and older adults, who watched movies during stimulation. Fitting previous results, in younger adults, gaps elicited increasing N1 auditory ERPs with decreasing glide-gap separation. N1 increase was paralleled by an ERP decrease in the P2 interval. In older adults, only a glide-gap separation dependent P2 decrease, but no N1-effect was observable. This ERP pattern was likely caused by a fronto-central negative waveform, which was delayed in the older adult group, thus overlapping N1 and P2 in the younger, but overlapping only P2 in the older adult group. Because the waveform exhibited a polarity reversal at the mastoids, it was identified as a mismatch negativity (MMN). This interpretation also fits previous studies showing that gap-related MMN is delayed in older adults, reflecting an age-related deterioration of fine temporal auditory resolution. These results provide a plausible alternative explanation for the ERP enhancement for sounds following background auditory events.
Affiliation(s)
- Márta Volosin
  Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, H-1117, Magyar Tudósok körútja 2., Hungary; Eötvös Loránd University, Faculty of Education and Psychology, Budapest, H-1075, Kazinczy utca 23-27., Hungary; University of Leipzig, Institute of Psychology, Cognitive and Biological Psychology, Leipzig, D-04109, Neumarkt 9-19, Germany
- Zsófia Anna Gaál
  Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, H-1117, Magyar Tudósok körútja 2., Hungary
- János Horváth
  Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, H-1117, Magyar Tudósok körútja 2., Hungary
23
Localising the auditory N1m with event-related beamformers: localisation accuracy following bilateral and unilateral stimulation. Sci Rep 2016; 6:31052. [PMID: 27545435 PMCID: PMC4992856 DOI: 10.1038/srep31052]
Abstract
The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed equally well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in localisation of auditory evoked responses.
24
Tan A, Hu L, Tu Y, Chen R, Hung YS, Zhang Z. N1 Magnitude of Auditory Evoked Potentials and Spontaneous Functional Connectivity Between Bilateral Heschl's Gyrus Are Coupled at Interindividual Level. Brain Connect 2016; 6:496-504. [PMID: 27105665 DOI: 10.1089/brain.2016.0418]
Abstract
The N1 component of auditory evoked potentials is extensively used to investigate the propagation and processing of auditory inputs. However, the substantial interindividual variability of N1 could be a possible confounding factor when comparing different individuals or groups. Therefore, identifying the neuronal mechanism and origin of the interindividual variability of N1 is crucial in basic research and clinical applications. This study aimed to use simultaneously recorded electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data to investigate the coupling between N1 and spontaneous functional connectivity (FC). EEG and fMRI data were simultaneously collected from a group of healthy individuals during a pure-tone listening task. Spontaneous FC was estimated from spontaneous blood oxygenation level-dependent (BOLD) signals that were isolated by regressing out task-evoked BOLD signals from raw BOLD signals, and was then correlated with N1 magnitude across individuals. It was observed that spontaneous FC between bilateral Heschl's gyrus was significantly and positively correlated with N1 magnitude across individuals (Spearman's R = 0.829, p < 0.001). The specificity of this observation was further confirmed by two whole-brain voxelwise analyses (voxel-mirrored homotopic connectivity analysis and seed-based connectivity analysis). These results enriched our understanding of the functional significance of the coupling between event-related brain responses and spontaneous brain connectivity, and hold the potential to increase the applicability of brain responses as a probe of the mechanisms underlying pathophysiological conditions.
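The two analysis steps described here have a simple computational core: regress the task-evoked component out of each region's BOLD series, correlate the residuals to estimate "spontaneous" FC, then rank-correlate FC with N1 magnitude across individuals (the study reports Spearman's R). The sketch below runs those steps on fully synthetic signals; the regressor, coupling values, and subject counts are all assumptions for illustration.

```python
import numpy as np

def spontaneous_fc(bold_a, bold_b, task_regressor):
    """Correlation of two ROI time series after removing task-evoked variance."""
    X = np.column_stack([np.ones_like(task_regressor), task_regressor])
    resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(resid(bold_a), resid(bold_b))[0, 1]

def spearman(x, y):
    """Spearman rank correlation (assumes no ties)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
n_subj, n_vols = 30, 300
task = np.sin(np.linspace(0, 20 * np.pi, n_vols))     # toy task-evoked regressor
coupling = np.linspace(0.2, 1.2, n_subj)              # true inter-ROI coupling
fc = np.empty(n_subj)
for s in range(n_subj):
    shared = rng.standard_normal(n_vols)              # shared spontaneous signal
    a = 2.0 * task + coupling[s] * shared + rng.standard_normal(n_vols)
    b = 2.0 * task + coupling[s] * shared + rng.standard_normal(n_vols)
    fc[s] = spontaneous_fc(a, b, task)
n1 = coupling + rng.normal(0.0, 0.1, n_subj)          # N1 tracking coupling
rho = spearman(fc, n1)
```

When N1 magnitude tracks the inter-ROI coupling, as simulated here, the across-subject rank correlation comes out strongly positive, mirroring the reported result in form (not in value).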
Collapse
Affiliation(s)
- Ao Tan
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Li Hu
- Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Faculty of Psychology, Southwest University, Chongqing, China
| | - Yiheng Tu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Rui Chen
- Faculty of Psychology, Southwest University, Chongqing, China
| | - Yeung Sam Hung
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Zhiguo Zhang
- School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China
| |
Collapse
|
25
|
Tabas A, Siebert A, Supek S, Pressnitzer D, Balaguer-Ballester E, Rupp A. Insights on the Neuromagnetic Representation of Temporal Asymmetry in Human Auditory Cortex. PLoS One 2016; 11:e0153947. [PMID: 27096960 PMCID: PMC4838253 DOI: 10.1371/journal.pone.0153947] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Accepted: 04/06/2016] [Indexed: 11/26/2022] Open
Abstract
Communication sounds are typically asymmetric in time, and human listeners are highly sensitive to this short-term temporal asymmetry. Nevertheless, causal neurophysiological correlates of auditory perceptual asymmetry remain largely elusive to current analyses and models. Auditory modelling and animal electrophysiological recordings suggest that perceptual asymmetry results from the presence of multiple time scales of temporal integration, central to the auditory periphery. To test this hypothesis, we recorded auditory evoked fields (AEFs) elicited by asymmetric sounds in humans. We found a strong correlation between the perceived tonal salience of ramped and damped sinusoids and the AEFs, as quantified by the amplitude of the N100m dynamics. The N100m amplitude increased with stimulus half-life time, with the maximum difference between the ramped and damped stimuli at a modulation half-life time of 4 ms; this difference was greatly reduced at 0.5 ms and 32 ms. This behaviour of the N100m closely parallels psychophysical data in that: (i) longer half-life times are associated with a stronger tonal percept, and (ii) perceptual differences between damped and ramped stimuli are maximal at a 4 ms half-life time. Interestingly, differences in evoked fields were significantly stronger in the right hemisphere, indicating some degree of hemispheric specialisation. Furthermore, the N100m magnitude was successfully explained by a pitch perception model using multiple scales of temporal integration of auditory nerve activity patterns. This striking correlation between AEFs, perception, and model predictions suggests that the physiological mechanisms involved in processing the pitch evoked by temporally asymmetric sounds are reflected in the N100m.
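The ramped and damped sinusoids discussed above can be generated as a sinusoidal carrier under a periodically repeated exponential envelope whose level halves every half-life. A minimal sketch follows; the carrier frequency, period, and duration defaults are illustrative, not the paper's stimulus parameters.

```python
import numpy as np

def asym_sinusoid(half_life_s, kind="damped", carrier_hz=1000.0,
                  period_s=0.05, n_periods=10, fs=44100):
    """Periodic sinusoid with an exponential envelope that halves every
    `half_life_s` within each period. 'damped' decays over the period;
    'ramped' uses the time-reversed envelope."""
    t = np.arange(int(period_s * fs)) / fs
    env = 0.5 ** (t / half_life_s)          # exponential decay, halving per half-life
    if kind == "ramped":
        env = env[::-1]                      # ramped = time-reversed envelope
    period = env * np.sin(2 * np.pi * carrier_hz * t)
    return np.tile(period, n_periods)
```

For a 4 ms half-life (the condition with maximal ramped/damped difference in the abstract), `asym_sinusoid(0.004, "damped")` concentrates energy early in each period and `asym_sinusoid(0.004, "ramped")` late.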
Collapse
Affiliation(s)
- Alejandro Tabas
- Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom
| | - Anita Siebert
- Institute of Pharmacology and Toxicology, University of Zurich, Zürich, Switzerland
| | - Selma Supek
- Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia
| | - Daniel Pressnitzer
- Département d’Études Cognitives, École Normale Supérieure, Paris, France
| | - Emili Balaguer-Ballester
- Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom
- The Bernstein Center for Computational Neuroscience Heidelberg-Mannheim, Mannheim, Baden-Württemberg, Germany
| | - André Rupp
- Department of Neurology, Heidelberg University, Heidelberg, Baden-Württemberg, Germany
| |
Collapse
|
26
|
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. J Neurosci 2016; 35:16046-54. [PMID: 26658858 DOI: 10.1523/jneurosci.2931-15.2015] [Citation(s) in RCA: 77] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness": the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms of onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in the superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone-detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic "push-pull" pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing.
Collapse
|
27
|
Duque D, Wang X, Nieto-Diego J, Krumbholz K, Malmierca MS. Neurons in the inferior colliculus of the rat show stimulus-specific adaptation for frequency, but not for intensity. Sci Rep 2016; 6:24114. [PMID: 27066835 PMCID: PMC4828641 DOI: 10.1038/srep24114] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2015] [Accepted: 03/21/2016] [Indexed: 11/09/2022] Open
Abstract
Electrophysiological and psychophysical responses to a low-intensity probe sound tend to be suppressed by a preceding high-intensity adaptor sound. Nevertheless, rare low-intensity deviant sounds presented among frequent high-intensity standard sounds in an intensity oddball paradigm can elicit an electroencephalographic mismatch negativity (MMN) response. This has been taken to suggest that the MMN is a correlate of true change or “deviance” detection. A key question is where in the ascending auditory pathway true deviance sensitivity first emerges. Here, we addressed this question by measuring low-intensity deviant responses from single units in the inferior colliculus (IC) of anesthetized rats. If the IC exhibits true deviance sensitivity to intensity, IC neurons should show enhanced responses to low-intensity deviant sounds presented among high-intensity standards. Contrary to this prediction, deviant responses were only enhanced when the standards and deviants differed in frequency. The results could be explained with a model assuming that IC neurons integrate over multiple frequency-tuned channels and that adaptation occurs within each channel independently. We used an adaptation paradigm with multiple repeated adaptors to measure the tuning widths of these adaptation channels in relation to the neurons’ overall tuning widths.
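The channel-based explanation in the abstract, a neuron integrating over frequency-tuned channels that adapt independently, can be illustrated with a toy simulation: a frequency deviant lands in a fresh channel and is enhanced, while a quieter intensity deviant hits the already-adapted channel and is not. All parameter values and the update rule below are illustrative, not the authors' fitted model.

```python
import numpy as np

def run_oddball(tones, n_channels=20, adapt_gain=0.5, recovery=0.6):
    """Toy integrator neuron with independently adapting input channels.
    `tones` is a sequence of (channel, intensity) pairs; returns the
    response to each tone."""
    state = np.zeros(n_channels)                  # per-channel adaptation level
    responses = []
    for ch, intensity in tones:
        responses.append(intensity * (1.0 - state[ch]))       # adapted channel responds less
        state[ch] += adapt_gain * intensity * (1.0 - state[ch])  # adapt the driven channel only
        state *= recovery                                     # partial recovery before next tone
    return np.array(responses)

# Frequency oddball: deviant falls in a fresh, unadapted channel
r_freq = run_oddball([(5, 1.0)] * 9 + [(10, 1.0)])
# Intensity oddball: quiet deviant hits the same, already adapted channel
r_int = run_oddball([(5, 1.0)] * 9 + [(5, 0.3)])
```

In this sketch `r_freq[-1]` exceeds the adapted standard responses, whereas `r_int[-1]` does not, mirroring the paper's main finding.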
Collapse
Affiliation(s)
- Daniel Duque
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Salamanca 37007, Spain
| | - Xin Wang
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Salamanca 37007, Spain
| | - Javier Nieto-Diego
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Salamanca 37007, Spain
| | - Katrin Krumbholz
- MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD, UK
| | - Manuel S Malmierca
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Salamanca 37007, Spain; Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Campus Miguel de Unamuno, 37007 Salamanca, Spain; Salamanca Institute for Biomedical Research (IBSAL), Salamanca, Spain
| |
Collapse
|
28
|
Gransier R, Deprez H, Hofmann M, Moonen M, van Wieringen A, Wouters J. Auditory steady-state responses in cochlear implant users: Effect of modulation frequency and stimulation artifacts. Hear Res 2016; 335:149-160. [PMID: 26994660 DOI: 10.1016/j.heares.2016.03.006] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/18/2015] [Revised: 03/04/2016] [Accepted: 03/14/2016] [Indexed: 11/29/2022]
Abstract
Previous studies have shown that objective measures based on stimulation with low-rate pulse trains fail to predict the threshold levels of cochlear implant (CI) users for high-rate pulse trains, as used in clinical devices. Electrically evoked auditory steady-state responses (EASSRs) can be elicited by modulated high-rate pulse trains and can potentially be used to objectively determine threshold levels of CI users. The responsiveness of the auditory pathway of profoundly hearing-impaired CI users to modulation frequencies is, however, not known. In the present study we investigated the responsiveness of the auditory pathway of CI users to a monopolar 500 pulses per second (pps) pulse train modulated between 1 and 100 Hz. EASSRs to forty-three modulation frequencies, elicited at the subject's maximum comfort level, were recorded by means of electroencephalography. Stimulation artifacts were removed by linear interpolation between a pre- and post-stimulus sample (i.e., blanking). The phase delay across modulation frequencies was used to differentiate between the neural response and a possible residual stimulation artifact after blanking. For recording electrodes ipsilateral to the CI, stimulation artifacts were longer than the inter-pulse interval of the 500 pps pulse train and therefore could not be removed by interpolation-based blanking. However, artifact-free responses could be obtained in all subjects from recording electrodes contralateral to the CI when subject-specific reference electrodes (Cz or Fpz) were used. Modulation frequencies within the 30-50 Hz range elicited significant responses in all subjects, whereas only a small number of significant responses originating from the brainstem (i.e., modulation frequencies in the 80-100 Hz range) could be obtained within a 5-min measurement period. This reduced synchronized activity of brainstem responses in long-term severely hearing-impaired CI users could reflect processes associated with long-term hearing impairment and/or electrical stimulation.
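The interpolation-based blanking described in the abstract can be sketched as follows: for each pulse, every channel is linearly interpolated between a clean sample just before and just after the artifact. The window lengths and array layout here are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def blank_artifacts(eeg, pulse_onsets, pre=1, post=4):
    """Suppress stimulation artifacts by linearly interpolating each
    channel between a sample `pre` samples before and `post` samples
    after each pulse onset. `eeg` is (n_channels, n_samples);
    `pulse_onsets` are sample indices."""
    out = eeg.copy()
    n_samples = eeg.shape[1]
    for onset in pulse_onsets:
        a = max(onset - pre, 0)               # last clean sample before the pulse
        b = min(onset + post, n_samples - 1)  # first clean sample after the pulse
        if b - a < 2:
            continue
        w = np.linspace(0.0, 1.0, b - a + 1)  # interpolation weights 0 -> 1
        out[:, a:b + 1] = out[:, [a]] * (1 - w) + out[:, [b]] * w
    return out
```

This only works when the artifact is shorter than the inter-pulse interval, which is exactly why the approach failed for electrodes ipsilateral to the CI in the study.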
Collapse
Affiliation(s)
- Robin Gransier
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 bus 721, 3000 Leuven, Belgium.
| | - Hanne Deprez
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 bus 721, 3000 Leuven, Belgium; STADIUS, Dept. of Electrical Engineering (ESAT), KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium
| | - Michael Hofmann
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 bus 721, 3000 Leuven, Belgium
| | - Marc Moonen
- STADIUS, Dept. of Electrical Engineering (ESAT), KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium
| | - Astrid van Wieringen
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 bus 721, 3000 Leuven, Belgium
| | - Jan Wouters
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 bus 721, 3000 Leuven, Belgium
| |
Collapse
|
29
|
30
|
Abstract
Recent studies establish that cortical oscillations track naturalistic speech in a remarkably faithful way. Here, we test whether such neural activity, particularly low-frequency (<8 Hz; delta-theta) oscillations, similarly entrain to music and whether experience modifies such a cortical phenomenon. Music of varying tempi was used to test entrainment at different rates. In three magnetoencephalography experiments, we recorded from nonmusicians, as well as musicians with varying years of experience. Recordings from nonmusicians demonstrate cortical entrainment that tracks musical stimuli over a typical range of tempi, but not at tempi below 1 note per second. Importantly, the observed entrainment correlates with performance on a concurrent pitch-related behavioral task. In contrast, the data from musicians show that entrainment is enhanced by years of musical training, at all presented tempi. This suggests a bidirectional relationship between behavior and cortical entrainment, a phenomenon that has not previously been reported. Additional analyses focus on responses in the beta range (∼15-30 Hz), often linked to delta activity in the context of temporal predictions. Our findings provide evidence that the role of beta in temporal predictions scales to the complex hierarchical rhythms in natural music and enhances processing of musical content. This study builds on important findings on brainstem plasticity and represents a compelling demonstration that cortical neural entrainment is tightly coupled to both musical training and task performance, further supporting a role for cortical oscillatory activity in music perception and cognition.
Collapse
|
31
|
Evidence for differential modulation of primary and nonprimary auditory cortex by forward masking in tinnitus. Hear Res 2015; 327:9-27. [PMID: 25937134 DOI: 10.1016/j.heares.2015.04.011] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/07/2014] [Revised: 04/07/2015] [Accepted: 04/10/2015] [Indexed: 11/21/2022]
Abstract
It has been proposed that tinnitus is generated by aberrant neural activity that develops among neurons in tonotopic regions of primary auditory cortex (A1) affected by hearing loss, which is also the frequency region where tinnitus percepts localize (Eggermont and Roberts 2004; Roberts et al., 2010, 2013). These models suggest (1) that differences between tinnitus and control groups of similar age and audiometric function should depend on whether A1 is probed in the tinnitus frequency region (TFR) or below it, and (2) that brain responses evoked from A1 should track changes in the tinnitus percept when residual inhibition (RI) is induced by forward masking. We tested these predictions by measuring, with 128-channel EEG, the sound-evoked 40-Hz auditory steady-state response (ASSR) known to localize tonotopically to neural sources in A1. For comparison, the N1 transient response, which localizes to distributed neural sources in nonprimary cortex (A2), was also studied. When tested under baseline conditions in which tinnitus subjects would have heard their tinnitus, ASSR responses were larger in the tinnitus group than in controls when evoked by 500 Hz probes, while the reverse was true for tinnitus and control groups tested with 5 kHz probes, confirming frequency-dependent group differences in this measure. On subsequent trials where RI was induced by masking (narrowband noise centered at 5 kHz), ASSR amplitude increased in the tinnitus group probed at 5 kHz but not in the tinnitus group probed at 500 Hz. When collapsed into a single sample, tinnitus subjects reporting comparatively greater RI depth and duration showed comparatively larger ASSR increases after masking, regardless of probe frequency. Effects of masking on ASSR amplitude in the control groups were completely reversed from those in the tinnitus groups, with no change seen for 5 kHz probes but ASSR increases for 500 Hz probes even though the masking sound contained no energy at 500 Hz (an "off-frequency" masking effect).
In contrast to these findings for the ASSR, N1 amplitude was larger in tinnitus than control groups at both probe frequencies under baseline conditions, decreased after masking in all conditions, and did not relate to RI. These results suggest that aberrant neural activity occurring in the TFR of A1 underlies tinnitus and its modulation during RI. They indicate further that while neural changes occur in A2 in tinnitus, these changes do not reflect the tinnitus percept. Models for tinnitus and forward masking are described that integrate these findings within a common framework.
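The 40-Hz ASSR amplitude measure used above can be sketched as a phase-locked average followed by reading the FFT amplitude at the modulation frequency. This is a minimal, generic sketch of how such an amplitude is commonly quantified, not the paper's source-space pipeline; names and defaults are illustrative.

```python
import numpy as np

def assr_amplitude(epochs, fs, mod_freq=40.0):
    """Phase-locked steady-state amplitude: average stimulus-locked
    epochs, then take the single-sided FFT amplitude at the modulation
    frequency. `epochs` is (n_trials, n_samples); `fs` is in Hz."""
    evoked = epochs.mean(axis=0)                 # averaging cancels non-phase-locked noise
    n = evoked.size
    spectrum = np.fft.rfft(evoked) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - mod_freq)))  # nearest bin to mod_freq
    return 2.0 * np.abs(spectrum[k])              # single-sided amplitude
```

Averaging before the FFT keeps only activity phase-locked to the stimulus, which is what makes the ASSR a steady-state *evoked* measure.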
Collapse
|
32
|
Andreou LV, Griffiths TD, Chait M. Sensitivity to the temporal structure of rapid sound sequences - An MEG study. Neuroimage 2015; 110:194-204. [PMID: 25659464 PMCID: PMC4389832 DOI: 10.1016/j.neuroimage.2015.01.052] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2014] [Revised: 12/15/2014] [Accepted: 01/27/2015] [Indexed: 11/28/2022] Open
Abstract
To probe sensitivity to the time structure of ongoing sound sequences, we measured MEG responses, in human listeners, to the offset of long tone-pip sequences containing various forms of temporal regularity. If listeners learn sequence temporal properties and form expectancies about the arrival time of an upcoming tone, sequence offset should be detectable as soon as an expected tone fails to arrive. Latencies of offset responses are therefore indicative of the extent to which the temporal pattern has been acquired. In Exp1, sequences were isochronous, with tone inter-onset interval (IOI) set to 75, 125, or 225 ms. Exp2 used non-isochronous but temporally regular sequences constructed from the same IOIs. Exp3 used the same sequences as Exp2, but listeners were required to monitor them for occasional frequency deviants. Analysis of the latency of offset responses revealed that the temporal structure of even rather simple regular sequences is not learnt precisely when the sequences are ignored. Pattern coding, supported by a network of temporal, parietal and frontal sources, improved considerably when the signals were made behaviourally pertinent. Thus, contrary to what might be expected within an 'early warning system' framework, learning of temporal structure is not automatic but is affected by the signal's behavioural relevance.
Collapse
Affiliation(s)
| | - Timothy D Griffiths
- Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3BG, UK; Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
| | - Maria Chait
- UCL Ear Institute, 332 Gray's Inn Road, London WC1X 8EE, UK.
| |
Collapse
|
33
|
Krishnan A, Gandour JT, Ananthakrishnan S, Vijayaraghavan V. Language experience enhances early cortical pitch-dependent responses. J Neurolinguistics 2015; 33:128-148. [PMID: 25506127 PMCID: PMC4261237 DOI: 10.1016/j.jneuroling.2014.08.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Pitch processing at cortical and subcortical stages of processing is shaped by language experience. We recently demonstrated that specific components of the cortical pitch response (CPR) index the more rapidly-changing portions of the high rising Tone 2 of Mandarin Chinese, in addition to marking pitch onset and sound offset. In this study, we examine how language experience (Mandarin vs. English) shapes the processing of different temporal attributes of pitch reflected in the CPR components using stimuli representative of within-category variants of Tone 2. Results showed that the magnitude of CPR components (Na-Pb and Pb-Nb) and the correlation between these two components and pitch acceleration were stronger for the Chinese listeners compared to English listeners for stimuli that fell within the range of Tone 2 citation forms. Discriminant function analysis revealed that the Na-Pb component was more than twice as important as Pb-Nb in grouping listeners by language affiliation. In addition, a stronger stimulus-dependent, rightward asymmetry was observed for the Chinese group at the temporal, but not frontal, electrode sites. This finding may reflect selective recruitment of experience-dependent, pitch-specific mechanisms in right auditory cortex to extract more complex, time-varying pitch patterns. Taken together, these findings suggest that long-term language experience shapes early sensory level processing of pitch in the auditory cortex, and that the sensitivity of the CPR may vary depending on the relative linguistic importance of specific temporal attributes of dynamic pitch.
Collapse
|
34
|
Sleep-dependent neuroplastic changes during auditory perceptual learning. Neurobiol Learn Mem 2014; 118:133-42. [PMID: 25490057 DOI: 10.1016/j.nlm.2014.12.001] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2014] [Revised: 10/26/2014] [Accepted: 12/02/2014] [Indexed: 11/24/2022]
Abstract
Auditory perceptual learning is accompanied by a significant increase in the amplitude of sensory evoked responses on the second day of training. This is thought to reflect memory consolidation after the first practice session. However, it is unclear whether the changes in sensory evoked responses depend on sleep per se or whether a break between training sessions would suffice to yield similar changes. To assess the relative contributions of sleep and the passage of time (wakefulness) to the sensory evoked responses, we recorded auditory evoked fields using magnetoencephalography while participants performed a vowel segregation task in three different sessions separated by 12 h over two consecutive days. The first two practice sessions were scheduled in the morning and evening of the same day for one group and the evening and morning of subsequent days for the other group. For each participant, we modeled the auditory evoked magnetic field with single dipoles in bilateral superior temporal planes. We then examined the amplitudes and latencies of the resulting source waveforms as a function of sleep and passage of time. In both groups, performance gradually improved with repeated testing. Auditory learning was paralleled by an increased sustained field between 250 and 350 ms after sound onset as well as by sensory evoked fields around 200 ms after sound onset (i.e., P2m amplitude) for sessions taking place on the same and different days, respectively. These neuromagnetic changes suggest that auditory learning involves a consolidation phase that occurs during the wake state, which is followed by a sleep-dependent consolidation stage indexed by the P2m amplitude.
Collapse
|
35
|
Moerel M, De Martino F, Formisano E. An anatomical and functional topography of human auditory cortical areas. Front Neurosci 2014; 8:225. [PMID: 25120426 PMCID: PMC4114190 DOI: 10.3389/fnins.2014.00225] [Citation(s) in RCA: 147] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2014] [Accepted: 07/08/2014] [Indexed: 12/22/2022] Open
Abstract
While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights into the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and by fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that, whereas a group-based approach to analyzing functional (tonotopic) maps is appropriate for highlighting the main tonotopic axis, examination of tonotopic maps at the single-subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.
Collapse
Affiliation(s)
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands; Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
| | - Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
| |
Collapse
|
36
|
Sielużycki C, Kordowski P. Maximum-likelihood estimation of channel-dependent trial-to-trial variability of auditory evoked brain responses in MEG. Biomed Eng Online 2014; 13:75. [PMID: 24939398 PMCID: PMC4060856 DOI: 10.1186/1475-925x-13-75] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2013] [Accepted: 04/10/2014] [Indexed: 11/17/2022] Open
Abstract
Background: We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG).
Methods: Following the work of de Munck et al., our approach is based on maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical across all channels, we evaluate it for each individual channel.
Results: Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology at the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs to show the limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation.
Conclusions: The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, such as those involving language processing.
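The computational appeal of the Kronecker approximation in the Methods is that the full spatio-temporal covariance never has to be formed: solving against S ⊗ T reduces to two small solves via the identity (S ⊗ T)⁻¹ vec(D) = vec(T⁻¹ D S⁻¹) for symmetric S and T, with vec() stacking columns. A minimal sketch of that trick follows; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def kron_solve(spatial_cov, temporal_cov, data):
    """Solve (S ⊗ T) x = vec(data) without forming the Kronecker product.
    `spatial_cov` S is (n_channels, n_channels), `temporal_cov` T is
    (n_times, n_times), `data` D is (n_times, n_channels); the result is
    T^(-1) D S^(-1), whose column-stacked vec() equals x."""
    x = np.linalg.solve(temporal_cov, data)        # T^(-1) D
    return np.linalg.solve(spatial_cov, x.T).T     # (T^(-1) D) S^(-1), S symmetric
```

For C channels and Ts time samples this costs O(C³ + Ts³) instead of O((C·Ts)³) for the explicit Kronecker matrix, which is what makes maximum-likelihood fitting with spatio-temporal noise covariances tractable.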
Collapse
Affiliation(s)
- Cezary Sielużycki
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany.
| | | |
Collapse
|
37
|
Edgar JC, Lanza MR, Daina AB, Monroe JF, Khan SY, Blaskey L, Cannon KM, Jenkins J, Qasmieh S, Levy SE, Roberts TPL. Missing and delayed auditory responses in young and older children with autism spectrum disorders. Front Hum Neurosci 2014; 8:417. [PMID: 24936181 PMCID: PMC4047517 DOI: 10.3389/fnhum.2014.00417] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2013] [Accepted: 05/23/2014] [Indexed: 12/04/2022] Open
Abstract
Background: The development of left and right superior temporal gyrus (STG) 50 ms (M50) and 100 ms (M100) auditory responses in typically developing (TD) children and in children with autism spectrum disorder (ASD) was examined. Reflecting differential development of primary/secondary auditory areas and supporting previous studies, it was hypothesized that whereas left and right M50 STG responses would be observed equally often in younger and older children, left and right M100 STG responses would more often be absent in younger than older children. In ASD, delayed neurodevelopment would be indicated by a greater proportion of ASD than TD subjects showing missing M100 but not M50 responses in both age groups. Missing M100 responses would be observed primarily in children with ASD with language impairment (ASD+LI) (and perhaps concomitantly lower general cognitive abilities). Methods: Thirty-five TD controls, 63 ASD without language impairment (ASD-LI), and 38 ASD+LI were recruited. Binaural tones were presented. The presence or absence of an STG M50 and M100 was scored. Subjects were grouped into younger (6–10 years old) and older (11–15 years old) groups. Results: Although M50 responses were observed equally often in older and younger subjects and equally often in TD and ASD, left and right M50 responses were delayed in ASD-LI and ASD+LI. Group comparisons showed that in younger subjects M100 responses were observed more often in TD than ASD+LI (90 versus 66%, p = 0.04), with no differences between TD and ASD-LI (90 versus 76%, p = 0.14) or between ASD-LI and ASD+LI (76 versus 66%, p = 0.53). In older subjects, whereas no differences were observed between TD and ASD+LI, responses were observed more often in ASD-LI than ASD+LI. Findings were similar when splitting the ASD group into lower- and higher-cognitive-functioning groups. Conclusion: Although present in all groups, M50 responses were delayed in ASD. Examination of the TD data indicated that by 11 years a right M100 should be observed in 100% of subjects and a left M100 in 80% of subjects. Thus, by 11 years, lack of a left and especially a right M100 offers neurobiological insight into sensory processing that may underlie language or cognitive impairment.
Collapse
Affiliation(s)
- J Christopher Edgar
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Matthew R Lanza
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Aleksandra B Daina
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Justin F Monroe
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Sarah Y Khan
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Lisa Blaskey
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Katelyn M Cannon
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Julian Jenkins
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Saba Qasmieh
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Susan E Levy
- Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| | - Timothy P L Roberts
- Department of Radiology, Lurie Family Foundation MEG Imaging Center, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
| |
Collapse
|
38
|
Simon JZ. The encoding of auditory objects in auditory cortex: insights from magnetoencephalography. Int J Psychophysiol 2014; 95:184-90. [PMID: 24841996 DOI: 10.1016/j.ijpsycho.2014.05.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2013] [Revised: 03/22/2014] [Accepted: 05/01/2014] [Indexed: 11/16/2022]
Abstract
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects.
Collapse
Affiliation(s)
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA; Department of Biology, University of Maryland, College Park, MD 20742, USA; Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.
| |
Collapse
|
39
|
Krishnan A, Gandour JT, Ananthakrishnan S, Vijayaraghavan V. Cortical pitch response components index stimulus onset/offset and dynamic features of pitch contours. Neuropsychologia 2014; 59:1-12. [PMID: 24751993 DOI: 10.1016/j.neuropsychologia.2014.04.006] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2013] [Revised: 03/12/2014] [Accepted: 04/11/2014] [Indexed: 11/19/2022]
Abstract
Voice pitch is an important information-bearing component of language that is subject to experience dependent plasticity at both early cortical and subcortical stages of processing. We have already demonstrated that pitch onset component (Na) of the cortical pitch response (CPR) is sensitive to flat pitch and its salience … CPR responses from Chinese listeners were elicited by three citation forms varying in pitch acceleration and duration. Results showed that the pitch onset component (Na) was invariant to changes in acceleration. In contrast, Na–Pb and Pb–Nb showed a systematic decrease in the interpeak latency and decrease in amplitude with increase in pitch acceleration that followed the time course of pitch change across the three stimuli. A strong correlation with pitch acceleration was observed for these two components only – a putative index of pitch-relevant neural activity associated with the more rapidly-changing portions of the pitch contour. Pc–Nc marks unambiguously the stimulus offset … and their functional roles as related to sensory and cognitive properties of the stimulus. [Corrected]
Collapse
Affiliation(s)
| | - Jackson T Gandour
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, IN, USA.
| | | | | |
Collapse
|
40
|
Barascud N, Griffiths TD, McAlpine D, Chait M. "Change deafness" arising from inter-feature masking within a single auditory object. J Cogn Neurosci 2014; 26:514-28. [PMID: 24047385 PMCID: PMC4346202 DOI: 10.1162/jocn_a_00481] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that is either expected based on the ongoing regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness," in that although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.
Collapse
Affiliation(s)
| | - Timothy D Griffiths
- Newcastle University Medical School
- UCL Wellcome Trust Centre for Neuroimaging
| | | | | |
Collapse
|
41
|
Okamoto H, Teismann H, Keceli S, Pantev C, Kakigi R. Differential effects of temporal regularity on auditory-evoked response amplitude: a decrease in silence and increase in noise. Behav Brain Funct 2013; 9:44. [PMID: 24299193 PMCID: PMC4220810 DOI: 10.1186/1744-9081-9-44] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2013] [Accepted: 11/23/2013] [Indexed: 11/10/2022] Open
Abstract
Background: In daily life, we are continuously exposed to temporally regular and irregular sounds. Previous studies have demonstrated that the temporal regularity of sound sequences influences neural activity. However, it remains unresolved how temporal regularity affects neural activity in noisy environments, when attention of the listener is not focused on the sound input. Methods: In the present study, using magnetoencephalography we investigated the effects of temporal regularity in sound signal sequencing (regular vs. irregular) in silent versus noisy environments during distracted listening. Results: The results demonstrated that temporal regularity differentially affected the auditory-evoked N1m response depending on the background acoustic environment: the N1m amplitudes elicited by the temporally regular sounds were smaller in silence and larger in noise than those elicited by the temporally irregular sounds. Conclusions: Our results indicate that the human auditory system is able to involuntarily utilize temporal regularity in sound signals to modulate the neural activity in the auditory cortex in accordance with the surrounding acoustic environment.
Collapse
Affiliation(s)
- Hidehiko Okamoto
- Department of Integrative Physiology, National Institute for Physiological Sciences, 38 Nishigo-Naka, Myodaiji, Okazaki 444-8585, Japan.
| | | | | | | | | |
Collapse
|
42
|
Ding N, Chatterjee M, Simon JZ. Robust cortical entrainment to the speech envelope relies on the spectro-temporal fine structure. Neuroimage 2013; 88:41-6. [PMID: 24188816 DOI: 10.1016/j.neuroimage.2013.10.054] [Citation(s) in RCA: 147] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2013] [Revised: 10/25/2013] [Accepted: 10/27/2013] [Indexed: 10/26/2022] Open
Abstract
Speech recognition is robust to background noise. One underlying neural mechanism is that the auditory system segregates speech from the listening background and encodes it reliably. Such robust internal representation has been demonstrated in auditory cortex by neural activity entrained to the temporal envelope of speech. A paradox, however, then arises, as the spectro-temporal fine structure rather than the temporal envelope is known to be the major cue to segregate target speech from background noise. Does the reliable cortical entrainment in fact reflect a robust internal "synthesis" of the attended speech stream rather than direct tracking of the acoustic envelope? Here, we test this hypothesis by degrading the spectro-temporal fine structure while preserving the temporal envelope using vocoders. Magnetoencephalography (MEG) recordings reveal that cortical entrainment to vocoded speech is severely degraded by background noise, in contrast to the robust entrainment to natural speech. Furthermore, cortical entrainment in the delta band (1-4 Hz) predicts the speech recognition score at the level of individual listeners. These results demonstrate that reliable cortical entrainment to speech relies on the spectro-temporal fine structure, and suggest that cortical entrainment to the speech envelope is not merely a representation of the speech envelope but a coherent representation of multiscale spectro-temporal features that are synchronized to the syllabic and phrasal rhythms of speech.
Collapse
Affiliation(s)
- Nai Ding
- Department of Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD 20742, USA; Department of Psychology, New York University, New York, NY 10003, USA.
| | | | - Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD 20742, USA; Department of Biology, University of Maryland, College Park, College Park, MD 20742, USA; Institute for Systems Research, University of Maryland, College Park, College Park, MD 20742, USA.
| |
Collapse
|
43
|
Temporal expectation and spectral expectation operate in distinct fashion on neuronal populations. Neuropsychologia 2013; 51:2548-55. [DOI: 10.1016/j.neuropsychologia.2013.09.018] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2013] [Revised: 08/07/2013] [Accepted: 09/06/2013] [Indexed: 11/17/2022]
|
44
|
Saenz M, Langers DRM. Tonotopic mapping of human auditory cortex. Hear Res 2013; 307:42-52. [PMID: 23916753 DOI: 10.1016/j.heares.2013.07.016] [Citation(s) in RCA: 105] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/07/2013] [Revised: 07/19/2013] [Accepted: 07/25/2013] [Indexed: 11/26/2022]
Abstract
Since the early days of functional magnetic resonance imaging (fMRI), retinotopic mapping emerged as a powerful and widely-accepted tool, allowing the identification of individual visual cortical fields and furthering the study of visual processing. In contrast, tonotopic mapping in auditory cortex proved more challenging, primarily because of the smaller size of auditory cortical fields. The spatial resolution capabilities of fMRI have since advanced, and recent reports from our labs and several others demonstrate the reliability of tonotopic mapping in human auditory cortex. Here we review the wide range of stimulus procedures and analysis methods that have been used to successfully map tonotopy in human auditory cortex. We point out that recent studies provide a remarkably consistent view of human tonotopic organisation, although the interpretation of the maps continues to vary. In particular, there remains controversy over the exact orientation of the primary gradients with respect to Heschl's gyrus, which leads to different predictions about the location of human A1, R, and surrounding fields. We discuss the development of this debate and argue that the literature is converging towards an interpretation that core fields A1 and R fold across the rostral and caudal banks of Heschl's gyrus, with tonotopic gradients laid out in a distinctive V-shaped manner. This suggests an organisation that is largely homologous with non-human primates. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Collapse
Affiliation(s)
- Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie (LREN), CHUV, Department of Clinical Neurosciences, Lausanne University Hospital, Mont Paisible 16, Lausanne 1011, Switzerland; Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne 1015, Switzerland.
| | | |
Collapse
|
45
|
Doelling KB, Arnal LH, Ghitza O, Poeppel D. Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing. Neuroimage 2013; 85 Pt 2:761-8. [PMID: 23791839 DOI: 10.1016/j.neuroimage.2013.06.035] [Citation(s) in RCA: 309] [Impact Index Per Article: 28.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2013] [Revised: 06/06/2013] [Accepted: 06/07/2013] [Indexed: 11/19/2022] Open
Abstract
A growing body of research suggests that intrinsic neuronal slow (<10 Hz) oscillations in auditory cortex appear to track incoming speech and other spectro-temporally complex auditory signals. Within this framework, several recent studies have identified critical-band temporal envelopes as the specific acoustic feature being reflected by the phase of these oscillations. However, how this alignment between speech acoustics and neural oscillations might underpin intelligibility is unclear. Here we test the hypothesis that the 'sharpness' of temporal fluctuations in the critical band envelope acts as a temporal cue to speech syllabic rate, driving delta-theta rhythms to track the stimulus and facilitate intelligibility. Using magnetoencephalographic recordings, we show that by removing temporal fluctuations that occur at the syllabic rate, envelope-tracking activity is reduced. By artificially reinstating these temporal fluctuations, envelope-tracking activity is regained. These changes in tracking correlate with intelligibility of the stimulus. We interpret our findings as evidence that sharp events in the stimulus cause cortical rhythms to re-align and parse the stimulus into syllable-sized chunks for further decoding. Together, the results suggest that the sharpness of fluctuations in the stimulus, as reflected in the cochlear output, drives oscillatory activity to track and entrain to the stimulus at its syllabic rate. This process likely facilitates parsing of the stimulus into meaningful chunks appropriate for subsequent decoding, enhancing perception and intelligibility.
Collapse
|
46
|
Adaptive temporal encoding leads to a background-insensitive cortical representation of speech. J Neurosci 2013; 33:5728-35. [PMID: 23536086 DOI: 10.1523/jneurosci.5297-12.2013] [Citation(s) in RCA: 206] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Speech recognition is remarkably robust to the listening background, even when the energy of background sounds strongly overlaps with that of speech. How the brain transforms the corrupted acoustic signal into a reliable neural representation suitable for speech recognition, however, remains elusive. Here, we hypothesize that this transformation is performed at the level of auditory cortex through adaptive neural encoding, and we test the hypothesis by recording, using MEG, the neural responses of human subjects listening to a narrated story. Spectrally matched stationary noise, which has maximal acoustic overlap with the speech, is mixed in at various intensity levels. Despite the severe acoustic interference caused by this noise, it is here demonstrated that low-frequency auditory cortical activity is reliably synchronized to the slow temporal modulations of speech, even when the noise is twice as strong as the speech. Such a reliable neural representation is maintained by intensity contrast gain control and by adaptive processing of temporal modulations at different time scales, corresponding to the neural δ and θ bands. Critically, the precision of this neural synchronization predicts how well a listener can recognize speech in noise, indicating that the precision of the auditory cortical representation limits the performance of speech recognition in noise. Together, these results suggest that, in a complex listening environment, auditory cortex can selectively encode a speech stream in a background insensitive manner, and this stable neural representation of speech provides a plausible basis for background-invariant recognition of speech.
Collapse
|
47
|
Prior knowledge on cortex organization in the reconstruction of source current densities from EEG. Neuroimage 2013; 67:7-24. [DOI: 10.1016/j.neuroimage.2012.11.013] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2012] [Revised: 09/19/2012] [Accepted: 11/08/2012] [Indexed: 11/18/2022] Open
|
48
|
Ross B. Steady-state auditory evoked responses. DISORDERS OF PERIPHERAL AND CENTRAL AUDITORY PROCESSING 2013. [DOI: 10.1016/b978-0-7020-5310-8.00008-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
|
49
|
Pang EW. Neuroimaging studies of bilingual expressive language representation in the brain: potential applications for magnetoencephalography. Neurosci Bull 2012; 28:759-64. [PMID: 23124647 DOI: 10.1007/s12264-012-1278-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2012] [Accepted: 04/25/2012] [Indexed: 12/01/2022] Open
Abstract
Bilingualism is the ability to use two or more languages with equal or near-equal fluency. How the brain, often seamlessly, selects, controls, and switches between languages is an enigma. Neuroimaging studies offer the unique opportunity to probe the mechanisms underlying bilingual brain function. Non-invasive methods, in particular functional MRI (fMRI) and event-related potentials (ERPs), have allowed examination in healthy control populations. Whole-head magnetoencephalography (MEG), a relatively new addition to the cadre of neuroimaging tools, offers a combination of the high spatial resolution of fMRI with the high temporal resolution of ERPs. Thus far, MEG has been applied to studies of bilingual receptive language, or bilingual language comprehension. MEG has not yet been applied to the study of bilingual language production, as such studies have faced more challenges (see Salmelin, 2007, for a review), and these have only recently been addressed. Here, we review the literature on MEG expressive language studies and point out a direction for the application of MEG to the study of bilingual language production.
Collapse
Affiliation(s)
- Elizabeth W Pang
- Division of Neurology, Hospital for Sick Children, and Department of Paediatrics, University of Toronto, Toronto, Ontario M5G 1X8, Canada.
| |
Collapse
|
50
|
Emergence of neural encoding of auditory objects while listening to competing speakers. Proc Natl Acad Sci U S A 2012; 109:11854-9. [PMID: 22753470 DOI: 10.1073/pnas.1205381109] [Citation(s) in RCA: 447] [Impact Index Per Article: 37.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation.
Collapse
|