1. Increased fiber density of the fornix in patients with chronic tinnitus revealed by diffusion-weighted MRI. Front Neurosci 2023; 17:1293133. PMID: 38192511; PMCID: PMC10773749; DOI: 10.3389/fnins.2023.1293133.
Abstract
Up to 45% of the elderly population suffer from chronic tinnitus - the phantom perception of sound, often experienced as ringing, whistling, or hissing "in the ear" without external stimulation. Previous research investigated white matter changes in tinnitus patients using diffusion-weighted magnetic resonance imaging (DWI) to assess measures such as fractional anisotropy (a measure of the microstructural integrity of fiber tracts) or mean diffusivity (a measure of general water diffusion). However, findings overlap only minimally and are sometimes even contradictory. Here we present the first study based on higher-order diffusion data that allow a focus on changes in tissue microstructure, such as the number of axons (fiber density), on macroscopic alterations, including axon diameter, and on a combination of both. To deal with the crossing-fibers problem, we applied a fixel-based analysis using a constrained spherical deconvolution signal-modeling approach. We investigated differences between tinnitus patients and control participants, as well as how cognitive abilities and tinnitus distress relate to changes in white matter morphology in chronic tinnitus. To that end, 20 tinnitus patients and 20 control participants, matched for age, sex, and hearing-loss status, underwent DWI, audiometric and cognitive assessments, and filled in questionnaires targeting anxiety and depression. Our results showed increased fiber density in the fornix of tinnitus patients compared with control participants. The observed changes might reflect compensatory structural alterations related to the processing of negative emotions, or maladaptive changes related to the reinforced learning of the chronic tinnitus sensation. Given the small sample size, this work should be seen as a pilot study that motivates further research into the underlying white matter morphology alterations in tinnitus.
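At its core, the group contrast described in this abstract reduces to comparing fiber-density (FD) estimates between two matched samples of 20. The sketch below is a toy illustration of that final step only: the FD values are simulated numbers, not data from the study, and a real fixel-based analysis (e.g., MRtrix3's pipeline) would test thousands of fixels with dedicated smoothing and family-wise error control rather than a single ROI t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical fornix fiber-density (FD) values for 20 patients and
# 20 matched controls. Real FD maps come from a fixel-based pipeline
# built on constrained spherical deconvolution, not from simulated
# numbers like these; the means and spread below are assumptions.
fd_controls = rng.normal(loc=0.40, scale=0.05, size=20)
fd_patients = rng.normal(loc=0.45, scale=0.05, size=20)  # assumed higher FD

# Two-sample t-test on the ROI-averaged values.
t, p = stats.ttest_ind(fd_patients, fd_controls)
print(f"t = {t:.2f}, p = {p:.4f}")
```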
2. An energy costly architecture of neuromodulators for human brain evolution and cognition. Sci Adv 2023; 9:eadi7632. PMID: 38091393; PMCID: PMC10848727; DOI: 10.1126/sciadv.adi7632.
Abstract
In comparison to other species, the human brain exhibits one of the highest energy demands relative to body metabolism. It remains unclear whether this heightened energy demand uniformly supports an enlarged brain or if specific signaling mechanisms necessitate greater energy. We hypothesized that the regional distribution of energy demands would reveal signaling strategies that have contributed to human cognitive development. We measured the energy distribution within the brain functional connectome using multimodal brain imaging and found that signaling pathways in evolutionarily expanded regions have up to 67% higher energetic costs than those in sensory-motor regions. Additionally, histology, transcriptomic data, and molecular imaging independently reveal an up-regulation of signaling at G-protein-coupled receptors in energy-demanding regions. Our findings indicate that neuromodulator activity is predominantly involved in cognitive functions, such as reading or memory processing. This study suggests that an up-regulation of neuromodulator activity, alongside increased brain size, is a crucial aspect of human brain evolution.
3. Anatomy of the auditory cortex then and now. J Comp Neurol 2023; 531:1883-1892. PMID: 38010215; PMCID: PMC10872810; DOI: 10.1002/cne.25560.
Abstract
Using neuroanatomical investigations in the macaque, Deepak Pandya and his colleagues have established the framework for auditory cortex organization, with subdivisions into core and belt areas. This has aided subsequent neurophysiological and imaging studies in monkeys and humans, and a nomenclature building on Pandya's work has also been adopted by the Human Connectome Project. The foundational work by Pandya and his colleagues is highlighted here in the context of subsequent and ongoing studies on the functional anatomy and physiology of auditory cortex in primates, including humans, and their relevance for understanding cognitive aspects of speech and language.
4. Sound-encoded faces activate the left fusiform face area in the early blind. PLoS One 2023; 18:e0286512. PMID: 37992062; PMCID: PMC10664868; DOI: 10.1371/journal.pone.0286512.
Abstract
Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues. Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality. Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces. These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.
5. Evidence for a Spoken Word Lexicon in the Auditory Ventral Stream. Neurobiol Lang 2023; 4:420-434. PMID: 37588129; PMCID: PMC10426387; DOI: 10.1162/nol_a_00108.
Abstract
The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the visual word form area. Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using functional magnetic resonance imaging rapid adaptation techniques, we provide evidence for an auditory lexicon in the auditory word form area in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the auditory word form area. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.
6. Auditory cortical connectivity in humans. Cereb Cortex 2023; 33:6207-6227. PMID: 36573464; PMCID: PMC10422925; DOI: 10.1093/cercor/bhac496.
Abstract
To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants, and complemented with functional connectivity and diffusion tractography. 1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4. 2. A4 has connectivity to anterior temporal lobe TA2, and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking, and what is being said. Consistent with this being a "what" ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca's area, especially BA45. 3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory "where" stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.
7. Listening to familiar music induces continuous inhibition of alpha and low-beta power. J Neurophysiol 2023; 129:1344-1358. PMID: 37141051; DOI: 10.1152/jn.00269.2022.
Abstract
How the brain responds temporally and spectrally when we listen to familiar versus unfamiliar musical sequences remains unclear. This study uses EEG to investigate continuous electrophysiological changes in the human brain during passive listening to familiar and unfamiliar musical excerpts. EEG activity was recorded while twenty participants passively listened to 10-s excerpts of classical music; they were then asked to rate their familiarity with each excerpt. We analyzed the EEG data in two ways: by participant (averaging trials for each condition and participant) and by music excerpt (averaging trials for each condition and excerpt). Comparing the familiar condition with the unfamiliar condition and a local baseline, sustained low-beta power (12-16 Hz) suppression was observed in both analyses in frontocentral and left frontal electrodes after 800 ms. Sustained alpha power (8-12 Hz) decreases in frontocentral and posterior electrodes after 850 ms, however, appeared only in the participant-based analysis. Our study indicates that listening to familiar music elicits a late, sustained spectral response (inhibition of alpha/low-beta power from 800 ms to 10 s). The results further suggest that alpha suppression reflects increased attention or arousal/engagement while listening to familiar music, whereas low-beta suppression reflects the familiarity effect itself.
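The band-power time courses this abstract describes (alpha 8-12 Hz, low-beta 12-16 Hz) can be approximated by bandpass filtering and taking the squared Hilbert envelope. The sketch below is an illustrative single-channel simulation, not the authors' pipeline; the sampling rate, filter order, noise level, and suppression onset are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)  # a 10-s excerpt, as in the study

# Synthetic single-channel EEG: a 10 Hz alpha component plus noise,
# with the alpha amplitude attenuated after 0.85 s to mimic suppression.
rng = np.random.default_rng(1)
alpha_amp = np.where(t < 0.85, 1.0, 0.4)
eeg = alpha_amp * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(signal, fs, low, high):
    """Instantaneous band power: bandpass filter, then squared Hilbert envelope."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

alpha_power = band_power(eeg, fs, 8, 12)   # alpha band
beta_power = band_power(eeg, fs, 12, 16)   # low-beta band

# Compare late (post-0.85 s) alpha power against the early local baseline.
baseline = alpha_power[t < 0.85].mean()
late = alpha_power[t >= 0.85].mean()
print(f"late/baseline alpha power ratio: {late / baseline:.2f}")
```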
8. Disruptions of default mode network and precuneus connectivity associated with cognitive dysfunctions in tinnitus. Sci Rep 2023; 13:5746. PMID: 37029175; PMCID: PMC10082191; DOI: 10.1038/s41598-023-32599-0.
Abstract
Tinnitus is the perception of a ringing, buzzing, or hissing sound "in the ear" without external stimulation. Previous research has demonstrated changes in resting-state functional connectivity in tinnitus, but findings overlap little and are sometimes even contradictory. Furthermore, how altered functional connectivity in tinnitus is related to cognitive abilities is currently unknown. Here we investigated resting-state functional connectivity differences between 20 patients with chronic tinnitus and 20 control participants matched for age, sex, and hearing loss. All participants underwent functional magnetic resonance imaging, audiometric and cognitive assessments, and filled in questionnaires targeting anxiety and depression. We found no significant differences in functional connectivity between tinnitus patients and control participants. However, we did find significant associations between cognitive scores and functional coupling of the default mode network and the precuneus with the superior parietal lobule, supramarginal gyrus, and orbitofrontal cortex. Further, tinnitus distress correlated with connectivity between the precuneus and the lateral occipital complex. This is the first study providing evidence for disruptions of default mode network and precuneus coupling that are related to cognitive dysfunctions in tinnitus. The constant attempt to decrease the tinnitus sensation might occupy brain resources otherwise available for concurrent cognitive operations.
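Resting-state functional connectivity of the kind examined here is commonly quantified as the Pearson correlation between region-averaged BOLD time series, Fisher z-transformed for group statistics. A minimal sketch with simulated signals (the region names, scan length, and noise levels are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical resting-state time series (200 volumes) for two regions,
# e.g., precuneus and superior parietal lobule. Real inputs would be
# preprocessed BOLD signals averaged within each region.
n_vols = 200
shared = rng.standard_normal(n_vols)                 # common fluctuation
precuneus = shared + 0.8 * rng.standard_normal(n_vols)
spl = shared + 0.8 * rng.standard_normal(n_vols)

r = np.corrcoef(precuneus, spl)[0, 1]  # functional connectivity estimate
z = np.arctanh(r)                      # Fisher z-transform for group stats
print(f"r = {r:.2f}, z = {z:.2f}")
```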
9. Neuroanatomical alterations in middle frontal gyrus and the precuneus related to tinnitus and tinnitus distress. Hear Res 2022; 424:108595. DOI: 10.1016/j.heares.2022.108595.
10. Overlapping Anatomical Networks Convey Cross-Modal Suppression in the Sighted and Coactivation of "Visual" and Auditory Cortex in the Blind. Cereb Cortex 2020; 29:4863-4876. PMID: 30843062; DOI: 10.1093/cercor/bhz021.
Abstract
In the present combined DTI/fMRI study we investigated adaptive plasticity of neural networks involved in controlling spatial and nonspatial auditory working memory in the early blind (EB). In both EB and sighted controls (SC), fractional anisotropy (FA) within the right inferior longitudinal fasciculus correlated positively with accuracy in a one-back sound localization but not sound identification task. The neural tracts passing through the cluster of significant correlation connected auditory and "visual" areas in the right hemisphere. Activity in these areas during both sound localization and identification correlated with FA within the anterior corpus callosum, anterior thalamic radiation, and inferior fronto-occipital fasciculus. In EB, FA in these structures correlated positively with activity in both auditory and "visual" areas, whereas FA in SC correlated positively with activity in auditory and negatively with activity in visual areas. The results indicate that frontal white matter conveys cross-modal suppression of occipital areas in SC, while it mediates coactivation of auditory and reorganized "visual" cortex in EB.
11. Effects of age and left hemisphere lesions on audiovisual integration of speech. Brain Lang 2020; 206:104812. PMID: 32447050; PMCID: PMC7379161; DOI: 10.1016/j.bandl.2020.104812.
Abstract
Neuroimaging studies have implicated left temporal lobe regions in audiovisual integration of speech and inferior parietal regions in temporal binding of incoming signals. However, it remains unclear which regions are necessary for audiovisual integration, especially when the auditory and visual signals are offset in time. Aging also influences integration, but the nature of this influence is unresolved. We used a McGurk task to test audiovisual integration and sensitivity to the timing of audiovisual signals in two older adult groups: left hemisphere stroke survivors and controls. We observed a positive relationship between age and audiovisual speech integration in both groups, and an interaction indicating that lesions reduce sensitivity to timing offsets between signals. Lesion-symptom mapping demonstrated that damage to the left supramarginal gyrus and planum temporale reduces temporal acuity in audiovisual speech perception. This suggests that a process mediated by these structures identifies asynchronous audiovisual signals that should not be integrated.
12.
Abstract
Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory-motor task producing sound sequences via hand presses on a newly designed device ("monkey piano"). Catch trials were interspersed to ascertain that the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a "command apparatus" similar to the control of the hand, which was crucial for the evolution of tool use.
13. Inter-subject Similarity of Brain Activity in Expert Musicians After Multimodal Learning: A Behavioral and Neuroimaging Study on Learning to Play a Piano Sonata. Neuroscience 2020; 441:102-116. PMID: 32569807; DOI: 10.1016/j.neuroscience.2020.06.015.
Abstract
Human behavior is inherently multimodal and relies on sensorimotor integration. This is evident when pianists exhibit activity in motor and premotor cortices, as part of a dorsal pathway, while listening to a familiar piece of music, or when naïve participants learn to play simple patterns on the piano. Here we investigated the interaction between multimodal learning and dorsal-stream activity over the course of four weeks in ten skilled pianists by adopting a naturalistic data-driven analysis approach. We presented the pianists with audio-only, video-only and audiovisual recordings of a piano sonata during functional magnetic resonance imaging (fMRI) before and after they had learned to play the sonata by heart for a total of four weeks. We followed the learning process and its outcome with questionnaires administered to the pianists, one piano instructor following their training, and seven external expert judges. The similarity of the pianists' brain activity during stimulus presentations was examined before and after learning by means of inter-subject correlation (ISC) analysis. After learning, an increased ISC was found in the pianists while watching the audiovisual performance, particularly in motor and premotor regions of the dorsal stream. While these brain structures have previously been associated with learning simple audio-motor sequences, our findings are the first to suggest their involvement in learning a complex and demanding audiovisual-motor task. Moreover, the most motivated learners and the best performers of the sonata showed ISC in the dorsal stream and in the reward brain network.
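Inter-subject correlation (ISC), the measure used in this study, is typically computed as the mean pairwise Pearson correlation of subjects' time courses for a given voxel or region during a shared stimulus. A minimal sketch on simulated data (the subject count, time-course length, and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical regional time courses for 10 pianists watching the same
# recording: a shared stimulus-driven signal plus idiosyncratic noise.
n_subjects, n_timepoints = 10, 300
stimulus = rng.standard_normal(n_timepoints)
data = stimulus + rng.standard_normal((n_subjects, n_timepoints))

def mean_isc(data):
    """Mean pairwise Pearson correlation across subjects (inter-subject correlation)."""
    r = np.corrcoef(data)               # subjects x subjects correlation matrix
    iu = np.triu_indices_from(r, k=1)   # unique subject pairs (upper triangle)
    return r[iu].mean()

print(f"mean ISC: {mean_isc(data):.2f}")
```

Comparing this quantity before and after learning, separately per condition and region, is the essence of the pre/post analysis the abstract describes.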
14. Effective connectivity in the default mode network is distinctively disrupted in Alzheimer's disease-A simultaneous resting-state FDG-PET/fMRI study. Hum Brain Mapp 2019; 42:4134-4143. PMID: 30697878; DOI: 10.1002/hbm.24517.
Abstract
A prominent finding of postmortem and molecular imaging studies on Alzheimer's disease (AD) is the accumulation of neuropathological proteins in brain regions of the default mode network (DMN). Molecular models suggest that the progression of disease proteins depends on the directionality of signaling pathways. At network level, effective connectivity (EC) reflects directionality of signaling pathways. We hypothesized a specific pattern of EC in the DMN of patients with AD, related to cognitive impairment. Metabolic connectivity mapping is a novel measure of EC identifying regions of signaling input based on neuroenergetics. We simultaneously acquired resting-state functional MRI and FDG-PET data from patients with early AD (n = 35) and healthy subjects (n = 18) on an integrated PET/MR scanner. We identified two distinct subnetworks of EC in the DMN of healthy subjects: an anterior part with bidirectional EC between hippocampus and medial prefrontal cortex and a posterior part with predominant input into medial parietal cortex. Patients had reduced input into the medial parietal system and absent input from hippocampus into medial prefrontal cortex (p < 0.05, corrected). In a multiple linear regression with unimodal imaging and EC measures (F4,25 = 5.63, p = 0.002, r2 = 0.47), we found that EC (β = 0.45, p = 0.012) was stronger associated with cognitive deficits in patients than any of the PET and fMRI measures alone. Our approach indicates specific disruptions of EC in the DMN of patients with AD and might be suitable to test molecular theories about downstream and upstream spreading of neuropathology in AD.
15. Distinct brain areas process novel and repeating tone sequences. Brain Lang 2018; 187:104-114. PMID: 30278992; DOI: 10.1016/j.bandl.2018.09.006.
Abstract
The auditory dorsal stream has been implicated in sensorimotor integration and concatenation of sequential sound events, both being important for processing of speech and music. The auditory ventral stream, by contrast, is characterized as subserving sound identification and recognition. We studied the respective roles of the dorsal and ventral streams, including recruitment of basal ganglia and medial temporal lobe structures, in the processing of tone sequence elements. A sequence was presented incrementally across several runs during functional magnetic resonance imaging in humans, and we compared activation by sequence elements when heard for the first time ("novel") versus when the elements were repeating ("familiar"). Our results show a shift in tone-sequence-dependent activation from posterior-dorsal cortical areas and the basal ganglia during the processing of less familiar sequence elements towards anterior and ventral cortical areas and the medial temporal lobe after the encoding of highly familiar sequence elements into identifiable auditory objects.
16.
Abstract
At first glance, the monkey brain looks like a smaller version of the human brain. Indeed, the anatomical and functional architecture of the cortical auditory system in monkeys is very similar to that of humans, with dual pathways segregated into a ventral and a dorsal processing stream. Yet, monkeys do not speak. Repeated attempts to pin this inability on one particular cause have failed. A closer look at the necessary components of language, according to Darwin, reveals that all of them got a significant boost during evolution from nonhuman to human primates. The vocal-articulatory system, in particular, has developed into the most sophisticated of all human sensorimotor systems with about a dozen effectors that, in combination with each other, result in an auditory communication system like no other. This sensorimotor network possesses all the ingredients of an internal model system that permits the emergence of sequence processing, as required for phonology and syntax in modern languages.
17. Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex 2017; 98:262-268. PMID: 29183630; DOI: 10.1016/j.cortex.2017.10.020.
Abstract
Dual processing streams in sensory systems have been postulated for a long time. Much experimental evidence has been accumulated from behavioral, neuropsychological, electrophysiological, neuroanatomical and neuroimaging work supporting the existence of largely segregated cortical pathways in both vision and audition. More recently, debate has returned to the question of overlap between these pathways and whether there aren't really more than two processing streams. The present piece defends the dual-system view. Focusing on the functions of the dorsal stream in the auditory and language system I try to reconcile the various models of Where, How and When into one coherent concept of sensorimotor integration. This framework incorporates principles of internal models in feedback control systems and is applicable to the visual system as well.
18. Localization of complex sounds is modulated by behavioral relevance and sound category. J Acoust Soc Am 2017; 142:1757. PMID: 29092572; PMCID: PMC5626571; DOI: 10.1121/1.5003779.
Abstract
Meaningful sounds represent the majority of sounds that humans hear and process in everyday life. Yet studies of human sound localization mainly use artificial stimuli such as clicks, pure tones, and noise bursts. The present study investigated the influence of behavioral relevance, sound category, and acoustic properties on the localization of complex, meaningful sounds in the horizontal plane. Participants localized vocalizations and traffic sounds with two levels of behavioral relevance (low and high) within each category, as well as amplitude-modulated tones. Results showed a small but significant effect of behavioral relevance: localization acuity was higher for complex sounds with a high level of behavioral relevance at several target locations. The data also showed category-specific effects: localization biases were lower, and localization precision higher, for vocalizations than for traffic sounds in central space. Several acoustic parameters influenced sound localization performance as well. Correcting localization responses for front-back reversals reduced the overall variability across sounds, but behavioral relevance and sound category still had a modulatory effect on sound localization performance in central auditory space. The results thus demonstrate that spatial hearing performance for complex sounds is influenced not only by acoustic characteristics, but also by sound category and behavioral relevance.
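Correcting localization responses for front-back reversals, as mentioned in this abstract, can be done by reflecting a response across the interaural axis whenever the mirrored azimuth lies closer to the target. The abstract does not spell out the exact procedure used, so the helper below is an illustrative sketch of one common approach (azimuth in degrees, 0 = front, +/-90 lateral).

```python
def mirror_azimuth(az):
    """Reflect an azimuth across the interaural axis, wrapped to [-180, 180)."""
    return (180 - az + 180) % 360 - 180

def correct_front_back(target, response):
    """Replace a response by its front-back mirror when the mirror is closer to the target."""
    def ang_dist(a, b):
        # Smallest absolute angular difference between two azimuths.
        return abs((a - b + 180) % 360 - 180)
    mirrored = mirror_azimuth(response)
    return mirrored if ang_dist(mirrored, target) < ang_dist(response, target) else response

print(correct_front_back(30, 150))  # reversed response folded back to 30
print(correct_front_back(30, 40))   # plausible response left unchanged: 40
```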
19. Does Tinnitus Depend on Time-of-Day? An Ecological Momentary Assessment Study with the "TrackYourTinnitus" Application. Front Aging Neurosci 2017; 9:253. PMID: 28824415; PMCID: PMC5539131; DOI: 10.3389/fnagi.2017.00253.
Abstract
Only a few previous studies have used ecological momentary assessment to explore the time-of-day dependence of tinnitus. The present study used data from the mobile application “TrackYourTinnitus” to explore whether tinnitus loudness and tinnitus distress fluctuate within a 24-h interval. Multilevel models were fitted to account for the nested structure of assessments (level 1: 17,209 daily life assessments) within days (level 2: 3,570 days with at least three completed assessments), and days within participants (level 3: 350 participants). Results revealed a time-of-day dependence of tinnitus. In particular, tinnitus was perceived as louder and more distressing during the night and early morning hours (from 12 a.m. to 8 a.m.) than during the rest of the day. Since previous studies suggested that stress (and stress-associated hormones) follows a circadian rhythm that might influence the time-of-day dependence of tinnitus, we evaluated whether the results change when statistically controlling for subjectively reported stress levels. Correcting for subjective stress levels, however, did not change the finding that tinnitus (loudness and distress) was most severe at night and in the early morning. These results show that time-of-day contributes to the level of both tinnitus loudness and tinnitus distress. A possible implication for the clinical management of tinnitus is that tailoring the timing of therapeutic interventions to the circadian rhythm of individual patients (chronotherapy) might be promising.
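The study modeled its three-level nesting (assessments within days within participants) with multilevel models; the sketch below only illustrates the raw contrast behind the headline result, a within-participant night (12 a.m. to 8 a.m.) versus day comparison, on simulated data. All effect sizes and counts are assumptions, not the study's numbers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ecological-momentary-assessment data: per-assessment
# tinnitus loudness (arbitrary units) with an assumed night/early-morning
# boost of 0.5 on top of a participant-level baseline.
n_participants, n_per = 50, 40
hours = rng.integers(0, 24, size=(n_participants, n_per))
base = rng.normal(5, 1, size=(n_participants, 1))  # participant-level baseline
night = (hours < 8).astype(float)                  # 12 a.m. - 8 a.m.
loudness = base + 0.5 * night + rng.normal(0, 1, size=hours.shape)

# Within-participant night-minus-day differences remove the
# participant-level variance (the top level of the nesting).
diff = np.array([
    loudness[i][night[i] == 1].mean() - loudness[i][night[i] == 0].mean()
    for i in range(n_participants)
])
print(f"mean night-day difference: {diff.mean():.2f}")
```

A full multilevel model would additionally separate day-level variance and allow covariates such as subjective stress, which is what the study's level-2 term and stress correction provide.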
20. Widespread and Opponent fMRI Signals Represent Sound Location in Macaque Auditory Cortex. Neuron 2017; 93:971-983.e4. PMID: 28190642; DOI: 10.1016/j.neuron.2017.01.013.
Abstract
In primates, posterior auditory cortical areas are thought to be part of a dorsal auditory pathway that processes spatial information. But how posterior (and other) auditory areas represent acoustic space remains a matter of debate. Here we provide new evidence based on functional magnetic resonance imaging (fMRI) of the macaque indicating that space is predominantly represented by a distributed hemifield code rather than by a local spatial topography. Hemifield tuning in cortical and subcortical regions emerges from an opponent hemispheric pattern of activation and deactivation that depends on the availability of interaural delay cues. Importantly, these opponent signals allow responses in posterior regions to segregate space similarly to a hemifield code representation. Taken together, our results reconcile seemingly contradictory views by showing that the representation of space follows closely a hemifield code and suggest that enhanced posterior-dorsal spatial specificity in primates might emerge from this form of coding.
|
21
|
Frontostriatal Gating of Tinnitus and Chronic Pain. Trends Cogn Sci 2016; 19:567-578. [PMID: 26412095 DOI: 10.1016/j.tics.2015.08.002] [Citation(s) in RCA: 154] [Impact Index Per Article: 19.3] [Received: 06/04/2015] [Revised: 08/04/2015] [Accepted: 08/07/2015] [Indexed: 12/18/2022]
Abstract
Tinnitus and chronic pain are sensory-perceptual disorders associated with negative affect and high impact on well-being and behavior. It is now becoming increasingly clear that higher cognitive and affective brain systems are centrally involved in the pathology of both disorders. We propose that the ventromedial prefrontal cortex and the nucleus accumbens are part of a central 'gatekeeping' system in both sensory modalities, a system which evaluates the relevance and affective value of sensory stimuli and controls information flow via descending pathways. If this frontostriatal system is compromised, long-lasting disturbances are the result. Parallels in both systems are striking and mutually informative, and progress in understanding central gating mechanisms might provide a new impetus to the therapy of tinnitus and chronic pain.
|
22
|
Intrinsic network activity in tinnitus investigated using functional MRI. Hum Brain Mapp 2016; 37:2717-35. [PMID: 27091485 PMCID: PMC4945432 DOI: 10.1002/hbm.23204] [Citation(s) in RCA: 84] [Impact Index Per Article: 10.5] [Received: 05/22/2015] [Revised: 02/29/2016] [Accepted: 03/24/2016] [Indexed: 12/13/2022]
Abstract
Tinnitus is an increasingly common disorder in which patients experience phantom auditory sensations, usually ringing or buzzing in the ear. Tinnitus pathophysiology has been repeatedly shown to involve both auditory and non-auditory brain structures, making network-level studies of tinnitus critical. In this magnetic resonance imaging (MRI) study, two resting-state functional connectivity (RSFC) approaches were used to better understand functional network disturbances in tinnitus. First, we demonstrated tinnitus-related reductions in RSFC between specific brain regions and resting-state networks (RSNs), defined by independent components analysis (ICA) and chosen for their overlap with structures known to be affected in tinnitus. Then, we restricted ICA to data from tinnitus patients, and identified one RSN not apparent in control data. This tinnitus RSN included auditory-sensory regions like inferior colliculus and medial Heschl's gyrus, as well as classically non-auditory regions like the mediodorsal nucleus of the thalamus, striatum, lateral prefrontal, and orbitofrontal cortex. Notably, patients' reported tinnitus loudness was positively correlated with RSFC between the mediodorsal nucleus and the tinnitus RSN, indicating that this network may underlie the auditory-sensory experience of tinnitus. These data support the idea that tinnitus involves network dysfunction, and further stress the importance of communication between auditory-sensory and fronto-striatal circuits in tinnitus pathophysiology. Hum Brain Mapp 37:2717-2735, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
|
23
|
Meta-analytic connectivity modeling of the human superior temporal sulcus. Brain Struct Funct 2016; 222:267-285. [PMID: 27003288 DOI: 10.1007/s00429-016-1215-z] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Received: 07/01/2015] [Accepted: 03/06/2016] [Indexed: 12/11/2022]
Abstract
The superior temporal sulcus (STS) is a critical region for multiple neural processes in the human brain (Hein and Knight, J Cogn Neurosci 20(12):2125-2136, 2008). To better understand the multiple functions of the STS, it would be useful to know more about its consistent functional coactivations with other brain regions. We used the meta-analytic connectivity modeling technique to determine consistent functional coactivation patterns across experiments and behaviors associated with bilateral anterior, middle, and posterior anatomical STS subregions. Based on prevailing models for the cortical organization of audition and language, we broadly hypothesized that across various behaviors the posterior STS (pSTS) would coactivate with dorsal-stream regions, whereas the anterior STS (aSTS) would coactivate with ventral-stream regions. The results revealed distinct coactivation patterns for each STS subregion, with some overlap in the frontal and temporal areas, and generally similar coactivation patterns for the left and right STS. Quantitative comparison of STS subregion coactivation maps demonstrated that the pSTS coactivated more strongly than other STS subregions in the same hemisphere with dorsal-stream regions, such as the inferior parietal lobule (only left pSTS), homotopic pSTS, precentral gyrus and supplementary motor area. In contrast, the aSTS showed more coactivation with some ventral-stream regions, such as the homotopic anterior temporal cortex and left inferior frontal gyrus, pars orbitalis (only right aSTS). These findings demonstrate consistent coactivation maps across experiments and behaviors for different anatomical STS subregions, which may help future studies consider various STS functions in the broader context of generalized coactivations for individuals with and without neurological disorders.
|
24
|
Early-latency categorical speech sound representations in the left inferior frontal gyrus. Neuroimage 2016; 129:214-223. [PMID: 26774614 DOI: 10.1016/j.neuroimage.2016.01.016] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Received: 07/03/2015] [Revised: 12/17/2015] [Accepted: 01/06/2016] [Indexed: 11/30/2022]
Abstract
Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.
|
25
|
Auditory and visual cortex of primates: a comparison of two sensory systems. Eur J Neurosci 2015; 41:579-85. [PMID: 25728177 DOI: 10.1111/ejn.12844] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Received: 11/10/2014] [Revised: 12/23/2014] [Accepted: 12/23/2014] [Indexed: 11/29/2022]
Abstract
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separation of the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features at the columnar level are direction selectivity, size/bandwidth selectivity, and receptive fields with segregated vs. overlapping ON and OFF subregions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: (i) identification of objects; and (ii) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independently of sensory modality.
|
26
|
Convergent evidence for the causal involvement of anterior superior temporal gyrus in auditory single-word comprehension. Cortex 2015; 77:164-166. [PMID: 26387007 DOI: 10.1016/j.cortex.2015.08.016] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Received: 08/05/2015] [Accepted: 08/14/2015] [Indexed: 11/16/2022]
|
27
|
Auditory-limbic interactions in chronic tinnitus: Challenges for neuroimaging research. Hear Res 2015; 334:49-57. [PMID: 26299843 DOI: 10.1016/j.heares.2015.08.005] [Citation(s) in RCA: 77] [Impact Index Per Article: 8.6] [Received: 04/10/2015] [Revised: 07/07/2015] [Accepted: 08/17/2015] [Indexed: 01/09/2023]
Abstract
Tinnitus is a widespread auditory disorder affecting approximately 10-15% of the population, often with debilitating consequences. Although tinnitus commonly begins with damage to the auditory system due to loud-noise exposure, aging, or other etiologies, the exact neurophysiological basis of chronic tinnitus remains unknown. Many researchers point to a central auditory origin of tinnitus; however, a growing body of evidence also implicates other brain regions, including the limbic system. Correspondingly, we and others have proposed models of tinnitus in which the limbic and auditory systems both play critical roles and interact with one another. Specifically, we argue that damage to the auditory system generates an initial tinnitus signal, consistent with previous research. In our model, this "transient" tinnitus is suppressed when a limbic frontostriatal network, comprised of ventromedial prefrontal cortex and ventral striatum, successfully modulates thalamocortical transmission in the auditory system. Thus, in chronic tinnitus, limbic-system damage and resulting inefficiency of auditory-limbic interactions prevents proper compensation of the tinnitus signal. Neuroimaging studies utilizing connectivity methods like resting-state fMRI and diffusion MRI continue to uncover tinnitus-related anomalies throughout auditory, limbic, and other brain systems. However, directly assessing interactions between these brain regions and networks has proved to be more challenging. Here, we review existing empirical support for models of tinnitus stressing a critical role for involvement of "non-auditory" structures in tinnitus pathophysiology, and discuss the possible impact of newly refined connectivity techniques from neuroimaging on tinnitus research.
|
28
|
Response to Skeide and Friederici: the myth of the uniquely human 'direct' dorsal pathway. Trends Cogn Sci 2015; 19:484-5. [PMID: 26092212 DOI: 10.1016/j.tics.2015.05.010] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Received: 05/20/2015] [Accepted: 05/26/2015] [Indexed: 10/23/2022]
|
29
|
Functional MRI of the vocalization-processing network in the macaque brain. Front Neurosci 2015; 9:113. [PMID: 25883546 PMCID: PMC4381638 DOI: 10.3389/fnins.2015.00113] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Received: 12/30/2014] [Accepted: 03/17/2015] [Indexed: 12/12/2022]
Abstract
Using functional magnetic resonance imaging in awake behaving monkeys, we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys preferentially activate the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt.
|
30
|
Neurobiological roots of language in primate audition: common computational properties. Trends Cogn Sci 2015; 19:142-50. [PMID: 25600585 PMCID: PMC4348204 DOI: 10.1016/j.tics.2014.12.008] [Citation(s) in RCA: 125] [Impact Index Per Article: 13.9] [Received: 03/10/2014] [Revised: 12/06/2014] [Accepted: 12/12/2014] [Indexed: 11/26/2022]
Abstract
Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions.
|
31
|
Visual imagery and functional connectivity in blindness: a single-case study. Brain Struct Funct 2015; 221:2367-74. [PMID: 25690326 DOI: 10.1007/s00429-015-1010-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Received: 09/29/2014] [Accepted: 02/09/2015] [Indexed: 12/20/2022]
Abstract
We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported relying strongly on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input.
|
32
|
Is there a tape recorder in your head? How the brain stores and retrieves musical melodies. Front Syst Neurosci 2014; 8:149. [PMID: 25221479 PMCID: PMC4147715 DOI: 10.3389/fnsys.2014.00149] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Received: 10/26/2013] [Accepted: 08/04/2014] [Indexed: 11/19/2022]
Abstract
Music consists of strings of sound that vary over time. Technical devices, such as tape recorders, store musical melodies by transcribing event times of temporal sequences into consecutive locations on the storage medium. Playback occurs by reading out the stored information in the same sequence. However, it is unclear how the brain stores and retrieves auditory sequences. Neurons in the anterior lateral belt of auditory cortex are sensitive to the combination of sound features in time, but the integration time of these neurons is not sufficient to store longer sequences that stretch over several seconds, minutes or more. Functional imaging studies in humans provide evidence that music is stored instead within the auditory dorsal stream, including premotor and prefrontal areas. In monkeys, these areas are the substrate for learning of motor sequences. It appears, therefore, that the auditory dorsal stream transforms musical into motor sequence information and vice versa, realizing what are known as forward and inverse models. The basal ganglia and the cerebellum are involved in setting up the sensorimotor associations, translating timing information into spatial codes and back again.
|
33
|
Processing of harmonics in the lateral belt of macaque auditory cortex. Front Neurosci 2014; 8:204. [PMID: 25100935 PMCID: PMC4104551 DOI: 10.3389/fnins.2014.00204] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Received: 02/19/2014] [Accepted: 06/30/2014] [Indexed: 11/23/2022]
Abstract
Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations (“coos”). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB.
|
34
|
An ALE meta-analysis on the audiovisual integration of speech signals. Hum Brain Mapp 2014; 35:5587-605. [PMID: 24996043 DOI: 10.1002/hbm.22572] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Received: 12/19/2013] [Revised: 05/28/2014] [Accepted: 06/24/2014] [Indexed: 11/09/2022]
Abstract
The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals.
|
35
|
Distinct cortical locations for integration of audiovisual speech and the McGurk effect. Front Psychol 2014; 5:534. [PMID: 24917840 PMCID: PMC4040936 DOI: 10.3389/fpsyg.2014.00534] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Received: 02/03/2014] [Accepted: 05/14/2014] [Indexed: 11/13/2022]
Abstract
Audiovisual (AV) speech integration is often studied using the McGurk effect, where the combination of specific incongruent auditory and visual speech cues produces the perception of a third illusory speech percept. Recently, several studies have implicated the posterior superior temporal sulcus (pSTS) in the McGurk effect; however, the exact roles of the pSTS and other brain areas in "correcting" differing AV sensory inputs remain unclear. Using functional magnetic resonance imaging (fMRI) in ten participants, we aimed to isolate brain areas specifically involved in processing congruent AV speech and the McGurk effect. Speech stimuli were composed of sounds and/or videos of consonant-vowel tokens resulting in four stimulus classes: congruent AV speech (AVCong), incongruent AV speech resulting in the McGurk effect (AVMcGurk), acoustic-only speech (AO), and visual-only speech (VO). In group- and single-subject analyses, left pSTS exhibited significantly greater fMRI signal for congruent AV speech (i.e., AVCong trials) than for both AO and VO trials. Right superior temporal gyrus, medial prefrontal cortex, and cerebellum were also identified. For McGurk speech (i.e., AVMcGurk trials), two clusters in the left posterior superior temporal gyrus (pSTG), just posterior to Heschl's gyrus or on its border, exhibited greater fMRI signal than both AO and VO trials. We propose that while some brain areas, such as left pSTS, may be more critical for the integration of AV speech, other areas, such as left pSTG, may generate the "corrected" or merged percept arising from conflicting auditory and visual cues (i.e., as in the McGurk effect). These findings are consistent with the concept that posterior superior temporal areas represent part of a "dorsal auditory stream," which is involved in multisensory integration, sensorimotor control, and optimal state estimation (Rauschecker and Scott, 2009).
|
36
|
Evidence for distinct human auditory cortex regions for sound location versus identity processing. Nat Commun 2014; 4:2585. [PMID: 24121634 PMCID: PMC3932554 DOI: 10.1038/ncomms3585] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.4] [Received: 05/20/2013] [Accepted: 09/10/2013] [Indexed: 11/16/2022]
Abstract
Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC.
|
37
|
Relationship Between Cortical Thickness and Functional Activation in the Early Blind. Cereb Cortex 2014; 25:2035-48. [PMID: 24518755 DOI: 10.1093/cercor/bhu009] [Citation(s) in RCA: 72] [Impact Index Per Article: 7.2] [Indexed: 11/13/2022]
Abstract
Early blindness results in both structural and functional changes of the brain. However, these changes have rarely been studied in relation to each other. We measured alterations in cortical thickness (CT) caused by early visual deprivation and their relationship with cortical activity. Structural and functional magnetic resonance imaging was performed in 12 early blind (EB) humans and 12 sighted controls (SC). Experimental conditions included one-back tasks for auditory localization and pitch identification, and a simple sound-detection task. Structural and functional data were analyzed in a whole-brain approach and within anatomically defined regions of interest in sensory areas of the spared (auditory) and deprived (visual) modalities. Functional activation during sound-localization or pitch-identification tasks correlated negatively with CT in occipital areas of EB (calcarine sulcus, lingual gyrus, superior and middle occipital gyri, and cuneus) and in nonprimary auditory areas of SC. These results suggest a link between CT and activation and demonstrate that the relationship between cortical structure and function may depend on early sensory experience, probably via selective pruning of exuberant connections. Activity-dependent effects of early sensory deprivation and long-term practice are superimposed on normal maturation and aging. Together these processes shape the relationship between brain structure and function over the lifespan.
|
38
|
Selectivity for space and time in early areas of the auditory dorsal stream in the rhesus monkey. J Neurophysiol 2014; 111:1671-85. [PMID: 24501260 DOI: 10.1152/jn.00436.2013] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Indexed: 11/22/2022]
Abstract
The respective roles of ventral and dorsal cortical processing streams are still under discussion in both vision and audition. We characterized neural responses in the caudal auditory belt cortex, an early dorsal stream region of the macaque. We found fast neural responses with elevated temporal precision as well as neurons selective to sound location. These populations were partly segregated: Neurons in a caudomedial area more precisely followed temporal stimulus structure but were less selective to spatial location. Response latencies in this area were even shorter than in primary auditory cortex. Neurons in a caudolateral area showed higher selectivity for sound source azimuth and elevation, but responses were slower and matching to temporal sound structure was poorer. In contrast to the primary area and other regions studied previously, latencies in the caudal belt neurons were not negatively correlated with best frequency. Our results suggest that two functional substreams may exist within the auditory dorsal stream.
|
39
|
Abstract
Auditory word-form recognition was originally proposed by Wernicke to occur within left superior temporal gyrus (STG), later further specified to be in posterior STG. To account for clinical observations (specifically paraphasia), Wernicke proposed his sensory speech center was also essential for correcting output from frontal speech-motor regions. Recent work, in contrast, has established a role for anterior STG, part of the auditory ventral stream, in the recognition of species-specific vocalizations in nonhuman primates and word-form recognition in humans. Recent work also suggests monitoring self-produced speech and motor control are associated with posterior STG, part of the auditory dorsal stream. Working without quantitative methods or evidence of sensory cortex's hierarchical organization, Wernicke co-localized functions that today appear dissociable. "Wernicke's area" thus may be better construed as two cortical modules, an auditory word-form area (AWFA) in the auditory ventral stream and an "inner speech area" in the auditory dorsal stream.
|
40
|
Are you listening? Brain activation associated with sustained nonspatial auditory attention in the presence and absence of stimulation. Hum Brain Mapp 2013; 35:2233-52. [PMID: 23913818 DOI: 10.1002/hbm.22323] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Received: 07/17/2012] [Revised: 02/22/2013] [Accepted: 04/15/2013] [Indexed: 11/12/2022]
Abstract
Neuroimaging studies investigating the voluntary (top-down) control of attention largely agree that this process recruits several frontal and parietal brain regions. Since most studies used attention tasks requiring several higher-order cognitive functions (e.g. working memory, semantic processing, temporal integration, spatial orienting) as well as different attentional mechanisms (attention shifting, distractor filtering), it is unclear what exactly the observed frontoparietal activations reflect. The present functional magnetic resonance imaging study investigated, within the same participants, signal changes in (1) a "Simple Attention" task in which participants attended to a single melody, (2) a "Selective Attention" task in which they simultaneously ignored another melody, and (3) a "Beep Monitoring" task in which participants listened in silence for a faint beep. Compared to resting conditions with identical stimulation, all tasks produced robust activation increases in auditory cortex, cross-modal inhibition in visual and somatosensory cortex, and decreases in the default mode network, indicating that participants were indeed focusing their attention on the auditory domain. However, signal increases in frontal and parietal brain areas were only observed for tasks 1 and 2, but completely absent for task 3. These results lead to the following conclusions: under most conditions, frontoparietal activations are crucial for attention since they subserve higher-order cognitive functions inherently related to attention. However, under circumstances that minimize other demands, nonspatial auditory attention in the absence of stimulation can be maintained without concurrent frontal or parietal activations.
|
41
|
Cortical plasticity and preserved function in early blindness. Neurosci Biobehav Rev 2013; 41:53-63. [PMID: 23453908 DOI: 10.1016/j.neubiorev.2013.01.025] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2012] [Revised: 01/09/2013] [Accepted: 01/28/2013] [Indexed: 10/27/2022]
Abstract
The "neural Darwinism" theory predicts that when one sensory modality is lacking, as in congenital blindness, the target structures are taken over by the afferent inputs from other senses that will promote and control their functional maturation (Edelman, 1993). This view receives support from both cross-modal plasticity experiments in animal models and functional imaging studies in man, which are presented here.
|
42
|
Processing streams in the early blind. Multisens Res 2013. [DOI: 10.1163/22134808-000s0002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
43
|
|
44
|
Functional MRI evidence for a role of ventral prefrontal cortex in tinnitus. Brain Res 2012; 1485:22-39. [PMID: 22982009 DOI: 10.1016/j.brainres.2012.08.052] [Citation(s) in RCA: 66] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2012] [Revised: 08/23/2012] [Accepted: 08/27/2012] [Indexed: 12/26/2022]
Abstract
It has long been known that subjective tinnitus, a constant or intermittent phantom sound perceived by 10 to 15% of the adult population, is not a purely auditory phenomenon but is also tied to limbic-related brain regions. Supporting evidence comes from data indicating that stress and emotion can modulate tinnitus, and from brain imaging studies showing functional and anatomical differences in limbic-related brain regions of tinnitus patients and controls. Recent studies from our lab revealed altered blood oxygen level-dependent (BOLD) responses to stimulation at the tinnitus frequency in the ventral striatum (specifically, the nucleus accumbens) and gray-matter reductions (i.e., anatomical changes) in ventromedial prefrontal cortex (vmPFC), of tinnitus patients compared to controls. The present study extended these findings by demonstrating functional differences in vmPFC between 20 tinnitus patients and 20 age-matched controls. Importantly, the observed BOLD response in vmPFC was positively correlated with tinnitus characteristics such as subjective loudness and the percent of time during which the tinnitus was perceived, whereas correlations with tinnitus handicap inventory scores and other variables known to be affected in tinnitus (e.g., depression, anxiety, noise sensitivity, hearing loss) were weaker or absent. This suggests that the observed group differences are indeed related to the strength of the tinnitus percept and not to an affective reaction to tinnitus. The results further corroborate vmPFC as a region of high interest for tinnitus research. This article is part of a Special Issue entitled: Tinnitus Neuroscience.
|
45
|
Ventral and dorsal streams in the evolution of speech and language. Front Evol Neurosci 2012; 4:7. [PMID: 22615693 PMCID: PMC3351753 DOI: 10.3389/fnevo.2012.00007] [Citation(s) in RCA: 87] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2011] [Accepted: 04/25/2012] [Indexed: 11/13/2022]
Abstract
The brains of humans and old-world monkeys show a great deal of anatomical similarity. The auditory cortical system, for instance, is organized into a ventral and a dorsal pathway in both species. A fundamental question with regard to the evolution of speech and language (as well as music) is whether human and monkey brains show principal differences in their organization (e.g., new pathways appearing as a result of a single mutation), or whether species differences are of a more subtle, quantitative nature. There is little doubt about a similar role of the ventral auditory pathway in both humans and monkeys in the decoding of spectrally complex sounds, which some authors have referred to as auditory object recognition. This includes the decoding of speech sounds ("speech perception") and their ultimate linking to meaning in humans. The originally presumed role of the auditory dorsal pathway in spatial processing, by analogy to the visual dorsal pathway, has recently been conceptualized into a more general role in sensorimotor integration and control. Specifically for speech, the dorsal processing stream plays a role in speech production as well as categorization of phonemes during on-line processing of speech.
|
46
|
Cortico-limbic morphology separates tinnitus from tinnitus distress. Front Syst Neurosci 2012; 6:21. [PMID: 22493571 PMCID: PMC3319920 DOI: 10.3389/fnsys.2012.00021] [Citation(s) in RCA: 113] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2012] [Accepted: 03/16/2012] [Indexed: 12/21/2022] Open
Abstract
Tinnitus is a common auditory disorder characterized by a chronic ringing or buzzing "in the ear." Despite the auditory-perceptual nature of this disorder, a growing number of studies have reported neuroanatomical differences in tinnitus patients outside the auditory-perceptual system. Some have used this evidence to characterize chronic tinnitus as dysregulation of the auditory system, either resulting from inefficient inhibitory control or through the formation of aversive associations with tinnitus. It remains unclear, however, whether these "non-auditory" anatomical markers of tinnitus are related to the tinnitus signal itself, or merely to negative emotional reactions to tinnitus (i.e., tinnitus distress). In the current study, we used anatomical MRI to identify neural markers of tinnitus, and measured their relationship to a variety of tinnitus characteristics and other factors often linked to tinnitus, such as hearing loss, depression, anxiety, and noise sensitivity. In a new cohort of participants, we confirmed that people with chronic tinnitus exhibit reduced gray matter in ventromedial prefrontal cortex (vmPFC) compared to controls matched for age and hearing loss. This effect was driven by reduced cortical surface area, and was not related to tinnitus distress, symptoms of depression or anxiety, noise sensitivity, or other factors. Instead, tinnitus distress was positively correlated with cortical thickness in the anterior insula in tinnitus patients, while symptoms of anxiety and depression were negatively correlated with cortical thickness in subcallosal anterior cingulate cortex (scACC) across all groups. Tinnitus patients also exhibited increased gyrification of dorsomedial prefrontal cortex (dmPFC), which was more severe in those patients with constant (vs. intermittent) tinnitus awareness. Our data suggest that the neural systems associated with chronic tinnitus are different from those involved in aversive or distressed reactions to tinnitus.
|
47
|
Sound-identity processing in early areas of the auditory ventral stream in the macaque. J Neurophysiol 2011; 107:1123-41. [PMID: 22131372 DOI: 10.1152/jn.00793.2011] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Auditory cortical processing is thought to be accomplished along two processing streams. The existence of a posterior/dorsal stream dealing, among other functions, with the processing of spatial aspects of sound has been corroborated by numerous studies in several species. An anterior/ventral stream for the processing of nonspatial sound qualities, including the identification of sounds such as species-specific vocalizations, has also received much support. Originally discovered in anterolateral belt cortex, most recent work on the anterior/ventral pathway has been performed on far anterior superior temporal (ST) areas and on ventrolateral prefrontal cortex (VLPFC). Regions of the anterior/ventral stream near its origin in early auditory areas have been less explored. In the present study, we examined three early auditory regions with different anteroposterior locations (caudal, middle, and rostral) in awake rhesus macaques. We analyzed how well classification based on sound-evoked activity patterns of neuronal populations replicates the original stimulus categories. Of the three regions, the rostral region (rR), which included core area R and medial belt area RM, yielded the greatest classification success across all stimulus classes or between classes of natural sounds. Starting from ∼80 ms past stimulus onset, clustering based on the population response in rR became clearly more successful than clustering based on responses from any other region. Our study demonstrates that specialization for sound-identity processing can be found very early in the auditory ventral stream. Furthermore, the fact that this processing develops over time can shed light on underlying mechanisms. Finally, we show that population analysis is a more sensitive method for revealing functional specialization than conventional types of analysis.
|
48
|
Preserved Functional Specialization in Sensory Substitution of the Early Blind. Iperception 2011. [DOI: 10.1068/ic749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022] Open
|
49
|
Abstract
Tinnitus is a common disorder characterized by ringing in the ear in the absence of sound. Converging evidence suggests that tinnitus pathophysiology involves damage to peripheral and/or central auditory pathways. However, whether auditory system dysfunction is sufficient to explain chronic tinnitus is unclear, especially in light of evidence implicating other networks, including the limbic system. Using functional magnetic resonance imaging and voxel-based morphometry, we assessed tinnitus-related functional and anatomical anomalies in auditory and limbic networks. Moderate hyperactivity was present in the primary and posterior auditory cortices of tinnitus patients. However, the nucleus accumbens exhibited the greatest degree of hyperactivity, specifically to sounds frequency-matched to patients' tinnitus. Complementary structural differences were identified in ventromedial prefrontal cortex, another limbic structure heavily connected to the nucleus accumbens. Furthermore, tinnitus-related anomalies were intercorrelated in the two limbic regions and between limbic and primary auditory areas, indicating the importance of auditory-limbic interactions in tinnitus.
|
50
|
Segregation of vowels and consonants in human auditory cortex: evidence for distributed hierarchical organization. Front Psychol 2010; 1:232. [PMID: 21738513 PMCID: PMC3125530 DOI: 10.3389/fpsyg.2010.00232] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2010] [Accepted: 12/08/2010] [Indexed: 11/24/2022] Open
Abstract
The speech signal consists of a continuous stream of consonants and vowels, which must be de- and encoded in human auditory cortex to ensure the robust recognition and categorization of speech sounds. We used small-voxel functional magnetic resonance imaging to study information encoded in local brain activation patterns elicited by consonant-vowel syllables, and by a control set of noise bursts. First, activation of anterior–lateral superior temporal cortex was seen when controlling for unspecific acoustic processing (syllables versus band-passed noises, in a “classic” subtraction-based design). Second, a classifier algorithm, which was trained and tested iteratively on data from all subjects to discriminate local brain activation patterns, yielded separations of cortical patches discriminative of vowel category versus patches discriminative of stop-consonant category across the entire superior temporal cortex, yet with regional differences in average classification accuracy. Overlap (voxels correctly classifying both speech sound categories) was surprisingly sparse. Third, lending further plausibility to the results, classification of speech–noise differences was generally superior to speech–speech classifications, with the notable exception of a left anterior region, where speech–speech classification accuracies were significantly better. These data demonstrate that acoustic–phonetic features are encoded in complex yet sparsely overlapping local patterns of neural activity distributed hierarchically across different regions of the auditory cortex. The redundancy apparent in these multiple patterns may partly explain the robustness of phonemic representations.
|