1
Temudo S, Pinheiro AP. What Is Faster than Where in Vocal Emotional Perception. J Cogn Neurosci 2025; 37:239-265. PMID: 39348115. DOI: 10.1162/jocn_a_02251.
Abstract
Voices carry a vast amount of information about speakers (e.g., emotional state, spatial location). Neuroimaging studies postulate that spatial ("where") and emotional ("what") cues are processed by partially independent processing streams. Although behavioral evidence reveals interactions between emotion and space, the temporal dynamics of these processes in the brain and their modulation by attention remain unknown. We investigated whether and how spatial and emotional features interact during voice processing as a function of attention focus. Spatialized nonverbal vocalizations differing in valence (neutral, amusement, anger) were presented at different locations around the head, while listeners discriminated either the spatial location or the emotional quality of the voice. Neural activity was measured with event-related potentials (ERPs) derived from the EEG. Affective ratings were collected at the end of the EEG session. Emotional vocalizations elicited decreased N1 but increased P2 and late positive potential amplitudes. Interactions of space and emotion occurred at the salience detection stage: neutral vocalizations presented at right (vs. left) locations elicited increased P2 amplitudes, but no such differences were observed for emotional vocalizations. When task instructions involved emotion categorization, the P2 was increased for vocalizations presented at front (vs. back) locations. Behaviorally, only valence and arousal ratings showed emotion-space interactions. These findings suggest that emotional representations are activated earlier than spatial representations in voice processing. The perceptual prioritization of emotional cues occurred irrespective of task instructions but was not paralleled by an augmented representation of the stimulus in space. These findings support differential responding of auditory processing pathways to emotional information.
2
Clarke S, Da Costa S, Crottaz-Herbette S. Dual Representation of the Auditory Space. Brain Sci 2024; 14:535. PMID: 38928534. PMCID: PMC11201621. DOI: 10.3390/brainsci14060535.
Abstract
Auditory spatial cues contribute to two distinct functions: one leads to explicit localization of sound sources, and the other provides a location-linked representation of sound objects. Behavioral and imaging studies have demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented a dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation; the latter involves location-linked encoding of sound objects. We review here evidence pertaining to the brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that either changed their locations or remained stationary. A systematic search identified one AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects strongly involves the left hemisphere and, to a lesser degree, the right hemisphere. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, emotional valence benefits from location-linked encoding as well.
Affiliation(s)
- Stephanie Clarke
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011 Lausanne, Switzerland
3
Josef-Golubić S. Triple model of auditory sensory processing: a novel gating stream directly links primary auditory areas to executive prefrontal cortex. Acta Clin Croat 2020; 59:721-728. PMID: 34285443. PMCID: PMC8253058. DOI: 10.20471/acc.2020.59.04.19.
Abstract
The generally accepted model of sensory processing of visual and auditory stimuli assumes two major parallel processing streams, ventral and dorsal, which comprise functionally and anatomically distinct but interacting processes: the ventral stream supports stimulus identification, whereas the dorsal stream is involved in recognizing the stimulus's spatial location and in sensori-motor integration. However, recent studies suggest the existence of a third, very fast sensory processing pathway, a gating stream that directly links the primary auditory cortices to the executive prefrontal cortex within the first 50 milliseconds after presentation of a stimulus, bypassing the hierarchical structure of the ventral and dorsal pathways. The gating stream propagates the sensory gating phenomenon, a basic protective mechanism that prevents irrelevant, repeated information from undergoing recurrent sensory processing. The goal of the present paper is to introduce a novel 'three-stream' model of auditory processing that includes this fast sensory processing stream, i.e. the gating stream, alongside the well-established dorsal and ventral sensory processing pathways. Impairments of sensory processing along the gating stream have been found to be strongly involved in the pathophysiological sensory processing observed in Alzheimer's disease and could be the underlying issue in numerous neuropsychiatric disorders and diseases linked to pathological sensory gating inhibition, such as schizophrenia, post-traumatic stress disorder, bipolar disorder, and attention deficit hyperactivity disorder.
Affiliation(s)
- Sanja Josef-Golubić
- Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia
4
Kuiper JJ, Lin YH, Young IM, Bai MY, Briggs RG, Tanglay O, Fonseka RD, Hormovas J, Dhanaraj V, Conner AK, O'Neal CM, Sughrue ME. A parcellation-based model of the auditory network. Hear Res 2020; 396:108078. PMID: 32961519. DOI: 10.1016/j.heares.2020.108078.
Abstract
INTRODUCTION The auditory network plays an important role in interaction with the environment. Multiple cortical areas, such as the inferior frontal gyrus, superior temporal gyrus, and adjacent insula, have been implicated in this processing. However, understanding of this network's connectivity has lacked tractographic specificity. METHODS Using attention task-based functional magnetic resonance imaging (fMRI) studies, an activation likelihood estimation (ALE) of the auditory network was generated. Regions of interest corresponding to the cortical parcellation scheme previously published under the Human Connectome Project were co-registered onto the ALE in Montreal Neurological Institute coordinate space and visually assessed for inclusion in the network. Diffusion spectrum MRI-based fiber tractography was performed to determine the structural connections between the cortical parcellations comprising the network. RESULTS Fifteen cortical regions were found to be part of the auditory network: areas 44 and 8C, auditory areas 1, 4, and 5, frontal opercular area 4, the lateral belt, medial belt, and parabelt, parietal area F centromedian, the perisylvian language area, the retroinsular cortex, the supplementary and cingulate eye fields, and temporoparietal junction area 1. These regions showed consistent interconnections between adjacent parcellations. The frontal aslant tract was found to connect areas within the frontal lobe, the arcuate fasciculus to connect the frontal and temporal lobes, and subcortical U-fibers to connect parcellations within the temporal area. Further studies may refine this model, with the ultimate goal of clinical application.
Affiliation(s)
- Joseph J Kuiper
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Yueh-Hsin Lin
- Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Michael Y Bai
- Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Robert G Briggs
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Onur Tanglay
- Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- R Dineth Fonseka
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Jorge Hormovas
- Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Vukshitha Dhanaraj
- Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Andrew K Conner
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Christen M O'Neal
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Michael E Sughrue
- Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
5
Liang Y, Liu B, Li X, Wang P, Wang B. Revealing the differences of the representations of sounds from different directions in the human brain using functional connectivity. Neurosci Lett 2020; 718:134746. PMID: 31923522. DOI: 10.1016/j.neulet.2020.134746.
Abstract
Many studies have focused on the processing of sound direction in the human brain; however, to our knowledge, it remains unclear whether the representations of sounds from different directions differ. In the present study, 28 subjects were scanned while listening to sounds from different directions. We used whole-brain functional connectivity (FC) analysis to explore which brain regions showed significant changes. Our results revealed that sounds from different directions affected FC in widely distributed regions. Importantly, all such regions showed significant differences in FC between the central and eccentric directions, whereas few showed a difference between the left and right directions. These findings reveal differences in the representations of sounds from different directions in the human brain.
Affiliation(s)
- Yaping Liang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, PR China; College of Intelligence and Computing, Tianjin University, Tianjin, 300350, PR China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, PR China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, PR China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, PR China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, PR China
6
Tissieres I, Crottaz-Herbette S, Clarke S. Implicit representation of the auditory space: contribution of the left and right hemispheres. Brain Struct Funct 2019; 224:1569-1582. PMID: 30848352. DOI: 10.1007/s00429-019-01853-5.
Abstract
Spatial cues contribute to the ability to segregate sound sources and thus facilitate their detection and recognition. This implicit use of spatial cues can be preserved in cases of cortical spatial deafness, suggesting that partially distinct neural networks underlie explicit sound localization and the implicit use of spatial cues. We addressed this issue by assessing 40 patients, 20 with left and 20 with right hemispheric damage, for their ability to use auditory spatial cues implicitly in a paradigm of spatial release from masking (SRM) and explicitly in sound localization. The anatomical correlates of their performance were determined with voxel-based lesion-symptom mapping (VLSM). During the SRM task, the target was always presented at the centre, whereas the masker was presented at the centre or at one of two lateral positions, on the right or left side. The SRM effect was absent in some but not all patients; the inability to perceive the target when the masker was at one of the lateral positions correlated with lesions of the left temporo-parieto-frontal cortex or of the right inferior parietal lobule and the underlying white matter. As previously reported, sound localization depended critically on the right parietal and opercular cortex. Thus, the explicit and implicit use of spatial cues depends on at least partially distinct neural networks. Our results suggest that the implicit use may rely on the left-dominant position-linked representation of sound objects, which has been demonstrated in previous EEG and fMRI studies.
Affiliation(s)
- Isabel Tissieres
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
7
Briggs RG, Pryor DP, Conner AK, Nix CE, Milton CK, Kuiper JK, Palejwala AH, Sughrue ME. The Artery of Aphasia, A Uniquely Sensitive Posterior Temporal Middle Cerebral Artery Branch that Supplies Language Areas in the Brain: Anatomy and Report of Four Cases. World Neurosurg 2019; 126:e65-e76. PMID: 30735868. DOI: 10.1016/j.wneu.2019.01.159.
Abstract
BACKGROUND Arterial disruption during brain surgery can cause devastating injuries to wide expanses of white and gray matter beyond the tumor resection cavity. Such damage may occur as a result of disrupting blood flow through en passage arteries. Identification of these arteries is critical to preventing unforeseen neurologic sequelae during brain tumor resection. In this study, we discuss one such artery, termed the artery of aphasia (AoA), which, when disrupted, can lead to receptive and expressive language deficits. METHODS We performed a retrospective review of all patients undergoing an awake craniotomy for resection of a glioma by the senior author from 2012 to 2018. Patients were included if they experienced language deficits secondary to postoperative infarction of the left posterior temporal lobe in the distribution of the AoA. The gross anatomy of the AoA was then compared with activation likelihood estimations of the auditory and semantic language networks using coordinate-based meta-analytic techniques. RESULTS We identified 4 patients with left-sided posterior temporal infarctions in the distribution of the AoA on diffusion-weighted magnetic resonance imaging. All 4 patients developed substantial expressive and receptive language deficits after surgery. Functional language improvement occurred in only 2 of the 4 patients. Activation likelihood estimations localized parts of the auditory and semantic language networks to the distribution of the AoA. CONCLUSIONS The AoA is prone to blood flow disruption despite benign manipulation. Patients seem to have limited capacity for speech recovery after intraoperative ischemia in the distribution of this artery, which supplies parts of the auditory and semantic language networks.
Affiliation(s)
- Robert G Briggs
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Dillon P Pryor
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Andrew K Conner
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Cameron E Nix
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Camille K Milton
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Joseph K Kuiper
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Ali H Palejwala
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Michael E Sughrue
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, Australia
8
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. PMID: 29643021. DOI: 10.1016/j.heares.2018.03.027.
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or of different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway through the block (change within block). Environmental sounds bilaterally activated the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. Left lateral early-stage areas showed significant effects of position and change and significant Category × Initial Position and Category × Change in Position interactions, while right lateral areas showed a main effect of category and a Category × Change in Position interaction. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in the lateral belt areas and tracks sound objects across space.
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland.
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
9
Altmann CF, Ueda R, Bucher B, Furukawa S, Ono K, Kashino M, Mima T, Fukuyama H. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography. Neuroimage 2017; 159:185-194. DOI: 10.1016/j.neuroimage.2017.07.055.
10
Kim SG, Knösche TR. Resting state functional connectivity of the ventral auditory pathway in musicians with absolute pitch. Hum Brain Mapp 2017; 38:3899-3916. PMID: 28481006. DOI: 10.1002/hbm.23637.
Abstract
Absolute pitch (AP) is the ability to recognize the pitch chroma of a tonal sound without external references, providing a unique model of the human auditory system (Zatorre: Nat Neurosci 6, 692-695). In a previous study (Kim and Knösche: Hum Brain Mapp 37, 3486-3501), we identified enhanced intracortical myelination in the right planum polare (PP) in musicians with AP, a potential site for perceptual processing of pitch chroma information. We speculated that this area, which initiates the ventral auditory pathway, might be crucially involved in the perceptual stage of the AP process in the context of the "dual pathway hypothesis", which posits a role of the ventral pathway in processing nonspatial information related to the identity of an auditory object (Rauschecker: Eur J Neurosci 41, 579-585). To test our conjecture on the ventral pathway, we investigated resting state functional connectivity (RSFC) using functional magnetic resonance imaging (fMRI) in musicians with varying degrees of AP. Should our hypothesis be correct, RSFC via the ventral pathway is expected to be stronger in musicians with AP, whereas no such group effect is predicted for RSFC via the dorsal pathway. In the current data, we found greater RSFC between the right PP and bilateral anteroventral auditory cortices in musicians with AP. In contrast, we did not find any group difference in the RSFC of the planum temporale (PT) between musicians with and without AP. We believe that these findings support our conjecture on the critical role of the ventral pathway in AP recognition.
Affiliation(s)
- Seung-Goo Kim
- Research Group for MEG and EEG - Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas R Knösche
- Research Group for MEG and EEG - Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
11
Renvall H, Staeren N, Barz CS, Ley A, Formisano E. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes. Front Neurosci 2016; 10:254. PMID: 27375416. PMCID: PMC4894904. DOI: 10.3389/fnins.2016.00254.
Abstract
This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds and environmental sounds, which were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared with monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. The bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger in the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues are processed together with other relevant temporal and spectral cues in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in the auditory cortex, may explain the simultaneous increase of BOLD responses and decrease of MEG responses. These findings highlight the complementary role of electrophysiological and hemodynamic measures in addressing brain processing of complex stimuli.
Affiliation(s)
- Hanna Renvall
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Aalto Neuroimaging, Magnetoencephalography (MEG) Core, Aalto University, Espoo, Finland
- Noël Staeren
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Claudia S Barz
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Institute for Neuroscience and Medicine, Research Centre Juelich, Juelich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Anke Ley
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Center for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands
12
Abstract
Goal-directed behavior can be characterized as a dynamic link between a sensory stimulus and a motor act. Neural correlates of many of the intermediate events of goal-directed behavior are found in the posterior parietal cortex. Although the parietal cortex’s role in guiding visual behaviors has received considerable attention, relatively little is known about its role in mediating auditory behaviors. Here, the authors review recent studies that have focused on how neurons in the lateral intraparietal area (area LIP) differentially process auditory and visual stimuli. These studies suggest that area LIP contains a modality-dependent representation that is highly dependent on behavioral context.
Affiliation(s)
- Yale E Cohen
- Department of Psychological and Brain Sciences, Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH
13
Kim SG, Knösche TR. Intracortical myelination in musicians with absolute pitch: Quantitative morphometry using 7-T MRI. Hum Brain Mapp 2016; 37:3486-3501. PMID: 27160707. PMCID: PMC5084814. DOI: 10.1002/hbm.23254.
Abstract
Absolute pitch (AP) is the ability to recognize and label the pitch chroma of a given tone without external reference. Brain structures and functions known to be related to AP are mainly macroscopic. To shed light on the underlying neural mechanism of AP, we investigated intracortical myeloarchitecture in musicians with and without AP using quantitative mapping of longitudinal relaxation rates with ultra-high-field magnetic resonance imaging at 7 T. We found greater intracortical myelination in AP musicians in the anterior region of the supratemporal plane, particularly the medial region of the right planum polare (PP). In the same region of the right PP, we also found a positive correlation with a behavioral index of AP performance. In addition, we found a positive correlation with the frequency discrimination threshold in the anterolateral Heschl's gyrus of the right hemisphere, demonstrating distinctive neural processes for absolute recognition and relative discrimination of pitch. Given the possible effects of local myelination in the cortex and the known importance of the anterior superior temporal gyrus/sulcus for the identification of auditory objects, we argue that pitch chroma may be processed as an identifiable object property in AP musicians.
Affiliation(s)
- Seung-Goo Kim
- Research Group for MEG and EEG-Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas R Knösche
- Research Group for MEG and EEG-Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

14. Dormal G, Rezk M, Yakobov E, Lepore F, Collignon O. Auditory motion in the sighted and blind: Early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions. Neuroimage 2016; 134:630-644. PMID: 27107468; DOI: 10.1016/j.neuroimage.2016.04.027.
Abstract
How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting individually defined visual area hMT+/V5 however indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions that typically support the processing of motion information.
Affiliation(s)
- Giulia Dormal
- Centre de recherche en Neuropsychologie et Cognition (CERNEC), University of Montreal, Canada; Institut de Psychologie et Institut de Neurosciences, University of Louvain, Belgium; Biological Psychology and Neuropsychology, Institute for Psychology, University of Hamburg, Germany
- Mohamed Rezk
- Centre for Mind/Brain Science (CIMeC), University of Trento, Italy
- Franco Lepore
- Centre de recherche en Neuropsychologie et Cognition (CERNEC), University of Montreal, Canada

15. Zündorf IC, Lewald J, Karnath HO. Testing the dual-pathway model for auditory processing in human cortex. Neuroimage 2015; 124:672-681. PMID: 26388552; DOI: 10.1016/j.neuroimage.2015.09.026.
Abstract
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis.
Affiliation(s)
- Ida C Zündorf
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, University of South Carolina, Columbia, SC 29208, USA

16. Roaring lions and chirruping lemurs: How the brain encodes sound objects in space. Neuropsychologia 2015; 75:304-13. DOI: 10.1016/j.neuropsychologia.2015.06.012.

17. Da Costa S, Bourquin NMP, Knebel JF, Saenz M, van der Zwaag W, Clarke S. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI. PLoS One 2015; 10:e0124072. PMID: 25938430; PMCID: PMC4418571; DOI: 10.1371/journal.pone.0124072.
Abstract
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e. a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds.
Affiliation(s)
- Sandra Da Costa
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Nathalie M.-P. Bourquin
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Jean-François Knebel
- National Center of Competence in Research, SYNAPSY—The Synaptic Bases of Mental Diseases, Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Wietske van der Zwaag
- Centre d’Imagerie BioMédicale, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland

18. Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. PMID: 28928931; PMCID: PMC5600004; DOI: 10.12688/f1000research.6175.1.
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Perception of contact calls occurs by the ADS detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) for searching the caller, and via a series of frontal lobe-brainstem connections, a contact call is produced in return. Because the human ADS processes also speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and strengthening of direct frontal-brainstem connections, the ADS converted auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mother. Vocal control could then enable question-answer conversations, by offspring emitting a low-level distress call for inquiring about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.

19. Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. PMID: 28928931; PMCID: PMC5600004; DOI: 10.12688/f1000research.6175.3.
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.

20. Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. PMID: 28928931; PMCID: PMC5600004; DOI: 10.12688/f1000research.6175.2.
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.

21. Salminen NH. Human cortical sensitivity to interaural level differences in low- and high-frequency sounds. J Acoust Soc Am 2015; 137:EL190-EL193. PMID: 25698049; DOI: 10.1121/1.4907736.
Abstract
Interaural level difference (ILD) is used as a cue in horizontal sound source localization. In free field, the magnitude of ILD depends on frequency: it is more prominent at high than low frequencies. Here, a magnetoencephalography experiment was conducted to test whether the sensitivity of the human auditory cortex to ILD is also frequency-dependent. Robust cortical sensitivity to ILD was found that could not be explained by monaural level effects, but this sensitivity did not differ between low- and high-frequency stimuli. This is consistent with previous psychoacoustical investigations showing that performance in ILD discrimination is not dependent on frequency.
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science and MEG Core, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland

22.
Abstract
The auditory cortex is a network of areas in the part of the brain that receives inputs from the subcortical auditory pathways in the brainstem and thalamus. Through an elaborate network of intrinsic and extrinsic connections, the auditory cortex is thought to bring about the conscious perception of sound and provide a basis for the comprehension and production of meaningful utterances. In this chapter, the organization of auditory cortex is described with an emphasis on its anatomic features and the flow of information within the network. These features are then used to introduce key neurophysiologic concepts that are being intensively studied in humans and animal models. The discussion is presented in the context of our working model of the primate auditory cortex and extensions to humans. The material is presented in the context of six underlying principles, which reflect distinct, but related, aspects of anatomic and physiologic organization: (1) the division of auditory cortex into regions; (2) the subdivision of regions into areas; (3) tonotopic organization of areas; (4) thalamocortical connections; (5) serial and parallel organization of connections; and (6) topographic relationships between auditory and auditory-related areas. Although the functional roles of the various components of this network remain poorly defined, a more complete understanding is emerging from ongoing studies that link auditory behavior to its anatomic and physiologic substrates.
Affiliation(s)
- Troy A Hackett
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine and Department of Psychology, Vanderbilt University, Nashville, TN, USA

23. Cammoun L, Thiran JP, Griffa A, Meuli R, Hagmann P, Clarke S. Intrahemispheric cortico-cortical connections of the human auditory cortex. Brain Struct Funct 2014; 220:3537-53. PMID: 25173473; DOI: 10.1007/s00429-014-0872-z.
Abstract
The human auditory cortex comprises the supratemporal plane and large parts of the temporal and parietal convexities. We have investigated the relevant intrahemispheric cortico-cortical connections using in vivo DSI tractography combined with landmark-based registration, automatic cortical parcellation and whole-brain structural connection matrices in 20 right-handed male subjects. On the supratemporal plane, the pattern of connectivity was related to the architectonically defined early-stage auditory areas. It revealed a three-tier architecture characterized by a cascade of connections from the primary auditory cortex to six adjacent non-primary areas and from there to the superior temporal gyrus. Graph theory-driven analysis confirmed the cascade-like connectivity pattern and demonstrated a strong degree of segregation and hierarchy within early-stage auditory areas. Putative higher-order areas on the temporal and parietal convexities had more widely spread local connectivity and long-range connections with the prefrontal cortex; analysis of optimal community structure revealed five distinct modules in each hemisphere. The pattern of temporo-parieto-frontal connectivity was partially asymmetrical. In conclusion, the human early-stage auditory cortical connectivity, as revealed by in vivo DSI tractography, has strong similarities with that of non-human primates. The modular architecture and hemispheric asymmetry in higher-order regions is compatible with segregated processing streams and lateralization of cognitive functions.
Affiliation(s)
- Leila Cammoun
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Lausanne, Switzerland
- Reto Meuli
- Service de Radiodiagnostic et Radiologie Interventionnelle, CHUV, Université de Lausanne, Lausanne, Switzerland
- Patric Hagmann
- Service de Radiodiagnostic et Radiologie Interventionnelle, CHUV, Université de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Lausanne, Switzerland

24. Huang S, Chang WT, Belliveau JW, Hämäläinen M, Ahveninen J. Lateralized parietotemporal oscillatory phase synchronization during auditory selective attention. Neuroimage 2013; 86:461-9. PMID: 24185023; DOI: 10.1016/j.neuroimage.2013.10.043.
Abstract
Based on the infamous left-lateralized neglect syndrome, one might hypothesize that the dominating right parietal cortex has a bilateral representation of space, whereas the left parietal cortex represents only the contralateral right hemispace. Whether this principle applies to human auditory attention is not yet fully clear. Here, we explicitly tested the differences in cross-hemispheric functional coupling between the intraparietal sulcus (IPS) and auditory cortex (AC) using combined magnetoencephalography (MEG), EEG, and functional MRI (fMRI). Inter-regional pairwise phase consistency (PPC) was analyzed from data obtained during dichotic auditory selective attention task, where subjects were in 10-s trials cued to attend to sounds presented to one ear and to ignore sounds presented in the opposite ear. Using MEG/EEG/fMRI source modeling, parietotemporal PPC patterns were (a) mapped between all AC locations vs. IPS seeds and (b) analyzed between four anatomically defined AC regions-of-interest (ROI) vs. IPS seeds. Consistent with our hypothesis, stronger cross-hemispheric PPC was observed between the right IPS and left AC for attended right-ear sounds, as compared to PPC between the left IPS and right AC for attended left-ear sounds. In the mapping analyses, these differences emerged at 7-13Hz, i.e., at the theta to alpha frequency bands, and peaked in Heschl's gyrus and lateral posterior non-primary ACs. The ROI analysis revealed similarly lateralized differences also in the beta and lower theta bands. Taken together, our results support the view that the right parietal cortex dominates auditory spatial attention.
Affiliation(s)
- Samantha Huang
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Wei-Tang Chang
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- John W Belliveau
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
- Matti Hämäläinen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
- Jyrki Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA

25. Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. PMID: 23938208; DOI: 10.1016/j.heares.2013.08.001.
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to the active and passive speech and voice processing. We aimed at revealing any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region. 
Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland

26. Altmann CF, Gaese BH. Representation of frequency-modulated sounds in the human brain. Hear Res 2013; 307:74-85. PMID: 23933098; DOI: 10.1016/j.heares.2013.07.018.
Abstract
Frequency-modulation is a ubiquitous sound feature present in communicative sounds of various animal species and humans. Functional imaging of the human auditory system has seen remarkable advances in the last two decades and studies pertaining to frequency-modulation have centered around two major questions: a) are there dedicated feature-detectors encoding frequency-modulation in the brain and b) is there concurrent representation with amplitude-modulation, another temporal sound feature? In this review, we first describe how these two questions are motivated by psychophysical studies and neurophysiology in animal models. We then review how human non-invasive neuroimaging studies have furthered our understanding of the representation of frequency-modulated sounds in the brain. Finally, we conclude with some suggestions on how human neuroimaging could be used in future studies to address currently still open questions on this fundamental sound feature. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan; Career-Path Promotion Unit for Young Life Scientists, Kyoto University, Kyoto 606-8501, Japan

27. Ahveninen J, Kopčo N, Jääskeläinen IP. Psychophysics and neuronal bases of sound localization in humans. Hear Res 2013; 307:86-97. PMID: 23886698; DOI: 10.1016/j.heares.2013.07.008.
Abstract
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory "where" pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation: Jyrki Ahveninen, Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
28. Ogawa A, Macaluso E. Audio-visual interactions for motion perception in depth modulate activity in visual area V3A. Neuroimage 2013; 71:158-67. [PMID: 23333414 DOI: 10.1016/j.neuroimage.2013.01.012]
Abstract
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) "matched vs. unmatched" conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio-visual "congruent vs. incongruent" between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio-visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio-visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio-visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices.
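The intensity-frequency manipulation described above can be sketched as a simple stimulus generator: rising level signals approach, and the frequency ramp either follows the level change (IF matched) or opposes it (IF unmatched). This is an illustrative reconstruction, not the published stimulus code; all numeric parameters are assumptions.

```python
import numpy as np

def depth_cue_tone(approaching=True, if_matched=True,
                   fs=16000, dur=1.0, f0=500.0, df=100.0, db_range=12.0):
    """Auditory motion-in-depth cue in the spirit of the design above.

    approaching: True -> intensity rises over the sound's duration.
    if_matched:  True -> the frequency ramp follows the intensity ramp,
                 False -> it runs in the opposite direction (conflict).
    Returns a mono waveform as a NumPy array.
    """
    n = int(fs * dur)
    t = np.linspace(0, 1, n, endpoint=False)
    ramp = t if approaching else 1 - t            # 0 -> 1 for approach
    amp = 10 ** (db_range * (ramp - 1) / 20)      # e.g. -12 dB ... 0 dB
    f_ramp = ramp if if_matched else 1 - ramp     # follow or oppose level
    inst_f = f0 + df * f_ramp                     # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(inst_f) / fs
    return amp * np.sin(phase)
```

Crossing the two binary factors reproduces the "matched vs. unmatched" within-audition conditions; pairing the sound with a congruent or incongruent visual size change would add the between-modalities factor.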
Affiliation: Akitoshi Ogawa, Neuroimaging Laboratory, IRCCS, Santa Lucia Foundation, Via Ardeatina 306, Rome 00179, Italy.
29.

30. Kong L, Michalka SW, Rosen ML, Sheremata SL, Swisher JD, Shinn-Cunningham BG, Somers DC. Auditory spatial attention representations in the human cerebral cortex. Cereb Cortex 2012. [PMID: 23180753 DOI: 10.1093/cercor/bhs359]
Abstract
Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes.
31. Garell PC, Bakken H, Greenlee JDW, Volkov I, Reale RA, Oya H, Kawasaki H, Howard MA, Brugge JF. Functional connection between posterior superior temporal gyrus and ventrolateral prefrontal cortex in human. Cereb Cortex 2012; 23:2309-21. [PMID: 22879355 DOI: 10.1093/cercor/bhs220]
Abstract
The connection between auditory fields of the temporal lobe and prefrontal cortex has been well characterized in nonhuman primates. Little is known of temporofrontal connectivity in humans, however, due largely to the fact that invasive experimental approaches used so successfully to trace anatomical pathways in laboratory animals cannot be used in humans. Instead, we used a functional tract-tracing method in 12 neurosurgical patients with multicontact electrode arrays chronically implanted over the left (n = 7) or right (n = 5) perisylvian temporal auditory cortex (area PLST) and the ventrolateral prefrontal cortex (VLPFC) of the inferior frontal gyrus (IFG) for diagnosis and treatment of medically intractable epilepsy. Area PLST was identified by the distribution of average auditory-evoked potentials obtained in response to simple and complex sounds. The same sounds evoked little, if any, activity in VLPFC. A single bipolar electrical pulse (0.2 ms, charge-balanced) applied between contacts within physiologically identified PLST resulted in polyphasic evoked potentials clustered in VLPFC, with greatest activation being in pars triangularis of the IFG. The average peak latency of the earliest negative deflection of the evoked potential in VLPFC was 13.48 ms (range: 9.0-18.5 ms), providing evidence for a rapidly conducting pathway between area PLST and VLPFC.
Affiliation: P C Garell, Department of Neurosurgery, New York Medical College, Valhalla, NY, USA.
32. Wack DS, Cox JL, Schirda CV, Magnano CR, Sussman JE, Henderson D, Burkard RF. Functional anatomy of the masking level difference, an fMRI study. PLoS One 2012; 7:e41263. [PMID: 22848453 PMCID: PMC3407245 DOI: 10.1371/journal.pone.0041263]
Abstract
INTRODUCTION: Masking level differences (MLDs) are differences in the hearing threshold for the detection of a signal presented in a noise background, where either the phase of the signal or of the noise is reversed between ears. We use N0/Nπ to denote noise presented in-phase/out-of-phase between ears and S0/Sπ to denote a 500 Hz sine wave signal as in-phase/out-of-phase. Signal detection level for the noise/signal combinations N0Sπ and NπS0 is typically 10-20 dB better than for N0S0. All combinations have the same spectrum, level, and duration of both the signal and the noise.

METHODS: Ten participants (5 female), aged 22-43, with N0Sπ-N0S0 MLDs greater than 10 dB, were imaged using a sparse BOLD fMRI sequence with a 9-second gap (1 second of quiet preceding stimuli). Band-pass (400-600 Hz) noise and an enveloped signal (0.25-second tone bursts, 50% duty cycle) were used to create the stimuli. Brain maps of statistically significant regions were formed from a second-level analysis using SPM5.

RESULTS: The contrast NπS0-N0Sπ showed significant activation in the right pulvinar, the corpus callosum, and the insula bilaterally. The left inferior frontal gyrus showed significant activation for the contrasts N0Sπ-N0S0 and NπS0-N0S0. The contrast N0S0-N0Sπ revealed a region in the right insula, and the contrast N0S0-NπS0 a region of significance in the left insula.

CONCLUSION: Our results extend the view that the thalamus acts as a gating mechanism enabling dichotic listening, and suggest that MLD processing is accomplished through thalamic communication with the insulae, which communicate across the corpus callosum to either enhance or diminish the binaural signal (depending on the MLD condition). The audibility improvement of the signal under both MLD conditions is likely reflected by activation in the left inferior frontal gyrus, a late stage in the what/where model of auditory processing.
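The N0/Nπ and S0/Sπ notation maps directly onto how such dichotic stimuli are built: the same noise and tone-burst signal go to both ears, with one component optionally sign-inverted in one ear. The sketch below is a minimal illustrative reconstruction of that logic (sampling rate, normalization, and burst timing are assumptions, not the study's exact stimulus code).

```python
import numpy as np

def mld_stimulus(condition="N0Spi", fs=16000, dur=1.0, f_sig=500.0):
    """Build a dichotic MLD stimulus: band-pass noise plus gated tone.

    condition: 'N0S0', 'N0Spi', or 'NpiS0' -- '0' means the component is
    in phase across ears, 'pi' means it is inverted in one ear.
    Returns a (n_samples, 2) array; columns are the left/right ears.
    """
    n = int(fs * dur)
    t = np.arange(n) / fs

    # Band-pass (400-600 Hz) Gaussian noise via FFT-domain masking.
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec[(freqs < 400) | (freqs > 600)] = 0
    noise = np.fft.irfft(spec, n)
    noise /= np.max(np.abs(noise))

    # 500 Hz tone bursts: 0.25 s on, 0.25 s off (50% duty cycle).
    gate = (np.floor(t / 0.25) % 2 == 0).astype(float)
    signal = 0.5 * np.sin(2 * np.pi * f_sig * t) * gate

    noise_sign = -1.0 if condition.startswith("Npi") else 1.0
    sig_sign = -1.0 if condition.endswith("Spi") else 1.0
    left = noise + signal
    right = noise_sign * noise + sig_sign * signal
    return np.column_stack([left, right])
```

In the diotic N0S0 condition the two ear channels are identical; inverting either component (N0Sπ or NπS0) creates the interaural disparity that makes the tone 10-20 dB easier to detect.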
Affiliation: David S Wack, Buffalo Neuroimaging Analysis Center, Dept. of Neurology, University at Buffalo, Buffalo, New York, United States of America.
33. Nodal FR, Bajo VM, King AJ. Plasticity of spatial hearing: behavioural effects of cortical inactivation. J Physiol 2012; 590:3965-86. [PMID: 22547635 PMCID: PMC3464400 DOI: 10.1113/jphysiol.2011.222828]
Abstract
The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABA(A) agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex.
Affiliation: Fernando R Nodal, Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
34. Paltoglou AE, Sumner CJ, Hall DA. Mapping feature-sensitivity and attentional modulation in human auditory cortex with functional magnetic resonance imaging. Eur J Neurosci 2011; 33:1733-41. [PMID: 21447093 PMCID: PMC3110306 DOI: 10.1111/j.1460-9568.2011.07656.x]
Abstract
Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast.
35. Steinschneider M, Nourski KV, Kawasaki H, Oya H, Brugge JF, Howard MA. Intracranial study of speech-elicited activity on the human posterolateral superior temporal gyrus. Cereb Cortex 2011; 21:2332-47. [PMID: 21368087 DOI: 10.1093/cercor/bhr014]
Abstract
To clarify speech-elicited response patterns within auditory-responsive cortex of the posterolateral superior temporal (PLST) gyrus, time-frequency analyses of event-related band power in the high gamma frequency range (75-175 Hz) were performed on the electrocorticograms recorded from high-density subdural grid electrodes in 8 patients undergoing evaluation for medically intractable epilepsy. Stimuli were 6 stop consonant-vowel (CV) syllables that varied in their consonant place of articulation (POA) and voice onset time (VOT). Initial augmentation was maximal over several centimeters of PLST, lasted about 400 ms, and was often followed by suppression and a local outward expansion of activation. Maximal gamma power overlapped either the Nα or Pβ deflections of the average evoked potential (AEP). Correlations were observed between the relative magnitudes of gamma band responses elicited by unvoiced stop CV syllables (/pa/, /ka/, /ta/) and their corresponding voiced stop CV syllables (/ba/, /ga/, /da/), as well as by the VOT of the stimuli. VOT was also represented in the temporal patterns of the AEP. These findings, obtained in the passive awake state, indicate that PLST discriminates acoustic features associated with POA and VOT and serve as a benchmark upon which task-related speech activity can be compared.
36. Samson F, Zeffiro TA, Toussaint A, Belin P. Stimulus complexity and categorical effects in human auditory cortex: an activation likelihood estimation meta-analysis. Front Psychol 2011; 1:241. [PMID: 21833294 PMCID: PMC3153845 DOI: 10.3389/fpsyg.2010.00241]
Abstract
Investigations of the functional organization of human auditory cortex typically examine responses to different sound categories. An alternative approach is to characterize sounds with respect to their amount of variation in the time and frequency domains (i.e., spectral and temporal complexity). Although the vast majority of published studies examine contrasts between discrete sound categories, an alternative complexity-based taxonomy can be evaluated through meta-analysis. In a quantitative meta-analysis of 58 auditory neuroimaging studies, we examined the evidence supporting current models of functional specialization for auditory processing using grouping criteria based on either categories or spectro-temporal complexity. Consistent with current models, analyses based on typical sound categories revealed hierarchical auditory organization and left-lateralized responses to speech sounds, with high speech sensitivity in the left anterior superior temporal cortex. Classification of contrasts based on spectro-temporal complexity, on the other hand, revealed a striking within-hemisphere dissociation in which caudo-lateral temporal regions in auditory cortex showed greater sensitivity to spectral changes, while anterior superior temporal cortical areas were more sensitive to temporal variation, consistent with recent findings in animal models. The meta-analysis thus suggests that spectro-temporal acoustic complexity represents a useful alternative taxonomy to investigate the functional organization of human auditory cortex.
Affiliation: Fabienne Samson, Centre d'Excellence en Troubles Envahissants du Développement de l'Université de Montréal, Montréal, QC, Canada.
37. van der Zwaag W, Gentile G, Gruetter R, Spierer L, Clarke S. Where sound position influences sound object representations: a 7-T fMRI study. Neuroimage 2010; 54:1803-11. [PMID: 20965262 DOI: 10.1016/j.neuroimage.2010.10.032]
Abstract
Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in processing of either auditory feature alone or of both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contra-laterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex AI is modulated by the position of the sound, while that within anterior areas is not.
38. Effects of microinjections of apomorphine and haloperidol into the inferior colliculus on the latent inhibition of the conditioned emotional response. Exp Neurol 2009; 216:16-21. [DOI: 10.1016/j.expneurol.2008.10.020]
39. Altmann CF, Henning M, Döring MK, Kaiser J. Effects of feature-selective attention on auditory pattern and location processing. Neuroimage 2008; 41:69-79. [DOI: 10.1016/j.neuroimage.2008.02.013]
40. Krumbholz K, Eickhoff SB, Fink GR. Feature- and object-based attentional modulation in the human auditory "where" pathway. J Cogn Neurosci 2008; 19:1721-33. [PMID: 18271742 DOI: 10.1162/jocn.2007.19.10.1721]
Abstract
Attending to a visual stimulus feature, such as color or motion, enhances the processing of that feature in the visual cortex. Moreover, the processing of the attended object's other, unattended, features is also enhanced. Here, we used functional magnetic resonance imaging to show that attentional modulation in the auditory system may also exhibit such feature- and object-specific effects. Specifically, we found that attending to auditory motion increases activity in nonprimary motion-sensitive areas of the auditory cortical "where" pathway. Moreover, activity in these motion-sensitive areas was also increased when attention was directed to a moving rather than a stationary sound object, even when motion was not the attended feature. An analysis of effective connectivity revealed that the motion-specific attentional modulation was brought about by an increase in connectivity between the primary auditory cortex and nonprimary motion-sensitive areas, which, in turn, may have been mediated by the paracingulate cortex in the frontal lobe. The current results indicate that auditory attention can select both objects and features. The finding of feature-based attentional modulation implies that attending to one feature of a sound object does not necessarily entail an exhaustive processing of the object's unattended features.
Affiliation: Katrin Krumbholz, MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK.
41. Kaiser J, Heidegger T, Wibral M, Altmann CF, Lutzenberger W. Distinct gamma-band components reflect the short-term memory maintenance of different sound lateralization angles. Cereb Cortex 2008; 18:2286-95. [PMID: 18252742 PMCID: PMC2536701 DOI: 10.1093/cercor/bhm251]
Abstract
Oscillatory activity in human electro- or magnetoencephalogram has been related to cortical stimulus representations and their modulation by cognitive processes. Whereas previous work has focused on gamma-band activity (GBA) during attention or maintenance of representations, there is little evidence for GBA reflecting individual stimulus representations. The present study aimed at identifying stimulus-specific GBA components during auditory spatial short-term memory. A total of 28 adults were assigned to 1 of 2 groups who were presented with only right- or left-lateralized sounds, respectively. In each group, 2 sample stimuli were used which differed in their lateralization angles (15° or 45°) with respect to the midsagittal plane. Statistical probability mapping served to identify spectral amplitude differences between 15° versus 45° stimuli. Distinct GBA components were found for each sample stimulus in different sensors over parieto-occipital cortex contralateral to the side of stimulation peaking during the middle 200–300 ms of the delay phase. The differentiation between “preferred” and “nonpreferred” stimuli during the final 100 ms of the delay phase correlated with task performance. These findings suggest that the observed GBA components reflect the activity of distinct networks tuned to spatial sound features which contribute to the maintenance of task-relevant information in short-term memory.
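The core measure in such studies is the spectral amplitude of an epoch restricted to a frequency band, compared across stimulus conditions. The sketch below shows one minimal way to compute such a band amplitude; the band edges and windowing choice are illustrative assumptions, not the cited study's analysis pipeline (which used statistical probability mapping over many sensors and epochs).

```python
import numpy as np

def band_amplitude(epoch, fs, f_lo=55.0, f_hi=90.0):
    """Mean FFT amplitude of one EEG/MEG epoch within [f_lo, f_hi] Hz.

    epoch: 1-D array of samples; fs: sampling rate in Hz.
    A Hann window reduces spectral leakage before the FFT.
    """
    windowed = epoch * np.hanning(len(epoch))
    spec = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].mean()
```

Comparing this quantity between the 15° and 45° sample stimuli, sensor by sensor and time window by time window, is the kind of contrast from which stimulus-specific gamma-band components are identified.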
Affiliation: Jochen Kaiser, Institute of Medical Psychology, Johann Wolfgang Goethe-University, 60528 Frankfurt am Main, Germany.
42. Alain C, He Y, Grady C. The Contribution of the Inferior Parietal Lobe to Auditory Spatial Working Memory. J Cogn Neurosci 2008; 20:285-95. [DOI: 10.1162/jocn.2008.20014]
Abstract
There is strong evidence for dissociable “what” and “where” pathways in the auditory system, but considerable debate remains regarding the functional role of these pathways. The sensory-motor account of spatial processing posits that the dorsal brain regions (e.g., inferior parietal lobule, IPL) mediate sensory-motor integration required during “where” responding. An alternative account suggests that the IPL plays an important role in monitoring sound location. To test these two models, we used a mixed-block and event-related functional magnetic resonance imaging (fMRI) design in which participants responded to occasional repetitions in either sound location (“where” task) or semantic category (“what” task). The fMRI data were analyzed with the general linear model using separate regressors for representing sustained and transient activity in both listening conditions. This analysis revealed more sustained activity in right dorsal brain regions, including the IPL and superior frontal sulcus, during the location than during the category task, after accounting for transient activity related to target detection and the motor response. Conversely, we found greater sustained activity in the left superior temporal gyrus and left inferior frontal gyrus during the category task compared to the location task. Transient target-related activity in both tasks was associated with enhanced signal in the left pre- and postcentral gyrus, prefrontal cortex and bilateral IPL. These results suggest dual roles for the right IPL in auditory working memory—one involved in monitoring and updating sound location independent of motor responding, and another that underlies the integration of sensory and motor functions.
Affiliations: Claude Alain, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada; University of Toronto, Ontario, Canada. Yu He, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada. Cheryl Grady, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada; University of Toronto, Ontario, Canada.
43. Lehnert G, Zimmer HD. Modality and domain specific components in auditory and visual working memory tasks. Cogn Process 2007; 9:53-61. [PMID: 17891428 DOI: 10.1007/s10339-007-0187-6]
Abstract
In the tripartite model of working memory (WM) it is postulated that a dedicated subsystem, the visuo-spatial sketchpad (VSSP), processes non-verbal content. On the basis of behavioral and neurophysiological findings, the VSSP was later subdivided into visual object and visual spatial processing, the former representing objects' appearance and the latter spatial information. This distinction is well supported. However, a challenge to this model is the question of how spatial information from non-visual sensory modalities, for example audition, is processed. Only a few studies so far have directly compared visual and auditory spatial WM. They suggest that the distinction between two processing domains, one for object and one for spatial information, also holds true for auditory WM, but that only part of the processes are modality specific. We propose that processing in the object domain (the item's appearance) is modality specific, while spatial WM, as well as object-location binding, relies on modality-general processes.
Affiliation: Günther Lehnert, Brain and Cognition Unit, Department of Psychology, Saarland University, Saarbrücken, Germany.
44. Palmer AR, Hall DA, Sumner C, Barrett DJK, Jones S, Nakamoto K, Moore DR. Some investigations into non-passive listening. Hear Res 2007; 229:148-57. [PMID: 17275232 DOI: 10.1016/j.heares.2006.12.007]
Abstract
Our knowledge of the function of the auditory nervous system is based upon a wealth of data obtained, for the most part, in anaesthetised animals. More recently, it has been generally acknowledged that factors such as attention profoundly modulate the activity of sensory systems, and that this modulation can take place at many levels of processing. Imaging studies, in particular, have revealed greater activation of auditory areas, and of areas outside the sensory processing areas, when attending to a stimulus. We present here a brief review of the consequences of such non-passive listening and go on to describe some of the experiments we are conducting to investigate them. In imaging studies using fMRI, we can demonstrate the activation of attention networks that are not specific to the sensory modality, as well as greater and different activation of the areas of the supra-temporal plane that include primary and secondary auditory areas. The profuse descending connections of the auditory system seem likely to be part of the mechanisms subserving attention to sound. These are generally thought to be largely inactivated by anaesthesia. However, we have been able to demonstrate that, even in an anaesthetised preparation, removing the descending control from the cortex leads to quite profound changes in the temporal patterns of activation by sounds in the thalamus and inferior colliculus. Some of these effects seem to be specific to the ear of stimulation and affect interaural processing. To bridge these observations, we are developing an awake, behaving preparation with freely moving animals in which it will be possible to investigate the effects of consciousness (by contrasting awake and anaesthetised states) and of passive versus active listening.
Affiliation: A R Palmer, MRC Institute of Hearing Research, University Park, Nottingham, UK.
45. Sanders LD, Poeppel D. Local and global auditory processing: behavioral and ERP evidence. Neuropsychologia 2007; 45:1172-86. [PMID: 17113115 PMCID: PMC1850243 DOI: 10.1016/j.neuropsychologia.2006.10.010]
Abstract
Differential processing of local and global visual features is well established. Global precedence effects, differences in event-related potentials (ERPs) elicited when attention is focused on local versus global levels, and hemispheric specialization for local and global features all indicate that relative scale of detail is an important distinction in visual processing. Observing analogous differential processing of local and global auditory information would suggest that scale of detail is a general organizational principle of the brain. However, to date, research on auditory local and global processing has primarily focused on music perception or on the perceptual analysis of relatively higher and lower frequencies. The study described here suggests that temporal aspects of auditory stimuli better capture the local-global distinction. By combining short (40 ms) frequency-modulated tones in series to create global auditory patterns (500 ms), we independently varied whether pitch increased or decreased over short time spans (local) and longer time spans (global). Accuracy and reaction time measures revealed better performance for global judgments, and asymmetric interference effects that were modulated by the amount of pitch change. ERPs recorded while participants listened to identical sounds and indicated the direction of pitch change at the local or global level provided evidence for differential processing similar to that found in ERP studies employing hierarchical visual stimuli. The ERP measures failed to provide evidence for lateralization of local and global auditory perception, but differences in scalp distributions suggest preferential processing in more ventral and dorsal areas, respectively.
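The hierarchical stimulus logic described above, short FM sweeps (the local level) whose base frequencies drift across segments (the global level), can be sketched as a simple generator. This is an illustrative reconstruction under assumed parameter values (segment count, frequencies, and step sizes are not the published ones).

```python
import numpy as np

def local_global_pattern(local_up=True, global_up=True,
                         fs=16000, seg_dur=0.040, n_seg=12,
                         f0=400.0, local_step=50.0, global_step=40.0):
    """Hierarchical auditory pattern: concatenated short FM segments.

    local_up:  direction of the within-segment frequency sweep (local).
    global_up: direction of the segment-to-segment base-frequency
               drift across the whole pattern (global).
    The two directions are independent, so local and global pitch
    change can agree or conflict. Segments are gated independently
    (no phase continuity across segment boundaries).
    """
    n = int(fs * seg_dur)
    t = np.arange(n) / fs
    out = []
    for k in range(n_seg):
        base = f0 + (global_step * k if global_up else -global_step * k)
        sweep = local_step if local_up else -local_step
        rate = sweep / seg_dur  # Hz per second, linear chirp
        phase = 2 * np.pi * (base * t + 0.5 * rate * t ** 2)
        out.append(np.sin(phase))
    return np.concatenate(out)
```

Crossing `local_up` and `global_up` yields the four congruent/incongruent conditions needed to measure global precedence and asymmetric interference in audition.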
Affiliation(s)
- Lisa D Sanders
- Department of Psychology, University of Massachusetts, Amherst, MA 01003, USA.
46
Cohen YE, Theunissen F, Russ BE, Gill P. Acoustic Features of Rhesus Vocalizations and Their Representation in the Ventrolateral Prefrontal Cortex. J Neurophysiol 2007; 97:1470-84. [PMID: 17135477 DOI: 10.1152/jn.00769.2006] [Citation(s) in RCA: 75] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Communication is one of the fundamental components of both human and nonhuman animal behavior. Auditory communication signals (i.e., vocalizations) are especially important in the socioecology of several species of nonhuman primates such as rhesus monkeys. In rhesus, the ventrolateral prefrontal cortex (vPFC) is thought to be part of a circuit involved in representing vocalizations and other auditory objects. To further our understanding of the role of the vPFC in processing vocalizations, we characterized the spectrotemporal features of rhesus vocalizations, compared these features with those of other classes of natural stimuli, and then related the rhesus-vocalization acoustic features to neural activity. We found that the range of these spectrotemporal features was similar to that found in other ensembles of natural stimuli, including human speech, and identified the subspace of these features that would be particularly informative for discriminating between different vocalizations. In a first neural study, however, we found that the tuning properties of vPFC neurons did not emphasize these particularly informative spectrotemporal features. In a second neural study, we found that a first-order linear model (the spectrotemporal receptive field) is not a good predictor of vPFC activity. The results of these two neural studies are consistent with the hypothesis that the vPFC is not involved in coding the first-order acoustic properties of a stimulus but is involved in processing the higher-order information needed to form representations of auditory objects.
Affiliation(s)
- Yale E Cohen
- Department of Psychological and Brain Sciences, Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755, USA.
47
Russ BE, Lee YS, Cohen YE. Neural and behavioral correlates of auditory categorization. Hear Res 2007; 229:204-12. [PMID: 17208397 DOI: 10.1016/j.heares.2006.10.010] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2006] [Accepted: 10/28/2006] [Indexed: 10/23/2022]
Abstract
Goal-directed behavior is the essence of adaptation because it allows humans and other animals to respond dynamically to different environmental scenarios. Goal-directed behavior can be characterized as the formation of dynamic links between stimuli and actions. One important attribute of goal-directed behavior is that these linkages can be formed based on how a stimulus is categorized; that is, links are formed based on the membership of a stimulus in a particular functional category. Here, we review categorization with an emphasis on auditory categorization, focusing on the role of categorization in language and in non-human vocalizations. We present behavioral data indicating that non-human primates categorize and respond to vocalizations based on differences in their putative meaning rather than differences in their acoustics. Finally, we present evidence suggesting that the ventrolateral prefrontal cortex plays an important role in processing auditory objects and has a specific role in the representation of auditory categories.
Affiliation(s)
- Brian E Russ
- Department of Psychological and Brain Sciences and Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755, USA
48
Fullerton BC, Pandya DN. Architectonic analysis of the auditory-related areas of the superior temporal region in human brain. J Comp Neurol 2007; 504:470-98. [PMID: 17701981 DOI: 10.1002/cne.21432] [Citation(s) in RCA: 82] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The architecture of the auditory areas of the superior temporal region (STR) in the human was analyzed in Nissl-stained material to determine whether auditory cortex is organized according to principles that have been described in the rhesus monkey. Based on shared architectonic features, the auditory cortex in human and monkey is organized into three lines: areas in the cortex of the circular sulcus (root), areas on the supratemporal plane (core), and areas on the superior temporal gyrus (belt). The cytoarchitecture of the auditory areas changes in a stepwise manner toward the koniocortical area, both from the direction of the temporal polar proisocortex and from the caudal temporal cortex. This architectonic dichotomy is consistent with differences in cortical and subcortical connections of the STR and may be related to different functions of the rostral and caudal temporal cortices. There are some differences between rhesus monkey and human auditory anatomy. For instance, the koniocortex, root area PaI, and belt area PaA show further differentiation into subareas in the human brain. The relative volume of the core area is larger than that of the belt area in the human, but the reverse is true in the monkey. The functional significance of these differences across species is not known but may relate to speech and language functions.
Affiliation(s)
- Barbara C Fullerton
- Eaton-Peabody Laboratory of Auditory Physiology, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts 02114, USA.
49
Krumbholz K, Hewson-Stoate N, Schönwiesner M. Cortical response to auditory motion suggests an asymmetry in the reliance on inter-hemispheric connections between the left and right auditory cortices. J Neurophysiol 2006; 97:1649-55. [PMID: 17108095 DOI: 10.1152/jn.00560.2006] [Citation(s) in RCA: 74] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The aim of the current study was to measure the brain's response to auditory motion using electroencephalography (EEG) to gain insight into the mechanisms by which hemispheric lateralization for auditory spatial processing is established in the human brain. The onset of left- or rightward motion in an otherwise continuous sound was found to elicit a large response, which appeared to arise from higher-level nonprimary auditory areas. This motion onset response was strongly lateralized to the hemisphere contralateral to the direction of motion. The response latencies suggest that the ipsilateral response to the leftward motion was produced by indirect callosal projections from the opposite hemisphere, whereas the ipsilateral response to the rightward motion seemed to receive contributions from direct thalamocortical projections. These results suggest an asymmetry in the reliance on inter-hemispheric projections between the left and right auditory cortices for auditory spatial processing.
Affiliation(s)
- Katrin Krumbholz
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK.
50
Eckert MA, Leonard CM, Possing ET, Binder JR. Uncoupled leftward asymmetries for planum morphology and functional language processing. BRAIN AND LANGUAGE 2006; 98:102-11. [PMID: 16697453 PMCID: PMC1661833 DOI: 10.1016/j.bandl.2006.04.002] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2005] [Revised: 03/23/2006] [Accepted: 04/02/2006] [Indexed: 05/09/2023]
Abstract
Explanations for left hemisphere language laterality have often focused on hemispheric structural asymmetry of the planum temporale. We examined the association between an index of language laterality and brain morphology in 99 normal adults whose degree of laterality was established using a functional MRI single-word comprehension task. The index of language laterality was derived from the difference in volume of activation between the left and right hemispheres. Planum temporale and brain volume measures were made using structural MRI scans, blind to the functional data. Although both planum temporale asymmetry (t(1,99) = 6.86, p < .001) and language laterality (t(1,99) = 15.26, p < .001) were significantly left hemisphere biased, there was not a significant association between these variables (r(99) = .01, ns). Brain volume, a control variable for the planum temporale analyses, was related to language laterality in a multiple regression (beta = -.30, t = -2.25, p < .05). Individuals with small brains were more likely to demonstrate strong left hemisphere language laterality. These results suggest that language laterality is a multidimensional construct with complex neurological origins.
Affiliation(s)
- Mark A Eckert
- Medical University of South Carolina, Department of Otolaryngology-Head and Neck Surgery, USA.