1. Duraisingam A, Palaniappan R, Soria D. Attentional bias towards high and low caloric food on repeated visual food stimuli: An ERP study. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:740-743. [PMID: 34891397] [DOI: 10.1109/embc46164.2021.9629882]
Abstract
Food variety influences appetitive behaviour, motivation to eat and energy intake. Research has found that repeated exposure to varied food images increases motivation towards food in adults and children. This study investigates the effects of repetition on the modulation of early and late components of event-related potentials (ERPs) when participants passively viewed the same food and non-food images repeatedly. Motivational attention to food and non-food images was assessed over frontal, centroparietal, parietooccipital and occipitotemporal areas of the brain. Participants showed an increased late positive potential (a late ERP component) to high-caloric images in the occipitotemporal region compared with low-caloric and non-food images. Similar effects were seen in the early ERP component in the frontal region, but with reversed polarity. The data suggest that both the early and late ERP components show greater amplitude when viewing high-caloric images than low-caloric and non-food images. Despite repeated exposure to the same image, high-caloric food continued to elicit sustained attention compared with low-caloric and non-food images.
2. Semantic and perceptual priming activate partially overlapping brain networks as revealed by direct cortical recordings in humans. Neuroimage 2019; 203:116204. [DOI: 10.1016/j.neuroimage.2019.116204]
3. Hjortkjær J, Kassuba T, Madsen KH, Skov M, Siebner HR. Task-Modulated Cortical Representations of Natural Sound Source Categories. Cereb Cortex 2018; 28:295-306. [PMID: 29069292] [DOI: 10.1093/cercor/bhx263]
Abstract
In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI to measure cortical response patterns while human listeners categorized real-world sounds created by objects of different solid materials (glass, metal, wood) manipulated by different sound-producing actions (striking, rattling, dropping). In different sessions, subjects had to identify either material or action categories in the same sound stimuli. The sound-producing action and the material of the sound source could be decoded from multivoxel activity patterns in auditory cortex, including Heschl's gyrus and planum temporale. Importantly, decoding success depended on task relevance and category discriminability. Action categories were more accurately decoded in auditory cortex when subjects identified action information. Conversely, the material of the same sound sources was decoded with higher accuracy in the inferior frontal cortex during material identification. Representational similarity analyses indicated that both early and higher-order auditory cortex selectively enhanced spectrotemporal features relevant to the target category. Together, the results indicate a cortical selection mechanism that favors task-relevant information in the processing of nonvocal sound categories.
Affiliation(s)
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Tanja Kassuba
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Cognitive Systems, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Martin Skov
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Decision Neuroscience Research Group, Copenhagen Business School, 2000 Frederiksberg, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg, Copenhagen, 2400 København NV, Denmark
4. Wu H, Tang H, Ge Y, Yang S, Mai X, Luo YJ, Liu C. Object words modulate the activity of the mirror neuron system during action imitation. Brain Behav 2017; 7:e00840. [PMID: 29201543] [PMCID: PMC5698860] [DOI: 10.1002/brb3.840]
Abstract
BACKGROUND Although research has demonstrated that the mirror neuron system (MNS) plays a crucial role in both action imitation and action-related semantic processing, whether action-related words can inversely modulate MNS activity remains unclear. METHODS Here, three types of task-irrelevant words (body parts, verbs, and manufactured objects) were presented to examine their modulation of MNS activity during action observation and imitation. Twenty-two participants were recruited for fMRI scanning, and data from the remaining 19 subjects are reported here. RESULTS Brain activity results showed that word types elicited different modulation effects over nodes of the MNS (i.e., the right inferior frontal gyrus, premotor cortex, inferior parietal lobule, and STS), especially during the imitation stage. Compared with the other word conditions, action imitation following manufactured-object words induced stronger activation in these brain regions during the imitation stage. These results were consistent in both task-dependent and task-independent ROI analyses. CONCLUSION Our findings thus provide evidence for a unique effect of object words on the MNS during action imitation, which may also confirm the key role of goal inference in imitation.
Affiliation(s)
- Haiyan Wu
- CAS Key Laboratory of Behavioral Science, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Honghong Tang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; School of Economics and Business Administration, Beijing Normal University, Beijing, China
- Yue Ge
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Beijing Institution of Biomedicine, Beijing, China
- Suyong Yang
- Key Laboratory of Exercise and Health Sciences of Ministry of Education, Shanghai University of Sport, Shanghai, China
- Xiaoqin Mai
- Department of Psychology, Renmin University of China, Beijing, China
- Yue-Jia Luo
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, Guangdong, China
- Chao Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
5. Matusz PJ, Wallace MT, Murray MM. A multisensory perspective on object memory. Neuropsychologia 2017; 105:243-252. [PMID: 28400327] [PMCID: PMC5632572] [DOI: 10.1016/j.neuropsychologia.2017.04.008]
Abstract
Traditional studies of memory and object recognition involved objects presented within a single sensory modality (i.e., purely visual or purely auditory objects). However, in naturalistic settings, objects are often evaluated and processed in a multisensory manner. This raises the question of how object representations that combine information from the different senses are created and utilised by memory functions. Here we review research that has demonstrated that a single multisensory exposure can influence memory for both visual and auditory objects. In an old/new object discrimination task, objects that were presented initially with a task-irrelevant stimulus in another sense were better remembered compared to stimuli presented alone, most notably when the two stimuli were semantically congruent. The brain discriminates between these two types of object representations within the first 100 ms post-stimulus onset, indicating early "tagging" of objects/events by the brain based on the nature of their initial presentation context. Interestingly, the specific brain networks supporting the improved object recognition vary based on a variety of factors, including the effectiveness of the initial multisensory presentation and the sense that is task-relevant. We specify the requisite conditions for multisensory contexts to improve object discrimination following single exposures, and the individual differences that exist with respect to these improvements. Our results shed light on how memory operates on the multisensory nature of object representations as well as how the brain stores and retrieves memories of objects.
Affiliation(s)
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology & Neurorehabilitation Service & Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology & Neurorehabilitation Service & Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Lausanne, Switzerland
6. Matusz PJ, Thelen A, Amrein S, Geiser E, Anken J, Murray MM. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories. Eur J Neurosci 2015; 41:699-708. [PMID: 25728186] [DOI: 10.1111/ejn.12804]
Abstract
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
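The discrimination ability indexed by d' in the abstract is computed from hit and false-alarm rates in the old/new continuous recognition task. A small sketch of the standard computation, with illustrative counts that are not taken from the study:

```python
# d' = z(hit rate) - z(false-alarm rate). A log-linear correction keeps the
# z-transform finite when a rate would otherwise be exactly 0 or 1.
# The trial counts below are illustrative, not the study's data.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(45, 5, 10, 40))   # larger d' = better old/new discrimination
```

Comparing d' across encoding contexts (semantically congruent image, meaningless image, sound alone) is what supports the behavioural claim in the abstract.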
Affiliation(s)
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland; Attention, Behaviour, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK; University of Social Sciences and Humanities, Faculty in Wroclaw, Wroclaw, Poland
7. Snyder JS, Schwiedrzik CM, Vitela AD, Melloni L. How previous experience shapes perception in different sensory modalities. Front Hum Neurosci 2015; 9:594. [PMID: 26582982] [PMCID: PMC4628108] [DOI: 10.3389/fnhum.2015.00594]
Abstract
What has transpired immediately before has a strong influence on how sensory stimuli are processed and perceived. In particular, temporal context effects (TCEs) can be contrastive, repelling perception away from the interpretation of the context stimulus, or attractive, whereby perception repeats upon successive presentations of the same stimulus. For decades, scientists have documented contrastive and attractive temporal context effects, mostly with simple visual stimuli. But both types of effects also occur in other modalities, e.g., audition and touch, and for stimuli of varying complexity, raising the possibility that context effects reflect general computational principles of sensory systems. Neuroimaging shows that contrastive and attractive context effects arise from neural processes in different areas of the cerebral cortex, suggesting two separate operations with distinct functional roles. Bayesian models can provide a functional account of both context effects, whereby prior experience adjusts sensory systems to optimize perception of future stimuli.
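The Bayesian account mentioned above can be illustrated with conjugate Gaussians: the percept (posterior) is a precision-weighted average of a prior built from recent stimuli and the current sensory evidence, so perception is pulled toward the preceding stimulus, an attractive context effect. The numbers below are illustrative, not from any study:

```python
# Hedged illustration of the Bayesian account of attractive context effects:
# with a Gaussian prior (recent stimulus history) and a Gaussian likelihood
# (current input), the posterior mean is a precision-weighted average,
# biased toward the prior. Values here are purely illustrative.
def posterior_mean(prior_mu, prior_var, obs, obs_var):
    w_prior = 1.0 / prior_var   # precision of the prior
    w_obs = 1.0 / obs_var       # precision of the current observation
    return (w_prior * prior_mu + w_obs * obs) / (w_prior + w_obs)

# Prior centred on the preceding stimulus (0.0); current stimulus measured at 1.0.
# With equal precisions the percept lands halfway between the two.
percept = posterior_mean(0.0, 1.0, 1.0, 1.0)
print(percept)   # 0.5: perception is biased toward the previous stimulus
```

Making the prior less reliable (larger `prior_var`) moves the percept toward the current observation, which is how such models capture the varying strength of attractive effects.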
Affiliation(s)
- Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV, USA
- A. Davi Vitela
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV, USA
- Lucia Melloni
- Department of Neurophysiology, Max Planck Institute for Brain Research, Frankfurt, Germany
- Department of Neurological Surgery, Columbia University, New York, NY, USA
- Comprehensive Epilepsy Center, Department of Neurology, NYU Langone Medical Center, NYU School of Medicine, New York University, New York, NY, USA
8. Roaring lions and chirruping lemurs: How the brain encodes sound objects in space. Neuropsychologia 2015; 75:304-13. [DOI: 10.1016/j.neuropsychologia.2015.06.012]
9. Da Costa S, Bourquin NMP, Knebel JF, Saenz M, van der Zwaag W, Clarke S. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI. PLoS One 2015; 10:e0124072. [PMID: 25938430] [PMCID: PMC4418571] [DOI: 10.1371/journal.pone.0124072]
Abstract
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds.
Affiliation(s)
- Sandra Da Costa
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Nathalie M.-P. Bourquin
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Jean-François Knebel
- National Center of Competence in Research, SYNAPSY - The Synaptic Bases of Mental Diseases, Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Wietske van der Zwaag
- Centre d’Imagerie BioMédicale, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
10. Clarke S, Bindschaedler C, Crottaz-Herbette S. Impact of Cognitive Neuroscience on Stroke Rehabilitation. Stroke 2015; 46:1408-13. [DOI: 10.1161/strokeaha.115.007435]
Affiliation(s)
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
- Claire Bindschaedler
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
11. Thelen A, Talsma D, Murray MM. Single-trial multisensory memories affect later auditory and visual object discrimination. Cognition 2015; 138:148-60. [DOI: 10.1016/j.cognition.2015.02.003]
12. Giordano BL, Pernet C, Charest I, Belizaire G, Zatorre RJ, Belin P. Automatic domain-general processing of sound source identity in the left posterior middle frontal gyrus. Cortex 2014; 58:170-85. [PMID: 25038309] [DOI: 10.1016/j.cortex.2014.06.005]
Abstract
Identifying sound sources is fundamental to developing a stable representation of the environment in the face of variable auditory information. The cortical processes underlying this ability have received little attention. In two fMRI experiments, we investigated passive adaptation to (Exp. 1) and explicit discrimination of (Exp. 2) source identities for different categories of auditory objects (voices, musical instruments, environmental sounds). All cortical effects of source identity were independent of high-level category information, and were accounted for by sound-to-sound differences in low-level structure (e.g., loudness). A conjunction analysis revealed that the left posterior middle frontal gyrus (pMFG) adapted to identity repetitions during both passive listening and active discrimination tasks. These results indicate that the comparison of sound source identities in a stream of auditory stimulation recruits the pMFG in a domain-general way, i.e., independent of the sound category, based on information contained in the low-level acoustical structure. pMFG recruitment during both passive listening and explicit identity comparison tasks also suggests its automatic engagement in sound source identity processing.
Affiliation(s)
- Bruno L Giordano
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
- Cyril Pernet
- Brain Research Imaging Center, Neuroimaging Sciences, University of Edinburgh, Western General Hospital, Edinburgh, Scotland, UK
- Ian Charest
- Medical Research Council - Cognition and Brain Sciences Unit, Cambridge, UK
- Guylaine Belizaire
- International Laboratory for Brain, Music and Sound (BRAMS), Université de Montréal, Montréal, QC, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Université de Montréal, Montréal, Québec, Canada
- Robert J Zatorre
- Montréal Neurological Institute, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound (BRAMS), Université de Montréal, Montréal, QC, Canada
- Pascal Belin
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK; Institut des Neurosciences de la Timone, UMR7289, CNRS-Université Aix Marseille, Marseille, France; International Laboratory for Brain, Music and Sound (BRAMS), Université de Montréal, Montréal, QC, Canada
13. Cossy N, Tzovara A, Simonin A, Rossetti AO, De Lucia M. Robust discrimination between EEG responses to categories of environmental sounds in early coma. Front Psychol 2014; 5:155. [PMID: 24611061] [PMCID: PMC3933775] [DOI: 10.3389/fpsyg.2014.00155]
Abstract
Humans can recognize categories of environmental sounds, including vocalizations produced by humans and animals and the sounds of man-made objects. Most neuroimaging investigations of environmental sound discrimination have studied subjects while consciously perceiving and often explicitly recognizing the stimuli. Consequently, it remains unclear to what extent auditory object processing occurs independently of task demands and consciousness. Studies in animal models have shown that environmental sound discrimination at a neural level persists even in anesthetized preparations, whereas data from anesthetized humans have thus far provided null results. Here, we studied comatose patients as a model of environmental sound discrimination capacities during unconsciousness. We included 19 comatose patients treated with therapeutic hypothermia (TH) during the first 2 days of coma, while recording nineteen-channel electroencephalography (EEG). At the level of each individual patient, we applied a decoding algorithm to quantify the differential EEG responses to human vs. animal vocalizations as well as to sounds from living vs. man-made sources. Discrimination between vocalization types was accurate in 11 patients and discrimination between sounds from living and man-made sources in 10 patients. At the group level, the results were significant only for the comparison between vocalization types. These results lay the groundwork for disentangling truly preferential activations in response to auditory categories, and the contribution of awareness to auditory category discrimination.
Affiliation(s)
- Natacha Cossy
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM), University Hospital Center, University of Lausanne, Lausanne, Switzerland; Department of Radiology, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Athina Tzovara
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM), University Hospital Center, University of Lausanne, Lausanne, Switzerland; Department of Radiology, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Alexandre Simonin
- Department of Clinical Neurosciences, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Andrea O Rossetti
- Department of Clinical Neurosciences, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Marzia De Lucia
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM), University Hospital Center, University of Lausanne, Lausanne, Switzerland; Department of Radiology, University Hospital Center, University of Lausanne, Lausanne, Switzerland
14. Bourquin NMP, Murray MM, Clarke S. Location-independent and location-linked representations of sound objects. Neuroimage 2013; 73:40-9. [DOI: 10.1016/j.neuroimage.2013.01.026]
15. Andics A, Gál V, Vicsi K, Rudas G, Vidnyánszky Z. FMRI repetition suppression for voices is modulated by stimulus expectations. Neuroimage 2012; 69:277-83. [PMID: 23268783] [DOI: 10.1016/j.neuroimage.2012.12.033]
Abstract
According to predictive coding models of sensory processing, stimulus expectations have a profound effect on sensory cortical responses. This was supported by experimental results, showing that fMRI repetition suppression (fMRI RS) for face stimuli is strongly modulated by the probability of stimulus repetitions throughout the visual cortical processing hierarchy. To test whether processing of voices is also affected by stimulus expectations, here we investigated the effect of repetition probability on fMRI RS in voice-selective cortical areas. Changing ('alt') and identical ('rep') voice stimulus pairs were presented to the listeners in blocks, with a varying probability of alt and rep trials across blocks. We found auditory fMRI RS in the nonprimary voice-selective cortical regions, including the bilateral posterior STS, the right anterior STG and the right IFC, as well as in the IPL. Importantly, fMRI RS effects in all of these areas were strongly modulated by the probability of stimulus repetition: auditory fMRI RS was reduced or not present in blocks with low repetition probability. Our results revealed that auditory fMRI RS in higher-level voice-selective cortical regions is modulated by repetition probabilities and thus suggest that in audition, similarly to the visual modality, processing of sensory information is shaped by stimulus expectation processes.
Affiliation(s)
- Attila Andics
- MR Research Center, Szentágothai János Knowledge Center - Semmelweis University, Budapest, Balassa u. 6., 1083, Hungary
16. Bröckelmann AK, Steinberg C, Dobel C, Elling L, Zwanzger P, Pantev C, Junghöfer M. Affect-specific modulation of the N1m to shock-conditioned tones: magnetoencephalographic correlates. Eur J Neurosci 2012; 37:303-15. [PMID: 23167712] [DOI: 10.1111/ejn.12043]
Abstract
Despite its fundamental relevance for representing the emotional world surrounding us, human affective neuroscience research has widely neglected the auditory system, at least in comparison to the visual domain. Here, we have investigated the spatiotemporal dynamics of human affective auditory processing using time-sensitive whole-head magnetoencephalography. A novel and highly challenging affective associative learning procedure, 'MultiCS conditioning', involving multiple conditioned stimuli (CS) per affective category, was adopted to test whether previous findings from intramodal conditioning of multiple click-tones with an equal number of auditory emotional scenes (Bröckelmann et al., 2011 J. Neurosci., 31, 7801) would generalise to crossmodal conditioning of multiple click-tones with an electric shock as single aversive somatosensory unconditioned stimulus (UCS). Event-related magnetic fields were recorded in response to 40 click-tones before and after four contingent pairings of 20 CS with a shock, the other half remaining unpaired. In line with previous findings from intramodal MultiCS conditioning, we found an affect-specific modulation of the auditory N1m component 100-150 ms post-stimulus within a distributed frontal-temporal-parietal neural network. Increased activation for shock-associated tones was lateralised to right-hemispheric regions, whereas unpaired safety-signalling tones were preferentially processed in the left hemisphere. Participants did not show explicit awareness of the contingent CS-UCS relationship, yet behavioural conditioning effects were indicated on an indirect measure of stimulus valence. Our findings imply converging evidence for a rapid and highly differentiating affect-specific modulation of the auditory N1m after intramodal as well as crossmodal MultiCS conditioning, and a correspondence of the modulating impact of emotional attention on early affective processing in vision and audition.
Affiliation(s)
- Ann-Kathrin Bröckelmann
- Institute for Biomagnetism and Biosignalanalysis, University Hospital Münster, Malmedyweg 15, D-48149 Münster, Germany
|
17
|
Electrophysiological correlates of object-repetition effects: sLORETA imaging with 64-channel EEG and individual MRI. BMC Neurosci 2012; 13:124. [PMID: 23075055 PMCID: PMC3502408 DOI: 10.1186/1471-2202-13-124] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2012] [Accepted: 10/15/2012] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND We investigated the electrophysiological correlates of object-repetition effects using an object categorization task, standardized low-resolution electromagnetic tomography (sLORETA), and individual magnetic resonance imaging. Sixteen healthy adults participated, and a total of 396 line drawings of living and non-living objects were used as stimuli. Of these stimuli, 274 were presented only once, and 122 were repeated after one to five intervening pictures. Participants were asked to categorize the objects as living or non-living things by pressing one of two buttons. RESULTS The old/new effect (i.e., a faster response time and more positive potentials in response to repeated stimuli than to stimuli initially presented) was observed at 350-550 ms post-stimulus. The distributions of cortical sources for the old and new stimuli were very similar at 250-650 ms after stimulus onset. Activation in the right middle occipital gyrus/cuneus, right fusiform gyrus, left superior temporal gyrus, and right inferior frontal gyrus was significantly reduced in response to old compared with new stimuli at 250-350, 350-450, 450-550, and 550-650 ms after stimulus onset, respectively. Response-time priming correlated with electrophysiological priming over the left parietal area and with repetition suppression in the left superior temporal gyrus at 450-550 ms. CONCLUSIONS These results suggest that processing of repeated objects is facilitated by sharpened perceptual representations and by more efficient detection of, or attentional control over, repeated objects.
|
18
|
Repetition-Induced Plasticity of Motor Representations of Action Sounds. Brain Topogr 2012; 26:152-6. [DOI: 10.1007/s10548-012-0260-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2012] [Accepted: 09/18/2012] [Indexed: 11/26/2022]
|
19
|
Bourquin NMP, Spierer L, Murray MM, Clarke S. Neural plasticity associated with recently versus often heard objects. Neuroimage 2012; 62:1800-6. [DOI: 10.1016/j.neuroimage.2012.04.055] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2012] [Revised: 04/18/2012] [Accepted: 04/29/2012] [Indexed: 10/28/2022] Open
|
20
|
A corticostriatal neural system enhances auditory perception through temporal context processing. J Neurosci 2012; 32:6177-82. [PMID: 22553024 DOI: 10.1523/jneurosci.5153-11.2012] [Citation(s) in RCA: 78] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The temporal context of an acoustic signal can greatly influence its perception. The present study investigated the neural correlates underlying perceptual facilitation by regular temporal contexts in humans. Participants listened to temporally regular (periodic) or temporally irregular (nonperiodic) sequences of tones while performing an intensity discrimination task. Participants performed significantly better on intensity discrimination during periodic than nonperiodic tone sequences. There was greater activation in the putamen for periodic than nonperiodic sequences. Conversely, there was greater activation in bilateral primary and secondary auditory cortices (planum polare and planum temporale) for nonperiodic than periodic sequences. Across individuals, greater putamen activation correlated with lesser auditory cortical activation in both right and left hemispheres. These findings suggest that temporal regularity is detected in the putamen, and that such detection facilitates temporal-lobe cortical processing associated with superior auditory perception. Thus, this study reveals a corticostriatal system associated with contextual facilitation for auditory perception through temporal regularity processing.
|
21
|
Auditory perceptual decision-making based on semantic categorization of environmental sounds. Neuroimage 2012; 60:1704-15. [PMID: 22330317 DOI: 10.1016/j.neuroimage.2012.01.131] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2011] [Revised: 01/24/2012] [Accepted: 01/25/2012] [Indexed: 11/21/2022] Open
Abstract
Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250 ms duration sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether and when the brain discriminates between sound categories even when this discrimination is not evident behaviorally. We applied a single-trial classifier that identified the voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174 ms post-stimulus period. Sounds that were correctly categorized as that of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level. Subjects exhibited differential activity before the sound ended (starting at 112 ms) and during a separate period at ~270 ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpreted the first as being related to an implicit tuning for sound representations and the second as being linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds during early stages and independently of behavioral proficiency, and that explicit sound categorization requires a subsequent processing stage.
|
22
|
The role of energetic value in dynamic brain response adaptation during repeated food image viewing. Appetite 2012; 58:11-8. [DOI: 10.1016/j.appet.2011.09.016] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2011] [Revised: 09/15/2011] [Accepted: 09/24/2011] [Indexed: 11/19/2022]
|
23
|
Laufer I, Negishi M, Lacadie CM, Papademetris X, Constable RT. Dissociation between the activity of the right middle frontal gyrus and the middle temporal gyrus in processing semantic priming. PLoS One 2011; 6:e22368. [PMID: 21829619 PMCID: PMC3150328 DOI: 10.1371/journal.pone.0022368] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2011] [Accepted: 06/24/2011] [Indexed: 11/19/2022] Open
Abstract
The aim of this event-related functional magnetic resonance imaging (fMRI) study was to test whether the right middle frontal gyrus (MFG) and middle temporal gyrus (MTG) would show differential sensitivity to the effect of prime-target association strength on repetition priming. In the experimental condition (RP), the target occurred after repetitive presentation of the prime within an oddball design. In the control condition (CTR), the target followed a single presentation of the prime, with the same probability of the target as in RP. To manipulate semantic overlap between the prime and the target, both conditions (RP and CTR) employed either the onomatopoeia "oink" as the prime and the referent "pig" as the target (OP) or vice versa (PO), since semantic overlap was previously shown to be greater in OP. The results showed that the left MTG was sensitive to release of adaptation, while both the right MTG and MFG were sensitive to sequence regularity extraction and its verification. However, dissociated activity between OP and PO was revealed in RP only in the right MFG. Specifically, target "pig" (OP) and the physically equivalent target in CTR elicited comparable deactivations, whereas target "oink" (PO) elicited a less inhibited response in RP than in CTR. This interaction in the right MFG was explained by integrating these effects into a model of competition between perceptual and conceptual effects in priming processing.
Affiliation(s)
- Ilan Laufer
- Department of Diagnostic Radiology, Yale University, New Haven, Connecticut, United States of America.
|
24
|
Grady CL, Charlton R, He Y, Alain C. Age differences in FMRI adaptation for sound identity and location. Front Hum Neurosci 2011; 5:24. [PMID: 21441992 PMCID: PMC3061355 DOI: 10.3389/fnhum.2011.00024] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2010] [Accepted: 03/01/2011] [Indexed: 11/25/2022] Open
Abstract
We explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds. In one condition, both sound identity and location were repeated allowing us to assess non-specific adaptation. In other conditions, only one feature was repeated (identity or location) to assess domain-specific adaptation. Both young and older adults showed comparable non-specific adaptation (identity and location) in bilateral temporal lobes, medial parietal cortex, and subcortical regions. However, older adults showed reduced domain-specific adaptation to location repetitions in a distributed set of regions, including frontal and parietal areas, and to identity repetition in anterior temporal cortex. We also re-analyzed data from a previously published 1-back fMRI study, in which participants responded to infrequent repetition of the identity or location of meaningful sounds. This analysis revealed age differences in domain-specific adaptation in a set of brain regions that overlapped substantially with those identified in the adaptation experiment. This converging evidence of reductions in the degree of auditory fMRI adaptation in older adults suggests that the processing of specific auditory “what” and “where” information is altered with age, which may influence cognitive functions that depend on this processing.
|
25
|
van der Zwaag W, Gentile G, Gruetter R, Spierer L, Clarke S. Where sound position influences sound object representations: a 7-T fMRI study. Neuroimage 2010; 54:1803-11. [PMID: 20965262 DOI: 10.1016/j.neuroimage.2010.10.032] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2010] [Revised: 09/28/2010] [Accepted: 10/11/2010] [Indexed: 11/25/2022] Open
Abstract
Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in processing of either auditory feature alone or of both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contra-laterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex AI is modulated by the position of the sound, while that within anterior areas is not.
|
26
|
Abstract
The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses were significantly stronger, but topographically indistinguishable to human versus animal vocalizations starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination transpires at times synchronous with that of face discrimination but is not functionally specialized.
|