1
Ioannucci S, Vetter P. Semantic audio-visual congruence modulates visual sensitivity to biological motion across awareness levels. Cognition 2025; 262:106181. [PMID: 40378502] [DOI: 10.1016/j.cognition.2025.106181] [Received: 02/06/2025] [Revised: 05/06/2025] [Accepted: 05/07/2025] [Indexed: 05/19/2025]
Abstract
Whether cross-modal interaction requires conscious awareness of multisensory information or whether it can occur in the absence of awareness is still an open question. Here, we investigated whether sounds can enhance detection sensitivity of semantically matching visual stimuli at varying levels of visual awareness. We presented biological motion stimuli of human actions (walking, rowing, sawing) during dynamic continuous flash suppression (CFS) to 80 participants and measured the effect of co-occurring, semantically matching or non-matching action sounds on visual sensitivity (d'). By individually thresholding stimulus contrast, we distinguished participants who detected motion either above or at chance level. Participants who reliably detected visual motion above chance showed higher sensitivity to upright versus inverted biological motion across all experimental conditions. In contrast, participants detecting visual motion at chance level, i.e. during successful suppression, demonstrated this upright advantage exclusively during trials with semantically congruent sounds. Across the whole sample, the impact of sounds on visual sensitivity increased as participants' visual detection performance decreased, revealing a systematic trade-off between auditory and visual processing. Our findings suggest that semantic congruence between auditory and visual information can selectively modulate biological motion perception when visual awareness is minimal or absent, whereas more robust visual signals enable perception of biological motion independent of auditory input. Thus, semantically congruent sounds may impact visual representations as a function of the level of visual awareness.
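The sensitivity measure used in this abstract, d', comes from signal detection theory: the difference between the z-transformed hit and false-alarm rates. A minimal sketch with made-up rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d': z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer: 90% hits, 10% false alarms
print(round(d_prime(0.9, 0.1), 2))  # → 2.56
print(d_prime(0.5, 0.5))            # chance performance → 0.0
```

In practice, hit or false-alarm rates of exactly 0 or 1 are adjusted (e.g. with a log-linear correction) before the z-transform, since the inverse CDF is undefined at those extremes.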
Affiliation(s)
- Stefano Ioannucci
- Visual and Cognitive Neuroscience Lab, Dept. of Psychology, University of Fribourg, Switzerland.
- Petra Vetter
- Visual and Cognitive Neuroscience Lab, Dept. of Psychology, University of Fribourg, Switzerland
2
Alwashmi K, Rowe F, Meyer G. Multimodal MRI analysis of microstructural and functional connectivity brain changes following systematic audio-visual training in a virtual environment. Neuroimage 2025; 305:120983. [PMID: 39732221] [DOI: 10.1016/j.neuroimage.2024.120983] [Received: 03/08/2024] [Revised: 12/06/2024] [Accepted: 12/18/2024] [Indexed: 12/30/2024]
Abstract
Recent work has shown rapid microstructural brain changes in response to learning new tasks. These cognitive tasks tend to draw on multiple brain regions connected by white matter (WM) tracts. Therefore, behavioural performance change is likely to be the result of microstructural, functional activation, and connectivity changes in extended neural networks. Here we show for the first time that learning-induced microstructural change in WM tracts, quantified with diffusion tensor and kurtosis imaging (DTI, DKI), is linked to functional connectivity changes in brain areas that use these tracts to communicate. Twenty healthy participants engaged in a month of virtual reality (VR) systematic audiovisual (AV) training. DTI analysis using repeated-measures ANOVA unveiled a decrease in mean diffusivity (MD) in the SLF II, alongside a significant increase in fractional anisotropy (FA) in optic radiations post-training, persisting in the follow-up (FU) assessment (post: MD t(76) = 6.13, p < 0.001, FA t(76) = 3.68, p < 0.01; FU: MD t(76) = 4.51, p < 0.001, FA t(76) = 2.989, p < 0.05). The MD reduction across participants was significantly correlated with the observed behavioural performance gains. A functional connectivity (FC) analysis showed significantly enhanced functional activity correlation between primary visual and auditory cortices post-training, consistent with the DKI microstructural changes found within these two regions as well as in the sagittal stratum, which includes WM tracts connecting the occipital and temporal lobes (mean kurtosis (MK): cuneus t(19) = 2.3, p < 0.05; transverse temporal t(19) = 2.6, p < 0.05; radial kurtosis (RK): sagittal stratum t(19) = 2.3, p < 0.05). DTI and DKI show complementary data, both of which are consistent with the task-relevant brain networks. The results demonstrate the utility of multimodal imaging analysis to provide complementary evidence for brain changes at the level of networks.
In summary, our study shows the complex relationship between microstructural adaptations and functional connectivity, unveiling the potential of multisensory integration within immersive VR training. These findings have implications for learning and rehabilitation strategies, facilitating more effective interventions within virtual environments.
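The MD and FA metrics reported above are standard scalar summaries of the fitted diffusion tensor, with closed-form definitions in terms of its three eigenvalues. A sketch with invented, purely illustrative eigenvalues:

```python
import math

def md_fa(l1: float, l2: float, l3: float) -> tuple:
    """Mean diffusivity (MD) and fractional anisotropy (FA)
    from the three eigenvalues of a diffusion tensor."""
    md = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = math.sqrt(1.5) * num / den
    return md, fa

# Hypothetical strongly anisotropic (white-matter-like) voxel
md, fa = md_fa(1.7, 0.3, 0.2)
print(round(md, 2), round(fa, 2))  # → 0.73 0.84

# Perfectly isotropic diffusion gives FA = 0
print(md_fa(1.0, 1.0, 1.0)[1])     # → 0.0
```

FA is bounded between 0 (isotropic) and 1 (diffusion along a single axis), which is why a training-related MD decrease with an FA increase, as reported above, is conventionally read as increased microstructural organization, though the biological interpretation is indirect.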
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
- Fiona Rowe
- IDEAS, University of Liverpool, United Kingdom.
- Georg Meyer
- Institute of Population Health, University of Liverpool, United Kingdom; Hanse Wissenschaftskolleg, Delmenhorst, Germany.
3
Gao D, Liang X, Ting Q, Nichols ES, Bai Z, Xu C, Cai M, Liu L. A meta-analysis of letter-sound integration: Assimilation and accommodation in the superior temporal gyrus. Hum Brain Mapp 2024; 45:e26713. [PMID: 39447213] [PMCID: PMC11501095] [DOI: 10.1002/hbm.26713] [Received: 10/03/2023] [Revised: 04/15/2024] [Accepted: 05/02/2024] [Indexed: 10/26/2024]
Abstract
Letter-sound integration is a relatively new cultural phenomenon, yet the ability is readily acquired even though it has not had time to evolve in the brain. Leading theories of how the brain accommodates literacy acquisition include the neural recycling hypothesis and the assimilation-accommodation hypothesis. The neural recycling hypothesis proposes that a new cultural skill is developed by "invading" preexisting neural structures that support a similar cognitive function, while the assimilation-accommodation hypothesis holds that a new cognitive skill relies on direct invocation of preexisting systems (assimilation) and adds brain areas based on task requirements (accommodation). Both theories agree that letter-sound integration may be achieved by reusing pre-existing functionally similar neural bases, but differ in their proposals of how this occurs. We examined the evidence for each hypothesis by systematically comparing the similarities and differences between letter-sound integration and two other types of preexisting and functionally similar audiovisual (AV) processes, namely object-sound and speech-sound integration, by performing an activation likelihood estimation (ALE) meta-analysis. All three types of AV integration recruited the left posterior superior temporal gyrus (STG), while speech-sound integration additionally activated the bilateral middle STG and letter-sound integration directly invoked the AV areas involved in speech-sound integration. These findings suggest that letter-sound integration may reuse the STG for speech-sound and object-sound integration through an assimilation-accommodation mechanism.
Affiliation(s)
- Danqi Gao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xitong Liang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qi Ting
- Department of Brain Cognition and Intelligent Medicine, Beijing University of Posts and Telecommunications, Beijing, China
- Zilin Bai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chaoying Xu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Mingnan Cai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Li Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
4
Shyamchand Singh S, Mukherjee A, Raghunathan P, Ray D, Banerjee A. High segregation and diminished global integration in large-scale brain functional networks enhances the perceptual binding of cross-modal stimuli. Cereb Cortex 2024; 34:bhae323. [PMID: 39110411] [DOI: 10.1093/cercor/bhae323] [Received: 03/27/2024] [Revised: 07/16/2024] [Accepted: 07/25/2024] [Indexed: 01/29/2025]
Abstract
Speech perception requires the binding of spatiotemporally disjoint auditory-visual cues. The corresponding brain network-level information processing can be characterized by two complementary mechanisms: functional segregation, which refers to the localization of processing in either isolated or distributed modules across the brain, and integration, which pertains to cooperation among relevant functional modules. Here, we demonstrate using functional magnetic resonance imaging recordings that subjective perceptual experiences of multisensory speech stimuli, real and illusory, are represented in differential states of segregation-integration. We controlled the inter-subject variability of illusory/cross-modal perception parametrically, by introducing temporal lags in the incongruent auditory-visual articulations of speech sounds within the McGurk paradigm. The states of segregation-integration balance were captured using two alternative computational approaches. First, the module responsible for cross-modal binding of sensory signals, defined as the perceptual binding network (PBN), was identified using standardized parametric statistical approaches, and its temporal correlations with all other brain areas were computed. With increasing illusory perception, the majority of the nodes of the PBN showed decreased cooperation with the rest of the brain, reflecting states of high segregation but reduced global integration. Second, the altered patterns of segregation-integration were cross-validated using graph-theoretic measures.
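The segregation-integration balance described above is commonly operationalised with graph measures such as the clustering coefficient (segregation) and global efficiency (integration). A stdlib-only sketch on toy adjacency lists, not the authors' pipeline:

```python
from collections import deque

def global_efficiency(adj):
    """Integration: mean inverse shortest-path length over all node pairs."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

def mean_clustering(adj):
    """Segregation proxy: average local clustering coefficient."""
    coeffs = []
    for u in range(len(adj)):
        nbrs = adj[u]
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count edges among u's neighbours
        links = sum(1 for i, v in enumerate(nbrs) for w in nbrs[i + 1:] if w in adj[v])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Toy networks (adjacency lists): a triangle vs. a chain
triangle = [[1, 2], [0, 2], [0, 1]]
chain = [[1], [0, 2], [1]]
print(global_efficiency(triangle), mean_clustering(triangle))       # → 1.0 1.0
print(round(global_efficiency(chain), 3), mean_clustering(chain))   # → 0.833 0.0
```

A "high segregation, reduced integration" state, in these terms, would show relatively preserved clustering alongside reduced global efficiency.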
Affiliation(s)
- Soibam Shyamchand Singh
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
- Department of Psychology, Ashoka University, Sonepat 131029, Haryana, India
- Abhishek Mukherjee
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
- Partha Raghunathan
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
- Dipanjan Ray
- Department of Psychology, Ashoka University, Sonepat 131029, Haryana, India
- Arpan Banerjee
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
5
Lemercier CE, Krieger P, Manahan-Vaughan D. Dynamic modulation of mouse thalamocortical visual activity by salient sounds. iScience 2024; 27:109364. [PMID: 38523779] [PMCID: PMC10959669] [DOI: 10.1016/j.isci.2024.109364] [Received: 06/09/2023] [Revised: 12/11/2023] [Accepted: 02/26/2024] [Indexed: 03/26/2024]
Abstract
Visual responses of the primary visual cortex (V1) are altered by sound. Sound-driven behavioral arousal suggests that, in addition to direct inputs from the primary auditory cortex (A1), multiple other sources may shape V1 responses to sound. Here, we show in anesthetized mice that sound (white noise, ≥70 dB) drives a biphasic modulation of V1 visually driven gamma-band activity, comprising fast-transient inhibitory and slow, prolonged excitatory (A1-independent) arousal-driven components. An analogous yet quicker modulation of the visual response also occurred earlier in the visual pathway, at the level of the dorsolateral geniculate nucleus (dLGN), where sound transiently inhibited the early phasic visual response and subsequently induced a prolonged increase in tonic spiking activity and gamma rhythmicity. Our results demonstrate that sound-driven modulations of visual activity are not exclusive to V1 and suggest that thalamocortical inputs from the dLGN to V1 contribute to shaping the V1 visual response to sound.
Affiliation(s)
- Clément E. Lemercier
- Department of Neurophysiology, Medical Faculty, Ruhr-University Bochum, 44801 Bochum, Germany
- Patrik Krieger
- Department of Neurophysiology, Medical Faculty, Ruhr-University Bochum, 44801 Bochum, Germany
- Denise Manahan-Vaughan
- Department of Neurophysiology, Medical Faculty, Ruhr-University Bochum, 44801 Bochum, Germany
6
Qian Q, Cai S, Zhang X, Huang J, Chen Y, Wang A, Zhang M. Seeing is believing: Larger Colavita effect in school-aged children with attention-deficit/hyperactivity disorder. J Exp Child Psychol 2024; 238:105798. [PMID: 37844345] [DOI: 10.1016/j.jecp.2023.105798] [Received: 05/04/2023] [Revised: 08/07/2023] [Accepted: 09/25/2023] [Indexed: 10/18/2023]
Abstract
Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder that leads to visually relevant compensatory activities and cognitive strategies in children. Previous studies have identified difficulties with audiovisual integration in children with ADHD, but the characteristics of the visual dominance effect when processing multisensory stimuli are not clear in children with ADHD. The current study used the Colavita paradigm to explore the visual dominance effect in school-aged children with ADHD. Compared with typically developing children, children with ADHD had a higher proportion of "visual-auditory" trials and a lower proportion of "simultaneous" trials. The study also found that the proportion of visual-auditory trials in children with ADHD decreased as their Swanson, Nolan, and Pelham-IV rating scale (SNAP-IV) inattention scores increased. These results show that school-aged children with ADHD had a larger Colavita effect, which decreased with the severity of inattentive symptoms. This may be due to an overreliance on visual information and an abnormal integration time window. The connection between multisensory cognitive processing performance and clinical symptoms found in the current study provides empirical and theoretical support for the knowledge base of multisensory and cognitive abilities in disorders.
Affiliation(s)
- Qinyue Qian
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China
- Shizhong Cai
- Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou, Jiangsu 215003, China
- Xianghui Zhang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China
- Jie Huang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China
- Yan Chen
- Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou, Jiangsu 215003, China.
- Aijun Wang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China.
- Ming Zhang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China; Department of Psychology, Suzhou University of Science and Technology, Suzhou, Jiangsu 215011, China; Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-8530, Japan.
7
Alwashmi K, Meyer G, Rowe F, Ward R. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. Neuroimage 2024; 285:120483. [PMID: 38048921] [DOI: 10.1016/j.neuroimage.2023.120483] [Received: 06/19/2023] [Revised: 11/18/2023] [Accepted: 12/01/2023] [Indexed: 12/06/2023]
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual environments (VR). Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent a 30 min daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time reduction in VR, significantly improves. In separate tests in a controlled laboratory environment, we showed that the behavioural performance gains in the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating a faster response time supported by the presence of audio cues. The behavioural learning effect also transfers to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and cerebellum. These functional changes were only observed for the trained, multisensory, task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated to behavioural performance improvements. 
This study demonstrates that incorporating spatial auditory cues to voluntary visual training in VR leads to augmented brain activation changes in multisensory integration, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
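Brain-behaviour relationships like the thalamus finding above are typically assessed as a simple correlation between each participant's activation change and performance gain. A minimal sketch with invented per-participant numbers (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented values: BOLD change (a.u.) vs. reaction-time improvement (ms)
bold_change = [0.1, 0.4, 0.2, 0.5, 0.3]
rt_gain = [12, 41, 25, 48, 30]
print(round(pearson_r(bold_change, rt_gain), 2))  # → 0.99
```

With real data, the significance of r would of course be tested against the sample size rather than read off its magnitude.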
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
- Georg Meyer
- Digital Innovation Facility, University of Liverpool, United Kingdom
- Fiona Rowe
- Institute of Population Health, University of Liverpool, United Kingdom
- Ryan Ward
- Digital Innovation Facility, University of Liverpool, United Kingdom; School of Computer Science and Mathematics, Liverpool John Moores University, United Kingdom
8
Shan L, Yuan L, Zhang B, Ma J, Xu X, Gu F, Jiang Y, Dai J. Neural Integration of Audiovisual Sensory Inputs in Macaque Amygdala and Adjacent Regions. Neurosci Bull 2023; 39:1749-1761. [PMID: 36920645] [PMCID: PMC10661144] [DOI: 10.1007/s12264-023-01043-8] [Received: 01/16/2023] [Accepted: 02/13/2023] [Indexed: 03/16/2023]
Abstract
Integrating multisensory inputs to generate accurate perception and guide behavior is among the most critical functions of the brain. Subcortical regions such as the amygdala are involved in sensory processing including vision and audition, yet their roles in multisensory integration remain unclear. In this study, we systematically investigated the function of neurons in the amygdala and adjacent regions in integrating audiovisual sensory inputs using a semi-chronic multi-electrode array and multiple combinations of audiovisual stimuli. From a sample of 332 neurons, we showed the diverse response patterns to audiovisual stimuli and the neural characteristics of bimodal over unimodal modulation, which could be classified into four types with differentiated regional origins. Using the hierarchical clustering method, neurons were further clustered into five groups and associated with different integrating functions and sub-regions. Finally, regions distinguishing congruent and incongruent bimodal sensory inputs were identified. Overall, visual processing dominates audiovisual integration in the amygdala and adjacent regions. Our findings shed new light on the neural mechanisms of multisensory integration in the primate brain.
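Hierarchical (agglomerative) clustering of the kind used above to group neurons by response profile can be sketched in a few lines. The toy firing-rate vectors and single-linkage merging below are illustrative assumptions; the paper's exact features and linkage rule are not specified here:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(points, n_clusters):
    """Agglomerative clustering: repeatedly merge the two closest clusters
    (closest = smallest distance between any pair of their members)."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # j > i, so pop(j) is safe
    return clusters

# Toy mean firing rates to (visual, auditory, audiovisual) stimuli
profiles = [(10, 1, 12), (9, 2, 11),   # "visual-dominant" neurons
            (1, 8, 9), (2, 9, 10),     # "auditory-dominant" neurons
            (6, 6, 15)]                # "bimodal-enhanced" neuron
print(sorted(single_linkage(profiles, 3)))  # → [[0, 1], [2, 3], [4]]
```

Cutting the merge sequence at different levels yields the nested groupings from which functional neuron classes are then interpreted.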
Affiliation(s)
- Liang Shan
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Liu Yuan
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Bo Zhang
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Brain Science, Zunyi Medical University, Zunyi, 563000, China
- Jian Ma
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xiao Xu
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Fei Gu
- University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Yi Jiang
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China.
- Chinese Institute for Brain Research, Beijing, 102206, China.
- Ji Dai
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China.
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- Shenzhen Technological Research Center for Primate Translational Medicine, Shenzhen, 518055, China.
9
Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. [PMID: 37545309] [PMCID: PMC10404930] [DOI: 10.1098/rstb.2022.0338] [Received: 02/03/2023] [Accepted: 04/30/2023] [Indexed: 08/08/2023]
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies on sensory processing, the sensory cortices have been considered to process sensory information in a modality-specific manner. The sensory cortices, however, send the information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where the multiple modality inputs converge and integrate to generate a meaningful percept. This integration process is neither simple nor fixed because these brain areas interact with each other via complicated circuits, which can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of the multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
10
Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. [PMID: 36084305] [DOI: 10.1515/revneuro-2022-0065] [Received: 06/01/2022] [Accepted: 07/22/2022] [Indexed: 02/07/2023]
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. Therefore, we conducted an activation likelihood estimation (ALE) meta-analysis with multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles. Here, the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing brain regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. Therefore, by including multiple sensory modalities in our meta-analysis, the results may provide evidence for a common brain network that supports different functional roles for multisensory integration.
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
11
Long-term Tai Chi training reduces the fusion illusion in older adults. Exp Brain Res 2023; 241:517-526. [PMID: 36611123] [DOI: 10.1007/s00221-023-06544-6] [Received: 07/15/2022] [Accepted: 01/01/2023] [Indexed: 01/09/2023]
Abstract
Sound-induced flash illusion (SiFI) is an auditory-dominated audiovisual integration phenomenon that can be used as a reliable indicator of audiovisual integration. Although previous studies have found that Tai Chi exercise promotes cognitive processing, such as executive functions, the effect of Tai Chi exercise on early perceptual processing has yet to be investigated. This study used the classic SiFI paradigm to investigate the effects of long-term Tai Chi exercise on multisensory integration in older adults. We compared older adults with long-term Tai Chi exercise experience to those with long-term walking exercise. The results showed that the accuracy of the Tai Chi group was higher than that of the control group under the fusion illusion condition, mainly due to increased perceptual sensitivity to flashes. However, there was no significant difference between the two groups in the fission illusion. These results indicate that the fission and fusion illusions were affected differently by Tai Chi exercise, a difference attributable to how strongly each illusion is associated with participants' flash discriminability. The present study provides preliminary evidence that long-term Tai Chi exercise improves older adults' multisensory integration, which occurs in early perceptual processing.
12
Vastano R, Costantini M, Alexander WH, Widerstrom-Noga E. Multisensory integration in humans with spinal cord injury. Sci Rep 2022; 12:22156. [PMID: 36550184 PMCID: PMC9780239 DOI: 10.1038/s41598-022-26678-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
Although multisensory integration (MSI) has been extensively studied, the underlying mechanisms remain a topic of ongoing debate. Here we investigate these mechanisms by comparing MSI in healthy controls to a clinical population with spinal cord injury (SCI). Deafferentation following SCI induces sensorimotor impairment, which may alter the ability to synthesize cross-modal information. We applied mathematical and computational modeling to reaction time data recorded in response to temporally congruent cross-modal stimuli. We found that MSI in both SCI and healthy controls is best explained by cross-modal perceptual competition, highlighting a common competition mechanism. Relative to controls, MSI impairments in SCI participants were better explained by reduced stimulus salience leading to increased cross-modal competition. By combining traditional analyses with model-based approaches, we examine how MSI is realized during normal function, and how it is compromised in a clinical population. Our findings support future investigations identifying and rehabilitating MSI deficits in clinical disorders.
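The abstract does not specify the authors' exact model, but a standard first analysis for cross-modal reaction-time data of this kind is Miller's race model inequality, which asks whether audiovisual responses are faster than any parallel race between the unisensory processes could produce. A minimal sketch with hypothetical reaction times (the data and time grid are illustrative, not from the study):

```python
def ecdf(sample, t):
    """Empirical cumulative distribution: P(RT <= t)."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_model_violation(rt_av, rt_a, rt_v, times):
    """Return the time points where the audiovisual CDF exceeds the race-model
    bound F_A(t) + F_V(t), i.e. where integration outperforms a parallel race."""
    return [t for t in times
            if ecdf(rt_av, t) > min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))]

# Hypothetical reaction times (ms)
rt_a = [310, 330, 350, 370, 390]
rt_v = [320, 340, 360, 380, 400]
rt_av = [250, 260, 270, 280, 290]   # fast multisensory responses

violations = race_model_violation(rt_av, rt_a, rt_v, range(240, 320, 10))
```

Violations of the bound at early time points are conventionally taken as evidence for genuine integration rather than statistical facilitation.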
Affiliation(s)
- Roberta Vastano
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy
- William H. Alexander
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, USA; Department of Psychology, Florida Atlantic University, Boca Raton, USA; The Brain Institute, Florida Atlantic University, Boca Raton, USA
- Eva Widerstrom-Noga
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
13
Ross LA, Molholm S, Butler JS, Bene VAD, Foxe JJ. Neural correlates of multisensory enhancement in audiovisual narrative speech perception: a fMRI investigation. Neuroimage 2022; 263:119598. [PMID: 36049699 DOI: 10.1016/j.neuroimage.2022.119598] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 08/26/2022] [Accepted: 08/28/2022] [Indexed: 11/25/2022] Open
Abstract
This fMRI study investigated the effect of seeing a speaker's articulatory movements while listening to a naturalistic narrative stimulus. Its goal was to identify regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information, such as the posterior superior temporal gyrus, as well as parts of the broader language network, including the semantic system. To this end we presented 53 participants with a continuous narration of a story in auditory-alone, visual-alone, and synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and parts of the semantic network, as well as extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. The analysis also revealed involvement of thalamic regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement involves not only sites of multisensory integration but also many regions of the wider semantic network, including regions associated with extralinguistic sensory, perceptual and cognitive processing.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; School of Mathematical Sciences, Technological University Dublin, Kevin Street Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; University of Alabama at Birmingham, Heersink School of Medicine, Department of Neurology, Birmingham, Alabama, 35233, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
14
Are auditory cues special? Evidence from cross-modal distractor-induced blindness. Atten Percept Psychophys 2022; 85:889-904. [PMID: 35902451 PMCID: PMC10066119 DOI: 10.3758/s13414-022-02540-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/08/2022] [Indexed: 11/08/2022]
Abstract
A target that shares features with preceding distractor stimuli is less likely to be detected due to a distractor-driven activation of a negative attentional set. This transient impairment in perceiving the target (distractor-induced blindness/deafness) can be found within vision and audition. Recently, the phenomenon was observed in a cross-modal setting involving an auditory target and additional task-relevant visual information (cross-modal distractor-induced deafness). In the current study, consisting of three behavioral experiments, a visual target, indicated by an auditory cue, had to be detected despite the presence of visual distractors. Multiple distractors consistently led to reduced target detection if cue and target appeared in close temporal proximity, confirming cross-modal distractor-induced blindness. However, the effect on target detection was reduced compared to the effect of cross-modal distractor-induced deafness previously observed for reversed modalities. The physical features defining cue and target could not account for the diminished distractor effect in the current cross-modal task. Instead, this finding may be attributed to the auditory cue acting as an especially efficient release signal of the distractor-induced inhibition. Additionally, a multisensory enhancement of visual target detection by the concurrent auditory signal might have contributed to the reduced distractor effect.
15
Michail G, Senkowski D, Holtkamp M, Wächter B, Keil J. Early beta oscillations in multisensory association areas underlie crossmodal performance enhancement. Neuroimage 2022; 257:119307. [PMID: 35577024 DOI: 10.1016/j.neuroimage.2022.119307] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 04/29/2022] [Accepted: 05/10/2022] [Indexed: 11/28/2022] Open
Abstract
The combination of signals from different sensory modalities can enhance perception and facilitate behavioral responses. While previous research has described crossmodal influences in a wide range of tasks, it remains unclear how such influences drive performance enhancements. In particular, the neural mechanisms underlying performance-relevant crossmodal influences, as well as the latency and spatial profile of such influences, are not well understood. Here, we examined data from high-density electroencephalography recordings (N = 30) to characterize the oscillatory signatures of crossmodal facilitation of response speed, as manifested in the speeding of visual responses by concurrent task-irrelevant auditory information. Using a data-driven analysis approach, we found that individual gains in response speed correlated with a larger beta power difference (13-25 Hz) between the audiovisual and visual conditions, starting within 80 ms after stimulus onset in the secondary visual cortex and in multisensory association areas of the parietal cortex. In addition, we examined data from electrocorticography (ECoG) recordings in four patients with epilepsy in a comparable paradigm. These ECoG data revealed reduced beta power in audiovisual compared with visual trials in the superior temporal gyrus (STG). Collectively, our data suggest that crossmodal facilitation of response speed is associated with reduced early beta power in multisensory association and secondary visual areas. The reduced early beta power may reflect an auditory-driven feedback signal that improves visual processing through attentional gating. These findings improve our understanding of the neural mechanisms underlying crossmodal response speed facilitation and highlight the critical role of beta oscillations in mediating behaviorally relevant multisensory processing.
Affiliation(s)
- Georgios Michail
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany.
- Daniel Senkowski
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Martin Holtkamp
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany; Department of Neurology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charité Campus Mitte (CCM), Charitéplatz 1, Berlin 10117, Germany
- Bettina Wächter
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany
- Julian Keil
- Biological Psychology, Christian-Albrechts-University Kiel, Kiel 24118, Germany
16
Investigating the relationship between background luminance and self-reported valence of auditory stimuli. Acta Psychol (Amst) 2022; 224:103532. [PMID: 35151005 DOI: 10.1016/j.actpsy.2022.103532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 01/28/2022] [Accepted: 02/07/2022] [Indexed: 11/20/2022] Open
Abstract
The present study investigated the effect of background luminance on self-reported valence ratings of auditory stimuli, as suggested by some earlier work. A secondary aim was to better characterise the effect of auditory valence on pupillary responses, on which the literature is inconsistent. Participants were randomly presented with sounds of different valence categories (negative, neutral, and positive) obtained from the IADS-E database. At the same time, the background luminance of the computer screen (in blue hue) was manipulated across three levels (low, medium, and high), with pupillometry confirming the expected strong effect of luminance on pupil size. Participants were asked to rate the valence of the presented sound under these different luminance levels. On a behavioural level, we found evidence for an effect of background luminance on self-reported valence ratings, with generally more positive ratings as background luminance increased. Turning to valence effects on pupil size, irrespective of background luminance, we observed that pupils were smallest in the positive valence condition and largest in the negative valence condition, with neutral valence in between. In sum, the present findings provide evidence for a relationship between luminance perception (and hence pupil size) and the self-reported valence of auditory stimuli, indicating a possible cross-modal interaction of auditory valence processing with completely task-irrelevant visual background luminance. We furthermore discuss potential future applications of the current findings in the clinical field.
17
The Role of the Interaction between the Inferior Parietal Lobule and Superior Temporal Gyrus in the Multisensory Go/No-go Task. Neuroimage 2022; 254:119140. [PMID: 35342002 DOI: 10.1016/j.neuroimage.2022.119140] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 03/19/2022] [Accepted: 03/22/2022] [Indexed: 11/23/2022] Open
Abstract
Information from multiple sensory modalities interacts. Using functional magnetic resonance imaging (fMRI), we aimed to identify the neural structures underlying how a co-occurring sound modulates visual motor response execution. The reaction time (RT) to audiovisual stimuli was significantly faster than the RT to visual stimuli. Signal detection analyses showed no significant difference in perceptual sensitivity (d') between audiovisual and visual stimuli, while the response criterion (β or c) for audiovisual stimuli was lower than that for visual stimuli. The functional connectivity between the left inferior parietal lobule (IPL) and bilateral superior temporal gyrus (STG) was enhanced in Go processing compared with No-go processing of audiovisual stimuli. Furthermore, the left precentral gyrus (PreCG) showed enhanced functional connectivity with the bilateral STG and other areas of the ventral stream in Go processing compared with No-go processing of audiovisual stimuli. These results reveal the neuronal network underlying modulations of motor response execution when visual stimuli are accompanied by a co-occurring sound in a multisensory Go/No-go task, including the left IPL, left PreCG, bilateral STG and parts of the ventral stream. The role of the interaction between the IPL and STG in transforming audiovisual information into motor behavior is discussed. The current study provides a new perspective for exploring the brain mechanisms by which humans execute appropriate behaviors on the basis of multisensory information.
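The signal-detection quantities mentioned here can be computed from hit and false-alarm counts with the standard formulas d' = z(H) − z(F) and c = −(z(H) + z(F))/2 (β is the equivalent likelihood-ratio criterion). A minimal sketch with hypothetical Go/No-go counts, not data from the study:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and c (response criterion) from raw counts."""
    z = NormalDist().inv_cdf
    # Log-linear correction avoids infinite z-scores at rates of exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical counts: audiovisual vs. visual targets
dp_av, c_av = sdt_measures(hits=90, misses=10, false_alarms=12, correct_rejections=88)
dp_v, c_v = sdt_measures(hits=88, misses=12, false_alarms=5, correct_rejections=95)
```

With these illustrative numbers the criterion is lower (more liberal) for audiovisual than visual targets, the pattern the abstract reports.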
18
Cross-Modal Interaction and Integration Through Stimulus-Specific Adaptation in the Thalamic Reticular Nucleus of Rats. Neurosci Bull 2022; 38:785-795. [PMID: 35212974 DOI: 10.1007/s12264-022-00827-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2021] [Accepted: 11/11/2021] [Indexed: 10/19/2022] Open
Abstract
Stimulus-specific adaptation (SSA), defined as a decrease in responses to a common stimulus that only partially generalizes to other rare stimuli, is a widespread phenomenon in the brain that is believed to be related to novelty detection. Although cross-modal sensory processing is also a widespread phenomenon, the interaction between the two phenomena is not well understood. In this study, the thalamic reticular nucleus (TRN), which is regarded as a hub of the attentional system that contains multi-modal neurons, was investigated. The results showed that SSA existed in an interactive oddball stimulation, which mimics stimulation changes from one modality to another. In the bimodal integration, SSA to bimodal stimulation was stronger than to visual stimulation alone but similar to auditory stimulation alone, which indicated a limited integrative effect. Collectively, the present results provide evidence for independent cross-modal processing in bimodal TRN neurons.
19
Ball F, Nentwich A, Noesselt T. Cross-modal perceptual enhancement of unisensory targets is uni-directional and does not affect temporal expectations. Vision Res 2021; 190:107962. [PMID: 34757275 DOI: 10.1016/j.visres.2021.107962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 10/05/2021] [Accepted: 10/15/2021] [Indexed: 10/20/2022]
Abstract
Temporal structures in the environment can shape temporal expectations (TE), and previous studies demonstrated that TEs interact with multisensory interplay (MSI) when multisensory stimuli are presented synchronously. Here, we tested whether other types of MSI - evoked by asynchronous yet temporally flanking irrelevant stimuli - result in similar performance patterns. To this end, we presented sequences of 12 stimuli (10 Hz) which consisted of auditory (A), visual (V) or alternating auditory-visual stimuli (e.g. A-V-A-V-…) with either auditory or visual targets (Exp. 1). Participants discriminated target frequencies (auditory pitch or visual spatial frequency) embedded in these sequences. To test effects of TE, the proportion of early and late temporal target positions was manipulated run-wise. Performance for unisensory targets was affected by temporally flanking distractors, with auditory temporal flankers selectively improving visual target perception (Exp. 1). However, no effect of temporal expectation was observed. Control experiments (Exp. 2-3) tested whether this lack of a TE effect was due to the higher presentation frequency in Exp. 1 relative to previous experiments. Importantly, even at higher stimulation frequencies, redundant multisensory targets (Exp. 2-3) reliably modulated TEs. Together, our results indicate that visual target detection was enhanced by MSI. However, this cross-modal enhancement - in contrast to the redundant target effect - was still insufficient to generate TEs. We posit that unisensory target representations were either unstable or insufficient for the generation of TEs while less demanding MSI still occurred, highlighting the need for robust stimulus representations when generating temporal expectations.
Affiliation(s)
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany.
- Annika Nentwich
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany
- Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany
20
Canbeyli R. Sensory Stimulation Via the Visual, Auditory, Olfactory and Gustatory Systems Can Modulate Mood and Depression. Eur J Neurosci 2021; 55:244-263. [PMID: 34708453 DOI: 10.1111/ejn.15507] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 10/20/2021] [Indexed: 11/28/2022]
Abstract
Depression is one of the most common mental disorders, predicted to be the leading cause of disease burden by the next decade. There is a great deal of emphasis on the central origin and potential therapeutics of depression, whereby the symptomatology of depression has been interpreted and treated as brain-generated dysfunction filtering down to the periphery. This top-down approach has found strong support from clinical work and basic neuroscientific research. Nevertheless, despite great advances in our knowledge of the etiology and therapeutics of depression, success in treatment is still by no means assured. As a consequence, a wide net has been cast by both clinicians and researchers in search of more efficient therapies for mood disorders. As a complementary view, the present integrative review advocates approaching mood and depression from the opposite perspective: a bottom-up view that starts from the periphery. Specifically, evidence is provided to show that sensory stimulation via the visual, auditory, olfactory and gustatory systems can modulate depression. The review shows how, depending on several parameters, unisensory stimulation via these modalities can ameliorate or aggravate depressive symptoms. Moreover, the review emphasizes the bidirectional relationship between sensory stimulation and depression. Just as peripheral stimulation can modulate depression, depression in turn affects, and in most cases impairs, sensory reception. Furthermore, the review suggests that combined use of multisensory stimulation may have synergistic ameliorative effects on depressive symptoms over and above what has so far been documented for unisensory stimulation.
Affiliation(s)
- Resit Canbeyli
- Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University
21
Li F, Wang R, Song C, Zhao M, Ren H, Wang S, Liang K, Li D, Ma X, Zhu B, Wang H, Hao Y. A Skin-Inspired Artificial Mechanoreceptor for Tactile Enhancement and Integration. ACS NANO 2021; 15:16422-16431. [PMID: 34597014 DOI: 10.1021/acsnano.1c05836] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Mechanoreceptors endow humans with the sense of touch by translating external stimuli into coded spikes, inspiring the rise of artificial mechanoreceptor systems. However, integrating slow-adapting receptor-like pressure sensors with artificial neurons remains a challenge. Here we demonstrate an artificial mechanoreceptor that rationally integrates a polypyrrole-based resistive pressure sensor with a volatile NbOx memristor, mimicking tactile sensation and perception in natural skin, respectively. The artificial mechanoreceptor enables tactile sensory coding by converting external mechanical stimuli into strength-modulated electrical spikes. Tactile sensation enhancement is also achieved by processing the spike frequency characteristics with a pulse-coupled neural network. Furthermore, the artificial mechanoreceptor can integrate signals from parallel sensor channels and encode them into unified electrical spikes, resembling the coding of intensity in tactile neural processing. These results provide simple and efficient strategies for constructing future bio-inspired electronic systems.
Affiliation(s)
- Fanfan Li
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Advanced Materials and Nanotechnology, Xidian University, Xi'an 710071, China
- Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
- Rui Wang
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Advanced Materials and Nanotechnology, Xidian University, Xi'an 710071, China
- Chunyan Song
- Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
- Momo Zhao
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Advanced Materials and Nanotechnology, Xidian University, Xi'an 710071, China
- Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
- Huihui Ren
- Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
- Zhejiang University, Hangzhou 310027, China
- Saisai Wang
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Advanced Materials and Nanotechnology, Xidian University, Xi'an 710071, China
- Kun Liang
- Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
- Zhejiang University, Hangzhou 310027, China
- Dingwei Li
- Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
- Zhejiang University, Hangzhou 310027, China
- Xiaohua Ma
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Microelectronics, Xidian University, Xi'an 710071, China
- Bowen Zhu
- Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
- Hong Wang
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Advanced Materials and Nanotechnology, Xidian University, Xi'an 710071, China
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Microelectronics, Xidian University, Xi'an 710071, China
- Yue Hao
- Key Laboratory of Wide Band Gap Semiconductor Technology, School of Microelectronics, Xidian University, Xi'an 710071, China
22
Peng X, Tang X, Jiang H, Wang A, Zhang M, Chang R. Inhibition of Return Decreases Early Audiovisual Integration: An Event-Related Potential Study. Front Hum Neurosci 2021; 15:712958. [PMID: 34690717 PMCID: PMC8526535 DOI: 10.3389/fnhum.2021.712958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 09/10/2021] [Indexed: 11/25/2022] Open
Abstract
Previous behavioral studies have found that inhibition of return decreases audiovisual integration, but the underlying neural mechanisms are unknown. The current work utilized the high temporal resolution of event-related potentials (ERPs) to investigate how audiovisual integration is modulated by inhibition of return. We employed the cue-target paradigm and manipulated target type and cue validity. Participants were required to detect visual (V), auditory (A), or audiovisual (AV) targets presented on the same side as (valid cue) or the opposite side from (invalid cue) the preceding exogenous cue. The neural activities evoked by AV targets were compared with the sum of those evoked by A and V targets, and their difference was taken as the audiovisual integration effect in each cue validity condition (valid, invalid). The ERP results showed a significant super-additive audiovisual integration effect on the P70 (60∼90 ms, frontal-central) only under the invalid cue condition. Significant audiovisual integration effects were observed on the N1 and P2 components (N1, 120∼180 ms, frontal-central-parietal; P2, 200∼260 ms, frontal-central-parietal) in both the valid and invalid cue conditions, and there were no significant differences between the two conditions on later components. The results offer the first neural demonstration that inhibition of return modulates the early audiovisual integration process.
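The additive-criterion test described here compares the ERP to AV targets against the sum of the unisensory ERPs within a time window. A minimal sketch with hypothetical single-channel amplitudes (µV) on a 10 ms grid; the waveforms and the 60-90 ms window are illustrative only:

```python
def mean_amplitude(waveform, times, t0, t1):
    """Mean amplitude of a waveform within the window [t0, t1] (ms)."""
    vals = [v for t, v in zip(times, waveform) if t0 <= t <= t1]
    return sum(vals) / len(vals)

# Hypothetical ERP amplitudes sampled every 10 ms from 0-150 ms
times = list(range(0, 160, 10))
erp_a  = [0.0, 0.1, 0.2, 0.5, 0.8, 1.0, 0.9, 0.7, 0.5, 0.3, 0.2, 0.1, 0.1, 0.0, 0.0, 0.0]
erp_v  = [0.0, 0.0, 0.1, 0.4, 0.7, 0.9, 1.1, 0.8, 0.6, 0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0]
erp_av = [0.0, 0.2, 0.5, 1.2, 2.0, 2.4, 2.3, 1.8, 1.2, 0.8, 0.4, 0.2, 0.1, 0.0, 0.0, 0.0]

# Super-additive integration: the AV response exceeds the A + V sum in the window
sum_uni = [a + v for a, v in zip(erp_a, erp_v)]
av_mean = mean_amplitude(erp_av, times, 60, 90)
sum_mean = mean_amplitude(sum_uni, times, 60, 90)
superadditive = av_mean > sum_mean
```

In practice this comparison is run per condition (valid vs. invalid cue) and tested statistically across participants; the sketch only shows the AV > (A + V) criterion itself.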
Affiliation(s)
- Xing Peng
- Institute of Aviation Human Factors and Ergonomics, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Hao Jiang
- Institute of Aviation Human Factors and Ergonomics, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Aijun Wang
- Department of Psychology, Soochow University, Suzhou, China
- Ming Zhang
- Department of Psychology, Soochow University, Suzhou, China
- Ruosong Chang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
23
Vastano R, Costantini M, Widerstrom-Noga E. Maladaptive reorganization following SCI: The role of body representation and multisensory integration. Prog Neurobiol 2021; 208:102179. [PMID: 34600947 DOI: 10.1016/j.pneurobio.2021.102179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 09/08/2021] [Accepted: 09/24/2021] [Indexed: 10/20/2022]
Abstract
In this review we focus on maladaptive brain reorganization after spinal cord injury (SCI), including the development of neuropathic pain, and its relationship with impairments in body representation and multisensory integration. We will discuss the implications of altered sensorimotor interactions after SCI with and without neuropathic pain and possible deficits in multisensory integration and body representation. Within this framework we will examine published research findings focused on the use of bodily illusions to manipulate multisensory body representation to induce analgesic effects in heterogeneous chronic pain populations and in SCI-related neuropathic pain. We propose that the development and intensification of neuropathic pain after SCI is partly dependent on brain reorganization associated with dysfunctional multisensory integration processes and distorted body representation. We conclude this review by suggesting future research avenues that may lead to a better understanding of the complex mechanisms underlying the sense of the body after SCI, with a focus on cortical changes.
Affiliation(s)
- Roberta Vastano
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy.
- Eva Widerstrom-Noga
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
Collapse
|
24
Schulze M, Aslan B, Stöcker T, Stirnberg R, Lux S, Philipsen A. Disentangling early versus late audiovisual integration in adult ADHD: a combined behavioural and resting-state connectivity study. J Psychiatry Neurosci 2021; 46:E528-E537. [PMID: 34548387 PMCID: PMC8526154 DOI: 10.1503/jpn.210017] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 01/20/2021] [Revised: 05/27/2021] [Accepted: 06/21/2021] [Indexed: 01/26/2023] Open
Abstract
BACKGROUND Studies investigating sensory processing in attention-deficit/hyperactivity disorder (ADHD) have shown altered visual and auditory processing. However, evidence is lacking for audiovisual interplay - namely, multisensory integration. In addition, neuronal dysregulation at rest (e.g., aberrant within- or between-network functional connectivity) may account for difficulties with integration across the senses in ADHD. We investigated whether sensory processing was altered at the multimodal level in adult ADHD and included resting-state functional connectivity to illustrate a possible overlap between deficient network connectivity and the ability to integrate stimuli. METHODS We tested 25 patients with ADHD and 24 healthy controls using 2 illusory paradigms: the sound-induced flash illusion and the McGurk illusion. We applied the Mann-Whitney U test to assess statistical differences between groups. We acquired resting-state functional MRIs on a 3.0 T Siemens magnetic resonance scanner, using a highly accelerated 3-dimensional echo planar imaging sequence. RESULTS For the sound-induced flash illusion, susceptibility and reaction time were not different between the 2 groups. For the McGurk illusion, susceptibility was significantly lower for patients with ADHD, and reaction times were significantly longer. At a neuronal level, resting-state functional connectivity in the ADHD group was more highly regulated in polymodal regions that play a role in binding unimodal sensory inputs from different modalities and enabling sensory-to-cognition integration. LIMITATIONS We did not explicitly screen for autism spectrum disorder, which has high rates of comorbidity with ADHD and also involves impairments in multisensory integration. Although the patients were carefully screened by our outpatient department, we could not rule out the possibility of autism spectrum disorder in some participants.
CONCLUSION Unimodal hypersensitivity seems to have no influence on the integration of basal stimuli, but it might have negative consequences for the multisensory integration of complex stimuli. This finding was supported by observations of higher resting-state functional connectivity between unimodal sensory areas and polymodal multisensory integration convergence zones for complex stimuli.
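The group comparison above rests on the Mann-Whitney U statistic. As a minimal, dependency-free illustration of how that statistic is defined (the susceptibility scores below are invented for the sketch, not the study's data):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y: the number of pairs
    (xi, yj) with xi > yj, counting ties as half a pair."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical McGurk-illusion susceptibility scores (proportions);
# illustrative values only, not data from the study.
adhd = [0.20, 0.15, 0.30, 0.10, 0.25]
controls = [0.45, 0.50, 0.38, 0.60, 0.42]

print(mann_whitney_u(adhd, controls))   # 0.0: every ADHD score is below every control score
print(mann_whitney_u(controls, adhd))   # 25.0: the two U values always sum to len(x) * len(y)
```

In practice one would use `scipy.stats.mannwhitneyu`, which also returns the p-value; the hand-rolled version only shows what the statistic counts.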
Affiliation(s)
- Marcel Schulze
- From the Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany (Schulze, Aslan, Lux, Philipsen); Biopsychology and Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany (Schulze); the German Centre for Neurodegenerative Diseases (DZNE), Bonn, Germany (Stöcker, Stirnberg); and the Department of Physics and Astronomy, University of Bonn, Bonn, Germany (Stöcker)
25
Rezaul Karim AKM, Proulx MJ, de Sousa AA, Likova LT. Neuroplasticity and Crossmodal Connectivity in the Normal, Healthy Brain. Psychology & Neuroscience 2021; 14:298-334. [PMID: 36937077 PMCID: PMC10019101 DOI: 10.1037/pne0000258] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/08/2022]
Abstract
Objective Neuroplasticity enables the brain to establish new crossmodal connections or reorganize old connections which are essential to perceiving a multisensorial world. The intent of this review is to identify and summarize the current developments in neuroplasticity and crossmodal connectivity, and deepen understanding of how crossmodal connectivity develops in the normal, healthy brain, highlighting novel perspectives about the principles that guide this connectivity. Methods To the above end, a narrative review is carried out. The data documented in prior relevant studies in neuroscience, psychology and other related fields available in a wide range of prominent electronic databases are critically assessed, synthesized, interpreted with qualitative rather than quantitative elements, and linked together to form new propositions and hypotheses about neuroplasticity and crossmodal connectivity. Results Three major themes are identified. First, it appears that neuroplasticity operates by following eight fundamental principles and crossmodal integration operates by following three principles. Second, two different forms of crossmodal connectivity, namely direct crossmodal connectivity and indirect crossmodal connectivity, are suggested to operate in both unisensory and multisensory perception. Third, three principles possibly guide the development of crossmodal connectivity into adulthood. These are labeled as the principle of innate crossmodality, the principle of evolution-driven 'neuromodular' reorganization and the principle of multimodal experience. These principles are combined to develop a three-factor interaction model of crossmodal connectivity. Conclusions The hypothesized principles and the proposed model together advance understanding of neuroplasticity, the nature of crossmodal connectivity, and how such connectivity develops in the normal, healthy brain.
26
Ball F, Spuerck I, Noesselt T. Minimal interplay between explicit knowledge, dynamics of learning and temporal expectations in different, complex uni- and multisensory contexts. Atten Percept Psychophys 2021; 83:2551-2573. [PMID: 33977407 PMCID: PMC8302534 DOI: 10.3758/s13414-021-02313-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Accepted: 03/29/2021] [Indexed: 01/23/2023]
Abstract
While temporal expectations (TE) generally improve reactions to temporally predictable events, it remains unknown how the learning of temporal regularities (one time point more likely than another time point) and explicit knowledge about temporal regularities contribute to performance improvements; and whether any contributions generalise across modalities. Here, participants discriminated the frequency of diverging auditory, visual or audio-visual targets embedded in auditory, visual or audio-visual distractor sequences. Temporal regularities were manipulated run-wise (early vs. late target within sequence). Behavioural performance (accuracy, RT) plus measures from a computational learning model all suggest that learning of temporal regularities occurred but did not generalise across modalities, and that dynamics of learning (size of TE effect across runs) and explicit knowledge have little to no effect on the strength of TE. Remarkably, explicit knowledge affects performance, if at all, in a context-dependent manner: only under complex task regimes (here, unknown target modality) might it partially help to resolve response conflict, while it lowers performance in less complex environments.
Affiliation(s)
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, PO Box 4120, 39106, Magdeburg, Germany.
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.
- Inga Spuerck
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, PO Box 4120, 39106, Magdeburg, Germany
- Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, PO Box 4120, 39106, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
27
Billock VA, Kinney MJ, Schnupp JW, Meredith MA. A simple vector-like law for perceptual information combination is also followed by a class of cortical multisensory bimodal neurons. iScience 2021; 24:102527. [PMID: 34142039 PMCID: PMC8188495 DOI: 10.1016/j.isci.2021.102527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/05/2020] [Revised: 01/10/2021] [Accepted: 05/05/2021] [Indexed: 11/25/2022] Open
Abstract
An interdisciplinary approach to sensory information combination shows a correspondence between perceptual and neural measures of nonlinear multisensory integration. In psychophysics, sensory information combinations are often characterized by the Minkowski formula, but the neural substrates of many psychophysical multisensory interactions are unknown. We show that audiovisual interactions - for both psychophysical detection threshold data and cortical bimodal neurons - obey similar vector-like Minkowski models, suggesting that cortical bimodal neurons could underlie multisensory perceptual sensitivity. An alternative Bayesian model is not a good predictor of cortical bimodal response. In contrast to cortex, audiovisual data from superior colliculus resembles the 'City-Block' combination rule used in perceptual similarity metrics. Previous work found that a simple power-law amplification rule is followed both by perceptual appearance measures and by cortical subthreshold multisensory neurons. The two most studied neural cell classes in cortical multisensory interactions may provide neural substrates for two important perceptual modes: appearance-based and performance-based perception.
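The Minkowski combination rule named above has a compact form: combined sensitivity is (S_A^m + S_V^m)^(1/m), where the exponent m selects the rule (m = 1 is the 'City-Block' linear sum, m = 2 a Euclidean, vector-like sum). A sketch with illustrative sensitivity values only, not the paper's data:

```python
def minkowski_combine(s_a, s_v, m):
    """Combine two unisensory sensitivities with Minkowski exponent m.
    m = 1 gives the 'City-Block' (linear sum) rule; m = 2 gives a
    Euclidean, vector-like combination."""
    return (s_a ** m + s_v ** m) ** (1.0 / m)

# Illustrative unisensory sensitivities (arbitrary units)
print(minkowski_combine(3.0, 4.0, m=2))  # 5.0 (vector-like combination)
print(minkowski_combine(3.0, 4.0, m=1))  # 7.0 (City-Block sum)
```

The same function covers both regimes the abstract contrasts (cortex vs. superior colliculus) just by changing m.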
Affiliation(s)
- Vincent A. Billock
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Micah J. Kinney
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Naval Air Warfare Center, NAWCAD, Patuxent River, MD 20670, USA
- Jan W.H. Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong, China
- M. Alex Meredith
- Department of Anatomy and Neurobiology, Virginia Commonwealth University, Richmond, VA 23298, USA
28
Yuan X, Cheng Y, Jiang Y. Multisensory signals inhibit pupillary light reflex: Evidence from pupil oscillation. Psychophysiology 2021; 58:e13848. [PMID: 34002397 DOI: 10.1111/psyp.13848] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 11/18/2020] [Revised: 04/18/2021] [Accepted: 04/26/2021] [Indexed: 11/26/2022]
Abstract
Multisensory integration, which enhances stimulus saliency at the early stage of the processing hierarchy, has been recently shown to produce a larger pupil size than its unisensory constituents. Theoretically, any modulation on pupil size ought to be associated with the sympathetic and parasympathetic pathways that are sensitive to light. But it remains poorly understood how the pupillary light reflex is changed in a multisensory context. The present study evoked an oscillation of the pupillary light reflex by periodically changing the luminance of a visual stimulus at 1.25 Hz. It was found that such induced pupil size oscillation was substantially attenuated when the bright but not the dark phase of the visual flicker was periodically and synchronously presented with a burst of tones. This inhibition effect persisted when the visual flicker was task-irrelevant and out of attentional focus, but disappeared when the visual flicker was moved from the central field to the periphery. These findings not only offer a comprehensive characterization of the multisensory impact on pupil response to light, but also provide valuable clues about the individual contributions of the sympathetic and parasympathetic pathways to multisensory modulation of pupil size.
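The strength of a pupil oscillation entrained at a known tagging frequency (1.25 Hz here) can be estimated by projecting the trace onto sine and cosine at that frequency. A minimal sketch on a synthetic trace; the 50 Hz sampling rate and 0.5 amplitude are assumptions for illustration, not the study's recording parameters:

```python
import math

def component_amplitude(signal, fs, freq):
    """Amplitude of the sinusoidal component at `freq` Hz in `signal`
    sampled at `fs` Hz, via projection onto sine and cosine."""
    n = len(signal)
    c = sum(v * math.cos(2 * math.pi * freq * i / fs) for i, v in enumerate(signal))
    s = sum(v * math.sin(2 * math.pi * freq * i / fs) for i, v in enumerate(signal))
    return 2.0 * math.sqrt(c * c + s * s) / n

# Synthetic pupil trace: a 1.25 Hz oscillation of amplitude 0.5 (arbitrary
# units), 8 s at an assumed 50 Hz sampling rate -- exactly 10 full cycles.
fs = 50.0
trace = [0.5 * math.sin(2 * math.pi * 1.25 * i / fs) for i in range(int(fs * 8))]
print(component_amplitude(trace, fs, 1.25))  # ≈ 0.5 for this synthetic trace
```

An attenuated amplitude at the tagging frequency is the kind of signature the inhibition effect described above would produce.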
Affiliation(s)
- Xiangyong Yuan
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China
- Yuhui Cheng
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
29
Effects of stimulus intensity on audiovisual integration in aging across the temporal dynamics of processing. Int J Psychophysiol 2021; 162:95-103. [PMID: 33529642 DOI: 10.1016/j.ijpsycho.2021.01.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 05/06/2020] [Revised: 10/26/2020] [Accepted: 01/24/2021] [Indexed: 11/24/2022]
Abstract
Previous studies have drawn different conclusions about whether older adults benefit more from audiovisual integration, and such conflicts may have been due to the stimulus features investigated in those studies, such as stimulus intensity. In the current study, using ERPs, we compared the effects of stimulus intensity on audiovisual integration between young adults and older adults. The results showed that inverse effectiveness, the phenomenon whereby lowering the effectiveness of sensory stimuli increases the benefits of multisensory integration, was observed in young adults at earlier processing stages but was absent in older adults. Moreover, at the earlier processing stages (60-90 ms and 110-140 ms), older adults exhibited significantly greater audiovisual integration than young adults (all ps < 0.05). However, at the later processing stages (220-250 ms and 340-370 ms), young adults exhibited significantly greater audiovisual integration than older adults (all ps < 0.001). The results suggested that there is an age-related dissociation between early integration and late integration, which indicates that different audiovisual processing mechanisms are in play between older adults and young adults.
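Inverse effectiveness is commonly quantified with the multisensory enhancement index of Meredith and Stein: the percentage gain of the multisensory response over the strongest unisensory response. A sketch with illustrative response magnitudes, not the study's ERP data:

```python
def multisensory_enhancement(av, a, v):
    """Percent multisensory enhancement (Meredith & Stein convention):
    gain of the audiovisual response over the best unisensory response."""
    best = max(a, v)
    return 100.0 * (av - best) / best

# Illustrative response magnitudes (arbitrary units, not the study's data):
# the same absolute audiovisual gain yields a larger percentage enhancement
# when the unisensory responses are weaker -- i.e., inverse effectiveness.
print(multisensory_enhancement(av=12.0, a=10.0, v=8.0))  # 20.0
print(multisensory_enhancement(av=4.0, a=2.0, v=1.5))    # 100.0
```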
30
Perceived Loudness Sensitivity Influenced by Brightness in Urban Forests: A Comparison When Eyes Were Opened and Closed. Forests 2020. [DOI: 10.3390/f11121242] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Indexed: 11/16/2022]
Abstract
Soundscape plays a positive, health-related role in urban forests, and there is a competitive allocation of cognitive resources between soundscapes and lightscapes. This study aimed to explore the relationship between perceived loudness sensitivity and brightness in urban forests through eye opening and closure. Questionnaires and measuring equipment were used to gather soundscape and lightscape information at 44 observation sites in urban forested areas. Diurnal variations, Pearson's correlations, and formula derivations were then used to analyze the relationship between perception sensitivity and how perceived loudness sensitivity was influenced by lightscape. Our results suggested that soundscape variation plays a role in audio-visual perception in urban forests. Our findings also showed a gap in perception sensitivity between loudness and brightness, which yielded two opposite conditions bounded at 1.24 dBA. Furthermore, we found that the effect of brightness on perceived loudness sensitivity was limited if variations of brightness were sequential and weak. This can facilitate the understanding of individual perception of soundscape and lightscape in urban forests when proposing suitable design plans.
31
Kimura A. Cross-modal modulation of cell activity by sound in first-order visual thalamic nucleus. J Comp Neurol 2020; 528:1917-1941. [PMID: 31983057 DOI: 10.1002/cne.24865] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 08/06/2019] [Revised: 12/19/2019] [Accepted: 01/16/2020] [Indexed: 12/16/2022]
Abstract
Cross-modal auditory influence on cell activity in the primary visual cortex emerging at short latencies raises the possibility that the first-order visual thalamic nucleus, which is considered dedicated to unimodal visual processing, could contribute to cross-modal sensory processing, as has been indicated in the auditory and somatosensory systems. To test this hypothesis, the effects of sound stimulation on visual cell activity in the dorsal lateral geniculate nucleus were examined in anesthetized rats, using juxta-cellular recording and labeling techniques. Visual responses evoked by light (white LED) were modulated by sound (noise burst) given simultaneously or 50-400 ms after the light, even though sound stimuli alone did not evoke cell activity. Alterations of visual response were observed in 71% of cells (57/80) with regard to response magnitude, latency, and/or burst spiking. Suppression predominated in response magnitude modulation, but de novo responses were also induced by combined stimulation. Sound affected not only onset responses but also late responses. Late responses were modulated by sound given before or after onset responses. Further, visual responses evoked by the second light stimulation of a double flash with a 150-700 ms interval were also modulated by sound given together with the first light stimulation. In morphological analysis of labeled cells, projection cells comparable to X-, Y-, and W-like cells, as well as interneurons, were all susceptible to auditory influence. These findings suggest that the first-order visual thalamic nucleus incorporates auditory influence into parallel and complex thalamic visual processing for cross-modal modulation of visual attention and perception.
Affiliation(s)
- Akihisa Kimura
- Department of Physiology, Wakayama Medical University, Wakayama, Japan
32
Individual Differences in Multisensory Interactions: The Influence of Temporal Phase Coherence and Auditory Salience on Visual Contrast Sensitivity. Vision (Basel) 2020; 4:vision4010012. [PMID: 32033350 PMCID: PMC7157667 DOI: 10.3390/vision4010012] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 10/18/2019] [Revised: 01/21/2020] [Accepted: 01/30/2020] [Indexed: 11/16/2022] Open
Abstract
While previous research has investigated key factors contributing to multisensory integration in isolation, relatively little is known regarding how these factors interact, especially when considering the enhancement of visual contrast sensitivity by a task-irrelevant sound. Here we explored how auditory stimulus properties, namely salience and temporal phase coherence in relation to the visual target, jointly affect the extent to which a sound can enhance visual contrast sensitivity. Visual contrast sensitivity was measured by a psychophysical task, where human adult participants reported the location of a visual Gabor pattern presented at various contrast levels. We expected the greatest enhancement of contrast sensitivity, that is, the lowest contrast threshold, when the visual stimulus was accompanied by a task-irrelevant sound that was weak in auditory salience and modulated in phase with the visual stimulus (strong temporal phase coherence). Our expectations were confirmed, but only if we accounted for individual differences in the optimal auditory salience level needed to induce maximal multisensory enhancement effects. Our findings highlight the importance of interactions between temporal phase coherence and stimulus effectiveness in determining the strength of multisensory enhancement of visual contrast, as well as the importance of accounting for individual differences.
33
Loughrey DG, Mihelj E, Lawlor BA. Age-related hearing loss associated with altered response efficiency and variability on a visual sustained attention task. Aging, Neuropsychology, and Cognition 2019; 28:1-25. [PMID: 31868123 DOI: 10.1080/13825585.2019.1704393] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 01/09/2023]
Abstract
This study investigated the association between age-related hearing loss (ARHL) and differences in response efficiency and variability on a sustained attention task. The study population comprised 32 participants in a hearing loss group (HLG) and 34 controls without hearing loss (CG). Mean reaction time (RT) and accuracy were recorded to assess response efficiency. RT variability was decomposed to examine temporal aspects of variability associated with neural arousal and top-down executive control of vigilant attention. The HLG had a significantly longer mean RT, possibly reflecting a strategic approach to maintain accuracy. The HLG also demonstrated altered variability (indicative of greater decline in neural arousal) but maintained executive control that was significantly predictive of poorer response efficiency. Adults with ARHL may rely on higher-order attention networks to compensate for decline in both peripheral sensory function and in subcortical arousal systems which mediate lower-order automatic neurocognitive processes.
Affiliation(s)
- David G Loughrey
- Global Brain Health Institute, Trinity College Dublin, Ireland/University of California, San Francisco, CA, USA
- Ernest Mihelj
- Institute of Human Movement Sciences and Sport, Eidgenössische Technische Hochschule Zürich, Switzerland
- Brian A Lawlor
- Global Brain Health Institute, Trinity College Dublin, Ireland/University of California, San Francisco. Mercer's Institute for Successful Ageing, St James Hospital, Dublin, Ireland
34
Xi Y, Li Q, Zhang M, Liu L, Li G, Lin W, Wu J. Optimized Configuration of Functional Brain Network for Processing Semantic Audiovisual Stimuli Underlying the Modulation of Attention: A Graph-Based Study. Front Integr Neurosci 2019; 13:67. [PMID: 31798426 PMCID: PMC6877756 DOI: 10.3389/fnint.2019.00067] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 07/03/2019] [Accepted: 11/05/2019] [Indexed: 12/23/2022] Open
Abstract
Semantic audiovisual stimuli have a facilitatory effect on behavioral performance and influence the integration of multisensory inputs across sensory modalities. Many neuroimaging and electrophysiological studies investigated the neural mechanisms of multisensory semantic processing and reported that attention modulates the response to multisensory semantic inputs. In the present study, we designed a functional magnetic resonance imaging (fMRI) experiment of semantic discrimination using unimodal auditory, unimodal visual and bimodal audiovisual stimuli with semantic information. By manipulating the stimuli presented at attended and unattended positions, we recorded task-related fMRI data corresponding to the unimodal auditory, unimodal visual and bimodal audiovisual stimuli in attended and unattended conditions. We also recorded fMRI data in the resting state. The fMRI data were then combined with a graph-theoretical analysis to construct functional brain networks in task-related and resting states and to quantitatively characterize their topological properties. The aim of the present study was to explore the characteristics of functional brain networks that process semantic audiovisual stimuli in attended and unattended conditions, revealing the neural mechanism of multisensory processing and the modulation of attention. The behavioral results showed that simultaneously presented audiovisual stimuli promoted performance of the semantic discrimination task. The analyses of network properties showed that, compared with the resting-state condition, the functional networks for processing semantic audiovisual stimuli (in both attended and unattended conditions) had greater small-worldness and global efficiency, and lower clustering coefficient, characteristic path length, and hierarchy.
In addition, the hubs were concentrated in the bilateral temporal lobes, especially in the anterior temporal lobes (ATLs), and were positively correlated with reaction time (RT). Moreover, attention significantly altered the degree of small-worldness and the distribution of hubs in the functional network for processing semantic audiovisual stimuli. Our findings suggest that the topological structure of the functional brain network for processing semantic audiovisual stimuli is modulated by attention, and has the characteristics of high efficiency and low wiring cost, maintaining an optimized balance between functional segregation and integration for efficient multisensory processing.
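Two of the network properties named above, the mean clustering coefficient and the characteristic path length, can be computed directly from an adjacency structure. A toy sketch on a hypothetical four-node network; the region names are illustrative labels, not the study's connectivity data:

```python
from collections import deque
from itertools import combinations

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph
    given as {node: set_of_neighbours}."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # Count edges among the node's neighbours
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def mean_path_length(adj):
    """Characteristic path length via breadth-first search (connected graph)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy four-node network; node names are illustrative only.
adj = {"ATL": {"STG", "TPJ", "HG"}, "STG": {"ATL", "HG"},
       "TPJ": {"ATL"}, "HG": {"ATL", "STG"}}
print(round(clustering(adj), 3), round(mean_path_length(adj), 3))  # 0.583 1.333
```

For real connectivity matrices one would typically use a library such as NetworkX (or the Brain Connectivity Toolbox), which also provides small-worldness and efficiency measures.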
Affiliation(s)
- Yang Xi
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China; School of Computer Science, Northeast Electric Power University, Jilin, China
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Mengchao Zhang
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, China
- Lin Liu
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, China
- Guangjian Li
- Department of Neurology, The First Hospital of Jilin University, Changchun, China
- Weihong Lin
- Department of Neurology, The First Hospital of Jilin University, Changchun, China
- Jinglong Wu
- Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
35
Li Q, Xi Y, Zhang M, Liu L, Tang X. Distinct Mechanism of Audiovisual Integration With Informative and Uninformative Sound in a Visual Detection Task: A DCM Study. Front Comput Neurosci 2019; 13:59. [PMID: 31555115 PMCID: PMC6727739 DOI: 10.3389/fncom.2019.00059] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Received: 05/21/2019] [Accepted: 08/16/2019] [Indexed: 02/03/2023] Open
Abstract
Previous studies have shown that task-irrelevant auditory information can provide temporal clues for the detection of visual targets and improve visual perception; such sounds are called informative sounds. The neural mechanism of the integration of informative sound and visual stimulus has been investigated extensively, using behavioral measurement or neuroimaging methods such as functional magnetic resonance imaging (fMRI) and event-related potential (ERP), but the dynamic processes of audiovisual integration cannot be characterized formally in terms of directed neuronal coupling. The present study adopts dynamic causal modeling (DCM) of fMRI data to identify changes in effective connectivity in the hierarchical brain networks that underwrite audiovisual integration and memory. This allows us to characterize context-sensitive changes in neuronal coupling and show how visual processing is contextualized by the processing of informative and uninformative sounds. Our results show that audiovisual integration with informative and uninformative sounds conforms to different optimal models in the two conditions, indicating distinct neural mechanisms of audiovisual integration. The findings also suggest that uninformative sounds are integrated through low-level automatic audiovisual processes, whereas informative sounds are integrated within high-level cognitive processes.
Affiliation(s)
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Yang Xi
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China; School of Computer Science, Northeast Electric Power University, Jilin, China
- Mengchao Zhang
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, China
- Lin Liu
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
36
Macharadze T, Budinger E, Brosch M, Scheich H, Ohl FW, Henschke JU. Early Sensory Loss Alters the Dendritic Branching and Spine Density of Supragranular Pyramidal Neurons in Rodent Primary Sensory Cortices. Front Neural Circuits 2019; 13:61. [PMID: 31611778 PMCID: PMC6773815 DOI: 10.3389/fncir.2019.00061] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 06/21/2019] [Accepted: 09/03/2019] [Indexed: 01/26/2023] Open
Abstract
Multisensory integration in primary auditory (A1), visual (V1), and somatosensory cortex (S1) is substantially mediated by their direct interconnections and by thalamic inputs across the sensory modalities. We have previously shown in rodents (Mongolian gerbils) that during postnatal development, the anatomical and functional strengths of these crossmodal and also of sensory matched connections are determined by early auditory, somatosensory, and visual experience. Because supragranular layer III pyramidal neurons are major targets of corticocortical and thalamocortical connections, we investigated in this follow-up study how the loss of early sensory experience changes their dendritic morphology. Gerbils were sensory deprived early in development by either bilateral sciatic nerve transection at postnatal day (P) 5, ototoxic inner hair cell damage at P10, or eye enucleation at P10. Sholl and branch order analyses of Golgi-stained layer III pyramidal neurons at P28, which demarcates the end of the sensory critical period in this species, revealed that visual and somatosensory deprivation leads to a general increase of apical and basal dendritic branching in A1, V1, and S1. In contrast, dendritic branching, particularly of apical dendrites, decreased in all three areas following auditory deprivation. Generally, the number of spines, and consequently spine density, along the apical and basal dendrites decreased in both sensory deprived and non-deprived cortical areas. Therefore, we conclude that the loss of early sensory experience induces a refinement of corticocortical crossmodal and other cortical and thalamic connections by pruning of dendritic spines at the end of the critical period. Based on the present and our previous results, and on findings from the literature, we propose a scenario for multisensory development following early sensory loss.
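Sholl analysis, used above, counts how many dendritic segments cross concentric circles around the soma. A simplified sketch in which each segment is reduced to its radial extent; the segment radii are invented for illustration, not measured data:

```python
def sholl_intersections(segments, radii):
    """Sholl-style count: for each concentric radius, how many dendritic
    segments straddle that distance from the soma. Each segment is reduced
    to its (inner, outer) radial extent in micrometres."""
    return [sum(1 for inner, outer in segments if inner <= r < outer)
            for r in radii]

# Hypothetical radial extents of four segments (not measured data)
segments = [(0, 30), (10, 50), (25, 80), (40, 60)]
print(sholl_intersections(segments, radii=[20, 40, 70]))  # [2, 3, 1]
```

Real Sholl analysis works on traced 3D morphologies, but the counting principle per radius is the same.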
Affiliation(s)
- Tamar Macharadze
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Clinic for Anesthesiology and Intensive Care Medicine, Otto von Guericke University Hospital, Magdeburg, Germany
- Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Michael Brosch
- Center for Behavioral Brain Sciences, Magdeburg, Germany; Special Lab Primate Neurobiology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Henning Scheich
- Center for Behavioral Brain Sciences, Magdeburg, Germany; Emeritus Group Lifelong Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Institute for Biology, Otto von Guericke University, Magdeburg, Germany
- Julia U Henschke
- Institute of Cognitive Neurology and Dementia Research (IKND), Otto von Guericke University, Magdeburg, Germany
37
Cortical network underlying audiovisual semantic integration and modulation of attention: An fMRI and graph-based study. PLoS One 2019; 14:e0221185. [PMID: 31442242 PMCID: PMC6707554 DOI: 10.1371/journal.pone.0221185] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2019] [Accepted: 07/31/2019] [Indexed: 01/12/2023] Open
Abstract
Many neuroimaging and electrophysiology studies have suggested that semantic integration, as a high-level cognitive process, involves various cortical regions and is modulated by attention. However, the cortical network specific to semantic integration and the modulatory mechanism of attention remain unclear. Here, we designed an fMRI experiment using a “bimodal stimulus” to extract information regarding the cortical activation related to the effects of semantic integration with and without attention, and then analyzed the characteristics of the cortical network and the modulating effect of attention on semantic integration. To further investigate the related cortical regions, we constructed a functional brain network for processing attended audiovisual (AV) stimuli to evaluate the nodal properties using a graph-based method. The results of the fMRI and graph-based analyses showed that semantic integration with attention activated the anterior temporal lobe (ATL), temporoparietal junction (TPJ), and frontoparietal cortex, with the ATL showing the highest nodal degree and efficiency; in contrast, semantic integration without attention involved a relatively small cortical network, including the posterior superior temporal gyrus (STG), Heschl’s gyrus (HG), and precentral gyrus. These results indicated that semantic integration is a complex cognitive process that occurs not only in the attended condition but also in the unattended condition, and that attention can modulate the distribution of cortical networks related to semantic integration. We suggest that semantic integration with attention is a conscious process and requires a wide cortical network working together, in which the ATL plays the role of a central hub; in contrast, semantic integration without attention is a pre-attentive process and involves a relatively smaller cortical network, in which the HG may play an important role. Our study provides valuable insights into semantic integration and will be useful for investigations of multisensory integration and attention mechanisms at multiple processing stages and levels within the cortical hierarchy.
38
Plass J, Ahn E, Towle VL, Stacey WC, Wasade VS, Tao J, Wu S, Issa NP, Brang D. Joint Encoding of Auditory Timing and Location in Visual Cortex. J Cogn Neurosci 2019; 31:1002-1017. [PMID: 30912728 DOI: 10.1162/jocn_a_01399] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Co-occurring sounds can facilitate perception of spatially and temporally correspondent visual events. Separate lines of research have identified two putatively distinct neural mechanisms underlying two types of crossmodal facilitations: Whereas crossmodal phase resetting is thought to underlie enhancements based on temporal correspondences, lateralized occipital event-related potentials (ERPs) are thought to reflect enhancements based on spatial correspondences. Here, we sought to clarify the relationship between these two effects to assess whether they reflect two distinct mechanisms or, rather, two facets of the same underlying process. To identify the neural generators of each effect, we examined crossmodal responses to lateralized sounds in visually responsive cortex of 22 patients using electrocorticographic recordings. Auditory-driven phase reset and ERP responses in visual cortex displayed similar topography, revealing significant activity in pericalcarine, inferior occipital-temporal, and posterior parietal cortex, with maximal activity in lateral occipitotemporal cortex (potentially V5/hMT+). Laterality effects showed similar but less widespread topography. To test whether lateralized and nonlateralized components of crossmodal ERPs emerged from common or distinct neural generators, we compared responses throughout visual cortex. Visual electrodes responded to both contralateral and ipsilateral sounds with a contralateral bias, suggesting that previously observed laterality effects do not emerge from a distinct neural generator but rather reflect laterality-biased responses in the same neural populations that produce phase-resetting responses. These results suggest that crossmodal phase reset and ERP responses previously found to reflect spatial and temporal facilitation in visual cortex may reflect the same underlying mechanism. We propose a new unified model to account for these and previous results.
39
Tang X, Gao Y, Yang W, Ren Y, Wu J, Zhang M, Wu Q. Bimodal-divided attention attenuates visually induced inhibition of return with audiovisual targets. Exp Brain Res 2019; 237:1093-1107. [PMID: 30770958 DOI: 10.1007/s00221-019-05488-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Accepted: 02/04/2019] [Indexed: 11/27/2022]
Abstract
Inhibition of return (IOR) refers to the slower response to a target appearing at a previously attended location in a cue-target paradigm. It has been extensively explored in the visual and auditory modalities. This study investigates differences between the IOR of audiovisual targets and the IOR of visual targets under conditions of modality-specific selective attention (Experiment 1) and divided-modalities attention (Experiment 2). We employed an exogenous spatial cueing paradigm and manipulated the modality of the targets (visual, auditory, or audiovisual). Participants were asked to detect targets in the visual modality, or in both the visual and auditory modalities, presented on the same (cued) or opposite (uncued) side as the preceding visual peripheral cues. In Experiment 1, we found comparable IOR with visual and audiovisual targets when participants were asked to selectively attend to the visual modality. In Experiment 2, however, there was a smaller magnitude of IOR with audiovisual targets than with visual targets when participants attended to both the visual and auditory modalities. We also observed a reduced multisensory response enhancement effect and race-model inequality violation at cued locations relative to uncued locations. These results provide the first evidence of IOR with audiovisual targets. Furthermore, IOR with audiovisual targets decreases when attention is divided across both modalities. The interaction between exogenous spatial attention and audiovisual integration is discussed.
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, 116029, China.
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan.
- Yulin Gao
- Department of Psychology, Jilin University, Changchun, 130012, China
- Weiping Yang
- Department of Psychology, Hubei University, Wuhan, 430062, China
- Yanna Ren
- Department of Psychology, Guiyang University of Chinese Medicine, Guiyang, 550025, China
- Jinglong Wu
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan
- Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Key Laboratory of Biomimetic Robots and Systems, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing, 100081, China
- Ming Zhang
- Department of Psychology, Soochow University, Suzhou, 215123, China.
- Qiong Wu
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan.
40
Living and Working in a Multisensory World: From Basic Neuroscience to the Hospital. MULTIMODAL TECHNOLOGIES AND INTERACTION 2019. [DOI: 10.3390/mti3010002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023] Open
Abstract
The intensive care unit (ICU) of a hospital is an environment subjected to ceaseless noise. Patient alarms contribute to the saturated auditory environment and often overwhelm healthcare providers with constant and false alarms. This may lead to alarm fatigue and prevent optimum patient care. In response, a multisensory alarm system developed with consideration for human neuroscience and basic music theory is proposed as a potential solution. The integration of auditory, visual, and other sensory output within an alarm system can be used to convey more meaningful clinical information about patient vital signs in the ICU and operating room to ultimately improve patient outcomes.
41
Visually induced inhibition of return affects the audiovisual integration under different SOA conditions. ACTA PSYCHOLOGICA SINICA 2019. [DOI: 10.3724/sp.j.1041.2019.00759] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
42
Bieler M, Xu X, Marquardt A, Hanganu-Opatz IL. Multisensory integration in rodent tactile but not visual thalamus. Sci Rep 2018; 8:15684. [PMID: 30356135 PMCID: PMC6200796 DOI: 10.1038/s41598-018-33815-y] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Accepted: 10/04/2018] [Indexed: 11/09/2022] Open
Abstract
Behavioural performance requires a coherent perception of environmental features that address multiple senses. These diverse sensory inputs are integrated in primary sensory cortices, yet it is still largely unknown whether their convergence occurs even earlier along the sensory tract. Here we investigate the role of putatively modality-specific first-order (FO) thalamic nuclei (ventral posteromedial nucleus (VPM), dorsal lateral geniculate nucleus (dLGN)) and their interactions with primary sensory cortices (S1, V1) for multisensory integration in pigmented rats in vivo. We show that bimodal stimulation (i.e. simultaneous light flash and whisker deflection) enhances sensory evoked activity in VPM, but not dLGN. Moreover, cross-modal stimuli reset the phase of thalamic network oscillations and strengthen the coupling efficiency between VPM and S1, but not between dLGN and V1. Finally, the information flow from VPM to S1 is enhanced. Thus, FO tactile, but not visual, thalamus processes and relays sensory inputs from multiple senses, revealing a functional difference between sensory thalamic nuclei during multisensory integration.
Affiliation(s)
- Malte Bieler
- Developmental Neurophysiology, Institute of Neuroanatomy, University Medical Center Hamburg-Eppendorf, 20251, Hamburg, Germany; Laboratory for Neural Computation, Department of Physiology, University of Oslo, 0372, Oslo, Norway.
- Xiaxia Xu
- Developmental Neurophysiology, Institute of Neuroanatomy, University Medical Center Hamburg-Eppendorf, 20251, Hamburg, Germany
- Annette Marquardt
- Developmental Neurophysiology, Institute of Neuroanatomy, University Medical Center Hamburg-Eppendorf, 20251, Hamburg, Germany
- Ileana L Hanganu-Opatz
- Developmental Neurophysiology, Institute of Neuroanatomy, University Medical Center Hamburg-Eppendorf, 20251, Hamburg, Germany.
43
Tivadar RI, Retsa C, Turoman N, Matusz PJ, Murray MM. Sounds enhance visual completion processes. Neuroimage 2018; 179:480-488. [PMID: 29959049 DOI: 10.1016/j.neuroimage.2018.06.070] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2018] [Revised: 06/13/2018] [Accepted: 06/25/2018] [Indexed: 10/28/2022] Open
Abstract
Everyday vision includes the detection of stimuli, figure-ground segregation, as well as object localization and recognition. Such processes must often surmount impoverished or noisy conditions; borders are perceived despite occlusion or absent contrast gradients. These illusory contours (ICs) are an example of so-called mid-level vision, with an event-related potential (ERP) correlate at ∼100-150 ms post-stimulus onset and originating within lateral-occipital cortices (the IC effect). Presently, visual completion processes supporting IC perception are considered exclusively visual; any influence from other sensory modalities is currently unknown. It is now well-established that multisensory processes can influence both low-level vision (e.g. detection) as well as higher-level object recognition. By contrast, it is unknown if mid-level vision exhibits multisensory benefits and, if so, through what mechanisms. We hypothesized that sounds would impact the IC effect. We recorded 128-channel ERPs from 17 healthy, sighted participants who viewed ICs or no-contour (NC) counterparts either in the presence or absence of task-irrelevant sounds. The IC effect was enhanced by sounds and resulted in the recruitment of a distinct configuration of active brain areas over the 70-170 ms post-stimulus period. IC-related source-level activity within the lateral occipital cortex (LOC), inferior parietal lobe (IPL), as well as primary visual cortex (V1) were enhanced by sounds. Moreover, the activity in these regions was correlated when sounds were present, but not when absent. Results from a control experiment, which employed amodal variants of the stimuli, suggested that sounds impact the perceived brightness of the IC rather than shape formation per se. We provide the first demonstration that multisensory processes augment mid-level vision and everyday visual completion processes, and that one of the mechanisms is brightness enhancement. These results have important implications for the design of treatments and/or visual aids for low-vision patients.
Affiliation(s)
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, 1003, Lausanne, Switzerland
- Chrysa Retsa
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011, Lausanne, Switzerland
- Nora Turoman
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011, Lausanne, Switzerland
- Pawel J Matusz
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), 3960, Sierre, Switzerland
- Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, 1003, Lausanne, Switzerland; The EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), University Hospital Center and University of Lausanne, 1011, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, 37203-5721, USA.
44
de la Rosa MD, Bausenhart KM. Enhancement of letter identification by concurrent auditory stimuli of varying duration. Acta Psychol (Amst) 2018; 190:38-52. [PMID: 30005175 DOI: 10.1016/j.actpsy.2018.07.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Revised: 06/25/2018] [Accepted: 07/02/2018] [Indexed: 11/24/2022] Open
Abstract
Previously it has been shown that the concurrent presentation of a sound can improve processing of visual information at higher perceptual levels, for example, in letter identification tasks. Moreover, increasing the duration of a concurrent sound can enhance performance in low-level tasks such as contrast detection, which has been attributed to a sustained visual activation corresponding to the duration of the sound. Yet, the role of sound duration has so far not been investigated in higher-level visual processing. In a series of five experiments, we again demonstrated that the mere presence of a concurrent sound can enhance the identification of a masked, centrally presented letter compared to unimodal presentation, although this benefit was absent in one experiment for high-contrast letters yielding an especially high level of task performance. In general, however, the sound-induced benefit was not modulated by a variation of target contrast or by the duration of the target-to-mask interstimulus interval. Taking individual performance differences into account, a further analysis suggested that the sound-induced facilitation effect may nevertheless be most pronounced at specific performance levels. Beyond this general sound-induced facilitation, letter identification performance was not further affected by the duration of the concurrent sounds, even though a control experiment established that letter identification performance improved with increasing letter duration, and perceived letter duration was prolonged with increasing auditory duration. The results, and their interpretation with respect to the large observed interindividual performance differences, are discussed in terms of potential underlying mechanisms of multisensory facilitation, such as preparedness enhancement, signal enhancement, and object enhancement.
45
Ball F, Fuehrmann F, Stratil F, Noesselt T. Phasic and sustained interactions of multisensory interplay and temporal expectation. Sci Rep 2018; 8:10208. [PMID: 29976998 PMCID: PMC6033875 DOI: 10.1038/s41598-018-28495-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 06/25/2018] [Indexed: 12/18/2022] Open
Abstract
Every moment, organisms are confronted with complex streams of information, which they use to generate a reliable mental model of the world. There is converging evidence for several optimization mechanisms instrumental in integrating (or segregating) incoming information; among them are multisensory interplay (MSI) and temporal expectation (TE). Both mechanisms can account for enhanced perceptual sensitivity and are well studied in isolation; how these two mechanisms interact is currently less well known. Here, we tested in a series of four psychophysical experiments for TE effects in uni- and multisensory contexts with different levels of modality-related and spatial uncertainty. We found that TE enhanced perceptual sensitivity for the multisensory relative to the best unisensory condition (i.e. multisensory facilitation according to the max-criterion). In the latter, TE effects even vanished when stimulus-related spatial uncertainty was increased. Accordingly, computational modelling indicated that TE, modality-related uncertainty, and spatial uncertainty predict multisensory facilitation. Finally, the analysis of stimulus history revealed that matching expectation at trial n-1 selectively improves multisensory performance irrespective of stimulus-related uncertainty. Together, our results indicate that the benefits of multisensory stimulation are enhanced by TE, especially in noisy environments, allowing for more robust information extraction to boost performance over both short and sustained time ranges.
Affiliation(s)
- Felix Ball
- Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.
- Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.
- Fabienne Fuehrmann
- Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Fenja Stratil
- Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Toemme Noesselt
- Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
46
Díaz B, Blank H, von Kriegstein K. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition. Neuroimage 2018; 178:721-734. [PMID: 29772380 DOI: 10.1016/j.neuroimage.2018.05.032] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Revised: 04/12/2018] [Accepted: 05/12/2018] [Indexed: 11/19/2022] Open
Abstract
The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition.
Affiliation(s)
- Begoña Díaz
- Center for Brain and Cognition, Pompeu Fabra University, Barcelona, 08018, Spain; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, 04103, Germany; Department of Basic Sciences, Faculty of Medicine and Health Sciences, International University of Catalonia, 08195 Sant Cugat del Vallès, Spain.
- Helen Blank
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, 04103, Germany; University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, 04103, Germany; Faculty of Psychology, Technische Universität Dresden, 01187, Dresden, Germany
47
Henschke JU, Oelschlegel AM, Angenstein F, Ohl FW, Goldschmidt J, Kanold PO, Budinger E. Early sensory experience influences the development of multisensory thalamocortical and intracortical connections of primary sensory cortices. Brain Struct Funct 2018; 223:1165-1190. [PMID: 29094306 PMCID: PMC5871574 DOI: 10.1007/s00429-017-1549-1] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Accepted: 09/29/2017] [Indexed: 12/21/2022]
Abstract
The nervous system integrates information from multiple senses. This multisensory integration already occurs in primary sensory cortices via direct thalamocortical and corticocortical connections across modalities. In humans, sensory loss from birth results in functional recruitment of the deprived cortical territory by the spared senses but the underlying circuit changes are not well known. Using tracer injections into primary auditory, somatosensory, and visual cortex within the first postnatal month of life in a rodent model (Mongolian gerbil) we show that multisensory thalamocortical connections emerge before corticocortical connections but mostly disappear during development. Early auditory, somatosensory, or visual deprivation increases multisensory connections via axonal reorganization processes mediated by non-lemniscal thalamic nuclei and the primary areas themselves. Functional single-photon emission computed tomography of regional cerebral blood flow reveals altered stimulus-induced activity and higher functional connectivity specifically between primary areas in deprived animals. Together, we show that intracortical multisensory connections are formed as a consequence of sensory-driven multisensory thalamocortical activity and that spared senses functionally recruit deprived cortical areas by an altered development of sensory thalamocortical and corticocortical connections. The functional-anatomical changes after early sensory deprivation have translational implications for the therapy of developmental hearing loss, blindness, and sensory paralysis and might also underlie developmental synesthesia.
Affiliation(s)
- Julia U Henschke
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- German Center for Neurodegenerative Diseases Within the Helmholtz Association, Leipziger Str. 44, 39120, Magdeburg, Germany
- Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Anja M Oelschlegel
- Research Group Neuropharmacology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Institute of Anatomy, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Frank Angenstein
- Functional Neuroimaging Group, German Center for Neurodegenerative Diseases Within the Helmholtz Association, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Institute of Biology, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Jürgen Goldschmidt
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Patrick O Kanold
- Department of Biology, University of Maryland, College Park, MD, 20742, USA
- Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany.
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany.
48
Henschke JU, Ohl FW, Budinger E. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging. Front Aging Neurosci 2018; 10:52. [PMID: 29551970 PMCID: PMC5840148 DOI: 10.3389/fnagi.2018.00052] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Accepted: 02/15/2018] [Indexed: 11/22/2022] Open
Abstract
During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (caspase-3), axonal plasticity (growth-associated protein 43, GAP-43), and a calcium-binding protein (parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched sensory modality, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neurons' axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals.
Affiliation(s)
- Julia U Henschke
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Department Genetics, Leibniz Institute for Neurobiology, Magdeburg, Germany; German Center for Neurodegenerative Diseases within the Helmholtz Association, Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Institute of Biology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| |
|
49
|
Kimura A, Imbe H. Robust Subthreshold Cross-modal Modulation of Auditory Response by Cutaneous Electrical Stimulation in First- and Higher-order Auditory Thalamic Nuclei. Neuroscience 2018; 372:161-180. [PMID: 29309880 DOI: 10.1016/j.neuroscience.2017.12.051] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2017] [Revised: 12/14/2017] [Accepted: 12/27/2017] [Indexed: 12/14/2022]
Abstract
Conventional extracellular recording has revealed cross-modal alterations of auditory cell activities by cutaneous electrical stimulation of the hindpaw in first- and higher-order auditory thalamic nuclei (Donishi et al., 2011). In the present study, juxtacellular recording and labeling techniques were used to examine these cross-modal alterations in detail, focusing on possible nucleus- and/or cell type-related distinctions in modulation. Recordings were obtained from 80 cells in anesthetized rats. Cutaneous electrical stimulation that did not itself elicit unit discharges, i.e., subthreshold stimulation, modulated early (onset) and/or late auditory responses of first- (64%) and higher-order nucleus cells (77%) with regard to response magnitude, latency and/or burst spiking. Attenuation predominated in the modulation of response magnitude and burst spiking, and delay predominated in the modulation of response time. Striking alterations of burst spiking took place in higher-order nucleus cells, which had higher propensities for burst spiking than first-order nucleus cells. A subpopulation of first-order nucleus cells showing modulation of early response magnitude, located in the caudal domain of the nucleus, had larger cell bodies and higher propensities for burst spiking than cells showing no modulation. These findings suggest that somatosensory influence is incorporated into parallel channels in the auditory thalamic nuclei, imposing distinct impacts on cortical and subcortical sensory processing. Further, cutaneous electrical stimulation given after the early auditory response modulated late responses, indicating that somatosensory influence can affect ongoing auditory processing at any time, without having to coincide with sound onset within a narrow temporal window.
Affiliation(s)
- Akihisa Kimura
- Department of Physiology, Wakayama Medical University, Wakayama Kimiidera 811-1, 641-8509, Japan.
- Hiroki Imbe
- Department of Physiology, Wakayama Medical University, Wakayama Kimiidera 811-1, 641-8509, Japan.
|
50
|
Starke J, Ball F, Heinze HJ, Noesselt T. The spatio-temporal profile of multisensory integration. Eur J Neurosci 2017; 51:1210-1223. [PMID: 29057531 DOI: 10.1111/ejn.13753] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2017] [Revised: 10/13/2017] [Accepted: 10/16/2017] [Indexed: 12/29/2022]
Abstract
Task-irrelevant visual stimuli can enhance auditory perception. However, while there is some neurophysiological evidence for mechanisms that may underlie the phenomenon, the neural basis of visually induced effects on auditory perception remains poorly understood. Combining fMRI and EEG with psychophysical measurements in two independent studies, we identified the neural underpinnings and temporal dynamics of visually induced auditory enhancement. Lower- and higher-intensity sounds were paired with a non-informative visual stimulus while participants performed an auditory detection task. Behaviourally, visual co-stimulation enhanced auditory sensitivity. Using fMRI, enhanced BOLD signals were observed in primary auditory cortex for low-intensity audiovisual stimuli, and these scaled with subject-specific enhancements in perceptual sensitivity. Concordantly, a modulation of event-related potentials could already be observed over frontal electrodes at an early latency (30-80 ms), which again scaled with subject-specific behavioural benefits. Later modulations, starting around 280 ms, that is, in the time range of the P3, did not fit this pattern of brain-behaviour correspondence. Hence, the latency of the corresponding fMRI-EEG brain-behaviour modulation points to an early interplay of visual and auditory signals in low-level auditory cortex, potentially mediated by crosstalk at the level of the thalamus. However, fMRI signals in primary auditory cortex, auditory thalamus and the P50 for higher-intensity auditory stimuli were also elevated by visual co-stimulation (in the absence of any behavioural effect), suggesting a general, intensity-independent integration mechanism. We propose that this automatic interaction occurs at the level of the thalamus and might signify a first step of the audiovisual interplay necessary for visually induced perceptual enhancement of auditory perception.
Affiliation(s)
- Johanna Starke
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Hans-Jochen Heinze
- Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
|