1. Sinha A, Lee J, Kim J, So H. An evaluation of recent advancements in biological sensory organ-inspired neuromorphically tuned biomimetic devices. Mater Horiz 2024;11:5181-5208. PMID: 39114942; DOI: 10.1039/d4mh00522h.
Abstract
In the field of neuroscience, significant progress has been made regarding how the brain processes information. Unlike computer processors, the brain comprises neurons and synapses instead of memory blocks and transistors. Despite advancements in artificial neural networks, a complete understanding of brain function remains elusive. For example, to achieve more accurate neuron replication, we must better understand signal transmission during synaptic processes, neural network tunability, and the creation of nanodevices featuring neurons and synapses. This study discusses the latest algorithms utilized in neuromorphic systems, the production of synaptic devices, differences between single- and multisensory devices, recent advances in multisensory devices, and the promising research opportunities available in this field. We also explore the ability of an artificial synaptic device to mimic biological neural systems across diverse applications. Despite existing challenges, neuroscience-based computing technology holds promise for attracting scientists seeking to enhance solutions and augment the capabilities of neuromorphic devices, thereby fostering future breakthroughs in algorithms and the widespread application of cutting-edge technologies.
Affiliation(s)
- Animesh Sinha
- Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea.
- Jihun Lee
- Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea.
- Junho Kim
- Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea.
- Hongyun So
- Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea.
- Institute of Nano Science and Technology, Hanyang University, Seoul 04763, South Korea
2. Cacciamani L, Tomer D, Mylod-Vargas MG, Selcov A, Peterson GA, Oseguera CI, Barbieux A. HD-tDCS to the lateral occipital complex improves haptic object recognition. Exp Brain Res 2024;242:2113-2124. PMID: 38970654; DOI: 10.1007/s00221-024-06888-7.
Abstract
High-definition transcranial direct current stimulation (HD-tDCS) is a non-invasive brain stimulation technique that has been shown to be safe and effective in modulating neuronal activity. The present study investigates the effect of anodal HD-tDCS on haptic object perception and memory through stimulation of the lateral occipital complex (LOC), a structure that has been shown to be involved in both visual and haptic object recognition. In this single-blind, sham-controlled, between-subjects study, blindfolded, healthy, sighted participants used their right (dominant) hand to perform haptic discrimination and recognition tasks with 3D-printed novel objects called "Greebles" while receiving 20 min of 2-mA anodal stimulation (or sham) to the left or right LOC. Compared to sham, those who received left LOC stimulation (contralateral to the hand used) showed an improvement in haptic object recognition but not discrimination, a finding that was evident from the start of the behavioral tasks. A second experiment showed that this effect was not observed with right LOC stimulation (ipsilateral to the hand used). These results suggest that HD-tDCS to the left LOC can improve recognition of objects perceived via touch. Overall, this work sheds light on the LOC as a multimodal structure that plays a key role in object recognition in both the visual and haptic modalities.
Affiliation(s)
- Laura Cacciamani
- Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA, 93407, USA.
- Daniel Tomer
- Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA, 93407, USA
- Mary Grace Mylod-Vargas
- Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA, 93407, USA
- Aaron Selcov
- Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA, 93407, USA
- Grace A Peterson
- Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA, 93407, USA
- Christopher I Oseguera
- Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA, 93407, USA
- Aidan Barbieux
- Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA, 93407, USA
3. Retsa C, Turpin H, Geiser E, Ansermet F, Müller-Nix C, Murray MM. Longstanding Auditory Sensory and Semantic Differences in Preterm Born Children. Brain Topogr 2024;37:536-551. PMID: 38010487; PMCID: PMC11199270; DOI: 10.1007/s10548-023-01022-2.
Abstract
More than 10% of births are preterm, and the long-term consequences for sensory and semantic processing of non-linguistic information remain poorly understood. Seventeen very preterm-born children (born at <33 weeks gestational age) and 15 full-term controls were tested at 10 years old with an auditory object recognition task, while 64-channel auditory evoked potentials (AEPs) were recorded. Sounds consisted of living objects (animal and human vocalizations) and manmade objects (e.g., household objects, instruments, and tools). Despite similar recognition behavior, AEPs strikingly differed between full-term and preterm children. Starting at 50 ms post-stimulus onset, AEPs from preterm children differed topographically from those of their full-term counterparts. Over the 108-224 ms post-stimulus period, full-term children showed stronger AEPs in response to living objects, whereas preterm-born children showed the reverse pattern, i.e., stronger AEPs in response to manmade objects. Differential brain activity between semantic categories could reliably classify children according to their preterm status. Moreover, this opposing pattern of differential responses to semantic categories of sounds was also observed in source estimations within a network of occipital, temporal, and frontal regions. This study highlights how early life experience, in terms of preterm birth, shapes sensory and object processing later in life.
Affiliation(s)
- Chrysa Retsa
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
- CIBM Center for Biomedical Imaging, Lausanne, Switzerland.
- Hélène Turpin
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Eveline Geiser
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- François Ansermet
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Department of Child and Adolescent Psychiatry, University Hospital, Geneva, Switzerland
- Carole Müller-Nix
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- CIBM Center for Biomedical Imaging, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
4. Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024;36:303-326. PMID: 38010315; DOI: 10.1162/jocn_a_02090.
Abstract
Auditory commands are often executed more efficiently than visual commands. However, empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect and found consistently enhanced reaction time (RT) benefits for matched auditory cues compared with matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: the auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang
- South China Normal University, Guangzhou, China
- Ying Fang
- South China Normal University, Guangzhou, China
- Qiang Guo
- Guangdong Sanjiu Brain Hospital, Guangzhou, China
- Lu Shen
- South China Normal University, Guangzhou, China
- Qi Chen
- South China Normal University, Guangzhou, China
5. Tivadar RI, Franceschiello B, Minier A, Murray MM. Learning and navigating digitally rendered haptic spatial layouts. NPJ Sci Learn 2023;8:61. PMID: 38102127; PMCID: PMC10724186; DOI: 10.1038/s41539-023-00208-4.
Abstract
Learning spatial layouts and navigating through them rely not simply on sight but rather on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested whether this extends to scenes and navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment based on digital haptics only and then one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics were thus an effective means both to learn 2D images and translate them into 3D reconstructions of layouts, and to guide navigation within real spaces. Digital haptics based on ultrasounds represent an alternative learning tool for complex scenes as well as for successful navigation in previously unfamiliar layouts, which can likely be further applied in the rehabilitation of spatial functions and the mitigation of visual impairments.
Affiliation(s)
- Ruxandra I Tivadar
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland.
- Centre for Integrative and Complementary Medicine, Department of Anesthesiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland.
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
- Benedetta Franceschiello
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Institute of Systems Engineering, School of Engineering, University of Applied Sciences Western Switzerland (HES-SO Valais), Sion, Switzerland
- Astrid Minier
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland.
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
6. Ghaneirad E, Borgolte A, Sinke C, Čuš A, Bleich S, Szycik GR. The effect of multisensory semantic congruency on unisensory object recognition in schizophrenia. Front Psychiatry 2023;14:1246879. PMID: 38025441; PMCID: PMC10646423; DOI: 10.3389/fpsyt.2023.1246879.
Abstract
Multisensory, as opposed to unisensory, processing of stimuli has been found to enhance the performance (e.g., reaction time, accuracy, and discrimination) of healthy individuals across various tasks. However, this enhancement is not as pronounced in patients with schizophrenia (SZ), indicating impaired multisensory integration (MSI) in these individuals. To the best of our knowledge, no study has yet investigated the impact of MSI deficits in the context of working memory, a domain highly reliant on multisensory processing and substantially impaired in schizophrenia. To address this research gap, we employed two adapted versions of the continuous object recognition task to investigate the effect of single-trial multisensory encoding on subsequent object recognition in 21 schizophrenia patients and 21 healthy controls (HC). Participants were tasked with discriminating between initial and repeated presentations. For the initial presentations, half of the stimuli were audiovisual pairings, while the other half were presented unimodally. The task-relevant stimuli were then presented a second time in a unisensory manner (either auditory stimuli in the auditory task or visual stimuli in the visual task). To explore the impact of semantic context on multisensory encoding, half of the audiovisual pairings were selected to be semantically congruent, while the remaining pairs were not semantically related to each other. Consistent with prior studies, our findings demonstrated that the impact of single-trial multisensory presentation during encoding remains discernible during subsequent object recognition. This influence could be distinguished based on the semantic congruity between the auditory and visual stimuli presented during encoding, and the effect was more robust in the auditory task. In the auditory task, when congruent multisensory pairings were encoded, both participant groups demonstrated a multisensory facilitation effect, resulting in improved accuracy and faster reaction times. Regarding incongruent audiovisual encoding, as expected, HC did not demonstrate an evident multisensory facilitation effect on memory performance. In contrast, SZ patients exhibited an atypically accelerated reaction time during subsequent auditory object recognition. Based on the predictive coding model, we propose that these observed deviations indicate a reduced semantic modulatory effect and anomalous prediction-error signaling, particularly in the context of conflicting cross-modal sensory inputs, in SZ.
Affiliation(s)
- Erfan Ghaneirad
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Anna Borgolte
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christopher Sinke
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Division of Clinical Psychology and Sexual Medicine, Hannover Medical School, Hannover, Germany
- Anja Čuš
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Stefan Bleich
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Center for Systems Neuroscience, University of Veterinary Medicine, Hanover, Germany
- Gregor R. Szycik
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
7. Al-Mazidi SH. The Physiology of Cognition in Autism Spectrum Disorder: Current and Future Challenges. Cureus 2023;15:e46581. PMID: 37808604; PMCID: PMC10557542; DOI: 10.7759/cureus.46581.
Abstract
Cognitive impairment is among the most challenging characteristics of autism spectrum disorder (ASD). Although ASD is one of the most common neurodevelopmental disorders, we are still behind in diagnosing and treating cognitive impairment in ASD. Cognitive impairment in ASD varies, ranging from the level of sensory perception to cognitive processing, learning, and memory. There are no diagnostic criteria for cognitive impairment that are specific to ASD. The leading causes of cognitive impairment in ASD could be neurological, immune, and gastrointestinal dysfunction. Immune dysfunction might lead to neuroinflammation, affecting neural connectivity, glutamate/gamma-aminobutyric acid (GABA) balance, and plasticity. The gut-brain axis is also essential in the developing brain. Distinctive retinal changes have recently been detected in ASD, which need clinical investigation to establish their possible role in early diagnosis. Early intervention is crucial for cognitive dysfunction in ASD. Because of the heterogeneity of the disease, the clinical manifestation of ASD makes it difficult for clinicians to develop gold-standard diagnostic and therapeutic criteria. We suggest a diagnostic triad that includes clinical tests for biomarkers of immune and gastrointestinal dysfunction, clinical examination of the retina, and an objective neurocognitive evaluation for ASD, and we propose developing a treatment strategy involving these three aspects. Developing clear treatment criteria for cognitive impairment in ASD would improve the quality of life of people with ASD and their caregivers and would delay or prevent dementia-related disorders in this population.
8. Reuveni I, Dan R, Canetti L, Bick AS, Segman R, Azoulay M, Kalla C, Bonne O, Goelman G. Aberrant Intrinsic Brain Network Functional Connectivity During a Face-Matching Task in Women Diagnosed With Premenstrual Dysphoric Disorder. Biol Psychiatry 2023;94:492-500. PMID: 37031779; DOI: 10.1016/j.biopsych.2023.04.001.
Abstract
BACKGROUND: Premenstrual dysphoric disorder (PMDD) is characterized by affective, cognitive, and physical symptoms, suggesting alterations at the brain network level. Women with PMDD demonstrate aberrant discrimination of facial emotions during the luteal phase of the menstrual cycle and altered reactivity to emotional stimuli. However, previous studies assessing emotional task-related brain reactivity using region-of-interest or whole-brain analyses have reported conflicting findings. Therefore, we utilized both region-of-interest task-reactivity and seed-voxel functional connectivity (FC) approaches to test for differences in the default mode network, salience network, and central executive network between women with PMDD and control participants during an emotional-processing task, which yields an optimal setup for investigating brain network changes in PMDD.
METHODS: Twenty-four women with PMDD and 27 control participants were classified according to the Daily Record of Severity of Problems. Participants underwent functional magnetic resonance imaging scans while completing the emotional face-matching task during the midfollicular and late-luteal phases of their menstrual cycle.
RESULTS: No significant between-group differences in brain reactivity were found using region-of-interest analysis. In the FC analysis, a main effect of diagnosis was found, showing decreased default mode network connectivity, increased salience network connectivity, and decreased central executive network connectivity in women with PMDD compared with control participants. A significant interaction between menstrual cycle phase and diagnosis was found in the central executive network for right posterior parietal cortex and left inferior lateral occipital cortex connectivity. A post hoc analysis revealed stronger FC during the midfollicular than the late-luteal phase in women with PMDD.
CONCLUSIONS: Aberrant FC in the three brain networks involved in PMDD may indicate vulnerability to experiencing the affective and cognitive symptoms of the disorder.
Affiliation(s)
- Inbal Reuveni
- Department of Psychiatry, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Rotem Dan
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Department of Neurology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Laura Canetti
- Department of Psychiatry, Hadassah Hebrew University Medical Center, Jerusalem, Israel; Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Atira S Bick
- Department of Neurology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Ronen Segman
- Department of Psychiatry, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Moria Azoulay
- Department of Psychiatry, Hadassah Hebrew University Medical Center, Jerusalem, Israel; Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Carmel Kalla
- Department of Psychiatry, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Omer Bonne
- Department of Psychiatry, Hadassah Hebrew University Medical Center, Jerusalem, Israel.
- Gadi Goelman
- Department of Neurology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
9. Schumann K, Rodriguez-Raecke R, Sijben R, Freiherr J. Elevated Insulin Levels Engage the Salience Network during Multisensory Perception. Neuroendocrinology 2023;114:90-106. PMID: 37634508; DOI: 10.1159/000533663.
Abstract
INTRODUCTION: Brain insulin reactivity has been reported in connection with systemic energy metabolism, enhancement of cognition, olfactory sensitivity, and neuroendocrine circuits. High receptor densities exist in regions important for sensory processing. The main aim of the study was to examine whether intranasal insulin would modulate the activity of areas in charge of olfactory-visual integration.
METHODS: A placebo-controlled, double-blind, within-subject crossover design was chosen. The experiments were conducted in a research unit of a university hospital. On separate mornings, twenty-six healthy, normal-weight males aged between 19 and 31 years received either 40 IU of intranasal insulin or a placebo vehicle. Subsequently, they underwent 65 min of functional magnetic resonance imaging while performing an odor identification task. Functional brain activation related to olfactory, visual, and multisensory integration, as well as to insulin versus placebo, was assessed. For the odor identification task, reaction time, accuracy, pleasantness, and intensity measurements were taken to examine the role of integration and treatment. Blood samples were drawn to control for peripheral hormone concentrations.
RESULTS: Intranasal insulin administration during olfactory-visual stimulation revealed strong bilateral engagement of frontoinsular cortices, anterior cingulate, prefrontal cortex, mediodorsal thalamus, and striatal and hippocampal regions (p ≤ 0.001, familywise error [FWE] corrected). In addition, the integration contrast showed increased activity in the left intraparietal sulcus, left inferior frontal gyrus, left superior frontal gyrus, and left middle frontal gyrus (p ≤ 0.013, FWE corrected).
CONCLUSIONS: Intranasal insulin application in lean men led to enhanced activation in multisensory olfactory-visual integration sites and salience hubs, indicating modulation of stimulus valuation. This effect can serve as a basis for understanding the connection between intracerebral insulin and olfactory-visual processing.
Affiliation(s)
- Katja Schumann
- Diagnostic and Interventional Neuroradiology, RWTH Aachen University, Aachen, Germany
- Rea Rodriguez-Raecke
- Diagnostic and Interventional Neuroradiology, RWTH Aachen University, Aachen, Germany
- Brain Imaging Facility, Interdisciplinary Center for Clinical Research, RWTH Aachen University, Aachen, Germany
- Rik Sijben
- Brain Imaging Facility, Interdisciplinary Center for Clinical Research, RWTH Aachen University, Aachen, Germany
- Jessica Freiherr
- Department of Psychiatry and Psychotherapy, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Fraunhofer Institute for Process Engineering and Packaging IVV, Freising, Germany
10. Krason A, Vigliocco G, Mailend ML, Stoll H, Varley R, Buxbaum LJ. Benefit of visual speech information for word comprehension in post-stroke aphasia. Cortex 2023;165:86-100. PMID: 37271014; PMCID: PMC10850036; DOI: 10.1016/j.cortex.2023.04.011.
Abstract
Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 neurotypical matched control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette) with audio being clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into our understanding of the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.
Affiliation(s)
- Anna Krason
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA.
- Gabriella Vigliocco
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Marja-Liisa Mailend
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Special Education, University of Tartu, Tartu Linn, Estonia
- Harrison Stoll
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Applied Cognitive and Brain Science, Drexel University, Philadelphia, PA, USA
- Laurel J Buxbaum
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Rehabilitation Medicine, Thomas Jefferson University, Philadelphia, PA, USA
11. Pepper JL, Nuttall HE. Age-Related Changes to Multisensory Integration and Audiovisual Speech Perception. Brain Sci 2023;13:1126. PMID: 37626483; PMCID: PMC10452685; DOI: 10.3390/brainsci13081126.
Abstract
Multisensory integration is essential for the quick and accurate perception of our environment, particularly in everyday tasks like speech perception. Research has highlighted the importance of investigating bottom-up and top-down contributions to multisensory integration and how these change as a function of ageing. Specifically, perceptual factors like the temporal binding window and cognitive factors like attention and inhibition appear to be fundamental to the integration of visual and auditory information, an integration that may become less efficient as we age. These factors have been linked to brain areas like the superior temporal sulcus, and neural oscillations in the alpha-band frequency have also been implicated in multisensory processing. Age-related changes in multisensory integration may have significant consequences for the well-being of our increasingly ageing population, affecting their ability to communicate with others and safely move through their environment; it is crucial that the evidence surrounding this subject continues to be carefully investigated. This review will discuss research into age-related changes in the perceptual and cognitive mechanisms of multisensory integration and the impact that these changes have on speech perception and fall risk. The role of oscillatory alpha activity is of particular interest, as it may be key in the modulation of multisensory integration.
Affiliation(s)
- Helen E. Nuttall
- Department of Psychology, Lancaster University, Bailrigg LA1 4YF, UK
12. Xu Y, Vignali L, Sigismondi F, Crepaldi D, Bottini R, Collignon O. Similar object shape representation encoded in the inferolateral occipitotemporal cortex of sighted and early blind people. PLoS Biol 2023;21:e3001930. PMID: 37490508; PMCID: PMC10368275; DOI: 10.1371/journal.pbio.3001930.
Abstract
We can sense an object's shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations, as it responds more to seeing or touching objects than shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could be due to the conceptual representation of an object or visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind (who lack visual experience/imagery) and sighted participants. We found that bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task made on the names of the same manmade objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Besides the ILOTC, we also found shape representation in both groups' bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit related to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups' left perisylvian brain network related to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results conclusively show that the ILOTC selectively implements shape representation independently of visual experience, and this unique functionality likely comes from its privileged connection to the frontoparietal haptic circuit.
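The claim that distributed activity "encodes" shape similarity but not conceptual association is the kind of result typically obtained with representational similarity analysis (RSA). Below is a minimal sketch of a generic RSA of this sort; all matrices are random placeholders rather than the study's data, and the pipeline shown is the textbook version, not necessarily the authors' exact analysis:

```python
# Sketch: generic representational similarity analysis (RSA).
# Compare a region's neural representational dissimilarity matrix (RDM)
# against model RDMs for shape similarity and conceptual association.
# All matrices below are random placeholders, not data from the study.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n_objects, n_voxels = 20, 100

patterns = rng.normal(size=(n_objects, n_voxels))    # activity patterns (placeholder)
neural_rdm = pdist(patterns, metric="correlation")   # condensed neural RDM

shape_rdm = pdist(rng.normal(size=(n_objects, 5)))   # placeholder shape model
concept_rdm = pdist(rng.normal(size=(n_objects, 5))) # placeholder conceptual model

# A region "encodes" shape if its RDM rank-correlates with the shape model
# (and, for the dissociation reported above, not with the conceptual model).
for name, model in [("shape", shape_rdm), ("concept", concept_rdm)]:
    rho, p = spearmanr(neural_rdm, model)
    print(f"{name}: rho = {rho:+.3f}, p = {p:.3f}")
```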
Affiliation(s)
- Yangwen Xu
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Lorenzo Vignali
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Crepaldi
- International School for Advanced Studies (SISSA), Trieste, Italy
- Roberto Bottini
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Psychological Sciences Research Institute (IPSY) and Institute of NeuroScience (IoNS), University of Louvain, Louvain-la-Neuve, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
13. Niimi R, Saiki T, Yokosawa K. Auditory scene context facilitates visual recognition of objects in consistent visual scenes. Atten Percept Psychophys 2023;85:1267-1275. PMID: 36977906; DOI: 10.3758/s13414-023-02699-0.
Abstract
Visual object recognition is facilitated by contextually consistent scenes in which the object is embedded. Scene gist representations extracted from the scenery backgrounds yield this scene consistency effect. Here we examined whether the scene consistency effect is specific to the visual domain or if it is crossmodal. Through four experiments, the accuracy of the naming of briefly presented visual objects was assessed. In each trial, a 4-s sound clip was presented and a visual scene containing the target object was briefly shown at the end of the sound clip. In a consistent sound condition, an environmental sound associated with the scene in which the target object typically appears was presented (e.g., forest noise for a bear target object). In an inconsistent sound condition, a sound clip contextually inconsistent with the target object was presented (e.g., city noise for a bear). In a control sound condition, a nonsensical sound (sawtooth wave) was presented. When target objects were embedded in contextually consistent visual scenes (Experiment 1: a bear in a forest background), consistent sounds increased object-naming accuracy. In contrast, sound conditions did not show a significant effect when target objects were embedded in contextually inconsistent visual scenes (Experiment 2: a bear in a pedestrian crossing background) or in a blank background (Experiments 3 and 4). These results suggested that auditory scene context has weak or no direct influence on visual object recognition. It seems likely that consistent auditory scenes indirectly facilitate visual object recognition by promoting visual scene processing.
14. Long-term memory representations for audio-visual scenes. Mem Cognit 2023;51:349-370. PMID: 36100821; PMCID: PMC9950240; DOI: 10.3758/s13421-022-01355-6.
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
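The independent-retrieval-cues benchmark tested above can be made concrete with a simple probability-summation model. A minimal sketch under that assumption follows; the function name and all hit rates are illustrative placeholders, not the study's actual model or data:

```python
# Sketch: predicted audio-visual recognition under cue independence.
# If auditory and visual retrieval cues fail independently, the scene is
# recognized whenever at least one cue succeeds:
#   P(AV) = 1 - (1 - P(A)) * (1 - P(V))
# Hit rates below are illustrative placeholders, not data from the study.

def independent_av_prediction(p_auditory: float, p_visual: float) -> float:
    """Recognition rate expected if the two cues act independently."""
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)

p_a, p_v = 0.60, 0.70          # unimodal recognition rates (placeholders)
p_av_observed = 0.85           # observed audio-visual rate (placeholder)

p_av_independent = independent_av_prediction(p_a, p_v)  # -> 0.88
print(f"independence prediction: {p_av_independent:.2f}, "
      f"observed: {p_av_observed:.2f}")
```

Observed performance at or below this benchmark requires no integrated audio-visual memory trace; only a reliable surplus above it would suggest one.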
15. Sveistrup MA, Langlois J, Wilson TD. Do our hands see what our eyes see? Investigating spatial and haptic abilities. Anat Sci Educ 2022. PMID: 36565014; DOI: 10.1002/ase.2247.
Abstract
Spatial abilities (SAs) are cognitive resources used to mentally manipulate representations of objects to solve problems. Haptic abilities (HAs) represent tactile interactions with real-world objects that transform somatic information into mental representations. Both are proposed to be factors in anatomy education, yet the relationship between SAs and HAs remains unknown. The objective of the current study was to explore SA-HA interactions. A haptic ability test (HAT) was developed based on the mental rotations test (MRT) with three-dimensional (3D) objects. The HAT was undertaken in three sensory conditions: (1) sighted, (2) sighted with haptics, and (3) haptics. Participants (n = 22; 13 females, 9 males) completed the MRT and were categorized into high spatial abilities (HSAs) (n = 12, mean ± standard deviation: 13.7 ± 3.0) and low spatial abilities (LSAs) (n = 10, 5.6 ± 2.0) based on score distributions about the overall mean. Each SA group's HAT scores were compared across the three sensory conditions. Spearman's correlation coefficients between MRT and HAT scores indicated a statistically significant correlation in the sighted condition (r = 0.553, p = 0.015) but no significant correlation in the sighted with haptics (r = 0.078, p = 0.212) or haptics (r = 0.043, p = 0.279) conditions. These data suggest that HAs are unrelated to SAs. With haptic exploration, LSA HAT scores were compensated, with HSA and LSA scores comparable in the sighted with haptics [median (lower and upper quartiles): 12 (12, 13) vs. 12 (11, 13), p = 0.254] and haptics [12 (11, 13) vs. 12 (10, 12), p = 0.381] conditions. Migrations to online anatomy teaching may unwittingly remove important sensory modalities from the learner. Understanding learner behaviors and performance when haptic inputs are removed from the learning environment represents valuable insight informing future anatomy curricula and resource development.
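For readers wanting to reproduce this kind of per-condition rank-correlation analysis, here is a minimal sketch using SciPy; all scores are fabricated placeholders, not the study's data:

```python
# Sketch: Spearman rank correlation between spatial-ability (MRT) and
# haptic-ability (HAT) scores, computed separately per sensory condition.
# Scores below are fabricated placeholders for illustration only.
from scipy.stats import spearmanr

mrt = [14, 6, 12, 5, 16, 8, 11, 4, 13, 7]          # MRT scores (placeholder)
hat = {
    "sighted":              [12, 9, 11, 8, 13, 10, 11, 7, 12, 9],
    "sighted_with_haptics": [12, 11, 13, 12, 12, 11, 13, 12, 12, 11],
    "haptics":              [12, 12, 11, 10, 13, 12, 11, 12, 12, 10],
}

for condition, scores in hat.items():
    r, p = spearmanr(mrt, scores)   # Spearman's rho and its p-value
    print(f"{condition:>20}: r = {r:+.3f}, p = {p:.3f}")
```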
Affiliation(s)
- Michelle A Sveistrup
- The Corps for Research of Instructional and Perceptual Technologies (CRIPT) Laboratory, Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Jean Langlois
- Department of Emergency Medicine, CIUSSS de l'Estrie-Centre hospitalier universitaire de Sherbrooke, Sherbrooke, Quebec, Canada
- Timothy D Wilson
- The Corps for Research of Instructional and Perceptual Technologies (CRIPT) Laboratory, Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
16. Gori M, Bertonati G, Mazzoni E, Freddi E, Amadeo MB. The impact of COVID-19 on the everyday life of blind and sighted individuals. Front Psychol 2022;13:897098. PMID: 36389583; PMCID: PMC9650307; DOI: 10.3389/fpsyg.2022.897098.
Abstract
The COVID-19 pandemic caused unexpected and unavoidable changes in daily life worldwide. Governments and communities found ways to mitigate the impact of these changes, but many solutions were inaccessible to people with visual impairments. This work aimed to investigate how blind individuals subjectively experienced the restrictions and isolation caused by the COVID-19 pandemic. To this end, a group of twenty-seven blind and seventeen sighted people took part in a survey addressing how COVID-19 impacted life practically and psychologically, how it affected their daily habits, and how it changed their experiences of themselves and others. Results demonstrated that both sighted and blind individuals had a hard time adapting to the new situation. However, while sighted people struggled more with personal and social aspects, the frustration of the blind population derived mostly from practical and logistical issues. Likely as a consequence, results showed that blind people engaged more in their inner life and experienced fear and anger as their main emotions. This study suggests that changes in life associated with COVID-19 were subjectively experienced differently depending on the presence or absence of blindness, and that tailored future interventions should be considered to address the different needs of blind individuals.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Italian Institute of Technology, Genova, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People, Italian Institute of Technology, Genova, Italy
- DIBRIS, Università degli studi di Genova, Genova, Italy
- Emanuela Mazzoni
- Unit for Visually Impaired People, Italian Institute of Technology, Genova, Italy
- PREPOS Studio Associato, Lucca, Italy
- Elisa Freddi
- Unit for Visually Impaired People, Italian Institute of Technology, Genova, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People, Italian Institute of Technology, Genova, Italy
17. Leo F, Gori M, Sciutti A. Early blindness modulates haptic object recognition. Front Hum Neurosci 2022;16:941593. PMID: 36158621; PMCID: PMC9498977; DOI: 10.3389/fnhum.2022.941593.
Abstract
Haptic object recognition is usually an efficient process, although slower and less accurate than its visual counterpart. The early loss of vision imposes a greater reliance on haptic perception for recognition compared to the sighted. Therefore, we might expect congenitally blind persons to recognize objects through touch more quickly and accurately than late blind or sighted people. However, the literature has provided mixed results. Furthermore, most studies on haptic object recognition have focused on performance, devoting little attention to the exploration procedures that led to that performance. In this study, we used iCube, an instrumented cube that records its orientation in space as well as the location of the points of contact on its faces. Three groups of congenitally blind, late blind, and age- and gender-matched blindfolded sighted participants were asked to explore the cube faces, on which small pins were positioned in varying numbers. Participants were required to explore the cube twice, reporting whether the cube was the same or differed in pin disposition. Results showed that recognition accuracy was not modulated by the level of visual ability. However, congenitally blind participants touched more cells simultaneously while exploring the faces and changed the pattern of touched cells from one recording sample to the next more than late blind and sighted participants did. Furthermore, the number of simultaneously touched cells negatively correlated with exploration duration. These findings indicate that early blindness shapes the haptic exploration of objects that can be held in the hands.
Affiliation(s)
- Fabrizio Leo
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
18. Campbell EE, Bergelson E. Making sense of sensory language: Acquisition of sensory knowledge by individuals with congenital sensory impairments. Neuropsychologia 2022;174:108320. PMID: 35842021; DOI: 10.1016/j.neuropsychologia.2022.108320.
Abstract
The present article provides a narrative review on how language communicates sensory information and how knowledge of sight and sound develops in individuals born deaf or blind. Studying knowledge of the perceptually inaccessible sensory domain for these populations offers a lens into how humans learn about that which they cannot perceive. We first review the linguistic strategies within language that communicate sensory information. Highlighting the power of language to shape knowledge, we next review the detailed knowledge of sensory information by individuals with congenital sensory impairments, limitations therein, and neural representations of imperceptible phenomena. We suggest that the acquisition of sensory knowledge is supported by language, experience with multiple perceptual domains, and cognitive and social abilities which mature over the first years of life, both in individuals with and without sensory impairment. We conclude by proposing a developmental trajectory for acquiring sensory knowledge in the absence of sensory perception.
Affiliation(s)
- Erin E Campbell
- Duke University, Department of Psychology and Neuroscience, USA.
- Elika Bergelson
- Duke University, Department of Psychology and Neuroscience, USA
19. Liu Q, Ulloa A, Horwitz B. The Spatiotemporal Neural Dynamics of Intersensory Attention Capture of Salient Stimuli: A Large-Scale Auditory-Visual Modeling Study. Front Comput Neurosci 2022;16:876652. PMID: 35645750; PMCID: PMC9133449; DOI: 10.3389/fncom.2022.876652.
Abstract
The spatiotemporal dynamics of the neural mechanisms underlying endogenous (top-down) and exogenous (bottom-up) attention, and how attention is controlled or allocated in intersensory perception, are not fully understood. We investigated these issues using a biologically realistic large-scale neural network model of visual-auditory object processing in short-term memory. We modeled and incorporated into our visual-auditory object-processing model the temporally changing neuronal mechanisms for the control of endogenous and exogenous attention. The model successfully performed various bimodal working memory tasks and produced simulated behavioral and neural results that are consistent with experimental findings. Simulated fMRI data were generated that constitute predictions that human experiments could test. Furthermore, in our visual-auditory bimodality simulations, we found that increased working memory load in one modality reduced distraction from the other modality, and a possible network mediating this effect is proposed based on our model.
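To give a flavor of the model class, here is a minimal sketch of the sigmoidal firing-rate units from which biologically realistic large-scale models of this kind are commonly assembled; the weights, parameters, and additive attention bias are illustrative assumptions, not the authors' published model:

```python
# Sketch: one second of a small network of sigmoidal firing-rate units,
# the building block commonly used in large-scale neural models of this
# kind. Weights, gains, and the attention input are illustrative only.
import numpy as np

def sigmoid(x, gain=4.0, threshold=0.5):
    """Saturating activation typical of biologically inspired rate units."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

rng = np.random.default_rng(0)
n = 8                                   # units in this toy "region"
W = rng.normal(0.0, 0.3, size=(n, n))   # recurrent weights (placeholder)
rate = np.zeros(n)                      # current firing rates
dt, tau = 0.005, 0.05                   # time step and time constant (s)

def step(rate, sensory_input, attention):
    """Euler update: rates relax toward the sigmoid of total input.
    Top-down attention is modeled as an additive bias, one common choice."""
    total = W @ rate + sensory_input + attention
    return rate + (dt / tau) * (-rate + sigmoid(total))

for _ in range(200):                    # 1 s of simulated time
    rate = step(rate, sensory_input=0.6, attention=0.2)
print(rate.round(3))
```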
Affiliation(s)
- Qin Liu
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Department of Physics, University of Maryland, College Park, College Park, MD, United States
- Antonio Ulloa
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Center for Information Technology, National Institutes of Health, Bethesda, MD, United States
- Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
20. Whether attentional loads influence audiovisual integration depends on semantic associations. Atten Percept Psychophys 2022;84:2205-2218. PMID: 35304700; DOI: 10.3758/s13414-022-02461-y.
Abstract
Neuronal studies have shown that selectively attending to a common object in one sensory modality results in facilitated processing of that object's representations in the ignored sensory modality. Thus, the audiovisual (AV) integration of common objects can be observed under modality-specific selective attention. However, little is known about whether this AV integration can also occur under increased attentional load conditions. Additionally, whether semantic associations between multisensory features of common objects modulate the influence of increased attentional loads on this cross-modal integration remains unknown. In the present study, participants completed an AV integration task (ignored auditory stimuli) under various attentional load conditions: no load, low load, and high load. The semantic associations between AV stimuli were composed of animal pictures presented concurrently with semantically congruent, semantically incongruent, or semantically unrelated auditory stimuli. Our results demonstrated that attentional loads did not disrupt the integration of semantically congruent AV stimuli but suppressed the potential alertness effects induced by incongruent or unrelated auditory stimuli under the condition of modality-specific selective attention. These findings highlight the critical role of semantic association between AV stimuli in modulating the effect of attentional loads on the AV integration of modality-specific selective attention.
21. Matsuzaki J, Kagitani-Shimono K, Aoki S, Hanaie R, Kato Y, Nakanishi M, Tatsumi A, Tominaga K, Yamamoto T, Nagai Y, Mohri I, Taniike M. Abnormal cortical responses elicited by audiovisual movies in patients with autism spectrum disorder with atypical sensory behavior: A magnetoencephalographic study. Brain Dev 2022;44:81-94. PMID: 34563417; DOI: 10.1016/j.braindev.2021.08.007.
Abstract
BACKGROUND: Atypical sensory behavior disrupts behavioral adaptation in children with autism spectrum disorder (ASD); however, the neural correlates of sensory dysfunction measured using magnetoencephalography (MEG) remain unclear.
METHODS: We used MEG to measure the cortical activation elicited by visual (unisensory) and audiovisual (multisensory) movies in 46 children (7-14 years) included in the final analysis: 13 boys with atypical audiovisual behavior in ASD (AAV+), 10 boys with ASD without this condition, and 23 age-matched, typically developing boys.
RESULTS: The AAV+ group demonstrated increased cortical activation in the bilateral insula in response to unisensory movies, and in the left occipital, right superior temporal sulcus (rSTS), and temporal regions in response to multisensory movies. These increased responses were correlated with the severity of the sensory impairment. Increased theta-low gamma oscillations were observed in the rSTS in the AAV+ group.
CONCLUSION: The findings suggest that atypical audiovisual behavior in ASD is attributable to atypical neural networks centered on the rSTS.
Affiliation(s)
- Junko Matsuzaki
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
- Kuriko Kagitani-Shimono
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan.
- Sho Aoki
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan
- Ryuzo Hanaie
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
- Yoko Kato
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan
- Mariko Nakanishi
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
- Aika Tatsumi
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
- Koji Tominaga
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
- Tomoka Yamamoto
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
- Yukie Nagai
- International Research Center for Neurointelligence, The University of Tokyo, Tokyo, Japan
- Ikuko Mohri
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
- Masako Taniike
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
22
Pattamadilok C, Sato M. How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing. BRAIN AND LANGUAGE 2022; 225:105058. [PMID: 34929531 DOI: 10.1016/j.bandl.2021.105058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 10/31/2021] [Accepted: 12/08/2021] [Indexed: 06/14/2023]
Abstract
Both visual articulatory gestures and orthography provide information about the phonological content of speech. This EEG study investigated the integration between speech and these two visual inputs. A comparison of skilled readers' brain responses elicited by a spoken word presented alone versus synchronously with a static image of a viseme or a grapheme of the spoken word's onset showed that, while neither visual input induced audiovisual integration on the N1 acoustic component, both led to a supra-additive integration on the P2 component, with a stronger integration between speech and graphemes at left-anterior electrodes. This pattern persisted in the P350 time window and generalized to all electrodes. The finding suggests a strong impact of spelling knowledge on phonetic processing and lexical access. It also indirectly indicates that the dynamic and predictive value present in natural lip movements, but not in static visemes, is particularly critical to the contribution of visual articulatory gestures to speech processing.
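One common formalization of the supra-additivity criterion invoked above is to test whether the bimodal response exceeds the sum of the unimodal responses, AV > A + V. The sketch below illustrates that test on hypothetical per-subject P2 amplitudes; the values, variable names, and the unimodal visual condition are illustrative assumptions, not the authors' data or analysis code.

```python
# Supra-additivity sketch: test AV - (A + V) > 0 on simulated ERP amplitudes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 20

amp_audio = rng.normal(2.0, 0.5, n_subjects)   # spoken word alone (uV)
amp_visual = rng.normal(1.0, 0.5, n_subjects)  # static viseme/grapheme alone (uV)
amp_av = rng.normal(3.5, 0.5, n_subjects)      # synchronous audiovisual (uV)

supra = amp_av - (amp_audio + amp_visual)      # positive = supra-additive
t, p = stats.ttest_1samp(supra, 0.0)
print(f"mean AV - (A + V) = {supra.mean():.2f} uV, t = {t:.2f}, p = {p:.4f}")
```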
Affiliation(s)
| | - Marc Sato
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
| |
23
Togoli I, Arrighi R. Evidence for an A-Modal Number Sense: Numerosity Adaptation Generalizes Across Visual, Auditory, and Tactile Stimuli. Front Hum Neurosci 2021; 15:713565. [PMID: 34456699 PMCID: PMC8385665 DOI: 10.3389/fnhum.2021.713565] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Accepted: 07/16/2021] [Indexed: 11/23/2022] Open
Abstract
Humans and other species share a perceptual mechanism dedicated to the representation of approximate quantities that allows them to rapidly and reliably estimate the numerosity of a set of objects: an Approximate Number System (ANS). Numerosity perception shows a characteristic shared by all primary visual features: it is susceptible to adaptation. As a consequence of prolonged exposure to a large/small quantity (“adaptor”), the apparent numerosity of a subsequent (“test”) stimulus is distorted, yielding a robust under- or over-estimation, respectively. Although numerosity adaptation has been reported across several sensory modalities (vision, audition, and touch), suggesting a central, a-modal numerosity processing system, evidence for cross-modal effects is limited to vision and audition, two modalities known to preferentially encode sensory stimuli in an external coordinate system. Here we test whether numerosity adaptation for visual and auditory stimuli also distorts the perceived numerosity of tactile stimuli (and vice versa), despite touch being a modality primarily coded in an internal (body-centered) reference frame. We measured numerosity discrimination of stimuli presented sequentially after adaptation to series of either few (around 2 Hz; low adaptation) or numerous (around 8 Hz; high adaptation) impulses for all possible combinations of visual, auditory, or tactile adapting and test stimuli. In all cases, adapting to few impulses yielded a significant overestimation of the test numerosity, with the opposite occurring after adaptation to numerous stimuli. The overall magnitude of adaptation was robust (around 30%) and rather similar across all sensory modality combinations. Overall, these findings support the idea of a truly generalized, a-modal mechanism for numerosity representation that processes numerical information independently of the sensory modality of the incoming signals.
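As a back-of-the-envelope illustration, an adaptation magnitude of around 30% can be expressed as a relative shift of a point of subjective equality (PSE). The PSE values below are hypothetical, and the sign convention is only one of several in use; this is not the authors' quantification.

```python
# Relative PSE shift as a percentage of baseline; numbers are hypothetical.
baseline_pse = 20.0     # matching numerosity with no adaptor
low_adapt_pse = 14.0    # after adapting to ~2 Hz (few impulses)
high_adapt_pse = 26.0   # after adapting to ~8 Hz (numerous impulses)

def adaptation_magnitude(pse_adapted: float, pse_baseline: float) -> float:
    """Relative shift of the PSE induced by adaptation, in percent."""
    return 100.0 * (pse_adapted - pse_baseline) / pse_baseline

print(adaptation_magnitude(low_adapt_pse, baseline_pse))    # -30.0
print(adaptation_magnitude(high_adapt_pse, baseline_pse))   # +30.0
```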
Affiliation(s)
- Irene Togoli
- International School for Advanced Studies (SISSA), Trieste, Italy
| | - Roberto Arrighi
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
| |
24
Caffarra S, Lizarazu M, Molinaro N, Carreiras M. Reading-Related Brain Changes in Audiovisual Processing: Cross-Sectional and Longitudinal MEG Evidence. J Neurosci 2021; 41:5867-5875. [PMID: 34088796 PMCID: PMC8265799 DOI: 10.1523/jneurosci.3021-20.2021] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 05/10/2021] [Accepted: 05/16/2021] [Indexed: 02/01/2023] Open
Abstract
The ability to establish associations between visual objects and speech sounds is essential for human reading. Understanding the neural adjustments required for acquisition of these arbitrary audiovisual associations can shed light on fundamental reading mechanisms and help reveal how literacy builds on pre-existing brain circuits. To address these questions, the present longitudinal and cross-sectional MEG studies characterize the temporal and spatial neural correlates of audiovisual syllable congruency in children (age range, 4-9 years; 22 males and 20 females) learning to read. Both studies showed that during the first years of reading instruction children gradually set up audiovisual correspondences between letters and speech sounds, which can be detected within the first 400 ms of a bimodal presentation and recruit the superior portions of the left temporal cortex. These findings suggest that children progressively change the way they treat audiovisual syllables as a function of their reading experience. This reading-specific brain plasticity implies (partial) recruitment of pre-existing brain circuits for audiovisual analysis.

SIGNIFICANCE STATEMENT: Linking visual and auditory linguistic representations is the basis for the development of efficient reading, while dysfunctional audiovisual letter processing predicts future reading disorders. Our developmental MEG project included a longitudinal and a cross-sectional study; both studies showed that children's audiovisual brain circuits progressively change as a function of reading experience. They also revealed an exceptional degree of neuroplasticity in audiovisual neural networks, showing that as children develop literacy, the brain progressively adapts so as to better detect new correspondences between letters and speech sounds.
Affiliation(s)
- Sendy Caffarra
- Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, Stanford, California 94305-5101
- Stanford University Graduate School of Education, Stanford, California 94305
- Basque Center on Cognition, Brain and Language, 20009 San Sebastian, Spain
| | - Mikel Lizarazu
- Basque Center on Cognition, Brain and Language, 20009 San Sebastian, Spain
| | - Nicola Molinaro
- Basque Center on Cognition, Brain and Language, 20009 San Sebastian, Spain
- Ikerbasque Basque Foundation for Science, 48009 Bilbao, Spain
| | - Manuel Carreiras
- Basque Center on Cognition, Brain and Language, 20009 San Sebastian, Spain
- Ikerbasque Basque Foundation for Science, 48009 Bilbao, Spain
- University of the Basque Country (UPV/EHU), 48940 Bilbao, Spain
| |
25
Steines M, Nagels A, Kircher T, Straube B. The role of the left and right inferior frontal gyrus in processing metaphoric and unrelated co-speech gestures. Neuroimage 2021; 237:118182. [PMID: 34020020 DOI: 10.1016/j.neuroimage.2021.118182] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 05/11/2021] [Accepted: 05/16/2021] [Indexed: 11/30/2022] Open
Abstract
Gestures are an integral part of in-person conversations and complement the meaning of the speech they accompany. The neural processing of co-speech gestures is supported by a mostly left-lateralized network of fronto-temporal regions. However, in contrast to iconic gestures, metaphoric and unrelated gestures have been found to more strongly engage the left and right inferior frontal gyrus (IFG), respectively. In this study, we conducted the first systematic comparison of all three gesture types and the resulting potential laterality effects. During collection of functional imaging data, 74 subjects were presented with 5 s videos of abstract speech with related metaphoric gestures, concrete speech with related iconic gestures, and concrete speech with unrelated gestures. They were asked to judge whether the content of the speech and gesture matched. Differential contrasts revealed that both abstract-related and concrete-unrelated stimuli, compared to concrete-related stimuli, elicited stronger activation of the bilateral IFG. Analyses of lateralization indices for IFG activation further showed a left-hemispheric dominance for metaphoric gestures and a right-hemispheric dominance for unrelated gestures. Our results support the hypothesis that the bilateral IFG is activated specifically when the processing load for speech-gesture combinations is high. In addition, the laterality effects indicate a stronger involvement of the right IFG in mismatch detection and conflict processing, whereas the left IFG performs the actual integration of information from speech and gesture.
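Lateralization indices of the kind mentioned above are conventionally computed as LI = (L - R) / (L + R) over activation summed within homologous regions. A minimal sketch with hypothetical IFG values follows; it is not the authors' pipeline, which typically also involves thresholding or bootstrapping steps.

```python
# Lateralization index over hypothetical summed left/right IFG activations.
def lateralization_index(left: float, right: float) -> float:
    """LI = (L - R) / (L + R); +1 = fully left-lateralized, -1 = fully right."""
    return (left - right) / (left + right)

print(lateralization_index(120.0, 80.0))   # 0.2   -> left dominance (metaphoric)
print(lateralization_index(70.0, 110.0))   # -0.22 -> right dominance (unrelated)
```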
Affiliation(s)
- Miriam Steines
- Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Rudolf-Bultmann-Straße 8, Marburg 35039, Germany; Center for Mind, Brain and Behavior - CMBB, Hans-Meerwein-Straße 6, Marburg 35032, Germany.
| | - Arne Nagels
- Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Rudolf-Bultmann-Straße 8, Marburg 35039, Germany
| | - Tilo Kircher
- Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Rudolf-Bultmann-Straße 8, Marburg 35039, Germany; Center for Mind, Brain and Behavior - CMBB, Hans-Meerwein-Straße 6, Marburg 35032, Germany
| | - Benjamin Straube
- Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Rudolf-Bultmann-Straße 8, Marburg 35039, Germany; Center for Mind, Brain and Behavior - CMBB, Hans-Meerwein-Straße 6, Marburg 35032, Germany
| |
26
Porada DK, Regenbogen C, Freiherr J, Seubert J, Lundström JN. Trimodal processing of complex stimuli in inferior parietal cortex is modality-independent. Cortex 2021; 139:198-210. [PMID: 33878687 DOI: 10.1016/j.cortex.2021.03.008] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 11/29/2020] [Accepted: 03/09/2021] [Indexed: 11/26/2022]
Abstract
In humans, multisensory mechanisms facilitate object processing through the integration of sensory signals that match in their temporal and spatial occurrence as well as their meaning. However, the generalizability of such integration processes across different sensory modalities is not yet well understood. As such, it remains unknown whether there are cerebral areas that process object-related signals independently of the specific senses from which they arise, and whether these areas show different response profiles depending on the number of sensory channels that carry information. To address these questions, we presented participants in the MR scanner with dynamic stimuli that simultaneously emitted object-related sensory information via one, two, or three channels (sight, sound, smell). By comparing neural activation patterns between various integration processes differing in the type and number of stimulated senses, we showed that the left inferior frontal gyrus and areas within the left inferior parietal cortex were engaged independently of the number and type of sensory input streams. Activation in these areas was enhanced during bimodal stimulation, compared to the sum of unimodal activations, and increased even further during trimodal stimulation. Taken together, our findings demonstrate that activation of the inferior parietal cortex during the processing and integration of meaningful multisensory stimuli is both modality-independent and modulated by the number of available sensory modalities. This suggests that the processing demand placed on the parietal cortex increases with the number of sensory input streams carrying meaningful information, likely due to the increasing complexity of such stimuli.
Affiliation(s)
- Danja K Porada
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - Christina Regenbogen
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA Institute Brain Structure Function Relationship, RWTH Aachen University, Aachen, Germany
| | - Jessica Freiherr
- Department of Psychiatry and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Janina Seubert
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Johan N Lundström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Monell Chemical Senses Center, Philadelphia, USA; Department of Psychology, University of Pennsylvania, Philadelphia, USA; Stockholm University Brain Imaging Centre, Stockholm University, Stockholm, Sweden.
| |
27
Jao Keehn RJ, Pueschel EB, Gao Y, Jahedi A, Alemu K, Carper R, Fishman I, Müller RA. Underconnectivity Between Visual and Salience Networks and Links With Sensory Abnormalities in Autism Spectrum Disorders. J Am Acad Child Adolesc Psychiatry 2021; 60:274-285. [PMID: 32126259 PMCID: PMC7483217 DOI: 10.1016/j.jaac.2020.02.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Revised: 12/19/2019] [Accepted: 02/25/2020] [Indexed: 11/25/2022]
Abstract
OBJECTIVE: The anterior insular cortex (AI), part of the salience network, is critically involved in visual awareness, multisensory perception, and social and emotional processing, among other functions. In children and adolescents with autism spectrum disorders (ASDs), evidence suggests aberrant functional connectivity (FC) of the AI compared with typically developing peers. While recent studies have primarily focused on the functional connections between salience and social networks, much less is known about connectivity between the AI and primary sensory regions, including visual areas, and how these patterns may be linked to autism symptomatology. METHOD: The current investigation used functional magnetic resonance imaging to examine resting-state FC patterns of the salience and visual networks in children and adolescents with ASDs compared with typically developing controls, and to relate them to behavioral measures. RESULTS: Functional underconnectivity was found in the ASD group between the left AI and bilateral visual cortices. Moreover, in an ASD subgroup with more atypical visual sensory profiles, FC was positively correlated with abnormal social motivational responsivity. CONCLUSION: Findings of reduced FC between salience and visual networks in ASDs potentially indicate deficient selection of salient information. Moreover, in children and adolescents with ASDs who show strongly atypical visual sensory profiles, connectivity at seemingly more neurotypical levels may be paradoxically associated with greater impairment of social motivation.
Affiliation(s)
- R Joanne Jao Keehn
- Brain Development Imaging Laboratories, San Diego State University, California.
| | - Ellyn B Pueschel
- Brain Development Imaging Laboratories, San Diego State University, California
| | - Yangfeifei Gao
- Brain Development Imaging Laboratories, San Diego State University, California; San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology, California
| | - Afrooz Jahedi
- Brain Development Imaging Laboratories, San Diego State University, California; San Diego State University/Claremont Graduate University Joint Doctoral Program in Computational Statistics, California
| | - Kalekirstos Alemu
- Brain Development Imaging Laboratories, San Diego State University, California
| | - Ruth Carper
- Brain Development Imaging Laboratories, San Diego State University, California; San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology, California
| | - Inna Fishman
- Brain Development Imaging Laboratories, San Diego State University, California; San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology, California
| | - Ralph-Axel Müller
- Brain Development Imaging Laboratories, San Diego State University, California; San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology, California
| |
28
Kherif F, Muller S. Neuro-Clinical Signatures of Language Impairments: A Theoretical Framework for Function-to-structure Mapping in Clinics. Curr Top Med Chem 2021; 20:800-811. [PMID: 32116193 DOI: 10.2174/1568026620666200302111130] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 11/10/2019] [Accepted: 01/12/2020] [Indexed: 12/26/2022]
Abstract
In the past decades, neuroscientists and clinicians have collected a considerable amount of data and drastically increased our knowledge about the mapping of language in the brain. The picture emerging from this accumulated knowledge is that there are complex and combinatorial relationships between language functions and anatomical brain regions. Understanding the underlying principles of this complex mapping is of paramount importance for identifying the brain signature of language and the Neuro-Clinical signatures that explain language impairments and predict language recovery after stroke. We review recent attempts to address this question of language-brain mapping. We introduce the different concepts of mapping (from diffeomorphic one-to-one mapping to many-to-many mapping). We build on those different forms of mapping to derive a theoretical framework in which the current principles of brain architecture, including redundancy, degeneracy, pluri-potentiality, and bow-tie networks, are described.
Affiliation(s)
- Ferath Kherif
- Laboratory for Research in Neuroimaging, Department of Clinical Neuroscience, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Sandrine Muller
- Laboratory for Research in Neuroimaging, Department of Clinical Neuroscience, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| |
29
Rizzo JR, Beheshti M, Fang Y, Flanagan S, Giudice NA. COVID-19 and Visual Disability: Can't Look and Now Don't Touch. PM R 2020; 13:415-421. [PMID: 33354903 DOI: 10.1002/pmrj.12541] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2020] [Revised: 11/23/2020] [Accepted: 12/04/2020] [Indexed: 01/13/2023]
Affiliation(s)
- John-Ross Rizzo
- Department of Rehabilitation Medicine, NYU Langone Health, New York, NY.,Department of Neurology, NYU Langone Health, New York, NY.,Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, NY.,Department of Mechanical & Aerospace Eng., NYU Tandon School of Engineering, New York, NY
| | - Mahya Beheshti
- Department of Rehabilitation Medicine, NYU Langone Health, New York, NY.,Department of Mechanical & Aerospace Eng., NYU Tandon School of Engineering, New York, NY
| | - Yi Fang
- Department of Electrical and Computer Eng, NYU Tandon School of Engineering, New York, NY
| | - Steven Flanagan
- Department of Rehabilitation Medicine, NYU Langone Health, New York, NY
| | - Nicholas A Giudice
- Virtual Environments and Multimodal Interaction (VEMI) Lab, The University of Maine, Orono, ME.,School of Computing and Information Science, The University of Maine, Orono, ME.,Department of Psychology, The University of Maine, Orono, ME
| |
30
Gallero-Salas Y, Han S, Sych Y, Voigt FF, Laurenczy B, Gilad A, Helmchen F. Sensory and Behavioral Components of Neocortical Signal Flow in Discrimination Tasks with Short-Term Memory. Neuron 2020; 109:135-148.e6. [PMID: 33159842 DOI: 10.1016/j.neuron.2020.10.017] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 09/13/2020] [Accepted: 10/12/2020] [Indexed: 12/30/2022]
Abstract
In the neocortex, each sensory modality engages distinct sensory areas that route information to association areas. Where signal flow converges for maintaining information in short-term memory and how behavior may influence signal routing remain open questions. Using wide-field calcium imaging, we compared cortex-wide neuronal activity in layer 2/3 for mice trained in auditory and tactile tasks with delayed response. In both tasks, mice were either active or passive during stimulus presentation, moving their body or sitting quietly. Irrespective of behavioral strategy, auditory and tactile stimulation activated distinct subdivisions of the posterior parietal cortex, anterior area A and rostrolateral area RL, which held stimulus-related information necessary for the respective tasks. In the delay period, in contrast, behavioral strategy rather than sensory modality determined short-term memory location, with activity converging frontomedially in active trials and posterolaterally in passive trials. Our results suggest behavior-dependent routing of sensory-driven cortical signal flow from modality-specific posterior parietal cortex (PPC) subdivisions to higher association areas.
Affiliation(s)
- Yasir Gallero-Salas
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
| | - Shuting Han
- Brain Research Institute, University of Zurich, Zurich, Switzerland
| | - Yaroslav Sych
- Brain Research Institute, University of Zurich, Zurich, Switzerland
| | - Fabian F Voigt
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
| | - Balazs Laurenczy
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
| | - Ariel Gilad
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Department of Medical Neurobiology, Institute for Medical Research Israel Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel.
| | - Fritjof Helmchen
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland.
| |
31
Li Q. Semantic Congruency Modulates the Effect of Attentional Load on the Audiovisual Integration of Animate Images and Sounds. Iperception 2020; 11:2041669520981096. [PMID: 33456746 PMCID: PMC7783684 DOI: 10.1177/2041669520981096] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2020] [Accepted: 11/19/2020] [Indexed: 12/04/2022] Open
Abstract
Attentional processes play a complex and multifaceted role in the integration of input from different sensory modalities. However, whether increased attentional load disrupts the audiovisual (AV) integration of common objects that involve semantic content remains unclear. Furthermore, knowledge regarding how semantic congruency interacts with attentional load to influence the AV integration of common objects is limited. We investigated these questions by examining AV integration under various attentional-load conditions. AV integration was assessed by adopting an animal identification task using unisensory (animal images and sounds) and AV stimuli (semantically congruent AV objects and semantically incongruent AV objects), while attentional load was manipulated by using a rapid serial visual presentation task. Our results indicate that attentional load did not attenuate the integration of semantically congruent AV objects. However, semantically incongruent animal sounds and images were not integrated (as there was no multisensory facilitation), and the interference effect produced by the semantically incongruent AV objects was reduced by increased attentional-load manipulations. These findings highlight the critical role of semantic congruency in modulating the effect of attentional load on the AV integration of common objects.
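One common way to index the multisensory facilitation this abstract refers to is to compare audiovisual reaction times with the faster of the two unisensory conditions. The sketch below uses simulated RTs and illustrates only that logic; it is not the paper's actual analysis pipeline.

```python
# Multisensory facilitation sketch on simulated reaction times (ms).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt_visual = rng.normal(520, 40, 30)        # image-only identification
rt_audio = rng.normal(560, 40, 30)         # sound-only identification
rt_av_congruent = rng.normal(480, 40, 30)  # semantically congruent AV

best_unisensory = np.minimum(rt_visual, rt_audio)  # faster modality per subject
facilitation = best_unisensory - rt_av_congruent   # positive = AV benefit
t, p = stats.ttest_1samp(facilitation, 0.0)
print(f"mean facilitation = {facilitation.mean():.1f} ms, t = {t:.2f}, p = {p:.4f}")
```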
Affiliation(s)
- Qingqing Li
- Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| |
32
Sonderfeld M, Mathiak K, Häring GS, Schmidt S, Habel U, Gur R, Klasen M. Supramodal neural networks support top-down processing of social signals. Hum Brain Mapp 2020; 42:676-689. [PMID: 33073911 PMCID: PMC7814753 DOI: 10.1002/hbm.25252] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 08/08/2020] [Accepted: 09/29/2020] [Indexed: 12/17/2022] Open
Abstract
The perception of facial and vocal stimuli is driven by sensory input and cognitive top-down influences. Important top-down influences are attentional focus and supramodal social memory representations. The present study investigated the neural networks underlying these top-down processes and their role in social stimulus classification. In a neuroimaging study with 45 healthy participants, we employed a social adaptation of the Implicit Association Test. Attentional focus was modified via the classification task, which compared two domains of social perception (emotion and gender) using exactly the same stimulus set. Supramodal memory representations were addressed via the congruency of the target categories for the classification of auditory and visual social stimuli (voices and faces). Functional magnetic resonance imaging identified attention-specific and supramodal networks. Emotion classification networks included the bilateral anterior insula, pre-supplementary motor area, and right inferior frontal gyrus. They were purely attention-driven and independent of stimulus modality and the congruency of the target concepts. No neural contribution of supramodal memory representations could be revealed for emotion classification. In contrast, gender classification relied on supramodal memory representations in the rostral anterior cingulate and ventromedial prefrontal cortices. In summary, different domains of social perception involve different top-down processes, which take place in clearly distinguishable neural networks.
Affiliation(s)
- Melina Sonderfeld
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Gianna S Häring
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Sarah Schmidt
- Life & Brain - Institute for Experimental Epileptology and Cognition Research, Bonn, Germany
| | - Ute Habel
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Raquel Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
| | - Martin Klasen
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany.,Interdisciplinary Training Centre for Medical Education and Patient Safety - AIXTRA, Medical Faculty, RWTH Aachen University, Aachen, Germany
| |
33
Su W, Guo Q, Li Y, Zhang K, Zhang Y, Chen Q. Momentary lapses of attention in multisensory environment. Cortex 2020; 131:195-209. [PMID: 32906014 DOI: 10.1016/j.cortex.2020.07.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2019] [Revised: 05/15/2020] [Accepted: 07/20/2020] [Indexed: 11/26/2022]
Abstract
Momentary lapses in attention disrupt goal-directed behaviors and have been associated with increased pre-stimulus activity in the default mode network (DMN). The human brain often encounters multisensory inputs; it remains unknown, however, whether the neural mechanisms underlying attentional lapses are supra-modal or modality-dependent. To answer this question, in the present functional magnetic resonance imaging (fMRI) study we asked participants to respond to either visual or auditory targets in a multisensory paradigm, and focused on the pre-stimulus neural signals underlying attentional lapses, which resulted in impaired task performance, in terms of both delayed reaction times (RTs) and behavioral errors, in different sensory modalities. Behaviorally, mean RTs were equivalent between the visual and auditory modalities. At the neural level, increased pre-stimulus activity in the majority of the core DMN regions, including the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and left angular gyrus (AG), predicted delayed RTs more effectively in the visual than in the auditory modality. In particular, increased pre-stimulus activity in the mPFC predicted not only delayed RTs but also errors, more effectively in the visual than in the auditory modality. On the other hand, increased pre-stimulus activity in the anterior precuneus predicted both prolonged RTs and errors more effectively in the auditory than in the visual modality. Moreover, a supra-modal mechanism was revealed in the left middle temporal gyrus (MTG), which belongs to the posterior DMN: increased pre-stimulus neural activity in the left MTG predicted impaired task performance in both the visual and auditory modalities. Taken together, the core DMN regions manifest vision-dependent mechanisms of attentional lapses, a novel region in the anterior precuneus shows audition-dependent mechanisms of attentional lapses, and the left MTG in the posterior DMN manifests a supra-modal mechanism of attentional lapses, independent of the modality of sensory inputs.
Affiliation(s)
- Wen Su
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
| | - Qiang Guo
- Epilepsy Center, Guangdong Sanjiu Brain Hospital, Guangzhou, China
| | - You Li
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
| | - Kun Zhang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
| | - Yanni Zhang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
| | - Qi Chen
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China; Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Jülich, Germany.
| |
34
Tivadar RI, Gaglianese A, Murray MM. Auditory Enhancement of Illusory Contour Perception. Multisens Res 2020; 34:1-15. [PMID: 33706283 DOI: 10.1163/22134808-bja10018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Accepted: 04/24/2020] [Indexed: 11/19/2022]
Abstract
Illusory contours (ICs) are borders that are perceived in the absence of contrast gradients. Until recently, IC processes were considered exclusively visual in nature and presumed to be unaffected by information from other senses. Electrophysiological data in humans indicate that sounds can enhance IC processes. Despite cross-modal enhancement being observed at the neurophysiological level, to date there is no evidence of direct amplification of behavioural performance in IC processing by sounds. We addressed this knowledge gap. Healthy adults (n = 15) discriminated instances when inducers were arranged to form an IC from instances when no IC was formed (NC). Inducers were low-contrast and masked, and there was continuous background acoustic noise throughout a block of trials. On half of the trials, i.e., independently of IC vs. NC, a 1000-Hz tone was presented synchronously with the inducer stimuli. Sound presence improved the accuracy of indicating when an IC was presented, but had no impact on performance with NC stimuli (significant IC presence/absence × Sound presence/absence interaction). There was no evidence that this was due to general alerting or to a speed-accuracy trade-off (no main effect of sound presence on accuracy rates and no comparable significant interaction on reaction times). Moreover, sound presence increased sensitivity and reduced bias on the IC vs. NC discrimination task. These results demonstrate that multisensory processes augment mid-level visual functions, exemplified by IC processes. Aside from their impact on neurobiological and computational models of vision, our findings may prove clinically beneficial for low-vision or sight-restored patients.
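The sensitivity and bias measures reported above are standard signal-detection quantities. A minimal sketch follows, assuming hypothetical hit and false-alarm counts from the IC vs. NC discrimination; the counts are illustrative, not the study's data.

```python
# d-prime (sensitivity) and criterion c (bias) from hypothetical counts.
from scipy.stats import norm

hits, misses = 80, 20                # IC trials correctly reported as IC
false_alarms, correct_rej = 30, 70   # NC trials wrongly reported as IC

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```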
Affiliation(s)
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland.,Department of Ophthalmology, University of Lausanne and Fondation Asile des aveugles, Lausanne, Switzerland
| | - Anna Gaglianese
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland.,Spinoza Centre for Neuroimaging, Amsterdam, The Netherlands
| | - Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland.,Department of Ophthalmology, University of Lausanne and Fondation Asile des aveugles, Lausanne, Switzerland.,Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland.,Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
| |
35
Cacciamani L, Sheparovich L, Gibbons M, Crowley B, Carpenter KE, Wack C. Task-Irrelevant Sound Corrects Leftward Spatial Bias in Blindfolded Haptic Placement Task. Multisens Res 2020; 33:521-548. [PMID: 32083560 DOI: 10.1163/22134808-20191387] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Accepted: 09/03/2019] [Indexed: 11/19/2022]
Abstract
We often rely on our sense of vision for understanding the spatial location of objects around us. If vision cannot be used, one must rely on other senses, such as hearing and touch, in order to build spatial representations. Previous work has found evidence of a leftward spatial bias in visual and tactile tasks. In this study, we sought evidence of this leftward bias in a non-visual haptic object location memory task and assessed the influence of a task-irrelevant sound. In Experiment 1, blindfolded right-handed sighted participants used their non-dominant hand to haptically locate an object on the table, then used their dominant hand to place the object back in its original location. During placement, participants either heard nothing (no-sound condition) or a task-irrelevant repeating tone to the left, right, or front of the room. The results showed that participants exhibited a leftward placement bias on no-sound trials. On sound trials, this leftward bias was corrected; placements were faster and more accurate (regardless of the direction of the sound). One explanation for the leftward bias could be that participants were overcompensating their reach with the right hand during placement. Experiment 2 tested this explanation by switching the hands used for exploration and placement, but found similar results to Experiment 1. A third experiment found evidence supporting the explanation that sound corrects the leftward bias by heightening attention. Together, these findings show that sound, even if task-irrelevant and semantically unrelated, can correct one's tendency to place objects too far to the left.
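A sketch of how such a placement bias can be quantified: the signed horizontal placement error (negative values = leftward) tested against zero, separately for no-sound and sound trials. All error values below are simulated for illustration; this is not the authors' data or analysis.

```python
# Signed placement error (cm): negative = left of the true location.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
errors_no_sound = rng.normal(-1.5, 2.0, 40)   # hypothetical leftward bias
errors_with_sound = rng.normal(0.0, 2.0, 40)  # hypothetical corrected placement

print(stats.ttest_1samp(errors_no_sound, 0.0))    # reliably left of zero
print(stats.ttest_1samp(errors_with_sound, 0.0))  # indistinguishable from zero
```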
Affiliation(s)
- Laura Cacciamani
- California Polytechnic State University, San Luis Obispo, CA, USA
| | | | - Molly Gibbons
- California Polytechnic State University, San Luis Obispo, CA, USA
| | - Brooke Crowley
- California Polytechnic State University, San Luis Obispo, CA, USA
| | | | - Carson Wack
- California Polytechnic State University, San Luis Obispo, CA, USA
| |
36
Audio-visual priming in 7-month-old infants: An ERP study. Infant Behav Dev 2020; 58:101411. [DOI: 10.1016/j.infbeh.2019.101411] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Revised: 11/22/2019] [Accepted: 12/09/2019] [Indexed: 11/22/2022]
37

38
Scheller M, Garcia S, Bathelt J, de Haan M, Petrini K. Active touch facilitates object size perception in children but not adults: A multisensory event related potential study. Brain Res 2019; 1723:146381. [PMID: 31419429 DOI: 10.1016/j.brainres.2019.146381] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2019] [Revised: 07/19/2019] [Accepted: 08/12/2019] [Indexed: 11/28/2022]
Abstract
In order to increase perceptual precision, the adult brain dynamically combines redundant information from different senses depending on their reliability. During object size estimation, for example, visual, auditory, and haptic information can be integrated to increase the precision of the final size estimate. Young children, however, do not integrate sensory information optimally and instead rely on active touch. Whether this early haptic dominance is reflected in age-related differences in neural mechanisms, and whether it is driven by changes in bottom-up perceptual or top-down attentional processes, has not yet been investigated. Here, we recorded event-related potentials from a group of adults and children aged 5-7 years during an object size perception task using auditory, visual, and haptic information. Multisensory information was presented either congruently (conveying the same information) or incongruently (conflicting information). No behavioral responses were required from participants. When haptic size information was available via actively tapping the objects, response amplitudes in the mid-parietal area were significantly reduced by information congruency in children but not in adults, between 190-250 ms and 310-370 ms. These findings indicate that during object size perception only children's brain activity is modulated by active touch, supporting a neural maturational shift from sensory dominance in early childhood to optimal multisensory benefit in adulthood.
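A minimal sketch of the window-based congruency comparison described above: average the ERP amplitude within each reported latency window and compare congruent against incongruent epochs. The epochs and sampling parameters below are simulated assumptions, not the recorded data.

```python
# Mean amplitude per epoch within a latency window, compared across conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
fs, t0 = 500, -0.2                                   # sampling rate (Hz), epoch onset (s)
epochs_congruent = rng.standard_normal((30, 600))    # 30 epochs x 600 samples
epochs_incongruent = rng.standard_normal((30, 600))

def window_mean(epochs: np.ndarray, start_s: float, end_s: float) -> np.ndarray:
    i0, i1 = int((start_s - t0) * fs), int((end_s - t0) * fs)
    return epochs[:, i0:i1].mean(axis=1)

for win in [(0.19, 0.25), (0.31, 0.37)]:             # 190-250 ms and 310-370 ms
    c = window_mean(epochs_congruent, *win)
    i = window_mean(epochs_incongruent, *win)
    print(win, stats.ttest_ind(c, i))
```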
Affiliation(s)
| | | | - Joe Bathelt
- Brain & Cognition, University of Amsterdam, Netherlands; UCL Great Ormond Street Institute of Child Health, UK
| | | | | |
39
Tafreshi TF, Daliri MR, Ghodousi M. Functional and effective connectivity based features of EEG signals for object recognition. Cogn Neurodyn 2019; 13:555-566. [PMID: 31741692 DOI: 10.1007/s11571-019-09556-7] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Revised: 09/17/2019] [Accepted: 09/24/2019] [Indexed: 01/06/2023] Open
Abstract
Classifying different object categories is one of the most important aims of brain-computer interface research. Recently, interactions between brain regions have been studied using different methods, such as functional and effective connectivity techniques, which are applied to estimate the connectivity between human brain areas. The main purpose of this study is to compare the classification accuracy of the most advanced functional and effective methods in order to classify 12 basic object categories using electroencephalography (EEG) signals. In this paper, 19-channel EEG signals were collected from 10 healthy subjects while they viewed color images and were instructed to select the target images among others. Correlation, magnitude squared coherence, wavelet coherence (WC), phase synchronization, and mutual information were applied to estimate functional cortical connectivity. On the other hand, the directed transfer function, partial directed coherence, and generalized partial directed coherence (GPDC) were used to obtain effective cortical connectivity. After feature extraction, scalar feature selection methods, including the T-test and one-sided ANOVA, were applied to rank and select the most informative features. The selected features were classified by a one-against-one support vector machine classifier. The results indicated that the use of different techniques led to different classification accuracies and different brain-lobe analyses. WC and GPDC were the most accurate methods, with performances of 80.15% and 64.43%, respectively.
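The pipeline the abstract describes (pairwise connectivity features, univariate feature ranking, then a one-against-one SVM) can be sketched as follows on simulated 19-channel EEG. The sampling rate, window length, and ANOVA-based selection are assumptions standing in for the authors' exact settings, and random data will classify at chance.

```python
# Coherence features per channel pair -> ANOVA ranking -> one-vs-one linear SVM.
import numpy as np
from scipy.signal import coherence
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_trials, n_channels, n_samples, fs = 120, 19, 512, 256

eeg = rng.standard_normal((n_trials, n_channels, n_samples))  # simulated trials
labels = rng.integers(0, 12, n_trials)                        # 12 object categories

def trial_features(trial: np.ndarray) -> np.ndarray:
    """Mean magnitude-squared coherence for every channel pair of one trial."""
    feats = []
    for i in range(n_channels):
        for j in range(i + 1, n_channels):
            _, cxy = coherence(trial[i], trial[j], fs=fs, nperseg=128)
            feats.append(cxy.mean())
    return np.array(feats)

X = np.array([trial_features(t) for t in eeg])
clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
print("5-fold accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```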
Affiliation(s)
| | - Mohammad Reza Daliri
- Neuroscience and Neuroengineering Research Lab., Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran
| | - Mahrad Ghodousi
- Department of Neuroscience, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
| |
40
Pramudya RC, Seo HS. Hand-Feel Touch Cues and Their Influences on Consumer Perception and Behavior with Respect to Food Products: A Review. Foods 2019; 8:foods8070259. [PMID: 31311188 PMCID: PMC6678767 DOI: 10.3390/foods8070259] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Revised: 07/05/2019] [Accepted: 07/09/2019] [Indexed: 12/12/2022] Open
Abstract
There has been a great deal of research investigating intrinsic/extrinsic cues and their influences on consumer perception and purchasing decisions at points of sale, product usage, and consumption. Consumers create expectations toward a food product through sensory information extracted from its surface (intrinsic cues) or packaging (extrinsic cues) at retail stores. Packaging is one of the important extrinsic cues that can modulate consumer perception, liking, and decision making for a product. For example, handling a product's packaging, even just touching it while opening or holding it during consumption, may create an expectation about the package content. Although hand-feel touch cues are an integral part of the food consumption experience, as can be observed in such an instance, little is known about their influences on consumer perception, acceptability, and purchase behavior with respect to food products. This review therefore provides a better understanding of hand-feel touch cues and their influences in the context of food and beverage experience, with a focus on (1) an overview of touch as a sensory modality, (2) factors influencing hand-feel perception, (3) influences of hand-feel touch cues on the perception of other sensory modalities, and (4) the effects of hand-feel touch cues on emotional responses and purchase behavior.
Affiliation(s)
- Ragita C Pramudya
- Department of Food Science, University of Arkansas, 2650 North Young Avenue, Fayetteville, AR 72704, USA
| | - Han-Seok Seo
- Department of Food Science, University of Arkansas, 2650 North Young Avenue, Fayetteville, AR 72704, USA.
| |
41
Lemaitre G, Pyles JA, Halpern AR, Navolio N, Lehet M, Heller LM. Who's that Knocking at My Door? Neural Bases of Sound Source Identification. Cereb Cortex 2019; 28:805-818. [PMID: 28052922 DOI: 10.1093/cercor/bhw397] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Accepted: 12/14/2016] [Indexed: 11/13/2022] Open
Abstract
When hearing knocking on a door, a listener typically identifies both the action (forceful and repeated impacts) and the object (a thick wooden board) causing the sound. The current work studied the neural bases of sound source identification by switching listeners' attention toward these different aspects of a set of simple sounds during functional magnetic resonance imaging scanning: participants either discriminated the action or the material that caused the sounds, or they simply discriminated meaningless scrambled versions of them. Overall, discriminating action and material elicited neural activity in a left-lateralized frontoparietal network found in other studies of sound identification, wherein the inferior frontal sulcus and the ventral premotor cortex were under the control of selective attention and sensitive to task demand. More strikingly, discriminating materials elicited increased activity in cortical regions connecting auditory inputs to semantic, motor, and even visual representations, whereas discriminating actions did not increase activity in any region. These results indicate that discriminating and identifying material requires deeper processing of the stimuli than discriminating actions, and they are consistent with previous studies suggesting that auditory perception is better suited to comprehending the actions than the objects producing sounds in the listener's environment.
Affiliation(s)
- Guillaume Lemaitre
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - John A Pyles
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - Andrea R Halpern
- Bucknell University, Department of Psychology, Lewisburg 17837, PA, USA
| | - Nicole Navolio
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - Matthew Lehet
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - Laurie M Heller
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| |
42
Li X, Wang A, Xu J, Sun Z, Xia J, Wang P, Wang B, Zhang M, Tian J. Reduced Dynamic Interactions Within Intrinsic Functional Brain Networks in Early Blind Patients. Front Neurosci 2019; 13:268. [PMID: 30983956 PMCID: PMC6448007 DOI: 10.3389/fnins.2019.00268] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Accepted: 03/07/2019] [Indexed: 11/16/2022] Open
Abstract
Neuroimaging studies in early blind (EB) patients have shown altered connections or brain networks. However, it remains unclear how the causal relationships are disrupted within intrinsic brain networks. In our study, we used spectral dynamic causal modeling (DCM) to estimate the causal interactions from resting-state data in a group of 20 EB patients and 20 healthy controls (HC). Coupling parameters were estimated for specific regions, including the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and inferior parietal cortex (IPC) in the default mode network (DMN); the dorsal anterior cingulate cortex (dACC) and bilateral anterior insulae (AI) in the salience network (SN); and the bilateral frontal eye fields (FEF) and superior parietal lobes (SPL) within the dorsal attention network (DAN). Statistical analyses found that, within the DMN, all endogenous connections and the connections from the mPFC to the bilateral IPCs were significantly reduced in EB patients, whereas the effective connectivity from the PCC and lIPC to the mPFC, and from the mPFC to the PCC, was enhanced. For the SN, all significant connections in EB patients were significantly decreased, except the intrinsic right AI connection. Within the DAN, more significant effective connections were reduced in the EB than in the HC group, while only the connection from the right SPL to the left SPL and the intrinsic connection in the left SPL were significantly enhanced. Furthermore, the larger number of decreased effective connections in EB subjects suggests that the disrupted causal interactions between specific regions reflect compensatory brain plasticity following early visual deprivation.
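Fitting spectral DCM itself requires dedicated software (e.g., SPM), but the group comparison implied above, testing each directed coupling parameter between EB and HC subjects, can be sketched with stand-in coupling matrices. Everything below is simulated and illustrative only, not the authors' analysis.

```python
# Element-wise two-sample t-tests on per-subject directed coupling matrices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_per_group, n_regions = 20, 4   # e.g., mPFC, PCC, lIPC, rIPC within the DMN

eb = rng.normal(0.10, 0.05, (n_per_group, n_regions, n_regions))  # EB couplings (Hz)
hc = rng.normal(0.15, 0.05, (n_per_group, n_regions, n_regions))  # HC couplings (Hz)

t, p = stats.ttest_ind(eb, hc, axis=0)  # one test per directed connection
print(np.round(p, 3))                   # uncorrected p-values, region x region
```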
Affiliation(s)
- Xianglin Li
- Department of Medical Imaging, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China.,Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
| | - Ailing Wang
- Department of Clinical Laboratory, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, China
| | - Junhai Xu
- Tianjin Key Laboratory of Cognitive Computing and Application, School of Artificial Intelligence, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Zhenbo Sun
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
| | - Jikai Xia
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, China
| | - Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, China
| | - Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
| | - Ming Zhang
- Department of Medical Imaging, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
| | - Jie Tian
- Department of Medical Imaging, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China.,School of Life Sciences and Technology, Xidian University, Xi'an, China
| |
43
A functional MRI investigation of crossmodal interference in an audiovisual Stroop task. PLoS One 2019; 14:e0210736. [PMID: 30645634 PMCID: PMC6333399 DOI: 10.1371/journal.pone.0210736] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Accepted: 01/01/2019] [Indexed: 01/08/2023] Open
Abstract
The visual color-word Stroop task is widely used in clinical and research settings as a measure of cognitive control. Numerous neuroimaging studies have used color-word Stroop tasks to investigate the neural resources supporting cognitive control, but to our knowledge all have used unimodal (typically visual) Stroop paradigms. Thus, it is possible that this classic measure of cognitive control does not capture the resources involved in multisensory cognitive control. The audiovisual integration and crossmodal correspondence literatures identify regions sensitive to the congruency of auditory and visual stimuli, but it is unclear how these regions relate to the unimodal cognitive control literature. In this study we aimed to identify brain regions engaged by crossmodal cognitive control during an audiovisual color-word Stroop task, and how they relate to previous unimodal Stroop and audiovisual integration findings. First, we replicated previous behavioral audiovisual Stroop findings in an fMRI-adapted audiovisual Stroop paradigm: incongruent visual information increased reaction time toward an auditory stimulus, and congruent visual information decreased it. Second, we investigated the brain regions supporting cognitive control during an audiovisual color-word Stroop task using fMRI. Similar to unimodal cognitive control tasks, a left superior parietal region exhibited an interference effect of visual information on the auditory stimulus. This superior parietal region was also identified using a standard audiovisual integration localizing procedure, indicating that audiovisual integration resources are sensitive to cognitive control demands. Facilitation of the auditory stimulus by congruent visual information was found in posterior superior temporal cortex, including the posterior STS, which has been found to support audiovisual integration. The dorsal anterior cingulate cortex, often implicated in unimodal Stroop tasks, was not modulated by the audiovisual Stroop task. Overall, the findings indicate that an audiovisual color-word Stroop task engages overlapping resources with audiovisual integration, and overlapping but distinct resources compared to unimodal Stroop tasks.
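The behavioral effects replicated above reduce to two simple contrasts on reaction times. The sketch below computes them from simulated data; the condition means, and the neutral baseline added for illustration, are hypothetical assumptions rather than the study's design.

```python
# Stroop facilitation and interference from simulated reaction times (ms).
import numpy as np

rng = np.random.default_rng(6)
rt_neutral = rng.normal(600, 50, 40)      # auditory word, uninformative visual
rt_congruent = rng.normal(570, 50, 40)    # congruent visual color word
rt_incongruent = rng.normal(650, 50, 40)  # incongruent visual color word

print("facilitation (ms):", (rt_neutral - rt_congruent).mean())
print("interference (ms):", (rt_incongruent - rt_neutral).mean())
```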
44
Wang X, Gu J, Xu J, Li X, Geng J, Wang B, Liu B. Decoding natural scenes based on sounds of objects within scenes using multivariate pattern analysis. Neurosci Res 2018; 148:9-18. [PMID: 30513353 DOI: 10.1016/j.neures.2018.11.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2018] [Revised: 11/21/2018] [Accepted: 11/30/2018] [Indexed: 10/27/2022]
Abstract
Scene recognition plays an important role in spatial navigation and scene classification. It remains unknown whether the occipitotemporal cortex represents the semantic association between scenes and the sounds of objects within them. In this study, we used the functional magnetic resonance imaging (fMRI) technique and multivariate pattern analysis to assess whether different scenes could be discriminated based on the activity patterns evoked by sounds of objects within the scenes. We found that patterns evoked by scenes could be predicted from patterns evoked by sounds of objects within the scenes in the posterior fusiform area (pF), lateral occipital area (LO) and superior temporal sulcus (STS). A further functional connectivity analysis revealed significant correlations among pF, LO and the parahippocampal place area (PPA), but not between STS and the other three regions, under the scene and sound conditions. A distinct network for processing scenes and sounds was identified using a seed-to-voxel analysis with STS as the seed. This study suggests a cross-modal channel of scene decoding in the occipitotemporal cortex via the sounds of objects within scenes, which could complement the single-modal channel of scene decoding based on global scene properties or the objects within the scenes.
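The cross-decoding step described here follows a standard MVPA recipe: fit a classifier on patterns from one modality and test it on patterns from the other. The sketch below illustrates that logic with scikit-learn on synthetic data; the array shapes, ROI extraction, and linear-SVM choice are assumptions, not the authors' pipeline.

```python
# Hedged sketch of cross-modal MVPA: train on ROI patterns evoked by object
# sounds, test on patterns evoked by the corresponding scenes.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def cross_modal_decoding(sound_patterns, sound_labels, scene_patterns, scene_labels):
    """Patterns: (n_trials, n_voxels) arrays of ROI activity (e.g., GLM betas);
    labels: scene-category codes per trial. Returns decoding accuracy."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(sound_patterns, sound_labels)           # learn from sound-evoked patterns
    return clf.score(scene_patterns, scene_labels)  # generalize to scene-evoked patterns

# Toy demonstration with synthetic data (2 scene categories, 40 voxels):
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 20)
signal = np.outer(labels * 2 - 1, rng.normal(size=40))  # shared category signal
sounds = signal + rng.normal(scale=2.0, size=(40, 40))
scenes = signal + rng.normal(scale=2.0, size=(40, 40))
print(cross_modal_decoding(sounds, labels, scenes, labels))
```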
Affiliation(s)
- Xiaojing Wang, College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China
- Jin Gu, College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China
- Junhai Xu, College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China
- Xianglin Li, Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, China
- Junzu Geng, Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong 264003, China
- Bin Wang, Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, China
- Baolin Liu, College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, 100084, China
45
Hu X, Urhie O, Chang K, Hostetler R, Agmon A. A Novel Method for Training Mice in Visuo-Tactile 3-D Object Discrimination and Recognition. Front Behav Neurosci 2018; 12:274. [PMID: 30555307 PMCID: PMC6282041 DOI: 10.3389/fnbeh.2018.00274] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Accepted: 10/24/2018] [Indexed: 11/13/2022] Open
Abstract
Perceiving, recognizing and remembering 3-dimensional (3-D) objects encountered in the environment has a very high survival value; unsurprisingly, this ability is shared among many animal species, including humans. The psychological, psychophysical and neural bases of object perception, discrimination, recognition and memory have been extensively studied in humans, monkeys, pigeons and rodents, but are still far from understood. Nearly all 3-D object recognition studies in rodents have used the "novel object recognition" paradigm, which relies on innate rather than learned behavior; however, this procedure has several important limitations. Recently, investigators have begun to recognize the power of behavioral tasks learned through reinforcement training (operant conditioning) to reveal the sensorimotor and cognitive abilities of mice and to elucidate their underlying neural mechanisms. Here, we describe a novel method for training and testing mice in visual and tactile object discrimination, recognition and memory, and use it to begin to examine the underlying sensory basis for these cognitive capacities. A custom-designed Y maze was used to train mice to associate one of two 3-D objects with a food reward. Of nine mice trained in two cohorts, seven reached the performance criterion in about 20-35 daily sessions of 20 trials each. The learned association was retained, or rapidly re-acquired, after a 6-week hiatus in training. When tested under low-light conditions, individual animals differed in the degree to which they used tactile or visual cues to identify the objects. Switching to total darkness resulted only in a transient dip in performance, as did subsequent trimming of all large whiskers (macrovibrissae). Additional removal of the small whiskers (microvibrissae) did not degrade performance, but transiently increased the time spent inspecting the object. This novel method can be combined in future studies with the large arsenal of genetic tools available in the mouse to elucidate the neural basis of object perception, recognition and memory.
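The training protocol lends itself to a simple simulation. The sketch below models daily 20-trial sessions and tracks sessions-to-criterion; the 80%-accuracy-over-two-sessions rule and the learning-rate numbers are illustrative assumptions, not the authors' exact criterion.

```python
# Illustrative criterion tracking for a two-choice Y-maze task: a simulated
# mouse completes daily 20-trial sessions until accuracy stays above a
# threshold for a set number of consecutive sessions.
import random

def run_session(p_correct, n_trials=20):
    """Simulate one daily session; returns fraction of correct choices."""
    return sum(random.random() < p_correct for _ in range(n_trials)) / n_trials

def train_to_criterion(threshold=0.8, consecutive=2, max_sessions=60):
    """Simulate a slowly learning mouse; report sessions to criterion."""
    p, streak, history = 0.5, 0, []   # start at chance on a two-choice task
    for session in range(1, max_sessions + 1):
        accuracy = run_session(p)
        history.append(accuracy)
        streak = streak + 1 if accuracy >= threshold else 0
        if streak >= consecutive:
            return session, history
        p = min(0.95, p + 0.02)       # gradual learning across sessions
    return None, history

sessions, _ = train_to_criterion()
print(f"criterion reached after {sessions} sessions")
```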
Affiliation(s)
- Xian Hu, Department of Neuroscience, West Virginia University School of Medicine, Morgantown, WV, United States
- Ogaga Urhie, Department of Neuroscience, West Virginia University School of Medicine, Morgantown, WV, United States
- Kevin Chang, Department of Neuroscience, West Virginia University School of Medicine, Morgantown, WV, United States
- Rachel Hostetler, Department of Neuroscience, West Virginia University School of Medicine, Morgantown, WV, United States
- Ariel Agmon, Department of Neuroscience, West Virginia University School of Medicine, Morgantown, WV, United States
46
Neural Mechanisms of Material Perception: Quest on Shitsukan. Neuroscience 2018; 392:329-347. [PMID: 30213767 DOI: 10.1016/j.neuroscience.2018.09.001] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2018] [Revised: 08/13/2018] [Accepted: 09/03/2018] [Indexed: 01/11/2023]
Abstract
In recent years, a growing body of research has addressed the nature and mechanisms of material perception. Material perception entails perceiving and recognizing a material, surface quality or internal state of an object based on sensory stimuli such as visual, tactile, and/or auditory sensations. This process is ongoing in every aspect of daily life. We can, for example, easily distinguish whether an object is made of wood or metal, or whether a surface is rough or smooth. Judging whether the ground is wet or dry or whether a fish is fresh also involves material perception. Information obtained through material perception can be used to govern actions toward objects and to decide whether to approach an object or avoid it. Because the physical processes leading to sensory signals related to material perception are complicated, it has been difficult to manipulate experimental stimuli in a rigorous manner. However, that situation is now changing thanks to advances in technology and knowledge in related fields. In this article, we review what is currently known about the neural mechanisms responsible for material perception. We show that cortical areas in the ventral visual pathway are strongly involved in material perception. Our main focus is on vision, but every sensory modality is involved in material perception, and information obtained through different sensory modalities is closely linked in this process. Such cross-modal processing is another important feature of material perception and is also covered in this review.
47
Banner A, Shamay-Tsoory S. Effects of androstadienone on dominance perception in males with low and high social anxiety. Psychoneuroendocrinology 2018; 95:138-144. [PMID: 29859341 DOI: 10.1016/j.psyneuen.2018.05.032] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/23/2018] [Revised: 05/23/2018] [Accepted: 05/23/2018] [Indexed: 11/19/2022]
Abstract
Increasing evidence suggests that humans can communicate both trait dominance and state dominance via body odor. Androstadienone (androsta-4,16-dien-3-one), a chemosignal found in human sweat, is a likely candidate for signaling dominance in humans. The aim of the current study was to investigate the effects of androstadienone on the perception of social dominance. Moreover, we examined whether high levels of social anxiety, a psychopathology involving concerns that specifically pertain to social dominance, are associated with increased sensitivity to androstadienone as a chemical cue of dominance. In a double-blind, placebo-controlled, within-subject design, 64 heterosexual male participants (32 with high social anxiety and 32 with low social anxiety) viewed facial images of males depicting dominant, neutral and submissive postures, and were asked to recognize and rate the dominance expressed in those images. Participants completed the task twice, once under exposure to androstadienone and once under exposure to a control solution. The results indicate that androstadienone increased the perceived dominance of men's faces, specifically among participants with high social anxiety. These findings suggest a direct influence of androstadienone on dominance perception and further highlight the preferential processing of dominance and social-threat signals evident in social anxiety.
Affiliation(s)
- Amir Banner, Department of Psychology, University of Haifa, Abba Khoushy Ave 199, Haifa, Israel
- Simone Shamay-Tsoory, Department of Psychology, University of Haifa, Abba Khoushy Ave 199, Haifa, Israel
48
Rao AR. An oscillatory neural network model that demonstrates the benefits of multisensory learning. Cogn Neurodyn 2018; 12:481-499. [PMID: 30250627 DOI: 10.1007/s11571-018-9489-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2017] [Revised: 04/27/2018] [Accepted: 06/01/2018] [Indexed: 12/13/2022] Open
Abstract
Since the world consists of objects that stimulate multiple senses, it is advantageous for a vertebrate to integrate all the sensory information available. However, the precise mechanisms governing the temporal dynamics of multisensory processing are not well understood. We develop a computational modeling approach to investigate these mechanisms. We present an oscillatory neural network model for multisensory learning based on sparse spatio-temporal encoding. Recently published results in cognitive science show that multisensory integration produces greater and more efficient learning. We apply our computational model to qualitatively replicate these results. We vary learning protocols and system dynamics, and measure the rate at which our model learns to distinguish superposed presentations of multisensory objects. We show that the use of multiple channels accelerates learning and recall by up to 80%. When a sensory channel becomes disabled, the performance degradation is less than that experienced during the presentation of non-congruent stimuli. This research furthers our understanding of fundamental brain processes, paving the way for multiple advances including the building of machines with more human-like capabilities.
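As a rough illustration of the model class (phase oscillators with activity-dependent coupling), the toy sketch below drives a shared pool of Kuramoto-style oscillators with two sensory channels and applies a Hebbian coupling update. It is a loose analogy to, not a reimplementation of, the published sparse spatio-temporal model, and every parameter is an assumption.

```python
# Toy Kuramoto-style sketch: two sensory channels drive a shared pool of
# phase oscillators, and Hebbian updates strengthen couplings between
# phase-aligned units. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 32                                 # oscillators in the shared pool
K = 0.05 * rng.random((N, N))          # initial coupling matrix
omega = rng.normal(1.0, 0.1, N)        # intrinsic frequencies

def step(theta, drive, K, dt=0.05, lr=0.01):
    """One Euler step of phase dynamics plus a Hebbian coupling update."""
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + coupling + drive)
    coactive = np.cos(theta[:, None] - theta[None, :])   # phase alignment
    K = np.clip(K + lr * coactive * (coactive > 0), 0, 1)
    return theta, K

theta = rng.uniform(0, 2 * np.pi, N)
visual = rng.normal(0, 0.5, N)         # channel-specific input patterns
audio = rng.normal(0, 0.5, N)
for _ in range(200):                   # multisensory: both channels drive the pool
    theta, K = step(theta, visual + audio, K)
order = abs(np.exp(1j * theta).mean()) # Kuramoto order parameter in [0, 1]
print(f"synchrony after multisensory training: {order:.2f}")
```

Running the same loop with `visual` alone as the drive gives a point of comparison for the multisensory-advantage claim, in the spirit of the channel-disabling manipulation described in the abstract.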
Affiliation(s)
- A Ravishankar Rao, Gildart Haase School of Computer Sciences and Engineering, Fairleigh Dickinson University, Teaneck, NJ, USA
49
Cordani L, Tagliazucchi E, Vetter C, Hassemer C, Roenneberg T, Stehle JH, Kell CA. Endogenous modulation of human visual cortex activity improves perception at twilight. Nat Commun 2018; 9:1274. [PMID: 29636448 PMCID: PMC5893589 DOI: 10.1038/s41467-018-03660-8] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2017] [Accepted: 03/01/2018] [Indexed: 11/09/2022] Open
Abstract
Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.
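The core measure, resting-state signal variance compared across times of day, reduces to a simple computation once the time series are in hand. The sketch below shows one hedged way to compute a regional variance summary per session on synthetic data; the array shapes, ROI mask, and session times are assumptions, not the authors' pipeline.

```python
# Hedged sketch of a regional signal-variance measure: compute the temporal
# variance of each voxel's resting-state time series within an ROI, then
# average over the region, once per recording session.
import numpy as np

def regional_signal_variance(bold, mask):
    """bold: (n_voxels, n_timepoints) array; mask: boolean ROI selector."""
    voxel_var = bold[mask].var(axis=1)   # temporal variance per voxel
    return voxel_var.mean()              # summarize over the region

# Compare six times of day for one synthetic subject:
rng = np.random.default_rng(0)
mask = rng.random(500) < 0.2             # illustrative ROI of ~100 voxels
variances = {t: regional_signal_variance(rng.normal(size=(500, 180)), mask)
             for t in ["8:00", "11:00", "14:00", "17:00", "20:00", "23:00"]}
print(variances)
```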
Affiliation(s)
- Lorenzo Cordani, Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Department of Neurology, Goethe University, 60528 Frankfurt am Main, Germany
- Enzo Tagliazucchi, Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Brain and Spine Institute, Hôpital Pitié Salpêtrière, 75013 Paris, France; Departamento de Física, Instituto de Física de Buenos Aires-CONICET, Buenos Aires 1428, Argentina
- Céline Vetter, Department of Integrative Physiology, University of Colorado, Boulder, CO 80310, USA; Institute of Medical Psychology, Ludwig Maximilian University, 80336 Munich, Germany
- Christian Hassemer, Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Institute of Anatomy III, Goethe University, 60590 Frankfurt am Main, Germany
- Till Roenneberg, Institute of Medical Psychology, Ludwig Maximilian University, 80336 Munich, Germany
- Jörg H Stehle, Institute of Anatomy III, Goethe University, 60590 Frankfurt am Main, Germany
- Christian A Kell, Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Department of Neurology, Goethe University, 60528 Frankfurt am Main, Germany
50
Effects of modality and repetition in a continuous recognition memory task: Repetition has no effect on auditory recognition memory. Acta Psychol (Amst) 2018; 185:72-80. [PMID: 29407247 DOI: 10.1016/j.actpsy.2018.01.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Revised: 01/24/2018] [Accepted: 01/29/2018] [Indexed: 11/20/2022] Open
Abstract
Previous research has shown that auditory recognition memory is poorer than visual and cross-modal (visual and auditory) recognition memory. The effect of repetition on memory is robust: repeated exposure reliably improves performance. It is not clear, however, how auditory recognition memory compares to visual and cross-modal recognition memory following repetition. Participants performed a recognition memory task, making old/new discriminations for new stimuli, stimuli repeated for the first time after 4-7 intervening items (R1), or stimuli repeated for the second time after 36-39 intervening items (R2). Depending on the condition, participants were exposed to visual stimuli (2D line drawings), auditory stimuli (spoken words), or cross-modal stimuli (pairs of images and associated spoken words). Results showed that, unlike participants in the visual and cross-modal conditions, participants in the auditory condition did not improve on R2 trials compared to R1 trials. These findings have implications for pedagogical techniques in education, as well as for interventions and exercises aimed at boosting memory performance.
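A continuous recognition stream with the stated lags can be generated with a simple greedy scheduler. The sketch below is illustrative, not the authors' procedure: items whose repeat slots collide are simply presented once, and leftover slots are filled with new filler items so the lags of the placed repeats stay exact.

```python
# Illustrative builder for a continuous recognition stream: each placed item
# appears as "new", again after 4-7 intervening items (R1), and again after
# 36-39 further intervening items (R2).
import random

def build_stream(items, r1_lag=(4, 7), r2_lag=(36, 39), length=200):
    stream = [None] * length
    pos = 0
    for item in items:
        while pos < length and stream[pos] is not None:
            pos += 1                                  # next free slot
        if pos >= length:
            break
        stream[pos] = (item, "new")
        p1 = pos + random.randint(*r1_lag) + 1        # +1 so the lag counts intervening items
        p2 = p1 + random.randint(*r2_lag) + 1
        if p2 < length and stream[p1] is None and stream[p2] is None:
            stream[p1] = (item, "R1")
            stream[p2] = (item, "R2")                 # on collision: item shown only once
        pos += 1
    fillers = (f"filler{i}" for i in range(length))   # pad gaps so lags stay exact
    return [s if s is not None else (next(fillers), "new") for s in stream]

stream = build_stream([f"item{i}" for i in range(40)])
print(stream[:12])
```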