1
Froesel M, Gacoin M, Clavagnier S, Hauser M, Goudard Q, Ben Hamed S. Macaque claustrum, pulvinar and putative dorsolateral amygdala support the cross-modal association of social audio-visual stimuli based on meaning. Eur J Neurosci 2024; 59:3203-3223. [PMID: 38637993] [DOI: 10.1111/ejn.16328]
Abstract
Social communication draws on several cognitive functions such as perception, emotion recognition and attention. The association of audio-visual information is essential to the processing of species-specific communication signals. In this study, we use functional magnetic resonance imaging to identify the subcortical areas involved in the cross-modal association of visual and auditory information based on their common social meaning. We identified three subcortical regions involved in audio-visual processing of species-specific communicative signals: the dorsolateral amygdala, the claustrum and the pulvinar. These regions responded to visual, auditory congruent and audio-visual stimulation. However, none of them was significantly activated when the auditory stimuli were semantically incongruent with the visual context, thus showing an influence of visual context on auditory processing. For example, positive vocalizations (coos) activated the three subcortical regions when presented in the context of a positive facial expression (lipsmacks) but not when presented in the context of a negative facial expression (aggressive faces). In addition, the medial pulvinar and the amygdala showed multisensory integration, such that audio-visual stimuli produced activations significantly higher than the strongest unimodal response. Lastly, the pulvinar responded in a task-dependent manner, along a specific spatial sensory gradient. We propose that the dorsolateral amygdala, the claustrum and the pulvinar belong to a multisensory network that modulates the perception of visual socioemotional information and vocalizations as a function of the relevance of the stimuli in the social context. SIGNIFICANCE STATEMENT: Understanding and correctly associating socioemotional information across sensory modalities, such that happy faces predict laughter and escape scenes predict screams, is essential when living in complex social groups. Using functional magnetic resonance imaging in the awake macaque, we identify three subcortical structures (dorsolateral amygdala, claustrum and pulvinar) that respond to auditory information only when it matches the ongoing visual socioemotional context, such as hearing positively valenced coo calls while seeing positively valenced mutual grooming. We additionally describe task-dependent activations in the pulvinar, organized along a specific spatial sensory gradient, supporting its role as a network regulator.
Affiliation(s)
- Mathilda Froesel, Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Maëva Gacoin, Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Simon Clavagnier, Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Marc Hauser, Risk-Eraser, West Falmouth, Massachusetts, USA
- Quentin Goudard, Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Suliann Ben Hamed, Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
2
Cary E, Lahdesmaki I, Badde S. Audiovisual simultaneity windows reflect temporal sensory uncertainty. Psychon Bull Rev 2024 (epub ahead of print). [PMID: 38388825] [DOI: 10.3758/s13423-024-02478-4]
Abstract
The ability to judge the temporal alignment of visual and auditory information is a prerequisite for multisensory integration and segregation. However, each temporal measurement is subject to error. Thus, when judging whether a visual and auditory stimulus were presented simultaneously, observers must rely on a subjective decision boundary to distinguish between measurement error and truly misaligned audiovisual signals. Here, we tested whether these decision boundaries are relaxed with increasing temporal sensory uncertainty, i.e., whether participants make the same type of adjustment an ideal observer would make. Participants judged the simultaneity of audiovisual stimulus pairs with varying temporal offset, while being immersed in different virtual environments. To obtain estimates of participants' temporal sensory uncertainty and simultaneity criteria in each environment, an independent-channels model was fitted to their simultaneity judgments. In two experiments, participants' simultaneity decision boundaries were predicted by their temporal uncertainty, which varied unsystematically with the environment. Hence, observers used a flexibly updated estimate of their own audiovisual temporal uncertainty to establish subjective criteria of simultaneity. This finding implies that, under typical circumstances, audiovisual simultaneity windows reflect an observer's cross-modal temporal uncertainty.
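The independent-channels model used in this study can be summarized compactly. The sketch below is an illustrative Python formalization of one standard version (the function and parameter names are mine, not the paper's fitting code): the audio-visual arrival-latency difference at a given SOA is treated as Gaussian with bias `mu` and spread `sigma` (the temporal uncertainty), and "simultaneous" is reported whenever the measured difference falls within a decision criterion of ±`criterion`.

```python
from math import erf, sqrt

def p_simultaneous(soa, mu, sigma, criterion):
    """Independent-channels model of simultaneity judgments.

    The measured audio-visual latency difference for a pair presented
    at stimulus onset asynchrony `soa` is Gaussian(soa + mu, sigma);
    the observer reports 'simultaneous' when the measurement falls
    within +/- criterion. Returns that probability.
    """
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    center = soa + mu
    return phi((criterion - center) / sigma) - phi((-criterion - center) / sigma)
```

Note how, at the point of no latency bias, relaxing the criterion in proportion to sigma leaves the predicted rate of "simultaneous" responses unchanged; scaling the criterion with one's own temporal uncertainty is exactly the ideal-observer-style adjustment the study tests for.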
Affiliation(s)
- Emma Cary, Department of Psychology, Tufts University, Medford, MA, 02155, USA
- Ilona Lahdesmaki, Department of Psychology, Tufts University, Medford, MA, 02155, USA
- Stephanie Badde, Department of Psychology, Tufts University, Medford, MA, 02155, USA
3
Zhu H, Tang X, Chen T, Yang J, Wang A, Zhang M. Audiovisual illusion training improves multisensory temporal integration. Conscious Cogn 2023; 109:103478. [PMID: 36753896] [DOI: 10.1016/j.concog.2023.103478]
Abstract
When we perceive external physical stimuli from the environment, the brain must remain somewhat tolerant of misaligned stimuli within a specific range, as multisensory signals are subject to different transmission and processing delays. Recent studies have shown that the width of the 'temporal binding window' (TBW) can be reduced by perceptual learning. However, to date, the vast majority of studies examining the mechanisms of perceptual learning have focused on experience-dependent effects, without reaching a consensus on how such learning relates to the perception of audiovisual illusions. Training with the sound-induced flash illusion (SiFI) reliably improves perceptual sensitivity. The present study used the classic auditory-dominated SiFI paradigm with feedback training to investigate the effect of five days of SiFI training on multisensory temporal integration, as evaluated by a simultaneity judgment (SJ) task and a temporal order judgment (TOJ) task. We demonstrate that audiovisual illusion training enhances the precision of multisensory temporal integration in the form of (i) a shift of the point of subjective simultaneity (PSS) towards true simultaneity (0 ms) and (ii) a narrowing of the TBW. The results are consistent with a Bayesian model of causal inference, suggesting that perceptual learning reduces susceptibility to the SiFI while improving the precision of audiovisual temporal estimation.
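The two summary measures reported here, PSS and TBW, can be read directly off simultaneity-judgment data. The following is a deliberately crude nonparametric sketch of that extraction (published work, including this study, typically fits a psychometric model instead; the function name and the half-maximum definition of the window are illustrative assumptions):

```python
import numpy as np

def pss_and_tbw(soas, p_simul):
    """Estimate PSS and TBW from simultaneity-judgment data.

    PSS: the SOA with the highest proportion of 'simultaneous'
    responses. TBW: the span of SOAs over which that proportion
    stays above half its peak value.
    """
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_simul, dtype=float)
    pss = soas[p.argmax()]                 # peak of the SJ curve
    above = soas[p >= p.max() / 2.0]       # SOAs inside the window
    tbw = above.max() - above.min()        # full width at half maximum
    return pss, tbw
```

On this reading, a training-induced shift of the PSS towards 0 ms and a shrinking TBW are simply the peak of the SJ curve moving to true simultaneity and the curve getting narrower.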
Affiliation(s)
- Haocheng Zhu, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaoyu Tang, School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Tingji Chen, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Jiajia Yang, Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
4
Mafi F, Tang MF, Afarinesh MR, Ghasemian S, Sheibani V, Arabzadeh E. Temporal order judgment of multisensory stimuli in rat and human. Front Behav Neurosci 2023; 16:1070452. [PMID: 36710957] [PMCID: PMC9879721] [DOI: 10.3389/fnbeh.2022.1070452]
Abstract
We do not fully understand the resolution at which temporal information is processed by different species. Here we employed a temporal order judgment (TOJ) task in rats and humans to test the temporal precision with which these species can detect the order of presentation of simple stimuli across the two modalities of vision and audition. Both species reported the order of audiovisual stimuli presented from a central location at a range of stimulus onset asynchronies (SOAs). While both species could reliably distinguish the temporal order of stimuli based on their sensory content (i.e., the modality label), rats outperformed humans at short SOAs (less than 100 ms) whereas humans outperformed rats at long SOAs (greater than 100 ms). Moreover, rats produced faster responses than humans. The reaction time data further revealed key differences in the decision process across the two species: at longer SOAs, reaction times increased in rats but decreased in humans. Finally, drift-diffusion modeling allowed us to isolate the contribution of various parameters, including evidence accumulation rate, lapse rate and bias, to the sensory decision. Consistent with the psychophysical findings, the model revealed higher temporal sensitivity and a higher lapse rate in rats compared to humans. These findings suggest that the two species applied different strategies for making perceptual decisions in the context of a multimodal TOJ task.
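The drift-diffusion framework invoked above can be illustrated in a few lines. The toy simulation below (parameter values and the choice labels are illustrative, not the study's fitted model) accumulates noisy evidence at a fixed drift rate until it reaches one of two bounds, yielding both a choice and a decision time per trial:

```python
import numpy as np

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence x starts at 0 and accumulates at `drift` units per second
    plus Gaussian noise until it reaches +threshold (choice 1, e.g.
    'audio first') or -threshold (choice 0, e.g. 'visual first'), or
    until `max_t` seconds elapse. Returns (choice, decision_time).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= threshold else 0
    return choice, t
```

In such a model, a larger drift rate (e.g. at long SOAs where the order is easy) produces both more accurate and faster decisions, which is why fitting it lets the authors separate sensitivity, lapses and bias from a single behavioral dataset.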
Affiliation(s)
- Fatemeh Mafi, Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran; Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Matthew F. Tang, Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
- Mohammad Reza Afarinesh, Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran; Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Sadegh Ghasemian, Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran; Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Vahid Sheibani, Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran; Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Ehsan Arabzadeh (corresponding author), Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran; Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran; Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
5
Scurry AN, Lovelady Z, Jiang F. Task-dependent audiovisual temporal sensitivity is not affected by stimulus intensity levels. Vision Res 2021; 186:71-79. [PMID: 34058622] [PMCID: PMC8273142] [DOI: 10.1016/j.visres.2021.05.006]
Abstract
Flexibility and robustness of multisensory temporal recalibration are paramount for maintaining perceptual constancy of the surrounding natural world. Different environments impart various impediments, distances and routes that alter the propagation times of the sight and sound cues comprising a multimodal event. The ability to rapidly calibrate and account for these external variations allows for maintained perception of synchrony, which is crucial for coherent and consistent perception. The two common paradigms used to compare the precision of temporal processing between experimental and control groups, the simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks, often use supra-threshold stimuli. However, few studies have specifically examined the effects of normalizing stimulus intensities to participants' unisensory detection thresholds. The current project presented multiple combinations of auditory and visual stimulus intensity levels, based on individual detection thresholds, during a TOJ and an SJ task. While no effect of stimulus intensity was found on temporal sensitivity or perceived temporal synchrony, there was a significant difference in point of subjective simultaneity (PSS) measures between tasks. In addition, PSS estimates were audio-leading, rather than visual-leading as previously reported, suggesting that exposure to the particular combinations of stimulus intensity levels used influenced temporal synchrony perception. Overall, these results support the use of supra-threshold stimuli in TOJ and SJ tasks as a way of minimizing the confound from differences in unisensory processing.
Affiliation(s)
- Alexandra N Scurry, Department of Psychology, University of Nevada, 1664 N. Virginia St., Reno, NV 89557, USA
- Zachary Lovelady, Department of Psychology, University of Nevada, 1664 N. Virginia St., Reno, NV 89557, USA
- Fang Jiang, Department of Psychology, University of Nevada, 1664 N. Virginia St., Reno, NV 89557, USA
6
Atilgan H, Bizley JK. Training enhances the ability of listeners to exploit visual information for auditory scene analysis. Cognition 2020; 208:104529. [PMID: 33373937] [PMCID: PMC7868888] [DOI: 10.1016/j.cognition.2020.104529]
Abstract
The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously we observed that temporally correlated changes in the size of a visual stimulus and the intensity in an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that were trained to discriminate modulation rate differences between temporally coherent audiovisual streams, although they did show an improvement in their overall performance. A control group did not change their performance between pretest and post-test and did not change how they exploited visual information. These results provide insights into how crossmodal experience may optimize multisensory integration.
7
Loss of Cntnap2 in the Rat Causes Autism-Related Alterations in Social Interactions, Stereotypic Behavior, and Sensory Processing. Autism Res 2020; 13:1698-1717. [DOI: 10.1002/aur.2364]
8
Siemann JK, Veenstra-VanderWeele J, Wallace MT. Approaches to Understanding Multisensory Dysfunction in Autism Spectrum Disorder. Autism Res 2020; 13:1430-1449. [PMID: 32869933] [PMCID: PMC7721996] [DOI: 10.1002/aur.2375]
Abstract
Abnormal sensory responses are a DSM-5 symptom of autism spectrum disorder (ASD), and research findings demonstrate altered sensory processing in ASD. Beyond difficulties with processing information within single sensory domains, including both hypersensitivity and hyposensitivity, difficulties in multisensory processing are becoming a core issue of focus in ASD. These difficulties may be targeted by treatment approaches such as "sensory integration," which is frequently applied in autism treatment but not yet based on clear evidence. Recently, psychophysical data have emerged to demonstrate multisensory deficits in some children with ASD. Unlike deficits in social communication, which are best understood in humans, sensory and multisensory changes offer a tractable marker of circuit dysfunction that is more easily translated into animal model systems to probe the underlying neurobiological mechanisms. Paralleling experimental paradigms that were previously applied in humans and larger mammals, we and others have demonstrated that multisensory function can also be examined behaviorally in rodents. Here, we review the sensory and multisensory difficulties commonly found in ASD, examining laboratory findings that relate these findings across species. Next, we discuss the known neurobiology of multisensory integration, drawing largely on experimental work in larger mammals, and extensions of these paradigms into rodents. Finally, we describe emerging investigations into multisensory processing in genetic mouse models related to autism risk. By detailing findings from humans to mice, we highlight the advantage of multisensory paradigms that can be easily translated across species, as well as the potential for rodent experimental systems to reveal opportunities for novel treatments. LAY SUMMARY: Sensory and multisensory deficits are commonly found in ASD and may result in cascading effects that impact social communication. By using similar experiments to those in humans, we discuss how studies in animal models may allow an understanding of the brain mechanisms that underlie difficulties in multisensory integration, with the ultimate goal of developing new treatments. Autism Res 2020, 13: 1430-1449. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Justin K Siemann, Department of Biological Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Jeremy Veenstra-VanderWeele, Department of Psychiatry, Columbia University, Center for Autism and the Developing Brain, New York Presbyterian Hospital, and New York State Psychiatric Institute, New York, New York, USA
- Mark T Wallace, Department of Psychiatry; Department of Psychology; Department of Hearing and Speech Sciences; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, Tennessee, USA
9
Compensatory Plasticity in the Lateral Extrastriate Visual Cortex Preserves Audiovisual Temporal Processing following Adult-Onset Hearing Loss. Neural Plast 2019; 2019:7946987. [PMID: 31223309] [PMCID: PMC6541963] [DOI: 10.1155/2019/7946987]
Abstract
Partial hearing loss can cause neurons in the auditory and audiovisual cortices to increase their responsiveness to visual stimuli; however, behavioral studies in hearing-impaired humans and rats have found that the perceptual ability to accurately judge the relative timing of auditory and visual stimuli is largely unaffected. To investigate the neurophysiological basis of how audiovisual temporal acuity may be preserved in the presence of hearing loss-induced crossmodal plasticity, we exposed adult rats to loud noise and two weeks later performed in vivo electrophysiological recordings in two neighboring regions within the lateral extrastriate visual (V2L) cortex: a multisensory zone known to be responsive to audiovisual stimuli (V2L-Mz) and a predominantly auditory zone (V2L-Az). To examine the cortical layer-specific effects at the level of postsynaptic potentials, a current source density (CSD) analysis was applied to the local field potential (LFP) data recorded in response to auditory and visual stimuli presented at various stimulus onset asynchronies (SOAs). As predicted, differential effects were observed in the two neighboring cortical regions following noise exposure. Most notably, an analysis of the strength of multisensory response interactions revealed that V2L-Mz lost its sensitivity to the relative timing of the auditory and visual stimuli, due to an increased responsiveness to visual stimulation that produced a prominent audiovisual response irrespective of the SOA. In contrast, not only did the V2L-Az in noise-exposed rats become more responsive to visual stimuli, but neurons in this region also inherited the capacity to process audiovisual stimuli with the temporal precision and specificity that was previously restricted to the V2L-Mz. Thus, the present study provides the first demonstration that audiovisual temporal processing can be preserved following moderate hearing loss via compensatory plasticity in higher-order sensory cortices, ultimately characterized by a functional transition in the cortical region capable of temporal sensitivity.
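The CSD analysis mentioned in this abstract is, in its standard form, a second spatial derivative of the LFP taken across equally spaced electrode contacts. A minimal sketch of that computation follows (the conductivity value and array layout are illustrative assumptions, not the study's processing pipeline):

```python
import numpy as np

def csd(lfp, spacing, conductivity=0.3):
    """Second-spatial-derivative CSD estimate.

    `lfp` is a (channels x time) array of local field potentials
    sampled at equally spaced cortical depths (`spacing` apart).
    The CSD at interior channel i is approximated as
    -conductivity * (phi[i-1] - 2*phi[i] + phi[i+1]) / spacing**2,
    so the output has two fewer channels than the input.
    """
    lfp = np.asarray(lfp, dtype=float)
    d2 = lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]   # discrete Laplacian in depth
    return -conductivity * d2 / spacing**2
```

A quick sanity check on this formulation: a potential that varies linearly with depth yields zero CSD (no net sources or sinks), while a quadratic depth profile yields a constant sink or source across the probe.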