1. Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024; 36:303-326. [PMID: 38010315] [DOI: 10.1162/jocn_a_02090]
Abstract
Auditory commands are often executed more efficiently than visual commands. However, empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect and found consistently enhanced reaction time (RT) benefits for the matched auditory cues compared with the matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: The auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang, South China Normal University, Guangzhou, China
- Ying Fang, South China Normal University, Guangzhou, China
- Qiang Guo, Guangdong Sanjiu Brain Hospital, Guangzhou, China
- Lu Shen, South China Normal University, Guangzhou, China
- Qi Chen, South China Normal University, Guangzhou, China

2. Mathias B, von Kriegstein K. Enriched learning: behavior, brain, and computation. Trends Cogn Sci 2023; 27:81-97. [PMID: 36456401] [DOI: 10.1016/j.tics.2022.10.007]
Abstract
The presence of complementary information across multiple sensory or motor modalities during learning, referred to as multimodal enrichment, can markedly benefit learning outcomes. Why is this? Here, we integrate cognitive, neuroscientific, and computational approaches to understanding the effectiveness of enrichment and discuss recent neuroscience findings indicating that crossmodal responses in sensory and motor brain regions causally contribute to the behavioral benefits of enrichment. The findings provide novel evidence for multimodal theories of enriched learning, challenge assumptions of longstanding cognitive theories, and provide counterevidence to unimodal neurobiologically inspired theories. Enriched educational methods are likely effective not only because they may engage greater levels of attention or deeper levels of processing, but also because multimodal interactions in the brain can enhance learning and memory.
Affiliation(s)
- Brian Mathias, School of Psychology, University of Aberdeen, Aberdeen, UK; Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Katharina von Kriegstein, Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany

3. Pesnot Lerousseau J, Parise CV, Ernst MO, van Wassenhove V. Multisensory correlation computations in the human brain identified by a time-resolved encoding model. Nat Commun 2022; 13:2489. [PMID: 35513362] [PMCID: PMC9072402] [DOI: 10.1038/s41467-022-29687-6]
Abstract
Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for the resolution of the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirmed that the Multisensory Correlation Detector explains causal inference and temporal order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of the fits, which were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.
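
For orientation, the core of the Multisensory Correlation Detector can be sketched as a pair of opponent correlation units operating on low-pass-filtered auditory and visual streams: one output (correlation) supports causal inference, the other (lag) supports temporal order. The sketch below is a minimal illustration, assuming simple exponential filters; the function names, time constants, and readout are illustrative choices, not the fitted model of the study.

```python
import numpy as np

def lowpass(x, tau, dt):
    """Causal exponential low-pass filter with time constant tau (s)."""
    t = np.arange(0.0, 5 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(x, kernel)[: len(x)]

def mcd(audio, video, dt=0.001, tau_fast=0.05, tau_slow=0.15):
    """Toy Multisensory Correlation Detector with illustrative constants."""
    a_fast, a_slow = lowpass(audio, tau_fast, dt), lowpass(audio, tau_slow, dt)
    v_fast, v_slow = lowpass(video, tau_fast, dt), lowpass(video, tau_slow, dt)
    u_av = a_fast * v_slow           # subunit tuned to audio-leading input
    u_va = v_fast * a_slow           # subunit tuned to video-leading input
    mcd_corr = np.mean(u_av * u_va)  # correlation output -> causal inference
    mcd_lag = np.mean(u_av - u_va)   # opponent output -> temporal order
    return mcd_corr, mcd_lag

# Usage: impulse trains with audio lagging video by 50 ms.
fs = 1000
video = np.zeros(1000)
video[[200, 500, 800]] = 1.0
audio = np.roll(video, 50)
print(mcd(audio, video, dt=1 / fs))
```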
Affiliation(s)
- Jacques Pesnot Lerousseau, Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France; Applied Cognitive Psychology, Ulm University, Ulm, Germany; Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191 Gif-sur-Yvette, France
- Marc O Ernst, Applied Cognitive Psychology, Ulm University, Ulm, Germany
- Virginie van Wassenhove, Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191 Gif-sur-Yvette, France

4. Opoku-Baah C, Schoenhaut AM, Vassall SG, Tovar DA, Ramachandran R, Wallace MT. Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol 2021; 22:365-386. [PMID: 34014416] [PMCID: PMC8329114] [DOI: 10.1007/s10162-021-00789-0]
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that derive from this combination of information and that shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the current state of understanding of this topic. Following a general introduction, the review is divided into five sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence on audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built on available psychophysical data and that seek to provide greater mechanistic insight into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
Affiliation(s)
- Collins Opoku-Baah, Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Adriana M Schoenhaut, Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Sarah G Vassall, Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- David A Tovar, Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ramnarayan Ramachandran, Vanderbilt Brain Institute; Department of Psychology, Vanderbilt University; Department of Hearing and Speech, Vanderbilt University Medical Center; Vanderbilt Vision Research Center, Nashville, TN, USA
- Mark T Wallace, Vanderbilt Brain Institute; Department of Psychology, Vanderbilt University; Department of Hearing and Speech and Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center; Department of Pharmacology, Vanderbilt University; Vanderbilt Vision Research Center, Nashville, TN, USA

5. Chen S, Shi Z, Müller HJ, Geyer T. Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search. Sci Rep 2021; 11:9439. [PMID: 33941832] [PMCID: PMC8093296] [DOI: 10.1038/s41598-021-88946-6]
Abstract
Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this question, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session; importantly, it persisted even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated, as well as decreasing the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
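
The drift-diffusion interpretation above maps contextual cueing onto a higher drift rate v (faster evidence accumulation) and a smaller boundary separation a (less evidence required for a decision). A minimal simulation sketch follows; the parameter values and the repeated-versus-novel mapping are illustrative assumptions, not the fitted estimates of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(v, a, t0=0.3, dt=0.001, sigma=1.0, n_trials=1000):
    """Simulate response times of a drift-diffusion process.

    v: drift rate (evidence-accumulation rate); a: boundary separation
    (evidence needed for a decision); t0: non-decision time (s).
    Accumulation starts unbiased, midway between the boundaries."""
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t0 + t
    return rts

# Illustrative contrast: repeated (cued) contexts modeled as a higher drift
# rate and slightly lower boundary than novel contexts (values assumed).
rt_repeated = simulate_ddm(v=1.4, a=1.6)
rt_novel = simulate_ddm(v=1.0, a=1.8)
print(f"mean RT repeated: {rt_repeated.mean():.3f} s, novel: {rt_novel.mean():.3f} s")
```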
Affiliation(s)
- Siyi Chen, Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany
- Zhuanghua Shi, Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany
- Hermann J Müller, Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany
- Thomas Geyer, Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 München, Germany

6. La Rocca D, Wendt H, van Wassenhove V, Ciuciu P, Abry P. Revisiting Functional Connectivity for Infraslow Scale-Free Brain Dynamics Using Complex Wavelets. Front Physiol 2021; 11:578537. [PMID: 33488390] [PMCID: PMC7818786] [DOI: 10.3389/fphys.2020.578537]
Abstract
The analysis of human brain functional networks is achieved by computing functional connectivity indices reflecting phase coupling and interactions between remote brain regions. In magneto- and electroencephalography, the most frequently used functional connectivity indices are constructed from Fourier-based cross-spectral estimation applied to specific fast, band-limited oscillatory regimes. Recently, infraslow arrhythmic fluctuations (below 1 Hz) were recognized as playing a leading role in spontaneous brain activity. The present work proposes assessing functional connectivity from fractal dynamics, thus extending functional connectivity to the infraslow, arrhythmic, scale-free temporal dynamics of M/EEG-quantified brain activity. Instead of being based on Fourier analysis, new Imaginary Coherence and weighted Phase Lag indices are constructed from complex-wavelet representations. Their performance is first assessed on synthetic data by means of Monte Carlo simulations, where they compare favorably against the classical Fourier-based indices. The new indices are then applied to MEG data collected from 36 individuals both at rest and during the learning of a visual motion discrimination task. They demonstrate a higher statistical sensitivity than their Fourier counterparts in capturing significant and relevant functional interactions in the infraslow regime and modulations from rest to task. Notably, the consistent overall increase in functional connectivity assessed from fractal dynamics from rest to task correlated with a change in temporal dynamics as well as with improved performance in task completion, which suggests that the complex-wavelet weighted Phase Lag index is the sole index able to capture brain plasticity in the infraslow scale-free regime.
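
For readers wanting the computational gist: with complex wavelet coefficients W_x and W_y and time-resolved cross-spectrum S_xy = W_x conj(W_y), imaginary coherence is built from Im(S_xy) normalized by power, and the weighted Phase Lag index is |E[Im(S_xy)]| / E[|Im(S_xy)|]. The sketch below uses these textbook definitions with a hand-rolled Morlet wavelet; the paper's exact estimators and wavelet family may differ.

```python
import numpy as np

def morlet_coeffs(x, freq, fs, n_cycles=7):
    """Complex Morlet wavelet coefficients of x at a single frequency."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t - t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
    return np.convolve(x, wavelet, mode="same")

def imcoh_wpli(x, y, freq, fs):
    """Imaginary coherence and weighted phase-lag index from wavelet
    cross-spectra (textbook definitions; not the paper's exact estimators)."""
    wx = morlet_coeffs(x, freq, fs)
    wy = morlet_coeffs(y, freq, fs)
    sxy = wx * np.conj(wy)                    # time-resolved cross-spectrum
    imcoh = np.abs(np.mean(sxy.imag)) / np.sqrt(
        np.mean(np.abs(wx) ** 2) * np.mean(np.abs(wy) ** 2)
    )
    wpli = np.abs(np.mean(sxy.imag)) / np.mean(np.abs(sxy.imag))
    return imcoh, wpli

# Usage: two 10 Hz signals with a quarter-cycle lag plus independent noise.
fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2) + 0.5 * rng.standard_normal(t.size)
print(imcoh_wpli(x, y, freq=10.0, fs=fs))
```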
Affiliation(s)
- Daria La Rocca, CEA, NeuroSpin, University of Paris-Saclay, Paris, France; Inria Saclay Île-de-France, Parietal, University of Paris-Saclay, Paris, France
- Herwig Wendt, IRIT, CNRS, University of Toulouse, Toulouse, France
- Virginie van Wassenhove, CEA, NeuroSpin, University of Paris-Saclay, Paris, France; INSERM U992, Collège de France, University of Paris-Saclay, Paris, France
- Philippe Ciuciu, CEA, NeuroSpin, University of Paris-Saclay, Paris, France; Inria Saclay Île-de-France, Parietal, University of Paris-Saclay, Paris, France
- Patrice Abry, Univ. Lyon, ENS de Lyon, Univ. Claude Bernard, CNRS, Laboratoire de Physique, Lyon, France

7. La Rocca D, Ciuciu P, Engemann DA, van Wassenhove V. Emergence of β and γ networks following multisensory training. Neuroimage 2020; 206:116313. [PMID: 31676416] [PMCID: PMC7355235] [DOI: 10.1016/j.neuroimage.2019.116313]
Abstract
Our perceptual reality relies on inferences about the causal structure of the world given by multiple sensory inputs. In ecological settings, multisensory events that cohere in time and space benefit inferential processes: hearing and seeing a speaker enhances speech comprehension, and the acoustic changes of flapping wings naturally pace the motion of a flock of birds. Here, we asked how a few minutes of (multi)sensory training could shape cortical interactions in a subsequent unisensory perceptual task. For this, we investigated oscillatory activity and functional connectivity as a function of individuals' sensory history during training. Human participants performed a visual motion coherence discrimination task while being recorded with magnetoencephalography. Three groups of participants performed the same task with visual stimuli only, while listening to acoustic textures temporally comodulated with the strength of visual motion coherence, or with auditory noise uncorrelated with visual motion. The functional connectivity patterns before and after training were contrasted to resting-state networks to assess the variability of common task-relevant networks, and the emergence of new functional interactions as a function of sensory history. One major finding is the emergence of large-scale synchronization in the high γ (gamma: 60-120 Hz) and β (beta: 15-30 Hz) bands for individuals who underwent comodulated multisensory training. The post-training network involved prefrontal, parietal, and visual cortices. Our results suggest that the integration of evidence and decision-making strategies become more efficient following congruent multisensory training through plasticity in network routing and oscillatory regimes.
Affiliation(s)
- Daria La Rocca, CEA/DRF/Joliot, Université Paris-Saclay, 91191 Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, 91120 Palaiseau, France
- Philippe Ciuciu, CEA/DRF/Joliot, Université Paris-Saclay, 91191 Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, 91120 Palaiseau, France
- Denis-Alexander Engemann, CEA/DRF/Joliot, Université Paris-Saclay, 91191 Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, 91120 Palaiseau, France
- Virginie van Wassenhove, CEA/DRF/Joliot, Université Paris-Saclay, 91191 Gif-sur-Yvette, France; Cognitive Neuroimaging Unit, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif-sur-Yvette, France

8. Cuturi LF, Tonelli A, Cappagli G, Gori M. Coarse to Fine Audio-Visual Size Correspondences Develop During Primary School Age. Front Psychol 2019; 10:2068. [PMID: 31572264] [PMCID: PMC6751278] [DOI: 10.3389/fpsyg.2019.02068]
Abstract
Developmental studies have shown that children can associate visual size with non-visual and apparently unrelated stimuli, such as pure tone frequencies. Most research to date has focused on audio-visual size associations, showing that children can associate low pure tone frequencies with large objects and high pure tone frequencies with small objects. Researchers relate these findings to coarse associations, i.e., less precise associations based on binary stimulus categories such as low versus high frequencies and large versus small visual stimuli. This study investigates how finer, more precise crossmodal audio-visual associations develop during primary school age (from 6 to 11 years old). To unveil such patterns, we took advantage of a range of auditory pure tones and tested how primary school children match sounds with visually presented shapes. We tested 66 children (6-11 years old) in an audio-visual matching task involving a range of pure tone frequencies. Visual stimuli were circles or angles of different sizes. We asked participants to indicate the shape matching the sound. All children associated large objects/angles with low-pitch sounds, and small objects/angles with high-pitch sounds. Interestingly, older children made greater use of intermediate visual sizes in their responses. Audio-visual associations for finer differences between stimulus features, such as size and pure tone frequency, may develop later, depending on the maturation of supramodal size perception processes. Considering our results, we suggest that audio-visual size correspondences can be used for educational purposes by aiding the discrimination of sizes, including angles of different aperture, and that their use should be shaped according to children's specific developmental stage.
Affiliation(s)
- Luigi F. Cuturi, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Alessia Tonelli, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy; Fondazione "Istituto Neurologico Casimiro Mondino" (IRCCS), Pavia, Italy
- Monica Gori, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy

9. La Rocca D, Zilber N, Abry P, van Wassenhove V, Ciuciu P. Self-similarity and multifractality in human brain activity: A wavelet-based analysis of scale-free brain dynamics. J Neurosci Methods 2018; 309:175-187. [DOI: 10.1016/j.jneumeth.2018.09.010]

10. Alpha Oscillations Reduce Temporal Long-Range Dependence in Spontaneous Human Brain Activity. J Neurosci 2017; 38:755-764. [PMID: 29167403] [DOI: 10.1523/jneurosci.0831-17.2017]
Abstract
Ongoing neural dynamics comprise both frequency-specific oscillations and broadband features, such as long-range dependence (LRD). Despite both being behaviorally relevant, little is known about their potential interactions. In humans, 8-12 Hz α oscillations constitute the strongest deviation from 1/f power-law scaling, the signature of LRD. We postulated that α oscillations, believed to exert active inhibitory gating, downmodulate the temporal width of LRD in slower ongoing brain activity. In two independent "resting-state" datasets (electroencephalography surface recordings and magnetoencephalography source reconstructions), both across space and dynamically over time, the power of α activity covaried with the power-spectral slope below 5 Hz (i.e., greater α activity shortened LRD). Causality of α activity dynamics was implied by its temporal precedence over changes of slope. A model in which power-law fluctuations of the α envelope inhibit baseline activity closely replicated our results. Thus, α oscillations may provide an active control mechanism to adaptively regulate LRD of brain activity at slow temporal scales, thereby shaping internal states and cognitive processes.

SIGNIFICANCE STATEMENT: The two prominent features of ongoing brain activity are oscillations and temporal long-range dependence. Both shape behavioral performance, but little is known about their interaction. Here, we demonstrate such an interaction in EEG and MEG recordings of task-free human brain activity. Specifically, we show that spontaneous dynamics in alpha activity explain ensuing variations of dependence in the low and ultra-low-frequency range. In modeling, two features of alpha oscillations are critical to account for the observed effects on long-range dependence: scale-free properties of the alpha oscillations themselves, and a modulation of baseline levels, presumably inhibitory. Both these properties have been observed empirically, and our study hence establishes alpha oscillations as a regulatory mechanism governing long-range dependence or "memory" in slow ongoing brain activity.
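
The central covariation analysis (α power against the low-frequency spectral slope) can be sketched in a few lines: estimate a power spectrum per time window, take 8-12 Hz band power and the slope of log-power versus log-frequency below 5 Hz, then correlate the two across windows. The window lengths, Welch estimator, and frequency bounds below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import welch

def alpha_power_and_slope(x, fs):
    """Alpha-band power and low-frequency spectral slope for one window.

    The slope of log10(PSD) versus log10(f) below 5 Hz indexes long-range
    dependence (a steeper negative slope means stronger LRD)."""
    f, pxx = welch(x, fs=fs, nperseg=int(10 * fs))   # 10-s Welch segments
    alpha = pxx[(f >= 8) & (f <= 12)].mean()         # 8-12 Hz band power
    lo = (f > 0.1) & (f < 5.0)                       # slow, scale-free range
    slope = np.polyfit(np.log10(f[lo]), np.log10(pxx[lo]), 1)[0]
    return alpha, slope

# Toy usage: window-wise estimates over a long recording, then their
# correlation across windows (here on placeholder white noise).
fs = 250.0
rng = np.random.default_rng(0)
x = rng.standard_normal(int(600 * fs))               # 10-min "recording"
wins = x.reshape(-1, int(60 * fs))                   # 1-min windows
pairs = np.array([alpha_power_and_slope(w, fs) for w in wins])
r = np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]
print(f"alpha power vs slope correlation: r = {r:.2f}")
```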

11. The Impact of Feedback on the Different Time Courses of Multisensory Temporal Recalibration. Neural Plast 2017; 2017:3478742. [PMID: 28316841] [PMCID: PMC5339631] [DOI: 10.1155/2017/3478742]
Abstract
The capacity to rapidly adjust perceptual representations confers a fundamental advantage when confronted with a constantly changing world. Unexplored is how feedback regarding sensory judgments (top-down factors) interacts with sensory statistics (bottom-up factors) to drive long- and short-term recalibration of multisensory perceptual representations. Here, we examined the time course of both cumulative and rapid temporal perceptual recalibration for individuals completing an audiovisual simultaneity judgment task in which they were provided with varying degrees of feedback. We find that, in the presence of feedback (as opposed to simple sensory exposure), temporal recalibration is more robust. Additionally, differential time courses are seen for cumulative and rapid recalibration depending on the nature of the feedback provided. Whereas cumulative recalibration effects relied more heavily on feedback that informs (i.e., negative feedback) rather than confirms (i.e., positive feedback) the judgment, rapid recalibration showed the opposite tendency. Furthermore, differential effects on rapid and cumulative recalibration were seen when the reliability of feedback was altered. Collectively, our findings illustrate that feedback signals promote and sustain audiovisual recalibration over the course of cumulative learning and enhance rapid trial-to-trial learning. Furthermore, given the differential effects seen for cumulative and rapid recalibration, these processes may function via distinct mechanisms.

12. Sun Y, Hickey TJ, Shinn-Cunningham B, Sekuler R. Catching Audiovisual Interactions With a First-Person Fisherman Video Game. Perception 2016. [DOI: 10.1177/0301006616682755]
Abstract
The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated at either 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.
Affiliation(s)
- Yile Sun, Volen Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Timothy J. Hickey, Department of Computer Science, Brandeis University, Waltham, MA, USA
- Robert Sekuler, Volen Center for Complex Systems, Brandeis University, Waltham, MA, USA

13. Rosenblum LD, Dorsi J, Dias JW. The Impact and Status of Carol Fowler's Supramodal Theory of Multisensory Speech Perception. Ecol Psychol 2016. [DOI: 10.1080/10407413.2016.1230373]

14. Rishiq D, Rao A, Koerner T, Abrams H. Can a Commercially Available Auditory Training Program Improve Audiovisual Speech Performance? Am J Audiol 2016; 25:308-312. [PMID: 27768194] [DOI: 10.1044/2016_aja-16-0017]
Abstract
PURPOSE: The goal of this study was to determine whether hearing aids in combination with computer-based auditory training improve audiovisual (AV) performance compared with the use of hearing aids alone.
METHOD: Twenty-four participants were randomized into an experimental group (hearing aids plus ReadMyQuips [RMQ] training) and a control group (hearing aids only). The Multimodal Lexical Sentence Test for Adults (Kirk et al., 2012) was used to measure auditory-only (AO) and AV speech perception performance at three signal-to-noise ratios (SNRs). Participants were tested at the time of hearing aid fitting (pretest), after 4 weeks of hearing aid use (posttest I), and again after 4 weeks of RMQ training (posttest II).
RESULTS: Results did not reveal an effect of training. As expected, interactions were found between (a) modality (AO vs. AV) and SNR and (b) test (pretest vs. posttests) and SNR.
CONCLUSION: Data do not show a significant effect of RMQ training on AO or AV performance as measured using the Multimodal Lexical Sentence Test for Adults.
Affiliation(s)
- Dania Rishiq, Audiology Section, Otorhinolaryngology Department, Mayo Clinic, Jacksonville, FL
- Aparna Rao, Department of Speech and Hearing Science, Arizona State University, Tempe
- Tess Koerner, Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis
- Harvey Abrams, Starkey Hearing Technologies, Eden Prairie, MN; University of South Florida, Tampa

15. Gau R, Noppeney U. How prior expectations shape multisensory perception. Neuroimage 2016; 124:876-886. [DOI: 10.1016/j.neuroimage.2015.09.045]

16. Kafaligonul H, Oluk C. Audiovisual associations alter the perception of low-level visual motion. Front Integr Neurosci 2015; 9:26. [PMID: 25873869] [PMCID: PMC4379893] [DOI: 10.3389/fnint.2015.00026]
Abstract
Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level, pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system, and that early-level visual motion processing plays a role as well.
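
Reverse-phi motion, used here to isolate low-level motion processing, displaces dots while inverting their contrast polarity on every frame, which drives first-order motion detectors in the direction opposite to the physical displacement. A minimal frame generator follows; the stimulus sizes, dot counts, and step values are hypothetical, not the study's stimulus code.

```python
import numpy as np

def reverse_phi_frames(n_frames=60, n_dots=100, size=256, step=4, seed=0):
    """Generate frames of a reverse-phi random-dot stimulus.

    Dots jump `step` pixels rightward on each frame while their contrast
    polarity flips between white and black against a mid-gray background,
    so low-level motion detectors signal leftward motion."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, size, n_dots)
    y = rng.uniform(0, size, n_dots)
    frames = np.full((n_frames, size, size), 0.5)  # mid-gray background
    lum = 1.0                                      # start with white dots
    for f in range(n_frames):
        frames[f, y.astype(int), x.astype(int) % size] = lum
        x += step                                  # rightward displacement
        lum = 1.0 - lum                            # invert polarity: reverse phi
    return frames

frames = reverse_phi_frames()
print(frames.shape)  # (60, 256, 256)
```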
Affiliation(s)
- Hulusi Kafaligonul, National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Can Oluk, Department of Psychology, Bilkent University, Ankara, Turkey

17. Martin JR, Kösem A, van Wassenhove V. Hysteresis in audiovisual synchrony perception. PLoS One 2015; 10:e0119365. [PMID: 25774653] [PMCID: PMC4361681] [DOI: 10.1371/journal.pone.0119365]
Abstract
The effect of stimulation history on the perception of a current event can yield two opposite effects, namely adaptation or hysteresis: the perception of the current event goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested whether perceptual hysteresis could also be observed, over and above adaptation, in AV timing perception by varying several experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic conditions, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective conditions, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception and have strong implications for the comparative study of hysteresis and adaptation phenomena.
Affiliation(s)
- Jean-Rémy Martin, Université Paris VI (UPMC), Institut d’Étude de la Cognition (IEC) & Institut Jean-Nicod (IJN, ENS-EHESS-CNRS), Paris, France
- Anne Kösem, CEA, DSV/I2BM, NeuroSpin, INSERM U992, Cognitive Neuroimaging Unit, Univ Paris-Sud, Gif-sur-Yvette, France
- Virginie van Wassenhove, CEA, DSV/I2BM, NeuroSpin, INSERM U992, Cognitive Neuroimaging Unit, Univ Paris-Sud, Gif-sur-Yvette, France

18. Eberhardt SP, Auer ET, Bernstein LE. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training. Front Hum Neurosci 2014; 8:829. [PMID: 25400566] [PMCID: PMC4215828] [DOI: 10.3389/fnhum.2014.00829]
Abstract
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
Affiliation(s)
- Silvio P Eberhardt, Communication Neuroscience Laboratory, Department of Speech and Hearing Sciences, George Washington University, Washington, DC, USA
- Edward T Auer, Communication Neuroscience Laboratory, Department of Speech and Hearing Sciences, George Washington University, Washington, DC, USA
- Lynne E Bernstein, Communication Neuroscience Laboratory, Department of Speech and Hearing Sciences, George Washington University, Washington, DC, USA

19. Ciuciu P, Abry P, He BJ. Interplay between functional connectivity and scale-free dynamics in intrinsic fMRI networks. Neuroimage 2014; 95:248-263. [PMID: 24675649] [PMCID: PMC4043862] [DOI: 10.1016/j.neuroimage.2014.03.047]
Abstract
Studies employing functional connectivity-type analyses have established that spontaneous fluctuations in functional magnetic resonance imaging (fMRI) signals are organized within large-scale brain networks. Meanwhile, fMRI signals have been shown to exhibit 1/f-type power spectra, a hallmark of scale-free dynamics. We studied the interplay between functional connectivity and scale-free dynamics in fMRI signals, utilizing the fractal connectivity framework, a multivariate extension of the univariate fractional Gaussian noise model, which relies on a wavelet formulation for robust parameter estimation. We applied this framework to fMRI data acquired from healthy young adults at rest and while performing a visual detection task. First, we found that scale-invariance existed beyond univariate dynamics, being present also in bivariate cross-temporal dynamics. Second, we observed that frequencies within the scale-free range do not contribute evenly to inter-regional connectivity, with a systematically stronger contribution of the lowest frequencies, both at rest and during task. Third, in addition to a decrease of the Hurst exponent and inter-regional correlations, task performance modified cross-temporal dynamics, inducing a larger contribution of the highest frequencies within the scale-free range to global correlation. Lastly, we found that across individuals, a weaker task modulation of the frequency contribution to inter-regional connectivity was associated with better task performance, manifesting as shorter and less variable reaction times. These findings bring together two related fields that have hitherto been studied separately, resting-state networks and scale-free dynamics, and show that scale-free dynamics of human brain activity manifest in cross-regional interactions as well.
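
As a pointer to the univariate core of the fractal framework: for fractional-Gaussian-noise-like signals, the variance of discrete wavelet detail coefficients scales as Var(d_j) ∝ 2^(j(2H-1)) across octaves j, so the Hurst exponent H falls out of a log-log regression over scales. The sketch below uses PyWavelets with a Daubechies wavelet and unweighted least squares; the paper's multivariate estimator is more refined.

```python
import numpy as np
import pywt  # PyWavelets

def hurst_wavelet(x, wavelet="db3", j1=3, j2=8):
    """Estimate the Hurst exponent of a signal via wavelet log-variance.

    For fractional Gaussian noise, Var(d_j) ~ 2^(j(2H-1)) across octave
    scales j, so a linear fit of log2 Var(d_j) on j yields H."""
    coeffs = pywt.wavedec(x, wavelet, level=j2)
    # pywt orders coefficients coarse-to-fine: [a_J, d_J, ..., d_1]
    logvar, scales = [], []
    for j in range(j1, j2 + 1):
        d_j = coeffs[len(coeffs) - j]   # detail coefficients at octave j
        logvar.append(np.log2(np.mean(d_j ** 2)))
        scales.append(j)
    slope = np.polyfit(scales, logvar, 1)[0]
    return (slope + 1) / 2              # H for fGn-like signals

# Toy check: white noise should give H close to 0.5.
rng = np.random.default_rng(1)
print(f"H(white noise) ~ {hurst_wavelet(rng.standard_normal(2**14)):.2f}")
```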
Affiliation(s)
- Philippe Ciuciu, CEA, NeuroSpin center, INRIA, Parietal Team, Bât. 145, F-91191 Gif-sur-Yvette, France
- Patrice Abry, CNRS, UMR 5672, Physics Department, ENS Lyon, F-69007 Lyon, France
- Biyu J He, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA