1. Ishida K, Nittono H. Multidimensional regularity processing in music: an examination using redundant signals effect. Exp Brain Res 2024; 242:2207-2217. [PMID: 39012473] [DOI: 10.1007/s00221-024-06861-4]
Abstract
Music is based on various regularities, ranging from the repetition of physical sounds to theoretically organized harmony and counterpoint. How are these multidimensional regularities processed when we listen to music? The present study focuses on the redundant signals effect (RSE) as a novel approach to untangling the relationship between regularities in music. The RSE refers to a shorter reaction time (RT) when two or three signals are presented simultaneously than when only one of them is presented, and it provides evidence that these signals are processed concurrently. In two experiments, chords that deviated from tonal (harmonic) and acoustic (intensity and timbre) regularities were presented occasionally in the final position of short chord sequences. The participants were asked to detect all deviant chords while withholding their responses to non-deviant chords (i.e., a Go/NoGo task). RSEs were observed in all double- and triple-deviant combinations, reflecting the processing of multidimensional regularities. Further analyses suggested evidence of coactivation by separate perceptual modules for the combination of tonal and acoustic deviants, but not for the combination of two acoustic deviants. These results imply that tonal and acoustic regularities are different enough to be processed as two discrete pieces of information. Examining the process underlying the RSE may help elucidate how multiple regularities in music are processed in relation to one another.
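The coactivation analysis this abstract alludes to is commonly carried out by testing Miller's race-model inequality, P(RT_AB ≤ t) ≤ P(RT_A ≤ t) + P(RT_B ≤ t): if redundant-signal RTs beat that bound, a race between separate channels cannot explain the speedup. A minimal sketch of such a test (the function and the simulated reaction times are illustrative, not taken from the study):

```python
import numpy as np

def race_model_violation(rt_a, rt_b, rt_redundant, t_grid):
    """Evaluate Miller's race-model inequality on empirical RT samples.

    For each time t, a pure race between two single-signal channels predicts
    P(RT_redundant <= t) <= P(RT_a <= t) + P(RT_b <= t). Positive return
    values mark violations, taken as evidence of coactivation.
    """
    def ecdf(rts, t):
        # Empirical CDF: fraction of RTs at or below t.
        return np.searchsorted(np.sort(rts), t, side="right") / len(rts)
    return np.array([
        ecdf(rt_redundant, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_b, t))
        for t in t_grid
    ])

# Hypothetical RTs (ms): redundant targets answered faster than either single one.
rng = np.random.default_rng(0)
rt_a = rng.normal(400, 50, 1000)    # e.g., tonal deviant alone
rt_b = rng.normal(420, 50, 1000)    # e.g., intensity deviant alone
rt_ab = rng.normal(300, 30, 1000)   # double deviant
violation = race_model_violation(rt_a, rt_b, rt_ab, np.arange(200, 701, 10))
coactivation = violation.max() > 0  # True for these simulated RTs
```

In practice the violation is assessed statistically across participants (e.g., with a permutation test over time bins) rather than read off a single maximum.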
Affiliation(s)
- Kai Ishida
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Osaka, Osaka, 565-0871, Japan.
- Hiroshi Nittono
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Osaka, Osaka, 565-0871, Japan

2. Yilmaz SK, Kafaligonul H. Attentional demands in the visual field modulate audiovisual interactions in the temporal domain. Hum Brain Mapp 2024; 45:e70009. [PMID: 39185690] [PMCID: PMC11345635] [DOI: 10.1002/hbm.70009]
Abstract
Attention and crossmodal interactions are closely linked through a complex interplay at different stages of sensory processing. Within the context of motion perception, previous research revealed that attentional demands alter audiovisual interactions in the temporal domain. In the present study, we aimed to understand the neurophysiological correlates of these attentional modulations. We utilized an audiovisual motion paradigm that elicits auditory time interval effects on perceived visual speed. The audiovisual interactions in the temporal domain were quantified by changes in perceived visual speed across different auditory time intervals. We manipulated attentional demands in the visual field by having a secondary task on a stationary object (i.e., single- vs. dual-task conditions). When the attentional demands were high (i.e., dual-task condition), there was a significant decrease in the effects of auditory time interval on perceived visual speed, suggesting a reduction in audiovisual interactions. Moreover, we found significant differences in both early and late neural activities elicited by visual stimuli across task conditions (single vs. dual), reflecting an overall increase in attentional demands in the visual field. Consistent with the changes in perceived visual speed, the audiovisual interactions in neural signals declined in the late positive component range. Compared with the findings from previous studies using different paradigms, our findings support the view that attentional modulations of crossmodal interactions are not unitary and depend on task-specific components. They also have important implications for motion processing and speed estimation in daily life situations where sensory relevance and attentional demands constantly change.
Affiliation(s)
- Seyma Koc Yilmaz
- Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Department of Neuroscience, Bilkent University, Ankara, Turkey
- Hulusi Kafaligonul
- Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Department of Neuroscience, Bilkent University, Ankara, Turkey
- Neuroscience and Neurotechnology Center of Excellence (NÖROM), Faculty of Medicine, Gazi University, Ankara, Turkey

3. Fisher VL, Dean CL, Nave CS, Parkins EV, Kerkhoff WG, Kwakye LD. Increases in sensory noise predict attentional disruptions to audiovisual speech perception. Front Hum Neurosci 2023; 16:1027335. [PMID: 36684833] [PMCID: PMC9846366] [DOI: 10.3389/fnhum.2022.1027335]
Abstract
We receive information about the world around us from multiple senses, which combine in a process known as multisensory integration. Multisensory integration has been shown to depend on attention; however, the neural mechanisms underlying this effect are poorly understood. The current study investigates whether changes in sensory noise explain the effect of attention on multisensory integration and whether attentional modulations to multisensory integration occur via modality-specific mechanisms. A task based on the McGurk illusion was used to measure multisensory integration while attention was manipulated via a concurrent auditory or visual task. Sensory noise was measured within modality based on variability in unisensory performance and was used to predict attentional changes to McGurk perception. Consistent with previous studies, reports of the McGurk illusion decreased when accompanied by a secondary task; however, this effect was stronger for the secondary visual (as opposed to auditory) task. While auditory noise was not influenced by either secondary task, visual noise increased with the addition of the secondary visual task specifically. Interestingly, visual noise accounted for significant variability in attentional disruptions to the McGurk illusion. Overall, these results strongly suggest that sensory noise may underlie attentional alterations to multisensory integration in a modality-specific manner. Future studies are needed to determine whether this finding generalizes to other types of multisensory integration and attentional manipulations. This line of research may inform future studies of attentional alterations to sensory processing in neurological disorders such as schizophrenia, autism, and ADHD.
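The "noise predicts disruption" claim is the kind of relationship one can check with a simple per-participant regression of illusion change on noise change. A hedged sketch (the variable names and numbers are hypothetical, not the study's data or analysis code):

```python
import numpy as np

def noise_predicts_disruption(noise_increase, mcgurk_drop):
    """Least-squares fit of mcgurk_drop ~ noise_increase.

    noise_increase: per-participant increase in unisensory (here, visual)
    response variability under the secondary task.
    mcgurk_drop: per-participant decrease in McGurk-illusion reports
    under the same task. Returns (slope, pearson_r).
    """
    slope, _intercept = np.polyfit(noise_increase, mcgurk_drop, 1)
    r = np.corrcoef(noise_increase, mcgurk_drop)[0, 1]
    return slope, r

# Hypothetical participants: more added visual noise, larger illusion drop.
visual_noise = np.array([0.02, 0.05, 0.08, 0.11, 0.15, 0.21])
illusion_drop = np.array([0.04, 0.09, 0.18, 0.20, 0.31, 0.40])
slope, r = noise_predicts_disruption(visual_noise, illusion_drop)
```

A published analysis would typically use a mixed-effects or robust regression across conditions; the point here is only the direction of the prediction.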
Affiliation(s)
- Victoria L. Fisher
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Yale University School of Medicine and the Connecticut Mental Health Center, New Haven, CT, United States
- Cassandra L. Dean
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Roche/Genentech Neurodevelopment & Psychiatry Teams Product Development, Neuroscience, South San Francisco, CA, United States
- Claire S. Nave
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Emma V. Parkins
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Neuroscience Graduate Program, University of Cincinnati, Cincinnati, OH, United States
- Willa G. Kerkhoff
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, United States
- Leslie D. Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States

4. Yuan Y, He X, Yue Z. Working memory load modulates the processing of audiovisual distractors: A behavioral and event-related potentials study. Front Integr Neurosci 2023; 17:1120668. [PMID: 36908504] [PMCID: PMC9995450] [DOI: 10.3389/fnint.2023.1120668]
Abstract
The interplay between different modalities can help us perceive stimuli more effectively. However, very few studies have focused on how multisensory distractors affect task performance. Using behavioral and event-related potential (ERP) techniques, the present study examined whether multisensory audiovisual distractors attract attention more effectively than unisensory distractors, and whether this process is modulated by working memory load. Across three experiments, n-back tasks (1-back and 2-back) were adopted with peripheral auditory, visual, or audiovisual distractors. Visual and auditory distractors were white discs and pure tones (Experiments 1 and 2) or pictures and sounds of animals (Experiment 3). Behavioral results in Experiment 1 showed a significant interference effect under high working memory load but not under low load: responses to central letters were significantly slower with audiovisual distractors than without distractors, whereas no significant difference was found between the unisensory-distractor and no-distractor conditions. Similarly, ERP results in Experiments 2 and 3 showed that integration occurred only under high load: early integration for simple audiovisual distractors (240-340 ms) and late integration for complex audiovisual distractors (440-600 ms). These findings suggest that multisensory distractors can be integrated and effectively attract attention away from the main task (an interference effect), and that this effect is pronounced only under high working memory load.
Affiliation(s)
- Yichen Yuan
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Xiang He
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Zhenzhu Yue
- Department of Psychology, Sun Yat-sen University, Guangzhou, China

5. Yang W, Li S, Guo A, Li Z, Yang X, Ren Y, Yang J, Wu J, Zhang Z. Auditory attentional load modulates the temporal dynamics of audiovisual integration in older adults: An ERPs study. Front Aging Neurosci 2022; 14:1007954. [PMID: 36325188] [PMCID: PMC9618958] [DOI: 10.3389/fnagi.2022.1007954]
Abstract
As older adults experience declines in perceptual ability, audiovisual integration becomes an important way to supplement perception. Performing other tasks while attending to one or more auditory stimuli is a common challenge for older adults in everyday life. Therefore, it is necessary to probe the effects of auditory attentional load on audiovisual integration in older adults. The present study used event-related potentials (ERPs) and a dual-task paradigm [Go/No-go task + rapid serial auditory presentation (RSAP) task] to investigate the temporal dynamics of audiovisual integration. Behavioral results showed that both older and younger adults responded faster and more accurately to audiovisual stimuli than to visual or auditory stimuli alone. ERPs revealed weaker audiovisual integration in the absence of auditory attentional load at the earlier processing stages and, conversely, stronger integration at the late stages. Moreover, audiovisual integration was greater in older adults than in younger adults in the following time intervals: 60–90, 140–210, and 430–530 ms. Notably, only under the low-load condition, in the 140–210 ms interval, was the audiovisual integration of older adults significantly greater than that of younger adults. These results delineate the temporal dynamics of the interaction between auditory attentional load and audiovisual integration in aging, suggesting that modulating auditory attentional load affects audiovisual integration, enhancing it in older adults.
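ERP studies of this kind often quantify integration with the additive model: the response to the audiovisual stimulus is compared with the sum of the unisensory responses, AV − (A + V), within each time window of interest. A toy numpy sketch of that index (the array names, toy waveforms, and windows are illustrative, not the study's actual pipeline):

```python
import numpy as np

def av_integration_index(erp_av, erp_a, erp_v, times, window):
    """Additive-model integration index: mean amplitude of the
    AV - (A + V) difference wave inside `window` (ms).

    erp_av, erp_a, erp_v: 1-D arrays of mean amplitudes (microvolts)
    sampled on the common time axis `times`. A nonzero index indicates
    non-additive (integrative) processing in that window.
    """
    diff = erp_av - (erp_a + erp_v)
    mask = (times >= window[0]) & (times < window[1])
    return diff[mask].mean()

# Toy data: a purely additive AV response yields an index of zero.
times = np.arange(0, 600)          # 1 ms resolution, 0-599 ms
erp_a = np.sin(times / 50.0)       # fake auditory ERP
erp_v = np.cos(times / 80.0)       # fake visual ERP
additive_index = av_integration_index(erp_a + erp_v, erp_a, erp_v,
                                      times, (140, 210))
```

Real analyses additionally baseline-correct the waveforms, average over participants and electrodes, and test each window statistically.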
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ao Guo
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- *Correspondence: Yanna Ren
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

6. Whether attentional loads influence audiovisual integration depends on semantic associations. Atten Percept Psychophys 2022; 84:2205-2218. [PMID: 35304700] [DOI: 10.3758/s13414-022-02461-y]
Abstract
Neuronal studies have shown that selectively attending to a common object in one sensory modality results in facilitated processing of that object's representations in the ignored sensory modality. Thus, the audiovisual (AV) integration of common objects can be observed under modality-specific selective attention. However, little is known about whether this AV integration can also occur under increased attentional load conditions. Additionally, whether semantic associations between multisensory features of common objects modulate the influence of increased attentional loads on this cross-modal integration remains unknown. In the present study, participants completed an AV integration task (ignored auditory stimuli) under various attentional load conditions: no load, low load, and high load. The semantic associations between AV stimuli were composed of animal pictures presented concurrently with semantically congruent, semantically incongruent, or semantically unrelated auditory stimuli. Our results demonstrated that attentional loads did not disrupt the integration of semantically congruent AV stimuli but suppressed the potential alertness effects induced by incongruent or unrelated auditory stimuli under the condition of modality-specific selective attention. These findings highlight the critical role of semantic association between AV stimuli in modulating the effect of attentional loads on the AV integration of modality-specific selective attention.

7. Li Q. Semantic Congruency Modulates the Effect of Attentional Load on the Audiovisual Integration of Animate Images and Sounds. Iperception 2020; 11:2041669520981096. [PMID: 33456746] [PMCID: PMC7783684] [DOI: 10.1177/2041669520981096]
Abstract
Attentional processes play a complex and multifaceted role in the integration of input from different sensory modalities. However, whether increased attentional load disrupts the audiovisual (AV) integration of common objects that involve semantic content remains unclear. Furthermore, knowledge regarding how semantic congruency interacts with attentional load to influence the AV integration of common objects is limited. We investigated these questions by examining AV integration under various attentional-load conditions. AV integration was assessed by adopting an animal identification task using unisensory (animal images and sounds) and AV stimuli (semantically congruent AV objects and semantically incongruent AV objects), while attentional load was manipulated by using a rapid serial visual presentation task. Our results indicate that attentional load did not attenuate the integration of semantically congruent AV objects. However, semantically incongruent animal sounds and images were not integrated (as there was no multisensory facilitation), and the interference effect produced by the semantically incongruent AV objects was reduced by increased attentional-load manipulations. These findings highlight the critical role of semantic congruency in modulating the effect of attentional load on the AV integration of common objects.
Affiliation(s)
- Qingqing Li
- Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan

8. Bailey HD, Mullaney AB, Gibney KD, Kwakye LD. Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality. Multisens Res 2018; 31:689-713. [PMID: 31264608] [DOI: 10.1163/22134808-20181301]
Abstract
We are continually bombarded by information arriving to each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration as measured by the redundant signals effect (RSE) is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets, which varied in cue-richness, within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications for the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA

9. Wahn B, König P. Can Limitations of Visuospatial Attention Be Circumvented? A Review. Front Psychol 2017; 8:1896. [PMID: 29163278] [PMCID: PMC5665179] [DOI: 10.3389/fpsyg.2017.01896]
Abstract
In daily life, humans are bombarded with visual input. Yet, their attentional capacities for processing this input are severely limited. Several studies have investigated factors that influence these attentional limitations and have identified methods to circumvent them. Here, we provide a review of these findings. We first review studies that have demonstrated limitations of visuospatial attention and investigated physiological correlates of these limitations. We then review studies in multisensory research that have explored whether limitations in visuospatial attention can be circumvented by distributing information processing across several sensory modalities. Finally, we discuss research from the field of joint action that has investigated how limitations of visuospatial attention can be circumvented by distributing task demands across people and providing them with multisensory input. We conclude that limitations of visuospatial attention can be circumvented by distributing attentional processing across sensory modalities when tasks involve spatial as well as object-based attentional processing. However, if only spatial attentional processing is required, limitations of visuospatial attention cannot be circumvented by distributing attentional processing. These findings from multisensory research are applicable to visuospatial tasks that are performed jointly by two individuals. That is, in a joint visuospatial task requiring object-based as well as spatial attentional processing, joint performance is facilitated when task demands are distributed across sensory modalities. Future research could further investigate how applying findings from multisensory research to joint action research may facilitate joint performance. Generally, findings are applicable to real-world scenarios such as aviation or car-driving to circumvent limitations of visuospatial attention.
Affiliation(s)
- Basil Wahn
- Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany

10. Dean CL, Eggleston BA, Gibney KD, Aligbe E, Blackwell M, Kwakye LD. Auditory and visual distractors disrupt multisensory temporal acuity in the crossmodal temporal order judgment task. PLoS One 2017; 12:e0179564. [PMID: 28723907] [PMCID: PMC5516972] [DOI: 10.1371/journal.pone.0179564]
Abstract
The ability to synthesize information across multiple senses is known as multisensory integration and is essential to our understanding of the world around us. Sensory stimuli that occur close in time are likely to be integrated, and the accuracy of this integration depends on our ability to precisely discriminate the relative timing of unisensory stimuli (crossmodal temporal acuity). Previous research has shown that multisensory integration is modulated both by bottom-up stimulus features, such as the temporal structure of unisensory stimuli, and by top-down processes such as attention. However, it is currently uncertain how attention alters crossmodal temporal acuity. The present study investigated whether increasing attentional load would decrease crossmodal temporal acuity by utilizing a dual-task paradigm. Participants judged the temporal order of a flash and a beep presented at various temporal offsets (the crossmodal temporal order judgment, CTOJ, task) while also directing their attention to a secondary task in which they detected a target stimulus within a stream of visual or auditory distractors. We found decreased performance on the CTOJ task as well as increases in both the positive and negative just noticeable difference with increasing load for both the auditory and visual distractor tasks. This strongly suggests that attention promotes greater crossmodal temporal acuity and that reducing the attentional capacity to process multisensory stimuli results in detriments to multisensory temporal processing. Our study is the first to demonstrate changes in multisensory temporal processing with decreased attentional capacity using a dual-task paradigm and has strong implications for developmental disorders such as autism spectrum disorder and developmental dyslexia, which are associated with alterations in both multisensory temporal processing and attention.
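The just noticeable difference (JND) reported in such CTOJ tasks is usually extracted from a psychometric curve of "visual first" responses against stimulus-onset asynchrony (SOA). A minimal nonparametric sketch using linear interpolation (a cumulative-Gaussian fit is the common model-based alternative; the synthetic observer below is hypothetical, not the study's data):

```python
import math
import numpy as np

def toj_pss_jnd(soas, p_visual_first):
    """PSS and JND from crossmodal temporal-order-judgment data.

    soas: SOAs in ms, ascending (negative = auditory led).
    p_visual_first: proportion of 'visual first' responses per SOA;
    must increase monotonically for the interpolation to be valid.
    PSS = 50% point of the curve; JND = half the 25%-75% spread.
    """
    pss = float(np.interp(0.5, p_visual_first, soas))
    q25 = float(np.interp(0.25, p_visual_first, soas))
    q75 = float(np.interp(0.75, p_visual_first, soas))
    return pss, (q75 - q25) / 2.0

# Synthetic observer: cumulative Gaussian with PSS = 10 ms, sigma = 60 ms.
soas = np.arange(-200, 201, 25)
p = np.array([0.5 * (1 + math.erf((s - 10) / (60 * math.sqrt(2))))
              for s in soas])
pss, jnd = toj_pss_jnd(soas, p)  # pss near 10 ms, jnd near 0.675 * 60 ≈ 40 ms
```

Under attentional load, the abstract's "increases in the positive and negative just noticeable difference" would show up here as a widening of that 25%–75% spread.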
Affiliation(s)
- Cassandra L. Dean
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Brady A. Eggleston
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Kyla David Gibney
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Enimielen Aligbe
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Marissa Blackwell
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Leslie Dowell Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America