1
Saccani MS, Contemori G, Del Popolo Cristaldi F, Bonato M. Attentional load impacts multisensory integration, without leading to spatial processing asymmetries. Sci Rep 2025; 15:16240. PMID: 40346130; PMCID: PMC12064717; DOI: 10.1038/s41598-025-95717-0.
Abstract
The present study examined whether spatial processing in the unimpaired cognitive system is influenced by attentional load during multitasking. More specifically, it tested the hypothesis that high attentional load would induce spatial processing asymmetries in the form of a rightward attentional bias. We conducted two separate experiments on healthy adults (n = 101 and n = 98) using web-based data collection. We capitalized on a condition of perceptual uncertainty to investigate these spatial asymmetries, which cannot be easily detected under regular perceptual conditions. Specifically, we employed a primary audiovisual integration task, which involved presenting stimuli capable of eliciting the sound-induced flash illusion (i.e., task-relevant flashes accompanied by an incongruent number of sounds) on either the left or right side of the screen. This task enabled us to investigate audiovisual integration and also, indirectly, provided an opportunity to sensitively probe spatial processing within a highly complex context. In Experiment 1, attentional load was increased by presenting stimuli to be retained before the audiovisual integration task (an "offline" attentional load manipulation). In contrast, in Experiment 2, attentional load was increased by having participants perform visual discrimination during the audiovisual integration task (an "online" attentional load manipulation). Attentional load was increased in a different way within each experiment to test the idea that more demanding tasks, albeit of a different nature, would similarly modulate performance. In both experiments, we replicated the increase in the sound-induced flash illusion under high attentional load, which challenges the notion of an early and pre-attentive onset of the illusion. However, this effect was identical for left- and right-sided flashes, which speaks against the existence of load-induced spatial processing asymmetries in the unimpaired cognitive system. Given that both experiments yielded similar results, quantitative aspects of attentional engagement, rather than the nature of the attentional resources involved, seem to play the critical role.
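The key dependent measure described above is the rate of illusory extra-flash reports on incongruent flash-beep trials, compared across load conditions and hemifields. A minimal analysis sketch under hypothetical trial-level data (all column names and values are illustrative, not the authors' pipeline):

```python
import pandas as pd

# Hypothetical trial-level data for incongruent trials (1 flash + 2 beeps):
# the sound-induced flash illusion is scored when two flashes are reported.
df = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "load": ["high", "high", "low", "low"] * 2,
    "side": ["left", "right"] * 4,
    "reported_flashes": [2, 2, 1, 2, 2, 1, 1, 1],
})
df["illusion"] = (df["reported_flashes"] == 2).astype(int)

# Illusion rate per load x side cell; the question above is whether the
# high-load increase differs between left- and right-sided flashes.
rates = df.groupby(["load", "side"])["illusion"].mean().unstack()
print(rates)
print("load effect by side:", (rates.loc["high"] - rates.loc["low"]).to_dict())
```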
Affiliation(s)
- M S Saccani
- Padova Neuroscience Centre, University of Padua, Via Orus 2, 35129, Padua, Italy.
- Department of General Psychology, University of Padua, Via Venezia 8, 35131, Padua, Italy.
- G Contemori
- Department of General Psychology, University of Padua, Via Venezia 8, 35131, Padua, Italy
- F Del Popolo Cristaldi
- Department of General Psychology, University of Padua, Via Venezia 8, 35131, Padua, Italy
- M Bonato
- Padova Neuroscience Centre, University of Padua, Via Orus 2, 35129, Padua, Italy.
- Department of General Psychology, University of Padua, Via Venezia 8, 35131, Padua, Italy.
2
Gijbels L, Lalonde K, Shen Y, Wallace MT, Lee AK. Synchrony perception of audiovisual speech is a reliable, yet individual construct. Sci Rep 2025; 15:15909. PMID: 40335496; PMCID: PMC12059004; DOI: 10.1038/s41598-025-00243-8.
Abstract
Audiovisual (AV) (a)synchrony perception, measured by a simultaneity judgment task, provides a proxy measure for the temporal binding window (TBW), the interval of time within which participants perceive individual auditory and visual events as synchronous. The TBW is a sensitive measure showing group-level differences across the lifespan. However, the significance of these findings hinges on whether AV synchrony perception is characteristic of an individual and whether it is reliable across sessions. As there is little evidence for this latter assumption, this work aimed to establish the test-retest reliability of the TBW. Eighteen participants completed a simultaneity judgment task twice, on different days, judging whether the audio and video of word-length speech stimuli were presented at the same or at different times. Results showed effects of task familiarity, indicated by significantly faster response times at the second timepoint. The slope and amplitude asymptote of the TBW function also increased; however, the width of the TBW itself did not change, suggesting no change in sensitivity to AV (a)synchrony. The combination of high within-subject correlations (R = 0.71) and substantial intersubject variability provides strong support for the TBW as a robust measure of AV temporal perception and suggests a conserved mechanistic underpinning within individuals.
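The TBW here is typically summarized by fitting a synchrony-judgment function over stimulus onset asynchronies (SOAs) and correlating its parameters across sessions. A minimal sketch under the common assumption of a Gaussian-shaped synchrony function (all numbers are illustrative, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

# Proportion of "synchronous" responses per SOA (ms); negative = audio leads.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.35, 0.15])

def gauss(soa, amp, mu, sigma):
    # amp = amplitude asymptote, mu = center, sigma = window width
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gauss, soas, p_sync, p0=[1.0, 0.0, 150.0])
print(f"asymptote={amp:.2f}, center={mu:.0f} ms, width={sigma:.0f} ms")

# Test-retest reliability: correlate per-subject widths across sessions.
width_day1 = np.array([140.0, 180.0, 220.0, 160.0, 250.0])  # illustrative
width_day2 = np.array([150.0, 170.0, 240.0, 155.0, 260.0])
r, _ = pearsonr(width_day1, width_day2)
print(f"test-retest r = {r:.2f}  (the study reports R = 0.71)")
```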
Affiliation(s)
- Liesbeth Gijbels
- Department of Speech & Hearing Sciences, University of Washington, Box 357988, Seattle, WA 98195-7988, USA
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
- Yi Shen
- Department of Speech & Hearing Sciences, University of Washington, Box 357988, Seattle, WA 98195-7988, USA
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Adrian Kc Lee
- Department of Speech & Hearing Sciences, University of Washington, Box 357988, Seattle, WA 98195-7988, USA.
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA.
3
Magnotti JF, Basu Mallick D, Feng G, Zhou B, Zhou W, Beauchamp MS. The McGurk effect is similar in native Mandarin Chinese and American English speakers. Front Psychol 2025; 16:1531566. PMID: 40226493; PMCID: PMC11987121; DOI: 10.3389/fpsyg.2025.1531566.
Abstract
Humans combine the visual information from mouth movements with auditory information from the voice to recognize speech. A common method for assessing audiovisual speech perception is the McGurk effect: when presented with certain incongruent pairings of auditory and visual speech syllables (e.g., the auditory speech sound "ba" dubbed onto the visual mouth movements for "ga"), individuals perceive a third syllable, distinct from the auditory and visual components. The many differences between Chinese and American culture and language suggest the possibility of group differences in the McGurk effect. Published studies have reported a weaker McGurk effect in native Mandarin Chinese speakers than in English speakers, but these studies tested small numbers of participants with a small number of stimuli. Therefore, we conducted in-person tests of the McGurk effect in large samples of Mandarin-speaking individuals from China and English-speaking individuals from the USA (total N = 307) viewing nine different stimuli. Averaged across participants and stimuli, we found similar frequencies of the McGurk effect between Chinese and American participants (48% vs. 44%). In both groups, there was high variability across participants (0%-100%) and stimuli (14%-83%), with the main effect of culture and language accounting for only 0.2% of the variance in the data. The high variability inherent to the McGurk effect necessitates the use of large sample sizes to accurately estimate group differences and requires testing with a variety of McGurk stimuli, especially stimuli potent enough to evoke the illusion in the majority of participants.
Affiliation(s)
- John F. Magnotti
- Department of Neurosurgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Guo Feng
- Psychological Research and Counseling Center, Southwest Jiaotong University, Chengdu, Sichuan, China
- Bin Zhou
- Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Wen Zhou
- Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Michael S. Beauchamp
- Department of Neurosurgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
4
Gijbels L, Lee AKC, Lalonde K. Integration of audiovisual speech perception: From infancy to older adults. J Acoust Soc Am 2025; 157:1981-2000. PMID: 40126041; DOI: 10.1121/10.0036137.
Abstract
One of the most prevalent and relevant social experiences for humans - engaging in face-to-face conversations - is inherently multimodal. In the context of audiovisual (AV) speech perception, the visual cues from the speaker's face play a crucial role in language acquisition and in enhancing our comprehension of incoming auditory speech signals. Nonetheless, AV integration shows substantial individual differences, which cannot be entirely accounted for by the information conveyed through the speech signal or by the perceptual abilities of the individual. These differences reflect changes in response to experience with auditory and visual sensory processing across the lifespan, and within a phase of life. To improve our understanding of the integration of AV speech, the current work offers a perspective on AV speech processing in relation to AV perception in general, from both a prelinguistic and a linguistic viewpoint, and through the lens of humans as Bayesian observers implementing a causal inference model. This provides a cohesive approach for examining the differences and similarities of AV integration from infancy to older adulthood. Behavioral and neurophysiological evidence suggests that both prelinguistic and linguistic mechanisms exhibit distinct, yet mutually influential, effects across the lifespan within and between individuals.
Affiliation(s)
- Liesbeth Gijbels
- University of Washington, Department of Speech and Hearing Sciences, Seattle, Washington 98195, USA
- University of Washington, Institute for Learning and Brain Sciences, Seattle, Washington 98195, USA
- Adrian K C Lee
- University of Washington, Department of Speech and Hearing Sciences, Seattle, Washington 98195, USA
- University of Washington, Institute for Learning and Brain Sciences, Seattle, Washington 98195, USA
- Kaylah Lalonde
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska 68131, USA
5
Cary E, Lahdesmaki I, Badde S. Audiovisual simultaneity windows reflect temporal sensory uncertainty. Psychon Bull Rev 2024; 31:2170-2179. PMID: 38388825; PMCID: PMC11543760; DOI: 10.3758/s13423-024-02478-4.
Abstract
The ability to judge the temporal alignment of visual and auditory information is a prerequisite for multisensory integration and segregation. However, each temporal measurement is subject to error. Thus, when judging whether a visual and auditory stimulus were presented simultaneously, observers must rely on a subjective decision boundary to distinguish between measurement error and truly misaligned audiovisual signals. Here, we tested whether these decision boundaries are relaxed with increasing temporal sensory uncertainty, i.e., whether participants make the same type of adjustment an ideal observer would make. Participants judged the simultaneity of audiovisual stimulus pairs with varying temporal offset, while being immersed in different virtual environments. To obtain estimates of participants' temporal sensory uncertainty and simultaneity criteria in each environment, an independent-channels model was fitted to their simultaneity judgments. In two experiments, participants' simultaneity decision boundaries were predicted by their temporal uncertainty, which varied unsystematically with the environment. Hence, observers used a flexibly updated estimate of their own audiovisual temporal uncertainty to establish subjective criteria of simultaneity. This finding implies that, under typical circumstances, audiovisual simultaneity windows reflect an observer's cross-modal temporal uncertainty.
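The independent-channels model mentioned above treats the internal estimate of the audiovisual offset as Gaussian around the true SOA and predicts a "simultaneous" response whenever that noisy estimate falls within a criterion window. A minimal maximum-likelihood fit, with illustrative counts rather than the study's data:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def p_sync(soa, mu, sigma, c):
    # P(|noisy offset| < c) with the internal offset ~ N(soa + mu, sigma)
    return norm.cdf((c - soa - mu) / sigma) - norm.cdf((-c - soa - mu) / sigma)

def nll(params, soas, n_sync, n_total):
    mu, sigma, c = params
    p = np.clip(p_sync(soas, mu, sigma, c), 1e-6, 1 - 1e-6)
    return -np.sum(n_sync * np.log(p) + (n_total - n_sync) * np.log(1 - p))

soas = np.array([-300, -200, -100, 0, 100, 200, 300], float)
n_sync = np.array([2, 6, 15, 19, 17, 9, 3])     # out of 20 trials per SOA
fit = minimize(nll, x0=[0.0, 80.0, 150.0], args=(soas, n_sync, 20),
               bounds=[(-200, 200), (10, 500), (10, 500)])
mu, sigma, c = fit.x
print(f"bias={mu:.0f} ms, uncertainty sigma={sigma:.0f} ms, criterion c={c:.0f} ms")
# The paper's question: does the fitted criterion c scale with sigma?
```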
Affiliation(s)
- Emma Cary
- Department of Psychology, Tufts University, Medford, MA, 02155, USA
- Ilona Lahdesmaki
- Department of Psychology, Tufts University, Medford, MA, 02155, USA
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, MA, 02155, USA.
6
Wu H, Huang Y, Qin P, Wu H. Individual Differences in Bodily Self-Consciousness and Its Neural Basis. Brain Sci 2024; 14:795. PMID: 39199487; PMCID: PMC11353174; DOI: 10.3390/brainsci14080795.
Abstract
Bodily self-consciousness (BSC), a subject of interdisciplinary interest, refers to the awareness of one's bodily states. Previous studies have noted the existence of individual differences in BSC while neglecting their underlying factors and neural basis. Considering that BSC relies on the integration of both internal and external self-relevant information, we here review previous findings on individual differences in BSC through a three-level self model comprising interoceptive, exteroceptive, and mental self-processing. The data show that cross-level factors influence individual differences in BSC, involving internal bodily signal perceptibility, multisensory processing principles, personal traits shaped by the environment, and interaction modes that integrate multiple levels of self-processing. Furthermore, in interoceptive processing, regions like the anterior cingulate cortex and insula show correlations with different perceptions of internal sensations. For exteroception, the parietal lobe integrates sensory inputs, coordinating various BSC responses. Mental self-processing modulates differences in BSC through areas like the medial prefrontal cortex. For interactions between multiple levels of self-processing, regions like the intraparietal sulcus are involved in individual differences in BSC. We propose that diverse experiences of BSC can be attributed to different levels of self-processing, which moderate one's perception of one's own body. Overall, accounting for individual differences in BSC by amalgamating diverse methodologies is worthwhile for the diagnosis and treatment of related disorders.
Affiliation(s)
- Haiyan Wu
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Ying Huang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Pengmin Qin
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Pazhou Lab, Guangzhou 510330, China
- Hang Wu
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Institute for Brain Research and Rehabilitation, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
7
Smith M, Cameron L, Ferguson HJ. Scene construction ability in neurotypical and autistic adults. Autism 2024; 28:1919-1933. PMID: 38153207; PMCID: PMC11301963; DOI: 10.1177/13623613231216052.
Abstract
Lay abstract: People with autism spectrum conditions (ASC) have difficulties imagining events, which might result from difficulty mentally generating and maintaining a coherent spatial scene. This study compared scene construction ability between autistic (N = 55) and neurotypical (N = 63) adults. Results showed that scene construction was diminished in autistic compared to neurotypical participants, and that participants with fewer autistic traits had better scene construction ability. ASC diagnosis did not influence the frequency of mentions of the self or of sensory experiences. Exploratory analysis suggests that scene construction ability is associated with the ability to understand our own and other people's mental states, and that these individual-level preferences/cognitive styles can overrule typical group-level characteristics.
8
Shyamchand Singh S, Mukherjee A, Raghunathan P, Ray D, Banerjee A. High segregation and diminished global integration in large-scale brain functional networks enhances the perceptual binding of cross-modal stimuli. Cereb Cortex 2024; 34:bhae323. PMID: 39110411; DOI: 10.1093/cercor/bhae323.
Abstract
Speech perception requires the binding of spatiotemporally disjoint auditory-visual cues. The corresponding brain network-level information processing can be characterized by two complementary mechanisms: functional segregation, which refers to the localization of processing in either isolated or distributed modules across the brain, and integration, which pertains to cooperation among relevant functional modules. Here, we demonstrate using functional magnetic resonance imaging recordings that subjective perceptual experiences of multisensory speech stimuli, real and illusory, are represented in differential states of segregation-integration balance. We controlled the inter-subject variability of illusory/cross-modal perception parametrically, by introducing temporal lags in the incongruent auditory-visual articulations of speech sounds within the McGurk paradigm. The states of segregation-integration balance were captured using two alternative computational approaches. First, the module responsible for cross-modal binding of sensory signals, defined as the perceptual binding network (PBN), was identified using standardized parametric statistical approaches, and the temporal correlations of its nodes with all other brain areas were computed. With increasing illusory perception, the majority of the nodes of the PBN showed decreased cooperation with the rest of the brain, reflecting states of high segregation but reduced global integration. Second, the altered patterns of segregation-integration were cross-validated using graph-theoretic measures.
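The two network-level quantities contrasted above, segregation and global integration, can be computed from a thresholded functional-connectivity matrix. A minimal sketch using standard graph-theoretic proxies (modularity and global efficiency; the matrix below is random and purely illustrative):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Illustrative symmetric "connectivity" matrix between 20 regions.
rng = np.random.default_rng(0)
corr = np.abs(rng.normal(0.3, 0.2, size=(20, 20)))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0)

G = nx.from_numpy_array((corr > 0.4).astype(int))  # binarize at a threshold

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)       # segregation: strength of modular structure
E = nx.global_efficiency(G)          # integration: inverse shortest-path lengths
print(f"segregation Q = {Q:.2f}, global integration E = {E:.2f}")
```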
Affiliation(s)
- Soibam Shyamchand Singh
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
- Department of Psychology, Ashoka University, Sonepat 131029, Haryana, India
- Abhishek Mukherjee
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
- Partha Raghunathan
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
- Dipanjan Ray
- Department of Psychology, Ashoka University, Sonepat 131029, Haryana, India
- Arpan Banerjee
- Cognitive Brain Dynamics Lab, National Brain Research Centre, NH8, Manesar, Gurgaon 122052, Haryana, India
9
Gallese V, Ardizzi M, Ferroni F. Schizophrenia and the bodily self. Schizophr Res 2024; 269:152-162. PMID: 38815468; DOI: 10.1016/j.schres.2024.05.014.
Abstract
Despite the historically consolidated psychopathological perspective, contemporary organicistic psychiatry often highlights abnormalities in neurotransmitter systems, such as dysregulation of dopamine transmission, neural circuitry, and genetic factors as key contributors to schizophrenia. Neuroscience, on the other hand, has so far almost entirely neglected the first-person experiential dimension of this syndrome, focusing mainly on high-order cognitive functions, such as executive function, working memory, theory of mind, and the like. An alternative view posits that schizophrenia is a self-disorder characterized by anomalous self-experience and awareness. This view may not only shed new light on the psychopathological features of psychosis but also inspire empirical research targeting the bodily and neurobiological changes underpinning this disorder. Cognitive neuroscience can today address classic topics of phenomenological psychopathology by adding a new level of description, finally enabling the correlation between the first-person experiential aspects of psychiatric diseases and their neurobiological roots. Recent empirical evidence on the neurobiological basis of a minimal notion of the self, the bodily self, is presented. The relationship between the body, its motor potentialities, and the notion of the minimal self is illustrated. Evidence on the neural mechanisms underpinning the bodily self, its plasticity, and the blurring of the self-other distinction in schizophrenic patients is introduced and discussed. It is concluded that brain-body anomalies in multisensory integration and in the differential processing of self- and other-related bodily information mediating self-experience might underlie the self-disorders characterizing schizophrenia.
Affiliation(s)
- Vittorio Gallese
- Dept. of Medicine and Surgery, Unit of Neuroscience, University of Parma, Italy; Italian Academy for Advanced Studies in America, Columbia University, New York, USA.
- Martina Ardizzi
- Dept. of Medicine and Surgery, Unit of Neuroscience, University of Parma, Italy
- Francesca Ferroni
- Dept. of Medicine and Surgery, Unit of Neuroscience, University of Parma, Italy
10
Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. PMID: 38729280; DOI: 10.1016/j.neubiorev.2024.105711.
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities. However, its developmental trajectory is still little understood. This systematic review delves into the topic by investigating the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW) - the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was independently performed by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy.
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
11
Magnotti JF, Lado A, Beauchamp MS. The noisy encoding of disparity model predicts perception of the McGurk effect in native Japanese speakers. Front Neurosci 2024; 18:1421713. PMID: 38988770; PMCID: PMC11233445; DOI: 10.3389/fnins.2024.1421713.
Abstract
In the McGurk effect, visual speech from the face of the talker alters the perception of auditory speech. The diversity of human languages has prompted many intercultural studies of the effect in both Western and non-Western cultures, including native Japanese speakers. Studies of large samples of native English speakers have shown that the McGurk effect is characterized by high variability in the susceptibility of different individuals to the illusion and in the strength of different experimental stimuli to induce the illusion. The noisy encoding of disparity (NED) model of the McGurk effect uses principles from Bayesian causal inference to account for this variability, separately estimating the susceptibility and sensory noise for each individual and the strength of each stimulus. To determine whether variation in McGurk perception is similar between Western and non-Western cultures, we applied the NED model to data collected from 80 native Japanese-speaking participants. Fifteen different McGurk stimuli that varied in syllable content (unvoiced auditory "pa" + visual "ka" or voiced auditory "ba" + visual "ga") were presented interleaved with audiovisual congruent stimuli. The McGurk effect was highly variable across stimuli and participants, with the percentage of illusory fusion responses ranging from 3 to 78% across stimuli and from 0 to 91% across participants. Despite this variability, the NED model accurately predicted perception, predicting fusion rates for individual stimuli with 2.1% error and for individual participants with 2.4% error. Stimuli containing the unvoiced pa/ka pairing evoked more fusion responses than the voiced ba/ga pairing. Model estimates of sensory noise were correlated with participant age, with greater sensory noise in older participants. The NED model of the McGurk effect offers a principled way to account for individual and stimulus differences when examining the McGurk effect in different cultures.
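The NED model described above separates stimulus strength from observer traits: stimulus i contributes a disparity D_i, and participant j applies a disparity threshold T_j with sensory noise sigma_j, so the predicted fusion rate is the probability that the noisy encoded disparity falls below threshold. A minimal sketch of this prediction step (all parameter values are illustrative):

```python
import numpy as np
from scipy.stats import norm

def p_fusion(D_i, T_j, sigma_j):
    # P(encoded disparity < threshold), with disparity encoded under Gaussian noise
    return norm.cdf((T_j - D_i) / sigma_j)

disparities = np.array([0.2, 0.5, 0.8])   # three stimuli, weak to strong AV conflict
thresholds = np.array([0.7, 0.3])         # two participants' disparity criteria
noises = np.array([0.15, 0.30])           # their sensory noise levels

for j, (T, s) in enumerate(zip(thresholds, noises)):
    print(f"participant {j}: fusion rates {np.round(p_fusion(disparities, T, s), 2)}")
```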
Affiliation(s)
- John F Magnotti
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Anastasia Lado
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
12
Jiang Z, An X, Liu S, Yin E, Yan Y, Ming D. Beyond alpha band: prestimulus local oscillation and interregional synchrony of the beta band shape the temporal perception of the audiovisual beep-flash stimulus. J Neural Eng 2024; 21:036035. PMID: 37419108; DOI: 10.1088/1741-2552/ace551.
Abstract
Objective. Multisensory integration is more likely to occur if the multimodal inputs fall within a narrow temporal window called the temporal binding window (TBW). Prestimulus local neural oscillations and interregional synchrony within sensory areas can modulate cross-modal integration. Previous work has examined the role of ongoing neural oscillations in audiovisual temporal integration, but there is no unified conclusion. This study aimed to explore whether local ongoing neural oscillations and interregional audiovisual synchrony modulate audiovisual temporal integration. Approach. Human participants performed a simultaneity judgment (SJ) task with beep-flash stimuli while electroencephalography was recorded. We focused on the two stimulus onset asynchrony (SOA) conditions in which subjects reported ~50% synchronous responses, one auditory-leading and one visual-leading (A50V and V50A). Main results. Alpha-band power was larger for synchronous responses over central-right posterior sensors in the A50V condition and over posterior sensors in the V50A condition, suggesting that alpha-band power reflects neuronal excitability in the auditory or visual cortex, which can modulate audiovisual temporal perception depending on the leading sense. Additionally, SJs were modulated by the opposite phases of the alpha (5-10 Hz) and low beta (14-20 Hz) bands in the A50V condition, and by the low beta band (14-18 Hz) in the V50A condition. One cycle of alpha or two cycles of beta oscillations matched an auditory-leading TBW of ~86 ms, while two cycles of beta oscillations matched a visual-leading TBW of ~105 ms. This indicates that the opposite phases in the alpha and beta bands reflect opposite states of cortical excitability, which modulated the audiovisual SJs. Finally, we found stronger high beta (21-28 Hz) audiovisual phase synchronization for synchronous responses in the A50V condition. The phase synchrony of the beta band might be related to maintaining information flow between visual and auditory regions in a top-down manner. Significance. These results clarify whether and how the prestimulus brain state, including local neural oscillations and functional connectivity between brain regions, affects audiovisual temporal integration.
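The cycle-length arithmetic behind the TBW-oscillation mapping above is straightforward: one cycle at frequency f lasts 1000/f ms. A short check (band frequencies chosen illustratively so the cycle counts reproduce the reported windows):

```python
# One cycle at f Hz lasts 1000/f ms.
for label, f_hz, n_cycles in [("alpha ~11.6 Hz, 1 cycle", 11.6, 1),
                              ("low beta ~23 Hz, 2 cycles", 23.0, 2),
                              ("low beta ~19 Hz, 2 cycles", 19.0, 2)]:
    print(f"{label}: {n_cycles * 1000 / f_hz:.0f} ms")
# ~86 ms matches the auditory-leading TBW; ~105 ms the visual-leading TBW.
```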
Affiliation(s)
- Zeliang Jiang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, People's Republic of China
- Xingwei An
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, People's Republic of China
- Shuang Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, People's Republic of China
- Erwei Yin
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, People's Republic of China
- Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, People's Republic of China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, People's Republic of China
- Ye Yan
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, People's Republic of China
- Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, People's Republic of China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, People's Republic of China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, People's Republic of China
13
Sasaoka T, Hirose K, Maekawa T, Inui T, Yamawaki S. The anterior cingulate cortex is involved in intero-exteroceptive integration for spatial image transformation of the self-body. Neuroimage 2024; 293:120634. PMID: 38705431; DOI: 10.1016/j.neuroimage.2024.120634.
Abstract
Spatial image transformation of the self-body is a fundamental function of visual perspective-taking. Recent research underscores the significance of intero-exteroceptive information integration in constructing representations of our embodied self. This raises the intriguing hypothesis that interoceptive processing might be involved in the spatial image transformation of the self-body. To test this hypothesis, the present study used functional magnetic resonance imaging to measure brain activity during an arm laterality judgment (ALJ) task, in which participants discerned whether the outstretched arm of a human figure, viewed from the front or back, was the right or left hand. Reaction times in the ALJ task were longer when the stimulus was presented at orientations of 0°, 90°, and 270° relative to the upright orientation, and when the front view rather than the back view was presented. Reflecting the increased reaction times, increased brain activity was manifested in a cluster centered on the dorsal anterior cingulate cortex (ACC), suggesting that this activation reflects the involvement of an embodied simulation in ALJ. Furthermore, within the pregenual ACC, this cluster overlapped with regions where the difference in activation between the front and back views correlated positively with participants' interoceptive sensitivity, as assessed through a heartbeat discrimination task. These results suggest that the ACC plays an important role in integrating intero-exteroceptive cues to spatially transform the image of our self-body.
Affiliation(s)
- Takafumi Sasaoka
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan.
- Kenji Hirose
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan; Center for Human Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Kita 12, Nishi 7, Kita-ku, Sapporo, Hokkaido 060-0812, Japan
- Toru Maekawa
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan
- Toshio Inui
- Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Shigeto Yamawaki
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan
14
Jertberg RM, Begeer S, Geurts HM, Chakrabarti B, Van der Burg E. Age, not autism, influences multisensory integration of speech stimuli among adults in a McGurk/MacDonald paradigm. Eur J Neurosci 2024; 59:2979-2994. PMID: 38570828; DOI: 10.1111/ejn.16319.
Abstract
Differences between autistic and non-autistic individuals in the perception of the temporal relationships between sights and sounds are theorized to underlie difficulties in integrating relevant sensory information. These, in turn, are thought to contribute to problems with speech perception and higher-level social behaviour. However, the literature establishing this connection often involves limited sample sizes and focuses almost entirely on children. To determine whether these differences persist into adulthood, we compared 496 autistic and 373 non-autistic adults (aged 17 to 75 years). Participants completed an online version of the McGurk/MacDonald paradigm, a multisensory illusion indicative of the ability to integrate audiovisual speech stimuli. Audiovisual asynchrony was manipulated, and participants responded both to the syllable they perceived (revealing their susceptibility to the illusion) and to whether or not the audio and video were synchronized (allowing insight into temporal processing). In contrast with prior research on smaller, younger samples, we detected no evidence of impaired temporal or multisensory processing in autistic adults. Instead, we found that in both groups multisensory integration correlated strongly with age. This contradicts prior presumptions that differences in multisensory perception persist, and even increase in magnitude, over the lifespan of autistic individuals. It also suggests that the compensatory role multisensory integration may play as the individual senses decline with age is intact. These findings challenge existing theories and provide an optimistic perspective on autistic development. They also underline the importance of expanding autism research to better reflect the age range of the autistic population.
Affiliation(s)
- Robert M Jertberg
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, The Netherlands and Amsterdam Public Health Research Institute, Amsterdam, Netherlands
- Sander Begeer
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, The Netherlands and Amsterdam Public Health Research Institute, Amsterdam, Netherlands
- Hilde M Geurts
- Dutch Autism and ADHD Research Center (d'Arc), Brain & Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, The Netherlands
- Leo Kannerhuis (Youz/Parnassiagroup), Den Haag, The Netherlands
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- India Autism Center, Kolkata, India
- Department of Psychology, Ashoka University, Sonipat, India
- Erik Van der Burg
- Dutch Autism and ADHD Research Center (d'Arc), Brain & Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, The Netherlands
15
Huntley MK, Nguyen A, Albrecht MA, Marinovic W. Tactile cues are more intrinsically linked to motor timing than visual cues in visual-tactile sensorimotor synchronization. Atten Percept Psychophys 2024; 86:1022-1037. PMID: 38263510; PMCID: PMC11062975; DOI: 10.3758/s13414-023-02828-9.
Abstract
Many tasks require precise synchronization with external sensory stimuli, such as driving a car. This study investigates whether combined visual-tactile information provides additional benefits to movement synchrony over separate visual and tactile stimuli, and explores the relationship with the temporal binding window for multisensory integration. In Experiment 1, participants completed a sensorimotor synchronization task to examine movement variability and a simultaneity judgment task to measure the temporal binding window. Results showed similar synchronization variability between visual-tactile and tactile-only stimuli, but variability in both was significantly lower than with visual-only stimuli. In Experiment 2, participants completed a visual-tactile sensorimotor synchronization task with cross-modal stimuli presented inside (stimulus onset asynchrony 80 ms) and outside (stimulus onset asynchrony 400 ms) the temporal binding window to examine the temporal accuracy of movement execution. Participants synchronized their movement with the first stimulus in the cross-modal pair, either the visual or the tactile stimulus. Results showed significantly greater temporal accuracy when only one stimulus was presented inside the window and the second stimulus was outside it than when both stimuli were presented inside the window, with movement execution being more accurate when attending to the tactile stimulus. Overall, these findings indicate there may be a modality-specific benefit to sensorimotor synchronization performance, such that tactile cues are weighted more strongly than visual cues because tactile information is more intrinsically linked to motor timing. Further, our findings indicate that the visual-tactile temporal binding window is related to the temporal accuracy of movement execution.
Affiliation(s)
- Michelle K Huntley
- School of Population Health, Curtin University, Perth, Western Australia, Australia.
- School of Psychology and Public Health, La Trobe University, Wodonga, Victoria, Australia.
- An Nguyen
- School of Population Health, Curtin University, Perth, Western Australia, Australia
- Matthew A Albrecht
- Western Australia Centre for Road Safety Research, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Welber Marinovic
- School of Population Health, Curtin University, Perth, Western Australia, Australia
16
Bruns P, Thun C, Röder B. Quantifying accuracy and precision from continuous response data in studies of spatial perception and crossmodal recalibration. Behav Res Methods 2024; 56:3814-3830. PMID: 38684625; PMCID: PMC11133116; DOI: 10.3758/s13428-024-02416-1.
Abstract
The ability to detect the absolute location of sensory stimuli can be quantified with either error-based metrics derived from single-trial localization errors or regression-based metrics derived from a linear regression of localization responses on the true stimulus locations. Here we tested the agreement between these two approaches in estimating accuracy and precision in a large sample of 188 subjects who localized auditory stimuli from different azimuthal locations. A subsample of 57 subjects was subsequently exposed to audiovisual stimuli with a consistent spatial disparity before performing the sound localization test again, allowing us to additionally test which of the different metrics best assessed correlations between the amount of crossmodal spatial recalibration and baseline localization performance. First, our findings support a distinction between accuracy and precision. Localization accuracy was mainly reflected in the overall spatial bias and was moderately correlated with precision metrics. However, in our data, the variability of single-trial localization errors (the variable error in error-based metrics) and the amount by which the eccentricity of target locations was overestimated (the slope in regression-based metrics) were highly correlated, suggesting that intercorrelations between individual metrics need to be carefully considered in spatial perception studies. Second, exposure to spatially discrepant audiovisual stimuli resulted in a shift in bias toward the side of the visual stimuli (the ventriloquism aftereffect) but did not affect localization precision. The size of the aftereffect shift in bias was at least partly explainable by unspecific test repetition effects, highlighting the need to account for inter-individual baseline differences in studies of spatial learning.
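The two families of metrics distinguished above can be computed from the same response set. A minimal sketch (targets and responses are illustrative, not the study's data):

```python
import numpy as np

targets = np.array([-30, -15, 0, 15, 30] * 2, dtype=float)  # true azimuths, deg
responses = np.array([-38, -18, 1, 20, 36, -35, -20, 3, 17, 39], dtype=float)

# Error-based metrics
errors = responses - targets
constant_error = errors.mean()        # accuracy: overall spatial bias
variable_error = errors.std(ddof=1)   # precision: single-trial variability

# Regression-based metrics: slope > 1 means eccentricity overestimation
slope, intercept = np.polyfit(targets, responses, 1)
print(f"constant error {constant_error:+.1f} deg, variable error {variable_error:.1f} deg")
print(f"slope {slope:.2f}, intercept {intercept:+.1f} deg")
```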
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany.
- Caroline Thun
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany
17
Bao X, Lomber SG. Visual modulation of auditory evoked potentials in the cat. Sci Rep 2024; 14:7177. PMID: 38531940; DOI: 10.1038/s41598-024-57075-1.
Abstract
Visual modulation of the auditory system is not only a neural substrate for multisensory processing but also serves as a backup input underlying cross-modal plasticity in deaf individuals. Event-related potential (ERP) studies in humans have provided evidence of multiple-stage audiovisual interactions, ranging from tens to hundreds of milliseconds after stimulus presentation. However, it is still unknown whether the temporal course of visual modulation of auditory ERPs can be characterized in animal models. EEG signals were recorded from subdermal needle electrodes in sedated cats. The auditory stimuli (clicks) and visual stimuli (flashes) were timed by two independent Poisson processes and were presented either simultaneously or alone. The visual-only ERPs were subtracted from the audiovisual ERPs before being compared to the auditory-only ERPs. N1 amplitude showed a trend of transitioning from suppression to facilitation, with a disruption at a ~100 ms flash-to-click delay. We conclude that visual modulation as a function of stimulus onset asynchrony (SOA) over an extended range is more complex than previously characterized with short SOAs, and that its periodic pattern can be interpreted with the "phase resetting" hypothesis.
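The subtraction logic described above amounts to testing an additive model: a nonzero (AV - V) - A residual indicates a multisensory interaction. A minimal sketch with synthetic trial-averaged waveforms (all arrays and window values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
erp_av = rng.normal(size=(16, 500))   # audiovisual ERP (channels x samples)
erp_v = rng.normal(size=(16, 500))    # visual-only ERP
erp_a = rng.normal(size=(16, 500))    # auditory-only ERP

residual = (erp_av - erp_v) - erp_a   # nonzero => audiovisual interaction
n1_window = slice(90, 130)            # e.g., N1 latencies, assuming 1 kHz sampling
print("mean residual in N1 window:", residual[:, n1_window].mean().round(3))
```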
Affiliation(s)
- Xiaohan Bao
- Integrated Program in Neuroscience, McGill University, Montreal, QC, H3G 1Y6, Canada
- Stephen G Lomber
- Department of Physiology, McGill University, McIntyre Medical Sciences Building, Rm 1223, 3655 Promenade Sir William Osler, Montreal, QC, H3G 1Y6, Canada.
18
Yang H, Cai B, Tan W, Luo L, Zhang Z. Pitch Improvement in Attentional Blink: A Study across Audiovisual Asymmetries. Behav Sci (Basel) 2024; 14:145. PMID: 38392498; PMCID: PMC10885858; DOI: 10.3390/bs14020145.
Abstract
Attentional blink (AB) is a phenomenon in which the perception of a second target is impaired when it appears within 200-500 ms after the first target. Sound affects the AB, and an asymmetry appears during audiovisual integration, but it is not known whether this is related to the tonal representation of sound. The aim of the present study was to investigate the effect of audiovisual asymmetry on the attentional blink and whether the presentation of pitch improves the ability to detect a target during an AB accompanied by audiovisual asymmetry. The results showed that as the lag increased, the subjects' target recognition improved, and pitch produced further improvements. These improvements exhibited a significant asymmetry across the audiovisual channel. Our findings could contribute to better utilization of audiovisual integration resources to counteract attentional transients and declines in auditory recognition, which could be useful in areas such as driving and education.
Affiliation(s)
- Haoping Yang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Biye Cai
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Wenjie Tan
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Department of Physical Education, South China University of Technology, Guangzhou 518100, China
- Li Luo
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Zonghao Zhang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
19
Nava E, Giraud M, Bolognini N. The emergence of the multisensory brain: From the womb to the first steps. iScience 2024; 27:108758. PMID: 38230260; PMCID: PMC10790096; DOI: 10.1016/j.isci.2023.108758.
Abstract
The becoming of the human being is a multisensory process that starts in the womb. By integrating spontaneous neuronal activity with inputs from the external world, the developing brain learns to make sense of itself through multiple sensory experiences. Over the past ten years, advances in neuroimaging and electrophysiological techniques have allowed the exploration of the neural correlates of multisensory processing in the newborn and infant brain, adding an important piece of information to behavioral evidence of early sensitivity to multisensory events. Here, we review recent behavioral and neuroimaging findings to document the origins and early development of multisensory processing, showing in particular that the human brain appears naturally tuned to multisensory events at birth, a tuning that requires multisensory experience to fully mature. We conclude the review by discussing emerging studies in preterm infants and highlighting the potential uses and benefits of multisensory interventions in promoting healthy development.
Affiliation(s)
- Elena Nava
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Michelle Giraud
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Nadia Bolognini
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
20
O'Dowd A, Hirst RJ, Setti A, Kenny RA, Newell FN. Individual differences in seated resting heart rate are associated with multisensory perceptual function in older adults. Psychophysiology 2024; 61:e14430. PMID: 37675755; DOI: 10.1111/psyp.14430.
Abstract
There is evidence that cardiovascular function can influence sensory processing and cognition, which are known to change with age. However, whether the precision of unisensory and multisensory temporal perception is influenced by cardiovascular activity in older adults is uncertain. We examined whether seated resting heart rate (RHR) was associated with unimodal visual and auditory temporal discrimination as well as susceptibility to the audio-visual Sound Induced Flash Illusion (SIFI) in a large sample of older adults (N = 3232; mean age = 64.17 years, SD = 7.74, range = 50-93; 56% female) drawn from The Irish Longitudinal Study on Ageing (TILDA). Faster seated RHR was associated with better discretization of two flashes (but not two beeps) and increased SIFI susceptibility when the audio-visual stimuli were presented close together in time but not at longer audio-visual temporal offsets. Our findings suggest a significant relationship between cardiovascular activity and the precision of visual and audio-visual temporal perception in older adults, thereby providing novel evidence for a link between cardiovascular function and perceptual function in aging.
Affiliation(s)
- Alan O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Rebecca J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Annalisa Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- School of Applied Psychology, University College Cork, Cork, Ireland
- Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
21
Maynes R, Faulkner R, Callahan G, Mims CE, Ranjan S, Stalzer J, Odegaard B. Metacognitive awareness in the sound-induced flash illusion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220347. PMID: 37545312; PMCID: PMC10404924; DOI: 10.1098/rstb.2022.0347.
Abstract
Hundreds (if not thousands) of multisensory studies provide evidence that the human brain can integrate temporally and spatially discrepant stimuli from distinct modalities into a singular event. This process of multisensory integration is usually portrayed in the scientific literature as contributing to our integrated, coherent perceptual reality. However, missing from this account is an answer to a simple question: how do confidence judgements compare between multisensory information that is integrated across multiple sources and multisensory information that comes from a single, congruent source in the environment? In this paper, we use the sound-induced flash illusion to investigate whether confidence judgements are similar across multisensory conditions when the numbers of auditory and visual events are the same and when they are different. Results showed that congruent audiovisual stimuli produced higher confidence than incongruent audiovisual stimuli, even when the perceptual report was matched across the two conditions. Integrating these behavioural findings with recent neuroimaging and theoretical work, we discuss the role that the prefrontal cortex may play in metacognition, multisensory causal inference and sensory source monitoring in general. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Randolph Maynes
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Ryan Faulkner
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Grace Callahan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Callie E. Mims
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Psychology Department, University of South Alabama, Mobile, 36688, AL, USA
| | - Saurabh Ranjan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Justine Stalzer
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Brian Odegaard
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| |
22
Pepper JL, Nuttall HE. Age-Related Changes to Multisensory Integration and Audiovisual Speech Perception. Brain Sci 2023; 13:1126. [PMID: 37626483 PMCID: PMC10452685 DOI: 10.3390/brainsci13081126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 07/20/2023] [Accepted: 07/22/2023] [Indexed: 08/27/2023] Open
Abstract
Multisensory integration is essential for the quick and accurate perception of our environment, particularly in everyday tasks like speech perception. Research has highlighted the importance of investigating bottom-up and top-down contributions to multisensory integration and how these change as a function of ageing. Specifically, perceptual factors like the temporal binding window and cognitive factors like attention and inhibition appear to be fundamental in the integration of visual and auditory information, an integration that may become less efficient as we age. These factors have been linked to brain areas like the superior temporal sulcus, with neural oscillations in the alpha-band frequency also being implicated in multisensory processing. Age-related changes in multisensory integration may have significant consequences for the well-being of our increasingly ageing population, affecting their ability to communicate with others and safely move through their environment; it is crucial that the evidence surrounding this subject continues to be carefully investigated. This review will discuss research into age-related changes in the perceptual and cognitive mechanisms of multisensory integration and the impact that these changes have on speech perception and fall risk. The role of oscillatory alpha activity is of particular interest, as it may be key in the modulation of multisensory integration.
Affiliation(s)
- Helen E. Nuttall
- Department of Psychology, Lancaster University, Bailrigg LA1 4YF, UK
23
Setti A, Hernández B, Hirst RJ, Donoghue OA, Kenny RA, Newell FN. Susceptibility to the sound-induced flash illusion is associated with gait speed in a large sample of middle-aged and older adults. Exp Gerontol 2023; 174:112113. [PMID: 36736711 DOI: 10.1016/j.exger.2023.112113] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 01/18/2023] [Accepted: 01/31/2023] [Indexed: 02/05/2023]
Abstract
BACKGROUND Multisensory integration is the ability to appropriately merge information from different senses for the purpose of perceiving and acting in the environment. During walking, information from multiple senses must be integrated appropriately to coordinate effective movements. We tested the association between a well characterised multisensory task, the Sound-Induced Flash Illusion (SIFI), and gait speed in 3255 participants from The Irish Longitudinal Study on Ageing. High susceptibility to this illusion at longer stimulus onset asynchronies characterises older adults and has been associated with cognitive and functional impairments; it should therefore be associated with slower gait speed. METHOD Gait was measured under three conditions: usual pace, cognitive dual tasking, and maximal walking speed. A separate logistic mixed effects regression model was run for (1) gait at usual pace, (2) change in gait speed for cognitive dual tasking relative to usual pace, and (3) change in maximal walking speed relative to usual pace. In all cases, a binary response indicating a correct/incorrect response to each SIFI trial was the dependent variable. The models controlled for covariates including age, sex, education, vision and hearing abilities, Body Mass Index, and cognitive function. RESULTS Slower gait was associated with more illusions, particularly at longer temporal intervals between the flash-beep pair and the second beep, indicating that those who integrated incongruent sensory inputs over longer intervals also walked more slowly. The relative changes in gait speed for cognitive dual tasking and maximal walking speed were also significantly associated with SIFI at longer SOAs. CONCLUSIONS These findings support growing evidence that mobility, susceptibility to falling and balance control are associated with multisensory processing in ageing.
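As a rough sketch of a trial-level analysis of this shape (not the authors' exact model): a logistic regression on correct/incorrect SIFI responses with within-subject clustering, here approximated with a GEE in statsmodels; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per SIFI trial per participant.
df = pd.read_csv("tilda_sifi_trials.csv")  # correct, gait_speed, soa, age, sex, subject_id

# Trial-level logistic model with subject-level clustering; a GEE stands in
# here for the paper's logistic mixed effects regression.
model = smf.gee("correct ~ gait_speed * soa + age + sex",
                groups="subject_id", data=df,
                family=sm.families.Binomial())
print(model.fit().summary())
```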
Affiliation(s)
- Annalisa Setti
- School of Applied Psychology, University College Cork, Cork, Ireland; The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Belinda Hernández
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; Department of Medical Gerontology, Trinity College Dublin, Dublin, Ireland
- Rebecca J Hirst
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Orna A Donoghue
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland; Department of Medical Gerontology, Trinity College Dublin, Dublin, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
24
Long-term Tai Chi training reduces the fusion illusion in older adults. Exp Brain Res 2023; 241:517-526. [PMID: 36611123 DOI: 10.1007/s00221-023-06544-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 01/01/2023] [Indexed: 01/09/2023]
Abstract
The sound-induced flash illusion (SiFI) is an auditory-dominated audiovisual integration phenomenon that can be used as a reliable indicator of audiovisual integration. Although previous studies have found that Tai Chi exercise promotes cognitive processing, such as executive functions, its effect on early perceptual processing has yet to be investigated. This study used the classic SiFI paradigm to investigate the effects of long-term Tai Chi exercise on multisensory integration in older adults. We compared older adults with long-term Tai Chi exercise experience with those with long-term walking exercise experience. The results showed that the accuracy of the Tai Chi group was higher than that of the control group under the fusion illusion condition, mainly due to increased perceptual sensitivity to flashes. However, there was no significant difference between the two groups in the fission illusion. These results indicate that the fission and fusion illusions were affected differently by Tai Chi exercise, a difference attributable to the participants' flash discriminability. The present study provides preliminary evidence that long-term Tai Chi exercise improves older adults' multisensory integration, an improvement that occurs in early perceptual processing.
25
Butera IM, Stevenson RA, Gifford RH, Wallace MT. Visually Biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions. Trends Hear 2023; 27:23312165221076681. [PMID: 37377212 PMCID: PMC10334005 DOI: 10.1177/23312165221076681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 12/08/2021] [Accepted: 01/10/2021] [Indexed: 06/29/2023] Open
Abstract
The reduction in spectral resolution by cochlear implants oftentimes requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to date measuring the McGurk effect in this population and the first to test the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), we found that 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls, a result concordant with results from the SIFI, where pairing a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to provide further explanation of variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.
Affiliation(s)
- Iliza M. Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ryan A. Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada
- Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
26
Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. [PMID: 36100821 PMCID: PMC9950240 DOI: 10.3758/s13421-022-01355-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/26/2022] [Indexed: 11/08/2022]
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditorily encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
27
Van Engen KJ, Dey A, Sommers MS, Peelle JE. Audiovisual speech perception: Moving beyond McGurk. J Acoust Soc Am 2022; 152:3216. [PMID: 36586857 PMCID: PMC9894660 DOI: 10.1121/10.0015262] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 10/26/2022] [Accepted: 11/05/2022] [Indexed: 05/29/2023]
Abstract
Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.
Affiliation(s)
- Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Avanti Dey
- PLOS ONE, 1265 Battery Street, San Francisco, California 94111, USA
- Mitchell S Sommers
- Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Jonathan E Peelle
- Department of Otolaryngology, Washington University, St. Louis, Missouri 63130, USA
28
Sadiq O, Barnett-Cowan M. Can the Perceived Timing of Multisensory Events Predict Cybersickness? Multisens Res 2022; 35:623-652. [PMID: 36731533 DOI: 10.1163/22134808-bja10083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Accepted: 09/06/2022] [Indexed: 02/07/2023]
Abstract
Humans are constantly presented with rich sensory information that the central nervous system (CNS) must process to form a coherent perception of the self and its relation to its surroundings. While the CNS is efficient in processing multisensory information in natural environments, virtual reality (VR) poses challenges of temporal discrepancies that the CNS must solve. These temporal discrepancies between information from different sensory modalities lead to inconsistencies in perception of the virtual environment, which often cause cybersickness. Here, we investigate whether individual differences in the perceived relative timing of sensory events, specifically parameters of temporal-order judgement (TOJ), can predict cybersickness. Study 1 examined audiovisual (AV) TOJs while Study 2 examined audio-active head movement (AAHM) TOJs. We deduced metrics of the temporal binding window (TBW) and point of subjective simultaneity (PSS) for a total of 50 participants. Cybersickness was quantified using the Simulator Sickness Questionnaire (SSQ). Study 1 results (correlations and multiple regression) show that the oculomotor SSQ shares a significant positive correlation with AV PSS and TBW. While there is a positive correlation between the total SSQ scores and the TBW and PSS, these correlations are not significant. However, although the AV results are promising, we did not find the same effect for AAHM TBW and PSS. We conclude that AV TOJ may serve as a potential tool to predict cybersickness in VR. Such findings will generate a better understanding of cybersickness, which can be used in the development of VR to help mitigate discomfort and maximize adoption.
Affiliation(s)
- Ogai Sadiq
- Department of Kinesiology, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
29
Wilbiks JMP, Brown VA, Strand JF. Speech and non-speech measures of audiovisual integration are not correlated. Atten Percept Psychophys 2022; 84:1809-1819. [PMID: 35610409 PMCID: PMC10699539 DOI: 10.3758/s13414-022-02517-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/09/2022] [Indexed: 11/08/2022]
Abstract
Many natural events generate both visual and auditory signals, and humans are remarkably adept at integrating information from those sources. However, individuals appear to differ markedly in their ability or propensity to combine what they hear with what they see. Individual differences in audiovisual integration have been established using a range of materials, including speech stimuli (seeing and hearing a talker) and simpler audiovisual stimuli (seeing flashes of light combined with tones). Although there are multiple tasks in the literature that are referred to as "measures of audiovisual integration," the tasks themselves differ widely with respect to both the type of stimuli used (speech versus non-speech) and the nature of the tasks themselves (e.g., some tasks use conflicting auditory and visual stimuli whereas others use congruent stimuli). It is not clear whether these varied tasks are actually measuring the same underlying construct: audiovisual integration. This study tested the relationships among four commonly-used measures of audiovisual integration, two of which use speech stimuli (susceptibility to the McGurk effect and a measure of audiovisual benefit), and two of which use non-speech stimuli (the sound-induced flash illusion and audiovisual integration capacity). We replicated previous work showing large individual differences in each measure but found no significant correlations among any of the measures. These results suggest that tasks that are commonly referred to as measures of audiovisual integration may be tapping into different parts of the same process or different constructs entirely.
Affiliation(s)
- Violet A Brown
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, MO, USA
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN, USA
30
The magnitude of the sound-induced flash illusion does not increase monotonically as a function of visual stimulus eccentricity. Atten Percept Psychophys 2022; 84:1689-1698. [PMID: 35562629 PMCID: PMC9106326 DOI: 10.3758/s13414-022-02493-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/14/2022] [Indexed: 11/24/2022]
Abstract
The sound-induced flash illusion (SIFI) occurs when a rapidly presented visual stimulus is accompanied by two auditory stimuli, creating the illusory percept of two visual stimuli. While much research has focused on how the temporal proximity of the audiovisual stimuli impacts susceptibility to the illusion, comparatively less research has focused on the impact of spatial manipulations. Here, we aimed to assess whether manipulating the eccentricity of visual flash stimuli altered the properties of the temporal binding window associated with the SIFI. Twenty participants were required to report whether they perceived one or two flashes that were concurrently presented with one or two beeps. Visual stimuli were presented at one of four different retinal eccentricities (2.5, 5, 7.5, or 10 degrees below fixation) and audiovisual stimuli were separated by one of eight stimulus-onset asynchronies. In keeping with previous findings, increasing stimulus-onset asynchrony between the auditory and visual stimuli led to a marked decrease in susceptibility to the illusion allowing us to estimate the width and amplitude of the temporal binding window. However, varying the eccentricity of the visual stimulus had no effect on either the width or the peak amplitude of the temporal binding window, with a similar pattern of results observed for both the “fission” and “fusion” variants of the illusion. Thus, spatial manipulations of the audiovisual stimuli used to elicit the SIFI appear to have a weaker effect on the integration of sensory signals than temporal manipulations, a finding which has implications for neuroanatomical models of multisensory integration.
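The width and amplitude of a temporal binding window in studies like this one are commonly estimated by fitting a Gaussian to illusion rate as a function of stimulus-onset asynchrony. A minimal sketch with made-up data points (not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: proportion of illusory "two flash" reports per SOA (ms).
soa = np.array([-300, -200, -100, -50, 50, 100, 200, 300])
p_illusion = np.array([0.15, 0.30, 0.55, 0.70, 0.72, 0.50, 0.28, 0.12])

def gaussian(x, amp, mu, sigma, base):
    """Bell-shaped binding window: baseline plus a Gaussian bump."""
    return base + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

(amp, mu, sigma, base), _ = curve_fit(gaussian, soa, p_illusion,
                                      p0=(0.6, 0.0, 100.0, 0.1))
print(f"amplitude = {amp:.2f}, centre = {mu:.0f} ms, width (sigma) = {sigma:.0f} ms")
```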
31
Perceptual training narrows the temporal binding window of audiovisual integration in both younger and older adults. Neuropsychologia 2022; 173:108309. [PMID: 35752266 DOI: 10.1016/j.neuropsychologia.2022.108309] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 05/16/2022] [Accepted: 06/20/2022] [Indexed: 10/17/2022]
Abstract
There is a growing body of evidence to suggest that multisensory processing changes with advancing age, usually in the form of an enlarged temporal binding window, with some studies linking these multisensory changes to negative clinical outcomes. Perceptual training regimes represent a promising means for enhancing the precision of multisensory integration in ageing; however, to date, the vast majority of studies examining the efficacy of multisensory perceptual learning have focused solely on healthy young adults. Here, we measured the temporal binding windows of younger and older participants before and after training on an audiovisual temporal discrimination task to assess (i) how perceptual training affected the shape of the temporal binding window and (ii) whether training effects were similar in both age groups. Our results replicated previous findings of an enlarged temporal binding window in older adults, as well as providing further evidence that both younger and older participants can improve the precision of their audiovisual timing estimation via perceptual training. We also show that this training protocol led to a narrowing of the temporal binding window associated with the sound-induced flash illusion in both age groups, indicating a general refinement of audiovisual integration. However, while younger adults also displayed a general reduction in crossmodal interactions following training, this effect was not observed in the older adult group. Together, our results suggest that perceptual training narrows the temporal binding window of audiovisual integration in both younger and older adults but has less of an impact on prior expectations regarding the source of audiovisual signals in older adults.
32
London RE, Benwell CSY, Cecere R, Quak M, Thut G, Talsma D. EEG Alpha power predicts the temporal sensitivity of multisensory perception. Eur J Neurosci 2022; 55:3241-3255. [DOI: 10.1111/ejn.15719] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 05/04/2022] [Accepted: 05/10/2022] [Indexed: 11/28/2022]
Affiliation(s)
- Roberto Cecere
- Institute of Neuroscience and Psychology, University of Glasgow, UK
- Michel Quak
- Department of Experimental Psychology, Ghent University, Belgium
- Gregor Thut
- Institute of Neuroscience and Psychology, University of Glasgow, UK
- Durk Talsma
- Department of Experimental Psychology, Ghent University, Belgium
33
Abstract
The brain’s ability to create a unified conscious representation of an object by integrating information from multiple perception pathways is called perceptual binding. Binding is crucial for normal cognitive function. Some perceptual binding errors and disorders have been linked to certain neurological conditions, brain lesions, and conditions that give rise to illusory conjunctions. However, the mechanism of perceptual binding remains elusive. Here, I present a computational model of binding using two sets of coupled oscillatory processes that are assumed to occur in response to two different percepts. I use the model to study the dynamic behavior of coupled processes to characterize how these processes can modulate each other and reach a temporal synchrony. I identify different oscillatory dynamic regimes that depend on coupling mechanisms and parameter values. The model can also discriminate different combinations of initial inputs that are set by initial states of coupled processes. Decoding brain signals that are formed through perceptual binding is a challenging task, but my modeling results demonstrate how crosstalk between two systems of processes can possibly modulate their outputs. Therefore, my mechanistic model can help one gain a better understanding of how crosstalk between perception pathways can affect the dynamic behavior of the systems that involve perceptual binding.
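As a toy illustration of the kind of model described here, two coupled oscillatory processes can phase-lock or drift apart depending on coupling strength. This Kuramoto-style pair, with illustrative parameters and coupling form, is an assumption for exposition, not the author's equations:

```python
import numpy as np

def simulate(w1, w2, k, dt=0.001, t_max=10.0):
    """Two phase oscillators with natural frequencies w1, w2 (rad/s)
    and symmetric coupling strength k; returns the final phase difference."""
    th1, th2 = 0.0, np.pi / 2
    for _ in range(int(t_max / dt)):
        th1 += (w1 + k * np.sin(th2 - th1)) * dt
        th2 += (w2 + k * np.sin(th1 - th2)) * dt
    return (th2 - th1 + np.pi) % (2 * np.pi) - np.pi

# Phase-locking ("binding") occurs once k exceeds half the frequency detuning:
# here the detuning is 1 rad/s, so k >= 0.5 locks the pair.
for k in (0.1, 0.5, 2.0):
    print(f"k = {k}: final phase difference = {simulate(10.0, 11.0, k):.3f} rad")
```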
34
Whether attentional loads influence audiovisual integration depends on semantic associations. Atten Percept Psychophys 2022; 84:2205-2218. [PMID: 35304700 DOI: 10.3758/s13414-022-02461-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/19/2022] [Indexed: 11/08/2022]
Abstract
Neuronal studies have shown that selectively attending to a common object in one sensory modality results in facilitated processing of that object's representations in the ignored sensory modality. Thus, the audiovisual (AV) integration of common objects can be observed under modality-specific selective attention. However, little is known about whether this AV integration can also occur under increased attentional load conditions. Additionally, whether semantic associations between multisensory features of common objects modulate the influence of increased attentional loads on this cross-modal integration remains unknown. In the present study, participants completed an AV integration task (ignored auditory stimuli) under various attentional load conditions: no load, low load, and high load. The semantic associations between AV stimuli were composed of animal pictures presented concurrently with semantically congruent, semantically incongruent, or semantically unrelated auditory stimuli. Our results demonstrated that attentional loads did not disrupt the integration of semantically congruent AV stimuli but suppressed the potential alertness effects induced by incongruent or unrelated auditory stimuli under the condition of modality-specific selective attention. These findings highlight the critical role of semantic association between AV stimuli in modulating the effect of attentional loads on the AV integration of modality-specific selective attention.
35
Johnston PR, Alain C, McIntosh AR. Individual Differences in Multisensory Processing Are Related to Broad Differences in the Balance of Local versus Distributed Information. J Cogn Neurosci 2022; 34:846-863. [PMID: 35195723 DOI: 10.1162/jocn_a_01835] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The brain's ability to extract information from multiple sensory channels is crucial to perception and effective engagement with the environment, but the individual differences observed in multisensory processing lack mechanistic explanation. We hypothesized that, from the perspective of information theory, individuals with more effective multisensory processing will exhibit a higher degree of shared information among distributed neural populations while engaged in a multisensory task, representing more effective coordination of information among regions. To investigate this, healthy young adults completed an audiovisual simultaneity judgment task to measure their temporal binding window (TBW), which quantifies the ability to distinguish fine discrepancies in timing between auditory and visual stimuli. EEG was then recorded during a second run of the simultaneity judgment task, and partial least squares was used to relate individual differences in the TBW width to source-localized EEG measures of local entropy and mutual information, indexing local and distributed processing of information, respectively. The narrowness of the TBW, reflecting more effective multisensory processing, was related to a broad pattern of higher mutual information and lower local entropy at multiple timescales. Furthermore, a small group of temporal and frontal cortical regions, including those previously implicated in multisensory integration and response selection, respectively, played a prominent role in this pattern. Overall, these findings suggest that individual differences in multisensory processing are related to widespread individual differences in the balance of distributed versus local information processing among a large subset of brain regions, with more distributed information being associated with more effective multisensory processing. The balance of distributed versus local information processing may therefore be a useful measure for exploring individual differences in multisensory processing, its relationship to higher cognitive traits, and its disruption in neurodevelopmental disorders and clinical conditions.
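The two information measures named above are, in their simplest discrete form, straightforward to compute. A histogram-based sketch follows; it is a crude stand-in for the estimators applied to source-localized EEG, with simulated signals in place of real data:

```python
import numpy as np

def local_entropy(x, bins=16):
    """Shannon entropy (bits) of one signal, via a histogram estimate."""
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=16):
    """Mutual information (bits) between two signals, via a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(0)
shared = rng.normal(size=5000)          # a common driving signal
x = shared + 0.5 * rng.normal(size=5000)  # two "regions" sharing information
y = shared + 0.5 * rng.normal(size=5000)
print(local_entropy(x), mutual_information(x, y))
```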
36
Giurgola S, Casati C, Stampatori C, Perucca L, Mattioli F, Vallar G, Bolognini N. Abnormal multisensory integration in relapsing–remitting multiple sclerosis. Exp Brain Res 2022; 240:953-968. [PMID: 35094114 PMCID: PMC8918188 DOI: 10.1007/s00221-022-06310-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 01/15/2022] [Indexed: 12/22/2022]
Abstract
The Temporal Binding Window (TBW) represents a reliable index of efficient multisensory integration, which allows individuals to infer which sensory inputs from different modalities pertain to the same event. TBW alterations have been reported in some neurological and neuropsychiatric disorders and seem to negatively affect cognition and behavior. So far, it is still unknown whether deficits of multisensory integration, as indexed by an abnormal TBW, are present even in Multiple Sclerosis. We addressed this issue by testing 25 participants affected by relapsing–remitting Multiple Sclerosis (RRMS) and 30 age-matched healthy controls. Participants completed a simultaneity judgment task (SJ2) to assess the audio-visual TBW; two unimodal SJ2 versions were used as control tasks. Individuals with RRMS showed an enlarged audio-visual TBW (width range = from − 166 ms to + 198 ms), as compared to healthy controls (width range = − 177/ + 66 ms), thus showing an increased tendency to integrate temporally asynchronous visual and auditory stimuli. In contrast, simultaneity perception of unimodal (visual or auditory) events overall did not differ from that of controls. These results provide the first evidence of a selective deficit of multisensory integration in individuals affected by RRMS, beyond the well-known motor and cognitive impairments. The reduced multisensory temporal acuity is likely caused by a disruption of the neural interplay between different sensory systems caused by multiple sclerosis.
Affiliation(s)
- Serena Giurgola
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Carlotta Casati
- Neuropsychology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Laura Perucca
- Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Flavia Mattioli
- Neuropsychology Unit, Spedali Civili of Brescia, Brescia, Italy
- Giuseppe Vallar
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Neuropsychology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Nadia Bolognini
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Neuropsychology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
37
Carlini A, Bigand E. Does Sound Influence Perceived Duration of Visual Motion? Front Psychol 2021; 12:751248. [PMID: 34925155 PMCID: PMC8675101 DOI: 10.3389/fpsyg.2021.751248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 11/10/2021] [Indexed: 11/13/2022] Open
Abstract
Multimodal perception is a key factor in obtaining a rich and meaningful representation of the world. However, how each stimulus combines to determine the overall percept remains a matter of research. The present work investigates the effect of sound on the bimodal perception of motion. A visual moving target was presented to the participants, associated with a concurrent sound, in a time reproduction task. Particular attention was paid to the structure of both the auditory and the visual stimuli. Four different laws of motion were tested for the visual motion, one of which was biological. Nine different sound profiles were tested, from a simple constant sound to more variable and complex pitch profiles, always presented synchronously with the motion. Participants' responses show that constant sounds produce the worst duration estimation performance, even worse than the silent condition; more complex sounds, instead, yield significantly better performance. The structure of the visual stimulus and that of the auditory stimulus appear to influence performance independently. Biological motion provides the best performance, while motion featuring a constant-velocity profile provides the worst. Results clearly show that a concurrent sound influences the unified perception of motion; the type and magnitude of the bias depend on the structure of the sound stimulus. Contrary to expectations, the best performance is not generated by the simplest stimuli, but rather by more complex stimuli that are richer in information.
Affiliation(s)
- Alessandro Carlini
- Laboratory for Research on Learning and Development, CNRS UMR 5022, University of Burgundy, Dijon, France
- Emmanuel Bigand
- Laboratory for Research on Learning and Development, CNRS UMR 5022, University of Burgundy, Dijon, France
38
Gijbels L, Yeatman JD, Lalonde K, Lee AKC. Audiovisual Speech Processing in Relationship to Phonological and Vocabulary Skills in First Graders. J Speech Lang Hear Res 2021; 64:5022-5040. [PMID: 34735292 PMCID: PMC9150669 DOI: 10.1044/2021_jslhr-21-00196] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 07/06/2021] [Accepted: 08/11/2021] [Indexed: 06/13/2023]
Abstract
PURPOSE It is generally accepted that adults use visual cues to improve speech intelligibility in noisy environments, but findings regarding visual speech benefit in children are mixed. We explored factors that contribute to audiovisual (AV) gain in young children's speech understanding. We examined whether there is an AV benefit to speech-in-noise recognition in children in first grade and whether the visual salience of phonemes influences their AV benefit. We also explored whether individual differences in AV speech enhancement could be explained by vocabulary knowledge, phonological awareness, or general psychophysical testing performance. METHOD Thirty-seven first graders completed online psychophysical experiments. We used an online single-interval, four-alternative forced-choice picture-pointing task with age-appropriate consonant-vowel-consonant words to measure auditory-only, visual-only, and AV word recognition in noise at -2 and -8 dB SNR. We obtained standard measures of vocabulary and phonological awareness and included a general psychophysical test to examine correlations with AV benefits. RESULTS We observed a significant overall AV gain among children in first grade. This effect was mainly attributed to the benefit at -8 dB SNR for visually distinct targets. Individual differences were not explained by any of the child variables. Boys showed lower auditory-only performance, leading to significantly larger AV gains. CONCLUSIONS This study shows an AV benefit, from distinctive visual cues, to word recognition in challenging noisy conditions in first graders. The cognitive and linguistic constraints of the task may have minimized the impact of individual differences in vocabulary and phonological awareness on AV benefit. The gender difference should be studied in a larger sample and age range.
Affiliation(s)
- Liesbeth Gijbels
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- Jason D. Yeatman
- Division of Developmental-Behavioral Pediatrics, School of Medicine, Stanford University, CA
- Graduate School of Education, Stanford University, CA
- Kaylah Lalonde
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE
- Adrian K. C. Lee
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Institute for Learning & Brain Sciences, University of Washington, Seattle
39
Marin A, Störmer VS, Carver LJ. Expectations about dynamic visual objects facilitates early sensory processing of congruent sounds. Cortex 2021; 144:198-211. [PMID: 34673436 DOI: 10.1016/j.cortex.2021.08.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 05/17/2021] [Accepted: 08/05/2021] [Indexed: 11/17/2022]
Abstract
The perception of a moving object can lead to the expectation of its sound, yet little is known about how visual expectations influence auditory processing. We examined how visual perception of an object moving continuously across the visual field influences early auditory processing of a sound that occurred congruently or incongruently with the object's motion. In Experiment 1, electroencephalogram (EEG) activity was recorded from adults who passively viewed a ball that appeared either on the left or right boundary of a display and continuously traversed along the horizontal midline to make contact and elicit a bounce sound off the opposite boundary. Our main analysis focused on the auditory-evoked event-related potential. For audio-visual (AV) trials, a sound accompanied the visual input when the ball contacted the opposite boundary (AV-synchronous), or the sound occurred before contact (AV-asynchronous). We also included audio-only and visual-only trials. AV-synchronous sounds elicited an earlier and attenuated auditory response relative to AV-asynchronous or audio-only events. In Experiment 2, we examined the roles of expectancy and multisensory integration in influencing this response. In addition to the audio-only, AV-synchronous, and AV-asynchronous conditions, participants were shown a ball that became occluded prior to reaching the boundary of the display, but elicited an expected sound at the point of occluded collision. The auditory response during the AV-occluded condition resembled that of the AV-synchronous condition, suggesting that expectations induced by a moving object can influence early auditory processing. Broadly, the results suggest that dynamic visual stimuli can help generate expectations about the timing of sounds, which then facilitates the processing of auditory information that matches these expectations.
Affiliation(s)
- Andrew Marin
- University of California, San Diego (UCSD), Psychology Department, La Jolla, CA, USA
- Viola S Störmer
- Dartmouth College, Department of Psychological and Brain Sciences, Hanover, NH, USA
- Leslie J Carver
- University of California, San Diego (UCSD), Psychology Department, La Jolla, CA, USA
40
Motor Circuit and Superior Temporal Sulcus Activities Linked to Individual Differences in Multisensory Speech Perception. Brain Topogr 2021; 34:779-792. [PMID: 34480635 DOI: 10.1007/s10548-021-00869-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 08/24/2021] [Indexed: 10/20/2022]
Abstract
Integrating multimodal information into a unified perception is a fundamental human capacity. The McGurk effect is a remarkable multisensory illusion in which incongruent auditory and visual syllables produce a percept that differs from both. However, not all listeners perceive the McGurk illusion to the same degree. The neural basis for individual differences in the modulation of multisensory integration and syllabic perception remains largely unclear. To probe the possible involvement of specific neural circuits in individual differences in multisensory speech perception, we first implemented a behavioral experiment to examine McGurk susceptibility. Then, functional magnetic resonance imaging was performed in 63 participants to measure brain activity in response to non-McGurk audiovisual syllables. We found significant individual variability in McGurk illusion perception. Moreover, we found significant differential activations of the auditory and visual regions and the left superior temporal sulcus (STS), as well as multiple motor areas, between strong and weak McGurk perceivers. Importantly, individual engagement of the STS and motor areas specifically predicted behavioral McGurk susceptibility, unlike the sensory regions. These findings suggest that distinct multimodal integration in the STS, together with coordinated phonemic modulatory processes in motor circuits, may serve as a neural substrate for interindividual differences in multisensory speech perception.
41
Muller AM, Dalal TC, Stevenson RA. Schizotypal personality traits and multisensory integration: An investigation using the McGurk effect. Acta Psychol (Amst) 2021; 218:103354. [PMID: 34174491 DOI: 10.1016/j.actpsy.2021.103354] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 06/04/2021] [Accepted: 06/10/2021] [Indexed: 12/14/2022] Open
Abstract
Multisensory integration, the process by which sensory information from different sensory modalities is bound together, is hypothesized to contribute to perceptual symptomatology in schizophrenia, in which multisensory integration differences have been consistently found. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher levels of schizotypal traits. In the current study, we used the McGurk task as a measure of multisensory integration. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher levels of schizotypal traits, specifically on the Unusual Perceptual Experiences and Odd Speech subscales, would be associated with decreased multisensory integration of speech. Surprisingly, Unusual Perceptual Experiences were not associated with multisensory integration. However, Odd Speech was associated with multisensory integration, and this association extended more broadly across the Disorganized factor of the SPQ, including Odd or Eccentric Behaviour. Individuals with higher Odd or Eccentric Behaviour scores also demonstrated poorer lip-reading abilities, which partially explained performance in the McGurk task. This suggests that aberrant perceptual processes affecting individuals across the schizophrenia spectrum may relate to disorganized symptomatology.
Affiliation(s)
- Anne-Marie Muller
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Tyler C Dalal
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
42
Rodriguez R, Crane BT. Effect of timing delay between visual and vestibular stimuli on heading perception. J Neurophysiol 2021; 126:304-312. [PMID: 34191637 DOI: 10.1152/jn.00351.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s duration visual headings that were presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust at ±250 ms when examined across subjects: 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE). The mean bias was much diminished with temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar. NEW & NOTEWORTHY The effect of timing on visual-inertial integration in heading perception has not been previously examined. This study finds that visual direction influences inertial heading perception when timing differences are within 250 ms. This suggests that visual-inertial stimuli can be integrated over a wider range than reported for visual-auditory integration, which may be due to the unique nature of inertial sensation: it can only sense acceleration, while the visual system senses position but encodes velocity.
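To make the "bias toward the visual heading" measure concrete, one way to compute it from trial data is sketched below, under assumed sign conventions with angles in degrees (the function names and conventions are illustrative, not the authors'):

```python
import numpy as np

def wrap(deg):
    """Wrap angles into (-180, 180]."""
    return (deg + 180.0) % 360.0 - 180.0

def bias_toward_visual(perceived, inertial, visual):
    """Signed shift of the reported heading, positive when it moves
    toward the offset visual heading."""
    err = wrap(perceived - inertial)            # perceptual error
    direction = np.sign(wrap(visual - inertial))  # side of the visual offset
    return err * direction

# Example: inertial heading 0 deg, visual offset +60 deg, report 12 deg.
print(bias_toward_visual(12.0, 0.0, 60.0))  # 12.0 deg of bias toward the visual cue
```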
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Department of Otolaryngology, University of Rochester, Rochester, New York
- Department of Neuroscience, University of Rochester, Rochester, New York
43
Strelnikov K, Hervault M, Laurent L, Barone P. When two is worse than one: The deleterious impact of multisensory stimulation on response inhibition. PLoS One 2021; 16:e0251739. [PMID: 34014959 PMCID: PMC8136741 DOI: 10.1371/journal.pone.0251739] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 05/01/2021] [Indexed: 11/18/2022] Open
Abstract
Multisensory facilitation is known to improve the perceptual performance and reaction times of participants in a wide range of tasks, from detection and discrimination to memorization. We asked whether a multimodal signal can similarly improve action inhibition using the stop-signal paradigm. Indeed, consistent with a crossmodal redundant signal effect that relies on multisensory neuronal integration, the threshold for initiating behavioral responses is known to be reached faster with multisensory stimuli. To evaluate whether this phenomenon also occurs for inhibition, we compared stop signals in unimodal (human faces or voices) versus audiovisual modalities in natural or degraded conditions. In contrast to the expected multisensory facilitation, we observed poorer inhibition efficiency in the audiovisual modality compared with the visual and auditory modalities. This result was corroborated by both response probabilities and stop-signal reaction times. The visual modality (faces) was the most effective. This is the first demonstration of an audiovisual impairment in the domain of perception and action. It suggests that when individuals are engaged in a high-level decisional conflict, bimodal stimulation is not processed as a simple multisensory object that improves performance but is perceived as concurrent visual and auditory information. This absence of unity increases task demand and thus impairs the ability to revise the response.
Affiliation(s)
- Kuzma Strelnikov
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
- Purpan University Hospital, Toulouse, France
- Mario Hervault
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
- Lidwine Laurent
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
- Pascal Barone
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
44
Borgolte A, Bransi A, Seifert J, Toto S, Szycik GR, Sinke C. Audiovisual Simultaneity Judgements in Synaesthesia. Multisens Res 2021; 34:1-12. [PMID: 33984831 DOI: 10.1163/22134808-bja10050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 04/07/2021] [Indexed: 11/19/2022]
Abstract
Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study, we examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.
Affiliation(s)
- Anna Borgolte
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
- Ahmad Bransi
- Oberberg Fachklinik Weserbergland, Brede 29, 32699 Extertal-Laßbruch, Germany
- Johanna Seifert
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
- Sermin Toto
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
- Gregor R Szycik
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
- Christopher Sinke
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Division of Clinical Psychology & Sexual Medicine, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
45
Perceptual timing precision with vibrotactile, auditory, and multisensory stimuli. Atten Percept Psychophys 2021; 83:2267-2280. [PMID: 33772447 DOI: 10.3758/s13414-021-02254-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/17/2021] [Indexed: 11/08/2022]
Abstract
The growing use of vibrotactile signaling devices makes it important to understand the perceptual limits on vibrotactile information processing. To promote that understanding, we carried out a pair of experiments on vibrotactile, auditory, and bimodal (synchronous vibrotactile and auditory) temporal acuity. On each trial, subjects experienced a set of isochronous, standard intervals (400 ms each), followed by one interval of variable duration (400 ± 1-80 ms). Intervals were demarcated by short vibrotactile, auditory, or bimodal pulses. Subjects categorized the timing of the last interval by describing the final pulse as either "early" or "late" relative to its predecessors. In Experiment 1, each trial contained three isochronous standard intervals, followed by an interval of variable length. In Experiment 2, the number of isochronous standard intervals per trial varied, from one to four. Psychometric modeling revealed that vibrotactile stimulation produced poorer temporal discrimination than either auditory or bimodal stimulation. Moreover, auditory signals dominated bimodal sensitivity, and inter-individual differences in temporal discriminability were reduced with bimodal stimulation. Additionally, varying the number of isochronous intervals in a trial failed to improve temporal sensitivity in either modality, suggesting that memory played a key role in judgments of interval duration.
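The psychometric modeling referred to above typically means fitting a cumulative Gaussian to the proportion of "late" responses as a function of the final interval's offset, yielding a point of subjective equality and a discrimination threshold. A minimal sketch with made-up response proportions (not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Offsets of the final interval (ms) and proportion of "late" responses;
# values are illustrative only.
offsets = np.array([-80, -40, -20, -10, 10, 20, 40, 80])
p_late  = np.array([0.05, 0.18, 0.35, 0.44, 0.58, 0.67, 0.85, 0.96])

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, offsets, p_late, p0=(0.0, 30.0))
print(f"PSE = {mu:.1f} ms, discrimination threshold (sigma) = {sigma:.1f} ms")
```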
46
Abstract
The present study examined the relationship between multisensory integration and the temporal binding window (TBW) for multisensory processing in adults with Autism spectrum disorder (ASD). The ASD group was less likely than the typically developing group to perceive an illusory flash induced by multisensory integration during a sound-induced flash illusion (SIFI) task. Although both groups showed comparable TBWs during the multisensory temporal order judgment task, correlation analyses and Bayes factors provided moderate evidence that the reduced SIFI susceptibility was associated with the narrow TBW in the ASD group. These results suggest that the individuals with ASD exhibited atypical multisensory integration and that individual differences in the efficacy of this process might be affected by the temporal processing of multisensory information.
Collapse
|
47
|
O’Kane SH, Ehrsson HH. The contribution of stimulating multiple body parts simultaneously to the illusion of owning an entire artificial body. PLoS One 2021; 16:e0233243. [PMID: 33493178 PMCID: PMC7833142 DOI: 10.1371/journal.pone.0233243] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 12/04/2020] [Indexed: 12/12/2022] Open
Abstract
The full-body ownership illusion exploits multisensory perception to induce a feeling of ownership of an entire artificial body. Although previous research has shown that synchronous visuotactile stimulation of a single body part is sufficient for illusory ownership of the whole body, the effect of combining multisensory stimulation across multiple body parts remains unknown. Therefore, 48 healthy adults participated in a full-body ownership illusion with conditions involving synchronous (illusion) or asynchronous (control) visuotactile stimulation to one, two, or three body parts simultaneously (2×3 design). We used questionnaires to isolate illusory ownership of five specific body parts (left arm, right arm, trunk, left leg, right leg) from the full-body ownership experience and sought to test not only for increased ownership in synchronous versus asynchronous conditions but also for potentially varying degrees of full-body ownership illusion intensity related to the number of body parts stimulated. Illusory full-body ownership and all five body-part ownership ratings were significantly higher following synchronous stimulation than asynchronous stimulation (p-values < .01). Since non-stimulated body parts also received significantly increased ownership ratings following synchronous stimulation, the results are consistent with an illusion that engages the entire body. Furthermore, we noted that ownership ratings for right body parts (which were often but not always stimulated in this experiment) were significantly higher than ownership ratings for left body parts (which were never stimulated). Regarding the effect of stimulating multiple body parts simultaneously on explicit full-body ownership ratings, there was no evidence of a significant main effect of the number of stimulations (p = .850) or any significant interaction with stimulation synchronicity (p = .160), as assessed by linear mixed modelling. Instead, median ratings indicated a moderate affirmation (+1) of an illusory full-body sensation in all three synchronous conditions, a finding mirrored by comparable full-body illusion onset times. In sum, illusory full-body ownership appears to be an 'all-or-nothing' phenomenon and depends upon the synchronicity of visuotactile stimulation, irrespective of the number of stimulated body parts.
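A rough sketch of the kind of linear mixed model referenced above (ownership rating as a function of synchronicity, number of stimulated body parts, and their interaction, with random intercepts per participant) is given below; all variable names and data are hypothetical placeholders, not the authors' dataset or exact model specification.

```python
# A sketch of a linear mixed model: rating ~ synchronicity x number of
# stimulated body parts, with random intercepts per participant.
# All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(48), 6)         # 48 participants x 6 cells
sync = np.tile([1, 1, 1, 0, 0, 0], 48)         # synchronous vs asynchronous
n_parts = np.tile([1, 2, 3, 1, 2, 3], 48)      # body parts stimulated
rating = 0.5 + 1.5 * sync + rng.normal(0, 1, len(sync))  # ownership rating

df = pd.DataFrame(dict(subject=subjects, sync=sync,
                       n_parts=n_parts, rating=rating))
model = smf.mixedlm("rating ~ sync * n_parts", df, groups=df["subject"])
print(model.fit().summary())
```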
Collapse
Affiliation(s)
- Sophie H. O’Kane
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - H. Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
48
|
Hirst RJ, Whelan R, Boyle R, Setti A, Knight S, O'Connor J, Williamson W, McMorrow J, Fagan AJ, Meaney JF, Kenny RA, De Looze C, Newell FN. Gray matter volume in the right angular gyrus is associated with differential patterns of multisensory integration with aging. Neurobiol Aging 2020; 100:83-90. [PMID: 33508565 DOI: 10.1016/j.neurobiolaging.2020.12.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Revised: 11/04/2020] [Accepted: 12/05/2020] [Indexed: 02/02/2023]
Abstract
Multisensory perception might provide an important marker of brain function in aging. However, the cortical structures supporting multisensory perception in aging are poorly understood. In this study, we compared regional gray matter volume in a group of middle-aged (n = 101; 49-64 years) and older (n = 116; 71-87 years) adults from The Irish Longitudinal Study on Ageing using voxel-based morphometry. Participants completed a measure of multisensory integration, the sound-induced flash illusion, and were grouped as per their illusion susceptibility. A significant interaction was observed in the right angular gyrus; in the middle-aged group, larger gray matter volume corresponded to stronger illusion perception, while in older adults larger gray matter volume corresponded to lower illusion susceptibility. This interaction remained significant even when controlling for a range of demographic, sensory, cognitive, and health variables. These findings show that multisensory integration is associated with specific structural differences in the aging brain and highlight the angular gyrus as a possible "cross-modal hub" associated with age-related change in multisensory perception.
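To make the interaction analysis concrete, here is a simplified single-voxel sketch of the group x susceptibility test on gray matter volume with a covariate; a real VBM analysis runs this univariately across many voxels with multiple-comparison correction. Every variable below is a synthetic stand-in, not TILDA data.

```python
# A simplified sketch of a VBM-style interaction test: gray matter volume
# regressed on age group, illusion susceptibility, and their interaction,
# with a covariate. All data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 217                                           # 101 middle-aged + 116 older
df = pd.DataFrame({
    "gm_volume": rng.normal(0.6, 0.1, n),         # GM volume in one voxel/ROI
    "older": np.r_[np.zeros(101), np.ones(116)],  # age-group indicator
    "susceptible": rng.integers(0, 2, n),         # SIFI susceptibility group
    "tiv": rng.normal(1400, 120, n),              # total intracranial volume
})

fit = smf.ols("gm_volume ~ older * susceptible + tiv", df).fit()
print(fit.params["older:susceptible"], fit.pvalues["older:susceptible"])
```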
Collapse
Affiliation(s)
- Rebecca J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland.
| | - Robert Whelan
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Rory Boyle
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Annalisa Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; School of Applied Psychology, University College Cork, Cork, Ireland
| | - Silvin Knight
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
| | - John O'Connor
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
| | - Wilby Williamson
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; Department of Physiology, Trinity College Dublin, Dublin, Ireland
| | - Jason McMorrow
- The National Centre for Advanced Medical Imaging (CAMI), St. James's Hospital, Dublin, Ireland
| | - Andrew J Fagan
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - James F Meaney
- The National Centre for Advanced Medical Imaging (CAMI), St. James's Hospital, Dublin, Ireland; School of Medicine, Trinity College Dublin, Dublin, Ireland
| | - Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
| | - Céline De Looze
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland.
| | - Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| |
Collapse
|
49
|
Opoku-Baah C, Wallace MT. Brief period of monocular deprivation drives changes in audiovisual temporal perception. J Vis 2020; 20:8. [PMID: 32761108 PMCID: PMC7438662 DOI: 10.1167/jov.20.8.8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
The human brain retains a striking degree of plasticity into adulthood. Recent studies have demonstrated that a short period of altered visual experience (via monocular deprivation) can change the dynamics of binocular rivalry in favor of the deprived eye, a compensatory action thought to be mediated by an upregulation of cortical gain control mechanisms. Here, we sought to better understand the impact of monocular deprivation on multisensory abilities, specifically examining audiovisual temporal perception. Using an audiovisual simultaneity judgment task, we discovered that 90 minutes of monocular deprivation produced opposing effects on the temporal binding window depending on the eye used in the task. Thus, in those who performed the task with their deprived eye there was a narrowing of the temporal binding window, whereas in those performing the task with their nondeprived eye there was a widening of the temporal binding window. The effect was short lived, being observed only in the first 10 minutes of postdeprivation testing. These findings indicate that changes in visual experience in the adult can rapidly impact multisensory perceptual processes, a finding that has important clinical implications for those patients with adult-onset visual deprivation and for therapies founded on monocular deprivation.
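The study rests on a simultaneity judgment task; as background, a minimal sketch of how a temporal binding window is commonly estimated from such a task follows: fit a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies and take its full width at half maximum. The data and the FWHM criterion are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal sketch of TBW estimation from a simultaneity judgment task:
# fit a Gaussian to the proportion of "simultaneous" responses across
# audiovisual SOAs. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, amp, mu, sigma):
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms
p_simult = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.40, 0.15])

(amp, mu, sigma), _ = curve_fit(gauss, soas, p_simult, p0=[1.0, 0.0, 150.0])

# One common definition: the TBW is the full width at half maximum (FWHM).
tbw = 2.355 * sigma  # FWHM of a Gaussian = 2*sqrt(2*ln 2)*sigma
print(f"PSS = {mu:.0f} ms, TBW (FWHM) = {tbw:.0f} ms")
```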
Collapse
Affiliation(s)
| | - Mark T Wallace
| |
Collapse
|
50
|
Zhou HY, Wang YM, Zhang RT, Cheung EFC, Pantelis C, Chan RCK. Neural Correlates of Audiovisual Temporal Binding Window in Individuals With Schizotypal and Autistic Traits: Evidence From Resting-State Functional Connectivity. Autism Res 2020; 14:668-680. [PMID: 33314710 DOI: 10.1002/aur.2456] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 12/01/2020] [Accepted: 12/03/2020] [Indexed: 01/02/2023]
Abstract
Temporal proximity is an important cue for multisensory integration. Previous evidence indicates that individuals with autism and schizophrenia are more likely to integrate multisensory inputs over a longer temporal binding window (TBW). However, whether such deficits in audiovisual temporal integration extend to subclinical populations with high schizotypal and autistic traits is unclear. Using audiovisual simultaneity judgment (SJ) tasks for nonspeech and speech stimuli, our results suggested that the width of the audiovisual TBW was not significantly correlated with self-reported schizotypal and autistic traits in a group of young adults. Functional magnetic resonance imaging (fMRI) resting-state activity was also acquired to explore the neural correlates underlying inter-individual variability in TBW width. Across the entire sample, stronger resting-state functional connectivity (rsFC) between the left superior temporal cortex and the left precuneus, and weaker rsFC between the left cerebellum and the right dorsolateral prefrontal cortex, were correlated with a narrower TBW for speech stimuli. Meanwhile, stronger rsFC between the left anterior superior temporal gyrus and the right inferior temporal gyrus was correlated with a wider audiovisual TBW for non-speech stimuli. The TBW-related rsFC was not affected by levels of subclinical traits. In conclusion, this study indicates that audiovisual temporal processing may not be affected by autistic and schizotypal traits and that rsFC between brain regions responding to multisensory information and timing may account for inter-individual differences in TBW width. LAY SUMMARY: Individuals with ASD and schizophrenia are more likely to perceive asynchronous auditory and visual events as occurring simultaneously even if they are well separated in time. We investigated whether similar difficulties in audiovisual temporal processing were present in subclinical populations with high autistic and schizotypal traits. We found that the ability to detect audiovisual asynchrony was not affected by different levels of autistic and schizotypal traits. We also found that connectivity of some brain regions engaging in multisensory and timing tasks might explain an individual's tendency to bind multisensory information within a wide or narrow time window. Autism Res 2021, 14: 668-680. © 2020 International Society for Autism Research and Wiley Periodicals LLC.
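As a schematic of the rsFC analysis described above: per subject, correlate two ROI time series, Fisher z-transform the coefficient, and relate the resulting connectivity values to TBW width across subjects. All time series, ROI labels, and sample sizes below are synthetic placeholders.

```python
# A schematic sketch of resting-state functional connectivity (rsFC)
# correlated with TBW width across subjects. All data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subjects, n_timepoints = 40, 200

rsfc = np.empty(n_subjects)
tbw = rng.normal(300, 80, n_subjects)       # hypothetical TBW widths (ms)
for s in range(n_subjects):
    roi_a = rng.normal(size=n_timepoints)   # e.g., superior temporal cortex
    roi_b = rng.normal(size=n_timepoints)   # e.g., precuneus
    r, _ = pearsonr(roi_a, roi_b)
    rsfc[s] = np.arctanh(r)                 # Fisher z-transform

r_group, p_group = pearsonr(rsfc, tbw)
print(f"rsFC-TBW correlation: r = {r_group:.2f}, p = {p_group:.3f}")
```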
Collapse
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Yong-Ming Wang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Rui-Ting Zhang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Eric F C Cheung
- Castle Peak Hospital, Hong Kong Special Administrative Region, China
| | - Christos Pantelis
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne & Melbourne Health, Carlton South, Victoria, Australia; Florey Institute for Neurosciences and Mental Health, Parkville, Victoria, Australia
| | - Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|