1. Gao Y, Xue K, Odegaard B, Rahnev D. Automatic multisensory integration follows subjective confidence rather than objective performance. Communications Psychology 2025; 3:38. [PMID: 40069314] [PMCID: PMC11896883] [DOI: 10.1038/s44271-025-00221-w]
Abstract
It is well known that sensory information from one modality can automatically affect judgments from a different sensory modality. However, it remains unclear what determines the strength of the influence of an irrelevant sensory cue from one modality on a perceptual judgment for a different modality. Here we test whether the strength of multisensory impact by an irrelevant sensory cue depends on participants' objective accuracy or subjective confidence for that cue. We created visual motion stimuli with low vs. high overall motion energy, where high-energy stimuli yielded higher confidence but lower accuracy in a visual-only task. We then tested the impact of the low- and high-energy visual stimuli on auditory motion perception in 99 participants. We found that the high-energy visual stimuli influenced the auditory motion judgments more strongly than the low-energy visual stimuli, consistent with their higher confidence but contrary to their lower accuracy. A computational model assuming common principles underlying confidence reports and multisensory integration captured these effects. Our findings show that automatic multisensory integration follows subjective confidence rather than objective performance and suggest the existence of common computations across vastly different stages of perceptual decision making.
Affiliation(s)
- Yi Gao
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Kai Xue
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Brian Odegaard
- Department of Psychology, University of Florida, Gainesville, FL, 32611, USA
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, 30332, USA
2. Patil O, Kaple M. Sensory Processing Differences in Individuals With Autism Spectrum Disorder: A Narrative Review of Underlying Mechanisms and Sensory-Based Interventions. Cureus 2023; 15:e48020. [PMID: 38034138] [PMCID: PMC10687592] [DOI: 10.7759/cureus.48020]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by difficulties with social interaction and restricted, repetitive patterns of behavior. Altered sensory processing and perception are considered characteristics of ASD. Sensory processing differences (SPDs) are commonly observed in individuals with ASD, leading to atypical responses to sensory stimuli. SPDs refer to the way in which individuals receive, process, and respond to sensory information from the environment. People with SPDs may be hypersensitive (over-reactive) or hyposensitive (under-reactive) to sensory input, or they may experience fragmented or distorted perceptions. These differences can make it difficult for individuals with SPDs to filter out irrelevant sensory information, and to integrate sensory information from different sources. This study intends to investigate the underlying mechanisms contributing to SPDs in individuals with autism and determine the effectiveness of sensory-based therapies in addressing these challenges. The literature suggests that altered neural pathways, sensory gating dysfunction, and atypical sensory modulation contribute to SPDs in individuals with ASD. Assistive technology, environmental changes, and sensory-based interventions like sensory integration therapy (SIT) have all shown promise in improving sensory functioning and reducing associated behavioral issues. However, further research is needed to improve our understanding of sensory processing in autism and to optimize interventions for individuals with ASD.
Affiliation(s)
- Om Patil
- Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Meghali Kaple
- Biochemistry, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
3. Dung L, Newen A. Profiles of animal consciousness: A species-sensitive, two-tier account to quality and distribution. Cognition 2023; 235:105409. [PMID: 36821996] [DOI: 10.1016/j.cognition.2023.105409]
Abstract
The science of animal consciousness investigates (i) which animal species are conscious (the distribution question) and (ii) how conscious experience differs in detail between species (the quality question). We propose a framework which clearly distinguishes both questions and tackles both of them. This two-tier account distinguishes consciousness along ten dimensions and suggests cognitive capacities which serve as distinct operationalizations for each dimension. The two-tier account achieves three valuable aims: First, it separates strong and weak indicators of the presence of consciousness. Second, these indicators include not only different specific contents but also differences in the way particular contents are processed (by processes of learning, reasoning or abstraction). Third, evidence of consciousness from each dimension can be combined to derive the distinctive multi-dimensional consciousness profile of various species. Thus, the two-tier account shows how the kind of conscious experience of different species can be systematically compared.
Affiliation(s)
- Leonard Dung
- Ruhr-University Bochum, Institute of Philosophy II, Universitätsstraße 150, 44801 Bochum, Germany
- Albert Newen
- Ruhr-University Bochum, Institute of Philosophy II, Universitätsstraße 150, 44801 Bochum, Germany
4. Strelnikov K, Hervault M, Laurent L, Barone P. When two is worse than one: The deleterious impact of multisensory stimulation on response inhibition. PLoS One 2021; 16:e0251739. [PMID: 34014959] [PMCID: PMC8136741] [DOI: 10.1371/journal.pone.0251739]
Abstract
Multisensory facilitation is known to improve the perceptual performances and reaction times of participants in a wide range of tasks, from detection and discrimination to memorization. We asked whether a multimodal signal can similarly improve action inhibition using the stop-signal paradigm. Indeed, consistent with a crossmodal redundant signal effect that relies on multisensory neuronal integration, the threshold for initiating behavioral responses is known to be reached faster with multisensory stimuli. To evaluate whether this phenomenon also occurs for inhibition, we compared stop signals in unimodal (human faces or voices) versus audiovisual modalities in natural or degraded conditions. In contrast to the expected multisensory facilitation, we observed poorer inhibition efficiency in the audiovisual modality compared with the visual and auditory modalities. This result was corroborated by both response probabilities and stop-signal reaction times. The visual modality (faces) was the most effective. This is the first demonstration of an audiovisual impairment in the domain of perception and action. It suggests that when individuals are engaged in a high-level decisional conflict, bimodal stimulation is not processed as a single multisensory object that improves performance but is perceived as concurrent visual and auditory information. This absence of unity increases task demand and thus impairs the ability to revise the response.
Affiliation(s)
- Kuzma Strelnikov
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
- Purpan University Hospital, Toulouse, France
- Mario Hervault
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
- Lidwine Laurent
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
- Pascal Barone
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 – CNRS, Toulouse, France
5.
Abstract
The present study examined the relationship between multisensory integration and the temporal binding window (TBW) for multisensory processing in adults with Autism spectrum disorder (ASD). The ASD group was less likely than the typically developing group to perceive an illusory flash induced by multisensory integration during a sound-induced flash illusion (SIFI) task. Although both groups showed comparable TBWs during the multisensory temporal order judgment task, correlation analyses and Bayes factors provided moderate evidence that the reduced SIFI susceptibility was associated with the narrow TBW in the ASD group. These results suggest that the individuals with ASD exhibited atypical multisensory integration and that individual differences in the efficacy of this process might be affected by the temporal processing of multisensory information.
6. ManyPrimates, Aguenounon GS, Ballesta S, Beaud A, Bustamante L, Canteloup C, Joly M, Loyant L, Meunier H, Roig A, Troisi CA, Zablocki-Thomas P. ManyPrimates : une infrastructure de collaboration internationale dans la recherche en cognition des primates [ManyPrimates: an international collaborative infrastructure for primate cognition research]. Revue de Primatologie 2020. [DOI: 10.4000/primatologie.8808]
7. Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. [DOI: 10.1111/ejn.13813]
Affiliation(s)
- Hans Colonius
- Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
8. Carducci P, Squillace V, Manzi G, Truppa V. Touch improves visual discrimination of object features in capuchin monkeys (Sapajus spp.). Behav Processes 2020; 172:104044. [PMID: 31954810] [DOI: 10.1016/j.beproc.2020.104044]
Abstract
Primates perceive many object features through vision and touch. To date, little is known about how the synergy of these two sensory modalities contributes to enhance object recognition. Here, we investigated in capuchin monkeys (N = 12) whether manipulating objects and retaining tactile information enhanced visual recognition of geometrical object properties on different scales. Capuchins were trained to visually select the rewarded one of two objects differing in size, shape (larger-scale) or surface structure (smaller-scale). Objects were explored in two experimental conditions: the Sight condition prevented capuchins from touching the chosen object; the Sight and Touch condition allowed them to touch the selected object. Our results indicated that tactile information increased the capuchins' learning speed for visual discrimination of object features. Moreover, the capuchins' learning speed was higher in both size and shape discrimination compared to surface discrimination regardless of the availability of tactile input. Overall, our data demonstrated that the acquisition of tactile information about object features was advantageous for the capuchins and allowed them to achieve high levels of visual accuracy faster. This suggests that information from touch potentiated object recognition in the visual modality.
Affiliation(s)
- Paola Carducci
- Institute of Cognitive Sciences and Technologies, National Research Council (CNR), Via Ulisse Aldrovandi 16/B, 00197, Rome, Italy; Sapienza University of Rome, Department of Environmental Biology, Piazzale Aldo Moro 5, 00185, Rome, Italy
- Valerio Squillace
- Institute of Cognitive Sciences and Technologies, National Research Council (CNR), Via Ulisse Aldrovandi 16/B, 00197, Rome, Italy
- Giorgio Manzi
- Sapienza University of Rome, Department of Environmental Biology, Piazzale Aldo Moro 5, 00185, Rome, Italy
- Valentina Truppa
- Institute of Cognitive Sciences and Technologies, National Research Council (CNR), Via Ulisse Aldrovandi 16/B, 00197, Rome, Italy
9. Association of Subclinical Neck Pain With Altered Multisensory Integration at Baseline and 4-Week Follow-up Relative to Asymptomatic Controls. J Manipulative Physiol Ther 2019; 41:81-91. [PMID: 29482829] [DOI: 10.1016/j.jmpt.2017.09.003]
Abstract
OBJECTIVE: The purpose of this study was to test whether people with subclinical neck pain (SCNP) had altered visual, auditory, and multisensory response times, and whether these findings were consistent over time.
METHODS: Twenty-five volunteers (12 SCNP and 13 asymptomatic controls) were recruited from a Canadian university student population. A 2-alternative forced-choice discrimination task with multisensory redundancy was used to measure response times to the presentation of visual (color filled circles), auditory (verbalization of the color words, eg, red or blue), and multisensory (simultaneous audiovisual) stimuli at baseline and 4 weeks later.
RESULTS: The SCNP group was slower at both visual and multisensory tasks (P = .046, P = .020, respectively), with no change over 4 weeks. Auditory response times improved slightly but significantly after 4 weeks (P = .050) with no group difference.
CONCLUSIONS: This is the first study to report that people with SCNP have slower visual and multisensory response times than asymptomatic individuals. These differences persist over 4 weeks, suggesting that the multisensory technique is reliable and that these differences in the SCNP group do not improve on their own in the absence of treatment.
10. Hanada GM, Ahveninen J, Calabro FJ, Yengo-Kahn A, Vaina LM. Cross-Modal Cue Effects in Motion Processing. Multisens Res 2018; 32:45-65. [PMID: 30613468] [PMCID: PMC6317375] [DOI: 10.1163/22134808-20181313]
Abstract
The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigate how feature-based attention affects the detection of motion across sensory modalities. We were interested in determining how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which have been matched for the detection threshold of the visual target. These effects were very robust in comparisons of the effects of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.
Affiliation(s)
- G. M. Hanada
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- J. Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- F. J. Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- A. Yengo-Kahn
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- L. M. Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Neurology, Massachusetts General Hospital and Brigham and Women’s Hospital, MA, USA
11. Bailey HD, Mullaney AB, Gibney KD, Kwakye LD. Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality. Multisens Res 2018; 31:689-713. [PMID: 31264608] [DOI: 10.1163/22134808-20181301]
Abstract
We are continually bombarded by information arriving to each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration as measured by the redundant signals effect (RSE) is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets which varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications in the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA