1. Yang X, Yang W, Li R, Lin J, Yang J, Ren Y. Audiovisual integration facilitates age-related perceptual decision making. J Gerontol B Psychol Sci Soc Sci 2025;80:gbaf037. PMID: 40036881. DOI: 10.1093/geronb/gbaf037.
Abstract
OBJECTIVES Aging populations commonly experience a decline in sensory functions, which negatively affects perceptual decision making. This decline has been shown to be partially compensated by audiovisual integration. Although audiovisual integration may have a positive effect on perception, it remains unclear whether the perceptual improvements observed in older adults during perceptual decision making are better explained by the early or the late integration hypothesis. METHODS An audiovisual categorization task was used to explore responses to unisensory and audiovisual stimuli in young and older adults. A behavioral drift-diffusion model (DDM) and electroencephalography (EEG) were applied to characterize differences in cognitive and neural dynamics across groups. RESULTS The DDM showed that older adults exhibited higher drift rates and shorter nondecision times for audiovisual stimuli than for visual or auditory stimuli alone. The EEG results showed that during the early sensory encoding stage (150-300 ms), older adults exhibited greater audiovisual integration in the beta band than younger adults. In the late decision-formation stage (500-700 ms), older adults exhibited greater audiovisual integration in the beta band and over the anterior frontal electrodes than younger adults. DISCUSSION These findings highlight the crucial role of audiovisual integration in both the early and late stages of perceptual decision making in older adults. The results suggest that enhanced audiovisual integration in older adults compared with younger adults may serve as a specific mechanism to mitigate the negative effects of aging on perceptual decision making.
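The DDM quantities reported here (drift rate and nondecision time) can be made concrete with a minimal simulation; the parameter values below are illustrative assumptions, not the paper's fitted estimates. A higher drift rate and a shorter nondecision time, as reported for audiovisual stimuli, yield faster and more accurate simulated decisions:

```python
import numpy as np

def simulate_ddm(drift, ndt, threshold=1.0, noise=1.0, dt=0.002, n_trials=400, seed=0):
    """Simulate a basic drift-diffusion model; return (mean RT in s, accuracy).

    Evidence starts at 0 and accumulates at mean rate `drift` until it crosses
    +threshold (correct) or -threshold (error); `ndt` is added as nondecision time.
    """
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)
        correct.append(x >= threshold)
    return float(np.mean(rts)), float(np.mean(correct))

# Audiovisual vs. visual-only condition: higher drift rate and shorter
# nondecision time for AV (parameter values are illustrative, not fitted).
rt_av, acc_av = simulate_ddm(drift=2.0, ndt=0.30)
rt_v, acc_v = simulate_ddm(drift=1.2, ndt=0.35)
print(rt_av < rt_v and acc_av >= acc_v)
```

In fitted models such as the one used in the study, these parameters are estimated from empirical RT and accuracy distributions rather than chosen by hand.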
Affiliation(s)
- Xiangfu Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Weiping Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Ruizhi Li: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jinfei Lin: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jiajia Yang: Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Yanna Ren: Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
2. Brožová N, Vollmer L, Kampa B, Kayser C, Fels J. Cross-modal congruency modulates evidence accumulation, not decision thresholds. Front Neurosci 2025;19:1513083. PMID: 40052091. PMCID: PMC11882578. DOI: 10.3389/fnins.2025.1513083.
Abstract
Audiovisual cross-modal correspondences (CMCs) refer to the brain's inherent ability to subconsciously connect auditory and visual information. These correspondences reveal essential aspects of multisensory perception and influence behavioral performance, enhancing reaction times and accuracy. However, the impact of different types of CMCs, whether arising from statistical co-occurrences or shaped by semantic associations, on information processing and decision-making remains underexplored. This study utilizes the Implicit Association Test, in which unisensory stimuli are presented sequentially and linked via CMCs within an experimental block by the specific response instructions (either congruent or incongruent). Behavioral data are integrated with EEG measurements through neurally informed drift-diffusion modeling to examine how neural activity across both auditory and visual trials is modulated by CMCs. Our findings reveal distinct neural components that differentiate between congruent and incongruent stimuli regardless of modality, offering new insights into the role of congruency in shaping multisensory perceptual decision-making. Two key neural stages were identified: an Early component enhancing sensory encoding in congruent trials and a Late component affecting evidence accumulation, particularly in incongruent trials. These results suggest that cross-modal congruency primarily influences the processing and accumulation of sensory information rather than altering decision thresholds.
Affiliation(s)
- Natálie Brožová: Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
- Lukas Vollmer: Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
- Björn Kampa: Systems Neurophysiology Department, Institute of Zoology, RWTH Aachen University, Aachen, Germany
- Christoph Kayser: Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Janina Fels: Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
3. Golmohamadian M, Faraji M, Fallah F, Sharifizadeh F, Ebrahimpour R. Flexibility in choosing decision policies in gathering discrete evidence over time. PLoS One 2025;20:e0316320. PMID: 39808606. PMCID: PMC11731777. DOI: 10.1371/journal.pone.0316320.
Abstract
The brain can remarkably adapt its decision-making process to suit dynamic environments and diverse aims and demands. The brain's flexibility can be classified into three categories: flexibility in choosing solutions, decision policies, and actions. We employ two experiments to explore flexibility in decision policy: a visual object categorization task and an auditory object categorization task. Both tasks required participants to accumulate discrete evidence over time, the only difference being the sensory modality of the stimuli. We aim to investigate how the brain demonstrates flexibility in selecting decision policies in different sensory contexts when the solution and action remain the same. Our results indicate that the brain's decision policy for integrating information is independent of the inter-pulse interval across these two tasks. However, the decision policy based on how the brain ranks the first and second pulses of evidence changes flexibly. We show that the sequence of pulses does not affect choice accuracy in the auditory modality; in the visual modality, however, the first pulse had the greater leverage on decisions. Our research underscores the importance of incorporating diverse contexts to improve our understanding of the brain's flexibility in real-world decision-making.
Affiliation(s)
- Masoumeh Golmohamadian: School of Cognitive Sciences (SCS), Institute for Research in Fundamental Science (IPM), Tehran, Iran
- Mehrbod Faraji: School of Cognitive Sciences (SCS), Institute for Research in Fundamental Science (IPM), Tehran, Iran; Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
- Fatemeh Fallah: School of Cognitive Sciences (SCS), Institute for Research in Fundamental Science (IPM), Tehran, Iran
- Fatemeh Sharifizadeh: School of Cognitive Sciences (SCS), Institute for Research in Fundamental Science (IPM), Tehran, Iran
- Reza Ebrahimpour: Center for Cognitive Science, Institute for Convergence Science and Technology (ICST), Sharif University of Technology, Tehran, Iran
4. Marsicano G, Bertini C, Ronconi L. Decoding cognition in neurodevelopmental, psychiatric and neurological conditions with multivariate pattern analysis of EEG data. Neurosci Biobehav Rev 2024;164:105795. PMID: 38977116. DOI: 10.1016/j.neubiorev.2024.105795.
Abstract
Multivariate pattern analysis (MVPA) of electroencephalographic (EEG) data represents a revolutionary approach to investigate how the brain encodes information. By considering complex interactions among spatio-temporal features at the individual level, MVPA overcomes the limitations of univariate techniques, which often fail to account for the significant inter- and intra-individual neural variability. This is particularly relevant when studying clinical populations, and therefore MVPA of EEG data has recently started to be employed as a tool to study cognition in brain disorders. Here, we review the insights offered by this methodology in the study of anomalous patterns of neural activity in conditions such as autism, ADHD, schizophrenia, dyslexia, and neurological and neurodegenerative disorders, within different cognitive domains (perception, attention, memory, consciousness). Despite potential drawbacks that should be attentively addressed, these studies reveal a distinctive sensitivity of MVPA in unveiling dysfunctional and compensatory neurocognitive dynamics of information processing, which often remain blind to traditional univariate approaches. Such higher sensitivity in characterizing individual neurocognitive profiles can provide unique opportunities to optimise assessment and promote personalised interventions.
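The core idea of MVPA, a classifier reading out a distributed pattern that single-feature tests would miss, can be sketched with synthetic data and a minimal cross-validated decoder. A nearest-centroid rule is used here for brevity; published EEG pipelines typically apply regularized linear classifiers to channel-by-time feature matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "EEG" epochs: trials x (channels * timepoints) feature vectors.
# Condition B carries a weak but distributed multivariate pattern that a
# univariate test on any single feature would likely miss.
n_trials, n_feat = 200, 64 * 10
pattern = 0.15 * rng.standard_normal(n_feat)
XA = rng.standard_normal((n_trials, n_feat))            # condition A: noise only
XB = rng.standard_normal((n_trials, n_feat)) + pattern  # condition B: noise + pattern
X = np.vstack([XA, XB])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

def decode_cv(X, y, n_folds=5, seed=0):
    """Cross-validated nearest-centroid decoding accuracy (a minimal MVPA)."""
    idx = np.random.default_rng(seed).permutation(len(y))
    accs = []
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[fold] - c1, axis=1)
                < np.linalg.norm(X[fold] - c0, axis=1)).astype(float)
        accs.append(np.mean(pred == y[fold]))
    return float(np.mean(accs))

print(round(decode_cv(X, y), 2))  # well above the 0.5 chance level
```

Cross-validation is what keeps the reported accuracy honest: the centroids are always estimated on trials other than those being classified.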
Affiliation(s)
- Gianluca Marsicano: Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40121, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena 47023, Italy
- Caterina Bertini: Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40121, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena 47023, Italy
- Luca Ronconi: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
5. Bolam J, Diaz JA, Andrews M, Coats RO, Philiastides MG, Astill SL, Delis I. A drift diffusion model analysis of age-related impact on multisensory decision-making processes. Sci Rep 2024;14:14895. PMID: 38942761. PMCID: PMC11213863. DOI: 10.1038/s41598-024-65549-5.
Abstract
Older adults (OAs) are typically slower and/or less accurate in forming perceptual choices relative to younger adults. Despite perceptual deficits, OAs gain from integrating information across senses, yielding multisensory benefits. However, the cognitive processes underlying these seemingly discrepant ageing effects remain unclear. To address this knowledge gap, 212 participants (18-90 years old) performed an online object categorisation paradigm, whereby age-related differences in Reaction Times (RTs) and choice accuracy between audiovisual (AV), visual (V), and auditory (A) conditions could be assessed. Whereas OAs were slower and less accurate across sensory conditions, they exhibited greater RT decreases between AV and V conditions, showing a larger multisensory benefit towards decisional speed. Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants' behaviour to probe age-related impacts on the latent multisensory decision formation processes. For OAs, HDDM demonstrated slower evidence accumulation rates across sensory conditions coupled with increased response caution for AV trials of higher difficulty. Notably, for trials of lower difficulty we found multisensory benefits in evidence accumulation that increased with age, but not for trials of higher difficulty, in which increased response caution was instead evident. Together, our findings reconcile age-related impacts on multisensory decision-making, indicating greater multisensory evidence accumulation benefits with age underlying enhanced decisional speed.
Affiliation(s)
- Joshua Bolam: School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK; Institute of Neuroscience, Trinity College Dublin, Dublin, D02 PX31, Ireland
- Jessica A Diaz: School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK; School of Social Sciences, Birmingham City University, West Midlands, B15 3HE, UK
- Mark Andrews: School of Social Sciences, Nottingham Trent University, Nottinghamshire, NG1 4FQ, UK
- Rachel O Coats: School of Psychology, University of Leeds, West Yorkshire, LS2 9JT, UK
- Marios G Philiastides: School of Neuroscience and Psychology, University of Glasgow, Lanarkshire, G12 8QB, UK
- Sarah L Astill: School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
- Ioannis Delis: School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
6. Li J, Hua L, Deng SW. Modality-specific impacts of distractors on visual and auditory categorical decision-making: an evidence accumulation perspective. Front Psychol 2024;15:1380196. PMID: 38765839. PMCID: PMC11099231. DOI: 10.3389/fpsyg.2024.1380196.
Abstract
Our brain constantly processes multisensory inputs to make decisions and guide behaviors, but how goal-relevant processing is influenced by irrelevant information is unclear. Here, we investigated the effects of intermodal and intramodal task-irrelevant information on visual and auditory categorical decision-making. In both visual and auditory tasks, we manipulated the modality of irrelevant inputs (visual vs. auditory vs. none) and used linear discrimination analysis of EEG and hierarchical drift-diffusion modeling (HDDM) to identify when and how task-irrelevant information affected decision-relevant processing. The results revealed modality-specific impacts of irrelevant inputs on visual and auditory categorical decision-making. In the visual task, the distinct effects appeared in the neural components: auditory distractors amplified sensory processing, whereas visual distractors amplified post-sensory processing. In the auditory task, by contrast, the distinct effects appeared in behavioral performance and the underlying cognitive processes: visual distractors facilitated behavioral performance and affected both stages, whereas auditory distractors interfered with behavioral performance and impacted sensory processing rather than the post-sensory decision stage. Overall, these findings suggest that auditory distractors affect the sensory processing stage of both tasks, while visual distractors affect the post-sensory decision stage of visual categorical decision-making and both stages of auditory categorical decision-making. This study provides insights into how humans process information from multiple sensory modalities during decision-making by leveraging modality-specific impacts.
Affiliation(s)
- Jianhua Li: Department of Psychology, University of Macau, Macau, China; Center for Cognitive and Brain Sciences, University of Macau, Macau, China
- Lin Hua: Center for Cognitive and Brain Sciences, University of Macau, Macau, China; Faculty of Health Sciences, University of Macau, Macau, China
- Sophia W. Deng: Department of Psychology, University of Macau, Macau, China; Center for Cognitive and Brain Sciences, University of Macau, Macau, China
7. Bao X, Lomber SG. Visual modulation of auditory evoked potentials in the cat. Sci Rep 2024;14:7177. PMID: 38531940. DOI: 10.1038/s41598-024-57075-1.
Abstract
Visual modulation of the auditory system is not only a neural substrate for multisensory processing but also serves as a backup input underlying cross-modal plasticity in deaf individuals. Event-related potential (ERP) studies in humans have provided evidence of multiple-stage audiovisual interactions, ranging from tens to hundreds of milliseconds after stimulus presentation. However, it is still unknown whether the temporal course of visual modulation of auditory ERPs can be characterized in animal models. EEG signals were recorded in sedated cats from subdermal needle electrodes. The auditory stimuli (clicks) and visual stimuli (flashes) were timed by two independent Poisson processes and were presented either simultaneously or alone. The visual-only ERPs were subtracted from the audiovisual ERPs before being compared to the auditory-only ERPs. N1 amplitude showed a trend of transitioning from suppression to facilitation, with a disruption at a flash-to-click delay of ~100 ms. We conclude that visual modulation as a function of SOA over an extended range is more complex than previously characterized with short SOAs, and that its periodic pattern can be interpreted with the "phase resetting" hypothesis.
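The stimulus schedule described above, clicks and flashes each timed by an independent Poisson process, can be sketched by drawing exponential inter-event intervals. The 0.5 Hz rate and 600 s duration below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def poisson_event_times(rate_hz, duration_s, rng):
    """Event times of a homogeneous Poisson process (exponential inter-event intervals)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)
        if t > duration_s:
            return np.array(times)
        times.append(t)

rng = np.random.default_rng(0)
clicks = poisson_event_times(rate_hz=0.5, duration_s=600.0, rng=rng)   # auditory
flashes = poisson_event_times(rate_hz=0.5, duration_s=600.0, rng=rng)  # visual

# Flash-to-click delay for each click relative to the most recent flash;
# by memorylessness of the Poisson process its mean is ~1/rate = 2 s.
soas = [c - flashes[flashes < c].max() for c in clicks if (flashes < c).any()]
print(len(clicks), round(sum(soas) / len(soas), 2))
```

Because the two processes are independent, such a design samples a broad, unbiased range of flash-to-click SOAs, which is what allows the SOA-dependent modulation to be traced over an extended range.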
Affiliation(s)
- Xiaohan Bao: Integrated Program in Neuroscience, McGill University, Montreal, QC, H3G 1Y6, Canada
- Stephen G Lomber: Department of Physiology, McGill University, McIntyre Medical Sciences Building, Rm 1223, 3655 Promenade Sir William Osler, Montreal, QC, H3G 1Y6, Canada
8. Horr NK, Mousavi B, Han K, Li A, Tang R. Human behavior in free search online shopping scenarios can be predicted from EEG activation using Hjorth parameters. Front Neurosci 2023;17:1191213. PMID: 38027474. PMCID: PMC10667477. DOI: 10.3389/fnins.2023.1191213.
Abstract
The present work investigates whether and how decisions in real-world online shopping scenarios can be predicted based on brain activation. Potential customers were asked to search through product pages on e-commerce platforms and decide which products to buy, while their EEG signal was recorded. Machine learning algorithms were then trained to distinguish between EEG activation when viewing products that are later bought or put into the shopping cart as opposed to products that are later discarded. We find that Hjorth parameters extracted from the raw EEG can be used to predict purchase choices to a high level of accuracy. Above-chance predictions based on Hjorth parameters are achieved via different standard machine learning methods, with random forest models showing the best performance of above 80% prediction accuracy in both 2-class (bought or put into cart vs. not bought) and 3-class (bought vs. put into cart vs. not bought) classification. While conventional EEG signal analysis commonly employs frequency-domain features such as alpha or theta power and phase, Hjorth parameters use time-domain signals, which can be calculated rapidly with little computational cost. Given the presented evidence that Hjorth parameters are suitable for the prediction of complex behaviors, their potential and remaining challenges for implementation in real-time applications are discussed.
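The Hjorth parameters referred to here are standard time-domain descriptors (activity, mobility, complexity) computed from a signal and its successive differences; a minimal implementation on a toy signal:

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth time-domain descriptors of a signal.

    activity   = var(x)                       (signal power)
    mobility   = sqrt(var(dx) / var(x))       (proxy for mean frequency)
    complexity = mobility(dx) / mobility(x)   (proxy for bandwidth)
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Sanity check: a pure 10 Hz sine has complexity ~1; adding broadband noise
# (a less "predictable" waveform) increases complexity.
fs = 250
t = np.arange(0, 2, 1 / fs)
sine = np.sin(2 * np.pi * 10 * t)
noisy = sine + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(hjorth_parameters(sine)[2] < hjorth_parameters(noisy)[2])  # True
```

Because all three descriptors come from running variances of the signal and its differences, they can be updated sample-by-sample, which is what makes them attractive for the real-time applications discussed in the abstract.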
9. Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023;378:20220342. PMID: 37545304. PMCID: PMC10404931. DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell, E. McKenna, M. A. Seveso, I. Devine, F. Alahmad, R. J. Hirst, A. O'Dowd: School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
10. Thomas E, Ali FB, Tolambiya A, Chambellant F, Gaveau J. Too much information is no information: how machine learning and feature selection could help in understanding the motor control of pointing. Front Big Data 2023;6:921355. PMID: 37546547. PMCID: PMC10399757. DOI: 10.3389/fdata.2023.921355.
Abstract
The aim of this study was to develop the use of Machine Learning techniques as a means of multivariate analysis in studies of motor control. These studies generate a huge amount of data, the analysis of which continues to be largely univariate. We propose the use of machine learning classification and feature selection as a means of uncovering feature combinations that are altered between conditions. High-dimensional electromyogram (EMG) vectors were generated as several arm and trunk muscles were recorded while subjects pointed at various angles above and below the gravity-neutral horizontal plane. We used Linear Discriminant Analysis (LDA) to carry out binary classifications between the EMG vectors for pointing at a particular angle vs. pointing at the gravity-neutral direction. Classification success provided a composite index of muscular adjustments for various task constraints, in this case pointing angles. To find the combination of features that were significantly altered between task conditions, we conducted a post-classification feature selection, i.e., we investigated which combination of features had allowed for the classification. Feature selection was done by comparing the representations of each category created by LDA for the classification, in other words, by computing the difference between the representations of each class. We propose that this approach will help with comparing high-dimensional EMG patterns in two ways: (i) quantifying the effects of the entire pattern rather than using single, arbitrarily defined variables, and (ii) identifying the parts of the patterns that convey the most information regarding the investigated effects.
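A minimal two-class Fisher LDA on synthetic data illustrates the approach: the discriminant weights double as a feature-selection readout, since the features with the largest weights are the ones separating the conditions. The muscle count, effect size, and choice of informative features below are invented for illustration and are not the study's data or exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "EMG" vectors for two pointing conditions: 8 muscles, of which
# only muscles 0 and 3 change their activation between conditions.
n, n_feat = 120, 8
shift = np.zeros(n_feat)
shift[[0, 3]] = 1.5
A = rng.standard_normal((n, n_feat))          # e.g. gravity-neutral pointing
B = rng.standard_normal((n, n_feat)) + shift  # e.g. pointing at another angle

def lda_direction(A, B):
    """Fisher LDA discriminant direction w = Sw^{-1} (mu_B - mu_A)."""
    Sw = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)  # pooled scatter
    return np.linalg.solve(Sw, B.mean(axis=0) - A.mean(axis=0))

w = lda_direction(A, B)
# Post-classification feature selection: the largest |w| entries mark the
# features driving the discrimination between conditions.
top2 = set(int(i) for i in np.argsort(np.abs(w))[-2:])
print(top2)  # recovers the two informative muscles, {0, 3}
```

Projecting trials onto `w` gives the classification; inspecting `w` itself gives the interpretation, which is the two-way payoff the abstract describes.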
Affiliation(s)
- Elizabeth Thomas: INSERM U1093, UFR STAPS, Université de Bourgogne Franche-Comté, Dijon, France
- Ferid Ben Ali: School of Engineering and Computer Science, University of Hertfordshire, Hatfield, United Kingdom
- Arvind Tolambiya: Applied Intelligence Hub, Accenture Solutions Private Ltd., Hyderabad, Telangana, India
- Florian Chambellant: INSERM U1093, UFR STAPS, Université de Bourgogne Franche-Comté, Dijon, France
- Jérémie Gaveau: INSERM U1093, UFR STAPS, Université de Bourgogne Franche-Comté, Dijon, France
11. Sciortino P, Kayser C. Steady state visual evoked potentials reveal a signature of the pitch-size crossmodal association in visual cortex. Neuroimage 2023;273:120093. PMID: 37028733. DOI: 10.1016/j.neuroimage.2023.120093.
Abstract
Crossmodal correspondences describe our tendency to associate sensory features from different modalities with each other, such as the pitch of a sound with the size of a visual object. While such crossmodal correspondences (or associations) are described in many behavioural studies, their neurophysiological correlates remain unclear. Under the current working model of multisensory perception, both a low- and a high-level account seem plausible. That is, the neurophysiological processes shaping these associations could commence in low-level sensory regions, or may predominantly emerge in high-level association regions of semantic and object identification networks. We exploited steady-state visual evoked potentials (SSVEPs) to directly probe this question, focusing on the associations between pitch and the visual features of size, hue or chromatic saturation. We found that SSVEPs over occipital regions are sensitive to the congruency between pitch and size, and a source analysis pointed to an origin around primary visual cortices. We speculate that this signature of the pitch-size association in low-level visual cortices reflects the successful pairing of congruent visual and acoustic object properties and may contribute to establishing causal relations between multisensory objects. Besides this, our study also provides a paradigm that can be exploited to study other crossmodal associations involving visual stimuli in the future.
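The frequency-tagging logic behind SSVEPs can be sketched in a few lines: a response locked to the stimulation frequency appears as a narrow peak in the amplitude spectrum at exactly that frequency. The 7.5 Hz tag and signal-to-noise level below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

# Synthetic single-channel "EEG": a steady-state response at the tagging
# frequency embedded in broadband noise.
fs, f_tag, dur = 250, 7.5, 8.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)
eeg = 0.8 * np.sin(2 * np.pi * f_tag * t) + rng.standard_normal(t.size)

# Amplitude spectrum; with an 8 s window the resolution is 0.125 Hz, so the
# 7.5 Hz tag falls exactly on a frequency bin.
spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[1:][np.argmax(spec[1:])]  # skip the DC bin
print(peak_hz)  # 7.5
```

In a congruency design like the one above, the quantity of interest is how the amplitude at the tagged frequency changes between congruent and incongruent audiovisual pairings, not the peak location itself.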
12. Azizi Z, Ebrahimpour R. Explaining Integration of Evidence Separated by Temporal Gaps with Frontoparietal Circuit Models. Neuroscience 2023;509:74-95. PMID: 36457229. DOI: 10.1016/j.neuroscience.2022.10.019.
Abstract
Perceptual decisions rely on accumulating sensory evidence over time. However, the accumulation process is complicated in real life, when evidence results from cues separated in time. Previous studies demonstrate that participants are able to integrate information from two separated cues to improve their performance, invariant to the interval between the cues. However, there is no neural model that can account for accuracy and confidence in decisions when there is a time interval between pieces of evidence. We used behavioral and EEG datasets from a visual choice task (random-dot motion) with separated evidence to investigate three candidate distributed neural networks. We showed that decisions based on evidence accumulated from cues separated in time are best explained by the interplay of recurrent cortical dynamics of centro-parietal and frontal brain areas, with an uncertainty-monitoring module included in the model.
Affiliation(s)
- Zahra Azizi: Department of Cognitive Modeling, Institute for Cognitive Science Studies, Tehran, Iran
- Reza Ebrahimpour: Institute for Convergence Science and Technology (ICST), Sharif University of Technology, Tehran, P.O. Box 11155-8639, Iran; Faculty of Computer Engineering, Shahid Rajaee Teacher Training University, Postal Box 16785-163, Tehran, Iran; School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Niavaran, Postal Box 19395-5746, Tehran, Iran
13. Magnetoencephalography recordings reveal the neural mechanisms of auditory contributions to improved visual detection. Commun Biol 2023;6:12. PMID: 36604455. PMCID: PMC9816120. DOI: 10.1038/s42003-022-04335-3.
Abstract
Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magnetoencephalographic recordings of participants who performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns that resembled the patterns evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in terms of their automaticity: whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, we found that sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively correspond to decision-level biases.
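The Signal Detection Theory parameters decoded in this study can be computed from response counts as follows; the counts below are invented to illustrate the dissociation between a sensitivity gain and a criterion shift, and are not the study's data:

```python
from statistics import NormalDist

def sdt_params(hits, misses, fas, crs):
    """Signal Detection Theory sensitivity (d') and criterion (c) from counts.

    A log-linear correction (+0.5 per cell) keeps the z-scores finite when a
    hit or false-alarm rate would otherwise be exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative counts: sound raises the hit rate at a matched false-alarm
# rate, i.e. a sensitivity (d') gain rather than a pure criterion shift.
d_av, c_av = sdt_params(hits=80, misses=20, fas=15, crs=85)
d_v, c_v = sdt_params(hits=65, misses=35, fas=15, crs=85)
print(d_av > d_v)  # True
```

Separating d' from c is precisely what lets the study argue that sound-induced false alarms are not only decision-level biases: a bias moves c, whereas a genuine perceptual benefit moves d'.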
14
Yang W, Yang X, Guo A, Li S, Li Z, Lin J, Ren Y, Yang J, Wu J, Zhang Z. Audiovisual integration of the dynamic hand-held tool at different stimulus intensities in aging. Front Hum Neurosci 2022; 16:968987. [PMID: 36590067; PMCID: PMC9794578; DOI: 10.3389/fnhum.2022.968987]
Abstract
Introduction: Compared with audiovisual integration in younger adults, the same process appears more complex and unstable in older adults. Previous research has found that stimulus intensity is one of the most important factors influencing audiovisual integration. Methods: The present study compared audiovisual integration between older and younger adults using dynamic hand-held tool stimuli, such as a hammer striking the floor, and compared the effects of stimulus intensity on integration. The intensity of the visual and auditory stimuli was regulated by modulating the contrast level and sound pressure level. Results: Behaviorally, both older and younger adults responded faster and with higher hit rates to audiovisual stimuli than to visual or auditory stimuli alone. Event-related potentials (ERPs) further revealed that during the early 60-100 ms stage, audiovisual integration in the anterior brain region was greater in older adults than in younger adults under the low-intensity condition, whereas under the high-intensity condition, integration in the right hemisphere was greater in younger adults than in older adults. Moreover, in older adults, audiovisual integration was greater in the low-intensity condition than in the high-intensity condition during the 60-100 ms, 120-160 ms, and 220-260 ms periods, showing inverse effectiveness; younger adults showed no difference in audiovisual integration across intensity conditions. Discussion: The results suggest an age-related dissociation between high- and low-intensity conditions in audiovisual integration of a dynamic hand-held tool stimulus. Older adults showed greater audiovisual integration in the lower-intensity condition, which may reflect the activation of compensatory mechanisms.
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ao Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jinfei Lin
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- *Correspondence: Yanna Ren, Zhilin Zhang
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
15
Schulze M, Aslan B, Jung P, Lux S, Philipsen A. Robust perceptual-load-dependent audiovisual integration in adult ADHD. Eur Arch Psychiatry Clin Neurosci 2022; 272:1443-1451. [PMID: 35380238; PMCID: PMC9653355; DOI: 10.1007/s00406-022-01401-z]
Abstract
We perceive our daily surroundings through multiple senses (e.g., vision and audition). For a coherent percept, the brain binds these multiple streams of sensory stimulation, a process called multisensory integration (MI). Depending on stimulus complexity, MI is triggered early in a bottom-up fashion or late via top-down attentional deployment. Adult attention-deficit/hyperactivity disorder (ADHD) has been associated with successful bottom-up MI but deficient top-down MI. In the current study, we investigated the robustness of bottom-up MI by adding task demand, varying the perceptual load, and hypothesized diminished bottom-up MI under high perceptual load for patients with ADHD. Eighteen adult patients with ADHD and 18 age- and gender-matched healthy controls participated. In a visual search paradigm, a target letter was surrounded by uniform distractors (low load) or by different letters (high load). Additionally, unimodal (visual flash, auditory beep) or multimodal (audiovisual) stimuli flanked the visual search. Linear mixed modeling was used to investigate the influence of load on reaction times, and the race model inequality was calculated. Patients with ADHD showed a degree of MI performance similar to that of healthy controls, irrespective of the perceptual load manipulation. ADHD patients violated the race model for the low-load but not the high-load condition. Bottom-up MI thus appears robust and independent of perceptual load in ADHD patients, although sensory accumulation might be altered when attentional demands are high.
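The race model inequality referred to here is Miller's bound, F_AV(t) ≤ F_A(t) + F_V(t): audiovisual responses faster than the bound allows argue against a simple race between unisensory channels. A minimal sketch with hypothetical reaction times:

```python
# Sketch of Miller's race model inequality test: compare the empirical
# audiovisual RT distribution F_AV(t) against the bound F_A(t) + F_V(t).
# A violation (F_AV above the bound) argues against a simple race account.
import numpy as np

def ecdf(rts, t_grid):
    """Empirical CDF of the reaction times, evaluated on t_grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def violates_race_model(rt_a, rt_v, rt_av, t_grid):
    """True if F_AV(t) > min(F_A(t) + F_V(t), 1) anywhere on t_grid."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return bool(np.any(ecdf(rt_av, t_grid) > bound))

t = np.linspace(200, 600, 81)
# Hypothetical RTs (ms): audiovisual responses faster than either unisensory set.
fast_av = violates_race_model([420, 450, 480], [430, 460, 490], [300, 310, 320], t)
```

In practice the inequality is tested on group-level quantiles rather than raw trials, but the logic is the same.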
Affiliation(s)
- Marcel Schulze
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Behrem Aslan
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Paul Jung
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Silke Lux
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Alexandra Philipsen
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
16
Chua SFA, Liu Y, Harris JM, Otto TU. No selective integration required: A race model explains responses to audiovisual motion-in-depth. Cognition 2022; 227:105204. [PMID: 35753178; DOI: 10.1016/j.cognition.2022.105204]
Abstract
Looming motion is an ecologically salient signal that often signifies danger. In both audition and vision, humans show behavioral biases in response to perceived looming motion, which is suggested to reflect an adaptation for survival. However, it remains an open question whether such biases also occur in the combined processing of multisensory signals. Addressing this question, Cappe, Thut, Romei, and Murray (2009) found that responses to audiovisual signals were faster for congruent looming motion than for receding motion or incongruent combinations, which they took as evidence for selective integration of multisensory looming signals. To test this proposal, we first successfully replicated the behavioral results of Cappe et al. (2009). We then show that the redundant signals effect (RSE; a speedup of multisensory compared with unisensory responses) is not distinct for congruent looming motion. Instead, as predicted by a simple probability summation rule, the RSE is primarily modulated by the looming bias in audition, which suggests that multisensory processing inherits a unisensory effect. Finally, we compared a large set of so-called race models that implement probability summation but allow for interference between auditory and visual processing. The best-fitting model, selected by the Akaike Information Criterion (AIC), explained the RSE across conditions virtually perfectly, with interference parameters that were either constant or varied only with auditory motion. In the absence of effects jointly caused by auditory and visual motion, we conclude that selective integration is not required to explain the behavioral benefits that occur with audiovisual looming motion.
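The AIC-based model comparison described here trades goodness of fit against parameter count (AIC = 2k − 2 ln L, lower is better). A minimal sketch with hypothetical model names and log-likelihoods (not the paper's actual fits):

```python
# Sketch of AIC model selection over a family of race models with different
# interference parameterizations. Log-likelihoods and names are hypothetical.
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike Information Criterion: 2k - 2*lnL; lower is better."""
    return 2 * n_params - 2 * log_likelihood

candidate_models = {
    "no_interference":       aic(-1530.0, 2),  # plain probability summation
    "constant_interference": aic(-1521.0, 3),
    "auditory_varying":      aic(-1512.0, 5),  # interference tracks auditory motion
    "fully_free":            aic(-1511.0, 9),  # extra parameters barely improve fit
}
best_model = min(candidate_models, key=candidate_models.get)
```

The illustration shows the typical pattern: a model with many free parameters can fit slightly better yet lose on AIC to a more constrained one.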
Affiliation(s)
- S F Andrew Chua
- School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom
- Yue Liu
- School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom
- Julie M Harris
- School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom
- Thomas U Otto
- School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom
17
Ferrando E, Dahl CD. An investigation on the olfactory capabilities of domestic dogs (Canis lupus familiaris). Anim Cogn 2022; 25:1567-1577. [PMID: 35689114; DOI: 10.1007/s10071-022-01640-6]
Abstract
The extraordinary olfactory capabilities of detection and rescue dogs are well known. However, olfactory performance varies by breed and search environment (Jezierski et al., Forensic Sci Int 237:112-118, 2014), as well as by the quantity of training (Horowitz et al., Learn Motivation 44(4):207-217, 2013). Whereas detection of an olfactory cue inherently demands a judgment about the presence or absence of a cue at a given location, olfactory discrimination requires an assessment of quantity, a task demanding more attention and hence decreasing reliability as an informational source (Horowitz et al. 2013). This study aims to clarify the detection and discrimination of olfactory cues in untrained dogs across a variety of breeds. Using a two-alternative forced choice (2AFC) paradigm, we assessed olfactory detection scores by presenting a varied quantity of food reward under one of two hidden cups, and discrimination scores by presenting two different quantities of food reward under both hidden cups. We found relatively reliable detection performance across all breeds but limited discrimination abilities, modulated by breed. We discuss our findings in relation to the cognitive demands imposed by the tasks and the cephalic index of the dog breeds.
Affiliation(s)
- Elodie Ferrando
- Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Eötvös Loránd University, Pázmány Péter sétány 1/C, Budapest, 1117, Hungary
- Department of Ethology, Doctoral School of Biology, Institute of Biology, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/C, Budapest, 1117, Hungary
- Christoph D Dahl
- Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan
- Brain and Consciousness Research Centre, Taipei Medical University Shuang-Ho Hospital, New Taipei City, Taiwan
18
Michail G, Senkowski D, Holtkamp M, Wächter B, Keil J. Early beta oscillations in multisensory association areas underlie crossmodal performance enhancement. Neuroimage 2022; 257:119307. [PMID: 35577024; DOI: 10.1016/j.neuroimage.2022.119307]
Abstract
The combination of signals from different sensory modalities can enhance perception and facilitate behavioral responses. While previous research described crossmodal influences in a wide range of tasks, it remains unclear how such influences drive performance enhancements. In particular, the neural mechanisms underlying performance-relevant crossmodal influences, as well as the latency and spatial profile of such influences are not well understood. Here, we examined data from high-density electroencephalography (N = 30) recordings to characterize the oscillatory signatures of crossmodal facilitation of response speed, as manifested in the speeding of visual responses by concurrent task-irrelevant auditory information. Using a data-driven analysis approach, we found that individual gains in response speed correlated with larger beta power difference (13-25 Hz) between the audiovisual and the visual condition, starting within 80 ms after stimulus onset in the secondary visual cortex and in multisensory association areas in the parietal cortex. In addition, we examined data from electrocorticography (ECoG) recordings in four epileptic patients in a comparable paradigm. These ECoG data revealed reduced beta power in audiovisual compared with visual trials in the superior temporal gyrus (STG). Collectively, our data suggest that the crossmodal facilitation of response speed is associated with reduced early beta power in multisensory association and secondary visual areas. The reduced early beta power may reflect an auditory-driven feedback signal to improve visual processing through attentional gating. These findings improve our understanding of the neural mechanisms underlying crossmodal response speed facilitation and highlight the critical role of beta oscillations in mediating behaviorally relevant multisensory processing.
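The beta-band (13-25 Hz) power differences reported here can be illustrated with a plain FFT-based band-power estimate on synthetic signals. This is an illustrative sketch, not the study's analysis pipeline (which used data-driven cluster statistics on high-density EEG):

```python
# Sketch: beta-band (13-25 Hz) power comparison between two conditions,
# using synthetic signals. Illustrative only, not the study's pipeline.
import numpy as np

def band_power(signal, fs, f_lo=13.0, f_hi=25.0):
    """Mean FFT power of `signal` in the [f_lo, f_hi] Hz band (beta by default)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    in_band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[in_band].mean()

fs = 500.0                                   # Hz, a typical EEG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.1, size=t.size)
visual_trial = np.sin(2 * np.pi * 20 * t) + noise        # strong 20 Hz (beta)
audiovisual_trial = 0.5 * np.sin(2 * np.pi * 20 * t) + noise  # reduced beta
beta_difference = band_power(visual_trial, fs) - band_power(audiovisual_trial, fs)
```

A positive `beta_difference` mirrors the reported pattern of reduced beta power in audiovisual relative to visual trials.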
Affiliation(s)
- Georgios Michail
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Daniel Senkowski
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Martin Holtkamp
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany
- Department of Neurology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charité Campus Mitte (CCM), Charitéplatz 1, Berlin 10117, Germany
- Bettina Wächter
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany
- Julian Keil
- Biological Psychology, Christian-Albrechts-University Kiel, Kiel 24118, Germany
19
Vastano R, Costantini M, Widerstrom-Noga E. Maladaptive reorganization following SCI: The role of body representation and multisensory integration. Prog Neurobiol 2021; 208:102179. [PMID: 34600947; DOI: 10.1016/j.pneurobio.2021.102179]
Abstract
In this review we focus on maladaptive brain reorganization after spinal cord injury (SCI), including the development of neuropathic pain, and its relationship with impairments in body representation and multisensory integration. We will discuss the implications of altered sensorimotor interactions after SCI with and without neuropathic pain and possible deficits in multisensory integration and body representation. Within this framework we will examine published research findings focused on the use of bodily illusions to manipulate multisensory body representation to induce analgesic effects in heterogeneous chronic pain populations and in SCI-related neuropathic pain. We propose that the development and intensification of neuropathic pain after SCI is partly dependent on brain reorganization associated with dysfunctional multisensory integration processes and distorted body representation. We conclude this review by suggesting future research avenues that may lead to a better understanding of the complex mechanisms underlying the sense of the body after SCI, with a focus on cortical changes.
Affiliation(s)
- Roberta Vastano
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Eva Widerstrom-Noga
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA
20
Strelnikov K, Hervault M, Laurent L, Barone P. When two is worse than one: The deleterious impact of multisensory stimulation on response inhibition. PLoS One 2021; 16:e0251739. [PMID: 34014959; PMCID: PMC8136741; DOI: 10.1371/journal.pone.0251739]
Abstract
Multisensory facilitation is known to improve perceptual performance and reaction times in a wide range of tasks, from detection and discrimination to memorization. We asked whether a multimodal signal can similarly improve action inhibition using the stop-signal paradigm. Indeed, consistent with a crossmodal redundant signal effect that relies on multisensory neuronal integration, the threshold for initiating behavioral responses is reached faster with multisensory stimuli. To evaluate whether this phenomenon also occurs for inhibition, we compared stop signals in unimodal (human faces or voices) versus audiovisual modalities under natural or degraded conditions. In contrast to the expected multisensory facilitation, we observed poorer inhibition efficiency in the audiovisual modality than in the visual and auditory modalities, a result corroborated by both response probabilities and stop-signal reaction times. The visual modality (faces) was the most effective. This is the first demonstration of an audiovisual impairment in the domain of perception and action. It suggests that when individuals are engaged in a high-level decisional conflict, bimodal stimulation is not processed as a single multisensory object that improves performance but is instead perceived as concurrent visual and auditory information. This absence of unity increases task demand and thus impairs the ability to revise the response.
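Stop-signal reaction times of the kind reported are commonly estimated with the integration method: take the go-RT percentile matching the probability of responding on stop trials, then subtract the mean stop-signal delay. A minimal sketch with hypothetical data (the abstract does not state which estimator the authors used):

```python
# Sketch of SSRT estimation via the integration method. Data are hypothetical;
# the abstract does not specify the authors' estimator.
import numpy as np

def ssrt_integration(go_rts, ssds, p_respond_given_stop):
    """SSRT = go-RT at the percentile equal to p(respond | stop),
    minus the mean stop-signal delay (SSD)."""
    go_sorted = np.sort(np.asarray(go_rts, dtype=float))
    idx = int(np.ceil(p_respond_given_stop * go_sorted.size)) - 1
    nth_go_rt = go_sorted[max(idx, 0)]
    return nth_go_rt - float(np.mean(ssds))

# Hypothetical session: 11 go RTs (ms), three SSDs, 50% failed stops.
go_rts = list(range(300, 501, 20))      # 300, 320, ..., 500 ms
ssrt = ssrt_integration(go_rts, [180, 200, 220], 0.5)
```

A longer SSRT indicates slower inhibition, which is how the audiovisual deficit reported above would manifest in this measure.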
Affiliation(s)
- Kuzma Strelnikov
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 - CNRS, Toulouse, France
- Purpan University Hospital, Toulouse, France
- Mario Hervault
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 - CNRS, Toulouse, France
- Lidwine Laurent
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 - CNRS, Toulouse, France
- Pascal Barone
- Brain & Cognition Research Center (CerCo), University of Toulouse 3 - CNRS, Toulouse, France