1
Artigas C, Morales-Torres R, Rojas-Thomas F, Villena-González M, Rubio I, Ramírez-Benavides D, Bekinschtein T, Campos-Arteaga G, Rodríguez E. When alertness fades: Drowsiness-induced visual dominance and oscillatory recalibration in audiovisual integration. Int J Psychophysiol 2025; 212:112562. [PMID: 40187499 DOI: 10.1016/j.ijpsycho.2025.112562]
Abstract
Multisensory integration allows the brain to align inputs from different sensory modalities, enhancing perception and behavior. However, transitioning into drowsiness, a state marked by decreased attentional control and altered cortical dynamics, offers a unique opportunity to examine adaptations in these multisensory processes. In this study, we investigated how drowsiness influences reaction times (RTs) and neural oscillations during audiovisual multisensory integration. Participants performed a task where auditory and visual stimuli were presented either in a coordinated manner or with temporal misalignment (visual-first or auditory-first uncoordinated conditions). Behavioral results showed that drowsiness slowed RTs overall but revealed a clear sensory dominance effect: visual-first uncoordination facilitated RTs compared to auditory-first uncoordination, reflecting vision's dominant role in recalibrating sensory conflicts. In contrast, RTs in coordinated conditions remained stable across alert and drowsy states, suggesting that multisensory redundancy compensates for reduced cortical integration during drowsiness. At the neural level, distinct patterns of oscillatory activity emerged. Alpha oscillations supported attentional realignment and temporal alignment in visual-first conditions, while Gamma oscillations were recruited during auditory-first uncoordination, reflecting heightened sensory-specific processing demands. These effects were state-dependent, becoming more pronounced during drowsiness. Our findings demonstrate that drowsiness fundamentally reshapes multisensory integration by amplifying sensory dominance mechanisms, particularly vision. Compensatory neural mechanisms involving Alpha and Gamma oscillations maintain perceptual coherence under conditions of reduced cortical interaction. These results provide critical insights into how the brain adapts to sensory conflicts during states of diminished awareness, with broader implications for performance and decision-making in real-world drowsy states.
Affiliation(s)
- Claudio Artigas
- Departamento de Ciencias Biológicas, Universidad Autónoma de Chile, Santiago, RM, Chile
- Felipe Rojas-Thomas
- Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez, Santiago, Chile
- Iván Rubio
- Psychology Department, Pontificia Universidad Católica de Chile, Santiago, RM, Chile
- Tristán Bekinschtein
- Consciousness and Cognition Laboratory, Department of Psychology, University of Cambridge, Cambridge, UK
- Eugenio Rodríguez
- Psychology Department, Pontificia Universidad Católica de Chile, Santiago, RM, Chile
2
Schmehl MN, Herche JL, Groh JM. Visually evoked activity and variable modulation of auditory responses in the macaque inferior colliculus. J Neurophysiol 2025; 133:1456-1467. [PMID: 40111400 DOI: 10.1152/jn.00529.2024]
Abstract
How multisensory cues affect processing in early sensory brain areas is not well understood. The inferior colliculus (IC) is an early auditory structure that is visually responsive (Porter KK, Metzger RR, Groh JM. Proc Natl Acad Sci USA 104: 17855-17860, 2007; Bulkin DA, Groh JM. Front Neural Circuits 6: 61, 2012; Bulkin DA, Groh JM. J Neurophysiol 107: 785-795, 2012), but little is known about how visual signals affect the IC's auditory representation. We explored how visual cues affect both spiking and local field potential (LFP) activity in the IC of two monkeys performing a task involving saccades to auditory, visual, or combined audiovisual stimuli. We confirm that LFPs are sensitive to the onset of fixation lights and the onset of visual targets presented during steady fixation. The LFP waveforms evoked by combined audiovisual stimuli differed from those evoked by sounds alone. In single-unit spiking activity, responses were weak when visual stimuli were presented alone, but visual stimuli could modulate sound-evoked activity more strongly. Such modulations could involve either increases or decreases in activity, and the direction of modulation was variable and not obviously correlated with the responses evoked by visual or auditory stimuli alone. These findings indicate that visual stimuli shape the IC's auditory representation in flexible ways that differ from those observed previously in multisensory areas. NEW & NOTEWORTHY We find that the inferior colliculus, a primarily auditory brain area, displays distinct population-level responses to visual stimuli. We also find that visual cues can influence the auditory responses of individual neurons. Together, the results provide insight into how relatively early sensory areas may play a role in combining multiple sensory modalities to refine the perception of complex environments.
Affiliation(s)
- Meredith N Schmehl
- Department of Neurobiology, Duke University, Durham, North Carolina, United States
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, United States
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, United States
- Jesse L Herche
- Department of Neurobiology, Duke University, Durham, North Carolina, United States
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, United States
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, United States
- Jennifer M Groh
- Department of Neurobiology, Duke University, Durham, North Carolina, United States
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, United States
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, United States
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina, United States
- Department of Biomedical Engineering, Duke University, Durham, North Carolina, United States
3
Bent T. I can't hear you without my glasses. J Acoust Soc Am 2025; 157:R5-R6. [PMID: 40072508 DOI: 10.1121/10.0036121]
Abstract
The Reflections series takes a look back on historical articles from The Journal of the Acoustical Society of America that have had a significant impact on the science and practice of acoustics.
Affiliation(s)
- Tessa Bent
- Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA
4
Ding K, Rakhshan M, Paredes-Acuña N, Cheng G, Thakor NV. Sensory integration for neuroprostheses: from functional benefits to neural correlates. Med Biol Eng Comput 2024; 62:2939-2960. [PMID: 38760597 DOI: 10.1007/s11517-024-03118-8]
Abstract
In the field of sensory neuroprostheses, one ultimate goal is for individuals to perceive artificial somatosensory information and use the prosthesis with a level of complexity that resembles an intact system. To this end, research has shown that stimulation-elicited somatosensory information improves prosthesis perception and task performance. While studies strive to achieve sensory integration, a crucial phenomenon that enables naturalistic interaction with the environment, this topic has not been commensurately reviewed. Therefore, here we present a perspective for understanding sensory integration in neuroprostheses. First, we review the engineering aspects and functional outcomes in sensory neuroprosthesis studies. In this context, we summarize studies that have suggested sensory integration, focusing on how they have used stimulation-elicited percepts to maximize and improve the reliability of somatosensory information. Next, we review studies that have suggested multisensory integration. These works have demonstrated that congruent and simultaneous multisensory inputs provide cognitive benefits, such that an individual experiences a greater sense of authority over prosthesis movements (i.e., agency) and perceives the prosthesis as part of their own body (i.e., ownership). Thereafter, we present the theoretical and neuroscience framework of sensory integration. We investigate how behavioral models and neural recordings have been applied in the context of sensory integration. Sensory integration models developed in intact-limb individuals have paved the way for sensory neuroprosthesis studies demonstrating multisensory integration. Neural recordings have been used to show how multisensory inputs are processed across cortical areas. Lastly, we discuss ongoing research and challenges in achieving and understanding sensory integration in sensory neuroprostheses. Resolving these challenges would help to develop future strategies to improve the sensory feedback of a neuroprosthetic system.
Affiliation(s)
- Keqin Ding
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD, 21205, USA
- Mohsen Rakhshan
- Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL, 32816, USA
- Disability, Aging, and Technology Cluster, University of Central Florida, Orlando, FL, 32816, USA
- Natalia Paredes-Acuña
- Institute for Cognitive Systems, School of Computation, Information and Technology, Technical University of Munich, 80333, Munich, Germany
- Gordon Cheng
- Institute for Cognitive Systems, School of Computation, Information and Technology, Technical University of Munich, 80333, Munich, Germany
- Nitish V Thakor
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD, 21205, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
5
Senkowski D, Engel AK. Multi-timescale neural dynamics for multisensory integration. Nat Rev Neurosci 2024; 25:625-642. [PMID: 39090214 DOI: 10.1038/s41583-024-00845-7]
Abstract
Carrying out any everyday task, be it driving in traffic, conversing with friends or playing basketball, requires rapid selection, integration and segregation of stimuli from different sensory modalities. At present, even the most advanced artificial intelligence-based systems are unable to replicate the multisensory processes that the human brain routinely performs, but how neural circuits in the brain carry out these processes is still not well understood. In this Perspective, we discuss recent findings that shed fresh light on the oscillatory neural mechanisms that mediate multisensory integration (MI), including power modulations, phase resetting, phase-amplitude coupling and dynamic functional connectivity. We then consider studies that also suggest multi-timescale dynamics in intrinsic ongoing neural activity and during stimulus-driven bottom-up and cognitive top-down neural network processing in the context of MI. We propose a new concept of MI that emphasizes the critical role of neural dynamics at multiple timescales within and across brain networks, enabling the simultaneous integration, segregation, hierarchical structuring and selection of information in different time windows. To highlight predictions from our multi-timescale concept of MI, real-world scenarios in which multi-timescale processes may coordinate MI in a flexible and adaptive manner are considered.
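One of the oscillatory mechanisms surveyed above, phase-amplitude coupling, can be made concrete with a mean-vector-length modulation index in the style of Canolty et al. (2006). The sketch below is illustrative only; the signal, filter bands, and parameter choices are assumptions, not taken from this Perspective:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_modulation_index(x, fs, phase_band=(8, 12), amp_band=(30, 80)):
    """Mean-vector-length phase-amplitude coupling index."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))  # e.g., alpha phase
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))        # e.g., gamma amplitude
    return np.abs(np.mean(amp * np.exp(1j * phase)))         # ~0 means no coupling

# Synthetic check: gamma whose amplitude rides on the alpha cycle
fs = 500
t = np.arange(0, 10, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)
gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t)
x = alpha + 0.5 * gamma + 0.1 * np.random.randn(t.size)
print(pac_modulation_index(x, fs))
```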
Affiliation(s)
- Daniel Senkowski
- Department of Psychiatry and Neurosciences, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
6
Leukel C, Loibl K, Leuders T. Integrating vision and somatosensation does not improve the accuracy and response time when estimating area and perimeter of rectangles in primary school. Trends Neurosci Educ 2024; 36:100238. [PMID: 39266122 DOI: 10.1016/j.tine.2024.100238]
Abstract
BACKGROUND Problem-solving and learning in mathematics involve sensory perception and processing. Multisensory integration may contribute by enhancing sensory estimates. This study assesses whether combining visual and somatosensory information improves elementary students' perimeter and area estimates. METHODS 87 fourth graders compared rectangles with respect to area or perimeter, either solely using visual observation or additionally with somatosensory information. Three experiments targeted different task aspects. Statistical analyses tested success rates and response times. RESULTS Contrary to expectations, adding somatosensory information did not boost success rates for area and perimeter comparisons. Response times even increased when somatosensory information was added. Children's difficulty in accurately tracing figures negatively impacted the success rate of area comparisons. DISCUSSION The results suggest that visual observation alone suffices for accurately estimating and comparing the area and perimeter of rectangles in fourth graders. IMPLICATIONS Careful deliberation on the inclusion of somatosensory information in mathematical tasks concerning perimeter and area estimations of rectangles is recommended.
Affiliation(s)
- Christian Leukel
- University of Education Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Germany
7
Matsui R, Aoyama T, Kato K, Hasegawa Y. Real-time motion force-feedback system with predictive-vision for improving motor accuracy. Sci Rep 2024; 14:2168. [PMID: 38272970 PMCID: PMC10810826 DOI: 10.1038/s41598-024-52811-z]
Abstract
Many haptic guidance systems have been studied over the years; however, most have been limited to predefined guidance methods. Calculating guidance according to the operator's motion is important for efficient human motor adaptation and learning. In this study, we developed a system that haptically provides a guidance trajectory by sequentially weighting between the operator's trajectory and the ideal trajectory calculated from a predictive-vision system. We investigated whether motion completion with a predictive-vision system affects human motor accuracy and adaptation in time-constrained, goal-directed reaching and ball-hitting tasks through subject experiments. The experiment was conducted with 12 healthy participants, all of whom performed ball-hitting tasks. Half of the participants received force guidance from the proposed system during the middle portion of the experiment. We found that use of the proposed system improved the operator's motor performance. Furthermore, we observed a trend in which the improvement in motor performance while using the system correlated with the improvement retained after its washout. These results suggest that the predictive-vision system effectively enhances motor accuracy in dynamic, time-constrained reaching and hitting tasks and may contribute to facilitating motor learning.
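The sequential weighting described in this abstract can be pictured as a time-varying blend between the operator's measured trajectory and the ideal trajectory from the predictive-vision system. A hypothetical sketch; the weighting schedule `w` and trajectories are illustrative, not the paper's actual controller:

```python
import numpy as np

def blended_guidance(operator_traj, ideal_traj, w):
    """Blend two trajectories sample by sample.

    operator_traj, ideal_traj: (T, 2) arrays of planar positions.
    w: (T,) array in [0, 1]; w=0 follows the operator, w=1 the ideal path.
    """
    w = w[:, None]
    return (1.0 - w) * operator_traj + w * ideal_traj

T = 100
t = np.linspace(0.0, 1.0, T)
operator = np.stack([t, 0.2 * np.sin(4 * np.pi * t)], axis=1)  # wobbly reach
ideal = np.stack([t, np.zeros(T)], axis=1)                     # straight path
w = np.clip(2.0 * t - 0.5, 0.0, 1.0)  # illustrative ramp-up of guidance strength
guided = blended_guidance(operator, ideal, w)
```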
Affiliation(s)
- Ryo Matsui
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
- Tadayoshi Aoyama
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
- Kenji Kato
- Assistive Robot Center, National Center for Geriatrics and Gerontology, Obu, Aichi, 474-8511, Japan
- Yasuhisa Hasegawa
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
8
Li H, Wan B, Fang Y, Li Q, Liu JK, An L. An FPGA implementation of Bayesian inference with spiking neural networks. Front Neurosci 2024; 17:1291051. [PMID: 38249589 PMCID: PMC10796689 DOI: 10.3389/fnins.2023.1291051]
Abstract
Spiking neural networks (SNNs), brain-inspired neural network models based on spikes, have the advantage of processing information with low complexity and efficient energy consumption. Currently, there is a growing trend to design dedicated hardware accelerators for SNNs to overcome the limitations of running under the traditional von Neumann architecture. Probabilistic sampling is an effective modeling approach for implementing SNNs that simulate the brain to achieve Bayesian inference. However, sampling consumes considerable time, so there is strong demand for hardware implementations of SNN sampling models that accelerate inference operations. Here, we design an FPGA-based hardware accelerator that speeds up the execution of SNN algorithms through parallelization. We use streaming pipelining and array partitioning operations to accelerate model operation with the least possible resource consumption, and combine the Python productivity for Zynq (PYNQ) framework to migrate the model to the FPGA while increasing the speed of model operations. We verify the functionality and performance of the hardware architecture on the Xilinx Zynq ZCU104. The experimental results show that the proposed hardware accelerator for the SNN sampling model can significantly improve computing speed while preserving the accuracy of inference. In addition, Bayesian inference for spiking neural networks through the PYNQ framework can fully exploit the high performance and low power consumption of FPGAs in embedded applications. Taken together, our proposed FPGA implementation of Bayesian inference with SNNs has great potential for a wide range of applications and is well suited to implementing complex probabilistic model inference in embedded systems.
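The sampling view of spiking inference can be illustrated in a few lines: stochastic binary neurons whose firing probability is a sigmoid of their summed input perform Gibbs-like sampling from a Boltzmann distribution, so firing rates approximate posterior marginals. A minimal software sketch of that idea, illustrative only and not the paper's FPGA design:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_spike_sampling(W, b, n_steps=5000, burn_in=500):
    """Sample binary 'spike' states s ~ P(s) proportional to exp(0.5 s^T W s + b^T s)."""
    n = b.size
    s = rng.integers(0, 2, n)
    counts = np.zeros(n)
    for step in range(n_steps):
        for i in range(n):                        # each neuron fires stochastically
            u = W[i] @ s - W[i, i] * s[i] + b[i]  # summed input from the others
            s[i] = rng.random() < 1 / (1 + np.exp(-u))
        if step >= burn_in:
            counts += s
    return counts / (n_steps - burn_in)           # firing rates ~ posterior marginals

W = np.array([[0.0, 1.5], [1.5, 0.0]])  # mutual excitation couples the two units
b = np.array([-0.5, -0.5])
print(gibbs_spike_sampling(W, b))
```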
Affiliation(s)
- Haoran Li
- Guangzhou Institute of Technology, Xidian University, Guangzhou, China
- Bo Wan
- School of Computer Science and Technology, Xidian University, Xi'an, China
- Key Laboratory of Smart Human Computer Interaction and Wearable Technology of Shaanxi Province, Xi'an, China
- Ying Fang
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, China
- Digital Fujian Internet-of-Thing Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, China
- Qifeng Li
- Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Jian K. Liu
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Lingling An
- Guangzhou Institute of Technology, Xidian University, Guangzhou, China
- School of Computer Science and Technology, Xidian University, Xi'an, China
9
Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. [PMID: 38270851 DOI: 10.1007/978-981-99-7611-9_2]
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for an organism is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This concept is relatively new, in that previous research has largely been conducted in two parallel disciplines: much effort has gone either into sensory integration across modalities using activity summed over a window of time, or into decision-making with a single sensory modality that evolves over time. Recently, a few studies with neurophysiological measurements have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal or frontal lobes in mammals. In this review, we summarize and comment on these studies, which combine the two long-separate fields of multisensory integration and decision-making. We show how the new findings provide a more complete understanding of the neural mechanisms mediating multisensory information processing.
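As a quantitative reference point for what accumulating evidence across modalities buys, the standard ideal-observer account used in this literature predicts that reliabilities (inverse variances) add, so combined sensitivity exceeds either unimodal sensitivity. This is the textbook result, stated here for orientation rather than taken from the chapter's own derivation:

```latex
\frac{1}{\sigma_{\mathrm{comb}}^{2}} = \frac{1}{\sigma_{v}^{2}} + \frac{1}{\sigma_{a}^{2}}
\qquad\Longrightarrow\qquad
d'_{\mathrm{comb}} = \sqrt{d'^{2}_{v} + d'^{2}_{a}}.
```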
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
10
Soballa P, Frings C, Schmalbrock P, Merz S. Multisensory integration reduces landmark distortions for tactile but not visual targets. J Neurophysiol 2023; 130:1403-1413. [PMID: 37910559 DOI: 10.1152/jn.00282.2023]
Abstract
Target localization is influenced by the presence of additionally presented nontargets, termed landmarks. In both the visual and the tactile modality, such landmarks lead to systematic distortions of target localization, often resulting in a shift toward the landmark. This shift has been attributed to averaging of the spatial memory of both stimuli. Crucially, everyday experiences often rely on multiple modalities, and multisensory research suggests that inputs from different senses are optimally integrated, not averaged, for accurate perception, resulting in more reliable perception of cross-modal compared with uni-modal stimuli. As this could also lead to a reduced influence of the landmark, we tested whether landmark distortions would be reduced when the landmark was presented in a different modality, or whether they were unaffected by the modalities presented. In two experiments (each n = 30), tactile or visual targets were paired with tactile or visual landmarks. Experiment 1 showed that targets were shifted less toward landmarks from a different modality than toward landmarks from the same modality, an effect that was more pronounced for tactile than for visual targets. Experiment 2 aimed to replicate this pattern with increased visual uncertainty, to rule out that small localization shifts of visual targets due to low uncertainty had driven the results. Still, landmark modality influenced localization shifts for tactile but not visual targets. The data pattern for tactile targets is not in line with memory averaging but seems to reflect multisensory integration, whereas visual targets were less prone to landmark distortions and do not appear to benefit from multisensory integration. NEW & NOTEWORTHY In the present study, we directly tested the predictions of two different accounts, namely spatial memory averaging and multisensory integration, concerning the degree of landmark distortion of targets across modalities. We showed that landmark distortions were reduced across modalities compared with distortions within modalities, which is in line with multisensory integration. Crucially, this pattern was more pronounced for tactile than for visual targets.
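The averaging-versus-integration contrast drawn here can be stated compactly: plain averaging weights the two position estimates equally, whereas maximum-likelihood integration weights each cue by its reliability and always reduces the variance of the combined estimate. A small sketch with made-up numbers:

```python
def integrate(s_a, var_a, s_b, var_b):
    """Reliability-weighted (MLE) combination of two position estimates."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    s_hat = w_a * s_a + (1 - w_a) * s_b
    var_hat = 1 / (1 / var_a + 1 / var_b)  # always below the smaller variance
    return s_hat, var_hat

# An uncertain tactile target with a precise visual landmark: the combined
# estimate is pulled toward the reliable cue, not to the midpoint.
print(integrate(s_a=0.0, var_a=4.0, s_b=2.0, var_b=1.0))  # -> (1.6, 0.8)
```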
Affiliation(s)
- Paula Soballa
- Department of Psychology, University of Trier, Germany
- Simon Merz
- Department of Psychology, University of Trier, Germany
11
Pinardi M, Longo MR, Formica D, Strbac M, Mehring C, Burdet E, Di Pino G. Impact of supplementary sensory feedback on the control and embodiment in human movement augmentation. Commun Eng 2023; 2:64. [PMCID: PMC10955865 DOI: 10.1038/s44172-023-00111-1]
Abstract
In human movement augmentation, the number of controlled degrees of freedom could be enhanced by the simultaneous and independent use of supernumerary robotic limbs (SRLs) and natural ones. However, this poses several challenges that could be mitigated by encoding and relaying the SRL status to the user. Here, we review the impact of supplementary sensory feedback on the control and embodiment of SRLs. We classify the main feedback features and analyse how they improve control performance. We report the feasibility of pushing body representation beyond natural human morphology and suggest that gradual SRL embodiment could make multisensory incongruencies less disruptive. We also highlight shared computational bases between SRL motor control and embodiment and suggest contextualizing them within the same theoretical framework. Finally, we argue that a shift towards long-term experimental paradigms is necessary for successfully integrating motor control and embodiment. Supernumerary robotic limbs are robotic devices providing additional limbs to the user. Mattia Pinardi and colleagues review the impact of supplementary sensory feedback on the control performance and embodiment of supernumerary robotic limbs.
Affiliation(s)
- Mattia Pinardi
- NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Matthew R. Longo
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Domenico Formica
- NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- School of Engineering, Newcastle University, Newcastle upon Tyne, UK
- Matija Strbac
- Tecnalia Serbia Ltd, Belgrade, Serbia
- University of Belgrade, School of Electrical Engineering, Belgrade, Serbia
- Carsten Mehring
- Bernstein Center and Faculty of Biology, University of Freiburg, Freiburg, Germany
- Etienne Burdet
- Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
- Giovanni Di Pino
- NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
12
Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. [PMID: 37295419 PMCID: PMC10957398 DOI: 10.1016/j.neuron.2023.05.008]
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
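The accumulator model referenced in this abstract can be caricatured in a few lines: the two modalities contribute an additive drift that drives a noisy integrator to one of two bounds, yielding both a choice and a reaction time. A toy version with illustrative parameter values, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def additive_accumulator(vis_evidence, aud_evidence, bound=1.0,
                         noise=0.1, dt=0.001, max_t=2.0):
    """Drift-diffusion with additive audiovisual drift; returns (choice, RT)."""
    drift = vis_evidence + aud_evidence          # additive combination
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

trials = [additive_accumulator(0.8, 0.4) for _ in range(200)]
print(np.mean([c for c, _ in trials]), np.mean([rt for _, rt in trials]))
```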
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK; UCL Institute of Ophthalmology, University College London, London, UK
- Timothy P H Sit
- Sainsbury-Wellcome Center, University College London, London, UK
- Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
- Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK
13
Villalonga MB, Sekuler R. Keep your finger on the pulse: Better rate perception and gap detection with vibrotactile compared to visual stimuli. Atten Percept Psychophys 2023; 85:2004-2017. [PMID: 37587355 PMCID: PMC10545646 DOI: 10.3758/s13414-023-02736-y]
Abstract
Important characteristics of the environment can be represented in the temporal pattern of sensory stimulation. In two experiments, we compared the accuracy of temporal processing by different modalities. Experiment 1 examined binary categorization of rate for visual (V) or vibrotactile (T) stimulus pulses presented at either 4 or 6 Hz. Inter-pulse intervals were either constant or variable, perturbed by random Gaussian variates. Subjects categorized the rate of T pulse sequences more accurately than V sequences. In V conditions only, subjects disproportionately tended to mis-categorize 4-Hz pulse rates, for all but the most variable sequences. In Experiment 2, we compared gap detection thresholds across modalities, using the same V and T pulses from Experiment 1, as well as bimodal (VT) pulses. Visual gap detection thresholds were larger (≈3×) than tactile thresholds. Additionally, performance with VT stimuli seemed to be nearly completely dominated by their T components. Together, these results suggest (i) that vibrotactile temporal acuity surpasses visual temporal acuity, and (ii) that vibrotactile stimulation has considerable, untapped potential to convey temporal information like that needed for eyes-free alerting signals.
Affiliation(s)
- Robert Sekuler
- Department of Psychology, Brandeis University, Waltham, MA, USA
- Program in Neuroscience, Brandeis University, Waltham, MA, USA
14
Park J, Kim S, Kim HR, Lee J. Prior expectation enhances sensorimotor behavior by modulating population tuning and subspace activity in sensory cortex. Sci Adv 2023; 9:eadg4156. [PMID: 37418521 PMCID: PMC10328413 DOI: 10.1126/sciadv.adg4156]
Abstract
Prior knowledge facilitates our perception and goal-directed behaviors, particularly when sensory input is lacking or noisy. However, the neural mechanisms underlying the improvement in sensorimotor behavior by prior expectations remain unknown. In this study, we examine neural activity in the middle temporal (MT) area of visual cortex while monkeys perform a smooth pursuit eye movement task with a prior expectation of the visual target's motion direction. Prior expectations selectively reduce MT neural responses depending on the neurons' preferred directions when the sensory evidence is weak. This response reduction effectively sharpens neural population direction tuning. Simulations with a realistic MT population demonstrate that sharpening the tuning can explain the biases and variabilities in smooth pursuit, suggesting that neural computations in the sensory area alone can underpin the integration of prior knowledge and sensory evidence. State-space analysis further supports this by revealing neural signals of prior expectations in the MT population activity that correlate with behavioral changes.
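The population-tuning sharpening described here can be illustrated with a toy Gaussian tuning curve: suppressing neurons in proportion to how far their preferred direction lies from the expected direction narrows the population profile. A hypothetical sketch, not the study's fitted model:

```python
import numpy as np

prefs = np.linspace(-90, 90, 181)                  # preferred directions (deg)
population = np.exp(-prefs**2 / (2 * 40.0**2))     # response to a 0-deg target

# Expectation-dependent suppression: neurons tuned far from the expected
# direction (0 deg) are reduced more, which sharpens the population tuning.
suppression = 1.0 - 0.5 * (1.0 - np.exp(-prefs**2 / (2 * 25.0**2)))
sharpened = population * suppression

def half_width(y, x):
    above = x[y >= y.max() / 2]
    return above.max() - above.min()

print(half_width(population, prefs), half_width(sharpened, prefs))  # width shrinks
```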
Affiliation(s)
- JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- HyungGoo R. Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
15
Pinardi M, Di Stefano N, Di Pino G, Spence C. Exploring crossmodal correspondences for future research in human movement augmentation. Front Psychol 2023; 14:1190103. [PMID: 37397340 PMCID: PMC10308310 DOI: 10.3389/fpsyg.2023.1190103]
Abstract
"Crossmodal correspondences" are the consistent mappings between perceptual dimensions or stimuli from different sensory domains, which have been widely observed in the general population and investigated by experimental psychologists in recent years. At the same time, the emerging field of human movement augmentation (i.e., the enhancement of an individual's motor abilities by means of artificial devices) has been struggling with the question of how to relay supplementary information concerning the state of the artificial device and its interaction with the environment to the user, which may help the latter to control the device more effectively. To date, this challenge has not been explicitly addressed by capitalizing on our emerging knowledge concerning crossmodal correspondences, despite these being tightly related to multisensory integration. In this perspective paper, we introduce some of the latest research findings on the crossmodal correspondences and their potential role in human augmentation. We then consider three ways in which the former might impact the latter, and the feasibility of this process. First, crossmodal correspondences, given the documented effect on attentional processing, might facilitate the integration of device status information (e.g., concerning position) coming from different sensory modalities (e.g., haptic and visual), thus increasing their usefulness for motor control and embodiment. Second, by capitalizing on their widespread and seemingly spontaneous nature, crossmodal correspondences might be exploited to reduce the cognitive burden caused by additional sensory inputs and the time required for the human brain to adapt the representation of the body to the presence of the artificial device. Third, to accomplish the first two points, the benefits of crossmodal correspondences should be maintained even after sensory substitution, a strategy commonly used when implementing supplementary feedback.
Affiliation(s)
- Mattia Pinardi
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Giovanni Di Pino
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
16
Eckel S, Egelhaaf M, Doussot C. Nest-associated scent marks help bumblebees localizing their nest in visually ambiguous situations. Front Behav Neurosci 2023; 17:1155223. [PMID: 37389203 PMCID: PMC10300278 DOI: 10.3389/fnbeh.2023.1155223]
Abstract
Social insects such as ants and bees are excellent navigators. To manage their daily routines, bumblebees, for example, must learn multiple locations in their environment, such as flower patches and their nest. While navigating from one location to another, they rely mainly on vision. Although the environment in which bumblebees live, be it a meadow or a garden, is visually stable overall, it may be prone to changes such as moving shadows or the displacement of an object in the scenery. Therefore, bees might not rely solely on visual cues, but use additional sources of information, forming a multimodal guidance system to ensure their return home. Here we show that the home-finding behavior of bumblebees, when confronted with a visually ambiguous scenario, is strongly influenced by natural scent marks they deposit at the inconspicuous nest hole when leaving their nest. Bumblebees search for a longer time, and target their search with precision, at potential nest locations that are visually familiar if these are also marked with their natural scent. This finding sheds light on the crucial role of odor in helping bees find their way back to their inconspicuous nest.
17
Pinardi M, Noccaro A, Raiano L, Formica D, Di Pino G. Comparing end-effector position and joint angle feedback for online robotic limb tracking. PLoS One 2023; 18:e0286566. [PMID: 37289675 PMCID: PMC10249844 DOI: 10.1371/journal.pone.0286566]
Abstract
Somatosensation greatly increases the ability to control our natural body. This suggests that supplementing vision with haptic sensory feedback would also be helpful when a user aims to control a robotic arm proficiently. However, whether the position of the robot and its continuous updates should be coded in an extrinsic or an intrinsic reference frame is not known. Here we compared two different supplementary feedback contents concerning the status of a robotic limb in a 2-DoF configuration: one encoding the Cartesian coordinates of the end-effector of the robotic arm (i.e., task-space feedback) and another encoding the robot's joint angles (i.e., joint-space feedback). Feedback was delivered to blindfolded participants through vibrotactile stimulation applied to the participants' leg. After 1.5 hours of training with both feedback types, participants were significantly more accurate with task-space than with joint-space feedback, as shown by lower position and aiming errors, albeit not faster (i.e., similar onset delay). However, the learning index during training was significantly higher with joint-space than with task-space feedback. These results suggest that task-space feedback is probably more intuitive and better suited to activities requiring short training sessions, while joint-space feedback showed potential for long-term improvement. We speculate that the latter, despite performing worse in the present work, might ultimately be better suited for applications requiring long training, such as the control of supernumerary robotic limbs for surgical robotics, heavy industrial manufacturing, or, more generally, human movement augmentation.
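The two feedback contents compared here map onto the two classic robot descriptions: joint space (θ1, θ2) and task space (x, y), related by the planar forward kinematics. A minimal sketch; the link lengths are illustrative, not the study's apparatus:

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """End-effector (x, y) of a planar 2-DoF arm from its joint angles (rad)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Joint-space feedback would encode (theta1, theta2) directly;
# task-space feedback would encode the resulting (x, y).
print(forward_kinematics(np.pi / 4, np.pi / 6))
```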
Affiliation(s)
- Mattia Pinardi
- NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Alessia Noccaro
- Neurorobotics Group, Newcastle University, Newcastle, United Kingdom
- Luigi Raiano
- NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Domenico Formica
- Neurorobotics Group, Newcastle University, Newcastle, United Kingdom
- Giovanni Di Pino
- NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
18
Zou W, Li C, Huang H. Ensemble perspective for understanding temporal credit assignment. Phys Rev E 2023; 107:024307. [PMID: 36932505 DOI: 10.1103/physreve.107.024307]
Abstract
Recurrent neural networks are widely used for modeling spatiotemporal sequences in both natural language processing and neural population dynamics. However, understanding temporal credit assignment in these networks is hard. Here, we propose that each individual connection in the recurrent computation is modeled by a spike-and-slab distribution, rather than by a precise weight value. We then derive the mean-field algorithm to train the network at the ensemble level. The method is applied to classifying handwritten digits when pixels are read in sequence, and to the multisensory integration task that is a fundamental cognitive function of animals. Our model reveals important connections that determine the overall performance of the network. The model also shows how spatiotemporal information is processed through the hyperparameters of the distribution, and moreover reveals distinct types of emergent neural selectivity. To provide a mechanistic analysis of the ensemble learning, we first derive an analytic solution of the learning in the infinitely large network limit. We then carry out a low-dimensional projection of both neural and synaptic dynamics, analyze symmetry breaking in the parameter space, and finally demonstrate the role of stochastic plasticity in the recurrent computation. Our study thus sheds light on how weight uncertainty impacts temporal credit assignment in recurrent neural networks from the ensemble perspective.
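For orientation, a spike-and-slab prior over a connection weight w mixes a point mass at zero (the "spike", connection absent) with a continuous "slab" over weight values; with a Gaussian slab it takes the standard form below (generic notation assumed here, not copied from the paper):

```latex
p(w) = \pi\,\delta(w) + (1-\pi)\,\mathcal{N}\!\left(w;\,\mu,\,\sigma^{2}\right),
```

where π is the probability that the connection is absent, and π, μ, and σ² are the hyperparameters through which, in the ensemble view, spatiotemporal information is processed.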
Affiliation(s)
- Wenxuan Zou
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Chan Li
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
19
Association between different sensory modalities based on concurrent time series data obtained by a collaborative reservoir computing model. Sci Rep 2023; 13:173. [PMID: 36600034 DOI: 10.1038/s41598-023-27385-x]
Abstract
Humans perceive the external world by integrating information from different modalities obtained through the sensory organs. However, the underlying mechanism is still unclear and has been a subject of widespread interest in the fields of psychology and brain science. A model using two reservoir computing systems (a type of recurrent neural network) trained to mimic each other's output can detect stimulus patterns that repeatedly appear in a time series signal. We applied this model to identifying specific patterns that co-occur between information from different modalities. The model self-organized around specific fluctuation patterns that co-occurred between different modalities and could detect each fluctuation pattern. Additionally, similar to the case in which perception is influenced by synchronous or asynchronous presentation of multimodal stimuli, the model failed to work correctly for signals that did not co-occur with corresponding fluctuation patterns. Recent experimental studies have suggested that direct interaction between different sensory systems is important for multisensory integration, in addition to top-down control from higher brain regions such as the association cortex. Because several patterns of interaction between sensory modules can be incorporated into the employed model, we were able to compare their performance; the original version of the model incorporated such an interaction as the teaching signal for learning. The performance of the original and alternative models was evaluated, and the original model was found to perform best. Thus, we demonstrated that feeding back the outputs of appropriately trained sensory modules performed best among the examined patterns of interaction. The proposed model incorporates information encoded by the dynamic state of the neural population and the interactions between different sensory modules, both based on recent experimental observations; this allowed us to study the influence of the temporal relationship and frequency of occurrence of multisensory signals on sensory integration, as well as the nature of the interaction between different sensory signals.
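A reservoir computing module of the kind used here can be sketched compactly: a fixed random recurrent network is driven by one modality's signal, and only a linear readout is trained, with the peer module's output serving as the teaching signal. A minimal echo state network sketch; the hyperparameters and toy signals are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

class Reservoir:
    def __init__(self, n_in, n_res=200, spectral_radius=0.9, leak=0.3):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.standard_normal((n_res, n_res))
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.leak = leak
        self.W_out = None

    def run(self, u):                       # u: (T, n_in) input time series
        x = np.zeros(self.W.shape[0])
        states = []
        for ut in u:
            x = (1 - self.leak) * x + self.leak * np.tanh(self.W_in @ ut + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def train(self, u, target, ridge=1e-3): # target could be the peer's output
        X = self.run(u)
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)

    def predict(self, u):
        return self.run(u) @ self.W_out

# Toy use: an 'auditory' module learns to reproduce a 'visual' signal that
# co-occurs with its own input (two phase-locked sinusoids).
t = np.linspace(0, 20 * np.pi, 2000)
aud, vis = np.sin(t)[:, None], np.sin(t + 0.5)[:, None]
module = Reservoir(n_in=1)
module.train(aud, vis)
print(np.mean((module.predict(aud) - vis) ** 2))
```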
20
Fossataro C, Galigani M, Rossi Sebastiano A, Bruno V, Ronga I, Garbarini F. Spatial proximity to others induces plastic changes in the neural representation of the peripersonal space. iScience 2022; 26:105879. [PMCID: PMC9840938 DOI: 10.1016/j.isci.2022.105879]
Abstract
Peripersonal space (PPS) is a highly plastic "invisible bubble" surrounding the body whose boundaries are mapped through multisensory integration. Yet, it is unclear how the spatial proximity to others alters PPS boundaries. Across five experiments (N = 80), by recording behavioral and electrophysiological responses to visuo-tactile stimuli, we demonstrate that the proximity to others induces plastic changes in the neural PPS representation. The spatial proximity to someone else's hand shrinks the portion of space within which multisensory responses occur, thus reducing the PPS boundaries. This suggests that PPS representation, built from bodily and multisensory signals, plastically adapts to the presence of conspecifics to define the self-other boundaries, so that what is usually coded as "my space" is recoded as "your space". When the space is shared with conspecifics, it seems adaptive to move the other-space away from the self-space to discriminate whether external events pertain to the self-body or to other-bodies.
Affiliation(s)
- Carlotta Fossataro
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Mattia Galigani
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Valentina Bruno
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Irene Ronga
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Francesca Garbarini
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Neuroscience Institute of Turin (NIT), Turin 10123, Italy
21
Vastano R, Costantini M, Alexander WH, Widerstrom-Noga E. Multisensory integration in humans with spinal cord injury. Sci Rep 2022; 12:22156. [PMID: 36550184 PMCID: PMC9780239 DOI: 10.1038/s41598-022-26678-x]
Abstract
Although multisensory integration (MSI) has been extensively studied, the underlying mechanisms remain a topic of ongoing debate. Here we investigate these mechanisms by comparing MSI in healthy controls to a clinical population with spinal cord injury (SCI). Deafferentation following SCI induces sensorimotor impairment, which may alter the ability to synthesize cross-modal information. We applied mathematical and computational modeling to reaction time data recorded in response to temporally congruent cross-modal stimuli. We found that MSI in both SCI and healthy controls is best explained by cross-modal perceptual competition, highlighting a common competition mechanism. Relative to controls, MSI impairments in SCI participants were better explained by reduced stimulus salience leading to increased cross-modal competition. By combining traditional analyses with model-based approaches, we examine how MSI is realized during normal function, and how it is compromised in a clinical population. Our findings support future investigations identifying and rehabilitating MSI deficits in clinical disorders.
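One standard tool for adjudicating between integration and competition accounts from redundant-target reaction times is Miller's race-model inequality, F_AV(t) ≤ F_A(t) + F_V(t). The sketch below shows the classic test on simulated data; it illustrates the general analysis family, not necessarily the authors' exact model:

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 19)):
    """Miller (1982) race-model inequality: F_AV(t) <= F_A(t) + F_V(t).

    Positive values at fast quantiles indicate more multisensory facilitation
    than any parallel race between the two unisensory channels can produce.
    """
    t = np.quantile(rt_av, quantiles)              # probe times from AV quantiles
    F = lambda rts, t: np.mean(rts[:, None] <= t, axis=0)
    bound = np.minimum(F(rt_a, t) + F(rt_v, t), 1.0)
    return F(rt_av, t) - bound                     # > 0 means a violation

rng = np.random.default_rng(3)
rt_a = rng.normal(0.42, 0.06, 500)                 # toy unisensory RTs (s)
rt_v = rng.normal(0.45, 0.06, 500)
rt_av = rng.normal(0.36, 0.05, 500)                # toy redundant-target RTs
print(race_model_violation(rt_av, rt_a, rt_v).round(2))
```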
Affiliation(s)
- Roberta Vastano
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- William H. Alexander
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, USA
- Department of Psychology, Florida Atlantic University, Boca Raton, USA
- The Brain Institute, Florida Atlantic University, Boca Raton, USA
- Eva Widerstrom-Noga
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
22
Crosse MJ, Foxe JJ, Tarrit K, Freedman EG, Molholm S. Resolution of impaired multisensory processing in autism and the cost of switching sensory modality. Commun Biol 2022; 5:601. [PMID: 35773473 PMCID: PMC9246932 DOI: 10.1038/s42003-022-03519-1]
Abstract
Children with autism spectrum disorders (ASD) exhibit alterations in multisensory processing, which may contribute to the prevalence of social and communicative deficits in this population. Resolution of multisensory deficits has been observed in teenagers with ASD for complex, social speech stimuli; however, whether this resolution extends to more basic multisensory processing deficits remains unclear. Here, in a cohort of 364 participants we show using simple, non-social audiovisual stimuli that deficits in multisensory processing observed in high-functioning children and teenagers with ASD are not evident in adults with the disorder. Computational modelling indicated that multisensory processing transitions from a default state of competition to one of facilitation, and that this transition is delayed in ASD. Further analysis revealed group differences in how sensory channels are weighted, and how this is impacted by preceding cross-sensory inputs. Our findings indicate that there is a complex and dynamic interplay among the sensory systems that differs considerably in individuals with ASD. Crosse et al. study a cohort of 364 participants with autism spectrum disorders (ASD) and matched controls, and show that deficits in multisensory processing observed in high-functioning children and teenagers with ASD are not evident in adults with the disorder. Using computational modelling they go on to demonstrate that there is a delayed transition of multisensory processing from a default state of competition to one of facilitation in ASD, as well as differences in sensory weighting and the ability to switch between sensory modalities, which sheds light on the interplay among sensory systems that differ in ASD individuals.
Affiliation(s)
- Michael J Crosse
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, USA
- The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY, USA
- Trinity Centre for Biomedical Engineering, Department of Mechanical, Manufacturing & Biomedical Engineering, Trinity College Dublin, Dublin, Ireland
- John J Foxe
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, USA
- The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY, USA
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Katy Tarrit
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Edward G Freedman
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Sophie Molholm
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, USA
- The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY, USA
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
23
|
Gil-Guevara O, Bernal HA, Riveros AJ. Honey bees respond to multimodal stimuli following the Principle of Inverse Effectiveness. J Exp Biol 2022; 225:275501. [PMID: 35531628 PMCID: PMC9206449 DOI: 10.1242/jeb.243832] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Accepted: 04/29/2022] [Indexed: 11/20/2022]
Abstract
Multisensory integration is assumed to entail benefits for receivers across multiple ecological contexts. However, the effectiveness of signal integration is constrained by features of the spatiotemporal and intensity domains. How sensory modalities are integrated during tasks facilitated by learning and memory, such as pollination, remains unsolved. Honey bees use olfactory and visual cues during foraging, making them a good model for studying the use of multimodal signals. Here, we examined the effect of stimulus intensity on the learning and memory performance of bees trained with unimodal or bimodal stimuli. We measured performance and response latency across planned discrete levels of stimulus intensity, employing the conditioning of the proboscis extension response protocol with an electromechanical setup that allowed us to control olfactory and visual stimuli simultaneously and precisely at different intensities. Our results show that bimodal enhancement during learning and memory grew as stimulus intensity decreased, i.e., when the separate unimodal components were least effective. This effect was not detectable in response latency, however. Remarkably, these results support the principle of inverse effectiveness, traditionally studied in vertebrates, which predicts that multisensory stimuli are integrated more effectively when the best unisensory response is relatively weak. We thus argue that the bees' performance with a bimodal stimulus depends on the interaction and intensity of its individual components. We further hold that including findings across all levels of analysis enriches the traditional understanding of the mechanics of, and reliance on, complex signals in honey bees. Summary: Bimodal enhancement during learning and memory tasks in Africanized honey bees increases as the stimulus intensity of the unimodal components decreases; this indicates that learning performance depends on the interaction between the intensity of the components and the nature of the sensory modalities involved, supporting the principle of inverse effectiveness.
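The enhancement index behind the principle of inverse effectiveness is simple to compute. Below is a minimal sketch using the classic percent-enhancement measure; the performance numbers are invented for illustration and are not data from the study.

```python
# Sketch: quantifying the principle of inverse effectiveness.
# All numbers are illustrative, not data from the study.

def multisensory_enhancement(bimodal, best_unimodal):
    """Percent enhancement of the bimodal response over the best
    unimodal response (the classic enhancement index)."""
    return 100.0 * (bimodal - best_unimodal) / best_unimodal

# Hypothetical proportions of conditioned proboscis extension at
# three stimulus intensities (low -> high).
performance = {
    "low":    {"odor": 0.20, "visual": 0.15, "bimodal": 0.45},
    "medium": {"odor": 0.45, "visual": 0.35, "bimodal": 0.65},
    "high":   {"odor": 0.70, "visual": 0.55, "bimodal": 0.80},
}

for level, p in performance.items():
    best = max(p["odor"], p["visual"])
    gain = multisensory_enhancement(p["bimodal"], best)
    print(f"{level:>6}: best unimodal={best:.2f}  "
          f"bimodal={p['bimodal']:.2f}  enhancement={gain:5.1f}%")
# Enhancement is largest at low intensity, i.e., when the best
# unisensory response is weakest: inverse effectiveness.
```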
Collapse
Affiliation(s)
- Oswaldo Gil-Guevara
- Departamento de Biología, Facultad de Ciencias Naturales, Universidad del Rosario, Cra. 26 #63B-48, Bogotá, Colombia
| | - Hernan A. Bernal
- Programa de Ingeniería Biomédica, Escuela de Medicina y Ciencias de la Salud, Universidad del Rosario. Bogotá, Colombia
| | - Andre J. Riveros
- Departamento de Biología, Facultad de Ciencias Naturales, Universidad del Rosario, Cra. 26 #63B-48, Bogotá, Colombia
| |
Collapse
|
24
|
Pesnot Lerousseau J, Parise CV, Ernst MO, van Wassenhove V. Multisensory correlation computations in the human brain identified by a time-resolved encoding model. Nat Commun 2022; 13:2489. [PMID: 35513362 PMCID: PMC9072402 DOI: 10.1038/s41467-022-29687-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Accepted: 03/14/2022] [Indexed: 11/09/2022] Open
Abstract
Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for resolving the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirm that the Multisensory Correlation Detector explains causal inference and temporal order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of fit, with more reliable fits during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which would explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.
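For intuition about this model class: the Multisensory Correlation Detector correlates low-pass filtered unimodal signals and reads out both the strength of the audiovisual correlation (evidence for a common cause) and its sign over time (which modality leads). The sketch below is a deliberately simplified variant; the first-order filters, time constants, and readouts are assumptions for illustration, not the fitted model of the paper.

```python
# A simplified sketch in the spirit of the Multisensory Correlation
# Detector. Filter shapes and time constants are illustrative
# assumptions, not the fitted model from the paper.
import numpy as np

def lowpass(x, tau, dt=0.001):
    """First-order exponential low-pass filter."""
    y = np.zeros_like(x)
    alpha = dt / (tau + dt)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def mcd(audio, video, tau_fast=0.05, tau_slow=0.15, dt=0.001):
    a_f, v_f = lowpass(audio, tau_fast, dt), lowpass(video, tau_fast, dt)
    a_s, v_s = lowpass(audio, tau_slow, dt), lowpass(video, tau_slow, dt)
    u1, u2 = v_f * a_s, a_f * v_s          # two mirror-symmetric subunits
    corr = np.mean(u1 * u2)                # evidence for a common cause
    lag = np.mean(u1 - u2)                 # sign indicates which modality led
    return corr, lag

t = np.arange(0, 1, 0.001)
pulse = lambda onset: ((t > onset) & (t < onset + 0.02)).astype(float)
print(mcd(pulse(0.40), pulse(0.40)))   # synchronous: high corr, ~0 lag
print(mcd(pulse(0.40), pulse(0.55)))   # asynchronous: lower corr, lag != 0
```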
Collapse
Affiliation(s)
- Jacques Pesnot Lerousseau
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.
- Applied Cognitive Psychology, Ulm University, Ulm, Germany.
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France.
| | | | - Marc O Ernst
- Applied Cognitive Psychology, Ulm University, Ulm, Germany
| | - Virginie van Wassenhove
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France
| |
Collapse
|
25
|
Delis I, Ince RAA, Sajda P, Wang Q. Neural Encoding of Active Multi-Sensing Enhances Perceptual Decision-Making via a Synergistic Cross-Modal Interaction. J Neurosci 2022; 42:2344-2355. [PMID: 35091504 PMCID: PMC8936614 DOI: 10.1523/jneurosci.0861-21.2022] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 11/29/2021] [Accepted: 01/02/2022] [Indexed: 12/16/2022] Open
Abstract
Most perceptual decisions rely on the active acquisition of evidence from the environment, involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the control of the participants, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance. SIGNIFICANCE STATEMENT In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for the decision, and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
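A toy illustration of the kind of synergy the final analysis targets: with a simple interaction-information measure, two representations can each carry no information about behavior while their combination carries a full bit. The paper uses a more refined information-theoretic methodology; the measure and variables below are illustrative assumptions.

```python
# Sketch: detecting a synergistic interaction between two representations
# with a simple interaction-information measure on toy binary variables.
import numpy as np
from collections import Counter

def entropy(samples):
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(x, y):
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

rng = np.random.default_rng(0)
v = rng.integers(0, 2, 10000)          # "visual" representation
h = rng.integers(0, 2, 10000)          # "haptic" representation
b = v ^ h                              # behaviour depends on both jointly

joint = mutual_info(list(zip(v, h)), b)
synergy = joint - mutual_info(v, b) - mutual_info(h, b)
print(f"I(V;B)={mutual_info(v, b):.3f}  I(H;B)={mutual_info(h, b):.3f}")
print(f"I(V,H;B)={joint:.3f}  synergy={synergy:.3f}")  # ~1 bit, purely synergistic
```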
Collapse
Affiliation(s)
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, LS2 9JT, United Kingdom
| | - Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, G12 8QQ, United Kingdom
| | - Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, New York 10027
- Data Science Institute, Columbia University, New York, New York 10027
| | - Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, New York 10027
| |
Collapse
|
26
|
Sou KL, Say A, Xu H. Unity Assumption in Audiovisual Emotion Perception. Front Neurosci 2022; 16:782318. [PMID: 35310087 PMCID: PMC8931414 DOI: 10.3389/fnins.2022.782318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/09/2022] [Indexed: 11/29/2022] Open
Abstract
We experience various sensory stimuli every day and bind them into unified percepts. How does this integration occur? What are its inherent mechanisms? The “unity assumption” proposes that a perceiver’s belief in the unity of individual unisensory signals modulates the degree of multisensory integration. However, this has yet to be verified or quantified in the context of semantic emotion integration. In the present study, we investigated the ability of subjects to judge the intensities and degrees of similarity of faces and voices expressing two emotions (angry and happy). We found that more similar stimulus intensities were associated with a stronger likelihood of the face and voice being integrated. More interestingly, multisensory integration in emotion perception was observed to follow a Gaussian distribution as a function of the emotion-intensity difference between face and voice, with the optimal cut-off at a difference of about 2.50 points on a 7-point Likert scale. This provides a quantitative estimate of the multisensory integration function in audio-visual semantic emotion perception with regard to stimulus intensity. Moreover, to investigate how multisensory integration varies across the population, we examined the effects of participants' personality and autistic traits. Here, we found no correlation between autistic traits and unisensory processing in a nonclinical population. Our findings shed light on the current understanding of multisensory integration mechanisms.
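The reported Gaussian integration function is easy to sketch. Assume a unit-amplitude Gaussian whose width is set so that the likelihood of integration falls to one half at the reported 2.50-point cut-off; this parameterization is an assumption for illustration, whereas the paper estimates the curve from data.

```python
# Sketch: a Gaussian "unity" function of the face-voice intensity
# difference (7-point Likert units), with width chosen so that
# p(integrate) drops to 0.5 at |delta| = 2.5 points.
import math

def p_integration(delta, width=2.5 / math.sqrt(2 * math.log(2))):
    """Probability that face and voice are perceptually integrated,
    as a Gaussian function of their emotion-intensity difference."""
    return math.exp(-0.5 * (delta / width) ** 2)

for d in (0.0, 1.0, 2.5, 4.0):
    print(f"|intensity difference| = {d:3.1f} -> p(integrate) ~ {p_integration(d):.2f}")
```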
Collapse
Affiliation(s)
- Ka Lon Sou
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore, Singapore
| | - Ashley Say
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
| | - Hong Xu
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Correspondence: Hong Xu
| |
Collapse
|
27
|
From Hand to Eye: a Meta-Analysis of the Benefit from Handwriting Training in Visual Graph Recognition. EDUCATIONAL PSYCHOLOGY REVIEW 2022. [DOI: 10.1007/s10648-021-09651-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
28
|
Assayag N, Berger I, Parush S, Mell H, Bar-Shalita T. Attention-Deficit/Hyperactivity Disorder Symptoms, Sensation-Seeking, and Sensory Modulation Dysfunction in Substance Use Disorder: A Cross Sectional Two-Group Comparative Study. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19052541. [PMID: 35270233 PMCID: PMC8909105 DOI: 10.3390/ijerph19052541] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 02/12/2022] [Accepted: 02/14/2022] [Indexed: 12/23/2022]
Abstract
Background: Attention-deficit/hyperactivity disorder (ADHD) and sensation-seeking, a trait characterized by risk-related behaviors, have been recognized as risk factors in substance use disorder (SUD). Though ADHD co-occurs with sensory modulation dysfunction (SMD), SMD has scarcely been explored in SUD. Thus, this study aimed to characterize ADHD symptomology, sensation-seeking, and SMD, as well as to explore their contribution to SUD likelihood. Methods: This was a cross-sectional two-group comparative study including therapeutic-community residents with SUD (n = 58; study group) and healthy individuals (n = 62; comparison group), applying the MOXO continuous performance test (MOXO-CPT) to evaluate ADHD-related symptoms. In addition, participants completed the ADHD Self-Report Scale (Version 1.1) for ADHD screening; the Brief Sensation Seeking Scale quantifying risk-taking behaviors; and the Sensory Responsiveness Questionnaire-Intensity Scale for identifying SMD. Results: The study group demonstrated a higher SMD incidence (53.57% vs. 14.52%) and lower performance in three MOXO-CPT indexes (Attention, Impulsivity, and Hyperactivity, but not Timing) than the comparison group. Sensory over-responsiveness had the strongest relationship with SUD, indicating 27-fold increased odds for SUD (95% CI = 5.965, 121.216; p ≤ 0.0001). A probability risk index is proposed. Conclusion: SMD had the strongest relation to SUD, exceeding that of ADHD, thus contributing a new perspective for developing future therapeutic modalities. Our findings highlight the need to address SMD above and beyond ADHD symptomology throughout SUD rehabilitation.
Collapse
Affiliation(s)
- Naama Assayag
- School of Occupational Therapy, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem 9112102, Israel
| | - Itai Berger
- Pediatric Neurology, Pediatric Division, Assuta Ashdod University Hospital, Faculty of Health Sciences, Ben-Gurion University, Beer-Sheva 8443944, Israel;
- School of Social Work and Social Welfare, Hebrew University of Jerusalem, Jerusalem 9190501, Israel
| | - Shula Parush
- School of Occupational Therapy, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem 9112102, Israel
| | - Haim Mell
- Department of Criminology, Max Stern Yezreel Valley College, Yezreel Valley 1930600, Israel;
| | - Tami Bar-Shalita
- Department of Occupational Therapy, School of Health Professions, Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
| |
Collapse
|
29
|
Neurocomputational mechanisms underlying cross-modal associations and their influence on perceptual decisions. Neuroimage 2021; 247:118841. [PMID: 34952232 PMCID: PMC9127393 DOI: 10.1016/j.neuroimage.2021.118841] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 12/07/2021] [Accepted: 12/19/2021] [Indexed: 12/02/2022] Open
Abstract
When exposed to complementary features of information across sensory modalities, our brains formulate cross-modal associations between features of stimuli presented separately to multiple modalities. For example, auditory pitch-visual size associations map high-pitch tones with small-size visual objects, and low-pitch tones with large-size visual objects. Preferential, or congruent, cross-modal associations have been shown to affect behavioural performance, i.e., choice accuracy and reaction time (RT), across multisensory decision-making paradigms. However, the neural mechanisms underpinning such influences on perceptual decision formation remain unclear. Here, we sought to identify when perceptual improvements from associative congruency emerge in the brain during decision formation. In particular, we asked whether such improvements represent ‘early’ sensory-processing benefits or ‘late’ post-sensory changes in decision dynamics. Using a modified version of the Implicit Association Test (IAT), coupled with electroencephalography (EEG), we measured the neural activity underlying the effect of auditory stimulus-driven pitch-size associations on perceptual decision formation. Behavioural results showed that participants responded significantly faster during trials when auditory pitch was congruent, rather than incongruent, with its associative visual size counterpart. We used multivariate Linear Discriminant Analysis (LDA) to characterise the spatiotemporal dynamics of the EEG activity underpinning IAT performance. We found an ‘Early’ component (∼100–110 ms post-stimulus onset) coinciding with the time of maximal discrimination of the auditory stimuli, and a ‘Late’ component (∼330–340 ms post-stimulus onset) underlying IAT performance. To characterise the functional role of these components in decision formation, we incorporated a neurally informed Hierarchical Drift Diffusion Model (HDDM), revealing that the Late component decreased response caution, requiring less sensory evidence to be accumulated, whereas the Early component increased the duration of sensory-encoding processes on incongruent trials. Overall, our results provide mechanistic insight into the contributions of ‘early’ sensory processing and ‘late’ post-sensory neural representations of associative congruency to perceptual decision formation.
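A rough sketch of the neurally informed DDM logic: trial-wise component amplitudes modulate model parameters, with the Late component lowering the decision boundary and the Early component lengthening non-decision time on incongruent trials. All coefficients and amplitudes below are invented for illustration, not the fitted HDDM posteriors.

```python
# Sketch: EEG-component amplitudes mapped onto drift-diffusion
# parameters. All mappings and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def ddm_rt(drift, boundary, ndt, dt=0.001, sigma=1.0):
    """First-passage time of a noisy accumulator to +/- boundary,
    plus non-decision time."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t

def trial(congruent):
    late_amp = 1.0 if congruent else 0.4      # hypothetical 'Late' amplitude
    early_cost = 0.0 if congruent else 0.08   # hypothetical 'Early' slowing (s)
    boundary = 1.5 - 0.5 * late_amp           # Late component reduces caution
    ndt = 0.30 + early_cost                   # Early component delays encoding
    return ddm_rt(drift=1.2, boundary=boundary, ndt=ndt)

for cond, label in ((True, "congruent"), (False, "incongruent")):
    rts = [trial(cond) for _ in range(300)]
    print(f"{label:>11}: mean RT = {np.mean(rts):.3f} s")
```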
Collapse
|
30
|
Kasuga S, Crevecoeur F, Cross KP, Balalaie P, Scott SH. Integration of proprioceptive and visual feedback during online control of reaching. J Neurophysiol 2021; 127:354-372. [PMID: 34907796 PMCID: PMC8794063 DOI: 10.1152/jn.00639.2020] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Visual and proprioceptive feedback both contribute to perceptual decisions, but it remains unknown how these feedback signals are integrated, or how the integration handles factors such as delays and variance, during online control. We investigated this question by having participants reach to a target with randomly applied mechanical and/or visual disturbances. We observed that the presence of visual feedback during a mechanical disturbance did not significantly increase the size of the muscle response but did decrease its variance, consistent with a dynamic Bayesian integration model. In a control experiment, we verified that vision had a potent influence when mechanical and visual disturbances were both present but opposite in sign. These results highlight a complex process for multisensory integration, in which visual feedback has a relatively modest influence when the limb is mechanically disturbed but a substantial influence when visual feedback becomes misaligned with the limb. NEW & NOTEWORTHY Visual feedback is more accurate, but proprioceptive feedback is faster. How should you integrate these sources of feedback to guide limb movement? As predicted by dynamic Bayesian models, the size of the muscle response to a mechanical disturbance was essentially the same whether visual feedback was present or not. Only under artificial conditions, such as when shifting the position of a cursor representing hand position, can one observe a muscle response driven by visual feedback.
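The static core of the dynamic Bayesian account can be shown in a few lines: combining cues weighted by their reliabilities changes the fused estimate only modestly when the cues roughly agree, but always reduces its variance. This is a minimal sketch with assumed means and variances; the paper's model is dynamic and additionally handles feedback delays.

```python
# Minimal sketch of reliability-weighted (Bayesian) fusion of visual
# and proprioceptive position estimates. Values are illustrative.
def fuse(mu_p, var_p, mu_v, var_v):
    w_p = (1 / var_p) / (1 / var_p + 1 / var_v)   # proprioceptive weight
    mu = w_p * mu_p + (1 - w_p) * mu_v            # fused mean
    var = 1 / (1 / var_p + 1 / var_v)             # fused variance (always smaller)
    return mu, var

# Hypothetical limb-position estimates (variances in cm^2).
mu, var = fuse(mu_p=2.0, var_p=1.0, mu_v=2.2, var_v=0.5)
print(f"fused estimate: {mu:.2f} cm, variance {var:.2f} (vs 1.0 proprio-only)")
```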
Collapse
Affiliation(s)
- Shoko Kasuga
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - Frédéric Crevecoeur
- Institute of Communication Technologies, Electronics and Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
| | - Kevin Patrick Cross
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - Parsa Balalaie
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - Stephen H Scott
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada; Department of Medicine, Queen's University, Kingston, Ontario, Canada
| |
Collapse
|
31
|
Neurocomputational mechanism of controllability inference under a multi-agent setting. PLoS Comput Biol 2021; 17:e1009549. [PMID: 34752453 PMCID: PMC8604335 DOI: 10.1371/journal.pcbi.1009549] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 11/19/2021] [Accepted: 10/10/2021] [Indexed: 11/19/2022] Open
Abstract
Controllability perception significantly influences motivated behavior and emotion and requires an estimation of one’s influence on an environment. Previous studies have shown that an agent can infer controllability by observing the contingency between its own actions and outcomes if there are no other outcome-relevant agents in the environment. However, if multiple agents can influence the outcome, estimating one’s genuine controllability requires excluding the possible influence of other agents. Here, we investigated the computational and neural mechanisms of controllability inference in a multi-agent setting. Our novel multi-agent Bayesian controllability inference model showed that information about other people’s action-outcome contingencies is integrated with one’s own action-outcome contingency to infer controllability, which can be explained as a Bayesian inference. Model-based functional MRI analyses showed that multi-agent Bayesian controllability inference recruits the temporoparietal junction (TPJ) and striatum. This inferred controllability information was then leveraged to increase motivated behavior in the ventromedial prefrontal cortex (vmPFC). These results generalize the previously known roles of the striatum and vmPFC in single-agent controllability to multi-agent controllability, and this generalized role requires the TPJ, in addition to the striatum implicated in single-agent controllability, to integrate both self- and other-related information. Finally, we identified an innate positive bias toward the self during multi-agent controllability inference, which facilitated behavioral adaptation under volatile controllability. Furthermore, low positive bias and high negative bias were associated with increased daily feelings of guilt. Our results provide a mechanism for how our sense of controllability fluctuates due to other people in our lives, which might be related to socially learned helplessness and depression. How do we perceive controllability over an outcome when there are multiple other agents who can simultaneously influence that outcome? Previous ‘single-agent’ studies showed that an agent’s inferred controllability depends on the contingency between its own action and the following outcome, and that this inference involves the striatum. Here, we show that in a multi-agent setting, information about other people’s action-outcome contingencies is integrated with one’s own action-outcome contingency to infer controllability, which was explained as a biased Bayesian inference. Notably, bias in the inference played an adaptive role under volatile controllability and was associated with perceptions of guilt. The striatum and temporoparietal junction (TPJ) were involved in this multi-agent Bayesian controllability inference, and the inferred controllability information was leveraged to increase motivated behavior in the vmPFC. Our results provide the first neurocomputational mechanism of multi-agent controllability inference.
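A stripped-down version of the inference problem can be phrased as Bayesian comparison of a few controllability hypotheses given jointly observed actions and outcomes. The three hypotheses and their outcome probabilities below are illustrative assumptions, far simpler than the paper's model, but they show how another agent's action-outcome contingency changes one's own controllability estimate.

```python
# Sketch: Bayesian comparison of controllability hypotheses in a
# two-agent world. Hypotheses and probabilities are illustrative.
import numpy as np

def likelihood(data, p_outcome):
    """Likelihood of (my_action, other_action, outcome) triples under a
    hypothesis mapping the two actions to P(outcome = 1)."""
    L = 1.0
    for mine, other, out in data:
        p = p_outcome(mine, other)
        L *= p if out else (1.0 - p)
    return L

hypotheses = {
    "self-control":  lambda m, o: 0.9 if m else 0.1,
    "other-control": lambda m, o: 0.9 if o else 0.1,
    "no-control":    lambda m, o: 0.5,
}

rng = np.random.default_rng(2)
# Simulate a world that the *other* agent actually controls.
data = [(m, o, rng.random() < (0.9 if o else 0.1))
        for m, o in zip(rng.integers(0, 2, 40), rng.integers(0, 2, 40))]

Ls = {name: likelihood(data, f) for name, f in hypotheses.items()}
Z = sum(Ls.values())
for name, L in Ls.items():   # flat prior: posterior = normalized likelihood
    print(f"P({name:>13} | data) = {L / Z:.3f}")
```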
Collapse
|
32
|
Ronga I, Galigani M, Bruno V, Castellani N, Rossi Sebastiano A, Valentini E, Fossataro C, Neppi-Modona M, Garbarini F. Seeming confines: Electrophysiological evidence of peripersonal space remapping following tool-use in humans. Cortex 2021; 144:133-150. [PMID: 34666298 DOI: 10.1016/j.cortex.2021.08.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Revised: 02/05/2021] [Accepted: 08/07/2021] [Indexed: 11/29/2022]
Abstract
The peripersonal space (PPS) is a special portion of space immediately surrounding the body, where integration occurs between tactile stimuli delivered on the body and auditory or visual events emanating from the environment. Interestingly, the PPS can widen when a tool is employed to interact with objects in far space. However, electrophysiological evidence of such tool-use-dependent plasticity in the human brain is scarce. Here, in a series of three experiments, participants were asked to respond to tactile stimuli, delivered to their right hand, either in isolation (unimodal condition) or combined with auditory stimulation, which could occur near to (bimodal-near) or far from the stimulated hand (bimodal-far). According to the spatial rule of multisensory integration, when bimodal stimuli are presented at the same location, a response enhancement is expected: response time (RT) facilitation and event-related potential (ERP) super-additivity. In Experiment 1, we verified that RT facilitation was driven by the spatial congruency of the bimodal inputs, independently of auditory stimulus intensity. In Experiment 2, we showed that our bimodal task was effective in eliciting the magnification of ERPs in bimodal conditions, with significantly larger responses in the near than in the far condition. In Experiment 3 (the main experiment), we explored tool-use-driven PPS plasticity. Our audio-tactile task was performed either following tool-use (a 20-min reaching task performed using a 145-cm-long rake) or after a control cognitive training (a 20-min visual discrimination task) performed in far space. Following the control training, faster RTs and greater super-additive ERPs were found in the bimodal-near condition than in the bimodal-far condition (replicating the results of Experiment 2). Crucially, this far-near differential response was significantly reduced after tool-use. Altogether, our results indicate a selective effect of tool-use remapping in extending the boundaries of the PPS. The present finding can be considered electrophysiological evidence of tool-use-dependent plasticity in the human brain.
Collapse
Affiliation(s)
- Irene Ronga
- MANIBUS Research Group, Department of Psychology, University of Turin, Italy
| | - Mattia Galigani
- MANIBUS Research Group, Department of Psychology, University of Turin, Italy
| | - Valentina Bruno
- MANIBUS Research Group, Department of Psychology, University of Turin, Italy
| | - Nicolò Castellani
- MANIBUS Research Group, Department of Psychology, University of Turin, Italy; Molecular Mind Lab, IMT School for Advanced Studies, Lucca, Italy
| | | | - Elia Valentini
- Department of Psychology and Centre for Brain Science, University of Essex, UK
| | - Carlotta Fossataro
- MANIBUS Research Group, Department of Psychology, University of Turin, Italy
| | - Marco Neppi-Modona
- MANIBUS Research Group, Department of Psychology, University of Turin, Italy
| | - Francesca Garbarini
- MANIBUS Research Group, Department of Psychology, University of Turin, Italy.
| |
Collapse
|
33
|
Vestibular Stimulation May Drive Multisensory Processing: Principles for Targeted Sensorimotor Therapy (TSMT). Brain Sci 2021; 11:brainsci11081111. [PMID: 34439730 PMCID: PMC8393350 DOI: 10.3390/brainsci11081111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 08/20/2021] [Accepted: 08/20/2021] [Indexed: 12/01/2022] Open
Abstract
At birth, the vestibular system is fully mature, whilst higher-order sensory processing is yet to develop in the full-term neonate. The current paper lays out a theoretical framework to account for the role vestibular stimulation may have in driving multisensory and sensorimotor integration. Accordingly, vestibular stimulation, by activating the parieto-insular vestibular cortex and/or the posterior parietal cortex, may provide the cortical input for multisensory neurons in the superior colliculus that is needed for multisensory processing. Furthermore, we propose that motor development, by inducing a change of reference frames, may shape the receptive fields of multisensory neurons. This, by leading to a lack of spatial contingency between formerly contingent stimuli, may cause degradation of prior motor responses. Additionally, we offer a testable hypothesis explaining the beneficial effect of sensory integration therapies on attentional processes. Key concepts of a sensorimotor integration therapy (e.g., targeted sensorimotor therapy (TSMT)) are also put into a neurological context. TSMT utilizes specific tools and instruments and is administered in successive eight-week treatment regimens, each gradually increasing vestibular and postural stimulation, so that sensorimotor integration is facilitated and muscle strength is increased. Empirically, TSMT is indicated for various diseases. The theoretical foundations of this sensorimotor therapy are discussed.
Collapse
|
34
|
Tian Y, Sun P. Characteristics of the neural coding of causality. Phys Rev E 2021; 103:012406. [PMID: 33601638 DOI: 10.1103/physreve.103.012406] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 12/21/2020] [Indexed: 02/02/2023]
Abstract
While causality processing is an essential cognitive capacity of the neural system, a systematic understanding of the neural coding of causality is still elusive. We propose a physically fundamental analysis of this issue and demonstrate that neural dynamics encode the original causality between external events near-homomorphically. The causality coding is robust to the amount of historical information retained in memory and features high precision but low recall. This coding process creates a sparser representation of the external causality. Finally, we propose a statistical characterization of the neural coding mapping from the original causality to the coded causality in neural dynamics.
Collapse
Affiliation(s)
- Yang Tian
- Department of Psychology, Tsinghua University, Beijing 100084, China and Tsinghua Brain and Intelligence Lab, Beijing 100084, China
| | - Pei Sun
- Department of Psychology, Tsinghua University, Beijing 100084, China and Tsinghua Brain and Intelligence Lab, Beijing 100084, China
| |
Collapse
|
35
|
Auditory information enhances post-sensory visual evidence during rapid multisensory decision-making. Nat Commun 2020; 11:5440. [PMID: 33116148 PMCID: PMC7595090 DOI: 10.1038/s41467-020-19306-7] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 10/06/2020] [Indexed: 11/08/2022] Open
Abstract
Despite recent progress in understanding multisensory decision-making, a conclusive mechanistic account of how the brain translates the relevant evidence into a decision is lacking. Specifically, it remains unclear whether perceptual improvements during rapid multisensory decisions are best explained by sensory (i.e., ‘Early’) processing benefits or post-sensory (i.e., ‘Late’) changes in decision dynamics. Here, we employ a well-established visual object categorisation task in which early sensory and post-sensory decision evidence can be dissociated using multivariate pattern analysis of the electroencephalogram (EEG). We capitalize on these distinct neural components to identify when and how complementary auditory information influences the encoding of decision-relevant visual evidence in a multisensory context. We show that it is primarily the post-sensory, rather than the early sensory, EEG component amplitudes that are amplified during rapid audiovisual decision-making. Using a neurally informed drift diffusion model, we demonstrate that the multisensory behavioral improvement in accuracy arises from an enhanced quality of the relevant decision evidence, as captured by the post-sensory EEG component, consistent with the emergence of multisensory evidence in higher-order brain areas. Summary: A conclusive account of how the brain translates audiovisual evidence into a rapid decision is still lacking. Here, using a neurally informed modelling approach, the authors show that sounds amplify visual evidence later in the decision process, in line with higher-order multisensory effects.
Collapse
|
36
|
Yu Z, Chen F, Liu JK. Sampling-Tree Model: Efficient Implementation of Distributed Bayesian Inference in Neural Networks. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2927808] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
37
|
Pinardi M, Ferrari F, D’Alonzo M, Clemente F, Raiano L, Cipriani C, Di Pino G. ‘Doublecheck: a sensory confirmation is required to own a robotic hand, sending a command to feel in charge of it’. Cogn Neurosci 2020; 11:216-228. [DOI: 10.1080/17588928.2020.1793751] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Affiliation(s)
- M. Pinardi
- NeXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Campus Bio-Medico University, Rome, Italy
| | - F. Ferrari
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics & A.I., Scuola Superiore Sant’Anna, Pisa, Italy
| | - M. D’Alonzo
- NeXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Campus Bio-Medico University, Rome, Italy
| | - F. Clemente
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics & A.I., Scuola Superiore Sant’Anna, Pisa, Italy
| | - L. Raiano
- NeXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Campus Bio-Medico University, Rome, Italy
| | - C. Cipriani
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics & A.I., Scuola Superiore Sant’Anna, Pisa, Italy
| | - G. Di Pino
- NeXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Campus Bio-Medico University, Rome, Italy
| |
Collapse
|
38
|
Boyce WP, Lindsay A, Zgonnikov A, Rañó I, Wong-Lin K. Optimality and Limitations of Audio-Visual Integration for Cognitive Systems. Front Robot AI 2020; 7:94. [PMID: 33501261 PMCID: PMC7805627 DOI: 10.3389/frobt.2020.00094] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 06/09/2020] [Indexed: 11/13/2022] Open
Abstract
Multimodal integration is an important process in perceptual decision-making. In humans, this process has often been shown to be statistically optimal, or near-optimal: sensory information is combined in a fashion that minimizes the average error in the perceptual representation of stimuli. However, the optimization sometimes comes with costs, manifesting as illusory percepts. We review audio-visual facilitations and illusions that are products of multisensory integration, and the computational models that account for these phenomena. In particular, the same optimal computational model can lead to illusory percepts, and we suggest that more studies are needed to detect and mitigate these illusions, which can arise as artifacts in artificial cognitive systems. We provide cautionary considerations for designing artificial cognitive systems with a view to avoiding such artifacts. Finally, we suggest avenues of research toward solutions to potential pitfalls in system design. We conclude that a detailed understanding of multisensory integration and of the mechanisms behind audio-visual illusions can benefit the design of artificial cognitive systems.
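The reviewed optimality and its cost can be reproduced in a few lines: maximum-likelihood fusion weights each cue by its inverse variance, so when two signals actually come from different sources the fused estimate is captured by the more reliable cue, a ventriloquist-like illusion. The numbers below are illustrative assumptions.

```python
# Sketch: statistically optimal (maximum-likelihood) audio-visual
# fusion and the illusion it can produce. Values are illustrative.
def mle_fuse(x_v, var_v, x_a, var_a):
    w_v = var_a / (var_v + var_a)              # weight of vision
    x_hat = w_v * x_v + (1 - w_v) * x_a        # fused location estimate
    var_hat = (var_v * var_a) / (var_v + var_a)
    return x_hat, var_hat

# True sound at 10 deg, true visual event at 0 deg (separate sources!).
x_hat, var_hat = mle_fuse(x_v=0.0, var_v=1.0, x_a=10.0, var_a=16.0)
print(f"perceived sound location ~ {x_hat:.1f} deg (variance {var_hat:.1f})")
# -> ~0.6 deg: the sound is 'captured' by the more reliable visual cue.
```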
Collapse
Affiliation(s)
- William Paul Boyce
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
| | - Anthony Lindsay
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
| | - Arkady Zgonnikov
- AiTech, Delft University of Technology, Delft, Netherlands
- Department of Cognitive Robotics, Faculty of Mechanical, Maritime, and Materials Engineering, Delft University of Technology, Delft, Netherlands
| | - Iñaki Rañó
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
| | - KongFatt Wong-Lin
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
| |
Collapse
|
39
|
Immersive virtual reality reveals that visuo-proprioceptive discrepancy enlarges the hand-centred peripersonal space. Neuropsychologia 2020; 146:107540. [PMID: 32593721 DOI: 10.1016/j.neuropsychologia.2020.107540] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Revised: 06/11/2020] [Accepted: 06/19/2020] [Indexed: 12/23/2022]
Abstract
Vision and proprioception, which inform the system about the body's position in space, seem crucial in defining the boundary of the peripersonal space (PPS). What happens to the PPS representation when a conflict between vision and proprioception arises? We capitalize on immersive virtual reality to dissociate vision and proprioception by presenting the participants' 3D hand image in positions congruent or incongruent with the participants' real hand. To measure the hand-centred PPS, we exploit the multisensory integration that occurs when visual stimuli are delivered simultaneously with tactile stimuli applied to a body part, i.e., the visual enhancement of touch (VET). Participants were instructed to respond to tactile stimuli while ignoring visual stimuli (a red LED), which could appear either near to or far from the hand receiving the tactile (electrical) stimuli. The results show that, when vision and proprioception are congruent (i.e., real and virtual hand coincide), a space-dependent modulation of the VET effect occurs (with faster responses when visual stimuli are near to, rather than far from, the stimulated hand). Conversely, when vision and proprioception are incongruent (i.e., a discrepancy between real and virtual hand is present), a comparable VET effect is observed when visual stimuli occur near the real hand and when they occur far from it but close to the virtual hand. These findings, also confirmed by the independent estimate of a Bayesian causal inference model, suggest that, when the visuo-proprioceptive discrepancy makes the coding of hand position less precise, the hand-centred PPS is enlarged, likely to optimize reactions to external events.
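The Bayesian causal inference model referenced here has a standard closed form (Koerding et al., 2007): the posterior probability of a common cause for the seen and felt hand falls as the visuo-proprioceptive discrepancy grows. The sketch below uses that standard form with assumed sensory variances, prior width, and prior probability of a common cause; the values are not those fitted in the study.

```python
# Sketch: posterior probability of a common cause for the seen
# (virtual) and felt (real) hand positions. Units: cm; all variances
# and the prior are illustrative assumptions.
import math

def normpdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common(x_v, x_p, var_v=1.0, var_p=4.0, var_s=100.0, prior_c=0.7):
    """Closed-form causal-inference posterior (Koerding et al., 2007)."""
    denom = var_v * var_p + var_v * var_s + var_p * var_s
    like_c1 = math.exp(-0.5 * ((x_v - x_p) ** 2 * var_s
                               + x_v ** 2 * var_p
                               + x_p ** 2 * var_v) / denom) \
              / (2 * math.pi * math.sqrt(denom))
    like_c2 = normpdf(x_v, 0.0, var_v + var_s) * normpdf(x_p, 0.0, var_p + var_s)
    return like_c1 * prior_c / (like_c1 * prior_c + like_c2 * (1 - prior_c))

for d in (0, 5, 15, 30):   # virtual hand displaced d cm from the real hand
    print(f"discrepancy {d:2d} cm -> P(common cause) = {p_common(d / 2, -d / 2):.2f}")
```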
Collapse
|
40
|
Zeng T, Tang F, Ji D, Si B. NeuroBayesSLAM: Neurobiologically inspired Bayesian integration of multisensory information for robot navigation. Neural Netw 2020; 126:21-35. [PMID: 32179391 DOI: 10.1016/j.neunet.2020.02.023] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Revised: 02/10/2020] [Accepted: 02/28/2020] [Indexed: 01/09/2023]
Abstract
Spatial navigation depends on the combination of multiple sensory cues from idiothetic and allothetic sources. How mammalian brains integrate different sensory modalities under uncertainty during navigation is instructive for robot navigation. We propose a Bayesian attractor network model that integrates visual and vestibular inputs, inspired by the spatial memory systems of mammalian brains. In the model, the pose of the robot is encoded separately by two sub-networks, namely a head-direction network for angle representation and a grid-cell network for position representation, using neural codes similar to those of head direction cells and grid cells observed in mammalian brains. The neural codes in each sub-network are updated in a Bayesian manner by a population of integrator cells for vestibular cue integration, as well as a population of calibration cells for visual cue calibration. Conflict between the vestibular and visual cues is resolved by competitive dynamics between the two populations. The model, implemented on a monocular visual simultaneous localization and mapping (SLAM) system termed NeuroBayesSLAM, successfully builds semi-metric topological maps and self-localizes in outdoor and indoor environments of different characteristics, achieving performance comparable to previous neurobiologically inspired navigation systems but with much lower computational complexity. The proposed multisensory integration method constitutes a concise yet robust and biologically plausible approach to robot navigation in large environments. The model provides a viable Bayesian mechanism for multisensory integration that may pertain to other neural subsystems beyond spatial cognition.
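The Bayesian flavour of the integrator/calibration scheme can be caricatured as a scalar filter on heading: vestibular integration accumulates both angle and uncertainty, and a recognized visual cue pulls the estimate back in proportion to the accumulated uncertainty. The attractor-network dynamics of the paper are replaced here by a Kalman-style update, and all noise values are illustrative assumptions.

```python
# Sketch: fusing vestibular path integration with visual calibration
# as a scalar Kalman-style heading update. Values are illustrative.
def heading_update(mu, var, omega, dt, var_vest, z_visual=None, var_vis=None):
    # 1) integrate angular velocity (vestibular, idiothetic)
    mu, var = mu + omega * dt, var + var_vest * dt
    # 2) if a familiar view is recognized, calibrate (visual, allothetic)
    if z_visual is not None:
        k = var / (var + var_vis)              # gain grows with uncertainty
        mu, var = mu + k * (z_visual - mu), (1 - k) * var
    return mu, var

mu, var = 0.0, 0.01
for step in range(100):                        # dead reckoning: drift accumulates
    mu, var = heading_update(mu, var, omega=0.1, dt=0.1, var_vest=0.02)
print(f"after dead reckoning: heading={mu:.2f} rad, var={var:.3f}")
mu, var = heading_update(mu, var, 0.0, 0.0, 0.0, z_visual=1.1, var_vis=0.05)
print(f"after visual fix:     heading={mu:.2f} rad, var={var:.3f}")
```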
Collapse
Affiliation(s)
- Taiping Zeng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, China; State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China.
| | - Fengzhen Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China.
| | - Daxiong Ji
- Ocean College, Zhejiang University, Zhoushan, 316021, Zhejiang, China.
| | - Bailu Si
- School of Systems Science, Beijing Normal University, 100875, China.
| |
Collapse
|
41
|
Yu Z, Guo S, Deng F, Yan Q, Huang K, Liu JK, Chen F. Emergent Inference of Hidden Markov Models in Spiking Neural Networks Through Winner-Take-All. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:1347-1354. [PMID: 30295641 DOI: 10.1109/tcyb.2018.2871144] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Hidden Markov models (HMMs) underpin the solution to many problems in computational neuroscience. However, it is still unclear how inference in HMMs could be implemented by networks of neurons in the brain, and existing methods suffer from being non-spiking and inaccurate. Here, we build a precise equivalence between the inference equation of HMMs with time-invariant hidden variables and the dynamics of spiking winner-take-all (WTA) neural networks. We show that the membrane potential of each spiking neuron in the WTA circuit encodes the logarithm of the posterior probability of the hidden variable being in each state, and that the firing rate of each neuron is proportional to the posterior probability of the HMM. We prove that the time course of the neural firing rate can implement posterior inference for HMMs. Theoretical analysis and experimental results show that the proposed WTA circuit yields accurate inference results for HMMs.
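The claimed equivalence is easy to verify for the time-invariant case: the log-posterior accumulates log-likelihoods additively, like a membrane potential integrating input currents, and a normalized exponential of it (a soft winner-take-all) equals the posterior. A toy sketch with invented states and emission probabilities:

```python
# Sketch: log-posterior as "membrane potential", normalized exponential
# as WTA firing rates. States and probabilities are toy assumptions.
import numpy as np

states = ["state A", "state B"]
prior = np.array([0.5, 0.5])
emission = np.array([[0.8, 0.2],     # P(obs | state A)
                     [0.3, 0.7]])    # P(obs | state B)

log_u = np.log(prior)                # "membrane potentials"
for obs in [0, 0, 1, 0]:             # observation sequence
    log_u += np.log(emission[:, obs])        # additive evidence input
    rates = np.exp(log_u - log_u.max())      # soft WTA normalization
    rates /= rates.sum()                     # firing rate ~ posterior
    print(dict(zip(states, rates.round(3))))
```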
Collapse
|
42
|
Dutta A, Lev-Ari T, Barzilay O, Mairon R, Wolf A, Ben-Shahar O, Gutfreund Y. Self-motion trajectories can facilitate orientation-based figure-ground segregation. J Neurophysiol 2020; 123:912-926. [PMID: 31967932 DOI: 10.1152/jn.00439.2019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Segregation of objects from the background is a basic and essential property of the visual system. We studied the neural detection of objects defined by orientation difference from the background in barn owls (Tyto alba). We presented wide-field displays of densely packed stripes with a dominant orientation. Visual objects were created by orienting a circular patch differently from the background. In head-fixed conditions, neurons in both tecto- and thalamofugal visual pathways (optic tectum and visual Wulst) were weakly responsive to these objects in their receptive fields. However, notably, in freely viewing conditions, barn owls occasionally perform peculiar side-to-side head motions (peering) when scanning the environment. In the second part of the study, we thus recorded the neural response from head-fixed owls while the visual displays replicated the peering conditions; i.e., the displays (objects and backgrounds) were shifted along trajectories that induced a retinal motion identical to sampled peering motions during viewing of a static object. These conditions induced dramatic neural responses to the objects, in the very same neurons that were unresponsive to the objects in static displays. By reverting to circular motions of the display, we show that the pattern of the neural response is shaped mostly by the orientation of the background relative to the motion, and not by the orientation of the object. Thus our findings provide evidence that peering and/or other self-motions can facilitate orientation-based figure-ground segregation through interaction with inhibition from the surround. NEW & NOTEWORTHY Animals frequently move their sensory organs and thereby create motion cues that can enhance object segregation from background. We address a special example of such active sensing in barn owls. When scanning the environment, barn owls occasionally perform small-amplitude side-to-side head movements called peering. We show that the visual outcome of such peering movements elicits neural detection of objects that are rotated from the dominant orientation of the background scene and which are otherwise mostly undetected. These results suggest a novel role for self-motions in sensing objects that break the regular orientation of elements in the scene.
Collapse
Affiliation(s)
- Arkadeb Dutta
- The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Haifa, Israel
| | - Tidhar Lev-Ari
- The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Haifa, Israel
| | - Ouriel Barzilay
- Faculty of Mechanical Engineering, The Technion, Haifa, Israel
| | - Rotem Mairon
- Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| | - Alon Wolf
- Faculty of Mechanical Engineering, The Technion, Haifa, Israel
| | - Ohad Ben-Shahar
- Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel; The Zlotowski Center for Neuroscience Research, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| | - Yoram Gutfreund
- The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Haifa, Israel
| |
Collapse
|
43
|
Cross-Modal Integration of Reward Value during Oculomotor Planning. eNeuro 2020; 7:ENEURO.0381-19.2020. [PMID: 31996392 PMCID: PMC7029185 DOI: 10.1523/eneuro.0381-19.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Revised: 12/20/2019] [Accepted: 01/05/2020] [Indexed: 11/30/2022] Open
Abstract
Reward value guides goal-directed behavior and modulates early sensory processing. Rewarding stimuli are often multisensory, but it is not known how reward value is combined across sensory modalities. Here we show that the integration of reward value critically depends on whether the distinct sensory inputs are perceived to emanate from the same multisensory object. We systematically manipulated the congruency in monetary reward values and the relative spatial positions of co-occurring auditory and visual stimuli that served as bimodal distractors during an oculomotor task performed by healthy human participants (male and female). The amount of interference induced by the distractors was used as an indicator of their perceptual salience. Our results across two experiments show that when reward value is linked to each modality separately, the value congruence between vision and audition determines the combined salience of the bimodal distractors. However, the reward value of vision wins over the value of audition if the two modalities are perceived to convey conflicting information regarding the spatial position of the bimodal distractors. These results show that in a task that highly relies on the processing of visual spatial information, the reward values from multiple sensory modalities are integrated with each other, each with their respective weights. This weighting depends on the strength of prior beliefs regarding a common source for incoming unisensory signals based on their congruency in reward value and perceived spatial alignment.
Collapse
|
44
|
Audio-visual experience strengthens multisensory assemblies in adult mouse visual cortex. Nat Commun 2019; 10:5684. [PMID: 31831751 PMCID: PMC6908602 DOI: 10.1038/s41467-019-13607-2] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 11/07/2019] [Indexed: 11/09/2022] Open
Abstract
We experience the world through multiple senses simultaneously. To better understand the mechanisms of multisensory processing, we ask whether inputs from two senses (auditory and visual) can interact and drive plasticity in neural circuits of the primary visual cortex (V1). Using genetically encoded voltage and calcium indicators, we find that coincident audio-visual experience modifies both the supra- and subthreshold response properties of neurons in L2/3 of mouse V1. Specifically, we find that after audio-visual pairing, a subset of multimodal neurons develops enhanced auditory responses to the paired auditory stimulus. This cross-modal plasticity persists over days and is reflected in the strengthening of small functional networks of L2/3 neurons. We find that V1 processes coincident auditory and visual events by strengthening functional associations between feature-specific assemblies of multimodal neurons during bouts of sensory-driven co-activity, leaving a trace of multisensory experience in the cortical network.
Collapse
|
45
|
Chandrasekaran C, Hawkins GE. ChaRTr: An R toolbox for modeling choices and response times in decision-making tasks. J Neurosci Methods 2019; 328:108432. [PMID: 31586868 PMCID: PMC6980795 DOI: 10.1016/j.jneumeth.2019.108432] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Revised: 08/01/2019] [Accepted: 09/07/2019] [Indexed: 11/25/2022]
Abstract
BACKGROUND: Decision-making is the process of choosing and performing actions in response to sensory cues so as to achieve behavioral goals. Many mathematical models have been developed to describe the choice behavior and response time (RT) distributions of observers performing decision-making tasks. However, relatively few researchers use these models because doing so demands expertise in various numerical, statistical, and software techniques. NEW METHOD: We present a toolbox, Choices and Response Times in R (ChaRTr), that enables the user to implement and test a wide variety of decision-making models, ranging from classic through modern versions of the diffusion decision model to models with urgency signals or collapsing boundaries. RESULTS: In three case studies, we demonstrate how ChaRTr can be used to effortlessly discriminate between multiple models of decision-making behavior. We also provide guidance on how to extend the toolbox to incorporate future developments in decision-making models. COMPARISON WITH EXISTING METHOD(S): Existing software packages have surmounted some of the numerical issues but have often focused on the classical decision-making model, the diffusion decision model. Recent models that posit roles for urgency, time-varying decision thresholds, noise in various aspects of the decision-formation process, or low-pass filtering of sensory evidence have proven challenging to incorporate in a coherent software framework that permits quantitative evaluation among these competing classes of decision-making models. CONCLUSION: ChaRTr can be used to make insightful statements about the cognitive processes underlying observed decision-making behavior and, ultimately, to gain deeper insights into decision mechanisms.
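ChaRTr itself is an R toolbox; the sketch below only illustrates, in Python, the kinds of model variants it is designed to compare: a fixed-bound diffusion model, a collapsing-bound variant, and an urgency-gated variant. Parameters are arbitrary and the RT summaries come from simulation, not from fits to data.

```python
# Sketch: three decision-model variants of the kind ChaRTr compares.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def simulate(model, drift=0.8, a=1.2, dt=0.001, tmax=3.0):
    x, t = 0.0, 0.0
    while t < tmax:
        t += dt
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        if model == "fixed":
            bound = a                              # constant threshold
        elif model == "collapse":
            bound = a * max(0.2, 1.0 - t / 2.0)    # linearly collapsing bound
        elif model == "urgency":
            bound, x_eff = a, x * (1.0 + 2.0 * t)  # urgency gain grows with time
        signal = x_eff if model == "urgency" else x
        if abs(signal) >= bound:
            return t, signal > 0
    return tmax, x > 0                             # timeout

for model in ("fixed", "collapse", "urgency"):
    rts = [simulate(model)[0] for _ in range(500)]
    print(f"{model:>8}: mean RT = {np.mean(rts):.3f} s")
```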
Collapse
Affiliation(s)
- Chandramouli Chandrasekaran
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA; Department of Anatomy & Neurobiology, Boston University School of Medicine, Boston, MA, USA; Center for Systems Neuroscience, Boston University, Boston, MA, USA.
| | - Guy E Hawkins
- School of Psychology, University of Newcastle, Australia.
| |
Collapse
|
46
|
Kao JC. Considerations in using recurrent neural networks to probe neural dynamics. J Neurophysiol 2019; 122:2504-2521. [PMID: 31619125 DOI: 10.1152/jn.00467.2018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, we find that even when RNNs reproduce key neurophysiological features on both the single neuron and population levels, they can do so through distinctly different dynamical mechanisms. To distinguish between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to input noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to tasks it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics.NEW & NOTEWORTHY Artificial neurons in a recurrent neural network (RNN) may resemble empirical single-unit activity but not adequately capture important features on the neural population level. Dynamics of RNNs can be visualized in low-dimensional projections to provide insight into the RNN's dynamical mechanism. RNNs trained in different ways may reproduce neurophysiological motifs but do so with distinctly different mechanisms. RNNs trained to only perform a delayed reach task can generalize to perform tasks where the target is switched or the target location is changed.
Affiliation(s)
- Jonathan C Kao
- Department of Electrical and Computer Engineering, University of California, Los Angeles, California; Neurosciences Program, University of California, Los Angeles, California.
47
Hou H, Zheng Q, Zhao Y, Pouget A, Gu Y. Neural Correlates of Optimal Multisensory Decision Making under Time-Varying Reliabilities with an Invariant Linear Probabilistic Population Code. Neuron 2019; 104:1010-1021.e10. [DOI: 10.1016/j.neuron.2019.08.038] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Revised: 07/21/2019] [Accepted: 08/22/2019] [Indexed: 12/27/2022]
48
Fang Y, Yu Z, Liu JK, Chen F. A unified neural circuit of causal inference and multisensory integration. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.067] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
49
Multisensory learning between odor and sound enhances beta oscillations. Sci Rep 2019; 9:11236. [PMID: 31375760 PMCID: PMC6677763 DOI: 10.1038/s41598-019-47503-y] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 06/26/2019] [Indexed: 11/22/2022] Open
Abstract
Multisensory interactions are essential to make sense of the environment, transforming the mosaic of sensory inputs received by the organism into a unified perception. Brain rhythms allow coherent processing within areas or between distant brain regions and could thus be instrumental in functionally connecting remote brain areas in the context of multisensory interactions. Still, odor and sound processing rely on two sensory systems with distinct anatomofunctional characteristics. How does the brain handle their association? Rats were challenged to discriminate between unisensory stimulation (odor or sound) and the multisensory combination of both. During learning, we observed the progressive establishment of high-power beta oscillations (15–35 Hz) spanning the olfactory bulb, the piriform cortex, and the perirhinal cortex, but not the primary auditory cortex. In the piriform cortex, beta oscillation power was higher in the multisensory condition than when the odor was presented alone. Furthermore, in the olfactory structures, the sound alone was able to elicit a beta oscillatory response. These findings emphasize the functional differences between olfactory and auditory cortices and reveal that beta oscillations contribute to memory formation for the multisensory association.
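For readers who want the measure made explicit, here is a minimal Python sketch of a generic beta-band (15–35 Hz) power estimate on a synthetic signal, using band-pass filtering followed by a Hilbert envelope. This is a textbook pipeline under assumed settings, not the authors' exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                         # sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic "LFP": white noise plus a 25 Hz burst in the second half
lfp = rng.standard_normal(t.size)
lfp[t >= 2.5] += 2.0 * np.sin(2 * np.pi * 25 * t[t >= 2.5])

# Band-pass in the beta range (15-35 Hz), then take the squared envelope
b, a = butter(4, [15 / (fs / 2), 35 / (fs / 2)], btype="band")
beta = filtfilt(b, a, lfp)
beta_power = np.abs(hilbert(beta)) ** 2

print(f"mean beta power, first half:  {beta_power[t < 2.5].mean():.2f}")
print(f"mean beta power, second half: {beta_power[t >= 2.5].mean():.2f}")
```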
50
Chandrasekaran C, Blurton SP, Gondan M. Audiovisual detection at different intensities and delays. J Math Psychol 2019; 91:159-175. [PMID: 31404455 PMCID: PMC6688765 DOI: 10.1016/j.jmp.2019.05.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In the redundant signals task, two target stimuli are associated with the same response. If both targets are presented together, redundancy gains are observed relative to single-target presentation. Different models explain these redundancy gains, including race and coactivation models (e.g., the Wiener diffusion superposition model, Schwarz, 1994, Journal of Mathematical Psychology, and the Ornstein-Uhlenbeck diffusion superposition model, Diederich, 1995, Journal of Mathematical Psychology). In the present study, two monkeys performed a simple detection task with auditory, visual, and audiovisual stimuli of different intensities and onset asynchronies. In its basic form, the Wiener diffusion superposition model provided only a poor description of the observed data, especially of the detection rate (i.e., accuracy or hit rate) at low stimulus intensities. We extended the model in two ways: (A) adding a temporal deadline, that is, restricting the evidence accumulation process to a fixed stopping time, and (B) adding a second "no-go" barrier representing target absence. We present closed-form solutions for the mean absorption times and absorption probabilities of a Wiener diffusion process drifting toward a single barrier in the presence of a temporal deadline (A), and numerically improved solutions for the two-barrier model (B). The deadline model provided the best description of the data, substantially outperforming the two-barrier approach.
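To make the deadline variant concrete, the sketch below simulates a Wiener process with drift toward a single absorbing barrier, abandons each trial at a fixed deadline, and checks the simulated hitting probability against the standard closed-form (inverse-Gaussian) first-passage expression. All parameter values are illustrative assumptions, not the paper's fitted estimates.

```python
import numpy as np
from scipy.stats import norm

mu, sigma, a, deadline = 1.0, 1.0, 1.5, 2.0   # drift, noise, barrier, deadline
dt, n_trials = 1e-3, 2000
rng = np.random.default_rng(3)
n_steps = int(deadline / dt)

# Monte Carlo: accumulate evidence; a trial "hits" if it reaches the
# barrier before the deadline, otherwise it is a miss.
inc = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
paths = np.cumsum(inc, axis=1)
crossed = paths >= a
hit = crossed.any(axis=1)
first_idx = np.argmax(crossed, axis=1)        # index of first crossing
rts = (first_idx[hit] + 1) * dt

# Closed-form probability of hitting the barrier by the deadline
# (CDF of the inverse-Gaussian first-passage time distribution).
s = sigma * np.sqrt(deadline)
p_analytic = (norm.cdf((mu * deadline - a) / s)
              + np.exp(2 * mu * a / sigma**2)
              * norm.cdf((-mu * deadline - a) / s))

print(f"P(hit by deadline): simulated {hit.mean():.3f}, analytic {p_analytic:.3f}")
print(f"mean hit time given a hit: {rts.mean():.3f} s")
```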
Affiliation(s)
- Chandramouli Chandrasekaran
- Department of Electrical Engineering, Stanford University, USA
- Howard Hughes Medical Institute, Stanford University, USA
- Department of Psychological and Brain Sciences, Boston University, USA
- Department of Anatomy and Neurobiology, Boston University, USA