1
Artigas C, Morales-Torres R, Rojas-Thomas F, Villena-González M, Rubio I, Ramírez-Benavides D, Bekinschtein T, Campos-Arteaga G, Rodríguez E. When alertness fades: Drowsiness-induced visual dominance and oscillatory recalibration in audiovisual integration. Int J Psychophysiol 2025; 212:112562. [PMID: 40187499] [DOI: 10.1016/j.ijpsycho.2025.112562]
Abstract
Multisensory integration allows the brain to align inputs from different sensory modalities, enhancing perception and behavior. However, transitioning into drowsiness, a state marked by decreased attentional control and altered cortical dynamics, offers a unique opportunity to examine adaptations in these multisensory processes. In this study, we investigated how drowsiness influences reaction times (RTs) and neural oscillations during audiovisual multisensory integration. Participants performed a task where auditory and visual stimuli were presented either in a coordinated manner or with temporal misalignment (visual-first or auditory-first uncoordinated conditions). Behavioral results showed that drowsiness slowed RTs overall but revealed a clear sensory dominance effect: visual-first uncoordination facilitated RTs compared to auditory-first uncoordination, reflecting vision's dominant role in recalibrating sensory conflicts. In contrast, RTs in coordinated conditions remained stable across alert and drowsy states, suggesting that multisensory redundancy compensates for reduced cortical integration during drowsiness. At the neural level, distinct patterns of oscillatory activity emerged. Alpha oscillations supported attentional realignment and temporal alignment in visual-first conditions, while Gamma oscillations were recruited during auditory-first uncoordination, reflecting heightened sensory-specific processing demands. These effects were state-dependent, becoming more pronounced during drowsiness. Our findings demonstrate that drowsiness fundamentally reshapes multisensory integration by amplifying sensory dominance mechanisms, particularly vision. Compensatory neural mechanisms involving Alpha and Gamma oscillations maintain perceptual coherence under conditions of reduced cortical interaction. These results provide critical insights into how the brain adapts to sensory conflicts during states of diminished awareness, with broader implications for performance and decision-making in real-world drowsy states.
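The oscillatory effects summarized above are typically quantified as band-limited spectral power. As a generic illustration only (not the authors' pipeline), alpha- and gamma-band power could be estimated from epoched EEG with Welch's method as in the Python sketch below; the sampling rate, band limits, and data are placeholders:

```python
import numpy as np
from scipy.signal import welch

def band_power(epochs, fs, band):
    """Mean Welch PSD within a frequency band.

    epochs: array of shape (n_trials, n_channels, n_samples)
    fs:     sampling rate in Hz
    band:   (low, high) frequency limits in Hz
    """
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)        # (n_trials, n_channels)

fs = 500                                       # placeholder sampling rate
alert = np.random.randn(100, 64, 2 * fs)       # placeholder epoched EEG
drowsy = np.random.randn(100, 64, 2 * fs)
print(band_power(alert, fs, (8, 12)).mean())   # alpha power, alert state
print(band_power(drowsy, fs, (30, 45)).mean()) # gamma power, drowsy state
```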
Affiliation(s)
- Claudio Artigas
- Departamento de Ciencias Biológicas, Universidad Autónoma de Chile, Santiago, RM, Chile.
- Felipe Rojas-Thomas
- Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez, Santiago, Chile
- Iván Rubio
- Psychology Department, Pontificia Universidad Católica de Chile, Santiago, RM, Chile
- Tristán Bekinschtein
- Consciousness and Cognition Laboratory, Department of Psychology, University of Cambridge, Cambridge, UK
- Eugenio Rodríguez
- Psychology Department, Pontificia Universidad Católica de Chile, Santiago, RM, Chile
2
Ioannucci S, Vetter P. Semantic audio-visual congruence modulates visual sensitivity to biological motion across awareness levels. Cognition 2025; 262:106181. [PMID: 40378502] [DOI: 10.1016/j.cognition.2025.106181]
Abstract
Whether cross-modal interaction requires conscious awareness of multisensory information or can occur in its absence is still an open question. Here, we investigated whether sounds can enhance detection sensitivity of semantically matching visual stimuli at varying levels of visual awareness. We presented biological motion stimuli of human actions (walking, rowing, sawing) during dynamic continuous flash suppression (CFS) to 80 participants and measured the effect of co-occurring, semantically matching or non-matching action sounds on visual sensitivity (d'). By individually thresholding stimulus contrast, we distinguished participants who detected motion either above or at chance level. Participants who reliably detected visual motion above chance showed higher sensitivity to upright versus inverted biological motion across all experimental conditions. In contrast, participants detecting visual motion at chance level, i.e., during successful suppression, demonstrated this upright advantage exclusively during trials with semantically congruent sounds. Across the whole sample, the impact of sounds on visual sensitivity increased as participants' visual detection performance decreased, revealing a systematic trade-off between auditory and visual processing. Our findings suggest that semantic congruence between auditory and visual information can selectively modulate biological motion perception when visual awareness is minimal or absent, whereas more robust visual signals enable perception of biological motion independent of auditory input. Thus, semantically congruent sounds may impact visual representations as a function of the level of visual awareness.
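The sensitivity measure d' mentioned above is the standard signal-detection index. As a point of reference only (not code from the study), a minimal Python sketch of how d' could be computed from trial counts, with a log-linear correction for extreme rates, is shown below; the counts are invented for illustration:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity with a log-linear correction so that
    perfect hit or false-alarm rates remain finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented trial counts for one participant in one sound condition
print(d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41))
```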
Affiliation(s)
- Stefano Ioannucci
- Visual and Cognitive Neuroscience Lab, Dept. of Psychology, University of Fribourg, Switzerland.
- Petra Vetter
- Visual and Cognitive Neuroscience Lab, Dept. of Psychology, University of Fribourg, Switzerland
3
Wang G, Yang Y, Liu X, Hua A, Luo X, Cai Y, Song Y, Wang J, Liu J. Neural connectivity and balance control in aging: Insights from directed cortical networks during sensory conflict. Neuroimage 2025; 312:121218. [PMID: 40239853] [DOI: 10.1016/j.neuroimage.2025.121218]
Abstract
Balance control is crucial for stability during daily activities, relying on the integration of sensory inputs from the visual, vestibular, and somatosensory systems. Aging impairs the efficiency of these systems, leading to an increased risk of falls; however, the neural mechanisms underlying this decline, particularly under sensory conflict, are not fully understood. This study investigated the effects of aging on neural connectivity and sensory integration during balance tasks. Ninety-six participants (47 older adults and 49 young adults) underwent balance perturbation tasks under sensory-congruent and sensory-conflict conditions using a virtual reality headset and a rotating platform. Behavioral measures, including postural sway and perceptual accuracy, were recorded. Electroencephalography (EEG) data were analyzed using generalized partial directed coherence (GPDC) to assess directed functional connectivity and network efficiency. Older adults exhibited significantly greater postural sway, reduced perceptual accuracy, and a diminished ability to detect sensory conflicts compared with young adults, particularly under conflict conditions. Connectivity analysis showed that young adults exhibited adaptive shifts in connectivity from visual to somatosensory regions during sensory conflict, whereas older adults demonstrated a less adaptable mode of connectivity. At the same time, young adults had higher global efficiency and clustering coefficients, suggesting more effective and modular brain networks. Correlation analyses in older adults revealed that higher visual cortex efficiency was linked to lower postural sway specifically during sensory conflict, whereas higher motor cortex efficiency was associated with greater sway only under sensory-congruent conditions. In short, neural adaptability is vital for sensory integration and balance control: decreased neural flexibility and network efficiency in older adults undermined sensory reweighting and increased instability during sensory conflict. These findings establish a foundation for the development of targeted interventions to strengthen balance and lower the risk of falls in older adults.
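For readers unfamiliar with the graph measures named above, the sketch below shows how global efficiency and the clustering coefficient could be derived from a thresholded connectivity matrix (e.g., band-averaged GPDC between EEG channels) using networkx. It symmetrizes the directed GPDC matrix purely to keep the example simple, and the threshold and data are illustrative assumptions rather than values from the study:

```python
import numpy as np
import networkx as nx

def network_metrics(conn, threshold=0.2):
    """Global efficiency and mean clustering coefficient of a thresholded
    connectivity matrix. The directed matrix is symmetrized here only to
    keep the sketch simple."""
    sym = (conn + conn.T) / 2.0
    adj = (sym > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    return nx.global_efficiency(G), nx.average_clustering(G)

conn = np.random.rand(32, 32) * 0.4     # placeholder channel-by-channel coupling
efficiency, clustering = network_metrics(conn)
print(efficiency, clustering)
```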
Affiliation(s)
- Guozheng Wang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, PR China; Taizhou Key Laboratory of Medical Devices and Advanced Materials, Taizhou Institute of Zhejiang University, Taizhou, 318000, PR China
- Yi Yang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, PR China; Department of Sports Science, School of General Education, Wenzhou Business College, Wenzhou, 325035, PR China
- Xiaoxia Liu
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, PR China
- Anke Hua
- Department of Physical Therapy and Rehabilitation Science, University of Maryland School of Medicine, Baltimore, MD, USA
- Xin Luo
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, PR China
- Yiming Cai
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, PR China
- Yanhua Song
- Taizhou Key Laboratory of Medical Devices and Advanced Materials, Taizhou Institute of Zhejiang University, Taizhou, 318000, PR China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, 310058, PR China
- Jian Wang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, PR China; Center for Psychological Science, Zhejiang University, Hangzhou, 310058, PR China
- Jun Liu
- Taizhou Key Laboratory of Medical Devices and Advanced Materials, Taizhou Institute of Zhejiang University, Taizhou, 318000, PR China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, 310058, PR China.
4
Charyasz E, Erb M, Bause J, Heule R, Bender B, Jangir VK, Grodd W, Scheffler K. Functional connectivity of thalamic nuclei during sensorimotor task-based fMRI at 9.4 Tesla. Front Neurosci 2025; 19:1568222. [PMID: 40433501] [PMCID: PMC12106322] [DOI: 10.3389/fnins.2025.1568222]
Abstract
The thalamus is the brain's central communication hub, playing a key role in processing and relaying sensorimotor and cognitive information between the cerebral cortex and other brain regions. It consists of specific and non-specific nuclei, each with a different role. Specific thalamic nuclei relay sensory and motor information to specific cortical and subcortical regions to ensure precise communication. In contrast, non-specific thalamic nuclei are involved in general functions such as attention or consciousness through broader and less targeted connections. In the present study, we aimed to investigate the functional connectivity patterns of the thalamic nuclei identified in our previous study as being involved in motor (finger-tapping) and sensory (finger-touch) tasks. The results of this study show that thalamic nuclei are not static hubs with a predefined role in neural signal processing, as they show different task-specific functional connectivity patterns in the anterior, middle, lateral, and posterior thalamic nuclei. Instead, they are all functional hubs that can flexibly change their connections to other brain regions in response to task demands. This work has important implications for understanding task-dependent functional connectivity between thalamic nuclei and different brain regions using task-based fMRI at 9.4 Tesla.
Affiliation(s)
- Edyta Charyasz
- Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Graduate Training Centre of Neuroscience, International Max Planck Research School, University of Tübingen, Tübingen, Germany
- Michael Erb
- Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany
- Jonas Bause
- Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Rahel Heule
- Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany
- Center for MR Research, University Children's Hospital, Zürich, Switzerland
- Benjamin Bender
- Department of Neuroradiology, Diagnostical, and Interventional Neuroradiology, University Hospital of Tübingen, Tübingen, Germany
- Vinod Kumar Jangir
- Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Wolfgang Grodd
- Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Klaus Scheffler
- Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany
5
Kim H, Zuleger T, Slutsky‐Ganesh A, Anand M, Warren S, Diekfuss J, Schlink B, Rush J, Simon J, Myer G, Grooms D. Reliability of Brain Activity During a Supine Bilateral Leg Press and Association With Concurrent 3D Knee Joint Biomechanics. Eur J Neurosci 2025; 61:e70126. [PMID: 40304370] [PMCID: PMC12042646] [DOI: 10.1111/ejn.70126]
Abstract
Previous neuroimaging studies have established a foundation of knowledge regarding the supraspinal control of lower extremity movements. However, the relationship between subtle differences in lower extremity kinematics and concurrent brain activity during motor tasks is mainly unknown. Additionally, there is limited information regarding the consistency of brain activation measures during a lower extremity motor task. The current study evaluated the within-session reliability of knee joint kinematics and brain activation during a supine bilateral leg press task using functional magnetic resonance imaging in 67 adolescent female athletes. Knee joint kinematics, including the number of leg press repetitions (cycles), as well as sagittal and frontal ranges of motion and their standard deviations, were analysed with concurrent blood-oxygen-level-dependent signals to explore the relationship between these biomechanical variables and brain activation. The results showed good reliability for knee joint kinematics and moderate reliability for brain activation in sensorimotor regions (precentral and postcentral gyri, supplementary motor cortex, brainstem, and anterior cerebellum lobules). Greater knee sagittal range of motion correlated with increased activation in motor planning and sensory integration regions, such as the dorsal striatum and lateral occipital cortex. These findings establish the supine bilateral leg press task as a reliable paradigm for investigating lower extremity motor control, providing insights into the neural mechanisms underlying movement variability. Additionally, brain regions exhibiting reliable activation could serve as valuable regions of interest for future investigations, enhancing the statistical power and reproducibility of research findings.
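Within-session reliability of this kind is commonly summarized with an intraclass correlation coefficient. The sketch below computes a generic ICC(3,1) (two-way mixed effects, consistency, single measurement) from a subjects-by-repeats matrix; it is not the authors' analysis pipeline, and the simulated data are placeholders:

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    data: array of shape (n_subjects, k_repeats), e.g., ROI activation
    estimates from repeated task blocks within one session.
    """
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between-repeat
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(0)
subject_effect = rng.normal(size=(67, 1))               # stable between-subject differences
data = subject_effect + 0.5 * rng.normal(size=(67, 2))  # two simulated repeats per athlete
print(icc_3_1(data))
```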
Affiliation(s)
- HoWon Kim
- Ohio Musculoskeletal and Neurological Institute, Ohio University, Athens, Ohio, USA
- Translational Biomedical Sciences Program, School of Rehabilitation and Communication Sciences, College of Health Sciences and Professions, Ohio University, Athens, Ohio, USA
- Taylor M. Zuleger
- Emory Sports Performance And Research Center (SPARC), Flowery Branch, Georgia, USA
- Department of Orthopaedics, Emory University School of Medicine, Atlanta, Georgia, USA
- Department of Veterans Affairs, Atlanta VA Medical Center, Decatur, Georgia, USA
- Emory Sports Medicine Center, Atlanta, Georgia, USA
- Alexis B. Slutsky‐Ganesh
- Emory Sports Performance And Research Center (SPARC), Flowery Branch, Georgia, USA
- Department of Orthopaedics, Emory University School of Medicine, Atlanta, Georgia, USA
- Emory Sports Medicine Center, Atlanta, Georgia, USA
- Manish Anand
- Emory Sports Performance And Research Center (SPARC), Flowery Branch, Georgia, USA
- Shayla M. Warren
- Emory Sports Performance And Research Center (SPARC), Flowery Branch, Georgia, USA
- Department of Orthopaedics, Emory University School of Medicine, Atlanta, Georgia, USA
- Emory Sports Medicine Center, Atlanta, Georgia, USA
- Jed A. Diekfuss
- Emory Sports Performance And Research Center (SPARC), Flowery Branch, Georgia, USA
- Department of Orthopaedics, Emory University School of Medicine, Atlanta, Georgia, USA
- Department of Veterans Affairs, Atlanta VA Medical Center, Decatur, Georgia, USA
- Emory Sports Medicine Center, Atlanta, Georgia, USA
- Justin L. Rush
- Ohio Musculoskeletal and Neurological Institute, Ohio University, Athens, Ohio, USA
- Division of Physical Therapy, School of Rehabilitation and Communication Sciences, College of Health Sciences and Professions, Ohio University, Athens, Ohio, USA
- Janet E. Simon
- Ohio Musculoskeletal and Neurological Institute, Ohio University, Athens, Ohio, USA
- Division of Athletic Training, School of Applied Health Sciences and Wellness, College of Health Sciences and Professions, Ohio University, Athens, Ohio, USA
- Gregory D. Myer
- Emory Sports Performance And Research Center (SPARC), Flowery Branch, Georgia, USA
- Department of Orthopaedics, Emory University School of Medicine, Atlanta, Georgia, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, Georgia, USA
- The Micheli Center for Sports Injury Prevention, Waltham, Massachusetts, USA
- Youth Physical Development Centre, Cardiff Metropolitan University, Wales, UK
- Dustin R. Grooms
- Ohio Musculoskeletal and Neurological Institute, Ohio University, Athens, Ohio, USA
- Division of Physical Therapy, School of Rehabilitation and Communication Sciences, College of Health Sciences and Professions, Ohio University, Athens, Ohio, USA
6
Ding WQ, Song W, Shi X, Feng Z, Chen X, Xie T, Liu Y, Zhou J, Chen Y, Lin JK, Wang QM, Zhou H, Liang TY, Jiang T, Ren B, Yao H, Li YQ, Evrard HC, Poo MM, Li H, Li X, Gong H, Todd AJ, Li A, Wang X, Deng J, Sun YG. Single-neuron projectome reveals organization of somatosensory ascending pathways in the mouse brain. Neuron 2025:S0896-6273(25)00179-5. [PMID: 40209714] [DOI: 10.1016/j.neuron.2025.03.007]
Abstract
Relay of multimodal somatosensory information from the spinal cord to the brain is critical for sensory perception, but the underlying circuit organization remains unclear. We have reconstructed mouse cervical spinal projection neurons at single-cell resolution and identified 19 projectome-defined subtypes exhibiting diverse projection patterns. We also reconstructed the brain-wide axonal projections of central relay neurons that receive direct spinal inputs at the single-cell resolution. We discovered parallel, divergent, and convergent projection patterns for spinal projection neurons and central relay neurons. Our results revealed the diverse pathways channeling spinal information to the cortex. Furthermore, we identified parallel lateral and medial spinal-superior colliculus-brainstem pathways, which could be involved in orienting and defensive behaviors, respectively. These data allowed us to construct a wiring diagram for ascending somatosensory pathways with projectome-defined subtype resolution. Our single-cell projectome analysis provided a new framework for understanding the complex neural circuitry underlying coordinated processing of diverse somatosensory modalities.
Affiliation(s)
- Wen-Qun Ding
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of the Chinese Academy of Sciences, Beijing 100049, China
- Wei Song
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of the Chinese Academy of Sciences, Beijing 100049, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiaoxue Shi
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Zhao Feng
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
- Xu Chen
- Lingang Laboratory, Shanghai 200031, China
- Taorong Xie
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Yuan Liu
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jiandong Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Yu Chen
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jun-Kai Lin
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of the Chinese Academy of Sciences, Beijing 100049, China
- Qiu-Miao Wang
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of the Chinese Academy of Sciences, Beijing 100049, China
- Hua Zhou
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Tong-Yu Liang
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of the Chinese Academy of Sciences, Beijing 100049, China
- Tao Jiang
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
- Biyu Ren
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Haishan Yao
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Yun-Qing Li
- Department of Anatomy, Histology and Embryology, K.K. Leung Brain Research Centre, the Fourth Military Medical University, Xi'an 710032, China
- Henry C Evrard
- International Center for Primate Brain Research, Center for Excellence in Brain Science and Intelligence, Institute of Neuroscience, Chinese Academy of Sciences, Songjiang, Shanghai, China; Werner Reichardt Center for Integrative Neuroscience, Karl Eberhard University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Mu-Ming Poo
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Hui Li
- Department of Anatomy, Histology and Embryology, K.K. Leung Brain Research Centre, the Fourth Military Medical University, Xi'an 710032, China
- Xiangning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China; State Key Laboratory of Digital Medical Engineering, Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 572025, China
- Hui Gong
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China; Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Andrew J Todd
- School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, UK
- Anan Li
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China; Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China; State Key Laboratory of Digital Medical Engineering, Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 572025, China.
- Xiaofei Wang
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China.
- Juan Deng
- Department of Anesthesiology, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Institute for Translational Brain Research, MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200032, China.
- Yan-Gang Sun
- Institute of Neuroscience, State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China.
7
Farfán FD, Soo L, Grani F, Grima-Murcia MD, Fernández E. Brain connectivity changes in response to cortical electrical stimulation in blind neuroprosthesis users. Cereb Cortex 2025; 35:bhaf075. [PMID: 40173310] [DOI: 10.1093/cercor/bhaf075]
Abstract
The success of visual neuroprostheses in long-term blind individuals depends not only on the prosthetic technology but also on the brain's ability to readjust its multimodal sensory processing circuits. This study investigates longitudinal changes in resting-state cortical connectivity in two blind subjects implanted with an intracortical microelectrode array (10 × 10 Utah Electrode Array) in the visual cortex for 6 months. During this period, daily microstimulation sessions elicited phosphene perception, and periodic electroencephalographic recordings in the resting state were conducted. Cortical connectivity was quantified using spectral coherence across 64 electroencephalographic channels. Results revealed significant changes in connectivity patterns pre- and post-implantation, with linear trends observed during the implantation period. These trends varied between subjects: User 1 exhibited changes in the 7 to 13 Hz band, while user 2 showed changes in the 15 to 30 Hz band. This study highlights the brain's adaptive capacity in response to sensory restoration and provides insights into optimizing neuroplasticity for improved outcomes in neuroprosthetic interventions.
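Spectral coherence of the kind used here can be estimated with standard signal-processing tools. The sketch below computes band-limited magnitude-squared coherence for every EEG channel pair with scipy; the sampling rate, band limits, and data are illustrative assumptions, not parameters reported in the paper:

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(eeg, fs, band):
    """Mean magnitude-squared coherence within a band for every channel pair.

    eeg:  array of shape (n_channels, n_samples), resting-state recording
    band: (low, high) in Hz, e.g., (7, 13) or (15, 30)
    """
    n_ch = eeg.shape[0]
    conn = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=2 * fs)
            sel = (f >= band[0]) & (f <= band[1])
            conn[i, j] = conn[j, i] = cxy[sel].mean()
    return conn

fs = 256
eeg = np.random.randn(64, 60 * fs)            # placeholder 60-s, 64-channel recording
alpha_band_conn = band_coherence(eeg, fs, (7, 13))
print(alpha_band_conn.mean())
```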
Affiliation(s)
- Fernando Daniel Farfán
- Institute of Bioengineering, Universidad Miguel Hernández of Elche, Avinguda de la Universitat d'Elx s/n, 03202 Elche, Alicante, Spain
- Laboratorio de Investigación en Neurociencias y Tecnologías Aplicadas (LINTEC), Departamento de Bioingeniería, Facultad de Ciencias Exactas y Tecnología, Universidad Nacional de Tucumán, Avenida Independencia 1800, 4000 San Miguel de Tucumán, Tucumán, Argentina
- Instituto Superior de Investigaciones Biológicas (INSIBIO), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Batalla de Chacabuco 461, T4000 San Miguel de Tucumán, Tucumán, Argentina
- Leili Soo
- Institute of Bioengineering, Universidad Miguel Hernández of Elche, Avinguda de la Universitat d'Elx s/n, 03202 Elche, Alicante, Spain
- Fabrizio Grani
- Institute of Bioengineering, Universidad Miguel Hernández of Elche, Avinguda de la Universitat d'Elx s/n, 03202 Elche, Alicante, Spain
- María Dolores Grima-Murcia
- Institute of Bioengineering, Universidad Miguel Hernández of Elche, Avinguda de la Universitat d'Elx s/n, 03202 Elche, Alicante, Spain
- Eduardo Fernández
- Institute of Bioengineering, Universidad Miguel Hernández of Elche, Avinguda de la Universitat d'Elx s/n, 03202 Elche, Alicante, Spain
- Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Av. Monforte de Lemos, 3-5. Pabellón 11, planta 0, 28029 Madrid, Spain
8
Huang YT, Li Z, Yuan C, Zhu YC, Zhao WW, Xu JJ. Organic Photoelectrochemical Multisensory Integration. Adv Mater 2025; 37:e2503030. [PMID: 40099588] [DOI: 10.1002/adma.202503030]
Abstract
Neuromorphic perception capable of multisensory integration (MSI) in electrolytes is important but remains challenging. Here, the aqueous implementation of artificial MSI is reported based on the newly emerged organic photoelectrochemical transistor (OPECT) by representative visual (light)-gustatory (sour) perception. Under the co-modulation of light and H+/OH-, multisensory synaptic plasticity and several typical MSI characteristics are mimicked, including "super-additive response," "inverse effectiveness effect" and "temporal congruency." To demonstrate its potential usage, different types of multisensory associative learning and corresponding reflex activities are further emulated. The chemical MSI system is also utilized to control artificial salivation by a closed loop of real-time perception, processing, integration, and actuation to emulate the biological responses toward external stimuli. In contrast to previous solid-state operations, this work offers a new strategy for developing neuromorphic MSI in aqueous environments that are analogous to those in biology.
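"Super-additive response" and related descriptors have simple quantitative forms in the multisensory literature: the combined response exceeds the sum of the unimodal responses, and enhancement is often expressed relative to the strongest unimodal response. The snippet below illustrates these generic indices with made-up response amplitudes; it is not necessarily the paper's exact metric:

```python
def multisensory_indices(r_visual, r_auditory, r_combined):
    """Generic multisensory descriptors applied to device responses
    (e.g., OPECT currents under light, pH, or combined modulation)."""
    strongest = max(r_visual, r_auditory)
    enhancement_pct = 100.0 * (r_combined - strongest) / strongest
    superadditive = r_combined > (r_visual + r_auditory)
    return enhancement_pct, superadditive

# Made-up response amplitudes purely for illustration
print(multisensory_indices(r_visual=1.0, r_auditory=0.8, r_combined=2.1))
```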
Affiliation(s)
- Yu-Ting Huang
- State Key Laboratory of Analytical Chemistry for Life Science, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing, 210023, China
- Zheng Li
- State Key Laboratory of Analytical Chemistry for Life Science, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing, 210023, China
- Cheng Yuan
- State Key Laboratory of Analytical Chemistry for Life Science, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing, 210023, China
- Yuan-Cheng Zhu
- State Key Laboratory of Pharmaceutical Biotechnology, School of Life Sciences, Nanjing University, Nanjing, 210023, China
- Wei-Wei Zhao
- State Key Laboratory of Analytical Chemistry for Life Science, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing, 210023, China
- Jing-Juan Xu
- State Key Laboratory of Analytical Chemistry for Life Science, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing, 210023, China
9
Schumann AY, Uhde TW, Houghton DC, Yang QX, Cortese BM. Odor-enhanced Visual Processing in PTSD. Neuroimage 2025; 309:121072. [PMID: 39929406] [PMCID: PMC11927510] [DOI: 10.1016/j.neuroimage.2025.121072]
Abstract
Significant differences in the independent processing of trauma-related visual or olfactory cues have been demonstrated in posttraumatic stress disorder (PTSD). Yet, it remains unclear if PTSD-related differences exist in how the olfactory and visual systems interact to process potential threat. The present fMRI study assessed odor-enhanced visual processing (i.e. greater activation in visual areas to combined odor-picture cues compared to picture cues presented alone) in 46 combat veterans (19 with PTSD (CV+PTSD) and 27 healthy controls (HCV)). As expected, general odor-enhanced visual processing was demonstrated in the overall group, and CV+PTSD, compared to HCV, demonstrated significantly more threat odor-enhanced visual cortical activation to neutral images. Unexpectedly, however, CV+PTSD, compared to HCV, demonstrated significantly less threat odor-enhanced visual cortical activation to combat-related images. Functional connectivity findings mirrored those results and indicated a PTSD-related increase in olfactory-visual connectivity with neutral images and decrease with combat-related images. These findings suggest potential sensory processing dysregulation in PTSD that could be based in an olfactory-visual coupling impairment. Findings are also consistent with a PTSD-related focus on potential threat that may override the need to process additional sensory information important for the biological functions that promote survival.
Affiliation(s)
- Aicko Y Schumann
- Institute of Psychiatry, Medical University of South Carolina, 67 President Street, Charleston, 29425, S.C., USA; Department of Mathematics, College of Charleston, 175 Calhoun Street, Charleston, 29401, S.C., USA.
- Thomas W Uhde
- Institute of Psychiatry, Medical University of South Carolina, 67 President Street, Charleston, 29425, S.C., USA.
- David C Houghton
- Department of Psychiatry and Behavioral Science, University of Texas Medical Branch, 400 Harborside Dr., Galveston, 77550, TX, USA.
- Qing X Yang
- Milton S. Hershey Medical Center, Penn State University, 500 University Drive, Hershey, 17033, P.A., USA.
- Bernadette M Cortese
- Institute of Psychiatry, Medical University of South Carolina, 67 President Street, Charleston, 29425, S.C., USA.
10
Choi I, Lee SH. Locomotion-dependent auditory gating to the parietal cortex guides multisensory decisions. Nat Commun 2025; 16:2308. [PMID: 40055344] [PMCID: PMC11889129] [DOI: 10.1038/s41467-025-57347-y]
Abstract
Decision-making in mammals fundamentally relies on integrating multiple sensory inputs, with conflicting information resolved flexibly based on a dominant sensory modality. However, the neural mechanisms underlying state-dependent changes in sensory dominance remain poorly understood. Our study demonstrates that locomotion in mice shifts auditory-dominant decisions toward visual dominance during audiovisual conflicts. Using circuit-specific calcium imaging and optogenetic manipulations, we found that weakened visual representation in the posterior parietal cortex (PPC) leads to auditory-dominant decisions in stationary mice. Prolonged locomotion, however, promotes visual dominance by inhibiting auditory cortical neurons projecting to the PPC (ACPPC). This shift is mediated by secondary motor cortical neurons projecting to the auditory cortex (M2AC), which specifically inhibit ACPPC neurons without affecting auditory cortical projections to the striatum (ACSTR). Our findings reveal the neural circuit mechanisms underlying auditory gating to the association cortex depending on locomotion states, providing insights into the state-dependent changes in sensory dominance during multisensory decision-making.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, IBS, Daejeon, 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, IBS, Daejeon, 34141, Republic of Korea.
- Department of Biological Sciences, KAIST, Daejeon, 34141, Republic of Korea.
11
Taberner M, Allen T, O'keefe J, Chaput M, Grooms D, Cohen DD. Evolving the Control-Chaos Continuum: Part 1 - Translating Knowledge to Enhance On-Pitch Rehabilitation. J Orthop Sports Phys Ther 2025; 55:78-88. [PMID: 39868937] [DOI: 10.2519/jospt.2025.13158]
Abstract
BACKGROUND: On-pitch rehabilitation is a crucial part of returning to sport after injury in elite soccer. The control-chaos continuum (CCC) initially offered a framework for practitioners to plan on-pitch rehabilitation, focusing on physical preparation and sport specificity. However, our experiences with the CCC, combined with recent research in injury neurophysiology, point to a need for an updated model that integrates practice design and physical-cognitive interactions. CLINICAL QUESTION: What are the insights from injury neurophysiology, soccer performance, and coaching science needed to update the CCC and improve the planning, delivery, and progression of on-pitch rehabilitation in elite soccer? KEY RESULTS: Drawing on extensive experience in elite sport, we explain how recent research on neurophysiological recovery from injury, game models, and practice design has been applied to update the CCC and evolve the existing framework. CLINICAL APPLICATION: The evolution of the CCC expands on the original model to enhance planning, delivery, and progression of on-pitch rehabilitation. The updated framework incorporates elements of visual cognition, attentional challenges, decision-making, and progressive representation of the game model to enhance sport-specific preparation for returning to sport. J Orthop Sports Phys Ther 2025;55(2):1-11. Epub 3 January 2025. doi:10.2519/jospt.2025.13158.
12
Hu Y, Mohsenzadeh Y. Neural processing of naturalistic audiovisual events in space and time. Commun Biol 2025; 8:110. [PMID: 39843939] [PMCID: PMC11754444] [DOI: 10.1038/s42003-024-07434-5]
Abstract
Our brain seamlessly integrates distinct sensory information to form a coherent percept. However, when real-world audiovisual events are perceived, the specific brain regions and timings for processing different levels of information remain less investigated. To address that, we curated naturalistic videos and recorded functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) data when participants viewed videos with accompanying sounds. Our findings reveal early asymmetrical cross-modal interaction, with acoustic information represented in both early visual and auditory regions, while visual information only identified in visual cortices. The visual and auditory features were processed with similar onset but different temporal dynamics. High-level categorical and semantic information emerged in multisensory association areas later in time, indicating late cross-modal integration and its distinct role in converging conceptual information. Comparing neural representations to a two-branch deep neural network model highlighted the necessity of early cross-modal connections to build a biologically plausible model of audiovisual perception. With EEG-fMRI fusion, we provided a spatiotemporally resolved account of neural activity during the processing of naturalistic audiovisual stimuli.
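EEG-fMRI fusion of this kind is typically implemented with representational similarity analysis: a time-resolved EEG representational dissimilarity matrix (RDM) is correlated with an fMRI RDM for each region, yielding a time course of correspondence. The sketch below illustrates that general recipe under assumed array shapes; it is not the authors' code:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def fusion_timecourse(eeg_patterns, fmri_patterns):
    """Spearman correlation between time-resolved EEG RDMs and one fMRI ROI RDM.

    eeg_patterns:  (n_timepoints, n_stimuli, n_features), e.g., channel patterns per video
    fmri_patterns: (n_stimuli, n_voxels) for one region
    Returns one correlation per EEG time point.
    """
    fmri_rdm = pdist(fmri_patterns, metric="correlation")
    rhos = []
    for t in range(eeg_patterns.shape[0]):
        eeg_rdm = pdist(eeg_patterns[t], metric="correlation")
        rhos.append(spearmanr(eeg_rdm, fmri_rdm).correlation)
    return np.array(rhos)

rng = np.random.default_rng(1)
eeg = rng.normal(size=(120, 60, 64))   # 120 time points, 60 videos, 64 channels (placeholders)
fmri = rng.normal(size=(60, 500))      # 60 videos, 500 voxels in one region (placeholders)
print(fusion_timecourse(eeg, fmri).shape)
```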
Affiliation(s)
- Yu Hu
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Yalda Mohsenzadeh
- Western Institute for Neuroscience, Western University, London, ON, Canada.
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada.
- Department of Computer Science, Western University, London, ON, Canada.
13
Diana F, Kret ME. First predict, then bond: Rethinking the function of mimicry from prediction to affiliation in human and non-human animals. Neurosci Biobehav Rev 2025; 168:105950. [PMID: 39580008] [DOI: 10.1016/j.neubiorev.2024.105950]
Abstract
Automatic mimicry, where social animals mimic the emotional expressions of others, is a well-documented phenomenon. While research has extensively examined how being mimicked influences our perception of others, the fundamental question of why we mimic remains largely unexplored. Previous theories often link mimicry with an affiliative social goal. While we agree that mimicry can increase survival chances by enhancing group cohesion, we argue for a more primitive adaptive value that may operate independently of social bonding. By reviewing existing literature, we propose that mimicry serves as a mechanism to predict other individuals, and consequently, the environment, enhancing survival of the individual. We posit a shift towards understanding mimicry as a mechanism that minimizes prediction error, empowering individuals to navigate their surroundings more effectively. Embracing mimicry as a tool for self-preservation and environmental prediction opens new avenues for interdisciplinary research in comparative psychology and behavioral ecology.
Affiliation(s)
- Fabiola Diana
- Department of Cognitive Psychology, Faculty of Social and Behavioral Sciences, Leiden University, Wassenaarseweg 52, Leiden 2333 AK, Netherlands; Leiden Institute for Brain and Cognition (LIBC), Leiden University, Wassenaarseweg 52, Leiden 2333 AK, Netherlands.
- Mariska E Kret
- Department of Cognitive Psychology, Faculty of Social and Behavioral Sciences, Leiden University, Wassenaarseweg 52, Leiden 2333 AK, Netherlands; Leiden Institute for Brain and Cognition (LIBC), Leiden University, Wassenaarseweg 52, Leiden 2333 AK, Netherlands.
14
Diana L, Casati C, Melzi L, Marzoli SB, Bolognini N. Enhancing multisensory rehabilitation of visual field defects with transcranial direct current stimulation: A randomized clinical trial. Eur J Neurol 2025; 32:e16559. [PMID: 39607286] [PMCID: PMC11625917] [DOI: 10.1111/ene.16559]
Abstract
BACKGROUND AND PURPOSE: Visual rehabilitation is necessary for improving the quality of life of patients with acquired homonymous visual field defects (HVFDs). By modulating brain excitability and plasticity, transcranial direct current stimulation (tDCS) may accelerate and increase the effects of compensatory trainings, which are usually long and intensive. In the present proof-of-principle, double-blind, randomized, sham-controlled study, we assess whether anodal tDCS applied over ipsilesional occipital or parietal cortices can increase the effects of a compensatory audiovisual training for HVFDs. METHODS: Eighteen participants with chronic HVFDs were randomized to receive anodal or sham tDCS over the ipsilesional parietal or occipital cortex during a 2-week (10 days, 2 h/day) audiovisual treatment aimed at improving oculomotor visual field exploration. Improvements were assessed by administering visual detection with eye movements and visual search tests, and a questionnaire for activities of daily living (ADLs) before the treatment, at its end, and at 1-month and 4-month follow-ups; lesion analyses were performed to look for predictors of treatment effects. RESULTS: Anodal ipsilesional tDCS, regardless of the target area (occipital vs. parietal), speeds up and increases daily improvements during the training. Whereas long-lasting (up to 4 months) post-treatment improvements in visual search and ADLs were observed in all groups, a greater and stable increase of visual detections in the blind hemifield was brought about only by the adjuvant use of occipital tDCS. CONCLUSIONS: Compensatory audiovisual rehabilitation of HVFDs is effective and benefits from the adjuvant application of occipital and parietal tDCS, which speeds up and increases training-induced improvement. REGISTRY NUMBER: NCT06116760.
Affiliation(s)
- Lorenzo Diana
- Department of Neurorehabilitation Sciences, Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Carlotta Casati
- Department of Neurorehabilitation Sciences, Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Lisa Melzi
- Neuro‐Ophthalmology Center and Ocular Electrophysiology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Stefania Bianchi Marzoli
- Neuro‐Ophthalmology Center and Ocular Electrophysiology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Nadia Bolognini
- Department of Neurorehabilitation Sciences, Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Milano‐Bicocca and NeuroMI, Milan, Italy
15
Yu H, Zhao Q. Brain-inspired multisensory integration neural network for cross-modal recognition through spatiotemporal dynamics and deep learning. Cogn Neurodyn 2024; 18:3615-3628. [PMID: 39712112] [PMCID: PMC11655826] [DOI: 10.1007/s11571-023-09932-4]
Abstract
The integration and interaction of cross-modal senses in brain neural networks can facilitate high-level cognitive functionalities. In this work, we proposed a bioinspired multisensory integration neural network (MINN) that integrates visual and audio senses for recognizing multimodal information across different sensory modalities. This deep learning-based model incorporates a cascading framework of parallel convolutional neural networks (CNNs) for extracting intrinsic features from visual and audio inputs, and a recurrent neural network (RNN) for multimodal information integration and interaction. The network was trained using synthetic training data generated for digital recognition tasks. It was revealed that the spatial and temporal features extracted from visual and audio inputs by the CNNs were encoded in subspaces orthogonal to each other. In the integration epoch, the network state evolved along quasi-rotation-symmetric trajectories, and a structural manifold with stable attractors was formed in the RNN, supporting accurate cross-modal recognition. We further evaluated the robustness of the MINN algorithm with noisy inputs and asynchronous digital inputs. Experimental results demonstrated the superior performance of MINN for flexible integration and accurate recognition of multisensory information with distinct sense properties. The present results provide insights into the computational principles governing multisensory integration and a comprehensive neural network model for brain-inspired intelligence.
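As a rough illustration of the architecture described above (parallel CNN encoders feeding a recurrent integrator), a minimal PyTorch sketch follows. The layer sizes, input shapes, and the choice of a GRU are assumptions made for illustration; they are not the published MINN specification:

```python
import torch
import torch.nn as nn

class MINNSketch(nn.Module):
    """Two parallel CNN encoders (visual, audio) feeding a shared recurrent
    integrator, loosely following the description in the abstract."""
    def __init__(self, n_classes=10, hidden=128):
        super().__init__()
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())          # -> 16*4*4 features
        self.audio = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten())          # -> 16*16 features
        self.rnn = nn.GRU(input_size=16 * 4 * 4 + 16 * 16,
                          hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_classes)

    def forward(self, frames, sounds):
        # frames: (batch, time, 1, 28, 28); sounds: (batch, time, 1, 256)
        b, t = frames.shape[:2]
        v = self.visual(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        a = self.audio(sounds.reshape(b * t, *sounds.shape[2:])).reshape(b, t, -1)
        out, _ = self.rnn(torch.cat([v, a], dim=-1))
        return self.readout(out[:, -1])      # classify from the final integration state

model = MINNSketch()
logits = model(torch.randn(8, 5, 1, 28, 28), torch.randn(8, 5, 1, 256))
print(logits.shape)                          # torch.Size([8, 10])
```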
Affiliation(s)
- Haitao Yu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
- Quanfa Zhao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
16
Le Floch A, Ropars G. Hebbian Optocontrol of Cross-Modal Disruptive Reading in Increasing Acoustic Noise in an Adult with Developmental Coordination Disorder: A Case Report. Brain Sci 2024; 14:1208. [PMID: 39766407] [PMCID: PMC11674537] [DOI: 10.3390/brainsci14121208]
Abstract
Acoustic noise is known to perturb reading for good readers, including children and adults. This external acoustic noise, interfering at multimodal areas in the brain, causes difficulties that reduce reading and writing performance. Moreover, it is known that people with developmental coordination disorder (DCD) and dyslexia have reading deficits even in the absence of acoustic noise. The goal of this study is to investigate the effects of additional acoustic noise on an adult with DCD and dyslexia. Indeed, as vision is the main source of information for the brain during reading, noisy internal visual crowding has been observed in many readers with dyslexia, who perceive additional mirror or duplicated images of words simultaneously with the primary images. Here, we show that when this noisy internal visual crowding and an increasing external acoustic noise are superimposed, a disruptive reading threshold at about 50 to 60 dBA of noise is reached for a young adult with DCD and dyslexia, but not for a control, depending on the type of acoustic noise. More interestingly, we report that this disruptive noise threshold can be controlled by Hebbian mechanisms linked to a pulse-modulated lighting that erases the confusing internal crowding images. An improvement of 12 dBA in the disruptive threshold is then observed with two types of acoustic noise, showing the potential utility of Hebbian optocontrol in managing reading difficulties in adults with DCD and dyslexia.
Affiliation(s)
- Albert Le Floch
- Laser Physics Laboratory, University of Rennes, 35042 Rennes Cedex, France;
- Quantum Electronics and Chiralities Laboratory, 20 Square Marcel Bouget, 35700 Rennes Cedex, France
- Guy Ropars
- Laser Physics Laboratory, University of Rennes, 35042 Rennes Cedex, France;
- UFR SPM, University of Rennes, 35042 Rennes Cedex, France
17
Wei W, Benn RA, Scholz R, Shevchenko V, Klatzmann U, Alberti F, Chiou R, Wassermann D, Vanderwal T, Smallwood J, Margulies DS. A function-based mapping of sensory integration along the cortical hierarchy. Commun Biol 2024; 7:1593. [PMID: 39613829] [PMCID: PMC11607388] [DOI: 10.1038/s42003-024-07224-z]
Abstract
Sensory information mainly travels along a hierarchy spanning unimodal to transmodal regions, forming multisensory integrative representations crucial for higher-order cognitive functions. Here, we develop an fMRI based two-dimensional framework to characterize sensory integration based on the anchoring role of the primary cortex in the organization of sensory processing. Sensory magnitude captures the percentage of variance explained by three primary sensory signals and decreases as the hierarchy ascends, exhibiting strong similarity to the known hierarchy and high stability across different conditions. Sensory angle converts associations with three primary sensory signals to an angle representing the proportional contributions of different sensory modalities. This dimension identifies differences between brain states and emphasizes how sensory integration changes flexibly in response to varying cognitive demands. Furthermore, meta-analytic functional decoding with our model highlights the close relationship between cognitive functions and sensory integration, showing its potential for future research of human cognition through sensory information processing.
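To make the two dimensions concrete, the sketch below shows one way a "sensory magnitude" (variance explained by three primary sensory time series) and a "sensory angle" (direction of their proportional contributions, here mapped onto axes 120° apart) could be computed for a single cortical vertex. The angle mapping is a simplification chosen for illustration and is not necessarily the paper's exact definition:

```python
import numpy as np

def sensory_magnitude_angle(vertex_ts, primary_ts):
    """Illustrative 'magnitude' and 'angle' for one vertex.

    vertex_ts:  (n_timepoints,) BOLD time series of one vertex
    primary_ts: (n_timepoints, 3) mean time series of visual, auditory,
                and somatosensory primary cortices
    """
    X = np.column_stack([primary_ts, np.ones(len(vertex_ts))])
    beta, *_ = np.linalg.lstsq(X, vertex_ts, rcond=None)
    fitted = X @ beta
    magnitude = 1.0 - np.var(vertex_ts - fitted) / np.var(vertex_ts)  # R^2
    w = np.abs(beta[:3])
    w = w / w.sum()                                   # proportional contributions
    axes = np.exp(1j * np.deg2rad([90, 210, 330]))    # three modality directions
    angle = np.angle((w * axes).sum(), deg=True)
    return magnitude, angle

rng = np.random.default_rng(2)
primary = rng.normal(size=(300, 3))                   # placeholder primary signals
vertex = 0.8 * primary[:, 0] + 0.2 * primary[:, 1] + 0.5 * rng.normal(size=300)
print(sensory_magnitude_angle(vertex, primary))
```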
Affiliation(s)
- Wei Wei
- Cognitive Neuroanatomy Lab, Université Paris Cité, INCC UMR 8002, CNRS, Paris, France.
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom.
- R Austin Benn
- Cognitive Neuroanatomy Lab, Université Paris Cité, INCC UMR 8002, CNRS, Paris, France
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Robert Scholz
- Cognitive Neuroanatomy Lab, Université Paris Cité, INCC UMR 8002, CNRS, Paris, France
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Max Planck School of Cognition, Leipzig, Germany
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Victoria Shevchenko
- Cognitive Neuroanatomy Lab, Université Paris Cité, INCC UMR 8002, CNRS, Paris, France
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Ulysse Klatzmann
- Cognitive Neuroanatomy Lab, Université Paris Cité, INCC UMR 8002, CNRS, Paris, France
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Francesco Alberti
- Cognitive Neuroanatomy Lab, Université Paris Cité, INCC UMR 8002, CNRS, Paris, France
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Rocco Chiou
- School of Psychology, University of Surrey, Surrey, United Kingdom
- Tamara Vanderwal
- Department of Psychiatry, University of British Columbia, Vancouver, Canada
- BC Children's Hospital Research Institute, Vancouver, Canada
- Daniel S Margulies
- Cognitive Neuroanatomy Lab, Université Paris Cité, INCC UMR 8002, CNRS, Paris, France.
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom.
18
Paasonen J, Valjakka JS, Salo RA, Paasonen E, Tanila H, Michaeli S, Mangia S, Gröhn O. Whisker stimulation with different frequencies reveals non-uniform modulation of functional magnetic resonance imaging signal across sensory systems in awake rats. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.11.13.623361. [PMID: 39605361 PMCID: PMC11601494 DOI: 10.1101/2024.11.13.623361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/29/2024]
Abstract
Primary sensory systems are classically considered to be separate units; however, current evidence indicates that there are notable interactions between them. We examined this cross-sensory interplay by applying a quiet and motion-tolerant zero echo time functional magnetic resonance imaging (fMRI) technique to elucidate the evoked brain-wide responses to whisker pad stimulation in awake and anesthetized rats. Specifically, we characterized the brain-wide responses in core and non-core regions to whisker pad stimulation by varying the stimulation frequency, and determined whether isoflurane-medetomidine anesthesia, traditionally used in preclinical imaging, confounds investigations related to sensory integration. We demonstrated that unilateral whisker pad stimulation elicited robust activity not only along the whisker-mediated tactile system but also in auditory, visual, high-order, and cerebellar regions, indicative of brain-wide cross-sensory and associative activity. By inspecting the response profiles to different stimulation frequencies and the temporal signal characteristics, we observed that the non-core regions responded to stimulation in a very different way compared to the primary sensory system, likely reflecting different encoding modes between primary sensory, cross-sensory, and integrative processing. Lastly, while the activity evoked in low-order sensory structures could be reliably detected under anesthesia, the activity in high-order processing regions and the complex differences between primary, cross-sensory, and associative systems were visible only in the awake state. We conclude that our study reveals novel aspects of the cross-sensory interplay of the whisker-mediated tactile system, and importantly, that these aspects would be difficult to observe in anesthetized rats.
Affiliation(s)
- Jaakko Paasonen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
| | - Juha S. Valjakka
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
| | - Raimo A. Salo
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
| | - Ekaterina Paasonen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- NeuroCenter, Kuopio University Hospital, Kuopio, Finland
| | - Heikki Tanila
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
| | - Shalom Michaeli
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
| | - Silvia Mangia
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
| | - Olli Gröhn
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
19
Ding K, Rakhshan M, Paredes-Acuña N, Cheng G, Thakor NV. Sensory integration for neuroprostheses: from functional benefits to neural correlates. Med Biol Eng Comput 2024; 62:2939-2960. [PMID: 38760597 DOI: 10.1007/s11517-024-03118-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 04/19/2024] [Indexed: 05/19/2024]
Abstract
In the field of sensory neuroprostheses, one ultimate goal is for individuals to perceive artificial somatosensory information and use the prosthesis with a level of complexity that resembles an intact system. To this end, research has shown that stimulation-elicited somatosensory information improves prosthesis perception and task performance. While studies strive to achieve sensory integration, a crucial phenomenon that entails naturalistic interaction with the environment, this topic has not been commensurately reviewed. Therefore, here we present a perspective for understanding sensory integration in neuroprostheses. First, we review the engineering aspects and functional outcomes in sensory neuroprosthesis studies. In this context, we summarize studies that have suggested sensory integration. We focus on how they have used stimulation-elicited percepts to maximize and improve the reliability of somatosensory information. Next, we review studies that have suggested multisensory integration. These works have demonstrated that congruent and simultaneous multisensory inputs provide cognitive benefits such that an individual experiences a greater sense of authority over prosthesis movements (i.e., agency) and perceives the prosthesis as part of their own body (i.e., ownership). Thereafter, we present the theoretical and neuroscience framework of sensory integration. We investigate how behavioral models and neural recordings have been applied in the context of sensory integration. Sensory integration models developed from intact-limb individuals have paved the way for sensory neuroprosthesis studies to demonstrate multisensory integration. Neural recordings have been used to show how multisensory inputs are processed across cortical areas. Lastly, we discuss some ongoing research and challenges in achieving and understanding sensory integration in sensory neuroprostheses. Resolving these challenges would help to develop future strategies to improve the sensory feedback of a neuroprosthetic system.
Affiliation(s)
- Keqin Ding
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD, 21205, USA.
| | - Mohsen Rakhshan
- Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL, 32816, USA
- Disability, Aging, and Technology Cluster, University of Central Florida, Orlando, FL, 32816, USA
| | - Natalia Paredes-Acuña
- Institute for Cognitive Systems, School of Computation, Information and Technology, Technical University of Munich, 80333, Munich, Germany
| | - Gordon Cheng
- Institute for Cognitive Systems, School of Computation, Information and Technology, Technical University of Munich, 80333, Munich, Germany
| | - Nitish V Thakor
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD, 21205, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
20
Gao C, Oh S, Yang X, Stanley JM, Shinkareva SV. Neural Representations of Emotions in Visual, Auditory, and Modality-Independent Regions Reflect Idiosyncratic Conceptual Knowledge. Hum Brain Mapp 2024; 45:e70040. [PMID: 39394899 PMCID: PMC11470372 DOI: 10.1002/hbm.70040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2024] [Revised: 08/27/2024] [Accepted: 09/23/2024] [Indexed: 10/14/2024] Open
Abstract
Growing evidence suggests that conceptual knowledge influences emotion perception, yet the neural mechanisms underlying this effect are not fully understood. Recent studies have shown that brain representations of facial emotion categories in visual-perceptual areas are predicted by conceptual knowledge, but it remains to be seen if auditory regions are similarly affected. Moreover, it is not fully clear whether these conceptual influences operate at a modality-independent level. To address these questions, we conducted a functional magnetic resonance imaging study presenting participants with both facial and vocal emotional stimuli. This dual-modality approach allowed us to investigate effects on both modality-specific and modality-independent brain regions. Using univariate and representational similarity analyses, we found that brain representations in both visual (middle and lateral occipital cortices) and auditory (superior temporal gyrus) regions were predicted by conceptual understanding of emotions for faces and voices, respectively. Additionally, we discovered that conceptual knowledge also influenced supra-modal representations in the superior temporal sulcus. Dynamic causal modeling revealed a brain network showing both bottom-up and top-down flows, suggesting a complex interplay of modality-specific and modality-independent regions in emotional processing. These findings collectively indicate that the neural representations of emotions in both sensory-perceptual and modality-independent regions are likely shaped by each individual's conceptual knowledge.
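The core analysis mentioned here, representational similarity analysis (RSA), can be sketched in a few lines. The example below is a generic illustration, not the authors' pipeline: it assumes per-emotion activity patterns from one region and per-emotion conceptual ratings, builds a representational dissimilarity matrix (RDM) from each, and correlates the two with Spearman's rho. The distance metrics, array shapes, and function name are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_concept_vs_neural(neural_patterns, concept_ratings):
    """Toy representational similarity analysis (RSA).

    neural_patterns : (n_emotions, n_voxels) activity patterns from one region
    concept_ratings : (n_emotions, n_dims) conceptual ratings of the same emotions
    Returns the Spearman correlation between the two representational
    dissimilarity matrices (RDMs), i.e., how well conceptual knowledge
    predicts the region's emotion geometry.
    """
    neural_rdm = pdist(neural_patterns, metric="correlation")   # 1 - Pearson r per pair
    concept_rdm = pdist(concept_ratings, metric="euclidean")
    rho, p = spearmanr(neural_rdm, concept_rdm)
    return rho, p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    concepts = rng.standard_normal((6, 4))                          # 6 emotions x 4 conceptual dims
    patterns = concepts @ rng.standard_normal((4, 50)) \
        + 0.5 * rng.standard_normal((6, 50))                        # patterns partly shaped by concepts
    print(rsa_concept_vs_neural(patterns, concepts))
```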
Affiliation(s)
- Chuanji Gao
- School of Psychology, Nanjing Normal University, Nanjing, China
| | - Sewon Oh
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
| | - Xuan Yang
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
| | - Jacob M. Stanley
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
| | - Svetlana V. Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
21
Kimura A. Cross-modal sensitivities to auditory and visual stimulations in the first-order somatosensory thalamic nucleus. Eur J Neurosci 2024; 60:5621-5657. [PMID: 39192569 DOI: 10.1111/ejn.16510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2024] [Revised: 07/15/2024] [Accepted: 08/06/2024] [Indexed: 08/29/2024]
Abstract
The ventral posterolateral nucleus (VPL), being categorized as a first-order thalamic nucleus, is considered to be dedicated to uni-modal somatosensory processing. Cross-modal sensory interactions on thalamic reticular nucleus cells projecting to the VPL, on the other hand, suggest that VPL cells are subject to cross-modal sensory influences. To test this possibility, the effects of auditory or visual stimulation on VPL cell activities were examined in anaesthetized rats, using juxta-cellular recording and labelling techniques. Recordings were obtained from 70 VPL cells, including 65 cells responsive to cutaneous electrical stimulation of the hindpaw. Auditory or visual stimulation alone did not elicit cell activity, except in three bi-modal cells and one auditory cell. Cross-modal alterations of somatosensory responses by auditory and/or visual stimulation were recognized in 61 cells with regard to the response magnitude, latency (time and jitter), and/or burst spiking properties. Both early (onset) and late responses were either suppressed or facilitated, and de novo cell activity was also induced. Cross-modal alterations took place depending on the temporal interval between the preceding auditory or visual stimulation and the somatosensory stimulation, and on the intensity and frequency of the sound. Alterations were observed mostly at short intervals (< 200 ms) and up to 800 ms intervals. Sounds of higher intensities and lower frequencies were more effective for modulation. The susceptibility to cross-modal influences was related to cell location and/or morphology. These and previously reported similar findings in the auditory and visual thalamic nuclei suggest that cross-modal sensory interactions pervasively take place in first-order sensory thalamic nuclei.
Affiliation(s)
- Akihisa Kimura
- Department of Physiology, Wakayama Medical University, Wakayama, Japan
22
Peng B, Huang JJ, Li Z, Zhang LI, Tao HW. Cross-modal enhancement of defensive behavior via parabigemino-collicular projections. Curr Biol 2024; 34:3616-3631.e5. [PMID: 39019036 PMCID: PMC11373540 DOI: 10.1016/j.cub.2024.06.052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2024] [Revised: 05/19/2024] [Accepted: 06/20/2024] [Indexed: 07/19/2024]
Abstract
Effective detection and avoidance of environmental threats are crucial for animals' survival. Integration of sensory cues associated with threats across different modalities can significantly enhance animals' detection and behavioral responses. However, the neural circuit-level mechanisms underlying the modulation of defensive behavior or fear responses under simultaneous multimodal sensory inputs remain poorly understood. Here, we report in mice that bimodal looming stimuli combining coherent visual and auditory signals elicit more robust defensive/fear reactions than unimodal stimuli. These include intensified escape and prolonged hiding, suggesting a heightened defensive/fear state. These various responses depend on the activity of the superior colliculus (SC), while its downstream nucleus, the parabigeminal nucleus (PBG), predominantly influences the duration of hiding behavior. The PBG temporally integrates visual and auditory signals and enhances the salience of threat signals by amplifying SC sensory responses through its feedback projection to the visual layer of the SC. Our results suggest an evolutionarily conserved pathway in defense circuits for multisensory integration and cross-modality enhancement.
Affiliation(s)
- Bo Peng
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, USA
| | - Junxiang J Huang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Graduate Program in Biomedical and Biological Sciences, University of Southern California, Los Angeles, CA 90033, USA
| | - Zhong Li
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
| | - Li I Zhang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
| | - Huizhong Whit Tao
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
23
Huang YT, Wu CT, Fang YXM, Fu CK, Koike S, Chao ZC. Crossmodal hierarchical predictive coding for audiovisual sequences in the human brain. Commun Biol 2024; 7:965. [PMID: 39122960 PMCID: PMC11316022 DOI: 10.1038/s42003-024-06677-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 08/02/2024] [Indexed: 08/12/2024] Open
Abstract
Predictive coding theory suggests the brain anticipates sensory information using prior knowledge. While this theory has been extensively researched within individual sensory modalities, evidence for predictive processing across sensory modalities is limited. Here, we examine how crossmodal knowledge is represented and learned in the brain, by identifying the hierarchical networks underlying crossmodal predictions when information of one sensory modality leads to a prediction in another modality. We record electroencephalogram (EEG) during a crossmodal audiovisual local-global oddball paradigm, in which the predictability of transitions between tones and images is manipulated at both the stimulus and sequence levels. To dissect the complex predictive signals in our EEG data, we employ a model-fitting approach to untangle neural interactions across modalities and hierarchies. The model-fitting result demonstrates that audiovisual integration occurs at both the level of individual stimulus interactions and that of multi-stimulus sequences. Furthermore, we identify the spatio-spectro-temporal signatures of prediction-error signals across hierarchies and modalities, and reveal that auditory and visual prediction errors are rapidly redirected to the central-parietal electrodes during learning through alpha-band interactions. Our study suggests a crossmodal predictive coding mechanism where unimodal predictions are processed by distributed brain networks to form crossmodal knowledge.
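A minimal sketch of the kind of crossmodal local-global oddball sequence described here may help readers unfamiliar with the paradigm. The trial structure below (five tone-image pairs per trial, an xxxxY "global standard" block, 20% global deviants) is an illustrative assumption rather than the paper's exact design, and the stimulus labels are placeholders.

```python
import random

def make_local_global_block(n_trials=100, p_global_deviant=0.2,
                            standard=("tone", "image_A"), deviant=("tone", "image_B")):
    """Toy generator for one block of a crossmodal local-global oddball paradigm.

    Each trial is a sequence of five audiovisual (tone, image) pairs. In this
    block the frequent ("global standard") sequence is xxxxY, i.e. it ends with
    a local deviant, while the rare ("global deviant") sequence is xxxxx, i.e.
    it is locally standard but violates the sequence-level regularity.
    """
    trials = []
    for _ in range(n_trials):
        if random.random() < p_global_deviant:
            seq = [standard] * 5                      # xxxxx: global deviant in this block
            labels = ("local_standard", "global_deviant")
        else:
            seq = [standard] * 4 + [deviant]          # xxxxY: global standard in this block
            labels = ("local_deviant", "global_standard")
        trials.append((seq, labels))
    return trials

if __name__ == "__main__":
    for seq, labels in make_local_global_block(n_trials=5):
        print([image for _, image in seq], labels)
```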
Affiliation(s)
- Yiyuan Teresa Huang
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
| | - Chien-Te Wu
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Yi-Xin Miranda Fang
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Chin-Kun Fu
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Shinsuke Koike
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- University of Tokyo Institute for Diversity & Adaptation of Human Mind (UTIDAHM), Tokyo, Japan
| | - Zenas C Chao
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan.
24
Wu H, Huang Y, Qin P, Wu H. Individual Differences in Bodily Self-Consciousness and Its Neural Basis. Brain Sci 2024; 14:795. [PMID: 39199487 PMCID: PMC11353174 DOI: 10.3390/brainsci14080795] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2024] [Revised: 08/02/2024] [Accepted: 08/02/2024] [Indexed: 09/01/2024] Open
Abstract
Bodily self-consciousness (BSC), a subject of interdisciplinary interest, refers to the awareness of one's bodily states. Previous studies have noted the existence of individual differences in BSC, while neglecting the underlying factors and neural basis of such individual differences. Considering that BSC relies on the integration of both internal and external self-relevant information, we here review previous findings on individual differences in BSC through a three-level-self model, which includes interoceptive, exteroceptive, and mental self-processing. The data show that cross-level factors influence individual differences in BSC, involving internal bodily signal perceptibility, multisensory processing principles, personal traits shaped by the environment, and interaction modes that integrate multiple levels of self-processing. Furthermore, in interoceptive processing, regions like the anterior cingulate cortex and insula show correlations with different perceptions of internal sensations. For exteroception, the parietal lobe integrates sensory inputs, coordinating various BSC responses. Mental self-processing modulates differences in BSC through areas like the medial prefrontal cortex. For interactions between multiple levels of self-processing, regions like the intraparietal sulcus are involved in individual differences in BSC. We propose that diverse experiences of BSC can be attributed to different levels of self-processing, which moderate one's perception of their body. Overall, individual differences in BSC are worth considering, and amalgamating diverse methodologies may aid the diagnosis and treatment of related diseases.
Affiliation(s)
- Haiyan Wu
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China; (H.W.); (Y.H.)
| | - Ying Huang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China; (H.W.); (Y.H.)
| | - Pengmin Qin
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China; (H.W.); (Y.H.)
- Pazhou Lab, Guangzhou 510330, China
| | - Hang Wu
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Institute for Brain Research and Rehabilitation, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
25
Vannasing P, Dionne-Dostie E, Tremblay J, Paquette N, Collignon O, Gallagher A. Electrophysiological responses of audiovisual integration from infancy to adulthood. Brain Cogn 2024; 178:106180. [PMID: 38815526 DOI: 10.1016/j.bandc.2024.106180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2024] [Revised: 05/17/2024] [Accepted: 05/17/2024] [Indexed: 06/01/2024]
Abstract
Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process underlying how the brain implements multisensory integration (MSI) remains poorly known. This cross-sectional study aims to characterize the developmental patterns of responses to audiovisual events in 131 individuals aged from 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task, including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated Event-Related Potentials (ERPs) linked with auditory and visual stimulation alone. This was done to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation effects of MSI in relation to unisensory development. Comparing the neural response to audiovisual stimuli with the sum of the unisensory responses revealed signs of MSI in the ERPs, more specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerge relatively late in development, around 8 years of age. The automatic integration of simple audiovisual stimuli is a long developmental process that emerges during childhood and continues to mature during adolescence, with ERP latencies decreasing with age.
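The MSI test described here, comparing the audiovisual ERP with the sum of the unisensory ERPs, is the classic additive model. The sketch below illustrates that comparison on synthetic single-channel data; the latency window, averaging choices, and function name are assumptions rather than the authors' exact analysis.

```python
import numpy as np

def additive_model_msi(erp_av, erp_a, erp_v, times, window=(0.15, 0.25)):
    """Toy additive-model test of multisensory integration (MSI) on ERPs.

    erp_av, erp_a, erp_v : (n_trials, n_times) single-trial ERPs from one channel
    times                : (n_times,) time axis in seconds
    window               : latency window in which to compare AV vs. A + V
    Returns the mean difference AV - (A + V) in that window; a non-zero value
    is taken as a sign of non-additive, i.e., integrative, processing.
    """
    mask = (times >= window[0]) & (times <= window[1])
    av = erp_av.mean(axis=0)                          # trial-averaged AV response
    summed = erp_a.mean(axis=0) + erp_v.mean(axis=0)  # sum of unisensory averages
    return float((av[mask] - summed[mask]).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    times = np.linspace(-0.1, 0.5, 301)
    a = rng.standard_normal((60, times.size))
    v = rng.standard_normal((60, times.size))
    # Simulate a sub-additive AV response around 200 ms for illustration.
    av = a + v - 0.4 * np.exp(-((times - 0.2) ** 2) / 0.002)
    print(additive_model_msi(av, a, v, times))
```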
Affiliation(s)
- Phetsamone Vannasing
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
| | - Emmanuelle Dionne-Dostie
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
| | - Julie Tremblay
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
| | - Natacha Paquette
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
| | - Olivier Collignon
- Institute of Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-La-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
| | - Anne Gallagher
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada; Cerebrum, Department of Psychology, University of Montreal, Montreal, Qc, Canada.
26
Frumento S, Preatoni G, Chee L, Gemignani A, Ciotti F, Menicucci D, Raspopovic S. Unconscious multisensory integration: behavioral and neural evidence from subliminal stimuli. Front Psychol 2024; 15:1396946. [PMID: 39091706 PMCID: PMC11291458 DOI: 10.3389/fpsyg.2024.1396946] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2024] [Accepted: 07/04/2024] [Indexed: 08/04/2024] Open
Abstract
Introduction: The prevailing theories of consciousness consider the integration of different sensory stimuli a key component for this phenomenon to arise at the brain level. Although many theories and models have been proposed for multisensory integration between supraliminal stimuli (e.g., the optimal integration model), we do not know whether multisensory integration also occurs for subliminal stimuli and what psychophysical mechanisms it follows. Methods: To investigate this, subjects were exposed to visual (Virtual Reality) and/or haptic stimuli (Electro-Cutaneous Stimulation) above or below their perceptual threshold. They had to discriminate, in a two-alternative forced choice task, the intensity of unimodal and/or bimodal stimuli. They were then asked to discriminate the sensory modality while their EEG responses were recorded. Results: We found evidence of multisensory integration for the supraliminal condition, following the classical optimal model. Importantly, even for subliminal trials, participants' performance in the bimodal condition was significantly more accurate when discriminating the intensity of the stimulation. Moreover, significant differences emerged between unimodal and bimodal activity templates in parieto-temporal areas known for their integrative role. Discussion: This converging evidence - although preliminary and in need of confirmation from further data - suggests that subliminal multimodal stimuli can be integrated, thus filling a meaningful gap in the debate about the relationship between consciousness and multisensory integration.
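The "optimal integration model" invoked in this abstract is usually formalized as reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance and the bimodal variance is smaller than either unimodal variance. The sketch below shows this textbook formulation; it is not necessarily the exact model fitted by the authors, and the numbers in the usage example are made up.

```python
import numpy as np

def mle_cue_combination(mu_v, sigma_v, mu_h, sigma_h):
    """Standard maximum-likelihood (reliability-weighted) cue combination.

    Given independent Gaussian estimates from vision (mu_v, sigma_v) and
    haptics (mu_h, sigma_h), the optimal bimodal estimate weights each cue
    by its reliability (inverse variance), and its standard deviation is
    never larger than that of the better single cue.
    """
    w_v = sigma_h ** 2 / (sigma_v ** 2 + sigma_h ** 2)   # weight on the visual cue
    w_h = 1.0 - w_v                                      # weight on the haptic cue
    mu_vh = w_v * mu_v + w_h * mu_h
    sigma_vh = np.sqrt((sigma_v ** 2 * sigma_h ** 2) / (sigma_v ** 2 + sigma_h ** 2))
    return mu_vh, sigma_vh

if __name__ == "__main__":
    # Predicted bimodal estimate and noise from two unimodal estimates.
    mu, s = mle_cue_combination(mu_v=10.0, sigma_v=2.0, mu_h=10.5, sigma_h=3.0)
    print(f"bimodal estimate = {mu:.2f}, bimodal sigma = {s:.2f} (smaller than either unimodal sigma)")
```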
Affiliation(s)
- Sergio Frumento
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
| | - Greta Preatoni
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
| | - Lauren Chee
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
| | - Angelo Gemignani
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Clinical Psychology Branch, Azienda Ospedaliero-Universitaria Pisana, Pisa, Italy
| | - Federico Ciotti
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
| | - Danilo Menicucci
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
| | - Stanisa Raspopovic
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
27
Nwabudike I, Che A. Early-life maturation of the somatosensory cortex: sensory experience and beyond. Front Neural Circuits 2024; 18:1430783. [PMID: 39040685 PMCID: PMC11260818 DOI: 10.3389/fncir.2024.1430783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2024] [Accepted: 06/20/2024] [Indexed: 07/24/2024] Open
Abstract
Early life experiences shape physical and behavioral outcomes throughout the lifetime. Sensory circuits are especially susceptible to environmental and physiological changes during development. However, the impact of different types of early life experience is often evaluated in isolation. In this mini review, we discuss the specific effects of postnatal sensory experience, sleep, social isolation, and substance exposure on barrel cortex development. Considering these concurrent factors will improve understanding of the etiology of atypical sensory perception in many neuropsychiatric and neurodevelopmental disorders.
Affiliation(s)
- Ijeoma Nwabudike
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, United States
| | - Alicia Che
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, United States
28
Magrou L, Joyce MKP, Froudist-Walsh S, Datta D, Wang XJ, Martinez-Trujillo J, Arnsten AFT. The meso-connectomes of mouse, marmoset, and macaque: network organization and the emergence of higher cognition. Cereb Cortex 2024; 34:bhae174. [PMID: 38771244 PMCID: PMC11107384 DOI: 10.1093/cercor/bhae174] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2024] [Revised: 03/29/2024] [Accepted: 04/08/2024] [Indexed: 05/22/2024] Open
Abstract
The recent publications of the inter-areal connectomes for mouse, marmoset, and macaque cortex have allowed deeper comparisons across rodent vs. primate cortical organization. In general, these show that the mouse has very widespread, "all-to-all" inter-areal connectivity (i.e. a "highly dense" connectome in a graph theoretical framework), while primates have a more modular organization. In this review, we highlight the relevance of these differences to function, including the example of primary visual cortex (V1) which, in the mouse, is interconnected with all other areas, therefore including other primary sensory and frontal areas. We argue that this dense inter-areal connectivity benefits multimodal associations, at the cost of reduced functional segregation. Conversely, primates have expanded cortices with a modular connectivity structure, where V1 is almost exclusively interconnected with other visual cortices, themselves organized in relatively segregated streams, and hierarchically higher cortical areas such as prefrontal cortex provide top-down regulation for specifying precise information for working memory storage and manipulation. Increased complexity in cytoarchitecture, connectivity, dendritic spine density, and receptor expression additionally reveal a sharper hierarchical organization in primate cortex. Together, we argue that these primate specializations permit separable deconstruction and selective reconstruction of representations, which is essential to higher cognition.
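The "highly dense connectome in a graph theoretical framework" mentioned here refers to graph density: the fraction of all possible directed inter-areal connections that are actually present. A toy computation is sketched below; the adjacency matrices and the example densities are random illustrative data, not the published mouse, marmoset, or macaque matrices.

```python
import numpy as np

def connectome_density(adjacency):
    """Density of a directed inter-areal connectome.

    adjacency : (n_areas, n_areas) binary matrix, adjacency[i, j] = 1 if a
                projection from area j to area i was found (diagonal ignored).
    Returns the fraction of the n * (n - 1) possible connections that exist.
    """
    a = np.asarray(adjacency, dtype=bool).copy()
    np.fill_diagonal(a, False)              # self-connections are not counted
    n = a.shape[0]
    return a.sum() / (n * (n - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    dense_like = rng.random((47, 47)) < 0.95    # illustrative near all-to-all graph
    modular_like = rng.random((91, 91)) < 0.6   # illustrative sparser graph
    print(connectome_density(dense_like), connectome_density(modular_like))
```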
Affiliation(s)
- Loïc Magrou
- Department of Neural Science, New York University, New York, NY 10003, United States
| | - Mary Kate P Joyce
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, United States
| | - Sean Froudist-Walsh
- School of Engineering Mathematics and Technology, University of Bristol, Bristol, BS8 1QU, United Kingdom
| | - Dibyadeep Datta
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
| | - Xiao-Jing Wang
- Department of Neural Science, New York University, New York, NY 10003, United States
| | - Julio Martinez-Trujillo
- Departments of Physiology and Pharmacology, and Psychiatry, Schulich School of Medicine and Dentistry, Western University, London, ON, N6A 3K7, Canada
| | - Amy F T Arnsten
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, United States
29
Hermosillo RJM, Moore LA, Feczko E, Miranda-Domínguez Ó, Pines A, Dworetsky A, Conan G, Mooney MA, Randolph A, Graham A, Adeyemo B, Earl E, Perrone A, Carrasco CM, Uriarte-Lopez J, Snider K, Doyle O, Cordova M, Koirala S, Grimsrud GJ, Byington N, Nelson SM, Gratton C, Petersen S, Feldstein Ewing SW, Nagel BJ, Dosenbach NUF, Satterthwaite TD, Fair DA. A precision functional atlas of personalized network topography and probabilities. Nat Neurosci 2024; 27:1000-1013. [PMID: 38532024 PMCID: PMC11089006 DOI: 10.1038/s41593-024-01596-5] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Accepted: 02/08/2024] [Indexed: 03/28/2024]
Abstract
Although the general location of functional neural networks is similar across individuals, there is vast person-to-person topographic variability. To capture this, we implemented precision brain mapping functional magnetic resonance imaging methods to establish an open-source, method-flexible set of precision functional network atlases: the Masonic Institute for the Developing Brain (MIDB) Precision Brain Atlas. This atlas is an evolving resource comprising 53,273 individual-specific network maps, from more than 9,900 individuals, across ages and cohorts, including the Adolescent Brain Cognitive Development study, the Developmental Human Connectome Project and others. We also generated probabilistic network maps across multiple ages and integration zones (using a new overlapping mapping technique, Overlapping MultiNetwork Imaging). Using regions of high network invariance improved the reproducibility of executive function statistical maps in brain-wide associations compared to group average-based parcellations. Finally, we provide a potential use case for probabilistic maps for targeted neuromodulation. The atlas is expandable to alternative datasets through an online interface, encouraging the scientific community to explore and contribute to understanding human brain function more precisely.
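One simple way to read "probabilistic network maps" is as the proportion of individuals whose individual-specific parcellation assigns a given cortical location to a given network. The sketch below computes exactly that under this assumption; it is an illustration, not the atlas's actual estimation procedure, and the data and the 0.75 invariance threshold are made up.

```python
import numpy as np

def probabilistic_network_maps(individual_labels, n_networks):
    """Toy probabilistic network maps from individual-specific parcellations.

    individual_labels : (n_subjects, n_vertices) integer network assignment
                        per subject (values 0 .. n_networks - 1)
    Returns an (n_networks, n_vertices) array giving, for every vertex, the
    proportion of subjects assigned to each network at that location.
    """
    n_subjects, n_vertices = individual_labels.shape
    prob = np.zeros((n_networks, n_vertices))
    for net in range(n_networks):
        prob[net] = (individual_labels == net).mean(axis=0)
    return prob

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    labels = rng.integers(0, 3, size=(100, 10))        # 100 subjects, 10 vertices, 3 networks
    p = probabilistic_network_maps(labels, n_networks=3)
    print(p.round(2))
    # Vertices of high network invariance: maximum probability above a chosen threshold.
    print(np.where(p.max(axis=0) > 0.75)[0])
```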
Affiliation(s)
- Robert J M Hermosillo
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA.
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA.
| | - Lucille A Moore
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
| | - Eric Feczko
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
| | - Óscar Miranda-Domínguez
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
| | - Adam Pines
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Penn Lifespan Informatics and Neuroimaging Center, University of Pennsylvania, Philadelphia, PA, USA
| | - Ally Dworetsky
- Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Department of Psychology, Florida State University, Tallahassee, FL, USA
| | - Gregory Conan
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Department of Psychiatry, Oregon Health & Science University, Portland, OR, USA
| | - Michael A Mooney
- Department of Psychiatry, Oregon Health & Science University, Portland, OR, USA
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, OR, USA
- Knight Cancer Institute, Oregon Health & Science University, Portland, OR, USA
- Center for Mental Health Innovation, Oregon Health and Science University, Portland, OR, USA
| | - Anita Randolph
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
| | - Alice Graham
- Department of Psychiatry, Oregon Health & Science University, Portland, OR, USA
| | - Babatunde Adeyemo
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
| | - Eric Earl
- Data Science and Sharing Team, National Institute of Mental Health, Bethesda, MD, USA
| | - Anders Perrone
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
| | - Cristian Morales Carrasco
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
| | | | - Kathy Snider
- Department of Psychiatry, Oregon Health & Science University, Portland, OR, USA
| | - Olivia Doyle
- Department of Psychiatry, Oregon Health & Science University, Portland, OR, USA
| | - Michaela Cordova
- Joint Doctoral Program in Clinical Psychology, San Diego State University, San Diego, CA, USA
- Joint Doctoral Program in Clinical Psychology, University of California San Diego, San Diego, CA, USA
| | - Sanju Koirala
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Institute of Child Development, University of Minnesota, Minneapolis, MN, USA
| | - Gracie J Grimsrud
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
| | - Nora Byington
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
| | - Steven M Nelson
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
| | - Caterina Gratton
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Department of Psychology, Florida State University, Tallahassee, FL, USA
- Department of Psychological and Brain Sciences, Washington University School of Medicine, St. Louis, MO, USA
| | - Steven Petersen
- Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Department of Psychological and Brain Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Biomedical Engineering, Washington University School of Medicine, St. Louis, MO, USA
| | | | - Bonnie J Nagel
- Department of Psychiatry, Oregon Health & Science University, Portland, OR, USA
| | - Nico U F Dosenbach
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
| | - Theodore D Satterthwaite
- Penn Lifespan Informatics and Neuroimaging Center, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
| | - Damien A Fair
- Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
- Institute of Child Development, University of Minnesota, Minneapolis, MN, USA
30
Schnepel P, Paricio-Montesinos R, Ezquerra-Romano I, Haggard P, Poulet JFA. Cortical cellular encoding of thermotactile integration. Curr Biol 2024; 34:1718-1730.e3. [PMID: 38582078 DOI: 10.1016/j.cub.2024.03.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 12/24/2023] [Accepted: 03/13/2024] [Indexed: 04/08/2024]
Abstract
Recent evidence suggests that primary sensory cortical regions play a role in the integration of information from multiple sensory modalities. How primary cortical neurons integrate different sources of sensory information is unclear, partly because non-primary sensory input to a cortical sensory region is often weak or modulatory. To address this question, we take advantage of the robust representation of thermal (cooling) and tactile stimuli in mouse forelimb primary somatosensory cortex (fS1). Using a thermotactile detection task, we show that the perception of threshold-level cool or tactile information is enhanced when they are presented simultaneously, compared with presentation alone. To investigate the cortical cellular correlates of thermotactile integration, we performed in vivo extracellular recordings from fS1 in awake resting and anesthetized mice during unimodal and bimodal stimulation of the forepaw. Unimodal stimulation evoked thermal- or tactile- specific excitatory and inhibitory responses of fS1 neurons. The most prominent features of combined thermotactile stimulation are the recruitment of unimodally silent fS1 neurons, non-linear integration features, and response dynamics that favor longer response durations with additional spikes. Together, we identify quantitative and qualitative changes in cortical encoding that may underlie the improvement in perception of thermotactile surfaces during haptic exploration.
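The "non-linear integration features" reported here can be quantified, in one common convention, by comparing the bimodal spike count with the sum of the two unimodal spike counts. The additivity index below is an illustrative formalization on synthetic Poisson spike counts, not necessarily the metric used in the study.

```python
import numpy as np

def additivity_index(spikes_bimodal, spikes_cool, spikes_touch):
    """Toy additivity index for thermotactile integration in a single neuron.

    Inputs are per-trial evoked spike counts for the bimodal (cool + touch)
    condition and the two unimodal conditions. The index is
    (TC - (T + C)) / (T + C): values above 0 indicate supra-additive and
    values below 0 sub-additive integration.
    """
    tc = np.mean(spikes_bimodal)
    t_plus_c = np.mean(spikes_touch) + np.mean(spikes_cool)
    return (tc - t_plus_c) / t_plus_c

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    cool = rng.poisson(2.0, size=50)      # evoked spikes per trial, cooling only
    touch = rng.poisson(3.0, size=50)     # tactile only
    bimodal = rng.poisson(7.0, size=50)   # cooling + touch together
    idx = additivity_index(bimodal, cool, touch)
    print(f"additivity index = {idx:.2f} ({'supra' if idx > 0 else 'sub'}-additive)")
```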
Affiliation(s)
- Philipp Schnepel
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
| | - Ricardo Paricio-Montesinos
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
| | - Ivan Ezquerra-Romano
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Institute of Cognitive Neuroscience, University College London (UCL), London WC1N 3AZ, UK
| | - Patrick Haggard
- Institute of Cognitive Neuroscience, University College London (UCL), London WC1N 3AZ, UK
| | - James F A Poulet
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany.
31
Ristic J, Capozzi F. The role of visual and auditory information in social event segmentation. Q J Exp Psychol (Hove) 2024; 77:626-638. [PMID: 37154602 PMCID: PMC10880416 DOI: 10.1177/17470218231176471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2022] [Revised: 04/26/2023] [Accepted: 05/01/2023] [Indexed: 05/10/2023]
Abstract
Humans organise their social worlds into social and nonsocial events. Social event segmentation refers to the ability to parse the environmental content into social and nonsocial events or units. Here, we investigated the role that perceptual information from visual and auditory modalities, in isolation and in conjunction, played in social event segmentation. Participants viewed a video clip depicting an interaction between two actors and marked the boundaries of social and nonsocial events. Depending on the condition, the clip at first contained only auditory or only visual information. Then, the clip was shown containing both auditory and visual information. Higher overall group consensus and response consistency in parsing the clip were found for social segmentation and when both auditory and visual information was available. Presenting the clip in the visual domain alone benefitted group agreement in social segmentation, while the inclusion of auditory information (in the audiovisual condition) also improved response consistency in nonsocial segmentation. Thus, social segmentation utilises information from the visual modality, with auditory cues contributing under ambiguous or uncertain conditions and during segmentation of nonsocial content.
Affiliation(s)
- Jelena Ristic
- Department of Psychology, McGill University, Montreal, Québec, Canada
| | - Francesca Capozzi
- Department of Psychology, Université du Québec à Montréal, Montreal, Québec, Canada
32
Diana L, Casati C, Melzi L, Bianchi Marzoli S, Bolognini N. The effects of occipital and parietal tDCS on chronic visual field defects after brain injury. Front Neurol 2024; 15:1340365. [PMID: 38419713 PMCID: PMC10899507 DOI: 10.3389/fneur.2024.1340365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2023] [Accepted: 01/24/2024] [Indexed: 03/02/2024] Open
Abstract
Introduction: Homonymous visual field defects (HVFDs) following acquired brain lesions affect independent living by hampering several activities of everyday life. Available treatments are intensive and week- or month-long. Transcranial direct current stimulation (tDCS), a plasticity-modulating non-invasive brain stimulation technique, could be combined with behavioral trainings to boost their efficacy or reduce treatment duration. Some promising attempts have been made pairing occipital tDCS with visual restitution training; however, less is known about which area/network is best stimulated in association with compensatory approaches aimed at improving exploratory abilities, such as multisensory trainings. Methods: In a proof-of-principle, sham-controlled, single-blind study, 15 participants with chronic HVFDs underwent four one-shot sessions of active or sham anodal tDCS applied over the ipsilesional occipital cortex, or the ipsilesional or contralesional posterior parietal cortex. tDCS was delivered during a compensatory multisensory (audiovisual) training. Before and immediately after each tDCS session, participants carried out a visual detection task and two visual search tasks (EF and Triangles search tests). Accuracy (ACC) and response times (RTs) were analyzed with generalized mixed models. We investigated differences in baseline performance, clinical-demographic and lesion factors between tDCS responders and non-responders, based on post-tDCS behavioral improvements. Lastly, we conducted exploratory analyses to compare left and right brain-damaged participants. Results: RTs improved after active ipsilesional occipital and parietal tDCS in the visual search tasks, while no changes in ACC were detected. Responders to ipsilesional occipital tDCS (Triangles task) had shorter disease duration and smaller lesions of the parietal cortex and the superior longitudinal fasciculus. On the other hand, on the EF test, participants with larger damage to the temporo-parietal cortex or the fronto-occipital white matter tracts showed a larger benefit from contralesional parietal tDCS. Overall, the visual search RT improvements were larger in participants with right-sided hemispheric lesions. Conclusion: The present results show the facilitatory effects of occipital and parietal tDCS combined with compensatory multisensory training on visual field exploration in HVFDs, suggesting a potential for the development of new neuromodulation treatments to improve visual scanning behavior in brain-injured patients.
Affiliation(s)
- Lorenzo Diana
- Laboratory of Neuropsychology, Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
| | - Carlotta Casati
- Laboratory of Neuropsychology, Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
| | - Lisa Melzi
- Neuro-Ophthalmology Center and Ocular Electrophysiology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
| | - Stefania Bianchi Marzoli
- Neuro-Ophthalmology Center and Ocular Electrophysiology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
| | - Nadia Bolognini
- Laboratory of Neuropsychology, Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Milano-Bicocca and NeuroMI, Milan, Italy
33
Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024; 36:303-326. [PMID: 38010315 DOI: 10.1162/jocn_a_02090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2023]
Abstract
Auditory commands are often executed more efficiently than visual commands. However, empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect and found consistently enhanced RT benefits for the matched auditory cues compared with the matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: The auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang
- South China Normal University, Guangzhou, China
| | - Ying Fang
- South China Normal University, Guangzhou, China
| | - Qiang Guo
- Guangdong Sanjiu Brain Hospital, Guangzhou, China
| | - Lu Shen
- South China Normal University, Guangzhou, China
| | - Qi Chen
- South China Normal University, Guangzhou, China
34
Ross LA, Molholm S, Butler JS, Del Bene VA, Brima T, Foxe JJ. Neural correlates of audiovisual narrative speech perception in children and adults on the autism spectrum: A functional magnetic resonance imaging study. Autism Res 2024; 17:280-310. [PMID: 38334251 DOI: 10.1002/aur.3104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Accepted: 01/19/2024] [Indexed: 02/10/2024]
Abstract
Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals with those of 41 age-matched non-autistic controls when presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence for altered processing of the auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where the activation to the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflective of more general mechanisms, such as an altered disengagement of Default Mode Network processes during the observation of the language stimulus across conditions.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
| | - Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
| | - John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- School of Mathematics and Statistics, Technological University Dublin, City Campus, Dublin, Ireland
| | - Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- Heersink School of Medicine, Department of Neurology, University of Alabama at Birmingham, Birmingham, Alabama, USA
| | - Tufikameni Brima
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
| | - John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
35
Pishghadam R, Shayesteh S, Daneshvarfard F, Boustani N, Seyednozadi Z, Zabetipour M, Pishghadam M. Cognition-Emotion Interaction during L2 Sentence Comprehension: The Correlation of ERP and GSR Responses to Sense Combinations. JOURNAL OF PSYCHOLINGUISTIC RESEARCH 2024; 53:7. [PMID: 38281286 DOI: 10.1007/s10936-024-10039-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/18/2023] [Indexed: 01/30/2024]
Abstract
This study mainly examined the role of combining three senses (auditory, visual, and tactile) versus five senses (auditory, visual, tactile, olfactory, and gustatory) in the correlation between electrophysiological and electrodermal responses underlying second language (L2) sentence comprehension. Forty subjects completed two acceptability judgment tasks comprising congruent and semantically or pragmatically incongruent sentences. Event-related potential (ERP) and galvanic skin response (GSR) data for both the target and final words of the sentences were collected and analyzed. The results revealed an interaction between cognitive and emotional responses in both semantically and pragmatically incongruent sentences, although the interaction unfolds over a longer time course for pragmatically incongruent sentences because of their greater complexity. Based on the ERP-GSR correlation results, we further found that the five-sense combination approach improves L2 sentence comprehension and interest in the learning materials yet reduces the level of excitement or arousal. While this approach might be beneficial for some learners, it might be detrimental for those who favor stimulating learning environments.
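As a rough illustration of relating electrophysiological and electrodermal measures, the sketch below correlates hypothetical per-subject ERP amplitudes with skin-conductance peaks using a simple Pearson correlation; the measures, windows, and numbers are assumptions for demonstration, not the authors' analysis.

```python
# Illustrative sketch (not the authors' pipeline): correlating per-subject ERP
# amplitudes with galvanic skin responses for one sentence condition.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects = 40                         # matches the reported sample size

# Hypothetical per-subject measures: mean amplitude in an ERP window (microvolts)
# at a centro-parietal electrode and peak skin-conductance response (microsiemens).
erp_amplitude = rng.normal(-2.0, 1.0, n_subjects)
gsr_peak = 0.3 * (-erp_amplitude) + rng.normal(0.0, 0.5, n_subjects)

r, p = pearsonr(erp_amplitude, gsr_peak)
print(f"ERP-GSR correlation: r = {r:.2f}, p = {p:.3f}")
```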
Collapse
Affiliation(s)
- Reza Pishghadam
- Faculty of Letters and Humanities, Ferdowsi University of Mashhad, Azadi Square, Mashhad, Khorasan-e-Razavi, Iran
| | - Shaghayegh Shayesteh
- Faculty of Letters and Humanities, Ferdowsi University of Mashhad, Azadi Square, Mashhad, Khorasan-e-Razavi, Iran.
| | - Farveh Daneshvarfard
- Faculty of Letters and Humanities, Ferdowsi University of Mashhad, Azadi Square, Mashhad, Khorasan-e-Razavi, Iran
| | - Nasim Boustani
- Faculty of Letters and Humanities, Ferdowsi University of Mashhad, Azadi Square, Mashhad, Khorasan-e-Razavi, Iran
| | - Zahra Seyednozadi
- Faculty of Letters and Humanities, Ferdowsi University of Mashhad, Azadi Square, Mashhad, Khorasan-e-Razavi, Iran
| | - Mohammad Zabetipour
- Faculty of Letters and Humanities, Ferdowsi University of Mashhad, Azadi Square, Mashhad, Khorasan-e-Razavi, Iran
| | - Morteza Pishghadam
- Faculty of Letters and Humanities, Ferdowsi University of Mashhad, Azadi Square, Mashhad, Khorasan-e-Razavi, Iran
| |
Collapse
|
36
|
Ku Y, Zhou Y. Crossmodal Associations and Working Memory in the Brain. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:91-100. [PMID: 38270855 DOI: 10.1007/978-981-99-7611-9_6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
Crossmodal associations between stimuli from different sensory modalities can emerge in non-synesthetic people and be stored in working memory to guide goal-directed behavior. This chapter reviews a broad range of studies in this field to summarize where, when, and how crossmodal associations and working memory are processed. In brain regions traditionally considered unimodal primary sensory areas, neural activity can be influenced by crossmodal sensory signals at a very early stage of information processing, a phenomenon that cannot be attributed to feedback projections from higher-level associative areas. Neural processing then proceeds to associative cortical areas, including the posterior parietal cortex and prefrontal cortex. Neural oscillations in multiple frequency bands may reflect brain activity underlying crossmodal associations, and neural synchrony is likely related to the underlying neural mechanisms. Primary sensory areas and associative areas coordinate through neural synchrony to support crossmodal associations and to guide working memory performance.
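Because the chapter highlights neural synchrony as a candidate mechanism, here is a minimal, assumed sketch of one standard synchrony index, the phase-locking value (PLV) between two simulated alpha-band signals; it is an illustration, not taken from the chapter itself.

```python
# Minimal sketch: phase-locking value (PLV) between two narrowband signals,
# standing in for alpha-band activity recorded from two brain areas.
import numpy as np
from scipy.signal import hilbert

fs = 500.0                                   # sampling rate in Hz (assumption)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(2)

# Two 10 Hz signals with a roughly constant phase lag plus slow phase jitter.
jitter_x = 0.002 * rng.standard_normal(t.size).cumsum()
jitter_y = 0.002 * rng.standard_normal(t.size).cumsum()
x = np.sin(2 * np.pi * 10 * t + jitter_x)
y = np.sin(2 * np.pi * 10 * t + 0.6 + jitter_y)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))   # 1 = perfect synchrony
print(f"simulated alpha-band PLV: {plv:.2f}")
```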
Collapse
Affiliation(s)
- Yixuan Ku
- Department of Psychology, Center for Brain and Mental Well-being, Sun Yat-sen University, Guangzhou, China.
- Peng Cheng Laboratory, Shenzhen, China.
| | - Yongdi Zhou
- School of Psychology, Shenzhen University, Shenzhen, China
| |
Collapse
|
37
|
Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:153-172. [PMID: 38270859 DOI: 10.1007/978-981-99-7611-9_10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of external events or objects. Despite its significance, multisensory integration is not a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not emerge immediately but develops gradually during early postnatal life (in cats, roughly 4-12 weeks are required). Multisensory experience is critical for this developmental process. If animals are prevented from sensing normal visual scenes or sounds (i.e., deprived of the relevant multisensory experience), the development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This section summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic change and modification of neural circuits in cortical and subcortical areas.
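For readers unfamiliar with how such integration is usually quantified at the neuronal level, the snippet below computes the classic multisensory enhancement index from hypothetical spike counts; the numbers are invented, and the best-unisensory-baseline convention is an assumption consistent with the superior colliculus literature rather than code from this chapter.

```python
# Illustrative calculation of the multisensory enhancement index for one neuron:
# how much the paired audiovisual response exceeds the best unisensory response.
import numpy as np

# Hypothetical spike counts per trial.
visual = np.array([3, 4, 2, 5, 3])
auditory = np.array([2, 1, 3, 2, 2])
audiovisual = np.array([8, 9, 7, 10, 8])

best_unisensory = max(visual.mean(), auditory.mean())
enhancement = 100 * (audiovisual.mean() - best_unisensory) / best_unisensory
# Values > 0 mean the paired response exceeds the best unisensory response.
print(f"multisensory enhancement: {enhancement:.0f}%")
```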
Collapse
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China.
| | - Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
| |
Collapse
|
38
|
Alwashmi K, Meyer G, Rowe F, Ward R. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. Neuroimage 2024; 285:120483. [PMID: 38048921 DOI: 10.1016/j.neuroimage.2023.120483] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 11/18/2023] [Accepted: 12/01/2023] [Indexed: 12/06/2023] Open
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual environments (VR). Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. Behavioural performance, operationalised as mean reaction time (RT) reduction in VR, improved significantly. In separate tests in a controlled laboratory environment, the behavioural performance gains from the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating faster response times supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks: a visual search task and an involuntary visual task. The fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe, and the cerebellum. These functional changes were observed only for the trained multisensory task and not for unimodal visual stimulation, and activation changes in the thalamus were significantly correlated with behavioural performance improvements. This study demonstrates that incorporating spatial auditory cues into voluntary visual training in VR leads to augmented brain activation changes in multisensory integration, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
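A hypothetical sketch of the kind of brain-behaviour relationship reported here (thalamic activation change versus reaction-time improvement) is shown below; the simulated data, effect sizes, and the choice of a Spearman correlation are illustrative assumptions only.

```python
# Hypothetical sketch relating training-induced fMRI change to behavioural gain;
# variable names, numbers, and the rank correlation are assumptions for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 20                                           # matches the reported sample size

rt_baseline = rng.normal(650, 60, n)             # mean reaction times (ms) before training
rt_week4 = rt_baseline - rng.normal(80, 30, n)   # faster responses after four weeks
rt_gain = rt_baseline - rt_week4                 # larger = greater improvement
delta_bold_thalamus = 0.004 * rt_gain + rng.normal(0, 0.1, n)   # simulated BOLD change

rho, p = spearmanr(rt_gain, delta_bold_thalamus)
print(f"RT improvement vs. thalamic BOLD change: rho = {rho:.2f}, p = {p:.3f}")
```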
Collapse
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
| | - Georg Meyer
- Digital Innovation Facility, University of Liverpool, United Kingdom
| | - Fiona Rowe
- Institute of Population Health, University of Liverpool, United Kingdom
| | - Ryan Ward
- Digital Innovation Facility, University of Liverpool, United Kingdom; School of Computer Science and Mathematics, Liverpool John Moores University, United Kingdom
| |
Collapse
|
39
|
Zuo Y, Wang Z. Neural Oscillations and Multisensory Processing. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:121-137. [PMID: 38270857 DOI: 10.1007/978-981-99-7611-9_8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
Neural oscillations contribute to sensory processing by coordinating synchronized neuronal activity. Synchronization of gamma oscillations supports local computation of feedforward signals, whereas synchronization of alpha-beta oscillations supports feedback processing across long-range connections. These spatially and spectrally segregated bi-directional signals may be integrated by a mechanism of cross-frequency coupling. Synchronization of neural oscillations has also been proposed as a mechanism for information integration across multiple sensory modalities. A transient or rhythmic stimulus from one modality may lead to phase alignment of ongoing neural oscillations in multiple sensory cortices, through cross-modal phase reset or cross-modal neural entrainment. Synchronized activity in multiple sensory cortices is more likely to drive stronger activity in downstream areas. Compared with synchronized oscillations, asynchronous oscillations may impede signal processing and may contribute to sensory selection by setting the oscillations in the target-related cortex and those in the distractor-related cortex to opposite phases.
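To illustrate the cross-frequency coupling mechanism mentioned above, the following assumed sketch computes a mean-vector-length phase-amplitude coupling index between alpha phase and gamma amplitude on a simulated signal; the parameters and frequency bands are placeholders, not the chapter's analysis.

```python
# Rough sketch of phase-amplitude coupling (one common form of cross-frequency
# coupling): how strongly the gamma envelope is locked to a preferred alpha phase.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000.0                                   # sampling rate in Hz (assumption)
t = np.arange(0, 5.0, 1.0 / fs)
rng = np.random.default_rng(3)

alpha = np.sin(2 * np.pi * 10 * t)
gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t)       # gamma bursts ride on alpha phase
lfp = alpha + 0.5 * gamma + 0.5 * rng.standard_normal(t.size)

def bandpass(sig, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

phase = np.angle(hilbert(bandpass(lfp, 8, 12, fs)))    # alpha phase
amp = np.abs(hilbert(bandpass(lfp, 50, 70, fs)))       # gamma amplitude envelope

# Mean-vector-length index: larger values indicate stronger phase-amplitude coupling.
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
print(f"phase-amplitude coupling index: {mvl:.3f}")
```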
Collapse
Affiliation(s)
- Yanfang Zuo
- Department of Neurology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, China
- Center for Medical Research on Innovation and Translation, Institute of Clinical Medicine, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, China
| | - Zuoren Wang
- Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science & Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
40
|
Leib R, Howard IS, Millard M, Franklin DW. Behavioral Motor Performance. Compr Physiol 2023; 14:5179-5224. [PMID: 38158372 DOI: 10.1002/cphy.c220032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2024]
Abstract
The human sensorimotor control system has exceptional abilities to perform skillful actions. We easily switch between strenuous tasks that involve brute force, such as lifting a heavy sewing machine, and delicate movements, such as threading a needle in the same machine. Through a structure comprising different control architectures, the motor system updates its capacity to perform via our daily interaction with a fluctuating environment. However, several issues make this a difficult computational problem for the brain to solve. The brain must control a nonlinear, nonstationary neuromuscular system, with redundant and occasionally undesired degrees of freedom, in an uncertain environment, using a body in which information transmission is subject to delays and noise. To gain insight into the mechanisms of motor control, here we survey movement laws and invariances that shape our everyday motion. We then examine the major solutions to each of these problems in the three parts of the sensorimotor control system: sensing, planning, and acting. We focus on how the sensory system, the control architectures, and the structure and operation of the muscles serve as complementary mechanisms to overcome deviations and disturbances to motor behavior and give rise to skillful motor performance. We conclude with possible future research directions based on suggested links between the operation of the sensorimotor system across the movement stages. © 2024 American Physiological Society. Compr Physiol 14:5179-5224, 2024.
Collapse
Affiliation(s)
- Raz Leib
- Neuromuscular Diagnostics, TUM School of Medicine and Health, Department of Health and Sport Sciences, Technical University of Munich, Munich, Germany
| | - Ian S Howard
- School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, UK
| | - Matthew Millard
- Institute of Sport and Movement Science, University of Stuttgart, Stuttgart, Germany
- Institute of Engineering and Computational Mechanics, University of Stuttgart, Stuttgart, Germany
| | - David W Franklin
- Neuromuscular Diagnostics, TUM School of Medicine and Health, Department of Health and Sport Sciences, Technical University of Munich, Munich, Germany
- Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, Munich, Germany
- Munich Data Science Institute (MDSI), Technical University of Munich, Munich, Germany
| |
Collapse
|
41
|
Liu Y, Wang Z, Wei T, Zhou S, Yin Y, Mi Y, Liu X, Tang Y. Alterations of Audiovisual Integration in Alzheimer's Disease. Neurosci Bull 2023; 39:1859-1872. [PMID: 37812301 PMCID: PMC10661680 DOI: 10.1007/s12264-023-01125-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 06/22/2023] [Indexed: 10/10/2023] Open
Abstract
Audiovisual integration is a vital information process involved in cognition and is closely correlated with aging and Alzheimer's disease (AD). In this review, we evaluated the altered audiovisual integrative behavioral symptoms in AD. We further analyzed the bidirectional relationships between AD pathologies and audiovisual integration alterations and suggested possible mechanisms underlying these alterations, including the imbalance between energy demand and supply, activity-dependent degeneration, disrupted brain networks, and cognitive resource overloading. Then, based on clinical characteristics, including electrophysiological and imaging data related to audiovisual integration, we emphasized the value of audiovisual integration alterations as potential biomarkers for the early diagnosis and progression of AD. We also highlighted that treatments targeting audiovisual integration contributed to widespread pathological improvements in AD animal models and cognitive improvements in AD patients. Moreover, investigation of audiovisual integration alterations in AD provides new insights into sensory information processing.
Collapse
Affiliation(s)
- Yufei Liu
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
| | - Zhibin Wang
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
| | - Tao Wei
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
| | - Shaojiong Zhou
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
| | - Yunsi Yin
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
| | - Yingxin Mi
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
| | - Xiaoduo Liu
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
| | - Yi Tang
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China.
| |
Collapse
|
42
|
Braunitzer G, Tót K, Eördegh G, Hegedűs A, Kiss Á, Kóbor J, Pertich Á, Nagy A. Suboptimal multisensory processing in pediatric migraine without aura: a comparative, cross-sectional study. Sci Rep 2023; 13:19422. [PMID: 37940637 PMCID: PMC10632508 DOI: 10.1038/s41598-023-46088-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Accepted: 10/27/2023] [Indexed: 11/10/2023] Open
Abstract
Alterations of sensory processing in migraine are well known. There is some evidence that multisensory processing is altered in migraine as well, but the area is underexplored, especially regarding pediatric migraine. A visual and an audiovisual version of the Rutgers Acquired Equivalence Test paradigm were administered to pediatric patients with migraine without aura (aged 7-17.5 years) and to age- and sex-matched controls. The application of audiovisual stimuli significantly facilitated associative pair learning in migraine-free children and adolescents, but not in pediatric migraine patients. These results corroborate the hypothesis that multisensory processing is altered in pediatric migraine without aura.
Collapse
Affiliation(s)
- Gábor Braunitzer
- Laboratory for Perception and Cognition and Clinical Neuroscience, Nyírő Gyula Hospital, Lehel Utca 59-61, Budapest, 1135, Hungary
| | - Kálmán Tót
- Department of Physiology, Medical School, University of Szeged, Szeged, Hungary
| | - Gabriella Eördegh
- Faculty of Health Sciences and Social Studies, University of Szeged, Szeged, Hungary
| | - András Hegedűs
- Department of Physiology, Medical School, University of Szeged, Szeged, Hungary
| | - Ádám Kiss
- Department of Physiology, Medical School, University of Szeged, Szeged, Hungary
| | - Jenő Kóbor
- Department of Pediatrics and Pediatric Health Center, Medical School, University of Szeged, Szeged, Hungary
| | - Ákos Pertich
- Department of Physiology, Medical School, University of Szeged, Szeged, Hungary
| | - Attila Nagy
- Department of Physiology, Medical School, University of Szeged, Szeged, Hungary.
| |
Collapse
|
43
|
Antono JE, Dang S, Auksztulewicz R, Pooresmaeili A. Distinct Patterns of Connectivity between Brain Regions Underlie the Intra-Modal and Cross-Modal Value-Driven Modulations of the Visual Cortex. J Neurosci 2023; 43:7361-7375. [PMID: 37684031 PMCID: PMC10621764 DOI: 10.1523/jneurosci.0355-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 07/30/2023] [Accepted: 08/26/2023] [Indexed: 09/10/2023] Open
Abstract
Past reward associations may be signaled by stimuli from different sensory modalities; however, it remains unclear how different types of reward-associated stimuli modulate sensory perception. In this human fMRI study (female and male participants), a visual target was simultaneously presented with either an intra-modal (visual) or a cross-modal (auditory) cue that had previously been associated with rewards. We hypothesized that, depending on the sensory modality of the cues, distinct neural mechanisms underlie the value-driven modulation of visual processing. Using a multivariate approach, we confirmed that reward-associated cues enhanced the target representation in early visual areas, and we identified the brain valuation regions. Then, using an effective connectivity analysis, we tested three possible patterns of connectivity that could underlie the modulation of the visual cortex: a direct pathway from the frontal valuation areas to the visual areas, a mediated pathway through the attention-related areas, and a mediated pathway that additionally involved sensory association areas. We found evidence for the third model, demonstrating that reward-related information in both sensory modalities is communicated across the valuation and attention-related brain regions. Additionally, the superior temporal areas were recruited when reward was cued cross-modally. The strongest dissociation between the intra- and cross-modal reward-driven effects was observed at the level of the feedforward and feedback connections of the visual cortex estimated from the winning model. These results suggest that, in the presence of previously rewarded stimuli from different sensory modalities, a combination of domain-general and domain-specific mechanisms is recruited across the brain to adjust visual perception. SIGNIFICANCE STATEMENT Reward has a profound effect on perception, but it is not known whether shared or disparate mechanisms underlie the reward-driven effects across sensory modalities. In this human fMRI study, we examined the reward-driven modulation of the visual cortex by visual (intra-modal) and auditory (cross-modal) reward-associated cues. Using a model-based approach to identify the most plausible pattern of inter-regional effective connectivity, we found that higher-order areas involved in valuation and attentional processing were recruited by both types of rewards. However, the pattern of connectivity between these areas and the early visual cortex was distinct between the intra- and cross-modal rewards. This evidence suggests that, to effectively adapt to the environment, reward signals may recruit both domain-general and domain-specific mechanisms.
Collapse
Affiliation(s)
- Jessica Emily Antono
- Perception and Cognition Lab, European Neuroscience Institute Goettingen-A Joint Initiative of the University Medical Center Goettingen and the Max-Planck-Society, Goettingen, 37077, Germany
| | - Shilpa Dang
- Perception and Cognition Lab, European Neuroscience Institute Goettingen-A Joint Initiative of the University Medical Center Goettingen and the Max-Planck-Society, Goettingen, 37077, Germany
- School of Artificial Intelligence and Data Science, Indian Institute of Technology Jodhpur, Karwar, Jodhpur 342030, India
| | - Ryszard Auksztulewicz
- Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, 14195, Germany
| | - Arezoo Pooresmaeili
- Perception and Cognition Lab, European Neuroscience Institute Goettingen-A Joint Initiative of the University Medical Center Goettingen and the Max-Planck-Society, Goettingen, 37077, Germany
| |
Collapse
|
44
|
Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286 DOI: 10.1016/j.tics.2023.04.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 04/24/2023] [Accepted: 04/25/2023] [Indexed: 05/21/2023]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
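The reliability-weighted (maximum-likelihood) integration model on which this developmental literature builds can be written in a few lines; the sketch below is a textbook illustration with assumed cue reliabilities, not an analysis from the review.

```python
# Minimal sketch of reliability-weighted audiovisual spatial integration:
# each cue is weighted by its inverse variance (its reliability).
import numpy as np

def integrate(mu_v, sigma_v, mu_a, sigma_a):
    """Combine visual and auditory location estimates by inverse-variance weighting."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    mu_av = w_v * mu_v + (1 - w_v) * mu_a
    sigma_av = np.sqrt(1 / (1 / sigma_v**2 + 1 / sigma_a**2))
    return mu_av, sigma_av

# A reliable visual cue at 0 deg combined with a noisier auditory cue at 10 deg:
mu, sigma = integrate(mu_v=0.0, sigma_v=2.0, mu_a=10.0, sigma_a=8.0)
print(f"integrated estimate: {mu:.1f} deg, sd = {sigma:.1f} deg")
```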
Collapse
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany.
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| |
Collapse
|
45
|
Layer N, Abdel-Latif KHA, Radecke JO, Müller V, Weglage A, Lang-Roth R, Walger M, Sandmann P. Effects of noise and noise reduction on audiovisual speech perception in cochlear implant users: An ERP study. Clin Neurophysiol 2023; 154:141-156. [PMID: 37611325 DOI: 10.1016/j.clinph.2023.07.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 06/19/2023] [Accepted: 07/14/2023] [Indexed: 08/25/2023]
Abstract
OBJECTIVE Hearing with a cochlear implant (CI) is difficult in noisy environments, but the use of noise reduction algorithms, specifically ForwardFocus, can improve speech intelligibility. The current event-related potential (ERP) study examined the electrophysiological correlates of this perceptual improvement. METHODS Ten bimodal CI users performed a syllable-identification task in auditory and audiovisual conditions, with syllables presented from the front and stationary noise presented from the sides. Brainstorm was used for spatio-temporal evaluation of ERPs. RESULTS CI users revealed an audiovisual benefit, as reflected by shorter response times and greater activation in temporal and occipital regions at P2 latency. However, in both auditory and audiovisual conditions, background noise hampered speech processing, leading to longer response times and delayed auditory cortex activation at N1 latency. Nevertheless, activating ForwardFocus resulted in shorter response times, reduced listening effort, and enhanced superior frontal cortex activation at P2 latency, particularly in audiovisual conditions. CONCLUSIONS ForwardFocus enhances speech intelligibility in audiovisual speech conditions by potentially allowing the reallocation of attentional resources to relevant auditory speech cues. SIGNIFICANCE This study shows, for CI users, that background noise and ForwardFocus differentially affect spatio-temporal cortical response patterns in both auditory and audiovisual speech conditions.
Collapse
Affiliation(s)
- Natalie Layer
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany.
| | | | - Jan-Ole Radecke
- Dept. of Psychiatry and Psychotherapy, University of Lübeck, Germany; Center for Brain, Behaviour and Metabolism (CBBM), University of Lübeck, Germany
| | - Verena Müller
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany
| | - Anna Weglage
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany
| | - Ruth Lang-Roth
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany
| | - Martin Walger
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany; Jean-Uhrmacher-Institute for Clinical ENT Research, University of Cologne, Germany
| | - Pascale Sandmann
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany; Department of Otolaryngology, Head and Neck Surgery, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
46
|
Zaidel A, Salomon R. Multisensory decisions from self to world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220335. [PMID: 37545311 PMCID: PMC10404927 DOI: 10.1098/rstb.2022.0335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Accepted: 06/19/2023] [Indexed: 08/08/2023] Open
Abstract
Classic Bayesian models of perceptual inference describe how an ideal observer would integrate 'unisensory' measurements (multisensory integration) and attribute sensory signals to their origin(s) (causal inference). However, in the brain, sensory signals are always received in the context of a multisensory bodily state, namely in combination with other senses. Moreover, sensory signals from both interoceptive sensing of one's own body and exteroceptive sensing of the world are highly interdependent and never occur in isolation. Thus, the observer must fundamentally determine whether each sensory observation is from an external (versus internal, self-generated) source to even be considered for integration. Critically, solving this primary causal inference problem requires knowledge of multisensory and sensorimotor dependencies. Thus, multisensory processing is needed to separate sensory signals. These multisensory processes enable us to simultaneously form a sense of self and form distinct perceptual decisions about the external world. In this opinion paper, we review and discuss the similarities and distinctions between multisensory decisions underlying the sense of self and those directed at acquiring information about the world. We call attention to the fact that heterogeneous multisensory processes take place all along the neural hierarchy (even in forming 'unisensory' observations) and argue that more integration of these aspects, in theory and experiment, is required to obtain a more comprehensive understanding of multisensory brain function. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
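As a concrete anchor for the causal-inference framing, the sketch below computes the posterior probability of a common cause for a visual and an auditory measurement under a standard Gaussian causal-inference model; the priors, noise levels, and integration grid are assumptions, and the paper itself does not prescribe this particular implementation.

```python
# Sketch of Bayesian causal inference: posterior probability that a visual and an
# auditory measurement were generated by one common source rather than two.
import numpy as np

def common_cause_posterior(x_v, x_a, sigma_v=2.0, sigma_a=8.0, sigma_p=20.0, p_common=0.5):
    s = np.linspace(-90, 90, 2001)                 # grid over candidate source locations (deg)
    ds = s[1] - s[0]
    prior = np.exp(-s**2 / (2 * sigma_p**2)) / np.sqrt(2 * np.pi * sigma_p**2)
    lik_v = np.exp(-(x_v - s)**2 / (2 * sigma_v**2)) / np.sqrt(2 * np.pi * sigma_v**2)
    lik_a = np.exp(-(x_a - s)**2 / (2 * sigma_a**2)) / np.sqrt(2 * np.pi * sigma_a**2)
    # C = 1: one source generated both measurements; C = 2: two independent sources.
    like_c1 = np.sum(lik_v * lik_a * prior) * ds
    like_c2 = (np.sum(lik_v * prior) * ds) * (np.sum(lik_a * prior) * ds)
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

print(f"P(common cause | cues 5 deg apart):  {common_cause_posterior(0.0, 5.0):.2f}")
print(f"P(common cause | cues 40 deg apart): {common_cause_posterior(0.0, 40.0):.2f}")
```

The farther apart the two measurements fall, the lower the posterior probability of a common cause, which is the core intuition the abstract refers to when distinguishing self-generated from external signals.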
Collapse
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
| | - Roy Salomon
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Department of Cognitive Sciences, University of Haifa, Mount Carmel, Haifa 3498838, Israel
| |
Collapse
|
47
|
Yang J, Ganea N, Kanazawa S, Yamaguchi MK, Bhattacharya J, Bremner AJ. Cortical signatures of visual body representation develop in human infancy. Sci Rep 2023; 13:14696. [PMID: 37679386 PMCID: PMC10484977 DOI: 10.1038/s41598-023-41604-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 08/28/2023] [Indexed: 09/09/2023] Open
Abstract
Human infants cannot report their experiences, limiting what we can learn about their bodily awareness. However, visual cortical responses to the body, linked to visual awareness and selective attention in adults, can be easily measured in infants and provide a promising marker of bodily awareness in early life. We presented 4- and 8-month-old infants with a flickering (7.5 Hz) video of a hand being stroked and recorded steady-state visual evoked potentials (SSVEPs). In half of the trials, the infants also received tactile stroking synchronously with visual stroking. The 8-month-old infants, but not the 4-month-old infants, showed a significant enhancement of SSVEP responses when they received tactile stimulation concurrent with the visually observed stroking. Follow-up experiments showed that this enhancement did not occur when the visual hand was presented in an incompatible posture with the infant's own body or when the visual stimulus was a body-irrelevant video. Our findings provide a novel insight into the development of bodily self-awareness in the first year of life.
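For illustration, a minimal assumed sketch of how an SSVEP amplitude at the 7.5 Hz tagging frequency might be extracted from an EEG epoch is given below; the sampling rate, epoch length, and SNR definition are placeholders rather than the study's actual pipeline.

```python
# Illustrative sketch: estimating SSVEP amplitude at the 7.5 Hz tagging frequency
# from a single simulated EEG epoch via the Fourier amplitude spectrum.
import numpy as np

fs = 250.0                                   # sampling rate in Hz (assumption)
t = np.arange(0, 4.0, 1.0 / fs)              # a 4 s epoch gives 0.25 Hz resolution
rng = np.random.default_rng(5)

# Simulated occipital channel: 7.5 Hz steady-state response embedded in noise.
eeg = 1.5 * np.sin(2 * np.pi * 7.5 * t) + rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2      # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
tag_bin = np.argmin(np.abs(freqs - 7.5))

# Signal-to-noise ratio: amplitude at the tag frequency over neighbouring bins.
neighbours = np.r_[tag_bin - 5:tag_bin - 1, tag_bin + 2:tag_bin + 6]
snr = spectrum[tag_bin] / spectrum[neighbours].mean()
print(f"7.5 Hz amplitude: {spectrum[tag_bin]:.2f} uV, SNR: {snr:.1f}")
```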
Collapse
Affiliation(s)
- Jiale Yang
- School of Psychology, Chukyo University, Nagoya, Japan.
| | - Natasa Ganea
- Child Study Center, Yale University, New Haven, CT, USA
| | - So Kanazawa
- Department of Psychology, Japan Women's University, Tokyo, Japan
| | | | | | - Andrew J Bremner
- Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK
| |
Collapse
|
48
|
Saltafossi M, Zaccaro A, Perrucci MG, Ferri F, Costantini M. The impact of cardiac phases on multisensory integration. Biol Psychol 2023; 182:108642. [PMID: 37467844 DOI: 10.1016/j.biopsycho.2023.108642] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 07/10/2023] [Accepted: 07/14/2023] [Indexed: 07/21/2023]
Abstract
The brain continuously processes information coming from both the external environment and visceral signals generated by the body. This constant information exchange between the body and the brain allows signals originating from, among others, the oscillatory activity of the heart to influence perception. Here, we investigated how the cardiac phase modulates multisensory integration, the process that allows information from multiple senses to combine non-linearly to reduce environmental uncertainty. Forty healthy participants completed a Simple Detection Task with unimodal (Auditory, Visual, Tactile) and bimodal (Audio-Tactile, Audio-Visual, Visuo-Tactile) stimuli presented 250 ms and 500 ms after the R-peak of the electrocardiogram, that is, during systole and diastole, respectively. First, we found a nonspecific effect of the cardiac cycle phases on detection of both unimodal and bimodal stimuli: reaction times were faster for stimuli presented during diastole than during systole. Then, when the Race Model Inequality approach was applied to quantify multisensory integration, Audio-Tactile and Visuo-Tactile stimuli, but not Audio-Visual stimuli, showed greater integration when presented during diastole than during systole. These findings indicate that the impact of the cardiac phase on multisensory integration may be specific to stimuli that include somatosensory (i.e., tactile) inputs. This suggests that heartbeat-related noise, which according to the interoceptive predictive coding theory suppresses somatosensory inputs, also affects multisensory integration during systole. In conclusion, our data extend the interoceptive predictive coding theory to the multisensory domain. From a more mechanistic view, they may reflect a reduced optimization of the neural oscillations orchestrating multisensory integration during systole.
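The race model inequality test mentioned above can be sketched as follows; the simulated reaction times and the evaluation grid are assumptions, and the snippet is an illustration of Miller's bound rather than the authors' code.

```python
# Minimal sketch of the race model inequality test: the bimodal RT distribution is
# compared with the sum of the unimodal cumulative distributions (Miller's bound).
import numpy as np

rng = np.random.default_rng(6)
rt_a = rng.normal(320, 40, 200)       # auditory-only reaction times (ms)
rt_t = rng.normal(340, 45, 200)       # tactile-only reaction times (ms)
rt_at = rng.normal(280, 35, 200)      # audio-tactile reaction times (ms)

def ecdf(samples, grid):
    """Empirical cumulative distribution evaluated at the given time points."""
    return np.searchsorted(np.sort(samples), grid, side="right") / samples.size

grid = np.linspace(150, 500, 71)      # time points at which the CDFs are evaluated
miller_bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_t, grid), 1.0)
violation = ecdf(rt_at, grid) - miller_bound

# Positive values indicate race-model violations, i.e. genuine multisensory integration.
print(f"maximum violation: {violation.max():.3f} at t = {grid[violation.argmax()]:.0f} ms")
```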
Collapse
Affiliation(s)
- Martina Saltafossi
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy.
| | - Andrea Zaccaro
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Mauro Gianni Perrucci
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Francesca Ferri
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| |
Collapse
|
49
|
Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. [PMID: 37295419 PMCID: PMC10957398 DOI: 10.1016/j.neuron.2023.05.008] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Revised: 12/02/2022] [Accepted: 05/10/2023] [Indexed: 06/12/2023]
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
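To make the accumulator idea concrete, here is a toy, assumed simulation in which additive audiovisual evidence is integrated to a bound to yield a choice and a reaction time; the drift rates, noise level, and bound are placeholders, not the fitted parameters from the study.

```python
# Toy sketch of an additive evidence accumulator: visual and auditory drift rates
# are summed and integrated with noise until a decision bound is crossed.
import numpy as np

def simulate_trial(visual_drift, auditory_drift, bound=1.0, dt=0.001, noise=0.6, rng=None):
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < 2.0:
        # Additive combination of the two sensory evidence streams plus diffusion noise.
        x += (visual_drift + auditory_drift) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("right" if x > 0 else "left"), t

rng = np.random.default_rng(7)
trials = [simulate_trial(0.8, 0.5, rng=rng) for _ in range(200)]
p_right = np.mean([choice == "right" for choice, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
print(f"P(right) = {p_right:.2f}, mean RT = {mean_rt * 1000:.0f} ms")
```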
Collapse
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK; UCL Institute of Ophthalmology, University College London, London, UK.
| | - Timothy P H Sit
- Sainsbury-Wellcome Center, University College London, London, UK
| | - Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
| | - Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
| | - Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK
| |
Collapse
|
50
|
Yildiz FG, Temucin CM. Multimodal integration and modulation of visual and somatosensory inputs on the corticospinal excitability. Neurophysiol Clin 2023; 53:102842. [PMID: 36724583 DOI: 10.1016/j.neucli.2022.102842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 12/06/2022] [Accepted: 12/17/2022] [Indexed: 02/01/2023] Open
Abstract
OBJECTIVE Corticospinal excitability may be affected by various sensory inputs under physiological conditions. In this study, we aimed to investigate corticospinal excitability using multimodal conditioning paradigms of combined somatosensory electrical and visual stimulation, in order to better understand sensory-motor integration. METHODS We examined motor evoked potentials (MEPs) obtained with transcranial magnetic stimulation (TMS) that were conditioned by single goggle light-emitting diode (LED) stimulation, peripheral nerve electrical stimulation (short-latency afferent inhibition protocol), or a combination of both (goggle-LED + electrical stimulation) at different interstimulus intervals (ISIs) in 14 healthy volunteers. RESULTS We found MEP inhibition at ISIs of 50-60 ms with conditioned goggle-LED stimulation. Combined goggle-LED and electrical stimulation at a 60 ms ISI produced inhibition beyond that of electrical stimulation alone. CONCLUSIONS Visual inputs exert significant modulatory effects on corticospinal excitability. Combined visual and somatosensory stimuli are probably integrated via different neural circuits and/or interneuron populations. To our knowledge, this is the first study to use electrophysiological methods to evaluate the multimodal integration of visual and somatosensory inputs with a TMS short-latency afferent inhibition protocol.
Collapse
Affiliation(s)
- Fatma Gokcem Yildiz
- Faculty of Medicine, Department of Neurology, Hacettepe University, EMG-TMS Unit, Ankara, Turkey; Hacettepe University, Institute of Neurological Sciences and Psychiatry, Ankara, Turkey.
| | - Cagri Mesut Temucin
- Faculty of Medicine, Department of Neurology, Hacettepe University, EMG-TMS Unit, Ankara, Turkey
| |
Collapse
|